Hey all,
I don't know about you, but most of the Solr URLs I issue are fairly
lengthy, full of parameters on the query string, and browser location
bars aren't long enough and don't have multi-line capabilities. I tried
to find something that does this but couldn't, so I wrote a Chrome
extension to help.
Pleas
On 5/10/2012 4:17 PM, Ravi Solr wrote:
Thanks for responding, Mr. Heisey... I don't see any parsing errors in
my log, but I see a lot of exceptions like the one listed below... once
an exception like this happens, weirdness ensues. For example, to
check sanity I queried for uniquekey:"111" from the s
Try using the actual id of the document rather than the shell substitution
variable - if you're trying to delete one document.
To delete all documents, use delete by query:
<delete><query>*:*</query></delete>
See:
http://wiki.apache.org/solr/FAQ#How_can_I_delete_all_documents_from_my_index.3F
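To illustrate, a minimal sketch of issuing that delete-by-query programmatically (the localhost URL and single-core layout are assumptions; adjust for your setup):

```python
# Build and (optionally) POST Solr's delete-all command, followed by a commit.
# The URL below assumes a default single-core Solr at localhost; adjust as needed.
from urllib import request

SOLR_UPDATE_URL = "http://localhost:8983/solr/update"  # assumed default

def delete_all_command():
    """XML body for a delete-by-query matching every document."""
    return "<delete><query>*:*</query></delete>"

def post_update(xml_body, url=SOLR_UPDATE_URL):
    """POST an update command to Solr (requires a running server)."""
    req = request.Request(url, data=xml_body.encode("utf-8"),
                          headers={"Content-Type": "text/xml"})
    return request.urlopen(req).read()

# The deletion only becomes visible after a commit:
COMMIT_COMMAND = "<commit/>"
```

Note that the deleted documents stay visible to searches until the commit is sent.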
-- Jack Krupansky
-Origina
Yes, I agree with you.
But the AJAX Solr framework doesn't fit in that manner. Any alternative
solution?
Anupam
On Fri, May 11, 2012 at 9:41 AM, Klostermeyer, Michael <
mklosterme...@riskexchange.com> wrote:
> Instead of hitting the Solr server directly from the client, I think I
> would go throug
Hi Sohail,
http://search-lucene.com/?q=Join&fc_project=Solr
Hit #1.
Otis
Performance Monitoring for Solr / ElasticSearch / HBase -
http://sematext.com/spm
- Original Message -
> From: Sohail Aboobaker
> To: solr-user@lucene.apache.org
> Cc:
> Sent: Thursday, May 10, 2012 1
Hi,
You've restarted Solr after editing the schema?
And checked the logs? Paste?
Otis
Performance Monitoring for Solr / ElasticSearch / HBase -
http://sematext.com/spm
- Original Message -
> From: Tolga
> To: solr-user@lucene.apache.org
> Cc:
> Sent: Friday, May 11, 2012 12:3
Anyone at all?
Original Message
Subject:Delete documents
Date: Thu, 10 May 2012 22:59:49 +0300
From: Tolga
To: solr-user@lucene.apache.org
Hi,
I've been reading
http://lucene.apache.org/solr/api/doc-files/tutorial.html and in the
section "Deleting Data", I've
Instead of hitting the Solr server directly from the client, I think I would go
through your application server, which would have access to all the users' data
and can forward that to the Solr server, thereby hiding it from the client.
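A rough sketch of that idea, with made-up names (build_solr_params, owner_id) purely for illustration:

```python
# The application server combines the user's query with a server-side filter,
# so the client never talks to Solr directly and cannot see other users' data.
from urllib.parse import urlencode

def build_solr_params(user_id, user_query):
    """Return the query string the app server forwards to Solr: the user's q
    plus an fq restricting results to documents that user owns."""
    return urlencode([
        ("q", user_query),
        ("fq", "owner_id:%s" % user_id),  # enforced server-side
    ])

params = build_solr_params(42, "quarterly report")
```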
Mike
-Original Message-
From: Anupam Bhattacharya [
Also here is my schema
--
View this message in context:
http://lucene.472066.n3.nabble.com/Solr-Import-Handler-Custom-Transformer-not-working-tp3978746p3978748.html
Sent from the Solr - User mailing list archive at Nabble.com.
I have created a custom transformer for dynamic fields but it doesn't seem to
be working correctly and I'm not sure how to debug it with a live running
solr instance.
Here is my transformer
package org.build.com.solr;
import org.apache.solr.handler.dataimport.Context;
import org.apache.solr.han
Is there any way to set the "Expires" header dynamically to the solr
response?
Thanks.
--
View this message in context:
http://lucene.472066.n3.nabble.com/Solr-custom-dynamic-expire-header-to-the-solr-Response-tp3978170.html
Sent from the Solr - User mailing list archive at Nabble.com.
Thanks for responding, Mr. Heisey... I don't see any parsing errors in
my log, but I see a lot of exceptions like the one listed below... once
an exception like this happens, weirdness ensues. For example, to
check sanity I queried for uniquekey:"111" from the Solr admin GUI; it
gave back numFound equal
I am trying to import data through my db but I have dynamic fields that I
don't always know the names of. Can someone tell me why something like this
doesn't work.
--
View this message in context:
http://lucene.472066.n3.nabble.com/Data-Import-H
On 5/10/2012 12:27 PM, Ravi Solr wrote:
I cleaned the entire index and re-indexed it with SolrJ 3.6. Still I get
the same error every single day. How can I see if the container
returned a partial/nonconforming response, since it may be hidden by
SolrJ?
If the server is sending a non-javabin error r
> Is it possible to see what terms are indexed for a field of a
> document that has
> stored=false?
One way is to use http://wiki.apache.org/solr/LukeRequestHandler
> I have a search that doesn't work with quotes, like
> this: "field:TEXT Nº
> 1098". When I remove the quotes, the search finds the document
> (
Hi Guys!
I've removed the two largest documents. One of them
consisted of a single field and was around 4MB (text).
This fixed my issue.
Kind regards,
Bram Rongen
On Fri, Apr 20, 2012 at 2:09 PM, Bram Rongen wrote:
> Hmm, reading your reply again I see that Solr only uses t
Sorry, commit=no should have been commit=yes in my previous post.
Regards,
Hi,
I've been reading
http://lucene.apache.org/solr/api/doc-files/tutorial.html and in the
section "Deleting Data", I've edited schema.xml to include a field named
id, issued the command for f in *; do java -Ddata=args -Dcommit=no -jar
post.jar "$f"; done, went on to the stats page
only to find no
Hello,
Solr accepts fq parameter like: localhost:8080/solr/select/?q=blah+blah
&fq=model:member+model:new_member
Is it possible to pass the fq parameter with an alternative syntax, like:
fq=model=member&model=new_member, or in some other way?
Thank you,
Tom
--
View this message in context:
http://lucene.
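For reference, Solr accepts the fq parameter repeated, one filter query per occurrence. A small sketch of building such a URL (endpoint and field names taken from the question above):

```python
# Each repeated fq is an independent filter query; documents must match all of
# them. Endpoint and field names follow the example in the question.
from urllib.parse import urlencode

params = [
    ("q", "blah blah"),
    ("fq", "model:member"),
    ("fq", "model:new_member"),  # second, independent filter
]
query_string = urlencode(params)
url = "http://localhost:8080/solr/select/?" + query_string
```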
Hi James,
I just pulled down the newest nightly build of 4.0 and it solves an issue I had
been having with Solr ignoring the caching of the child entities. It was
basically opening a new connection for each iteration even though everything
was specified correctly. This was present in my previ
I cleaned the entire index and re-indexed it with SolrJ 3.6. Still I get
the same error every single day. How can I see if the container
returned a partial/nonconforming response, since it may be hidden by
SolrJ?
Thanks
Ravi Kiran Bhaskar
On Mon, May 7, 2012 at 2:16 PM, Ravi Solr wrote:
> Hello Mr.
On 5/10/2012 2:02 AM, Tolga wrote:
Apache servers are returning my post with the status messages
HTML_FONT_SIZE_HUGE,HTML_MESSAGE,HTTP_ESCAPED_HOST,NORMAL_HTTP_TO_IP,RCVD_IN_DNSWL_LOW,SPF_NEUTRAL,URI_HEX,WEIRD_PORT.
I've tried clearing all formatting and a re-post, but the same thing
occurred.
I am attempting to index a DB schema that has a many:one relationship. I
assume I would index this within Solr as a multiValued="true" field; is that
correct?
I am currently populating the Solr index w/ a stored procedure in which each DB
record is "flattened" into a single document in Solr. I
Hi,
My requirement is to calculate the sum of a certain field in the result set.
StatsComponent does what I need, e.g.
results in
Question.
1. I don't need to calculate min, max, sumOfSquares etc. Is there a way to
limit the stats to the sum and nothing else?
2. Is there going to be a significant
Jasper,
The simple answer is to increase -Xmx :)
What is your ramBufferSizeMB (solrconfig.xml) set to? Default is 32 (MB).
That autocommit you mentioned is a DB commit, not a Solr one, right? If so, why
is a commit needed when you *read* data from the DB?
Otis
Performance Monitoring for Solr /
Thank you both =)
Gary
Le 10/05/2012 17:59, Otis Gospodnetic a écrit :
Gary - milliseconds, right.
Otis
Performance Monitoring for Solr / ElasticSearch / HBase - http://sematext.com/spm
Yes, milliseconds. --wunder
- Original Message -
From: G.Long
To: solr-user@lucene.apach
Yes
On Thu, May 10, 2012 at 4:57 PM, G.Long wrote:
> Hi :)
>
> In what unit of time is the QTime of a QueryResponse expressed? Is it
> milliseconds?
>
> Gary
>
Gary - milliseconds, right.
Otis
Performance Monitoring for Solr / ElasticSearch / HBase -
http://sematext.com/spm
- Original Message -
> From: G.Long
> To: solr-user@lucene.apache.org
> Cc:
> Sent: Thursday, May 10, 2012 11:57 AM
> Subject: question about solr response qtime
Yes, milliseconds. --wunder
On May 10, 2012, at 8:57 AM, G.Long wrote:
> Hi :)
>
> In what unit of time is the QTime of a QueryResponse expressed? Is it
> milliseconds?
>
> Gary
Hi Jasper,
Solr does handle that for you. Some more stuff to share:
* Solr version?
* JVM version?
* OS?
* Java replication?
* Errors in Solr logs?
* deletion policy section in solrconfig.xml?
* merge policy section in solrconfig.xml?
* ...
You may also want to look at your Index report in SPM
Hi :)
In what unit of time is the QTime of a QueryResponse expressed? Is it
milliseconds?
Gary
You need to tune garbage collection on your JVM to handle the OOM.
Sent from my iPhone
On May 10, 2012, at 21:06, "Jasper Floor" wrote:
> Hi all,
>
> we've been running Solr 1.4 for about a year with no real problems. As
> of Monday it became impossible to do a full import on our mas
The problem is that you can't easily reproduce the hierarchy of
structured data. There are no attributes in a Lucene index as there can be
in an XML document. If your structured data is not too complex, you
could try to add a field to your schema called "person" and
concatenate all properties (nam
I don't know what the best solution is. You could indeed split your
documents and link them with the patent number inside the same index. Or
you could also use different cores with specific schemas (one core with
the schema for the patent and one core with the schema for the inventor)
and stil
Le 10/05/2012 15:12, G.Long a écrit :
I think I see what the problem is.
Correct me if I'm wrong but I guess your schema does not represent a
person but something which can contain a list of persons with
different attributes, right?
Yes, exactly what I have! (see my next message)
Actually I have documents like this one, country of inventor is inside
the field "inventor"
It's not exactly an inventor notice, it's a patent notice with several
fields.
The "patent-number" field is the fieldkey.
Should I split my document and use the fieldkey to link them (like in a normal
databas
I think I see what the problem is.
Correct me if I'm wrong but I guess your schema does not represent a
person but something which can contain a list of persons with different
attributes, right?
The problem is that you can't easily reproduce the hierarchy of
structured data. There are no attri
Perhaps I am missing the obvious, but our slaves tend to run out of
disk space. The index sizes grow to multiple times the size of the
master's. So I just toss all the data and trigger a replication.
However, can't Solr handle this for me?
I'm sorry if I've missed a simple setting which does this for
Hi all,
we've been running Solr 1.4 for about a year with no real problems. As
of Monday it became impossible to do a full import on our master
because of an OOM. Now what I think is strange is that even after we
more than doubled the available memory there would still always be an
OOM. We seem t
I don't know the details of your schema, but I would create fields like
name, country, street, etc., and a field named role, which contains
values like inventor, applicant, etc.
How would you do it otherwise? Create only four documents, each field
containing 80 million values?
Greetings,
Kuli
You don't have to create a document per field. You have to create a
document per person.
If inventors, applicants, assignees and attorneys have properties in
common, you could have a model like :
Regards,
Gary
Le 10/05/2012 14:47, Bruno Mannina a écrit :
But I have more than 80 000 000 d
>>Did you mark those fields as multi-valued?
yes, I did.
But I have more than 80,000,000 documents with many fields that have this
kind of description?!
i.e.:
inventor
applicant
assignee
attorney
Must I create 4 documents for each document??
Le 10/05/2012 14:41, G.Long a écrit :
When you add data into Solr, you add documents which contain fields.
In you
Am 10.05.2012 14:33, schrieb Bruno Mannina:
like that:
CH
FR
but in this case I lose the link between the inventor and their country?
Of course, you need to index the two inventors into two distinct documents.
Did you mark those fields as multi-valued? That won't make much sense IMHO.
Greetings,
K
When you add data into Solr, you add documents which contain fields.
In your case, you should create a document for each of your inventors
with every attribute they could have.
Here is an example in Java:
SolrInputDocument doc = new SolrInputDocument();
doc.addField("inventor", "Rossi");
doc.a
Hello,
I am using Nutch 1.4 with Solr 3.6.0 and would like to get the HTML keywords
and description metatags indexed into Solr. On the Nutch side I have followed
the http://wiki.apache.org/nutch/IndexMetatags to get nutch parsing the
extracting the metatags (using index-metatags and parse-metat
like that:
CH
FR
but in this case I lose the link between the inventor and their country?
If I search for an inventor named ROSSI with CH:
q=inventor:rossi and inventor-country=CH
then I will get this result, but it's not correct because Rossi is FR.
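A sketch of why the parallel multi-valued fields cross-match, and of the concatenation workaround suggested elsewhere in the thread (the "|" separator and the field values here are illustrative only):

```python
# Two parallel multi-valued fields lose the inventor<->country pairing:
inventors = ["WEBER WALTER", "ROSSI PASCAL"]
countries = ["CH", "FR"]

# A query like inventor:rossi AND inventor-country:CH matches this document,
# because each multi-valued field is searched independently:
cross_match = any("ROSSI" in i for i in inventors) and "CH" in countries

# Workaround: index one combined field that keeps each pair together.
inventor_country = ["%s|%s" % (i, c) for i, c in zip(inventors, countries)]
# Searching the combined values preserves the link, so ROSSI+CH no longer matches:
paired_match = any("ROSSI" in v and v.endswith("|CH") for v in inventor_country)
```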
Le 10/05/2012 14:28, G.Long a écrit :
Hi :)
You could
Hi :)
You could just add a field called country and then add the information
to your document.
Regards,
Gary L.
Le 10/05/2012 14:25, Bruno Mannina a écrit :
Dear,
I can't find how to define a field with this format in my schema.xml.
My original format is:
WEBER WALTER
CH
ROS
Dear,
I can't find how to define a field with this format in my schema.xml.
My original format is:
WEBER WALTER
CH
ROSSI PASCAL
FR
I convert it to:
...
WEBER WALTER
ROSSI PASCAL
...
but how can I add the country code to the field without losing the link
to the inventor?
Can
Hi,
The whole idea of a score threshold is flawed in this situation.
Chris, you say yourself that you plan to let people subscribe to searches that
are known to have crappy results for perhaps the majority of hits, and there is
no automatic way of rectifying that.
Imagine a search for the tw
Thanks Jack.
I tried it (the Regex Transformer) out and the indexing has become really slow. Is it
slower than n-gram indexing? They may be apples and
oranges, but what I mean is that, after extracting the field, I finally want to n-gram
index it. So it seems going in for NGram Inde
You can use "Regex Transformer" to extract from a source field.
See:
http://wiki.apache.org/solr/DataImportHandler#RegexTransformer
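As a rough sketch (the SQL query, field names, and regex are made up for illustration; only the transformer/sourceColName/regex attributes come from the wiki page above), the data-config entry might look like:

```xml
<!-- Hypothetical entity showing RegexTransformer pulling a new field
     out of an existing column; names and regex are illustrative only. -->
<entity name="doc" transformer="RegexTransformer"
        query="select DocId, FullText from docs">
  <field column="FullText" name="fulltext"/>
  <!-- orderNumber is extracted from FullText at index time -->
  <field column="orderNumber" sourceColName="FullText"
         regex="Order No\. (\d+)"/>
</entity>
```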
-- Jack Krupansky
-Original Message-
From: Husain, Yavar
Sent: Thursday, May 10, 2012 6:04 AM
To: solr-user@lucene.apache.org
Subject: Solr On Fly Field
I have full text in my database and I am indexing it using Solr. Now at
runtime, i.e. while the indexing is going on, can I extract certain parameters
based on a regex and create another field/column on the fly using Solr for that
extracted text?
For example my DB has just 2 columns (DocId & FullT
Hi sujatha,
Basically I just want to explain the use case, which is
described below:
1. Create a VM running solr, with one core per customer
2. Index all of each customer's data (config text, metadata, etc) into
a single core
3. Create one fake "partner" per 30 custome
I am a newbie in this "Solr" thing, but with your advice I am on track now (sort
of).
It seems that the "Lucene" community is responsive and, fortunately, doesn't
turn its back on newbies!
Thank you, guys,
Tom
--
View this message in context:
http://lucene.472066.n3.nabble.com/Solr-query-issues-tp3
Hi Andre,
qs is used when you have an explicit phrase query (you need to use quotes for
this) in your search string.
q="lisboa tipos"&qs=1
--- On Wed, 5/9/12, André Maldonado wrote:
> From: André Maldonado
> Subject: EDisMax and Query Phrase Slop Problem
> To: solr-user@lucene.apache.org
> Dat
Right, for Long/Lat I found this information:
<-Long / Lat Field Type->
<-Fields->
Does this look more logical?
--
View this message in context:
http://lucene.472066.n3.nabble.com/Newbie-tries-to-make-a-Schema-xml-tp3974200p3976539.html
Sent from the Solr - User mailing l
Hi,
Thanks sujatha for your response.
I tried to create the core as per the blog url that you gave. But in
that
mkdir -p /etc/solr/conf/$name/conf
cp -a /etc/solr/conftemplate/* /etc/solr/conf/$name/conf/
sed -i "s/CORENAME/$name/" /etc/solr/conf/$name/conf/solrconfig.xml
curl
"h
Hi,
i've applied the patch from
https://issues.apache.org/jira/browse/SOLR-2604 to Solr 3.5. It works
but noticeably slows down the query time. Did someone already solve
that problem?
Cheers,
Valeriy
Hi,
Apache servers are returning my post with the status messages
HTML_FONT_SIZE_HUGE,HTML_MESSAGE,HTTP_ESCAPED_HOST,NORMAL_HTTP_TO_IP,RCVD_IN_DNSWL_LOW,SPF_NEUTRAL,URI_HEX,WEIRD_PORT.
I've tried clearing all formatting and a re-post, but the same thing
occurred. What to do?
Regards,