But there is an API for sending a delta over the wire, and server side it
does a read, overlay, delete, and insert. And only the fields you sent
will be changed.
*Might require your unchanged fields to all be stored, though.
-Greg
On Fri, Aug 23, 2013 at 7:08 PM, Lance Norskog wrote:
> Solr
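The "delta over the wire" Greg describes is the Solr 4.x atomic-update syntax. A minimal sketch, assuming the stock example URL and a stored field called name (the doc id and field name are placeholders, not from the thread):

```shell
# Only fields carrying a modifier ("set", "add", "inc") are changed;
# Solr rebuilds the rest of the document from its stored fields.
payload='[{"id":"doc1","name":{"set":"new name"}}]'
echo "$payload"

# Send it (requires a running Solr 4.x instance):
# curl 'http://localhost:8983/solr/update?commit=true' \
#   -H 'Content-type:application/json' --data-binary "$payload"
```

Note the caveat above: atomic updates only work when the unchanged fields are all stored, since Solr reconstructs the document from them.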
On 8/23/2013 10:27 PM, Kamaljeet Kaur wrote:
>
> Actually I wanted every single step to be clear, that's why I asked.
> Now there is written:
>
> "Ensure that your solr schema (schema.xml) has the fields 'id',
> 'name', 'desc'. Change the appropriate details in the data-config.xml"
>
> My schema.xml is not having these fields. That means I hav
On Fri, Aug 23, 2013 at 11:17 PM, Andrea Gazzarini-3 [via Lucene]
wrote:
> Why don't you try?
Actually I wanted every single step to be clear, that's why I asked.
Now there is written:
"Ensure that your solr schema (schema.xml) has the fields 'id',
'name', 'desc'. Change the appropriate details
Solr does not by default generate unique IDs. It uses what you give as
your unique field, usually called 'id'.
What software do you use to index data from your RSS feeds? Maybe that
is creating a new 'id' field?
There is no partial update, Solr (Lucene) always rewrites the complete
document.
I'm getting the same error...Is there any workaround to this?
--
View this message in context:
http://lucene.472066.n3.nabble.com/Grouping-tp2820116p4086425.html
Sent from the Solr - User mailing list archive at Nabble.com.
Any help..? Is it possible to add this pagerank-like behaviour?
Did you look here?
https://cwiki.apache.org/confluence/display/solr/Working+with+External+Files+and+Processes
--
View this message in context:
http://lucene.472066.n3.nabble.com/SOLR-search-by-external-fields-tp4086197p4086408.html
Sent from the Solr - User mailing list archive at Nabble.com.
Hello
I want to index the XML below with multivalued fields.
What better way to set the schema.xml since there are nested data?
Thank you.
[The XML sample was stripped by the mail archive; only its inline comments survive, indicating the fields: String, String, Date, String, then a multivalued section whose first entry contains two String fields.]
Hi Rob,
I think the wrong Content-type header is getting passed. Try one of these
instead:
curl 'http://localhost:8983/solr/update/csv?commit=true&separator=%09&stream.file=/tmp/sample.tmp'
OR
curl 'http://localhost:8983/solr/update/csv?commit=true&separator=%09' -H 'Content-type:application/csv'
Hi Bruno,
IntelliJ IDEA has a one-click way of downloading the source jars of
dependencies into your project. I'd look for something similar in Netbeans
rather than trying to hack together a Maven build of Solr yourself.
Michael Della Bitta
Applications Developer
o: +1 646 532 3062 | c: +1 917
I don't want to change Solr, just extend it, but it would be nice to have the
source code in the project so that I can debug it in Netbeans. Do I need to
include Jetty too? By the way (this is a little off-topic, sorry), do you
know any site that explains how Maven works in a straightforward way? All
Do you want to change the Solr source code itself, or do you want to create
your own Tokenizers and things? If the latter, why not just set up Solr as a
dependency in your pom.xml like so:
<dependency>
    <groupId>org.apache.lucene</groupId>
    <artifactId>lucene-test-framework</artifactId>
    <scope>test</scope>
    <version>${solr.version}</version>
</dependency>
<dependency>
    <groupId>org.apache.solr</groupId>
    <artifactId>solr-test-framework</artifactId>
    ...
</dependency>
Hello all,
I am building Solr's source code through Maven in order to develop on top
of it in Netbeans (as no Ant task was made for Netbeans... not cool!).
Three questions about that:
1. How can I execute the Solr server?
2. How can I debug the Solr server?
3. If I create new packages (RequestHandle
You need the CSV content type header and --data-binary.
I tried this with Solr 4.4:
curl 'http://localhost:8983/solr/update?commit=true&separator=%09' -H
'Content-type:application/csv' --data-binary @sample.tmp
Otherwise, Solr just ignores the request.
-- Jack Krupansky
-Original Message-
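The whole round trip can be sketched end to end; the field values are the ones quoted in this thread, and the core URL is the stock example (a sketch, not a verified recipe):

```shell
# Build a genuinely tab-separated file (cut-and-paste often turns tabs into spaces).
printf '"id"\t"question"\t"answer"\t"url"\n'  >  sample.tmp
printf '"q99"\t"Who?"\t"You!"\t"none"\n'      >> sample.tmp

# Sanity-check: each line should split into 4 tab-separated columns.
awk -F'\t' '{print NF}' sample.tmp

# Post with the CSV content type and --data-binary (needs a running Solr):
# curl 'http://localhost:8983/solr/update?commit=true&separator=%09' \
#   -H 'Content-type:application/csv' --data-binary @sample.tmp
```

Using --data instead of --data-binary is a common pitfall here: --data strips newlines, which breaks a line-oriented CSV upload.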
Perhaps an atomic update that only changes the fields you want to change?
-Greg
On Fri, Aug 23, 2013 at 4:16 AM, Luís Portela Afonso
wrote:
> Hi, thanks for the answer, but the uniqueId is generated by me. But when Solr
> indexes and there is an update in a doc, it deletes the doc and creates a
I completely agree. I would prefer to just rerun the search each time.
However, we are going to be replacing our rdb based search with something
like Solr, and the application currently behaves this way. Our users
understand that the search is essentially a snapshot (and I would guess many
prefe
Thanks for the reply, Jack.
It only looks like spaces, because I did a cut-and-paste. The file in question
does contain tabs instead of spaces, i.e.:
"id""question" "answer""url"
"q99" "Who?" "You!" "none"
Another question I meant to ask is whether this sort of activity
That is what I needed
req.getCore().getResourceLoader().getConfigDir()
Thanx
Bruno
On Fri, Aug 23, 2013 at 3:37 PM, Andrea Gazzarini <
andrea.gazzar...@gmail.com> wrote:
> Yes, if your RequestHandler implements SolrCoreAware you will get a
> SolrCore reference in inform(...) method. In SolrCor
Seems OK assuming that:
- you have the MySQL driver jar in your $SOLR_HOME/lib
- "New" is the database name
- the user root / password combination is valid
- the table exists
- Solr has a schema with the following id and first_name fields declared
About "How do I know if they are wrong?"
Why don't you try?
On 08/23/2013 04
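Pulling Andrea's checklist into one place, a minimal data-config.xml might look like the sketch below. The table name person and the credentials are placeholders, not taken from the thread:

```xml
<dataConfig>
  <!-- driver jar must be in $SOLR_HOME/lib; "New" is the database name -->
  <dataSource type="JdbcDataSource" driver="com.mysql.jdbc.Driver"
              url="jdbc:mysql://localhost:3306/New" user="root" password="..."/>
  <document>
    <!-- "person" is a placeholder table; id and first_name must exist in schema.xml -->
    <entity name="person" query="SELECT id, first_name FROM person">
      <field column="id" name="id"/>
      <field column="first_name" name="first_name"/>
    </entity>
  </document>
</dataConfig>
```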
Yes, if your RequestHandler implements SolrCoreAware you will get a
SolrCore reference in the inform(...) method. In SolrCore you have everything
you need (specifically, SolrResourceLoader is what you need).
Note that if your request handler is a SearchHandler you don't need to
implement that interfa
Hello there,
I just got something to index a MySQL database table:
http://wiki.apache.org/solr/DIHQuickStart
I pasted the following in the config tag of the solrconfig.xml file
(solr-4.4.0/example/solr/collection1/conf/solrconfig.xml):
data-config.xml
Then I altered the next code, making a data-config.xml
Is there any way inside a handleRequestBody on a RequestHandler to know the
directory where the core configuration is? (schema.xml, solrconfig.xml,
synonyms, etc)
Regards
Bruno
--
Bruno René Santos
Lisboa - Portugal
Hello,
Ahmet, using the faceting component gives me the document count for a
term, while I am interested in the term counts within a document for a
query term.
Jack, the functionquery termfreq returns indeed the term frequency per
document, but not over a result set.
How to do this over a res
req.getCore().getCoreDescriptor().getCoreContainer().reload(req.getCore().getName());
works like a charm :) thanx a lot
Bruno
On Fri, Aug 23, 2013 at 2:48 PM, Shalin Shekhar Mangar <
shalinman...@gmail.com> wrote:
> Actually I was suggesting that you execute the CoreAdminHandler from
> within
Actually I was suggesting that you execute the CoreAdminHandler from
within your handler or you can try calling CoreContainer.reload
directly.
On Fri, Aug 23, 2013 at 6:13 PM, Bruno René Santos wrote:
> Hi again,
>
> Thanx for the help :)
>
> I have this handler:
>
> public class SynonymsHandler
If you're using the bundled jetty that comes with the download, check the
etc/jetty.xml property for maxIdleTime and set it appropriately. I get that
error when operations take longer than the property is set to and time out. Do
note that the property is specified in milliseconds!
Thanks,
Greg
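For reference, in the jetty.xml shipped with the Solr 4.x example the setting looks roughly like this (the 50000 default is from memory and may differ in your copy):

```xml
<!-- etc/jetty.xml, inside the connector definition; value is in milliseconds -->
<Set name="maxIdleTime">50000</Set>
```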
Finally something I can help with! I went through the same problems you're
having a short while ago. Check out
https://cwiki.apache.org/confluence/display/solr/Running+Solr+on+HDFS for most
of the information you need and be sure to check the comments on the page as
well.
Here's an example fro
Exactly - Solr does not define the punctuation, UAX#29 defines it, and I
have deciphered the UAX#29 rules and included them in my book. Some
punctuation is always punctuation and always removed, and some is
conditional on context - I tried to lay out all the implied rules.
-- Jack Krupansky
-
Your data file appears to use spaces rather than tabs.
-- Jack Krupansky
From: Rob Koeling Ai
Sent: Friday, August 23, 2013 6:38 AM
To: solr-user@lucene.apache.org
Subject: Problem with importing tab-delimited csv file
I'm having trouble importing a tab-delimited file with the csv update hand
Hi again,
Thanx for the help :)
I have this handler:
public class SynonymsHandler extends RequestHandlerBase implements
SolrCoreAware {
public SynonymsHandler() {}
private static Logger log = LoggerFactory.getLogger(SynonymsHandler.class);
@Override
public void handleRequestBody(SolrQueryRequ
Hi JZ,
You can use faceting component.
http://localhost:8080/solr/core/select?q=ahmet&wt=xml&facet=on&facet.field=title&facet.prefix=queryTerm
From: JZ
To: solr-user@lucene.apache.org
Sent: Friday, August 23, 2013 2:43 PM
Subject: Query term count over a r
You can get the term frequency (per document) for a term using the
termfreq() function query in the fl parameter:
fl=*,termfreq(field,'term')
-- Jack Krupansky
-Original Message-
From: JZ
Sent: Friday, August 23, 2013 7:43 AM
To: solr-user@lucene.apache.org
Subject: Query term count
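Spelled out as a full request (the core and field names are placeholders, not from the thread):

```shell
# termfreq() in fl adds the per-document frequency of the literal term
# to every returned document.
url="http://localhost:8983/solr/collection1/select?q=*:*&fl=*,termfreq(title,'solr')&wt=json"
echo "$url"

# curl "$url"   # requires a running Solr 4.x core with a 'title' field
```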
I changed the following line (xpath):
On 22. Aug 2013, at 10:06 PM, Alexandre Rafalovitch wrote:
> Ah. That's because Tika processor does not support path extraction. You
> need to nest one more level.
>
> Regards,
> Alex
> On 22 Aug 2013 13:34, "Andreas Owen" wrote:
>
>> i can do it like th
OK, but I'm not doing any path extraction, at least I don't think so.
htmlMapper="identity" isn't preserving HTML.
It's reading the content of the pages, but only putting it into "text_test"
and not "text"; the copyField isn't working.
data-config.xml:
Yes, discountOverlaps is used in computeNorm which is used at index time. You
should see a change after reindexing.
Cheers,
Markus
-Original message-
> From:Tom Burton-West
> Sent: Thursday 22nd August 2013 23:32
> To: solr-user@lucene.apache.org
> Subject: Re: How to set discountOverl
Hi all,
I would like to get the total count of a query term over a result set. Is
there a way to get this?
I know there is a TermVectorComponent that does this per result (document),
but it would be far too expensive to take the sum over all documents for a
term given that term.
The LukeRequestHan
by monitoring the original and changed systems over long enough periods,
where "long enough" is a parameter (to compute).
Or then going really low-level, if you know which component has been
changed (like they do in Lucene [1]; not always possible in Solr..)
[1] http://people.apache.org/~mikemccan
Hi Roman,
With adminPath="/admin" or adminPath="/admin/cores", no. Interestingly
enough, though, I can access
http://localhost:8983/solr/statements/admin/system
But I can access http://localhost:8983/solr/admin/cores, only when with
adminPath="/admin/cores" (which suggests that this is the right
Hi, thanks for the answer, but the uniqueId is generated by me. But when Solr
indexes and there is an update in a doc, it deletes the doc and creates a new
one, so it generates a new UUID.
That is not suitable for me, because I want Solr to update just some fields,
because the UUID is the key tha
No, there's nothing like that in Solr. The closest you
could come would be to not do a hard commit (openSearcher=true)
or a soft commit for a very long time. As long as neither
of these things happen, the search results won't
change. But that's a hackish solution.
In fact I question your basic ass
Well, not much in the way of help because you can't do what you
want AFAIK. I don't think UUID is suitable for your use-case. Why not
use your ?
Or generate something yourself...
Best
Erick
On Thu, Aug 22, 2013 at 5:56 PM, Luís Portela Afonso wrote:
> Hi,
>
> How can i prevent solr from updat
According to
http://stackoverflow.com/questions/15734308/solrentityprocessor-is-called-only-once-for-sub-entities?lq=1
we can use the patched SolrEntityProcessor in
https://issues.apache.org/jira/browse/SOLR-3336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel
to solve the sub
I'm having trouble importing a tab-delimited file with the csv update handler. My data file looks like this:
"id" "question" "answer" "url"
"q99" "Who?" "You!" "none"
When I send this data to Solr using Curl:
curl 'http://localhost:8181/solr/development/update/csv?commit=true&separator=%09' --data @samp
Hi,
I am working on Solr 4.4 with Jetty, and generated the index on "3350128"
records. Now I want to test the query performance, so I applied a load test
running for 5 minutes with 600 virtual users issuing different Solr queries.
After the test completed I got the errors below.
ERROR - 2013-08-23 09:49:43.867; org.a
No exceptions. And the leaderVoteWait value will be used only during startup,
right? A new leader will be elected once the leader node is down. Am I right?
--
View this message in context:
http://lucene.472066.n3.nabble.com/Leader-election-tp4086259p4086290.html
Sent from the Solr - User mailing list archive at Nabble.com.
Any exceptions in the logs of other replicas. The default
leaderVoteWait time is 3 minutes after which a leader election should
have been initiated automatically.
On Fri, Aug 23, 2013 at 4:01 PM, Srivatsan wrote:
> almost 15 minutes. After that i restarted the entire cluster. I am using solr
> 4.
Almost 15 minutes. After that I restarted the entire cluster. I am using Solr
4.4 with 1 shard and 3 replicas.
--
View this message in context:
http://lucene.472066.n3.nabble.com/Leader-election-tp4086259p4086287.html
Sent from the Solr - User mailing list archive at Nabble.com.
How long has the shard been without a leader? How many shards? How
many replicas per shard? Which version of Solr?
On Fri, Aug 23, 2013 at 2:51 PM, Srivatsan wrote:
> Hi,
>
> I am using solr 4.4 for my search application. I was indexing some 1 million
> docs. At that time, i accidentally killed l
I don't think that should be a problem. Your custom RequestHandler
must call "reload". Note that a new instance of your request handler
will be created and inform will be called on it once reload happens
i.e. you won't be able to keep any state in the request handler across
core reloads.
You can a
Oops, sorry for the last email, wrong language :P
What I wanted to say was:
"Maybe if an error were thrown saying that the version was not compatible,
it would have helped in these cases"
--
Yago Riveiro
Sent with Sparrow (http://www.sparrowmailapp.com/?sig)
On Friday, August 23, 2013 at 10:53 AM, Yago Ri
I found the problem.
The Java version had been overridden by a dependency and was 1.5.
After reinstalling Java, Solr works as expected.
Maybe if an error were thrown saying that the version was not compatible, it
would have helped in these cases.
--
Yago Riveiro
Sent with Sparrow (http://www.sparrowmailapp.
hi all,
I'm running SolrCloud with Solr 4.4. I have 2 Tomcat instances with 4
shards (2 in each).
What will happen if one of the Tomcats goes down during indexing? The other
Tomcat reports "Leader not active" in the logs.
Regards,
Prasi
The version is 4.4.
I downloaded it, unzipped it, and ran the command in the example folder: java -jar
start.jar
It is a fresh install, no modifications done.
--
Yago Riveiro
Sent with Sparrow (http://www.sparrowmailapp.com/?sig)
On Thursday, August 22, 2013 at 9:22 PM, Brendan Grainger wrote:
>
Great! What about inside a RequestHandler's Java source code? I want to
create a RequestHandler that receives new synonyms, inserts them into the
synonyms file, and reloads the core.
Regards
Bruno
On Fri, Aug 23, 2013 at 9:28 AM, Shalin Shekhar Mangar <
shalinman...@gmail.com> wrote:
> Yes, you can
Hi,
I am using Solr 4.4 for my search application. I was indexing some 1 million
docs. At that time, I accidentally killed the leader node of that collection.
Indexing failed with the exception:
/org.apache.solr.client.solrj.SolrServerException: No live SolrServers
available to handle this
request:[
Hi Shawn
Thanks a lot. I got it.
Regards
2013/8/22 Shawn Heisey
> On 8/22/2013 2:25 AM, YouPeng Yang wrote:
> > Hi all
> > About the RAMBufferSize and commit ,I have read the doc :
> > http://comments.gmane.org/gmane.comp.jakarta.lucene.solr.user/60544
> >
> >I can not figure out
On Thu, 2013-08-22 at 20:08 +0200, Walter Underwood wrote:
> We warm the file buffers before starting Solr to avoid spending time
> waiting for disk IO. The script is something like this:
>
> for core in core1 core2 core3
> do
> find /apps/solr/data/${core}/index -type f | xargs cat > /dev/nul
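A self-contained version of the quoted warming loop, with the paths kept as placeholders for your own layout:

```shell
# Read every index file once so the OS page cache is warm before Solr starts.
SOLR_DATA=/apps/solr/data   # placeholder: adjust to your data directory
for core in core1 core2 core3
do
  find "$SOLR_DATA/$core/index" -type f | xargs cat > /dev/null
done
```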
Yes, you can use the Core RELOAD command:
https://cwiki.apache.org/confluence/display/solr/CoreAdminHandler+Parameters+and+Usage#CoreAdminHandlerParametersandUsage-%7B%7BRELOAD%7D%7D
On Fri, Aug 23, 2013 at 1:51 PM, Bruno René Santos wrote:
> Hello,
>
> Is it possible to reload the synonyms and
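The RELOAD itself is a single HTTP call to the CoreAdmin handler; the core name collection1 is a placeholder:

```shell
# Reloading a core re-reads schema.xml, synonyms, stopwords, etc.
# without restarting the servlet container.
url="http://localhost:8983/solr/admin/cores?action=RELOAD&core=collection1"
echo "$url"

# curl "$url"   # requires a running Solr with the default adminPath
```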
Hello,
Is it possible to reload the synonyms and stopwords files without rebooting
solr?
Regards
Bruno Santos
--
Bruno René Santos
Lisboa - Portugal