Hi,
Thanks,
As I said, I already used "KeywordTokenizerFactory" in my earlier posts and am
still getting the same results.
AnilJayanti
Java has system properties for setting proxies.
http://docs.oracle.com/javase/6/docs/technotes/guides/net/proxies.html
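For example, a minimal sketch of setting those properties programmatically (host and port here are placeholders; they can equally be passed as -D flags when starting the JVM that runs Solr):

```java
public class ProxyConfig {
    public static void main(String[] args) {
        // Equivalent to launching the JVM with:
        //   java -Dhttp.proxyHost=proxy.example.com -Dhttp.proxyPort=8080 ...
        System.setProperty("http.proxyHost", "proxy.example.com"); // placeholder host
        System.setProperty("http.proxyPort", "8080");              // placeholder port
        // hosts that should bypass the proxy
        System.setProperty("http.nonProxyHosts", "localhost|127.0.0.1");
        // Subsequent URLConnection-based requests in this JVM pick these up.
        System.out.println(System.getProperty("http.proxyHost"));
    }
}
```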
On Fri, Aug 31, 2012 at 3:43 AM, Nagaraj Molala wrote:
> Hi,
>
> Is there any option in the configuration to add proxy setting in the
> rss-data-config.xml file or any configura
Tika generates a block-structured stream of events for the document.
It would be cool to have an alternate Tika processor in the DIH that
generates this stream as XML. You could then use the XPath tools to
grab whatever you want.
On Fri, Aug 31, 2012 at 4:25 AM, Erick Erickson wrote:
> You can al
We are working on optimizing query performance. My concern was to ensure
some stable QoS. Given our API and UI layout, a user may generate an
expensive query. Given the nature of the service, a user may want to
"hack" it. Currently, our Search API is a good point to try to inflict
DoS on our server
http://wiki.apache.org/solr/DataImportHandlerFaq#Blob_values_in_my_table_are_added_to_the_Solr_document_as_object_strings_like_B.401f23c5
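As that FAQ describes, one option is enabling convertType on the JDBC data source so DIH casts the column to the target field's type instead of calling toString(); a hedged sketch (driver, URL, and credentials are placeholders):

```xml
<dataConfig>
  <!-- convertType="true" makes DIH convert JDBC types such as CLOB/BLOB
       to the destination Solr field's type instead of toString()-ing them -->
  <dataSource driver="com.mysql.jdbc.Driver"
              url="jdbc:mysql://localhost/mydb"
              user="user" password="pass"
              convertType="true"/>
  <!-- entities as before -->
</dataConfig>
```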
On Sat, Sep 1, 2012 at 2:17 AM, Cirelli, Stephen J.
wrote:
> Anyone know why I'm getting this exception? I'm following the example
> here < http://wiki.apache.
Anyone know why I'm getting this exception? I'm following the example
here < http://wiki.apache.org/solr/DataImportHandler#ScriptTransformer>
but I get the below error. The field type in my schema.xml is string,
text doesn't work either. Why would I get an error that there's no split
method on a st
1. Use filter queries
> Here is an example query; is anything incorrect, or is there anything I
> should change?
> http://xxx:8893/solr/candidate/select/?q=+(IdCandidateStatus:2)+(IdCobranded:3)+(IdLocation1:12))+(LastLoginDate:[2011-08-26T00:00:00Z
> TO 2012-08-28T00:00:00Z])
What is the logic here? Are
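For example (a sketch, assuming each clause is meant as a mandatory AND filter; note the stray closing parenthesis after IdLocation1:12 in the original), the query could be rewritten so the selective clauses become cacheable filter queries:

```
http://xxx:8893/solr/candidate/select/?q=*:*
  &fq=IdCandidateStatus:2
  &fq=IdCobranded:3
  &fq=IdLocation1:12
  &fq=LastLoginDate:[2011-08-26T00:00:00Z TO 2012-08-28T00:00:00Z]
```

Each fq is cached independently in the filterCache, so repeated combinations of these clauses stay cheap.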
Of course I saw my error within seconds of pressing send. The
"invariants" block should appear outside the "defaults" block in the
RequestHandler.
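In other words, the two lst blocks should be siblings in solrconfig.xml; a minimal sketch (handler name and parameter values are illustrative):

```xml
<requestHandler name="/select" class="solr.SearchHandler">
  <lst name="defaults">
    <str name="echoParams">explicit</str>
  </lst>
  <!-- "invariants" sits beside "defaults", not inside it -->
  <lst name="invariants">
    <str name="fq">public:true</str>
  </lst>
</requestHandler>
```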
On 08/31/2012 04:25 PM, Carrie Coy wrote:
(solr4-beta) I'm trying to follow the instructions in this article:
http://searchhub.org/dev/2011/06/20
(solr4-beta) I'm trying to follow the instructions in this article:
http://searchhub.org/dev/2011/06/20/solr-powered-isfdb-part-10/ to apply
a custom sort order to search results:
Essentially, it involves creating a new qq parameter, and substituting
it into the original q parameter as a local
Looks like it was because it was on multiple lines. Or there was a tab
in there. I put the query all on one line and it runs now.
-Original Message-
From: Cirelli, Stephen J.
Sent: Friday, August 31, 2012 3:22 PM
To: solr-user@lucene.apache.org
Subject: DIH jdbc4.MySQLSyntaxErrorException
No, it should process all of the files that get listed. I'm taking a look at
the issue you opened, SOLR-3779. This is also similar to SOLR-3307, although
that was reported as a bug with "threads" in 3.6, which is no longer a feature
in 4.0.
James Dyer
E-Commerce Systems
Ingram Content Group
(
I have very long SQL statements that span multiple lines and it works for me.
You might want to paste your SQL into a tool like Squirrel and see if it
executes outside of DIH. One guess I have is you've got something like this...
where blah='${my.property}'
...but the variable is not resolving
Is there some limitation to how complex of a select I can use? I'm
getting this error when I try to execute my query. The query runs fine
inside MySQL Workbench.
" Caused by: com.mysql.jdbc.exceptions.jdbc4.MySQLSyntaxErrorException:
You have an error in your SQL syntax; check the manual that corr
Try this:
http://wiki.apache.org/solr/AnalyzersTokenizersTokenFilters#solr.KeywordTokenizerFactory
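A typical schema.xml field type using it might look like this (the type name and the lowercase filter are illustrative additions):

```xml
<fieldType name="keyword_lower" class="solr.TextField">
  <analyzer>
    <!-- emits the entire input as a single token -->
    <tokenizer class="solr.KeywordTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>
```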
On Thu, Aug 30, 2012 at 9:07 PM, aniljayanti wrote:
> Hi,
>
> thanks,
>
> I checked with the given changes and am getting the below error, saying that
> Solr does not allow a field type without a tokenizer.
>
> org.apache.
Chris,
This is really good stuff. (I say "stuff" because I don't really know the
index's inner workings.)
I was wondering whether I could use "copyField", as in my previous example:
But I guess I would have had to write a custom processor and define a
specific field type.
I guess a more elegant
> Thanks, Mr. Yagami. I'll look into that.
Hi Bing,
You can use this data-config.xml to index txt files on disk. Add these fields to
schema.xml
link
There's no need to read the entire text file into memory. I mean, each
line/entry stands on its own and can be sent to Solr to be indexed by
itself.
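To illustrate the point, a sketch of line-at-a-time processing in Java (indexLine is a placeholder for whatever actually sends the document to Solr, e.g. via SolrJ):

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.StringReader;

public class LineIndexer {
    // Process one line at a time so the whole file never sits in memory.
    static int indexAll(BufferedReader reader) throws IOException {
        int count = 0;
        String line;
        while ((line = reader.readLine()) != null) {
            if (line.trim().isEmpty()) continue; // skip blank lines
            indexLine(line);
            count++;
        }
        return count;
    }

    static void indexLine(String line) {
        // placeholder: build a SolrInputDocument here and add it via SolrJ
    }

    public static void main(String[] args) throws IOException {
        BufferedReader r = new BufferedReader(new StringReader("a\n\nb\nc\n"));
        System.out.println(indexAll(r)); // prints 3
    }
}
```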
-- Jack Krupansky
-Original Message-
From: Bing Hua
Sent: Friday, August 31, 2012 11:56 AM
To: solr-user@lucene.apache.org
Subject: Re:
Thanks, Mr. Yagami. I'll look into that.
Jack, the latter two options both require reading the entire text file into
memory, right?
Bing
--
View this message in context:
http://lucene.472066.n3.nabble.com/Send-plain-text-file-to-solr-for-indexing-tp4004515p4004772.html
Sent from the Solr - User mailing list archive at Nabble.com.
Agreed. There are a lot of products that do this already. Writing it from
scratch in Solr seems like a huge waste of time. You should also check out
Graylog2: http://graylog2.org/
wunder
On Aug 31, 2012, at 7:05 AM, Alexandre Rafalovitch wrote:
> Have you tried looking at http://logstash.net/
Think of the log file as a flat database, each line/entry a "row". So, each
log line/entry would need to be added to Solr as a separate document.
Maybe you could do this using DIH and a LineEntityProcessor and
RegexTransformer, DateFormatTransformer, etc.
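A hedged data-config.xml sketch along those lines (file path, regex, and field names are all illustrative and would need adapting to the actual log format):

```xml
<dataConfig>
  <dataSource type="FileDataSource" encoding="UTF-8"/>
  <document>
    <!-- one Solr document per log line -->
    <entity name="logline" processor="LineEntityProcessor"
            url="/var/log/app.log" rootEntity="true"
            transformer="RegexTransformer">
      <!-- LineEntityProcessor exposes each line as the "rawLine" column -->
      <field column="rawLine" name="raw_line"/>
      <field column="rawLine" regex="^(\S+) (\S+) (.*)$"
             groupNames="logdate,level,message"/>
    </entity>
  </document>
</dataConfig>
```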
-- Jack Krupansky
-Original Mess
I have looked at splunk and logstash but want to explore solr to do the job.
Thanks
--
View this message in context:
http://lucene.472066.n3.nabble.com/need-basic-information-tp4004588p4004763.html
Sent from the Solr - User mailing list archive at Nabble.com.
Data Import Handler can also be used to ingest plain text files.
Or you can use SolrJ and write your own code to process the text files
yourself and add their content to the desired field.
Or write a script in Python or some other scripting language to form a
SolrXML/JSON/CSV wrapper around y
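As a sketch of the scripting option, here is a small Java helper that wraps raw lines in a two-column CSV for Solr's CSV update handler (the id/text field names are illustrative):

```java
import java.util.List;

public class CsvWrapper {
    // Wrap raw text lines in a two-column CSV (id,text) that Solr's
    // CSV update handler can ingest; field names are illustrative.
    static String toCsv(List<String> lines) {
        StringBuilder sb = new StringBuilder("id,text\n");
        int id = 1;
        for (String line : lines) {
            // escape embedded double quotes per CSV rules, then quote the field
            String escaped = line.replace("\"", "\"\"");
            sb.append(id++).append(",\"").append(escaped).append("\"\n");
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.print(toCsv(java.util.Arrays.asList("hello", "say \"hi\"")));
        // id,text
        // 1,"hello"
        // 2,"say ""hi"""
    }
}
```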
> So in order to use Solr Cell I'll have to add a number of dependent
> libraries, which is one of the things I'm trying to avoid. The second
> thing is, Solr Cell still parses the plain text files, and I don't want
> it to make any changes to my exported files.
You can index plain text file
So in order to use Solr Cell I'll have to add a number of dependent libraries,
which is one of the things I'm trying to avoid. The second thing is, Solr Cell
still parses the plain text files, and I don't want it to make any changes to
my exported files.
Any ideas?
Bing
--
View this message in c
Hi
I am facing an issue with French characters being converted to junk
characters after indexing.
I am using XPathEntityProcessor to index the xml file I have generated using
the Java application.
After digging into the issue, I found that the cause is that
the xml files are in
Hi,
thanks for your responses!
I made a simpler query with only one facet and without any boosting
stuff, so it should be easier to focus on the problem
facet=on&facet.mincount=1&facet.limit=100&rows=0&start=0&q=+(+%2Bmitbestimmung++)+&facet.field=navNetwork&qt=only_queryfields_edismax&debugQuer
Have you tried looking at http://logstash.net/ first? Or Splunk
(http://www.splunk.com/) if you have money. These might be a better
starting point than bare Solr.
Regards,
Alex
Personal blog: http://blog.outerthoughts.com/
LinkedIn: http://www.linkedin.com/in/alexandrerafalovitch
- Time is t
What do you expect? I really have no clue what to say
without that information...
You might want to review:
http://wiki.apache.org/solr/UsingMailingLists
Best
Erick
On Thu, Aug 30, 2012 at 4:17 PM, zsy715 wrote:
> Hi all,
>
> I am new to Solr. Now I have got a problem so I hope you guys can he
So, just do another query before doing the main query. What's the problem?
Be more specific. Walk us through the sequence of processing that you need.
-- Jack Krupansky
-Original Message-
From: johannes.schwendin...@blum.com
Sent: Friday, August 31, 2012 1:52 AM
To: solr-user@lucene.a
In addition to Jack's comment, what would the correct behavior
be for "sorting on a multivalued field"? The reason this is disallowed
is because there is no correct behavior in the general case.
Imagine you have two entries, aardvark and emu in your
multiValued field. How should that document sort
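The usual workaround (a sketch, not the only approach) is to populate a separate single-valued field at index time, e.g. with the first or minimum value, and sort on that. Note it has to be filled by your indexing code; a copyField from a multiValued source would still be multiValued:

```xml
<!-- multiValued field used for searching -->
<field name="animals" type="string" indexed="true" stored="true"
       multiValued="true"/>
<!-- single value chosen at index time, used only for sorting -->
<field name="animals_sort" type="string" indexed="true" stored="false"
       multiValued="false"/>
```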
You can also move the Tika processing off Solr to the client and perhaps have
more control there. I haven't tried this particular thing, so
see: http://searchhub.org/dev/2012/02/14/indexing-with-solrj/
Best
Erick
On Thu, Aug 30, 2012 at 9:35 AM, Markus Jelsma
wrote:
> Tika can do this but S
Pesky requirements! But yep, you can't just use &wt=json...
In fact it sounds like no matter what you do you have to transform
things somehow...
You could _consider_ (and I'm not recommending, just being sure
you know it exists) writing a custom response writer. See
JSONResponseWriter for a m
You got what I am looking for, but the indexing part is where I am not sure
how it needs to be done.
To send these log files for indexing in CSV format, is it just a
configuration change to pull these 3 fields from each line in the text files,
or do I need to write code for that?
I simplified the lines in
Based on the stack trace it seems that DIH uses URLConnection. You
might want to try setting the proxy related system properties for the
jvm that runs Solr:
http://docs.oracle.com/javase/6/docs/technotes/guides/net/proxies.html
--
Sami Siren
On Fri, Aug 31, 2012 at 9:58 AM, Molala, Nagaraj (GE