Hi Pravesh,
Thank you for your reply.
I am using DIH, but right now I don't have a timestamp column in my
database. Instead, I have a datecreated column which is not updated on
changes; whenever I create something, it just stores the creation
time. Is there any chance to
If you mean using DIH: then you need a timestamp column in your DB,
which has to be updated to the current timestamp whenever you modify a
record in the DB.
For the rest, just go through the DIH wiki here:
http://wiki.apache.org/solr/DataImportHandler
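A minimal sketch of what the delta setup could look like, assuming a hypothetical table `item` with an added `last_modified` TIMESTAMP column (the table and column names here are illustrative, not from the original thread):

```xml
<!-- data-config.xml: deltaQuery finds rows changed since the last import;
     ${dataimporter.last_index_time} is maintained by DIH automatically -->
<entity name="item" pk="id"
        query="SELECT * FROM item"
        deltaQuery="SELECT id FROM item
                    WHERE last_modified &gt; '${dataimporter.last_index_time}'"
        deltaImportQuery="SELECT * FROM item WHERE id = '${dih.delta.id}'"/>
```

Running a delta-import then re-indexes only the rows the deltaQuery returned.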
Thanx
Pravesh
Just follow the dismax wiki: http://wiki.apache.org/solr/DisMaxQParserPlugin
You just need to:
1. Duplicate the dismax request handler entry in your solrconfig.xml.
2. Change the name to some other unique name, e.g. "notLoggedDismax".
3. Set the qf fields to your content_dup field (or other c
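A sketch of what those steps could produce in solrconfig.xml (the handler and field names follow the example names above; adjust to your schema):

```xml
<!-- Copy of the dismax handler, restricted to the duplicate content
     field for users who are not logged in -->
<requestHandler name="notLoggedDismax" class="solr.SearchHandler">
  <lst name="defaults">
    <str name="defType">dismax</str>
    <str name="qf">content_dup</str>
  </lst>
</requestHandler>
```

Your application would then select this handler (e.g. with qt=notLoggedDismax) for users who are not logged in.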
On Sun, Oct 9, 2011 at 11:30 PM, Jamie Johnson wrote:
> I'm doing some work on the solrcloud branch in SVN and am noticing
> some strange (but perhaps expected) behavior when executing queries.
> I have set up a simple 2-shard cluster and indexed 50 documents into each
> (verified by accessing http://
: Conceptually
: the Join-approach looks like it would work from paper, although I'm not a
: big fan of introducing a lot of complexity to the frontend / querying part
: of the solution.
you lost me there -- i don't see how using join would impact the front end
/ query side at all. your query c
Brandon Ramirez wrote:
>
> I may not be understanding the question correctly, but I think the dismax
> parser would solve this since you can specify the fields that you want to
> search against. So you just need a pre-login field list and a post-login
> field list in your application logic. Or
: I figured it out.. thanks for pointing me in the right direction... so in the
: end the Solr field type text was changed to text_general
The key takeaway here seems to be that when you upgraded from Solr 3.3
to Solr 3.4 you stopped using your old schema (which you copied from the
3.3 example), a
Hi,
If you have 4GB total on your server, try giving about 1GB to Solr, leaving 3GB
for the OS, OS caching, and memory allocation outside the JVM.
Also, add 'ulimit -v unlimited' and 'ulimit -s 10240' to /etc/profile to
increase the virtual memory and stack limits.
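Put together, the suggested settings might look like this (the 1GB heap figure is the suggestion above; tune it for your workload):

```shell
# /etc/profile additions: raise virtual memory and stack limits
ulimit -v unlimited
ulimit -s 10240

# start Solr (1.4-era example layout) with a 1GB heap
java -Xmx1g -Xms1g -jar start.jar
```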
And you should also consider upgrading to
On 10/07/2011 6:21 PM:
Hi,
What Solr version?
Solr Implementation Version: 1.4.1 955763M - mark - 2010-06-17 18:06:42.
It's running on a SUSE Linux VM.
How often do you do commits, or do you use autocommit?
I had been doing commits every 100 documents (the entire set is about
3
Hi,
The highlighter will only highlight words from your main query. So to get
highlighting for your example, add a query in "q" with the words you need
highlighted:
.../solr/select?fq=type:cat&q=type:cat&hl=on&hl.fl=type
--
Jan Høydahl, search solution architect
Cominvent AS - www.cominvent.co
Hi all,
I'm doing search stress testing on my Solr cluster, and when I run
many concurrent searches for random dictionary words I get the following
response, with HTTP status 200.
Have I reached a concurrency limit? A pool limit?
It is Solr 1.4.1 with Tomcat 6.0.32, and I set maxThreads to 500 on
Tomc
Hi all,
I use a Solr filter query, for example type:cat, and
enable highlighting for all fields, but the highlighter
returns an empty result.
How can I enable highlighting for a filter query?
Thanks in advance,
Pavel Drobushevich
mailto: p.drobushev...@gmail.com
skype: pavel_drabushevich
profile:
Hi,
for those who may be interested, I resolved it (with a little help from the
urlrewrite user group :-) ) by using a type="proxy" rule.
S
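For reference, a type="proxy" rule in Tuckey UrlRewriteFilter's urlrewrite.xml looks roughly like this (the paths are illustrative; the original message does not show the actual rule used):

```xml
<rule>
  <from>^/search/(.*)$</from>
  <!-- type="proxy" forwards the request to the target URL
       instead of doing a local forward or redirect -->
  <to type="proxy">http://localhost:8983/solr/select/$1</to>
</rule>
```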
From: Finotti Simone [tech...@yoox.com]
Sent: Friday, 7 October 2011, 11:38
To: solr-user@lucene.apache.org
Subject
Hi,
We index structured documents, with numbered chapters, paragraphs and
sentences. After doing a (rather complex) search, we may get multiple matches
in each result doc. We want to highlight those matches in our front-end and
currently we do a simple string match of the query words against th
On Mon, Oct 10, 2011 at 4:40 PM, Chantal Ackermann
wrote:
[...]
> (2) response-to-update.xsl (this goes into
> $SOLR_HOME/sourceCore/conf/xslt/):
[...]
Thanks for sharing that, and will check it out when possible.
This looks like it would be very useful, as we once rolled our
own half-baked, no
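For anyone wanting to try the same approach, a minimal response-to-update.xsl could look roughly like this (a sketch rather than the actual stylesheet from the thread; it handles single-valued fields only, and multi-valued <arr> elements would need extra handling):

```xml
<xsl:stylesheet version="1.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:output method="xml" indent="yes"/>
  <!-- turn a /select XML response into an <add> document suitable
       for posting to another core's /update handler -->
  <xsl:template match="/response">
    <add>
      <xsl:for-each select="result/doc">
        <doc>
          <xsl:for-each select="*[@name]">
            <field name="{@name}"><xsl:value-of select="."/></field>
          </xsl:for-each>
        </doc>
      </xsl:for-each>
    </add>
  </xsl:template>
</xsl:stylesheet>
```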
On 10/10/2011 5:18 AM, Alexander Valet | edelight wrote:
> Hi,
>
> we are about to develop a replication dashboard for our master / slaves setup
> to monitor and control replication through a single interface.
>
> Has anybody experience with this and could share some hint, ideas, learnings
> wit
hello,
I'm quite new to Solr and am trying to get a grip on it all. I'm currently
reading and enjoying the "Solr 1.4 Enterprise Search Server" book. I'm trying
and failing, and need some advice on the following.
given the following schema for "subscriptions"
I may not be understanding the question correctly, but I think the dismax
parser would solve this since you can specify the fields that you want to
search against. So you just need a pre-login field list and a post-login field
list in your application logic. Or like pravesh suggested, create m
Hi,
I have four indexes which I'm sharding together for our general search
system. Each index has
its own Solr search webapp in Tomcat, and they all produce correct results
individually.
Two of the indexes, when sharded together, return the correct numFound and a
loaded
SolrDocumentList, whe
Hi there,
I have been using cores to build up new cores (for various
reasons). (I am not using Solr as data storage; the cores are re-indexed
frequently.)
This solution works for releases 1.4 and 3, as it does not use the
SolrEntityProcessor.
To load data from another Solr core and populat
> I would like to know how we can achieve the *range search*
> for these date
> formats like [2011-11-02:10 03:37.236088 to
> 2011-12-02:10 03:37.236088]..?
You can use frange query parser plugin.
http://www.lucidimagination.com/blog/2009/07/06/ranges-over-functions-in-solr-14/
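Note that frange works over numeric functions. If the timestamps were instead indexed in a true Solr date field (ISO 8601 format), an ordinary range query would also work; `object_dt` here is a hypothetical date field, not one from the original schema:

```
q=object_dt:[2011-11-02T10:03:37.236Z TO 2011-12-02T10:03:37.236Z]
```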
Thanks for your reply, guys.
As suggested, both formats q={!term f=object}2011-11-02:10 03:37.236088 and
2011-11-02\:10\:03\:37.236088 have worked perfectly for search.
I would like to know how we can achieve the *range search* for these date
formats, like [2011-11-02:10 03:37.236088 TO 2011-12-02:10 03
On Sat, Oct 1, 2011 at 11:55 PM, Lord Khan Han wrote:
> Is there any way to get a correct QTime when we use HTTP caching? I think Solr
> is also caching the QTime, so it gives the same QTime in the response regardless
> of how long the query actually takes. How can I set QTime correctly from Solr
> when I use HTTP caching
Can you clarify the following:
1) Is it that you want to hide some documents from search when the user is not
logged in?
OR
2) Is it that you want to hide some fields of some documents from search
when the user is not logged in?
For point 2, one solution can be that while indexing the documents, you can
Hi,
we are about to develop a replication dashboard for our master/slaves setup
to monitor and control replication through a single interface.
Does anybody have experience with this and could share some hints, ideas, or
learnings?
Or even point us to some example code or a tool already developed.
> My default Request Handler looks like
> this (the XML markup was stripped by the mail archive; the surviving values
> are):
>
> class="org.apache.solr.handler.component.SearchHandler" default="true"
> all
> highlight
Hi Jan
Thanks for getting back to me. Here's the details:
JVM: jdk1.6.0_27
App Server: Tomcat 7
OS: Centos x86_64 5.4
RAM: 8GB
JVM RAM: 4GB
JVM ARGS: -Xms4G -Xmx4G -XX:+UseConcMarkSweepGC -XX:+UseParNewGC
-XX:NewRatio=3 -XX:PermSize=128M -XX:MaxPermSize=256M
-Djava.util.logging.manager=org.apache
My default Request Handler looks like this (the XML markup was stripped by the
mail archive; the surviving values are):
all
highlight
collapse
facet
And I have u
> Oh also: Does DIH have any
> experimental way for folks to be reading data
> from one solr core and then massaging it and importing it
> into another core?
SolrEntityProcessor can do that.
https://issues.apache.org/jira/browse/SOLR-1499
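A minimal data-config.xml sketch using SolrEntityProcessor (the source-core URL and query are placeholders; the processor comes from the SOLR-1499 patch and the releases that include it):

```xml
<dataConfig>
  <document>
    <!-- pull documents out of another core's /select handler
         and index them into this core -->
    <entity name="source"
            processor="SolrEntityProcessor"
            url="http://localhost:8983/solr/sourceCore"
            query="*:*"
            rows="500"/>
  </document>
</dataConfig>
```

Field names returned by the source core are mapped onto this core's schema, so the massaging step can be done with DIH transformers on the same entity.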