I think that indexing the access information is going to work nicely, and I
agree that sticking with the simplest/solr way is best. The constraint is
super simple... you can view this set of documents or you can't... based on
an api key: fq=api_key:xxx
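Concretely, the whole request might look something like this (just a sketch;
host, port, and key value are illustrative):

  http://localhost:8983/solr/select?q=*:*&fq=api_key:xxx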
Thanks for the feedback on this, guys!
Matt
Totally agree, do it at indexing time, in the index.
Dennis Gearon
Signature Warning
It is always a good idea to learn from your own mistakes. It is usually a
better idea to learn from others' mistakes, so you do not have to make them
yourself.
from 'http://blogs.techrepubli
If you COULD solve your problem by indexing 'public', or other tokens from a
limited vocabulary of document roles, in a field -- then I'd definitely suggest
you look into doing that, rather than doing odd things with Solr instead. If
the only barrier is not currently having sufficient logic at t
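To make that concrete, here's a sketch of the kind of filter I mean (the
field name "role" is purely illustrative):

  http://localhost:8983/solr/select?q=foo&fq=role:public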
You're not doing an optimize; I think an optimize would delete your old
index files. Try it with the additional parameter optimize=true
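The full command would then be something like (assuming the default
/dataimport handler path):

  http://localhost:8983/solr/dataimport?command=full-import&clean=true&commit=true&optimize=true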
- Espen
On Thu, Jan 20, 2011 at 11:30 AM, Bernd Fehling
wrote:
> Hi list,
>
> after sending command=full-import&clean=true&commit=true
> Solr 4.x (apache-solr-4.0-2010-11-24_09-
I see the file
-rw-rw-r-- 1 feeddo feeddo 0 Dec 15 01:19
lucene-cdaa80c0fefe1a7dfc7aab89298c614c-write.lock
was created on Dec. 15. At the end of the replication, as far as I
remember, the SnapPuller tries to open the writer to ensure the old
files are deleted, and in
your case it cannot obtai
A "collection" is your data, like newspaper articles or movie titles.
It is a user-level concept, not really a Solr design concept.
A "core" is a Solr/Lucene index. It is addressable as
solr/collection-name on one machine.
You can use a core to store a collection, or you can break it up among
mul
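For example, a minimal multi-core solr.xml sketch (core names illustrative):

  <solr persistent="true">
    <cores adminPath="/admin/cores">
      <core name="articles" instanceDir="articles"/>
      <core name="movies" instanceDir="movies"/>
    </cores>
  </solr>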
The file system checked out, I also tried creating a slave on a
different machine and could reproduce the issue. I logged SOLR-2329.
On Sat, Dec 18, 2010 at 8:01 PM, Lance Norskog wrote:
> This could be a quirk of the native locking feature. What's the file
> system? Can you fsck it?
>
> If this
Em,
yes, you can replace the index (get the new one into a separate folder
like index.new and then rename it to the index folder) outside of Solr,
then just do the HTTP call to reload the core.
Note that the old index files may still be in use (continue to serve
the queries while reloading), eve
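The reload call itself is just the CoreAdmin API (core name illustrative):

  http://localhost:8983/solr/admin/cores?action=RELOAD&core=core0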
Got it, here are the links that I have on RBAC/ACL/Access Control. Some of
these
are specific to Solr.
http://www.xaprb.com/blog/2006/08/16/how-to-build-role-based-access-control-in-sql/
http://www.xaprb.com/blog/2006/08/18/role-based-access-control-in-sql-part-2/
http://php.dzone.com/artic
Hi Erick,
thanks for your response.
Yes, it's really not that easy.
However, the goal is to avoid any kind of master/slave setup.
The most recent idea I have is to create a new core with a data dir pointing
to an already existing directory with a fully optimized index.
Regards,
Em
1024 is the default number; it can be increased. See maxBooleanClauses
in solrconfig.xml.
This shouldn't be a problem with 2K clauses, but expanding it to tens of
thousands is probably a mistake (but test to be sure).
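For reference, the setting looks like this (the value is illustrative):

  <maxBooleanClauses>10240</maxBooleanClauses>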
Best
Erick
On Sat, Jan 22, 2011 at 3:50 PM, Matt Mitchell wrote:
> Hey thanks
This seems far too complex to me. Why not just optimize on the master
and let replication do all the rest for you?
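A sketch of the master-side config that triggers this (illustrative, not
your exact setup):

  <requestHandler name="/replication" class="solr.ReplicationHandler">
    <lst name="master">
      <str name="replicateAfter">optimize</str>
    </lst>
  </requestHandler>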
Best
Erick
On Fri, Jan 21, 2011 at 1:07 PM, Em wrote:
>
> Hi,
>
> are there no experiences or thoughts?
> How would you solve this at Lucene-Level?
>
> Regards
>
>
> Em wrote:
> >
I'm assuming that this is just one example of many different
kinds of transformations you could do. It *seems* like a variant
of a synonym analyzer, so you could write a custom analyzer
(it's not actually hard) to create a bunch of synonyms
for your "special" terms at index time. Or you could use th
OK, idea from left field off the top of my head, so don't take it as
gospel...
Create a second index where you send your data, where each phrase is really
a "document", and query *that* index for your autosuggest. Perhaps this
could be a secondary core.
It could even be a set of *special* documents in y
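A quick SolrJ sketch of feeding such phrase documents (core name and field
names are illustrative):

  import org.apache.solr.client.solrj.SolrServer;
  import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;
  import org.apache.solr.common.SolrInputDocument;

  public class FeedSuggest {
      public static void main(String[] args) throws Exception {
          // Point at the secondary autosuggest core.
          SolrServer suggest =
              new CommonsHttpSolrServer("http://localhost:8983/solr/suggest");
          SolrInputDocument doc = new SolrInputDocument();
          doc.addField("id", "phrase-1");
          // One phrase per "document".
          doc.addField("phrase", "apache solr faceted search");
          suggest.add(doc);
          suggest.commit();
      }
  }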
Dang! There were hot, clickable links in the web mail where I put them. I
guess you guys can search for those strings on Google and find them. Sorry.
- Original Message
From: Dennis Gearon
To: solr-user@lucene.apache.org
Sent: Sat, January 22, 2011 1:09:26 PM
Subject: Re: api key filt
Yep, that's about it. By far the main constraint is memory, and the caches
are what eat it up. So by minimizing the caches on the master (since they
are filled by searching) you speed that part up.
By maximizing the cache settings on the servers, you make them go as fast
as possible.
RamBufferSiz
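The knobs involved live in solrconfig.xml; the values here are purely
illustrative (keep the caches small on the master, large on the slaves):

  <ramBufferSizeMB>32</ramBufferSizeMB>
  <filterCache class="solr.FastLRUCache" size="16384"
               initialSize="4096" autowarmCount="1024"/>
  <queryResultCache class="solr.LRUCache" size="16384"
                    initialSize="4096" autowarmCount="1024"/>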
The links didn't work, so here they are again, NOT from a sent folder:
PHP Access Control - PHP5 CMS Framework Development | PHP Zone
A Role-Based Access Control (RBAC) system for PHP
Appendix C: Task-Field Access
Role-based access control in SQL, part 2 at Xaprb
PHP Access Control - PHP5 CMS F
See below.
On Wed, Jan 19, 2011 at 7:26 PM, Joscha Feth wrote:
> Hello Erick,
>
> Thanks for your answer!
>
> But I question why you *require* many different indexes. [...] including
> > isolating one
> > users'
> > data from all others, [...]
>
>
> Yes, that's exactly what I am after - I need to
Hey, thanks, I'll definitely have a read. The only problem with this,
though, is that our api is a thin layer of app code, with solr only (no db);
we index data from our sql db into solr and push the index off for
consumption.
The only other idea I had was to send a list of the allowed document ids
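e.g. something like this (ids purely illustrative; note the boolean clause
limit Erick mentions elsewhere in this thread):

  fq=id:(101 OR 102 OR 103)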
The only way that you would have that many api keys per record is if one of
them represented 'public', right? 'public' is a ROLE. Your answer is to use
RBAC-style techniques.
Here are some links that I have on the subject. What I'm thinking of doing is:
Sorry for formatting, Firefox is freaki
Just wanted to see if others are handling this in some special way, but I
think this is pretty simple.
We have a database of api keys that map to "allowed" db records. I'm
planning on indexing the db records into solr, along with their api keys in
an indexed, non-stored, multi-valued field. Then,
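In schema.xml such a field might be declared like this (name illustrative):

  <field name="api_key" type="string" indexed="true" stored="false"
         multiValued="true"/>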
Hello list,
I want to experiment with the new SolrCloud feature. So far, I have
absolutely no experience with distributed search in Solr.
However, there are some things that remain unclear to me:
1) What is the use case of a collection?
As far as I understood: A collection is the same as a core b
>
> [] ASF Mirrors (linked in our release announcements or via the Lucene website)
>
> [x] Maven repository (whether you use Maven, Ant+Ivy, Buildr, etc.)
>
> [] I/we build them from source via an SVN/Git checkout.
>
> [] Other (someone in your company mirrors them internally or via a downstre
I tried to build yesterday's svn trunk of 4.0 and got massive failures... The
Hudson zipped-up version seems to work without any issues. Has anyone else seen
this build issue on the Mac? I guess this also has to do with Grant's recent
poll...
Adam
On Jan 22, 2011, at 6:34 AM, Robert Muir wrote
On Fri, Jan 21, 2011 at 11:53 PM, Lance Norskog wrote:
> The Solr 4 branch is nowhere near ready for prime time. For example,
> within the past week code was added that forces you to completely
> reindex all of the documents you had. Solr 4 is really the "trunk".
> The low-level stuff is being mas
> Where do you get your Lucene/Solr downloads from?
>
> [] ASF Mirrors (linked in our release announcements or via the Lucene website)
>
> [X] Maven repository (whether you use Maven, Ant+Ivy, Buildr, etc.)
>
> [X] I/we build them from source via an SVN/Git checkout.
>
> [] Other (someone in your c
I got the solution. Attached is one complete code sample I made, as follows.
Thanks,
LB
package com.greatfree.Solr;
import org.apache.solr.client.solrj.SolrServer;
import org.apache.solr.client.solrj.SolrServerException;
import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;
import org.apache
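For anyone skimming, a minimal sketch of wiring those imports together (URL
illustrative; this is not LB's full sample):

  package com.greatfree.Solr;

  import org.apache.solr.client.solrj.SolrServer;
  import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;

  public class PingExample {
      public static void main(String[] args) throws Exception {
          SolrServer server =
              new CommonsHttpSolrServer("http://localhost:8983/solr");
          System.out.println(server.ping().getStatus()); // 0 means OK
      }
  }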