[
https://issues.apache.org/jira/browse/RANGER-1234?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15709536#comment-15709536
]
Don Bosco Durai commented on RANGER-1234:
-----------------------------------------
Yes, Hive would be more complex, because the Ranger plugin runs only in the
HiveServer2 process, while transformation/masking happens at the YARN/LLAP
level. Row-level filtering is done by adding the appropriate WHERE clause
within HiveServer2 itself (which is relatively easy), but for data
transformation the policies are pre-evaluated and shipped (configured) to the
job via a vectorized UDF. So there is an additional dependency on the Hive
architecture.
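To make the first part concrete, here is a minimal sketch of the row-level-filtering idea: HiveServer2 rewrites the incoming query by injecting the policy's filter expression as a WHERE clause. All names (`applyRowFilter`, the table and predicate strings) are illustrative assumptions, not Ranger's actual API.

```java
public class RowFilterSketch {
    // Rewrite a simple table scan by appending the policy's filter predicate.
    // A real implementation would operate on the parsed query plan, not strings.
    static String applyRowFilter(String table, String filterExpr) {
        if (filterExpr == null || filterExpr.isEmpty()) {
            return "SELECT * FROM " + table; // no row-filter policy applies
        }
        return "SELECT * FROM " + table + " WHERE " + filterExpr;
    }

    public static void main(String[] args) {
        // e.g. a policy restricting a user to rows of their own department
        System.out.println(applyRowFilter("employees", "dept = 'sales'"));
    }
}
```

The key point is that the rewrite happens entirely inside HiveServer2, before the job is handed to YARN/LLAP, which is why this case is comparatively easy.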
With HBase, the Ranger plugin runs within each RegionServer, and the same is
true of Kafka brokers. So, if we generalize our architecture, it will be easy
to support both modes (in-process and remote-process).
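One way to picture the generalization is a single evaluator contract that both an in-process plugin (HBase RegionServer, Kafka broker) and a pre-evaluated, shipped policy set (the Hive vectorized-UDF case) could implement. All interface and class names below are hypothetical, not Ranger's actual interfaces.

```java
import java.util.Map;
import java.util.Set;

// Hypothetical common contract for both deployment modes.
interface PolicyEvaluator {
    boolean isAccessAllowed(String user, String resource);
}

// In-process mode: consult the live policy state inside the server process.
class InProcessEvaluator implements PolicyEvaluator {
    private final Map<String, Set<String>> grants; // user -> allowed resources
    InProcessEvaluator(Map<String, Set<String>> grants) { this.grants = grants; }
    public boolean isAccessAllowed(String user, String resource) {
        return grants.getOrDefault(user, Set.of()).contains(resource);
    }
}

// Remote-process mode: decisions pre-evaluated at submit time and shipped
// with the job, so no policy engine is needed in the executing process.
class PreEvaluatedEvaluator implements PolicyEvaluator {
    private final Set<String> allowed; // "user:resource" pairs decided up front
    PreEvaluatedEvaluator(Set<String> allowed) { this.allowed = allowed; }
    public boolean isAccessAllowed(String user, String resource) {
        return allowed.contains(user + ":" + resource);
    }
}

public class ModeSketch {
    public static void main(String[] args) {
        PolicyEvaluator inProc = new InProcessEvaluator(Map.of("alice", Set.of("t1")));
        PolicyEvaluator shipped = new PreEvaluatedEvaluator(Set.of("alice:t1"));
        System.out.println(inProc.isAccessAllowed("alice", "t1"));
        System.out.println(shipped.isAccessAllowed("bob", "t1"));
    }
}
```

Callers only see `PolicyEvaluator`, so a component could pick whichever mode fits its process model without changing the authorization call sites.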
We should also list the other components we need to support and prioritize
them based on demand. Solr provides a lot of hooks, but also more freedom to
write custom functions, which can sometimes be difficult to manage. I know
HAWQ is also currently integrating with Ranger.
> Post-evaluation phase user extensions
> -------------------------------------
>
> Key: RANGER-1234
> URL: https://issues.apache.org/jira/browse/RANGER-1234
> Project: Ranger
> Issue Type: Improvement
> Reporter: Nigel Jones
>
> As per
> https://cwiki.apache.org/confluence/display/RANGER/Dynamic+Policy+Hooks+in+Ranger+-+Configure+and+Use
> we have the ability to add
> - content enricher
> - condition evaluator
> user extensions to a Ranger plugin.
> However, the third phase -- post-evaluation -- could also lend itself to
> user extensions, such as additional user-defined filtering of resultant data
> sets.
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)