[
https://issues.apache.org/jira/browse/HADOOP-16830?focusedWorklogId=479180&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-479180
]
ASF GitHub Bot logged work on HADOOP-16830:
-------------------------------------------
Author: ASF GitHub Bot
Created on: 04/Sep/20 16:17
Start Date: 04/Sep/20 16:17
Worklog Time Spent: 10m
Work Description: steveloughran commented on pull request #2069:
URL: https://github.com/apache/hadoop/pull/2069#issuecomment-687247574
OK, despite my force push losing @jimmy-zuber-amzn's comments, I agree with
the points about thread safety. In my head I'd imagined that we'd build that
implementation map once and then iterate over it, but I can see the benefits
of supporting dynamic addition of new values to both the snapshot and the
dynamic statistics.
* Snapshot: add an entry to the map
* Dynamic: add new AtomicLong etc. entries to the appropriate map
This would let us create a minimal snapshot and pass it around, collecting
values as it goes, *without needing to define up front all the stats to
collect*. The work here needs to be lined up for that, with iterators over
the maps being resilient to new values being added.
For the dynamic stuff -> ConcurrentHashMap.
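A minimal sketch of that dynamic side, assuming a hypothetical `DynamicCounters` class (the name and API are illustrative, not the actual IOStatistics types): `computeIfAbsent` registers a counter atomically on first use, and the map's weakly consistent iterators tolerate concurrent additions.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical sketch: dynamic counters backed by a ConcurrentHashMap,
// so new statistics can be registered after construction and updated
// from many threads without external locking.
public class DynamicCounters {
  private final ConcurrentHashMap<String, AtomicLong> counters =
      new ConcurrentHashMap<>();

  /** Increment a counter, creating it atomically on first use. */
  public long increment(String key, long delta) {
    // computeIfAbsent is atomic: concurrent first calls for the same
    // key still end up sharing a single AtomicLong instance.
    return counters.computeIfAbsent(key, k -> new AtomicLong())
        .addAndGet(delta);
  }

  /** Current value, or 0 if the counter was never registered. */
  public long lookup(String key) {
    AtomicLong c = counters.get(key);
    return c == null ? 0 : c.get();
  }

  /** Live view; iterators are weakly consistent, so concurrent adds are safe. */
  public Map<String, AtomicLong> entries() {
    return counters;
  }
}
```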
For Snapshot it's trickier, as they need to be Java serializable so that
Spark &c can forward them around. There I will have to do one of:
* mark all the maps as transient, and in the read/write methods actually
save and then restore the data as TreeMaps (or just arrays of entries)
* make the accessors to the iterators synchronized and return a snapshot of
the iterator. I think that will actually be the easiest approach... I just
need to make sure the operations which update the maps are also synchronized.
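A sketch combining both ideas, with a hypothetical `StatisticsSnapshot` class (names are illustrative): the live map is transient, `writeObject`/`readObject` marshal the entries as a TreeMap for a stable wire format, and the accessor copies under the lock so callers always iterate a consistent snapshot.

```java
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;
import java.util.Map;
import java.util.TreeMap;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch: a serializable statistics snapshot whose live map
// is transient; serialization saves a sorted TreeMap copy and restores it
// into a fresh ConcurrentHashMap on the far side.
public class StatisticsSnapshot implements Serializable {
  private static final long serialVersionUID = 1L;

  // transient: the live map is rebuilt in readObject rather than
  // serialized directly.
  private transient Map<String, Long> counters = new ConcurrentHashMap<>();

  public synchronized void setCounter(String key, long value) {
    counters.put(key, value);
  }

  /** Copy under the lock so callers iterate a consistent snapshot. */
  public synchronized Map<String, Long> snapshot() {
    return new TreeMap<>(counters);
  }

  private synchronized void writeObject(ObjectOutputStream out)
      throws IOException {
    out.defaultWriteObject();
    // Save a sorted copy, decoupled from any concurrent updates.
    out.writeObject(new TreeMap<>(counters));
  }

  @SuppressWarnings("unchecked")
  private void readObject(ObjectInputStream in)
      throws IOException, ClassNotFoundException {
    in.defaultReadObject();
    counters = new ConcurrentHashMap<>((Map<String, Long>) in.readObject());
  }
}
```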
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]
Issue Time Tracking
-------------------
Worklog Id: (was: 479180)
Time Spent: 50m (was: 40m)
> Add public IOStatistics API; S3A to support
> -------------------------------------------
>
> Key: HADOOP-16830
> URL: https://issues.apache.org/jira/browse/HADOOP-16830
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: fs, fs/s3
> Affects Versions: 3.3.0
> Reporter: Steve Loughran
> Assignee: Steve Loughran
> Priority: Major
> Labels: pull-request-available
> Time Spent: 50m
> Remaining Estimate: 0h
>
> Applications like to collect statistics on the specific operations which
> take place, by collecting exactly those operations performed during the
> execution of FS API calls by their individual worker threads, and returning
> these to their job driver
> * S3A has a statistics API for some streams, but it's a non-standard one;
> Impala &c can't use it
> * FileSystem storage statistics are public, but as they aren't cross-thread,
> they don't aggregate properly
> Proposed
> # A new IOStatistics interface to serve up statistics
> # S3A to implement
> # other stores to follow
> # Pass-through from the usual wrapper classes (FS data input/output streams)
> It's hard to think about how best to offer an API for operation context
> stats, and how to actually implement.
> ThreadLocal isn't enough, because the helper threads need to update the
> thread-local value of the instigating thread
> My initial PoC doesn't address that issue, but it shows what I'm thinking of
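A sketch of the thread-local problem described above, using hypothetical `IOContext`/`ContextPassing` names: a worker thread's own `ThreadLocal.get()` would create a fresh, unrelated context, so the instigating thread has to capture its context object and hand it to the workers explicitly.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical sketch: statistics context passed explicitly to worker
// threads, because ThreadLocal values are per-thread and a worker never
// sees the instigator's copy.
public class ContextPassing {
  /** Shared, thread-safe context owned by the instigating thread. */
  static class IOContext {
    final AtomicLong bytesRead = new AtomicLong();
  }

  static final ThreadLocal<IOContext> CONTEXT =
      ThreadLocal.withInitial(IOContext::new);

  public static long runWorker() throws InterruptedException {
    IOContext instigator = CONTEXT.get();  // capture on the calling thread
    ExecutorService pool = Executors.newSingleThreadExecutor();
    pool.execute(() -> {
      // CONTEXT.get() here would create a fresh, unrelated context;
      // updating the captured reference reaches the instigator instead.
      instigator.bytesRead.addAndGet(4096);
    });
    pool.shutdown();
    pool.awaitTermination(5, TimeUnit.SECONDS);
    return instigator.bytesRead.get();
  }
}
```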
--
This message was sent by Atlassian Jira
(v8.3.4#803005)
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]