On 11/26/2018 1:39 PM, Viresh Kumar wrote:
Hi,

This series adds performance state propagation support in genpd core.
The propagation happens from the sub-domains to their masters. More
details can be found in the individual commit logs.
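
For readers new to this area, here is a minimal sketch of the
propagation idea. All names in it (struct pd, struct link, xlate(),
pd_update()) are made up for illustration; the real implementation
lives in drivers/base/power/domain.c and uses
dev_pm_opp_xlate_performance_state() for the translation step:

/*
 * Minimal sketch, not the actual genpd code: each domain applies the
 * max of what its own devices and its sub-domains require, and any
 * change is pushed up to the master (translated into the master's
 * scale) before the domain programs itself.
 */
struct pd;

struct link {
        struct pd *child;
        unsigned int child_req;   /* child's need, in parent's scale */
};

struct pd {
        unsigned int state;       /* currently applied pstate */
        unsigned int dev_req;     /* max pstate requested by devices */
        struct link *children;    /* requirements from sub-domains */
        int nr_children;
        struct pd *master;        /* single master, for simplicity */
        struct link *uplink;      /* our entry in master->children[] */
};

/*
 * Hypothetical: map a pstate from pd's scale to its master's scale.
 * The series does this with dev_pm_opp_xlate_performance_state().
 */
unsigned int xlate(struct pd *pd, unsigned int state);

static int pd_update(struct pd *pd)
{
        unsigned int state = pd->dev_req;
        int i, ret;

        /* Aggregate: max of device requests and sub-domain needs */
        for (i = 0; i < pd->nr_children; i++)
                if (pd->children[i].child_req > state)
                        state = pd->children[i].child_req;

        if (state == pd->state)
                return 0;

        /* Propagate upwards first, so the master's constraint holds
         * by the time this domain raises its own state. */
        if (pd->master) {
                pd->uplink->child_req = xlate(pd, state);
                ret = pd_update(pd->master);
                if (ret)
                        return ret;
        }

        pd->state = state;        /* program the hardware here */
        return 0;
}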

This was tested on hikey960 by faking power domains in such a way that
the CPU devices have two power domains, both of which share the same
master domain. Both the CPU device and its power domains have the
"required-opps" property set, and the performance requirement from the
CPU eventually configures all the domains (2 sub-domains and 1 master).

I validated this using the rpmh powerdomain driver [1], where I had to
model a relationship across the cx and mx powerdomains so that mx is
always >= cx (a sketch of the translation involved follows the link
below). It seems to work as expected; I will respin the rpmh
powerdomain patches soon (though that series is still awaiting Rob's
review/ack for the corner bindings).

Tested-by: Rajendra Nayak <[email protected]>

[1] https://patchwork.ozlabs.org/cover/935289/
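
For illustration, a minimal sketch of how such an mx >= cx
relationship can be expressed with the new helper. The function and
variable names (cx_to_mx(), cx_opp_table, mx_opp_table, mx_genpd) are
made up; only dev_pm_opp_xlate_performance_state() and the
set_performance_state() callback come from the actual code, and in the
series the genpd core performs this translation internally rather than
a driver calling it by hand:

#include <linux/pm_domain.h>
#include <linux/pm_opp.h>

/*
 * Translate a cx pstate into the mx scale and ask mx for at least
 * that much. When no required-opps mapping exists between the two
 * tables, the helper returns the pstate unchanged, which is exactly
 * the 1:1 "mx >= cx" case described above.
 */
static int cx_to_mx(struct opp_table *cx_opp_table,
                    struct opp_table *mx_opp_table,
                    struct generic_pm_domain *mx_genpd,
                    unsigned int cx_pstate)
{
        int mx_pstate;

        mx_pstate = dev_pm_opp_xlate_performance_state(cx_opp_table,
                                                       mx_opp_table,
                                                       cx_pstate);
        if (mx_pstate < 0)
                return mx_pstate;

        return mx_genpd->set_performance_state(mx_genpd, mx_pstate);
}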


Based on the opp/linux-next branch (which is 4.20-rc1 +
multiple-power-domain-support-in-opp-core + some OPP fixes).

V1->V2:
- First patch (1/5) is new and improves on the existing code.
- Move the genpd_status_on() check to _genpd_reeval_performance_state()
   from _genpd_set_performance_state().
- Improve dev_pm_opp_xlate_performance_state() to handle a 1:1 pstate
   mapping between a genpd and its master, and to fix a problem while
   finding the dst_table.
- Handle the pstate=0 case properly (this and the 1:1 mapping are
   sketched below).
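
For reference, a rough sketch of the last two items, mirroring the
behaviour described above rather than copying the patch; the struct
and find_required_pstate() below are hypothetical stand-ins for the
internal opp_table and the required-opps lookup:

/* Hypothetical, pared-down stand-in for the internal opp_table */
struct opp_table_sketch {
        int required_opp_count;   /* 0 when no required-opps are set */
};

/* Hypothetical lookup of dst's pstate via the required-opps link */
int find_required_pstate(struct opp_table_sketch *src,
                         struct opp_table_sketch *dst,
                         unsigned int pstate);

static int xlate_pstate_sketch(struct opp_table_sketch *src,
                               struct opp_table_sketch *dst,
                               unsigned int pstate)
{
        /* pstate == 0 means "no requirement": nothing to translate */
        if (!pstate)
                return 0;

        /*
         * A genpd and its master may share a 1:1 pstate numbering, in
         * which case neither table has "required-opps" set and the
         * value passes through unchanged.
         */
        if (!src->required_opp_count)
                return pstate;

        return find_required_pstate(src, dst, pstate);
}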

--
viresh

Viresh Kumar (5):
   OPP: Improve _find_table_of_opp_np()
   OPP: Add dev_pm_opp_xlate_performance_state() helper
   PM / Domains: Save OPP table pointer in genpd
   PM / Domains: Factorize dev_pm_genpd_set_performance_state()
   PM / Domains: Propagate performance state updates

  drivers/base/power/domain.c | 211 +++++++++++++++++++++++++++---------
  drivers/opp/core.c          |  59 ++++++++++
  drivers/opp/of.c            |  14 ++-
  include/linux/pm_domain.h   |   6 +
  include/linux/pm_opp.h      |   7 ++
  5 files changed, 244 insertions(+), 53 deletions(-)
