PSCI: Introduce new platform interface to describe topology

This patch removes the assumption in the current PSCI implementation that MPIDR
based affinity levels map directly to levels in a power domain tree. This
enables PSCI generic code to support complex power domain topologies as
envisaged by the PSCI v1.0 specification. The platform interface for querying
the power domain topology has been changed such that:

1. The generic PSCI code no longer generates MPIDRs to query the platform
   about the number of power domains at a particular power level. The
   platform now provides a description of the power domain tree on the SoC
   through a data structure. The existing platform APIs to provide the same
   information have been removed.

2. The linear indices returned by plat_core_pos_by_mpidr() and
   plat_my_core_pos() are used to retrieve core power domain nodes from the
   power domain tree. Power domains above the core level are accessed using a
   'parent' field in the tree node descriptors.

The platform describes the power domain tree in an array of 'unsigned
char's. The first entry in the array specifies the number of power domains at
the highest power level implemented in the system. Each subsequent entry
corresponds to a power domain and contains the number of power domains that are
its direct children. This array is exported to the generic PSCI implementation
via the new `plat_get_power_domain_tree_desc()` platform API.

The PSCI generic code uses this array to populate its internal power domain tree
using a breadth-first-search-like algorithm. The tree is split into two
arrays:

1. An array that contains all the core power domain nodes

2. An array that contains all the other power domain nodes

A separate array for core nodes allows certain core-specific optimisations to
be implemented, e.g. removing the bakery lock and re-using the per-cpu data
framework to store some information.

Entries in the core power domain array are allocated such that the
array index of the domain is equal to the linear index returned by
plat_core_pos_by_mpidr() and plat_my_core_pos() for the MPIDR
corresponding to that domain. This relationship is key to being able to use
an MPIDR to find the corresponding core power domain node, traverse to higher
power domain nodes, and index into arrays that contain core-specific
information.

An introductory document has been added to briefly describe the new interface.

Change-Id: I4b444719e8e927ba391cae48a23558308447da13
Soby Mathew 2015-04-08 17:42:06 +01:00 committed by Achin Gupta
parent 12d0d00d1e
commit 82dcc03981
11 changed files with 785 additions and 688 deletions

docs/psci-pd-tree.md (new file, 295 lines)

@@ -0,0 +1,295 @@
------------
Requirements
------------
1. A platform must export the `plat_get_aff_count()` and
`plat_get_aff_state()` APIs to enable the generic PSCI code to
populate a tree that describes the hierarchy of power domains in the
system. This approach is inflexible because a change to the topology
requires a change in the code.
It would be much simpler for the platform to describe its power domain tree
in a data structure.
2. The generic PSCI code generates MPIDRs in order to populate the power domain
tree. It also uses an MPIDR to find a node in the tree. The assumption that
a platform will use exactly the same MPIDRs as generated by the generic PSCI
code is not scalable. The use of an MPIDR also restricts the number of
levels in the power domain tree to four.
Therefore, there is a need to decouple allocation of MPIDRs from the
mechanism used to populate the power domain topology tree.
3. The current arrangement of the power domain tree requires a binary search
over the sibling nodes at a particular level to find a specified power
domain node. During a power management operation, the tree is traversed from
a 'start' to an 'end' power level. The binary search is required to find the
node at each level. The natural way to perform this traversal is to
start from a leaf node and follow the parent node pointer to reach the end
level.
Therefore, there is a need to define data structures that implement the tree in
a way which facilitates such a traversal.
4. The attributes of a core power domain differ from the attributes of power
domains at higher levels. For example, only a core power domain can be identified
using an MPIDR. There is no requirement to perform state coordination while
performing a power management operation on the core power domain.
Therefore, there is a need to implement the tree in a way which facilitates this
distinction between a leaf and non-leaf node and any associated
optimizations.
------
Design
------
### Describing a power domain tree
To fulfill requirement 1., the existing platform APIs
`plat_get_aff_count()` and `plat_get_aff_state()` have been
removed. A platform must define an array of unsigned chars such that:
1. The first entry in the array specifies the number of power domains at the
highest power level implemented in the platform. This caters for platforms
where the power domain tree does not have a single root node, for example,
the FVP has two cluster power domains at the highest level (level 1).
2. Each subsequent entry corresponds to a power domain and contains the number
of power domains that are its direct children.
3. The size of the array minus the first entry will be equal to the number of
non-leaf power domains.
4. The value in each entry in the array is used to find the number of entries
to consider at the next level. The sum of the values (number of children) of
all the entries at a level specifies the number of entries in the array for
the next level.
The following example power domain topology tree will be used to describe the
above text further. The leaf and non-leaf nodes in this tree have been numbered
separately.
```
+-+
|0|
+-+
/ \
/ \
/ \
/ \
/ \
/ \
/ \
/ \
/ \
/ \
+-+ +-+
|1| |2|
+-+ +-+
/ \ / \
/ \ / \
/ \ / \
/ \ / \
+-+ +-+ +-+ +-+
|3| |4| |5| |6|
+-+ +-+ +-+ +-+
+---+-----+ +----+----+ +----+----+ +----+-----+-----+
| | | | | | | | | | | | |
| | | | | | | | | | | | |
v v v v v v v v v v v v v
+-+ +-+ +-+ +-+ +-+ +-+ +-+ +-+ +-+ +-+ +--+ +--+ +--+
|0| |1| |2| |3| |4| |5| |6| |7| |8| |9| |10| |11| |12|
+-+ +-+ +-+ +-+ +-+ +-+ +-+ +-+ +-+ +-+ +--+ +--+ +--+
```
This tree is defined by the platform as the array described above as follows:
```
#define PLAT_NUM_POWER_DOMAINS        20
#define PLATFORM_CORE_COUNT           13
#define PSCI_NUM_NON_CPU_PWR_DOMAINS  \
        (PLAT_NUM_POWER_DOMAINS - PLATFORM_CORE_COUNT)

unsigned char plat_power_domain_tree_desc[] = { 1, 2, 2, 2, 3, 3, 3, 4 };
```
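The rules above can be sketched as a small validation routine. `count_domains()` is a hypothetical helper, not part of the firmware: it walks the descriptor level by level, using rule 4 to derive the width of each level from the previous one.

```c
#include <assert.h>

/* Hypothetical helper (not part of the firmware): walk a power domain
 * tree descriptor level by level. Per rule 4, the sum of the child
 * counts at one level gives the number of entries at the next level;
 * once the descriptor is exhausted, that sum is the core count. */
static int count_domains(const unsigned char *desc, unsigned int len,
                         unsigned int *num_non_leaf, unsigned int *num_cores)
{
    unsigned int pos = 1;                /* desc[0] consumed below */
    unsigned int level_width = desc[0];  /* domains at the highest level */

    *num_non_leaf = 0;

    while (pos + level_width <= len) {
        unsigned int next_width = 0, i;

        /* Entries pos .. pos+level_width-1 describe this level */
        *num_non_leaf += level_width;
        for (i = 0; i < level_width; i++)
            next_width += desc[pos + i];
        pos += level_width;
        level_width = next_width;
    }

    if (pos != len)                      /* malformed descriptor */
        return -1;
    *num_cores = level_width;            /* children of the last level */
    return 0;
}
```

For the example descriptor this yields 7 non-leaf domains and 13 cores, consistent with `PLAT_NUM_POWER_DOMAINS` and `PLATFORM_CORE_COUNT` above.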
### Removing assumptions about MPIDRs used in a platform
To fulfill requirement 2., it is assumed that the platform assigns a
unique number (core index) between `0` and `PLATFORM_CORE_COUNT - 1` to each core
power domain. MPIDRs could be allocated in any manner and will not be used to
populate the tree.
`plat_core_pos_by_mpidr(mpidr)` will return the core index for the core
corresponding to the MPIDR. It will return an error (-1) if an MPIDR is passed
which is not allocated or corresponds to an absent core. The semantics of this
platform API have changed since it is required to validate the passed MPIDR. It
has been made a mandatory API as a result.
Another mandatory API, `plat_my_core_pos()` has been added to return the core
index for the calling core. This API provides a more lightweight mechanism to get
the index since there is no need to validate the MPIDR of the calling core.
The platform should assign the core indices (as illustrated in the diagram above)
such that, if the core nodes are numbered from left to right, then the index
for a core domain will be the same as the index returned by
`plat_core_pos_by_mpidr()` or `plat_my_core_pos()` for that core. This
relationship allows the core nodes to be allocated in a separate array
(requirement 4.) during `psci_setup()` in such an order that the index of the
core in the array is the same as the return value from these APIs.
#### Dealing with holes in MPIDR allocation
For platforms where the number of allocated MPIDRs is equal to the number of
core power domains, for example, Juno and FVPs, the logic to convert an MPIDR to
a core index should remain unchanged. Both Juno and FVP use a simple
collision-proof hash function to do this.
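As a sketch, such a hash might look like the following. The shift, mask, and cluster/core counts are illustrative values, not the real Juno or FVP constants.

```c
#include <assert.h>

/* Sketch of a simple collision-proof MPIDR-to-core-index conversion in
 * the style used by Juno and the FVPs. The field layout and the
 * cluster/core counts below are illustrative, not real platform values. */
#define AFF0_MASK          0xffUL   /* core id within a cluster */
#define AFF1_SHIFT         8        /* cluster id field position */
#define NUM_CLUSTERS       2
#define CPUS_PER_CLUSTER   4

static int core_pos_by_mpidr_sketch(unsigned long mpidr)
{
    unsigned long cpu = mpidr & AFF0_MASK;
    unsigned long cluster = (mpidr >> AFF1_SHIFT) & AFF0_MASK;

    /* Reject MPIDRs that do not correspond to a present core */
    if (cluster >= NUM_CLUSTERS || cpu >= CPUS_PER_CLUSTER)
        return -1;

    /* Contiguous MPIDRs map collision-free onto 0..CORE_COUNT-1 */
    return (int)(cluster * CPUS_PER_CLUSTER + cpu);
}
```

Because the MPIDR allocation is contiguous, the function is trivially collision-free and doubles as the validation required of `plat_core_pos_by_mpidr()`.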
It is possible that on some platforms, the allocation of MPIDRs is not
contiguous or certain cores have been disabled. This essentially means that the
MPIDRs have been sparsely allocated, that is, the size of the range of MPIDRs
used by the platform is not equal to the number of core power domains.
The platform could adopt one of the following approaches to deal with this
scenario:
1. Implement more complex logic to convert a valid MPIDR to a core index while
maintaining the relationship described earlier. This means that the power
domain tree descriptor will not describe any core power domains which are
disabled or absent. Entries will not be allocated in the tree for these
domains.
2. Treat unallocated MPIDRs and disabled cores as absent but still describe them
in the power domain descriptor, that is, the number of core nodes described
is equal to the size of the range of MPIDRs allocated. This approach will
lead to memory wastage since entries will be allocated in the tree but will
allow use of a simpler logic to convert an MPIDR to a core index.
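The first approach can be sketched with a lookup table that simply omits absent cores. All MPIDR values and indices below are invented for illustration.

```c
#include <assert.h>

/* Illustrative implementation of approach 1: a lookup table maps each
 * valid (sparsely allocated) MPIDR to a core index, so disabled or
 * absent cores never get an entry in the power domain tree. The MPIDR
 * values and indices here are made up. */
static const struct mpidr_map_entry {
    unsigned long mpidr;
    int core_pos;
} mpidr_map[] = {
    { 0x000UL, 0 },  /* cluster 0, cpu 0 */
    { 0x001UL, 1 },  /* cluster 0, cpu 1 */
    /* 0x002 is a disabled core: deliberately absent from the map */
    { 0x100UL, 2 },  /* cluster 1, cpu 0 */
    { 0x101UL, 3 },  /* cluster 1, cpu 1 */
};

static int sparse_core_pos_by_mpidr(unsigned long mpidr)
{
    unsigned int i;

    for (i = 0; i < sizeof(mpidr_map) / sizeof(mpidr_map[0]); i++)
        if (mpidr_map[i].mpidr == mpidr)
            return mpidr_map[i].core_pos;
    return -1;  /* unallocated MPIDR or disabled core */
}
```

The table keeps the index range dense (0..3) at the cost of a linear search; a real platform might prefer a computed mapping if the holes follow a pattern.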
### Traversing through and distinguishing between core and non-core power domains
To fulfill requirements 3 and 4, separate data structures have been defined
to represent leaf and non-leaf power domain nodes in the tree.
```
/*******************************************************************************
* The following two data structures implement the power domain tree. The tree
* is used to track the state of all the nodes i.e. power domain instances
* described by the platform. The tree consists of nodes that describe CPU power
* domains i.e. leaf nodes and all other power domains which are parents of a
* CPU power domain i.e. non-leaf nodes.
******************************************************************************/
typedef struct non_cpu_pwr_domain_node {
    /*
     * Index of the first CPU power domain node level 0 which has this node
     * as its parent.
     */
    unsigned int cpu_start_idx;

    /*
     * Number of CPU power domains which are siblings of the domain indexed
     * by 'cpu_start_idx' i.e. all the domains in the range 'cpu_start_idx
     * -> cpu_start_idx + ncpus' have this node as their parent.
     */
    unsigned int ncpus;

    /* Index of the parent power domain node */
    unsigned int parent_node;

    -----
} non_cpu_pd_node_t;

typedef struct cpu_pwr_domain_node {
    unsigned long mpidr;

    /* Index of the parent power domain node */
    unsigned int parent_node;

    -----
} cpu_pd_node_t;
```
The power domain tree is implemented as a combination of the following data
structures.
```
non_cpu_pd_node_t psci_non_cpu_pd_nodes[PSCI_NUM_NON_CPU_PWR_DOMAINS];
cpu_pd_node_t psci_cpu_pd_nodes[PLATFORM_CORE_COUNT];
```
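To illustrate the traversal that requirement 3 asks for, the following sketch uses simplified copies of the node types, pre-populated with the example tree's parent indices, and walks from a core node towards the root by following `parent_node`.

```c
#include <assert.h>

/* Simplified copies of the node types above, pre-populated with the
 * parent indices of the example tree, to show how a core's ancestors
 * are found by following 'parent_node' from a leaf upwards. */
typedef struct { unsigned int parent_node; } cpu_node_t;
typedef struct { int parent_node; } non_cpu_node_t;

static const cpu_node_t cpu_nodes[13] = {
    {3}, {3}, {3}, {4}, {4}, {4}, {5}, {5}, {5}, {6}, {6}, {6}, {6}
};
static const non_cpu_node_t non_cpu_nodes[7] = {
    {-1}, {0}, {0}, {1}, {1}, {2}, {2}
};

/* Collect a core's ancestors from level 1 up to end_lvl;
 * node_index[0] receives the level 1 parent, and so on. */
static void get_parent_nodes(unsigned int cpu_idx, int end_lvl,
                             unsigned int node_index[])
{
    unsigned int parent = cpu_nodes[cpu_idx].parent_node;
    int lvl;

    for (lvl = 1; lvl <= end_lvl; lvl++) {
        node_index[lvl - 1] = parent;
        parent = (unsigned int)non_cpu_nodes[parent].parent_node;
    }
}
```

No search over siblings is needed at any level; each step is a single array index, which is exactly what the two-array layout is designed to enable.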
### Populating the power domain tree
The `populate_power_domain_tree()` function in `psci_setup.c` implements the
algorithm to parse the power domain descriptor exported by the platform to
populate the two arrays. It is essentially a breadth-first-search. The nodes for
each level starting from the root are laid out one after another in the
`psci_non_cpu_pd_nodes` and `psci_cpu_pd_nodes` arrays as follows:
```
psci_non_cpu_pd_nodes -> [[Level 3 nodes][Level 2 nodes][Level 1 nodes]]
psci_cpu_pd_nodes -> [Level 0 nodes]
```
For the example power domain tree illustrated above, the `psci_cpu_pd_nodes`
will be populated as follows. The value in each entry is the index of the parent
node. Other fields have been ignored for simplicity.
```
+-------------+ ^
CPU0 | 3 | |
+-------------+ |
CPU1 | 3 | |
+-------------+ |
CPU2 | 3 | |
+-------------+ |
CPU3 | 4 | |
+-------------+ |
CPU4 | 4 | |
+-------------+ |
CPU5 | 4 | | PLATFORM_CORE_COUNT
+-------------+ |
CPU6 | 5 | |
+-------------+ |
CPU7 | 5 | |
+-------------+ |
CPU8 | 5 | |
+-------------+ |
CPU9 | 6 | |
+-------------+ |
CPU10 | 6 | |
+-------------+ |
CPU11 | 6 | |
+-------------+ |
CPU12 | 6 | v
+-------------+
```
The `psci_non_cpu_pd_nodes` array will be populated as follows. The value in
each entry is the index of the parent node.
```
+-------------+ ^
PD0 | -1 | |
+-------------+ |
PD1 | 0 | |
+-------------+ |
PD2 | 0 | |
+-------------+ |
PD3 | 1 | | PLAT_NUM_POWER_DOMAINS -
+-------------+ | PLATFORM_CORE_COUNT
PD4 | 1 | |
+-------------+ |
PD5 | 2 | |
+-------------+ |
PD6 | 2 | |
+-------------+ v
```
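The parse that produces these two tables can be sketched as follows. `populate_tree()` is a simplified stand-in for `populate_power_domain_tree()` that records only each node's parent index, into arrays sized for the example tree.

```c
#include <assert.h>

/* Sizes taken from the example tree above */
#define NUM_NON_CPU_SKETCH 7
#define NUM_CPUS_SKETCH    13

static int non_cpu_parent[NUM_NON_CPU_SKETCH];
static int cpu_parent[NUM_CPUS_SKETCH];

/* Breadth-first sketch of the descriptor parse: nodes of each level
 * are laid out one after another, recording each node's parent. */
static void populate_tree(const unsigned char *desc, unsigned int desc_len)
{
    unsigned int width = desc[0];   /* nodes at the current level  */
    unsigned int desc_idx = 1;      /* next child count to read    */
    unsigned int node_idx = 0;      /* next non-leaf node to place */
    unsigned int cpu_idx = 0;       /* next core node to place     */
    unsigned int level_first = 0;   /* first node of current level */
    unsigned int i, c;

    /* Nodes at the highest level have no parent */
    for (i = 0; i < width; i++)
        non_cpu_parent[node_idx++] = -1;

    while (desc_idx < desc_len) {
        /* If this level's child counts are the last descriptor
         * entries, the children are core power domains. */
        int children_are_cpus = (desc_idx + width == desc_len);
        unsigned int next_first = node_idx;

        for (i = 0; i < width; i++) {
            for (c = 0; c < desc[desc_idx]; c++) {
                if (children_are_cpus)
                    cpu_parent[cpu_idx++] = (int)(level_first + i);
                else
                    non_cpu_parent[node_idx++] = (int)(level_first + i);
            }
            desc_idx++;
        }
        level_first = next_first;
        width = node_idx - next_first;
    }
}
```

Running this over `{ 1, 2, 2, 2, 3, 3, 3, 4 }` reproduces the parent columns shown in the two diagrams above.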
Each core can find its node in the `psci_cpu_pd_nodes` array using the
`plat_my_core_pos()` function. When a core is turned on, the normal world
provides an MPIDR. The `plat_core_pos_by_mpidr()` function is used to validate
the MPIDR before using it to find the corresponding core node. The non-core power
domain nodes do not need to be identified.


@@ -43,6 +43,19 @@
#define PSCI_NUM_PWR_DOMAINS (2 * PLATFORM_CORE_COUNT)
#endif
#define PSCI_NUM_NON_CPU_PWR_DOMAINS (PSCI_NUM_PWR_DOMAINS - \
PLATFORM_CORE_COUNT)
/* This is the power level corresponding to a CPU */
#define PSCI_CPU_PWR_LVL 0
/*
* The maximum power level supported by PSCI. Since PSCI CPU_SUSPEND
* uses the old power_state parameter format which has 2 bits to specify the
* power level, this constant is defined to be 3.
*/
#define PSCI_MAX_PWR_LVL 3
/*******************************************************************************
* Defines for runtime services func ids
******************************************************************************/
@@ -137,16 +150,11 @@
#define PSCI_E_NOT_PRESENT -7
#define PSCI_E_DISABLED -8
/*******************************************************************************
* PSCI power domain state related constants. A power domain instance could
* be present or absent physically to cater for asymmetric topologies. If
* present then it could be in one of the 4 further defined states.
******************************************************************************/
#define PSCI_STATE_SHIFT 1
#define PSCI_STATE_MASK 0xff
#define PSCI_INVALID_MPIDR ~(0ULL)
#define PSCI_PWR_DOMAIN_ABSENT 0x0
#define PSCI_PWR_DOMAIN_PRESENT 0x1
/*******************************************************************************
* PSCI power domain state related constants.
******************************************************************************/
#define PSCI_STATE_ON 0x0
#define PSCI_STATE_OFF 0x1
#define PSCI_STATE_ON_PENDING 0x2
@@ -170,9 +178,10 @@
* this information will not reside on a cache line shared with another cpu.
******************************************************************************/
typedef struct psci_cpu_data {
uint32_t power_state;
uint32_t power_state; /* The power state from CPU_SUSPEND */
unsigned char psci_state; /* The state of this CPU as seen by PSCI */
#if !USE_COHERENT_MEM
bakery_info_t pcpu_bakery_info[PSCI_NUM_PWR_DOMAINS];
bakery_info_t pcpu_bakery_info[PSCI_NUM_NON_CPU_PWR_DOMAINS];
#endif
} psci_cpu_data_t;
@@ -230,7 +239,7 @@ void __dead2 psci_power_down_wfi(void);
void psci_cpu_on_finish_entry(void);
void psci_cpu_suspend_finish_entry(void);
void psci_register_spd_pm_hook(const spd_pm_ops_t *);
int psci_get_suspend_stateid_by_mpidr(unsigned long);
int psci_get_suspend_stateid_by_idx(unsigned long);
int psci_get_suspend_stateid(void);
int psci_get_suspend_pwrlvl(void);


@@ -183,8 +183,7 @@ struct entry_point_info *bl31_plat_get_next_image_ep_info(uint32_t type);
* Mandatory PSCI functions (BL3-1)
******************************************************************************/
int platform_setup_pm(const struct plat_pm_ops **);
unsigned int plat_get_pwr_domain_count(unsigned int, unsigned long);
unsigned int plat_get_pwr_domain_state(unsigned int, unsigned long);
const unsigned char *plat_get_power_domain_tree_desc(void);
/*******************************************************************************
* Optional BL3-1 functions (may be overridden)


@@ -46,16 +46,20 @@
const spd_pm_ops_t *psci_spd_pm;
/*******************************************************************************
* Grand array that holds the platform's topology information for state
* management of power domain instances. Each node (pwr_map_node) in the array
* corresponds to a power domain instance e.g. cluster, cpu within an mpidr
* Arrays that hold the platform's power domain tree information for state
* management of power domains.
* Each node in the array 'psci_non_cpu_pd_nodes' corresponds to a power domain
* which is an ancestor of a CPU power domain.
* Each node in the array 'psci_cpu_pd_nodes' corresponds to a cpu power domain
******************************************************************************/
pwr_map_node_t psci_pwr_domain_map[PSCI_NUM_PWR_DOMAINS]
non_cpu_pd_node_t psci_non_cpu_pd_nodes[PSCI_NUM_NON_CPU_PWR_DOMAINS]
#if USE_COHERENT_MEM
__attribute__ ((section("tzfw_coherent_mem")))
#endif
;
cpu_pd_node_t psci_cpu_pd_nodes[PLATFORM_CORE_COUNT];
/*******************************************************************************
* Pointer to functions exported by the platform to complete power mgmt. ops
******************************************************************************/
@@ -64,29 +68,31 @@ const plat_pm_ops_t *psci_plat_pm_ops;
/*******************************************************************************
* Check that the maximum power level supported by the platform makes sense
* ****************************************************************************/
CASSERT(PLAT_MAX_PWR_LVL <= MPIDR_MAX_AFFLVL && \
PLAT_MAX_PWR_LVL >= MPIDR_AFFLVL0, \
CASSERT(PLAT_MAX_PWR_LVL <= PSCI_MAX_PWR_LVL && \
PLAT_MAX_PWR_LVL >= PSCI_CPU_PWR_LVL, \
assert_platform_max_pwrlvl_check);
/*******************************************************************************
* This function is passed an array of pointers to power domain nodes in the
* topology tree for an mpidr. It iterates through the nodes to find the
* highest power level where the power domain is marked as physically powered
* off.
* This function is passed a cpu_index and the highest level in the topology
* tree. It iterates through the nodes to find the highest power level at which
* a domain is physically powered off.
******************************************************************************/
uint32_t psci_find_max_phys_off_pwrlvl(uint32_t start_pwrlvl,
uint32_t end_pwrlvl,
pwr_map_node_t *mpidr_nodes[])
uint32_t psci_find_max_phys_off_pwrlvl(uint32_t end_pwrlvl,
unsigned int cpu_idx)
{
uint32_t max_pwrlvl = PSCI_INVALID_DATA;
int max_pwrlvl, level;
unsigned int parent_idx = psci_cpu_pd_nodes[cpu_idx].parent_node;
for (; start_pwrlvl <= end_pwrlvl; start_pwrlvl++) {
if (mpidr_nodes[start_pwrlvl] == NULL)
continue;
if (psci_get_phys_state(cpu_idx, PSCI_CPU_PWR_LVL) != PSCI_STATE_OFF)
return PSCI_INVALID_DATA;
if (psci_get_phys_state(mpidr_nodes[start_pwrlvl]) ==
PSCI_STATE_OFF)
max_pwrlvl = start_pwrlvl;
max_pwrlvl = PSCI_CPU_PWR_LVL;
for (level = PSCI_CPU_PWR_LVL + 1; level <= end_pwrlvl; level++) {
if (psci_get_phys_state(parent_idx, level) == PSCI_STATE_OFF)
max_pwrlvl = level;
parent_idx = psci_non_cpu_pd_nodes[parent_idx].parent_node;
}
return max_pwrlvl;
@@ -103,21 +109,14 @@ unsigned int psci_is_last_on_cpu(void)
unsigned long mpidr = read_mpidr_el1() & MPIDR_AFFINITY_MASK;
unsigned int i;
for (i = psci_pwr_lvl_limits[MPIDR_AFFLVL0].min;
i <= psci_pwr_lvl_limits[MPIDR_AFFLVL0].max; i++) {
assert(psci_pwr_domain_map[i].level == MPIDR_AFFLVL0);
if (!(psci_pwr_domain_map[i].state & PSCI_AFF_PRESENT))
continue;
if (psci_pwr_domain_map[i].mpidr == mpidr) {
assert(psci_get_state(&psci_pwr_domain_map[i])
for (i = 0; i < PLATFORM_CORE_COUNT; i++) {
if (psci_cpu_pd_nodes[i].mpidr == mpidr) {
assert(psci_get_state(i, PSCI_CPU_PWR_LVL)
== PSCI_STATE_ON);
continue;
}
if (psci_get_state(&psci_pwr_domain_map[i]) != PSCI_STATE_OFF)
if (psci_get_state(i, PSCI_CPU_PWR_LVL) != PSCI_STATE_OFF)
return 0;
}
@@ -135,18 +134,12 @@ int get_power_on_target_pwrlvl(void)
#if DEBUG
unsigned int state;
pwr_map_node_t *node;
/* Retrieve our node from the topology tree */
node = psci_get_pwr_map_node(read_mpidr_el1() & MPIDR_AFFINITY_MASK,
MPIDR_AFFLVL0);
assert(node);
/*
* Sanity check the state of the cpu. It should be either suspend or "on
* pending"
*/
state = psci_get_state(node);
state = psci_get_state(plat_my_core_pos(), PSCI_CPU_PWR_LVL);
assert(state == PSCI_STATE_SUSPEND || state == PSCI_STATE_ON_PENDING);
#endif
@@ -163,103 +156,74 @@ int get_power_on_target_pwrlvl(void)
}
/*******************************************************************************
* Simple routine to set the id of a power domain instance at a given level
* in the mpidr. The assumption is that the affinity level and the power
* level are the same.
* PSCI helper function to get the parent nodes corresponding to a cpu_index.
******************************************************************************/
unsigned long mpidr_set_pwr_domain_inst(unsigned long mpidr,
unsigned char pwr_inst,
int pwr_lvl)
void psci_get_parent_pwr_domain_nodes(unsigned int cpu_idx,
int end_lvl,
unsigned int node_index[])
{
unsigned long aff_shift;
unsigned int parent_node = psci_cpu_pd_nodes[cpu_idx].parent_node;
int i;
assert(pwr_lvl <= MPIDR_AFFLVL3);
/*
* Decide the number of bits to shift by depending upon
* the power level
*/
aff_shift = get_afflvl_shift(pwr_lvl);
/* Clear the existing affinity instance & set the new one*/
mpidr &= ~(((unsigned long)MPIDR_AFFLVL_MASK) << aff_shift);
mpidr |= ((unsigned long)pwr_inst) << aff_shift;
return mpidr;
}
/*******************************************************************************
* This function sanity checks a range of power levels.
******************************************************************************/
int psci_check_pwrlvl_range(int start_pwrlvl, int end_pwrlvl)
{
/* Sanity check the parameters passed */
if (end_pwrlvl > PLAT_MAX_PWR_LVL)
return PSCI_E_INVALID_PARAMS;
if (start_pwrlvl < MPIDR_AFFLVL0)
return PSCI_E_INVALID_PARAMS;
if (end_pwrlvl < start_pwrlvl)
return PSCI_E_INVALID_PARAMS;
return PSCI_E_SUCCESS;
}
/*******************************************************************************
* This function is passed an array of pointers to power domain nodes in the
* topology tree for an mpidr and the state which each node should transition
* to. It updates the state of each node between the specified power levels.
******************************************************************************/
void psci_do_state_coordination(uint32_t start_pwrlvl,
uint32_t end_pwrlvl,
pwr_map_node_t *mpidr_nodes[],
uint32_t state)
{
uint32_t level;
for (level = start_pwrlvl; level <= end_pwrlvl; level++) {
if (mpidr_nodes[level] == NULL)
continue;
psci_set_state(mpidr_nodes[level], state);
for (i = PSCI_CPU_PWR_LVL + 1; i <= end_lvl; i++) {
*node_index++ = parent_node;
parent_node = psci_non_cpu_pd_nodes[parent_node].parent_node;
}
}
/*******************************************************************************
* This function is passed an array of pointers to power domain nodes in the
* topology tree for an mpidr. It picks up locks for each power level bottom
* up in the range specified.
* This function is passed a cpu_index and the highest level in the topology
* tree and the state which each node should transition to. It updates the
* state of each node between the specified power levels.
******************************************************************************/
void psci_acquire_pwr_domain_locks(int start_pwrlvl,
int end_pwrlvl,
pwr_map_node_t *mpidr_nodes[])
void psci_do_state_coordination(int end_pwrlvl,
unsigned int cpu_idx,
uint32_t state)
{
int level;
unsigned int parent_idx = psci_cpu_pd_nodes[cpu_idx].parent_node;
psci_set_state(cpu_idx, state, PSCI_CPU_PWR_LVL);
for (level = start_pwrlvl; level <= end_pwrlvl; level++) {
if (mpidr_nodes[level] == NULL)
continue;
psci_lock_get(mpidr_nodes[level]);
for (level = PSCI_CPU_PWR_LVL + 1; level <= end_pwrlvl; level++) {
psci_set_state(parent_idx, state, level);
parent_idx = psci_non_cpu_pd_nodes[parent_idx].parent_node;
}
}
/*******************************************************************************
* This function is passed an array of pointers to power domain nodes in the
* topology tree for an mpidr. It releases the lock for each power level top
* down in the range specified.
* This function is passed a cpu_index and the highest level in the topology
* tree that the operation should be applied to. It picks up locks in order of
* increasing power domain level in the range specified.
******************************************************************************/
void psci_release_pwr_domain_locks(int start_pwrlvl,
int end_pwrlvl,
pwr_map_node_t *mpidr_nodes[])
void psci_acquire_pwr_domain_locks(int end_pwrlvl, unsigned int cpu_idx)
{
unsigned int parent_idx = psci_cpu_pd_nodes[cpu_idx].parent_node;
int level;
for (level = end_pwrlvl; level >= start_pwrlvl; level--) {
if (mpidr_nodes[level] == NULL)
continue;
/* No locking required for level 0. Hence start locking from level 1 */
for (level = PSCI_CPU_PWR_LVL + 1; level <= end_pwrlvl; level++) {
psci_lock_get(&psci_non_cpu_pd_nodes[parent_idx]);
parent_idx = psci_non_cpu_pd_nodes[parent_idx].parent_node;
}
}
psci_lock_release(mpidr_nodes[level]);
/*******************************************************************************
* This function is passed a cpu_index and the highest level in the topology
* tree that the operation should be applied to. It releases the locks in order
* of decreasing power domain level in the range specified.
******************************************************************************/
void psci_release_pwr_domain_locks(int end_pwrlvl, unsigned int cpu_idx)
{
unsigned int parent_idx, parent_nodes[PLAT_MAX_PWR_LVL] = {0};
int level;
/* Get the parent nodes */
psci_get_parent_pwr_domain_nodes(cpu_idx, end_pwrlvl, parent_nodes);
/* Unlock top down. No unlocking required for level 0. */
for (level = end_pwrlvl; level >= PSCI_CPU_PWR_LVL + 1; level--) {
parent_idx = parent_nodes[level - 1];
psci_lock_release(&psci_non_cpu_pd_nodes[parent_idx]);
}
}
@@ -332,21 +296,22 @@ int psci_get_ns_ep_info(entry_point_info_t *ep,
}
/*******************************************************************************
* This function takes a pointer to a power domain node in the topology tree
* and returns its state. State of a non-leaf node needs to be calculated.
* This function takes an index and level of a power domain node in the topology
* tree and returns its state. State of a non-leaf node needs to be calculated.
******************************************************************************/
unsigned short psci_get_state(pwr_map_node_t *node)
unsigned short psci_get_state(unsigned int idx,
int level)
{
#if !USE_COHERENT_MEM
flush_dcache_range((uint64_t) node, sizeof(*node));
#endif
assert(node->level >= MPIDR_AFFLVL0 && node->level <= MPIDR_MAX_AFFLVL);
/* A cpu node just contains the state which can be directly returned */
if (node->level == MPIDR_AFFLVL0)
return (node->state >> PSCI_STATE_SHIFT) & PSCI_STATE_MASK;
if (level == PSCI_CPU_PWR_LVL) {
flush_cpu_data_by_index(idx, psci_svc_cpu_data.psci_state);
return get_cpu_data_by_index(idx, psci_svc_cpu_data.psci_state);
}
#if !USE_COHERENT_MEM
flush_dcache_range((uint64_t) &psci_non_cpu_pd_nodes[idx],
sizeof(psci_non_cpu_pd_nodes[idx]));
#endif
/*
* For a power level higher than a cpu, the state has to be
* calculated. It depends upon the value of the reference count
@@ -355,35 +320,35 @@ unsigned short psci_get_state(pwr_map_node_t *node)
* count. If the reference count is 0 then the power level is
* OFF else ON.
*/
if (node->ref_count)
if (psci_non_cpu_pd_nodes[idx].ref_count)
return PSCI_STATE_ON;
else
return PSCI_STATE_OFF;
}
/*******************************************************************************
* This function takes a pointer to a power domain node in the topology
* tree and a target state. State of a non-leaf node needs to be converted
* to a reference count. State of a leaf node can be set directly.
* This function takes an index and level of a power domain node in the topology
* tree and a target state. State of a non-leaf node needs to be converted to
* a reference count. State of a leaf node can be set directly.
******************************************************************************/
void psci_set_state(pwr_map_node_t *node, unsigned short state)
void psci_set_state(unsigned int idx,
unsigned short state,
int level)
{
assert(node->level >= MPIDR_AFFLVL0 && node->level <= MPIDR_MAX_AFFLVL);
/*
* For a power level higher than a cpu, the state is used
* to decide whether the reference count is incremented or
* decremented. Entry into the ON_PENDING state does not have
* effect.
*/
if (node->level > MPIDR_AFFLVL0) {
if (level > PSCI_CPU_PWR_LVL) {
switch (state) {
case PSCI_STATE_ON:
node->ref_count++;
psci_non_cpu_pd_nodes[idx].ref_count++;
break;
case PSCI_STATE_OFF:
case PSCI_STATE_SUSPEND:
node->ref_count--;
psci_non_cpu_pd_nodes[idx].ref_count--;
break;
case PSCI_STATE_ON_PENDING:
/*
@@ -393,15 +358,16 @@ void psci_set_state(pwr_map_node_t *node, unsigned short state)
return;
default:
assert(0);
}
} else {
node->state &= ~(PSCI_STATE_MASK << PSCI_STATE_SHIFT);
node->state |= (state & PSCI_STATE_MASK) << PSCI_STATE_SHIFT;
}
#if !USE_COHERENT_MEM
flush_dcache_range((uint64_t) node, sizeof(*node));
flush_dcache_range((uint64_t) &psci_non_cpu_pd_nodes[idx],
sizeof(psci_non_cpu_pd_nodes[idx]));
#endif
}
} else {
set_cpu_data_by_index(idx, psci_svc_cpu_data.psci_state, state);
flush_cpu_data_by_index(idx, psci_svc_cpu_data.psci_state);
}
}
/*******************************************************************************
@@ -411,11 +377,12 @@ void psci_set_state(pwr_map_node_t *node, unsigned short state)
* tell whether that's actually happened or not. So we err on the side of
* caution & treat the power domain as being turned off.
******************************************************************************/
unsigned short psci_get_phys_state(pwr_map_node_t *node)
unsigned short psci_get_phys_state(unsigned int idx,
int level)
{
unsigned int state;
state = psci_get_state(node);
state = psci_get_state(idx, level);
return get_phys_state(state);
}
@@ -429,60 +396,41 @@ unsigned short psci_get_phys_state(pwr_map_node_t *node)
* coherency at the interconnect level in addition to gic cpu interface.
******************************************************************************/
void psci_power_up_finish(int end_pwrlvl,
pwrlvl_power_on_finisher_t pon_handler)
pwrlvl_power_on_finisher_t pon_handler)
{
mpidr_pwr_map_nodes_t mpidr_nodes;
int rc;
unsigned int cpu_idx = plat_my_core_pos();
unsigned int max_phys_off_pwrlvl;
/*
* Collect the pointers to the nodes in the topology tree for
* each power domain instances in the mpidr. If this function does
* not return successfully then either the mpidr or the power
* levels are incorrect. Either case is an irrecoverable error.
*/
rc = psci_get_pwr_map_nodes(read_mpidr_el1() & MPIDR_AFFINITY_MASK,
MPIDR_AFFLVL0,
end_pwrlvl,
mpidr_nodes);
if (rc != PSCI_E_SUCCESS)
panic();
/*
* This function acquires the lock corresponding to each power
* level so that by the time all locks are taken, the system topology
* is snapshot and state management can be done safely.
*/
psci_acquire_pwr_domain_locks(MPIDR_AFFLVL0,
end_pwrlvl,
mpidr_nodes);
psci_acquire_pwr_domain_locks(end_pwrlvl,
cpu_idx);
max_phys_off_pwrlvl = psci_find_max_phys_off_pwrlvl(MPIDR_AFFLVL0,
end_pwrlvl,
mpidr_nodes);
max_phys_off_pwrlvl = psci_find_max_phys_off_pwrlvl(end_pwrlvl,
cpu_idx);
assert(max_phys_off_pwrlvl != PSCI_INVALID_DATA);
/* Perform generic, architecture and platform specific handling */
pon_handler(mpidr_nodes, max_phys_off_pwrlvl);
pon_handler(cpu_idx, max_phys_off_pwrlvl);
/*
* This function updates the state of each power instance
* corresponding to the mpidr in the range of power levels
* corresponding to the cpu index in the range of power levels
* specified.
*/
psci_do_state_coordination(MPIDR_AFFLVL0,
end_pwrlvl,
mpidr_nodes,
PSCI_STATE_ON);
psci_do_state_coordination(end_pwrlvl,
cpu_idx,
PSCI_STATE_ON);
/*
* This loop releases the lock corresponding to each power level
* in the reverse order to which they were acquired.
*/
psci_release_pwr_domain_locks(MPIDR_AFFLVL0,
end_pwrlvl,
mpidr_nodes);
psci_release_pwr_domain_locks(end_pwrlvl,
cpu_idx);
}
 /*******************************************************************************
@@ -533,8 +481,8 @@ int psci_spd_migrate_info(uint64_t *mpidr)
 void psci_print_power_domain_map(void)
 {
 #if LOG_LEVEL >= LOG_LEVEL_INFO
-	pwr_map_node_t *node;
-	unsigned int idx;
+	unsigned int idx, state;
+
 	/* This array maps to the PSCI_STATE_X definitions in psci.h */
 	static const char *psci_state_str[] = {
 		"ON",
@@ -544,14 +492,20 @@ void psci_print_power_domain_map(void)
 	};
 
 	INFO("PSCI Power Domain Map:\n");
-	for (idx = 0; idx < PSCI_NUM_PWR_DOMAINS; idx++) {
-		node = &psci_pwr_domain_map[idx];
-		if (!(node->state & PSCI_PWR_DOMAIN_PRESENT)) {
-			continue;
-		}
-		INFO("  pwrInst: Level %u, MPID 0x%lx, State %s\n",
-				node->level, node->mpidr,
-				psci_state_str[psci_get_state(node)]);
+	for (idx = 0; idx < (PSCI_NUM_PWR_DOMAINS - PLATFORM_CORE_COUNT); idx++) {
+		state = psci_get_state(idx, psci_non_cpu_pd_nodes[idx].level);
+		INFO("  Domain Node : Level %u, parent_node %d, State %s\n",
+				psci_non_cpu_pd_nodes[idx].level,
+				psci_non_cpu_pd_nodes[idx].parent_node,
+				psci_state_str[state]);
+	}
+
+	for (idx = 0; idx < PLATFORM_CORE_COUNT; idx++) {
+		state = psci_get_state(idx, PSCI_CPU_PWR_LVL);
+		INFO("  CPU Node : MPID 0x%lx, parent_node %d, State %s\n",
+				psci_cpu_pd_nodes[idx].mpidr,
+				psci_cpu_pd_nodes[idx].parent_node,
+				psci_state_str[state]);
 	}
 #endif
 }

@@ -28,7 +28,6 @@
* POSSIBILITY OF SUCH DAMAGE.
*/
#include <arch.h>
#include <asm_macros.S>
#include <assert_macros.S>
#include <platform_def.h>
@@ -67,7 +66,7 @@ func psci_do_pwrdown_cache_maintenance
* platform.
* ---------------------------------------------
*/
-	cmp	x0, #MPIDR_AFFLVL0
+	cmp	x0, #PSCI_CPU_PWR_LVL
b.eq do_core_pwr_dwn
bl prepare_cluster_pwr_dwn
b do_stack_maintenance

@@ -240,32 +240,26 @@ int psci_cpu_off(void)
 int psci_affinity_info(unsigned long target_affinity,
 		       unsigned int lowest_affinity_level)
 {
-	int rc = PSCI_E_INVALID_PARAMS;
-	unsigned int pwr_domain_state;
-	pwr_map_node_t *node;
+	unsigned int cpu_idx;
+	unsigned char cpu_pwr_domain_state;
 
-	if (lowest_affinity_level > PLAT_MAX_PWR_LVL)
-		return rc;
+	/* We don't support levels higher than PSCI_CPU_PWR_LVL */
+	if (lowest_affinity_level > PSCI_CPU_PWR_LVL)
+		return PSCI_E_INVALID_PARAMS;
 
-	node = psci_get_pwr_map_node(target_affinity, lowest_affinity_level);
-	if (node && (node->state & PSCI_PWR_DOMAIN_PRESENT)) {
+	/* Calculate the cpu index of the target */
+	cpu_idx = plat_core_pos_by_mpidr(target_affinity);
+	if (cpu_idx == -1)
+		return PSCI_E_INVALID_PARAMS;
 
 	/*
 	 * TODO: For power levels higher than 0 i.e. cpu, the
 	 * state will always be either ON or OFF. Need to investigate
 	 * how critical is it to support ON_PENDING here.
 	 */
-	pwr_domain_state = psci_get_state(node);
+	cpu_pwr_domain_state = psci_get_state(cpu_idx, PSCI_CPU_PWR_LVL);
 
-	/* A suspended cpu is available & on for the OS */
-	if (pwr_domain_state == PSCI_STATE_SUSPEND) {
-		pwr_domain_state = PSCI_STATE_ON;
-	}
-
-	rc = pwr_domain_state;
-
-	return rc;
+	/* A suspended cpu is available & on for the OS */
+	if (cpu_pwr_domain_state == PSCI_STATE_SUSPEND) {
+		cpu_pwr_domain_state = PSCI_STATE_ON;
+	}
+
+	return cpu_pwr_domain_state;
 }
int psci_migrate(unsigned long target_cpu)

@@ -32,6 +32,7 @@
 #include <arch_helpers.h>
 #include <assert.h>
 #include <debug.h>
+#include <platform.h>
 #include <string.h>
 #include "psci_private.h"
@@ -50,8 +51,7 @@
  ******************************************************************************/
 int psci_do_cpu_off(int end_pwrlvl)
 {
-	int rc;
-	mpidr_pwr_map_nodes_t mpidr_nodes;
+	int rc, idx = plat_my_core_pos();
 	unsigned int max_phys_off_pwrlvl;
/*
@@ -60,28 +60,13 @@ int psci_do_cpu_off(int end_pwrlvl)
 	 */
 	assert(psci_plat_pm_ops->pwr_domain_off);
 
-	/*
-	 * Collect the pointers to the nodes in the topology tree for
-	 * each power domain instance in the mpidr. If this function does
-	 * not return successfully then either the mpidr or the power
-	 * levels are incorrect. Either way, this an internal TF error
-	 * therefore assert.
-	 */
-	rc = psci_get_pwr_map_nodes(read_mpidr_el1() & MPIDR_AFFINITY_MASK,
-				    MPIDR_AFFLVL0,
-				    end_pwrlvl,
-				    mpidr_nodes);
-	assert(rc == PSCI_E_SUCCESS);
-
 	/*
 	 * This function acquires the lock corresponding to each power
 	 * level so that by the time all locks are taken, the system topology
 	 * is snapshot and state management can be done safely.
 	 */
-	psci_acquire_pwr_domain_locks(MPIDR_AFFLVL0,
-				      end_pwrlvl,
-				      mpidr_nodes);
+	psci_acquire_pwr_domain_locks(end_pwrlvl,
+				      idx);
 
 	/*
 	 * Call the cpu off handler registered by the Secure Payload Dispatcher
@@ -96,17 +81,14 @@ int psci_do_cpu_off(int end_pwrlvl)
 	/*
 	 * This function updates the state of each power domain instance
-	 * corresponding to the mpidr in the range of power levels
+	 * corresponding to the cpu index in the range of power levels
 	 * specified.
 	 */
-	psci_do_state_coordination(MPIDR_AFFLVL0,
-				   end_pwrlvl,
-				   mpidr_nodes,
-				   PSCI_STATE_OFF);
+	psci_do_state_coordination(end_pwrlvl,
+				   idx,
+				   PSCI_STATE_OFF);
 
-	max_phys_off_pwrlvl = psci_find_max_phys_off_pwrlvl(MPIDR_AFFLVL0,
-							    end_pwrlvl,
-							    mpidr_nodes);
+	max_phys_off_pwrlvl = psci_find_max_phys_off_pwrlvl(end_pwrlvl, idx);
 	assert(max_phys_off_pwrlvl != PSCI_INVALID_DATA);
/*
@@ -126,9 +108,8 @@ exit:
 	 * Release the locks corresponding to each power level in the
 	 * reverse order to which they were acquired.
 	 */
-	psci_release_pwr_domain_locks(MPIDR_AFFLVL0,
-				      end_pwrlvl,
-				      mpidr_nodes);
+	psci_release_pwr_domain_locks(end_pwrlvl,
+				      idx);
 
 	/*
 	 * Check if all actions needed to safely power down this cpu have

@@ -71,8 +71,8 @@ int psci_cpu_on_start(unsigned long target_cpu,
 		      int end_pwrlvl)
 {
 	int rc;
-	mpidr_pwr_map_nodes_t target_cpu_nodes;
 	unsigned long psci_entrypoint;
+	unsigned int target_idx = plat_core_pos_by_mpidr(target_cpu);
 
 	/*
 	 * This function must only be called on platforms where the
@@ -81,33 +81,14 @@ int psci_cpu_on_start(unsigned long target_cpu,
 	assert(psci_plat_pm_ops->pwr_domain_on &&
 	       psci_plat_pm_ops->pwr_domain_on_finish);
 
-	/*
-	 * Collect the pointers to the nodes in the topology tree for
-	 * each power domain instance in the mpidr. If this function does
-	 * not return successfully then either the mpidr or the power
-	 * levels are incorrect.
-	 */
-	rc = psci_get_pwr_map_nodes(target_cpu,
-				    MPIDR_AFFLVL0,
-				    end_pwrlvl,
-				    target_cpu_nodes);
-	assert(rc == PSCI_E_SUCCESS);
-
-	/*
-	 * This function acquires the lock corresponding to each power
-	 * level so that by the time all locks are taken, the system topology
-	 * is snapshot and state management can be done safely.
-	 */
-	psci_acquire_pwr_domain_locks(MPIDR_AFFLVL0,
-				      end_pwrlvl,
-				      target_cpu_nodes);
+	/* Protect against multiple CPUs trying to turn ON the same target CPU */
+	psci_spin_lock_cpu(target_idx);
 
 	/*
 	 * Generic management: Ensure that the cpu is off to be
 	 * turned on.
 	 */
-	rc = cpu_on_validate_state(psci_get_state(
-			    target_cpu_nodes[MPIDR_AFFLVL0]));
+	rc = cpu_on_validate_state(psci_get_state(target_idx, PSCI_CPU_PWR_LVL));
 	if (rc != PSCI_E_SUCCESS)
 		goto exit;
@@ -121,13 +102,12 @@ int psci_cpu_on_start(unsigned long target_cpu,
 	/*
 	 * This function updates the state of each affinity instance
-	 * corresponding to the mpidr in the range of affinity levels
+	 * corresponding to the mpidr in the range of power domain levels
 	 * specified.
 	 */
-	psci_do_state_coordination(MPIDR_AFFLVL0,
-				   end_pwrlvl,
-				   target_cpu_nodes,
-				   PSCI_STATE_ON_PENDING);
+	psci_do_state_coordination(end_pwrlvl,
+				   target_idx,
+				   PSCI_STATE_ON_PENDING);
/*
* Perform generic, architecture and platform specific handling.
@@ -150,20 +130,12 @@ int psci_cpu_on_start(unsigned long target_cpu,
 		cm_init_context_by_index(target_idx, ep);
 	else
 		/* Restore the state on error. */
-		psci_do_state_coordination(MPIDR_AFFLVL0,
-					   end_pwrlvl,
-					   target_cpu_nodes,
-					   PSCI_STATE_OFF);
+		psci_do_state_coordination(end_pwrlvl,
+					   target_idx,
+					   PSCI_STATE_OFF);
 
 exit:
-	/*
-	 * This loop releases the lock corresponding to each power level
-	 * in the reverse order to which they were acquired.
-	 */
-	psci_release_pwr_domain_locks(MPIDR_AFFLVL0,
-				      end_pwrlvl,
-				      target_cpu_nodes);
+	psci_spin_unlock_cpu(target_idx);
 
 	return rc;
 }
@@ -171,12 +143,12 @@ exit:
  * The following function finish an earlier power on request. They
  * are called by the common finisher routine in psci_common.c.
  ******************************************************************************/
-void psci_cpu_on_finish(pwr_map_node_t *node[], int pwrlvl)
+void psci_cpu_on_finish(unsigned int cpu_idx,
+			int max_off_pwrlvl)
 {
-	assert(node[pwrlvl]->level == pwrlvl);
-
 	/* Ensure we have been explicitly woken up by another cpu */
-	assert(psci_get_state(node[MPIDR_AFFLVL0]) == PSCI_STATE_ON_PENDING);
+	assert(psci_get_state(cpu_idx, PSCI_CPU_PWR_LVL)
+	       == PSCI_STATE_ON_PENDING);
/*
* Plat. management: Perform the platform specific actions
@@ -184,7 +156,7 @@ void psci_cpu_on_finish(pwr_map_node_t *node[], int pwrlvl)
 	 * register. The actual state of this cpu has already been
 	 * changed.
 	 */
-	psci_plat_pm_ops->pwr_domain_on_finish(pwrlvl);
+	psci_plat_pm_ops->pwr_domain_on_finish(max_off_pwrlvl);
/*
* Arch. management: Enable data cache and manage stack memory
@@ -198,6 +170,15 @@ void psci_cpu_on_finish(pwr_map_node_t *node[], int pwrlvl)
 	 */
 	bl31_arch_setup();
 
+	/*
+	 * Lock the CPU spin lock to make sure that the context initialization
+	 * is done. Since the lock is only used in this function to create
+	 * a synchronization point with cpu_on_start(), it can be released
+	 * immediately.
+	 */
+	psci_spin_lock_cpu(cpu_idx);
+	psci_spin_unlock_cpu(cpu_idx);
+
/*
* Call the cpu on finish handler registered by the Secure Payload
* Dispatcher to let it do any bookeeping. If the handler encounters an
@@ -206,6 +187,10 @@ void psci_cpu_on_finish(pwr_map_node_t *node[], int pwrlvl)
 	if (psci_spd_pm && psci_spd_pm->svc_on_finish)
 		psci_spd_pm->svc_on_finish(0);
 
+	/* Populate the mpidr field within the cpu node array */
+	/* This needs to be done only once */
+	psci_cpu_pd_nodes[cpu_idx].mpidr = read_mpidr() & MPIDR_AFFINITY_MASK;
+
/*
* Generic management: Now we just need to retrieve the
* information that we had stashed away during the cpu_on
@@ -216,4 +201,3 @@ void psci_cpu_on_finish(pwr_map_node_t *node[], int pwrlvl)
/* Clean caches before re-entering normal world */
dcsw_op_louis(DCCSW);
}

@@ -34,25 +34,30 @@
 #include <arch.h>
 #include <bakery_lock.h>
 #include <bl_common.h>
+#include <cpu_data.h>
 #include <psci.h>
+#include <spinlock.h>
 
 /*
  * The following helper macros abstract the interface to the Bakery
 * Lock API.
 */
 #if USE_COHERENT_MEM
-#define psci_lock_init(pwr_map, idx)	bakery_lock_init(&(pwr_map)[(idx)].lock)
-#define psci_lock_get(node)		bakery_lock_get(&((node)->lock))
-#define psci_lock_release(node)		bakery_lock_release(&((node)->lock))
+#define psci_lock_init(non_cpu_pd_node, idx)	\
+	bakery_lock_init(&(non_cpu_pd_node)[(idx)].lock)
+#define psci_lock_get(non_cpu_pd_node)		\
+	bakery_lock_get(&((non_cpu_pd_node)->lock))
+#define psci_lock_release(non_cpu_pd_node)	\
+	bakery_lock_release(&((non_cpu_pd_node)->lock))
 #else
-#define psci_lock_init(pwr_map, idx)	\
-	((pwr_map)[(idx)].pwr_domain_index = (idx))
-#define psci_lock_get(node)		\
-	bakery_lock_get((node)->pwr_domain_index,	\
-			CPU_DATA_PSCI_LOCK_OFFSET)
-#define psci_lock_release(node)		\
-	bakery_lock_release((node)->pwr_domain_index,	\
-			    CPU_DATA_PSCI_LOCK_OFFSET)
+#define psci_lock_init(non_cpu_pd_node, idx)	\
+	((non_cpu_pd_node)[(idx)].lock_index = (idx))
+#define psci_lock_get(non_cpu_pd_node)		\
+	bakery_lock_get((non_cpu_pd_node)->lock_index,	\
+			CPU_DATA_PSCI_LOCK_OFFSET)
+#define psci_lock_release(non_cpu_pd_node)	\
+	bakery_lock_release((non_cpu_pd_node)->lock_index,	\
+			    CPU_DATA_PSCI_LOCK_OFFSET)
 #endif
/*
@@ -75,39 +80,76 @@
 	define_psci_cap(PSCI_MIG_INFO_UP_CPU_AARCH64) |	\
 	define_psci_cap(PSCI_SYSTEM_SUSPEND_AARCH64))
 
+/*
+ * Helper macros for the CPU level spinlocks
+ */
+#define psci_spin_lock_cpu(idx)		spin_lock(&psci_cpu_pd_nodes[idx].cpu_lock)
+#define psci_spin_unlock_cpu(idx)	spin_unlock(&psci_cpu_pd_nodes[idx].cpu_lock)
+
 /*******************************************************************************
- * The following two data structures hold the topology tree which in turn tracks
- * the state of the all the power domain instances supported by the platform.
+ * The following two data structures implement the power domain tree. The tree
+ * is used to track the state of all the nodes i.e. power domain instances
+ * described by the platform. The tree consists of nodes that describe CPU power
+ * domains i.e. leaf nodes and all other power domains which are parents of a
+ * CPU power domain i.e. non-leaf nodes.
  ******************************************************************************/
-typedef struct pwr_map_node {
-	unsigned long mpidr;
+typedef struct non_cpu_pwr_domain_node {
+	/*
+	 * Index of the first CPU power domain node level 0 which has this node
+	 * as its parent.
+	 */
+	unsigned int cpu_start_idx;
+
+	/*
+	 * Number of CPU power domains which are siblings of the domain indexed
+	 * by 'cpu_start_idx' i.e. all the domains in the range 'cpu_start_idx
+	 * -> cpu_start_idx + ncpus' have this node as their parent.
+	 */
+	unsigned int ncpus;
+
+	/*
+	 * Index of the parent power domain node.
+	 * TODO: Figure out whether using a pointer is more efficient.
+	 */
+	unsigned int parent_node;
+
 	unsigned char ref_count;
 	unsigned char state;
 	unsigned char level;
 #if USE_COHERENT_MEM
 	bakery_lock_t lock;
 #else
 	/* For indexing the bakery_info array in per CPU data */
-	unsigned char pwr_domain_index;
+	unsigned char lock_index;
 #endif
-} pwr_map_node_t;
+} non_cpu_pd_node_t;
 
-typedef struct pwr_lvl_limits_node {
-	int min;
-	int max;
-} pwr_lvl_limits_node_t;
+typedef struct cpu_pwr_domain_node {
+	unsigned long mpidr;
 
-typedef pwr_map_node_t (*mpidr_pwr_map_nodes_t[MPIDR_MAX_AFFLVL + 1]);
-typedef void (*pwrlvl_power_on_finisher_t)(pwr_map_node_t *mpidr_nodes[],
-					   int pwrlvl);
+	/*
+	 * Index of the parent power domain node.
+	 * TODO: Figure out whether using a pointer is more efficient.
+	 */
+	unsigned int parent_node;
+
+	/*
+	 * A CPU power domain does not require state coordination like its
+	 * parent power domains. Hence this node does not include a bakery
+	 * lock. A spinlock is required by the CPU_ON handler to prevent a race
+	 * when multiple CPUs try to turn ON the same target CPU.
+	 */
+	spinlock_t cpu_lock;
+} cpu_pd_node_t;
+
+typedef void (*pwrlvl_power_on_finisher_t)(unsigned int cpu_idx,
+					   int max_off_pwrlvl);
 
 /*******************************************************************************
  * Data prototypes
  ******************************************************************************/
 extern const plat_pm_ops_t *psci_plat_pm_ops;
-extern pwr_map_node_t psci_pwr_domain_map[PSCI_NUM_PWR_DOMAINS];
-extern pwr_lvl_limits_node_t psci_pwr_lvl_limits[MPIDR_MAX_AFFLVL + 1];
+extern non_cpu_pd_node_t psci_non_cpu_pd_nodes[PSCI_NUM_NON_CPU_PWR_DOMAINS];
+extern cpu_pd_node_t psci_cpu_pd_nodes[PLATFORM_CORE_COUNT];
 extern uint32_t psci_caps;
/*******************************************************************************
@@ -119,56 +161,47 @@ extern const spd_pm_ops_t *psci_spd_pm;
  * Function prototypes
  ******************************************************************************/
 /* Private exported functions from psci_common.c */
-unsigned short psci_get_state(pwr_map_node_t *node);
-unsigned short psci_get_phys_state(pwr_map_node_t *node);
-void psci_set_state(pwr_map_node_t *node, unsigned short state);
-unsigned long mpidr_set_pwr_domain_inst(unsigned long, unsigned char, int);
+unsigned short psci_get_state(unsigned int idx, int level);
+unsigned short psci_get_phys_state(unsigned int idx, int level);
+void psci_set_state(unsigned int idx, unsigned short state, int level);
 int psci_validate_mpidr(unsigned long mpidr);
 int get_power_on_target_pwrlvl(void);
 void psci_power_up_finish(int end_pwrlvl,
 			  pwrlvl_power_on_finisher_t pon_handler);
 int psci_get_ns_ep_info(entry_point_info_t *ep,
 			uint64_t entrypoint, uint64_t context_id);
-int psci_check_pwrlvl_range(int start_pwrlvl, int end_pwrlvl);
-void psci_do_state_coordination(uint32_t start_pwrlvl,
-				uint32_t end_pwrlvl,
-				pwr_map_node_t *mpidr_nodes[],
-				uint32_t state);
-void psci_acquire_pwr_domain_locks(int start_pwrlvl,
-				   int end_pwrlvl,
-				   pwr_map_node_t *mpidr_nodes[]);
-void psci_release_pwr_domain_locks(int start_pwrlvl,
-				   int end_pwrlvl,
-				   mpidr_pwr_map_nodes_t mpidr_nodes);
+void psci_get_parent_pwr_domain_nodes(unsigned int cpu_idx,
+				      int end_lvl,
+				      unsigned int node_index[]);
+void psci_do_state_coordination(int end_pwrlvl,
+				unsigned int cpu_idx,
+				uint32_t state);
+void psci_acquire_pwr_domain_locks(int end_pwrlvl,
+				   unsigned int cpu_idx);
+void psci_release_pwr_domain_locks(int end_pwrlvl,
+				   unsigned int cpu_idx);
 void psci_print_power_domain_map(void);
-uint32_t psci_find_max_phys_off_pwrlvl(uint32_t start_pwrlvl,
-				       uint32_t end_pwrlvl,
-				       pwr_map_node_t *mpidr_nodes[]);
+uint32_t psci_find_max_phys_off_pwrlvl(uint32_t end_pwrlvl,
+				       unsigned int cpu_idx);
 unsigned int psci_is_last_on_cpu(void);
 int psci_spd_migrate_info(uint64_t *mpidr);
 
-/* Private exported functions from psci_setup.c */
-int psci_get_pwr_map_nodes(unsigned long mpidr,
-			   int start_pwrlvl,
-			   int end_pwrlvl,
-			   pwr_map_node_t *mpidr_nodes[]);
-pwr_map_node_t *psci_get_pwr_map_node(unsigned long, int);
-
-/* Private exported functions from psci_cpu_on.c */
+/* Private exported functions from psci_on.c */
 int psci_cpu_on_start(unsigned long target_cpu,
 		      entry_point_info_t *ep,
 		      int end_pwrlvl);
-void psci_cpu_on_finish(pwr_map_node_t *node[], int pwrlvl);
+void psci_cpu_on_finish(unsigned int cpu_idx,
+			int max_off_pwrlvl);
 
 /* Private exported functions from psci_cpu_off.c */
 int psci_do_cpu_off(int end_pwrlvl);
 
-/* Private exported functions from psci_cpu_suspend.c */
+/* Private exported functions from psci_suspend.c */
 void psci_cpu_suspend_start(entry_point_info_t *ep,
 			    int end_pwrlvl);
-void psci_cpu_suspend_finish(pwr_map_node_t *node[], int pwrlvl);
+void psci_cpu_suspend_finish(unsigned int cpu_idx,
+			     int max_off_pwrlvl);
 
 void psci_set_suspend_power_state(unsigned int power_state);

@@ -42,335 +42,203 @@
  * Per cpu non-secure contexts used to program the architectural state prior
  * return to the normal world.
  * TODO: Use the memory allocator to set aside memory for the contexts instead
- * of relying on platform defined constants. Using PSCI_NUM_PWR_DOMAINS will be
- * an overkill.
+ * of relying on platform defined constants.
  ******************************************************************************/
 static cpu_context_t psci_ns_context[PLATFORM_CORE_COUNT];
 
-/*******************************************************************************
- * In a system, a certain number of power domain instances are present at a
- * power level. The cumulative number of instances across all levels are
- * stored in 'psci_pwr_domain_map'. The topology tree has been flattenned into
- * this array. To retrieve nodes, information about the extents of each power
- * level i.e. start index and end index needs to be present.
- * 'psci_pwr_lvl_limits' stores this information.
- ******************************************************************************/
-pwr_lvl_limits_node_t psci_pwr_lvl_limits[MPIDR_MAX_AFFLVL + 1];
-
 /******************************************************************************
  * Define the psci capability variable.
  *****************************************************************************/
 uint32_t psci_caps;
 /*******************************************************************************
- * Routines for retrieving the node corresponding to a power domain instance
- * in the mpidr. The first one uses binary search to find the node corresponding
- * to the mpidr (key) at a particular power level. The second routine decides
- * extents of the binary search at each power level.
+ * Function which initializes the 'psci_non_cpu_pd_nodes' or the
+ * 'psci_cpu_pd_nodes' corresponding to the power level.
  ******************************************************************************/
-static int psci_pwr_domain_map_get_idx(unsigned long key,
-				       int min_idx,
-				       int max_idx)
-{
-	int mid;
-
-	/*
-	 * Terminating condition: If the max and min indices have crossed paths
-	 * during the binary search then the key has not been found.
-	 */
-	if (max_idx < min_idx)
-		return PSCI_E_INVALID_PARAMS;
-
-	/*
-	 * Make sure we are within array limits.
-	 */
-	assert(min_idx >= 0 && max_idx < PSCI_NUM_PWR_DOMAINS);
-
-	/*
-	 * Bisect the array around 'mid' and then recurse into the array chunk
-	 * where the key is likely to be found. The mpidrs in each node in the
-	 * 'psci_pwr_domain_map' for a given power level are stored in an
-	 * ascending order which makes the binary search possible.
-	 */
-	mid = min_idx + ((max_idx - min_idx) >> 1);	/* Divide by 2 */
-
-	if (psci_pwr_domain_map[mid].mpidr > key)
-		return psci_pwr_domain_map_get_idx(key, min_idx, mid - 1);
-	else if (psci_pwr_domain_map[mid].mpidr < key)
-		return psci_pwr_domain_map_get_idx(key, mid + 1, max_idx);
-	else
-		return mid;
-}
-
-pwr_map_node_t *psci_get_pwr_map_node(unsigned long mpidr, int pwr_lvl)
-{
-	int rc;
-
-	if (pwr_lvl > PLAT_MAX_PWR_LVL)
-		return NULL;
-
-	/* Right shift the mpidr to the required power level */
-	mpidr = mpidr_mask_lower_afflvls(mpidr, pwr_lvl);
-
-	rc = psci_pwr_domain_map_get_idx(mpidr,
-					 psci_pwr_lvl_limits[pwr_lvl].min,
-					 psci_pwr_lvl_limits[pwr_lvl].max);
-	if (rc >= 0)
-		return &psci_pwr_domain_map[rc];
-	else
-		return NULL;
-}
-
-/*******************************************************************************
- * This function populates an array with nodes corresponding to a given range of
- * power levels in an mpidr. It returns successfully only when the power
- * levels are correct, the mpidr is valid i.e. no power level is absent from
- * the topology tree & the power domain instance at level 0 is not absent.
- ******************************************************************************/
-int psci_get_pwr_map_nodes(unsigned long mpidr,
-			   int start_pwrlvl,
-			   int end_pwrlvl,
-			   pwr_map_node_t *mpidr_nodes[])
-{
-	int rc = PSCI_E_INVALID_PARAMS, level;
-	pwr_map_node_t *node;
-
-	rc = psci_check_pwrlvl_range(start_pwrlvl, end_pwrlvl);
-	if (rc != PSCI_E_SUCCESS)
-		return rc;
-
-	for (level = start_pwrlvl; level <= end_pwrlvl; level++) {
-		/*
-		 * Grab the node for each power level. No power level
-		 * can be missing as that would mean that the topology tree
-		 * is corrupted.
-		 */
-		node = psci_get_pwr_map_node(mpidr, level);
-		if (node == NULL) {
-			rc = PSCI_E_INVALID_PARAMS;
-			break;
-		}
-
-		/*
-		 * Skip absent power levels unless it's power level 0.
-		 * An absent cpu means that the mpidr is invalid. Save the
-		 * pointer to the node for the present power level
-		 */
-		if (!(node->state & PSCI_PWR_DOMAIN_PRESENT)) {
-			if (level == MPIDR_AFFLVL0) {
-				rc = PSCI_E_INVALID_PARAMS;
-				break;
-			}
-			mpidr_nodes[level] = NULL;
-		} else
-			mpidr_nodes[level] = node;
-	}
-
-	return rc;
-}
-
-/*******************************************************************************
- * Function which initializes the 'pwr_map_node' corresponding to a power
- * domain instance. Each node has a unique mpidr, level and bakery lock.
- ******************************************************************************/
-static void psci_init_pwr_map_node(unsigned long mpidr,
-				   int level,
-				   unsigned int idx)
-{
-	unsigned char state;
-	uint32_t linear_id;
-
-	psci_pwr_domain_map[idx].mpidr = mpidr;
-	psci_pwr_domain_map[idx].level = level;
-	psci_lock_init(psci_pwr_domain_map, idx);
-
-	/*
-	 * If an power domain instance is present then mark it as OFF
-	 * to begin with.
-	 */
-	state = plat_get_pwr_domain_state(level, mpidr);
-	psci_pwr_domain_map[idx].state = state;
-
-	/*
-	 * Check if this is a CPU node and is present in which case certain
-	 * other initialisations are required.
-	 */
-	if (level != MPIDR_AFFLVL0)
-		return;
-
-	if (!(state & PSCI_PWR_DOMAIN_PRESENT))
-		return;
-
-	/*
-	 * Mark the cpu as OFF. Higher power level reference counts
-	 * have already been memset to 0
-	 */
-	psci_set_state(&psci_pwr_domain_map[idx], PSCI_STATE_OFF);
-
-	/*
-	 * Associate a non-secure context with this power
-	 * instance through the context management library.
-	 */
-	linear_id = plat_core_pos_by_mpidr(mpidr);
-	assert(linear_id < PLATFORM_CORE_COUNT);
-
-	/* Invalidate the suspend context for the node */
-	set_cpu_data_by_index(linear_id,
-			      psci_svc_cpu_data.power_state,
-			      PSCI_INVALID_DATA);
-
-	flush_cpu_data_by_index(linear_id, psci_svc_cpu_data);
-
-	cm_set_context_by_index(linear_id,
-				(void *) &psci_ns_context[linear_id],
-				NON_SECURE);
-}
-
-/*******************************************************************************
- * Core routine used by the Breadth-First-Search algorithm to populate the
- * power domain tree. Each level in the tree corresponds to a power level. This
- * routine's aim is to traverse to the target power level and populate nodes
- * in the 'psci_pwr_domain_map' for all the siblings at that level. It uses the
- * current power level to keep track of how many levels from the root of the
- * tree have been traversed. If the current power level != target power level,
- * then the platform is asked to return the number of children that each
- * power domain instance has at the current power level. Traversal is then done
- * for each child at the next lower level i.e. current power level - 1.
- *
- * CAUTION: This routine assumes that power domain instance ids are allocated
- * in a monotonically increasing manner at each power level in a mpidr starting
- * from 0. If the platform breaks this assumption then this code will have to
- * be reworked accordingly.
- ******************************************************************************/
-static unsigned int psci_init_pwr_map(unsigned long mpidr,
-				      unsigned int pwrmap_idx,
-				      int cur_pwrlvl,
-				      int tgt_pwrlvl)
-{
-	unsigned int ctr, pwr_inst_count;
-
-	assert(cur_pwrlvl >= tgt_pwrlvl);
-
-	/*
-	 * Find the number of siblings at the current power level &
-	 * assert if there are none 'cause then we have been invoked with
-	 * an invalid mpidr.
-	 */
-	pwr_inst_count = plat_get_pwr_domain_count(cur_pwrlvl, mpidr);
-	assert(pwr_inst_count);
-
-	if (tgt_pwrlvl < cur_pwrlvl) {
-		for (ctr = 0; ctr < pwr_inst_count; ctr++) {
-			mpidr = mpidr_set_pwr_domain_inst(mpidr, ctr,
-							  cur_pwrlvl);
-			pwrmap_idx = psci_init_pwr_map(mpidr,
-						       pwrmap_idx,
-						       cur_pwrlvl - 1,
-						       tgt_pwrlvl);
-		}
-	} else {
-		for (ctr = 0; ctr < pwr_inst_count; ctr++, pwrmap_idx++) {
-			mpidr = mpidr_set_pwr_domain_inst(mpidr, ctr,
-							  cur_pwrlvl);
-			psci_init_pwr_map_node(mpidr, cur_pwrlvl, pwrmap_idx);
-		}
-
-		/* pwrmap_idx is 1 greater than the max index of cur_pwrlvl */
-		psci_pwr_lvl_limits[cur_pwrlvl].max = pwrmap_idx - 1;
-	}
-
-	return pwrmap_idx;
-}
+static void psci_init_pwr_domain_node(int node_idx, int parent_idx, int level)
+{
+	if (level > PSCI_CPU_PWR_LVL) {
+		psci_non_cpu_pd_nodes[node_idx].level = level;
+		psci_lock_init(psci_non_cpu_pd_nodes, node_idx);
+		psci_non_cpu_pd_nodes[node_idx].parent_node = parent_idx;
+	} else {
+		psci_cpu_pd_nodes[node_idx].parent_node = parent_idx;
+
+		/* Initialize with an invalid mpidr */
+		psci_cpu_pd_nodes[node_idx].mpidr = PSCI_INVALID_MPIDR;
+
+		/* Mark the cpu as OFF */
+		set_cpu_data_by_index(node_idx,
+				      psci_svc_cpu_data.psci_state,
+				      PSCI_STATE_OFF);
+
+		/* Invalidate the suspend context for the node */
+		set_cpu_data_by_index(node_idx,
+				      psci_svc_cpu_data.power_state,
+				      PSCI_INVALID_DATA);
+
+		flush_cpu_data_by_index(node_idx, psci_svc_cpu_data);
+
+		cm_set_context_by_index(node_idx,
+					(void *) &psci_ns_context[node_idx],
+					NON_SECURE);
+	}
+}
 /*******************************************************************************
- * This function initializes the topology tree by querying the platform. To do
- * so, it's helper routines implement a Breadth-First-Search. At each power
- * level the platform conveys the number of power domain instances that exist
- * i.e. the power instance count. The algorithm populates the
- * psci_pwr_domain_map* recursively using this information. On a platform that
- * implements two clusters of 4 cpus each, the populated pwr_map_array would
- * look like this:
+ * This function updates the cpu_start_idx and ncpus fields for each of the
+ * nodes in psci_non_cpu_pd_nodes[]. It does so by comparing the parent nodes
+ * of each of the CPUs and checking whether they match with the parent of the
+ * previous CPU. The basic assumption for this to work is that children of the
+ * same parent are allocated adjacent indices. The platform should ensure this
+ * through proper mapping of the CPUs to indices via the
+ * plat_core_pos_by_mpidr() and plat_my_core_pos() APIs.
  *******************************************************************************/
+static void psci_update_pwrlvl_limits(void)
+{
+	int cpu_idx, j;
+	unsigned int nodes_idx[PLAT_MAX_PWR_LVL] = {0};
+	unsigned int temp_index[PLAT_MAX_PWR_LVL];
+
+	for (cpu_idx = 0; cpu_idx < PLATFORM_CORE_COUNT; cpu_idx++) {
+		psci_get_parent_pwr_domain_nodes(cpu_idx,
+						 PLAT_MAX_PWR_LVL,
+						 temp_index);
+		for (j = PLAT_MAX_PWR_LVL - 1; j >= 0; j--) {
+			if (temp_index[j] != nodes_idx[j]) {
+				nodes_idx[j] = temp_index[j];
+				psci_non_cpu_pd_nodes[nodes_idx[j]].cpu_start_idx
+					= cpu_idx;
+			}
+			psci_non_cpu_pd_nodes[nodes_idx[j]].ncpus++;
+		}
+	}
+}
+/*******************************************************************************
+ * Core routine to populate the power domain tree. The tree descriptor passed
+ * by the platform is populated breadth-first and the first entry in the map
+ * informs the number of root power domains. The parent nodes of the root nodes
+ * will point to an invalid entry (-1).
+ ******************************************************************************/
+static void populate_power_domain_tree(const unsigned char *topology)
+{
+	unsigned int i, j = 0, num_nodes_at_lvl = 1, num_nodes_at_next_lvl;
+	unsigned int node_index = 0, parent_node_index = 0, num_children;
+	int level = PLAT_MAX_PWR_LVL;
+
+	/*
+	 * For each level the inputs are:
+	 * - number of nodes at this level in plat_array i.e. num_nodes_at_level
+	 *   This is the sum of values of nodes at the parent level.
+	 * - Index of first entry at this level in the plat_array i.e.
+	 *   parent_node_index.
+	 * - Index of first free entry in psci_non_cpu_pd_nodes[] or
+	 *   psci_cpu_pd_nodes[] i.e. node_index depending upon the level.
+	 */
+	while (level >= PSCI_CPU_PWR_LVL) {
+		num_nodes_at_next_lvl = 0;
+		/*
+		 * For each entry (parent node) at this level in the plat_array:
+		 * - Find the number of children
+		 * - Allocate a node in a power domain array for each child
+		 * - Set the parent of the child to the parent_node_index - 1
+		 * - Increment parent_node_index to point to the next parent
+		 * - Accumulate the number of children at next level.
+		 */
+		for (i = 0; i < num_nodes_at_lvl; i++) {
+			assert(parent_node_index <=
+			       PSCI_NUM_NON_CPU_PWR_DOMAINS);
+			num_children = topology[parent_node_index];
+
+			for (j = node_index;
+			     j < node_index + num_children; j++)
+				psci_init_pwr_domain_node(j,
+							  parent_node_index - 1,
+							  level);
+
+			node_index = j;
+			num_nodes_at_next_lvl += num_children;
+			parent_node_index++;
+		}
+
+		num_nodes_at_lvl = num_nodes_at_next_lvl;
+		level--;
+
+		/* Reset the index for the cpu power domain array */
+		if (level == PSCI_CPU_PWR_LVL)
+			node_index = 0;
+	}
+
+	/* Validate the sanity of array exported by the platform */
+	assert(j == PLATFORM_CORE_COUNT);
+
+#if !USE_COHERENT_MEM
+	/* Flush the non CPU power domain data to memory */
+	flush_dcache_range((uint64_t) &psci_non_cpu_pd_nodes,
+			   sizeof(psci_non_cpu_pd_nodes));
+#endif
+}
/*******************************************************************************
 * This function initializes the power domain topology tree by querying the
 * platform. The power domain nodes higher than the CPU are populated in the
 * array psci_non_cpu_pd_nodes[] and the CPU power domains are populated in
 * psci_cpu_pd_nodes[]. The platform exports its static topology map through
 * the plat_get_power_domain_tree_desc() API. The algorithm populates the
 * psci_non_cpu_pd_nodes and psci_cpu_pd_nodes iteratively by using this
 * topology map. On a platform that implements two clusters of 2 cpus each,
 * and supports 3 domain levels, the populated psci_non_cpu_pd_nodes would
 * look like this:
 *
 * ---------------------------------------------------
 * |  system node  | cluster 0 node  | cluster 1 node |
 * ---------------------------------------------------
 *
 * And the populated psci_cpu_pd_nodes would look like this:
 * <-   cpus cluster0  -><-   cpus cluster1  ->
 * ------------------------------------------------
 * |   CPU 0   |   CPU 1   |   CPU 2   |   CPU 3  |
 * ------------------------------------------------
 ******************************************************************************/
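For the 2-cluster, 2-cpu example described in the comment above, the descriptor a platform would hand back could be sketched as follows (hypothetical platform code; the array name is illustrative, only plat_get_power_domain_tree_desc() is the real API):

```c
/*
 * Hypothetical descriptor for a 2-cluster, 4-cpu, 3-level platform:
 * index 0:    one power domain at the highest (system) level
 * index 1:    the system domain has two children (the clusters)
 * index 2, 3: each cluster has two children (the cpus)
 */
static const unsigned char plat_pd_tree_desc[] = {1, 2, 2, 2};

const unsigned char *plat_get_power_domain_tree_desc(void)
{
	return plat_pd_tree_desc;
}
```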
int32_t psci_setup(void)
{
	const unsigned char *topology_tree;

	psci_plat_pm_ops = NULL;

	/* Query the topology map from the platform */
	topology_tree = plat_get_power_domain_tree_desc();

	/* Populate the power domain arrays using the platform topology map */
	populate_power_domain_tree(topology_tree);

	/* Update the CPU limits for each node in psci_non_cpu_pd_nodes */
	psci_update_pwrlvl_limits();

	/* Populate the mpidr field of cpu node for this CPU */
	psci_cpu_pd_nodes[plat_my_core_pos()].mpidr =
		read_mpidr() & MPIDR_AFFINITY_MASK;

#if !USE_COHERENT_MEM
	/*
	 * The psci_non_cpu_pd_nodes only needs flushing when it's not
	 * allocated in coherent memory.
	 */
	flush_dcache_range((uint64_t) &psci_non_cpu_pd_nodes,
			   sizeof(psci_non_cpu_pd_nodes));
#endif

	flush_dcache_range((uint64_t) &psci_cpu_pd_nodes,
			   sizeof(psci_cpu_pd_nodes));

	/*
	 * Mark the current CPU and its parent power domains as ON. No need to
	 * lock as the system is UP on the primary at this stage of boot.
	 */
	psci_do_state_coordination(PLAT_MAX_PWR_LVL, plat_my_core_pos(),
				   PSCI_STATE_ON);

	platform_setup_pm(&psci_plat_pm_ops);
assert(psci_plat_pm_ops);
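The two-array tree described in the commit message (a core array indexed by the linear core index, plus a non-CPU array reached through 'parent' fields) can be pictured with a trimmed-down sketch. Field names, node layouts and the lookup helper here are assumptions for illustration; the real node types carry locks, power state and more.

```c
#include <assert.h>

#define NUM_NON_CPU_NODES	3	/* system + 2 clusters, from the example */
#define NUM_CPU_NODES		4

/* Hypothetical, trimmed-down node layouts: each node records its power
 * level and the index of its parent in the non-CPU array. */
typedef struct {
	unsigned char level;
	unsigned char parent;
} non_cpu_pd_node_t;

typedef struct {
	unsigned char parent;
} cpu_pd_node_t;

/* Node 0 = system (level 2); nodes 1-2 = cluster 0 and 1 (level 1). */
static const non_cpu_pd_node_t non_cpu_nodes[NUM_NON_CPU_NODES] = {
	{2, 0}, {1, 0}, {1, 0}
};

/* cpus 0-1 under cluster 0 (index 1), cpus 2-3 under cluster 1 (index 2) */
static const cpu_pd_node_t cpu_nodes[NUM_CPU_NODES] = {
	{1}, {1}, {2}, {2}
};

/* Walk from a core's node up to its ancestor at 'level', the way the
 * generic code resolves power domains above the core level. */
static unsigned int get_parent_at_level(unsigned int cpu_idx,
					unsigned char level)
{
	unsigned int idx = cpu_nodes[cpu_idx].parent;

	while (non_cpu_nodes[idx].level < level)
		idx = non_cpu_nodes[idx].parent;
	return idx;
}
```

A core's linear index doubles as its slot in the cpu array, so no mpidr search is needed; only walks above the core level touch the non-CPU array.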


@@ -82,15 +82,15 @@ int psci_get_suspend_stateid(void)
 }
 /*******************************************************************************
- * This function gets the state id of the cpu specified by the 'mpidr' parameter
+ * This function gets the state id of the cpu specified by the cpu index
  * from the power state parameter saved in the per-cpu data array. Returns
  * PSCI_INVALID_DATA if the power state saved is invalid.
  ******************************************************************************/
-int psci_get_suspend_stateid_by_mpidr(unsigned long mpidr)
+int psci_get_suspend_stateid_by_idx(unsigned long cpu_idx)
 {
 	unsigned int power_state;
-	power_state = get_cpu_data_by_index(plat_core_pos_by_mpidr(mpidr),
+	power_state = get_cpu_data_by_index(cpu_idx,
 					    psci_svc_cpu_data.power_state);
 	return ((power_state == PSCI_INVALID_DATA) ?
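The hunk above swaps an mpidr-keyed lookup for the linear core index. That index comes from plat_core_pos_by_mpidr(); for the 2-cluster, 2-cpu example platform an implementation could be sketched like this (hypothetical shifts and limits, assuming affinity level 1 carries the cluster id):

```c
/*
 * Hypothetical plat_core_pos_by_mpidr() for a 2-cluster, 2-cpu platform:
 * maps MPIDR affinity fields to the linear core index used to index
 * psci_cpu_pd_nodes[] and the per-cpu data. Returns -1 for an MPIDR that
 * does not name a core on this platform.
 */
static int plat_core_pos_by_mpidr(unsigned long mpidr)
{
	unsigned int cluster = (mpidr >> 8) & 0xff;	/* Aff1 */
	unsigned int cpu = mpidr & 0xff;		/* Aff0 */

	if (cluster >= 2 || cpu >= 2)
		return -1;
	return (int)(cluster * 2 + cpu);
}
```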
@@ -114,12 +114,10 @@ int psci_get_suspend_stateid_by_mpidr(unsigned long mpidr)
  * the state transition has been done, no further error is expected and it is
  * not possible to undo any of the actions taken beyond that point.
  ******************************************************************************/
-void psci_cpu_suspend_start(entry_point_info_t *ep,
-			    int end_pwrlvl)
+void psci_cpu_suspend_start(entry_point_info_t *ep, int end_pwrlvl)
 {
 	int skip_wfi = 0;
-	mpidr_pwr_map_nodes_t mpidr_nodes;
-	unsigned int max_phys_off_pwrlvl;
+	unsigned int max_phys_off_pwrlvl, idx = plat_my_core_pos();
 	unsigned long psci_entrypoint;
 	/*
@@ -129,25 +127,13 @@ void psci_cpu_suspend_start(entry_point_info_t *ep,
 	assert(psci_plat_pm_ops->pwr_domain_suspend &&
 	       psci_plat_pm_ops->pwr_domain_suspend_finish);
-	/*
-	 * Collect the pointers to the nodes in the topology tree for
-	 * each power domain instance in the mpidr. If this function does
-	 * not return successfully then either the mpidr or the power
-	 * levels are incorrect. Either way, this an internal TF error
-	 * therefore assert.
-	 */
-	if (psci_get_pwr_map_nodes(read_mpidr_el1() & MPIDR_AFFINITY_MASK,
-			MPIDR_AFFLVL0, end_pwrlvl, mpidr_nodes) != PSCI_E_SUCCESS)
-		assert(0);
 	/*
 	 * This function acquires the lock corresponding to each power
 	 * level so that by the time all locks are taken, the system topology
 	 * is snapshot and state management can be done safely.
 	 */
-	psci_acquire_pwr_domain_locks(MPIDR_AFFLVL0,
-				      end_pwrlvl,
-				      mpidr_nodes);
+	psci_acquire_pwr_domain_locks(end_pwrlvl,
+				      idx);
 	/*
 	 * We check if there are any pending interrupts after the delay
@@ -169,17 +155,15 @@ void psci_cpu_suspend_start(entry_point_info_t *ep,
 	/*
 	 * This function updates the state of each power domain instance
-	 * corresponding to the mpidr in the range of power levels
+	 * corresponding to the cpu index in the range of power levels
 	 * specified.
 	 */
-	psci_do_state_coordination(MPIDR_AFFLVL0,
-				   end_pwrlvl,
-				   mpidr_nodes,
-				   PSCI_STATE_SUSPEND);
+	psci_do_state_coordination(end_pwrlvl,
+				   idx,
+				   PSCI_STATE_SUSPEND);
-	max_phys_off_pwrlvl = psci_find_max_phys_off_pwrlvl(MPIDR_AFFLVL0,
-							    end_pwrlvl,
-							    mpidr_nodes);
+	max_phys_off_pwrlvl = psci_find_max_phys_off_pwrlvl(end_pwrlvl,
+							    idx);
 	assert(max_phys_off_pwrlvl != PSCI_INVALID_DATA);
 	/*
@@ -210,9 +194,8 @@ exit:
 	 * Release the locks corresponding to each power level in the
 	 * reverse order to which they were acquired.
 	 */
-	psci_release_pwr_domain_locks(MPIDR_AFFLVL0,
-				      end_pwrlvl,
-				      mpidr_nodes);
+	psci_release_pwr_domain_locks(end_pwrlvl,
+				      idx);
 	if (!skip_wfi)
 		psci_power_down_wfi();
 }
@@ -221,15 +204,14 @@ exit:
  * The following functions finish an earlier suspend request. They
  * are called by the common finisher routine in psci_common.c.
  ******************************************************************************/
-void psci_cpu_suspend_finish(pwr_map_node_t *node[], int pwrlvl)
+void psci_cpu_suspend_finish(unsigned int cpu_idx, int max_off_pwrlvl)
 {
 	int32_t suspend_level;
 	uint64_t counter_freq;
-	assert(node[pwrlvl]->level == pwrlvl);
 	/* Ensure we have been woken up from a suspended state */
-	assert(psci_get_state(node[MPIDR_AFFLVL0]) == PSCI_STATE_SUSPEND);
+	assert(psci_get_state(cpu_idx, PSCI_CPU_PWR_LVL)
+	       == PSCI_STATE_SUSPEND);
 	/*
 	 * Plat. management: Perform the platform specific actions
@@ -238,7 +220,7 @@ void psci_cpu_suspend_finish(pwr_map_node_t *node[], int pwrlvl)
 	 * wrong then assert as there is no way to recover from this
 	 * situation.
 	 */
-	psci_plat_pm_ops->pwr_domain_suspend_finish(pwrlvl);
+	psci_plat_pm_ops->pwr_domain_suspend_finish(max_off_pwrlvl);
 	/*
 	 * Arch. management: Enable the data cache, manage stack memory and
@@ -275,4 +257,3 @@ void psci_cpu_suspend_finish(pwr_map_node_t *node[], int pwrlvl)
 	/* Clean caches before re-entering normal world */
 	dcsw_op_louis(DCCSW);
 }