PSCI: Add framework to handle composite power states

The state-id field in the power-state parameter of a CPU_SUSPEND call can be
used to describe composite power states specific to a platform. The current PSCI
implementation does not interpret the state-id field. It relies on the target
power level and the state type fields in the power-state parameter to perform
state coordination and power management operations. The framework introduced
in this patch allows the PSCI implementation to interpret generic global states
like RUN, RETENTION or OFF from the State-ID, to make global state coordination
decisions and to reduce the complexity of platform ports. It adds support for
involving the platform in state coordination, which facilitates the use of
composite power states and improves support for entering standby states
at multiple power domains.

The patch also includes support for the extended State-ID format for the
power-state parameter, as specified by PSCI v1.0.
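
For reference, the sketch below decodes a power_state value under both layouts,
using the shift/mask values this patch adds to psci.h; the helper names are
illustrative only, and the meaning of the State-ID bits remains platform
specific.

#include <stdint.h>

/* Original format: State-ID bits[15:0], type bit[16], power level bits[25:24] */
#define ORIG_ID_MASK            0xffffu
#define ORIG_TYPE_SHIFT         16
#define ORIG_PWR_LVL_SHIFT      24
#define ORIG_PWR_LVL_MASK       0x3u

/* Extended State-ID format: State-ID bits[27:0], type bit[30] */
#define EXT_ID_MASK             0xfffffffu
#define EXT_TYPE_SHIFT          30

static void decode_original(uint32_t pstate, uint32_t *id, uint32_t *type,
                            uint32_t *pwrlvl)
{
        *id = pstate & ORIG_ID_MASK;
        *type = (pstate >> ORIG_TYPE_SHIFT) & 0x1u;
        *pwrlvl = (pstate >> ORIG_PWR_LVL_SHIFT) & ORIG_PWR_LVL_MASK;
}

static void decode_extended(uint32_t pstate, uint32_t *id, uint32_t *type)
{
        /* No power level field: it must be inferred from the State-ID itself */
        *id = pstate & EXT_ID_MASK;
        *type = (pstate >> EXT_TYPE_SHIFT) & 0x1u;
}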

The PSCI implementation now defines a generic representation of the power-state
parameter. It depends on the platform port to convert the power-state parameter
(possibly encoding a composite power state) passed in a CPU_SUSPEND call to this
representation via the `validate_power_state()` plat_psci_ops handler. The
representation is an array in which each index corresponds to a power level and
each entry holds the local power state that the power domain at that level
could enter.
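
A minimal sketch of such a handler for a hypothetical platform with two power
levels (CPU at level 0, cluster at level 1) which packs one local power state
per nibble of the State-ID; the encoding and the handler name are assumptions
for illustration only.

#include <psci.h>

static int hyp_validate_power_state(unsigned int power_state,
                                    psci_power_state_t *req_state)
{
        unsigned int state_id = psci_get_pstate_id(power_state);
        int lvl;

        for (lvl = PSCI_CPU_PWR_LVL; lvl <= PLAT_MAX_PWR_LVL; lvl++) {
                plat_local_state_t state = state_id & 0xf;

                /* Reject states deeper than the deepest one the platform has */
                if (state > PLAT_MAX_OFF_STATE)
                        return PSCI_E_INVALID_PARAMS;

                req_state->pwr_domain_state[lvl] = state;
                state_id >>= 4;
        }

        return PSCI_E_SUCCESS;
}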

The meaning of the local power state values is platform defined, and may vary
between levels in a single platform. The PSCI implementation constrains the
values only so that it can classify the state as RUN, RETENTION or OFF as
required by the specification:
   * zero means RUN
   * all OFF state values at all levels must be higher than all RETENTION
     state values at all levels
   * the platform provides PLAT_MAX_RET_STATE and PLAT_MAX_OFF_STATE values
     to the framework

The PLAT_MAX_RET_STATE and PLAT_MAX_OFF_STATE macros let the PSCI
implementation determine which power domains have been requested to enter a
retention or a power-down state. Beyond this classification, the PSCI
implementation does not interpret the platform-defined local power states; the
only constraint is that PLAT_MAX_RET_STATE < PLAT_MAX_OFF_STATE.
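
Purely as an illustration, a hypothetical platform_def.h could satisfy these
constraints with the following values (0 is RUN, 1 is the only retention state,
2 and 3 are power-down states):

/* Illustrative local power state values for a hypothetical platform */
#define PLAT_LOCAL_STATE_RET    1       /* deepest retention state */
#define PLAT_LOCAL_STATE_OFF    3       /* deepest power-down state */

/* Values the PSCI framework requires the platform to provide */
#define PLAT_MAX_RET_STATE      PLAT_LOCAL_STATE_RET
#define PLAT_MAX_OFF_STATE      PLAT_LOCAL_STATE_OFF

With these values, the is_local_state_retn() and is_local_state_off() helpers
added to psci.h by this patch classify 1 as a retention state and 2 or 3 as
power-down states.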

For a power domain tree, the generic implementation maintains an array of the
local power states requested for each power domain by the cores contained
within it. During a request to place multiple power domains in a low power
state, the platform is passed, through the plat_get_target_pwr_state() API, the
array of power states requested for a given power domain; it coordinates
amongst these states to determine the target local power state for that domain.
A default, weak implementation of this API is provided in the platform layer
and returns the minimum of the requested power states to the PSCI state
coordination code.
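
As a sketch of how a platform might override this weak default, consider a
hypothetical platform with no cluster-level retention state: a non-CPU power
domain is powered off only if every CPU beneath it has requested an OFF state,
and is kept running otherwise.

#include <platform.h>
#include <psci.h>

plat_local_state_t plat_get_target_pwr_state(unsigned int lvl,
                                             const plat_local_state_t *states,
                                             unsigned int ncpu)
{
        unsigned int i;

        /* The same policy is applied at every non-CPU power level */
        (void)lvl;

        for (i = 0; i < ncpu; i++)
                if (!is_local_state_off(states[i]))
                        return PSCI_LOCAL_STATE_RUN;

        return PLAT_MAX_OFF_STATE;
}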

Finally, the plat_psci_ops power management handlers are passed the target
local power states of each affected power domain in the generic representation
described above. The platform then performs the operations specific to these
target states.
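
For instance, a hypothetical pwr_domain_off() handler could act on the
coordinated target states as follows; CLUSTER_PWR_LVL and the specific actions
are assumptions for illustration.

#include <assert.h>
#include <psci.h>

#define CLUSTER_PWR_LVL         1       /* assumed level of the cluster domain */

static void hyp_pwr_domain_off(const psci_power_state_t *target_state)
{
        /* The generic layer only calls this handler to turn the CPU off */
        assert(is_local_state_off(
                        target_state->pwr_domain_state[PSCI_CPU_PWR_LVL]));

        /* e.g. disable the GIC CPU interface, exit coherency, etc. */

        if (is_local_state_off(
                        target_state->pwr_domain_state[CLUSTER_PWR_LVL])) {
                /* Additionally power down cluster-level resources */
        }
}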

The platform power management handler for placing a power domain in a standby
state (plat_pm_ops_t.pwr_domain_standby(), renamed to
plat_psci_ops_t.cpu_standby() by this patch) should now be used only as a fast
path for placing a core power domain into a standby or retention state.
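
A minimal cpu_standby() handler, sketched under the assumption that the
platform's retention state is entered with a plain wfi, could look like this:

#include <arch_helpers.h>
#include <assert.h>
#include <psci.h>

static void hyp_cpu_standby(plat_local_state_t cpu_state)
{
        /* Only retention states are expected on this fast path */
        assert(is_local_state_retn(cpu_state));

        /* Enter standby; execution resumes here on a wake-up event */
        dsb();
        wfi();
}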

The extended State-ID power state format can be enabled by setting the
build flag PSCI_EXTENDED_STATE_ID=1; it is disabled by default.

Change-Id: I9d4123d97e179529802c1f589baaa4101759d80c
Authored by Soby Mathew on 2015-04-07 12:16:56 +01:00; committed by Achin Gupta
parent 82dcc03981
commit 8ee2498039
13 changed files with 956 additions and 468 deletions


@ -66,6 +66,9 @@ ARM_CCI_PRODUCT_ID := 400
ASM_ASSERTION := ${DEBUG}
# Build option to choose whether Trusted firmware uses Coherent memory or not.
USE_COHERENT_MEM := 1
# Flag used to choose the power state format: Extended State-ID or the original
# format.
PSCI_EXTENDED_STATE_ID := 0
# Default FIP file name
FIP_NAME := fip.bin
# By default, use the -pedantic option in the gcc command line
@ -268,6 +271,10 @@ $(eval $(call add_define,LOG_LEVEL))
$(eval $(call assert_boolean,USE_COHERENT_MEM))
$(eval $(call add_define,USE_COHERENT_MEM))
# Process PSCI_EXTENDED_STATE_ID flag
$(eval $(call assert_boolean,PSCI_EXTENDED_STATE_ID))
$(eval $(call add_define,PSCI_EXTENDED_STATE_ID))
# Process Generate CoT flags
$(eval $(call assert_boolean,GENERATE_COT))
$(eval $(call assert_boolean,CREATE_KEYS))


@ -353,6 +353,15 @@ performed.
vector address can be programmed or is fixed on the platform. It can take
either 0 (fixed) or 1 (programmable). Default is 0.
* `PSCI_EXTENDED_STATE_ID`: As per the PSCI 1.0 Specification, there are two
possible formats for the PSCI power-state parameter: the original format and
the extended State-ID format. If set to 1, this flag configures the generic
PSCI layer to use the extended format. The default value is 0, which means the
PSCI implementation uses the original power-state format by default. This flag
should be specified by the platform makefile, and it governs the return value
of the PSCI_FEATURES API for the CPU_SUSPEND SMC function ID.
#### ARM development platform specific build options
* `ARM_TSP_RAM_LOCATION`: location of the TSP binary. Options:


@ -129,6 +129,9 @@ void init_cpu_ops(void);
#define flush_cpu_data(_m) flush_dcache_range((uint64_t) \
&(_cpu_data()->_m), \
sizeof(_cpu_data()->_m))
#define inv_cpu_data(_m) inv_dcache_range((uint64_t) \
&(_cpu_data()->_m), \
sizeof(_cpu_data()->_m))
#define flush_cpu_data_by_index(_ix, _m) \
flush_dcache_range((uint64_t) \
&(_cpu_data_by_index(_ix)->_m), \


@ -97,27 +97,35 @@
* PSCI CPU_SUSPEND 'power_state' parameter specific defines
******************************************************************************/
#define PSTATE_ID_SHIFT 0
#if PSCI_EXTENDED_STATE_ID
#define PSTATE_VALID_MASK 0xB0000000
#define PSTATE_TYPE_SHIFT 30
#define PSTATE_ID_MASK 0xfffffff
#else
#define PSTATE_VALID_MASK 0xFCFE0000
#define PSTATE_TYPE_SHIFT 16
#define PSTATE_PWR_LVL_SHIFT 24
#define PSTATE_ID_MASK 0xffff
#define PSTATE_TYPE_MASK 0x1
#define PSTATE_PWR_LVL_MASK 0x3
#define PSTATE_VALID_MASK 0xFCFE0000
#define PSTATE_TYPE_STANDBY 0x0
#define PSTATE_TYPE_POWERDOWN 0x1
#define psci_get_pstate_id(pstate) (((pstate) >> PSTATE_ID_SHIFT) & \
PSTATE_ID_MASK)
#define psci_get_pstate_type(pstate) (((pstate) >> PSTATE_TYPE_SHIFT) & \
PSTATE_TYPE_MASK)
#define psci_get_pstate_pwrlvl(pstate) ((pstate >> PSTATE_PWR_LVL_SHIFT) & \
#define psci_get_pstate_pwrlvl(pstate) (((pstate) >> PSTATE_PWR_LVL_SHIFT) & \
PSTATE_PWR_LVL_MASK)
#define psci_make_powerstate(state_id, type, pwrlvl) \
(((state_id) & PSTATE_ID_MASK) << PSTATE_ID_SHIFT) |\
(((type) & PSTATE_TYPE_MASK) << PSTATE_TYPE_SHIFT) |\
(((pwrlvl) & PSTATE_PWR_LVL_MASK) << PSTATE_PWR_LVL_SHIFT)
#endif /* __PSCI_EXTENDED_STATE_ID__ */
#define PSTATE_TYPE_STANDBY 0x0
#define PSTATE_TYPE_POWERDOWN 0x1
#define PSTATE_TYPE_MASK 0x1
#define psci_get_pstate_id(pstate) (((pstate) >> PSTATE_ID_SHIFT) & \
PSTATE_ID_MASK)
#define psci_get_pstate_type(pstate) (((pstate) >> PSTATE_TYPE_SHIFT) & \
PSTATE_TYPE_MASK)
#define psci_check_power_state(pstate) ((pstate) & PSTATE_VALID_MASK)
/*******************************************************************************
* PSCI CPU_FEATURES feature flag specific defines
@ -126,6 +134,11 @@
#define FF_PSTATE_SHIFT 1
#define FF_PSTATE_ORIG 0
#define FF_PSTATE_EXTENDED 1
#if PSCI_EXTENDED_STATE_ID
#define FF_PSTATE FF_PSTATE_EXTENDED
#else
#define FF_PSTATE FF_PSTATE_ORIG
#endif
/* Features flags for CPU SUSPEND OS Initiated mode support. Bits [0:0] */
#define FF_MODE_SUPPORT_SHIFT 0
@ -152,25 +165,70 @@
#define PSCI_INVALID_MPIDR ~(0ULL)
/*******************************************************************************
* PSCI power domain state related constants.
******************************************************************************/
#define PSCI_STATE_ON 0x0
#define PSCI_STATE_OFF 0x1
#define PSCI_STATE_ON_PENDING 0x2
#define PSCI_STATE_SUSPEND 0x3
#define PSCI_INVALID_DATA -1
#define get_phys_state(x) (x != PSCI_STATE_ON ? \
PSCI_STATE_OFF : PSCI_STATE_ON)
#define psci_validate_power_state(pstate) (pstate & PSTATE_VALID_MASK)
#ifndef __ASSEMBLY__
#include <stdint.h>
#include <types.h>
/*
* These are the states reported by the PSCI_AFFINITY_INFO API for the specified
* CPU. The definitions of these states can be found in Section 5.7.1 in the
* PSCI specification (ARM DEN 0022C).
*/
typedef enum aff_info_state {
AFF_STATE_ON = 0,
AFF_STATE_OFF = 1,
AFF_STATE_ON_PENDING = 2
} aff_info_state_t;
/*
* Macro to represent invalid affinity level within PSCI.
*/
#define PSCI_INVALID_DATA -1
/*
* Type for representing the local power state at a particular level.
*/
typedef uint8_t plat_local_state_t;
/* The local state macro used to represent RUN state. */
#define PSCI_LOCAL_STATE_RUN 0
/*
* Macro to test whether the plat_local_state is RUN state
*/
#define is_local_state_run(plat_local_state) \
((plat_local_state) == PSCI_LOCAL_STATE_RUN)
/*
* Macro to test whether the plat_local_state is RETENTION state
*/
#define is_local_state_retn(plat_local_state) \
(((plat_local_state) > PSCI_LOCAL_STATE_RUN) && \
((plat_local_state) <= PLAT_MAX_RET_STATE))
/*
* Macro to test whether the plat_local_state is OFF state
*/
#define is_local_state_off(plat_local_state) \
(((plat_local_state) > PLAT_MAX_RET_STATE) && \
((plat_local_state) <= PLAT_MAX_OFF_STATE))
/*****************************************************************************
* This data structure defines the representation of the power state parameter
* for its exchange between the generic PSCI code and the platform port. For
* example, it is used by the platform port to specify the requested power
* states during a power management operation. It is used by the generic code
* to inform the platform about the target power states that each level
* should enter.
****************************************************************************/
typedef struct psci_power_state {
/*
* The pwr_domain_state[] stores the local power state at each level
* for the CPU.
*/
plat_local_state_t pwr_domain_state[PLAT_MAX_PWR_LVL + 1];
} psci_power_state_t;
/*******************************************************************************
* Structure used to store per-cpu information relevant to the PSCI service.
@ -178,8 +236,15 @@
* this information will not reside on a cache line shared with another cpu.
******************************************************************************/
typedef struct psci_cpu_data {
uint32_t power_state; /* The power state from CPU_SUSPEND */
unsigned char psci_state; /* The state of this CPU as seen by PSCI */
/* State as seen by PSCI Affinity Info API */
aff_info_state_t aff_info_state;
/*
* Highest power level which takes part in a power management
* operation.
*/
int8_t target_pwrlvl;
/* The local power state of this CPU */
plat_local_state_t local_state;
#if !USE_COHERENT_MEM
bakery_info_t pcpu_bakery_info[PSCI_NUM_NON_CPU_PWR_DOMAINS];
#endif
@ -189,22 +254,23 @@ typedef struct psci_cpu_data {
* Structure populated by platform specific code to export routines which
* perform common low level pm functions
******************************************************************************/
typedef struct plat_pm_ops {
void (*pwr_domain_standby)(unsigned int power_state);
int (*pwr_domain_on)(unsigned long mpidr,
unsigned long sec_entrypoint,
unsigned int pwrlvl);
void (*pwr_domain_off)(unsigned int pwrlvl);
typedef struct plat_psci_ops {
void (*cpu_standby)(plat_local_state_t cpu_state);
int (*pwr_domain_on)(u_register_t mpidr,
unsigned long sec_entrypoint);
void (*pwr_domain_off)(const psci_power_state_t *target_state);
void (*pwr_domain_suspend)(unsigned long sec_entrypoint,
unsigned int pwrlvl);
void (*pwr_domain_on_finish)(unsigned int pwrlvl);
void (*pwr_domain_suspend_finish)(unsigned int pwrlvl);
const psci_power_state_t *target_state);
void (*pwr_domain_on_finish)(const psci_power_state_t *target_state);
void (*pwr_domain_suspend_finish)(const psci_power_state_t *target_state);
void (*system_off)(void) __dead2;
void (*system_reset)(void) __dead2;
int (*validate_power_state)(unsigned int power_state);
int (*validate_power_state)(unsigned int power_state,
psci_power_state_t *req_state);
int (*validate_ns_entrypoint)(unsigned long ns_entrypoint);
unsigned int (*get_sys_suspend_power_state)(void);
} plat_pm_ops_t;
void (*get_sys_suspend_power_state)(
psci_power_state_t *req_state);
} plat_psci_ops_t;
/*******************************************************************************
* Optional structure populated by the Secure Payload Dispatcher to be given a
@ -239,9 +305,6 @@ void __dead2 psci_power_down_wfi(void);
void psci_cpu_on_finish_entry(void);
void psci_cpu_suspend_finish_entry(void);
void psci_register_spd_pm_hook(const spd_pm_ops_t *);
int psci_get_suspend_stateid_by_idx(unsigned long);
int psci_get_suspend_stateid(void);
int psci_get_suspend_pwrlvl(void);
uint64_t psci_smc_handler(uint32_t smc_fid,
uint64_t x1,


@ -32,12 +32,11 @@
#define __PLATFORM_H__
#include <stdint.h>
#include <psci.h>
/*******************************************************************************
* Forward declarations
******************************************************************************/
struct plat_pm_ops;
struct meminfo;
struct image_info;
struct entry_point_info;
@ -182,9 +181,16 @@ struct entry_point_info *bl31_plat_get_next_image_ep_info(uint32_t type);
/*******************************************************************************
* Mandatory PSCI functions (BL3-1)
******************************************************************************/
int platform_setup_pm(const struct plat_pm_ops **);
int plat_setup_psci_ops(const struct plat_psci_ops **);
const unsigned char *plat_get_power_domain_tree_desc(void);
/*******************************************************************************
* Optional PSCI functions (BL3-1).
******************************************************************************/
plat_local_state_t plat_get_target_pwr_state(unsigned int lvl,
const plat_local_state_t *states,
unsigned int ncpu);
/*******************************************************************************
* Optional BL3-1 functions (may be overridden)
******************************************************************************/


@ -0,0 +1,63 @@
/*
* Copyright (c) 2015, ARM Limited and Contributors. All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions are met:
*
* Redistributions of source code must retain the above copyright notice, this
* list of conditions and the following disclaimer.
*
* Redistributions in binary form must reproduce the above copyright notice,
* this list of conditions and the following disclaimer in the documentation
* and/or other materials provided with the distribution.
*
* Neither the name of ARM nor the names of its contributors may be used
* to endorse or promote products derived from this software without specific
* prior written permission.
*
* THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
* AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
* IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
* ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE
* LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
* CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
* SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
* INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
* CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
* ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
* POSSIBILITY OF SUCH DAMAGE.
*/
#include <arch.h>
#include <assert.h>
#include <platform.h>
#include <psci.h>
/*
* The PSCI generic code uses this API to let the platform participate in state
* coordination during a power management operation. It compares the platform
* specific local power states requested by each cpu for a given power domain
* and returns the coordinated target power state that the domain should
* enter. A platform assigns a number to a local power state. This default
* implementation assumes that the platform assigns these numbers in order of
* increasing depth of the power state i.e. for two power states X & Y, if X < Y
* then X represents a shallower power state than Y. As a result, the
* coordinated target local power state for a power domain will be the minimum
* of the requested local power states.
*/
plat_local_state_t plat_get_target_pwr_state(unsigned int lvl,
const plat_local_state_t *states,
unsigned int ncpu)
{
plat_local_state_t target = PLAT_MAX_OFF_STATE, temp;
assert(ncpu);
do {
temp = *states++;
if (temp < target)
target = temp;
} while (--ncpu);
return target;
}


@ -45,6 +45,26 @@
*/
const spd_pm_ops_t *psci_spd_pm;
/*
* PSCI requested local power state map. This array is used to store the local
* power states requested by a CPU for power levels from level 1 to
* PLAT_MAX_PWR_LVL. It does not store the requested local power state for power
* level 0 (PSCI_CPU_PWR_LVL) as the requested and the target power state for a
* CPU are the same.
*
* During state coordination, the platform is passed an array containing the
* local states requested for a particular non cpu power domain by each cpu
* within the domain.
*
* TODO: Dense packing of the requested states will cause cache thrashing
* when multiple power domains write to it. If we allocate the requested
* states at each power level in a cache-line aligned per-domain memory,
* the cache thrashing can be avoided.
*/
static plat_local_state_t
psci_req_local_pwr_states[PLAT_MAX_PWR_LVL][PLATFORM_CORE_COUNT];
/*******************************************************************************
* Arrays that hold the platform's power domain tree information for state
* management of power domains.
@ -63,39 +83,82 @@ cpu_pd_node_t psci_cpu_pd_nodes[PLATFORM_CORE_COUNT];
/*******************************************************************************
* Pointer to functions exported by the platform to complete power mgmt. ops
******************************************************************************/
const plat_pm_ops_t *psci_plat_pm_ops;
const plat_psci_ops_t *psci_plat_pm_ops;
/*******************************************************************************
/******************************************************************************
* Check that the maximum power level supported by the platform makes sense
* ****************************************************************************/
*****************************************************************************/
CASSERT(PLAT_MAX_PWR_LVL <= PSCI_MAX_PWR_LVL && \
PLAT_MAX_PWR_LVL >= PSCI_CPU_PWR_LVL, \
assert_platform_max_pwrlvl_check);
/*******************************************************************************
* This function is passed a cpu_index and the highest level in the topology
* tree. It iterates through the nodes to find the highest power level at which
* a domain is physically powered off.
******************************************************************************/
uint32_t psci_find_max_phys_off_pwrlvl(uint32_t end_pwrlvl,
unsigned int cpu_idx)
/*
* The plat_local_state used by the platform is one of these types: RUN,
* RETENTION and OFF. The platform can define further sub-states for each type
* apart from RUN. This categorization is done to verify the sanity of the
* psci_power_state passed by the platform and to print debug information. The
* categorization is done on the basis of the following conditions:
*
* 1. If (plat_local_state == 0) then the category is STATE_TYPE_RUN.
*
* 2. If (0 < plat_local_state <= PLAT_MAX_RET_STATE), then the category is
* STATE_TYPE_RETN.
*
* 3. If (plat_local_state > PLAT_MAX_RET_STATE), then the category is
* STATE_TYPE_OFF.
*/
typedef enum plat_local_state_type {
STATE_TYPE_RUN = 0,
STATE_TYPE_RETN,
STATE_TYPE_OFF
} plat_local_state_type_t;
/* The macro used to categorize plat_local_state. */
#define find_local_state_type(plat_local_state) \
((plat_local_state) ? ((plat_local_state > PLAT_MAX_RET_STATE) \
? STATE_TYPE_OFF : STATE_TYPE_RETN) \
: STATE_TYPE_RUN)
/******************************************************************************
* Check that the maximum retention level supported by the platform is less
* than the maximum off level.
*****************************************************************************/
CASSERT(PLAT_MAX_RET_STATE < PLAT_MAX_OFF_STATE, \
assert_platform_max_off_and_retn_state_check);
/******************************************************************************
* This function ensures that the power state parameter in a CPU_SUSPEND request
* is valid. If so, it returns the requested states for each power level.
*****************************************************************************/
int psci_validate_power_state(unsigned int power_state,
psci_power_state_t *state_info)
{
int max_pwrlvl, level;
unsigned int parent_idx = psci_cpu_pd_nodes[cpu_idx].parent_node;
/* Check SBZ bits in power state are zero */
if (psci_check_power_state(power_state))
return PSCI_E_INVALID_PARAMS;
if (psci_get_phys_state(cpu_idx, PSCI_CPU_PWR_LVL) != PSCI_STATE_OFF)
return PSCI_INVALID_DATA;
assert(psci_plat_pm_ops->validate_power_state);
max_pwrlvl = PSCI_CPU_PWR_LVL;
/* Validate the power_state using platform pm_ops */
return psci_plat_pm_ops->validate_power_state(power_state, state_info);
}
for (level = PSCI_CPU_PWR_LVL + 1; level <= end_pwrlvl; level++) {
if (psci_get_phys_state(parent_idx, level) == PSCI_STATE_OFF)
max_pwrlvl = level;
/******************************************************************************
* This function retrieves the `psci_power_state_t` for system suspend from
* the platform.
*****************************************************************************/
void psci_query_sys_suspend_pwrstate(psci_power_state_t *state_info)
{
/*
* Assert that the required pm_ops hook is implemented to ensure that
* the capability detected during psci_setup() is valid.
*/
assert(psci_plat_pm_ops->get_sys_suspend_power_state);
parent_idx = psci_non_cpu_pd_nodes[parent_idx].parent_node;
}
return max_pwrlvl;
/*
* Query the platform for the power_state required for system suspend
*/
psci_plat_pm_ops->get_sys_suspend_power_state(state_info);
}
/*******************************************************************************
@ -106,17 +169,15 @@ uint32_t psci_find_max_phys_off_pwrlvl(uint32_t end_pwrlvl,
******************************************************************************/
unsigned int psci_is_last_on_cpu(void)
{
unsigned long mpidr = read_mpidr_el1() & MPIDR_AFFINITY_MASK;
unsigned int i;
unsigned int cpu_idx, my_idx = plat_my_core_pos();
for (i = 0; i < PLATFORM_CORE_COUNT; i++) {
if (psci_cpu_pd_nodes[i].mpidr == mpidr) {
assert(psci_get_state(i, PSCI_CPU_PWR_LVL)
== PSCI_STATE_ON);
for (cpu_idx = 0; cpu_idx < PLATFORM_CORE_COUNT; cpu_idx++) {
if (cpu_idx == my_idx) {
assert(psci_get_aff_info_state() == AFF_STATE_ON);
continue;
}
if (psci_get_state(i, PSCI_CPU_PWR_LVL) != PSCI_STATE_OFF)
if (psci_get_aff_info_state_by_idx(cpu_idx) != AFF_STATE_OFF)
return 0;
}
@ -132,17 +193,6 @@ int get_power_on_target_pwrlvl(void)
{
int pwrlvl;
#if DEBUG
unsigned int state;
/*
* Sanity check the state of the cpu. It should be either suspend or "on
* pending"
*/
state = psci_get_state(plat_my_core_pos(), PSCI_CPU_PWR_LVL);
assert(state == PSCI_STATE_SUSPEND || state == PSCI_STATE_ON_PENDING);
#endif
/*
* Assume that this cpu was suspended and retrieve its target power
* level. If it is invalid then it could only have been turned off
@ -155,6 +205,119 @@ int get_power_on_target_pwrlvl(void)
return pwrlvl;
}
/******************************************************************************
* Helper function to update the requested local power state array. This array
* does not store the requested state for the CPU power level. Hence an
* assertion is added to prevent us from accessing the wrong index.
*****************************************************************************/
static void psci_set_req_local_pwr_state(unsigned int pwrlvl,
unsigned int cpu_idx,
plat_local_state_t req_pwr_state)
{
assert(pwrlvl > PSCI_CPU_PWR_LVL);
psci_req_local_pwr_states[pwrlvl - 1][cpu_idx] = req_pwr_state;
}
/******************************************************************************
* This function initializes the psci_req_local_pwr_states.
*****************************************************************************/
void psci_init_req_local_pwr_states(void)
{
/* Initialize the requested state of all non CPU power domains as OFF */
memset(&psci_req_local_pwr_states, PLAT_MAX_OFF_STATE,
sizeof(psci_req_local_pwr_states));
}
/******************************************************************************
* Helper function to return a reference to an array containing the local power
* states requested by each cpu for a power domain at 'pwrlvl'. The size of the
* array will be the number of cpu power domains of which this power domain is
* an ancestor. These requested states will be used to determine a suitable
* target state for this power domain during psci state coordination. An
* assertion is added to prevent us from accessing the CPU power level.
*****************************************************************************/
static plat_local_state_t *psci_get_req_local_pwr_states(int pwrlvl,
int cpu_idx)
{
assert(pwrlvl > PSCI_CPU_PWR_LVL);
return &psci_req_local_pwr_states[pwrlvl - 1][cpu_idx];
}
/******************************************************************************
* Helper function to return the current local power state of each power domain
* from the current cpu power domain to its ancestor at the 'end_pwrlvl'. This
* function will be called after a cpu is powered on to find the local state
* each power domain has emerged from.
*****************************************************************************/
static void psci_get_target_local_pwr_states(uint32_t end_pwrlvl,
psci_power_state_t *target_state)
{
int lvl;
unsigned int parent_idx;
plat_local_state_t *pd_state = target_state->pwr_domain_state;
pd_state[PSCI_CPU_PWR_LVL] = psci_get_cpu_local_state();
parent_idx = psci_cpu_pd_nodes[plat_my_core_pos()].parent_node;
/* Copy the local power state from node to state_info */
for (lvl = PSCI_CPU_PWR_LVL + 1; lvl <= end_pwrlvl; lvl++) {
#if !USE_COHERENT_MEM
/*
* If using normal memory for psci_non_cpu_pd_nodes, we need
* to flush before reading the local power state as another
* cpu in the same power domain could have updated it and this
* code runs before caches are enabled.
*/
flush_dcache_range(
(uint64_t)&psci_non_cpu_pd_nodes[parent_idx],
sizeof(psci_non_cpu_pd_nodes[parent_idx]));
#endif
pd_state[lvl] = psci_non_cpu_pd_nodes[parent_idx].local_state;
parent_idx = psci_non_cpu_pd_nodes[parent_idx].parent_node;
}
/* Set the higher levels to RUN */
for (; lvl <= PLAT_MAX_PWR_LVL; lvl++)
target_state->pwr_domain_state[lvl] = PSCI_LOCAL_STATE_RUN;
}
/******************************************************************************
* Helper function to set the target local power state that each power domain
* from the current cpu power domain to its ancestor at the 'end_pwrlvl' will
* enter. This function will be called after coordination of requested power
* states has been done for each power level.
*****************************************************************************/
static void psci_set_target_local_pwr_states(uint32_t end_pwrlvl,
const psci_power_state_t *target_state)
{
int lvl;
unsigned int parent_idx;
const plat_local_state_t *pd_state = target_state->pwr_domain_state;
psci_set_cpu_local_state(pd_state[PSCI_CPU_PWR_LVL]);
/*
* Need to flush as local_state will be accessed with Data Cache
* disabled during power on
*/
flush_cpu_data(psci_svc_cpu_data.local_state);
parent_idx = psci_cpu_pd_nodes[plat_my_core_pos()].parent_node;
/* Copy the local_state from state_info */
for (lvl = 1; lvl <= end_pwrlvl; lvl++) {
psci_non_cpu_pd_nodes[parent_idx].local_state = pd_state[lvl];
#if !USE_COHERENT_MEM
flush_dcache_range(
(uint64_t)&psci_non_cpu_pd_nodes[parent_idx],
sizeof(psci_non_cpu_pd_nodes[parent_idx]));
#endif
parent_idx = psci_non_cpu_pd_nodes[parent_idx].parent_node;
}
}
/*******************************************************************************
* PSCI helper function to get the parent nodes corresponding to a cpu_index.
******************************************************************************/
@ -171,23 +334,207 @@ void psci_get_parent_pwr_domain_nodes(unsigned int cpu_idx,
}
}
/*******************************************************************************
* This function is passed a cpu_index and the highest level in the topology
* tree and the state which each node should transition to. It updates the
* state of each node between the specified power levels.
******************************************************************************/
void psci_do_state_coordination(int end_pwrlvl,
unsigned int cpu_idx,
uint32_t state)
/******************************************************************************
* This function is invoked post CPU power up and initialization. It sets the
* affinity info state, target power state and requested power state for the
* current CPU and all its ancestor power domains to RUN.
*****************************************************************************/
void psci_set_pwr_domains_to_run(uint32_t end_pwrlvl)
{
int level;
unsigned int parent_idx = psci_cpu_pd_nodes[cpu_idx].parent_node;
psci_set_state(cpu_idx, state, PSCI_CPU_PWR_LVL);
int lvl;
unsigned int parent_idx, cpu_idx = plat_my_core_pos();
parent_idx = psci_cpu_pd_nodes[cpu_idx].parent_node;
for (level = PSCI_CPU_PWR_LVL + 1; level <= end_pwrlvl; level++) {
psci_set_state(parent_idx, state, level);
/* Reset the local_state to RUN for the non cpu power domains. */
for (lvl = PSCI_CPU_PWR_LVL + 1; lvl <= end_pwrlvl; lvl++) {
psci_non_cpu_pd_nodes[parent_idx].local_state =
PSCI_LOCAL_STATE_RUN;
#if !USE_COHERENT_MEM
flush_dcache_range(
(uint64_t)&psci_non_cpu_pd_nodes[parent_idx],
sizeof(psci_non_cpu_pd_nodes[parent_idx]));
#endif
psci_set_req_local_pwr_state(lvl,
cpu_idx,
PSCI_LOCAL_STATE_RUN);
parent_idx = psci_non_cpu_pd_nodes[parent_idx].parent_node;
}
/* Set the affinity info state to ON */
psci_set_aff_info_state(AFF_STATE_ON);
psci_set_cpu_local_state(PSCI_LOCAL_STATE_RUN);
flush_cpu_data(psci_svc_cpu_data);
}
/******************************************************************************
* This function is passed the local power states requested for each power
* domain (state_info) between the current CPU domain and its ancestors until
* the target power level (end_pwrlvl). It updates the array of requested power
* states with this information.
*
* Then, for each level (apart from the CPU level) until the 'end_pwrlvl', it
* retrieves the states requested by all the cpus of which the power domain at
* that level is an ancestor. It passes this information to the platform to
* coordinate and return the target power state. If the target state for a level
* is RUN then subsequent levels are not considered. At the CPU level, state
* coordination is not required. Hence, the requested and the target states are
* the same.
*
* The 'state_info' is updated with the target state for each level between the
* CPU and the 'end_pwrlvl' and returned to the caller.
*
* This function will only be invoked with data cache enabled and while
* powering down a core.
*****************************************************************************/
void psci_do_state_coordination(int end_pwrlvl, psci_power_state_t *state_info)
{
unsigned int lvl, parent_idx, cpu_idx = plat_my_core_pos();
unsigned int start_idx, ncpus;
plat_local_state_t target_state, *req_states;
parent_idx = psci_cpu_pd_nodes[cpu_idx].parent_node;
/* For level 0, the requested state will be equivalent
to target state */
for (lvl = PSCI_CPU_PWR_LVL + 1; lvl <= end_pwrlvl; lvl++) {
/* First update the requested power state */
psci_set_req_local_pwr_state(lvl, cpu_idx,
state_info->pwr_domain_state[lvl]);
/* Get the requested power states for this power level */
start_idx = psci_non_cpu_pd_nodes[parent_idx].cpu_start_idx;
req_states = psci_get_req_local_pwr_states(lvl, start_idx);
/*
* Let the platform coordinate amongst the requested states at
* this power level and return the target local power state.
*/
ncpus = psci_non_cpu_pd_nodes[parent_idx].ncpus;
target_state = plat_get_target_pwr_state(lvl,
req_states,
ncpus);
state_info->pwr_domain_state[lvl] = target_state;
/* Break early if the negotiated target power state is RUN */
if (is_local_state_run(state_info->pwr_domain_state[lvl]))
break;
parent_idx = psci_non_cpu_pd_nodes[parent_idx].parent_node;
}
/*
* This is for cases when we break out of the above loop early because
* the target power state is RUN at a power level < end_pwlvl.
* We update the requested power state from state_info and then
* set the target state as RUN.
*/
for (lvl = lvl + 1; lvl <= end_pwrlvl; lvl++) {
psci_set_req_local_pwr_state(lvl, cpu_idx,
state_info->pwr_domain_state[lvl]);
state_info->pwr_domain_state[lvl] = PSCI_LOCAL_STATE_RUN;
}
/* Update the target state in the power domain nodes */
psci_set_target_local_pwr_states(end_pwrlvl, state_info);
}
/******************************************************************************
* This function validates a suspend request by making sure that if a standby
* state is requested then no power level is turned off and the highest power
* level is placed in a standby/retention state.
*
* It also ensures that the state which level X will enter is not shallower than
* the state which level X + 1 will enter.
*
* This validation will be enabled only for DEBUG builds as the platform is
* expected to perform these validations as well.
*****************************************************************************/
int psci_validate_suspend_req(const psci_power_state_t *state_info,
unsigned int is_power_down_state)
{
unsigned int max_off_lvl, target_lvl, max_retn_lvl;
plat_local_state_t state;
plat_local_state_type_t req_state_type, deepest_state_type;
int i;
/* Find the target suspend power level */
target_lvl = psci_find_target_suspend_lvl(state_info);
if (target_lvl == PSCI_INVALID_DATA)
return PSCI_E_INVALID_PARAMS;
/* All power domain levels are in a RUN state to begin with */
deepest_state_type = STATE_TYPE_RUN;
for (i = target_lvl; i >= PSCI_CPU_PWR_LVL; i--) {
state = state_info->pwr_domain_state[i];
req_state_type = find_local_state_type(state);
/*
* While traversing from the highest power level to the lowest,
* the state requested for lower levels has to be the same or
* deeper i.e. equal to or greater than the state at the higher
* levels. If this condition is true, then the requested state
* becomes the deepest state encountered so far.
*/
if (req_state_type < deepest_state_type)
return PSCI_E_INVALID_PARAMS;
deepest_state_type = req_state_type;
}
/* Find the highest off power level */
max_off_lvl = psci_find_max_off_lvl(state_info);
/* The target_lvl is either equal to the max_off_lvl or max_retn_lvl */
max_retn_lvl = PSCI_INVALID_DATA;
if (target_lvl != max_off_lvl)
max_retn_lvl = target_lvl;
/*
* If this is not a request for a power down state then max off level
* has to be invalid and max retention level has to be a valid power
* level.
*/
if (!is_power_down_state && (max_off_lvl != PSCI_INVALID_DATA ||
max_retn_lvl == PSCI_INVALID_DATA))
return PSCI_E_INVALID_PARAMS;
return PSCI_E_SUCCESS;
}
/******************************************************************************
* This function finds the highest power level which will be powered down
* amongst all the power levels specified in the 'state_info' structure
*****************************************************************************/
unsigned int psci_find_max_off_lvl(const psci_power_state_t *state_info)
{
int i;
for (i = PLAT_MAX_PWR_LVL; i >= PSCI_CPU_PWR_LVL; i--) {
if (is_local_state_off(state_info->pwr_domain_state[i]))
return i;
}
return PSCI_INVALID_DATA;
}
/******************************************************************************
* This functions finds the level of the highest power domain which will be
* placed in a low power state during a suspend operation.
*****************************************************************************/
unsigned int psci_find_target_suspend_lvl(const psci_power_state_t *state_info)
{
int i;
for (i = PLAT_MAX_PWR_LVL; i >= PSCI_CPU_PWR_LVL; i--) {
if (!is_local_state_run(state_info->pwr_domain_state[i]))
return i;
}
return PSCI_INVALID_DATA;
}
/*******************************************************************************
@ -295,97 +642,6 @@ int psci_get_ns_ep_info(entry_point_info_t *ep,
return PSCI_E_SUCCESS;
}
/*******************************************************************************
* This function takes an index and level of a power domain node in the topology
* tree and returns its state. State of a non-leaf node needs to be calculated.
******************************************************************************/
unsigned short psci_get_state(unsigned int idx,
int level)
{
/* A cpu node just contains the state which can be directly returned */
if (level == PSCI_CPU_PWR_LVL) {
flush_cpu_data_by_index(idx, psci_svc_cpu_data.psci_state);
return get_cpu_data_by_index(idx, psci_svc_cpu_data.psci_state);
}
#if !USE_COHERENT_MEM
flush_dcache_range((uint64_t) &psci_non_cpu_pd_nodes[idx],
sizeof(psci_non_cpu_pd_nodes[idx]));
#endif
/*
* For a power level higher than a cpu, the state has to be
* calculated. It depends upon the value of the reference count
* which is managed by each node at the next lower power level
* e.g. for a cluster, each cpu increments/decrements the reference
* count. If the reference count is 0 then the power level is
* OFF else ON.
*/
if (psci_non_cpu_pd_nodes[idx].ref_count)
return PSCI_STATE_ON;
else
return PSCI_STATE_OFF;
}
/*******************************************************************************
* This function takes an index and level of a power domain node in the topology
* tree and a target state. State of a non-leaf node needs to be converted to
* a reference count. State of a leaf node can be set directly.
******************************************************************************/
void psci_set_state(unsigned int idx,
unsigned short state,
int level)
{
/*
* For a power level higher than a cpu, the state is used
* to decide whether the reference count is incremented or
* decremented. Entry into the ON_PENDING state does not have
* effect.
*/
if (level > PSCI_CPU_PWR_LVL) {
switch (state) {
case PSCI_STATE_ON:
psci_non_cpu_pd_nodes[idx].ref_count++;
break;
case PSCI_STATE_OFF:
case PSCI_STATE_SUSPEND:
psci_non_cpu_pd_nodes[idx].ref_count--;
break;
case PSCI_STATE_ON_PENDING:
/*
* A power level higher than a cpu will not undergo
* a state change when it is about to be turned on
*/
return;
default:
assert(0);
#if !USE_COHERENT_MEM
flush_dcache_range((uint64_t) &psci_non_cpu_pd_nodes[idx],
sizeof(psci_non_cpu_pd_nodes[idx]));
#endif
}
} else {
set_cpu_data_by_index(idx, psci_svc_cpu_data.psci_state, state);
flush_cpu_data_by_index(idx, psci_svc_cpu_data.psci_state);
}
}
/*******************************************************************************
* A power domain could be on, on_pending, suspended or off. These are the
* logical states it can be in. Physically either it is off or on. When it is in
* the state on_pending then it is about to be turned on. It is not possible to
* tell whether that's actually happened or not. So we err on the side of
* caution & treat the power domain as being turned off.
******************************************************************************/
unsigned short psci_get_phys_state(unsigned int idx,
int level)
{
unsigned int state;
state = psci_get_state(idx, level);
return get_phys_state(state);
}
/*******************************************************************************
* Generic handler which is called when a cpu is physically powered on. It
* traverses the node information and finds the highest power level powered
@ -396,34 +652,31 @@ unsigned short psci_get_phys_state(unsigned int idx,
* coherency at the interconnect level in addition to gic cpu interface.
******************************************************************************/
void psci_power_up_finish(int end_pwrlvl,
pwrlvl_power_on_finisher_t pon_handler)
pwrlvl_power_on_finisher_t power_on_handler)
{
unsigned int cpu_idx = plat_my_core_pos();
unsigned int max_phys_off_pwrlvl;
psci_power_state_t state_info = { {PSCI_LOCAL_STATE_RUN} };
/*
* This function acquires the lock corresponding to each power
* level so that by the time all locks are taken, the system topology
* is snapshot and state management can be done safely.
* This function acquires the lock corresponding to each power level so
* that by the time all locks are taken, the system topology is snapshot
* and state management can be done safely.
*/
psci_acquire_pwr_domain_locks(end_pwrlvl,
cpu_idx);
max_phys_off_pwrlvl = psci_find_max_phys_off_pwrlvl(end_pwrlvl,
cpu_idx);
assert(max_phys_off_pwrlvl != PSCI_INVALID_DATA);
/* Perform generic, architecture and platform specific handling */
pon_handler(cpu_idx, max_phys_off_pwrlvl);
psci_get_target_local_pwr_states(end_pwrlvl, &state_info);
/*
* This function updates the state of each power instance
* corresponding to the cpu index in the range of power levels
* specified.
* Perform generic, architecture and platform specific handling.
*/
psci_do_state_coordination(end_pwrlvl,
cpu_idx,
PSCI_STATE_ON);
power_on_handler(cpu_idx, &state_info);
/*
* Set the requested and target state of this CPU and all the higher
* power domains which are ancestors of this CPU to run.
*/
psci_set_pwr_domains_to_run(end_pwrlvl);
/*
* This loop releases the lock corresponding to each power level
@ -481,31 +734,39 @@ int psci_spd_migrate_info(uint64_t *mpidr)
void psci_print_power_domain_map(void)
{
#if LOG_LEVEL >= LOG_LEVEL_INFO
unsigned int idx, state;
unsigned int idx;
plat_local_state_t state;
plat_local_state_type_t state_type;
/* This array maps to the PSCI_STATE_X definitions in psci.h */
static const char *psci_state_str[] = {
static const char *psci_state_type_str[] = {
"ON",
"RETENTION",
"OFF",
"ON_PENDING",
"SUSPEND"
};
INFO("PSCI Power Domain Map:\n");
for (idx = 0; idx < (PSCI_NUM_PWR_DOMAINS - PLATFORM_CORE_COUNT); idx++) {
state = psci_get_state(idx, psci_non_cpu_pd_nodes[idx].level);
INFO(" Domain Node : Level %u, parent_node %d, State %s\n",
for (idx = 0; idx < (PSCI_NUM_PWR_DOMAINS - PLATFORM_CORE_COUNT);
idx++) {
state_type = find_local_state_type(
psci_non_cpu_pd_nodes[idx].local_state);
INFO(" Domain Node : Level %u, parent_node %d,"
" State %s (0x%x)\n",
psci_non_cpu_pd_nodes[idx].level,
psci_non_cpu_pd_nodes[idx].parent_node,
psci_state_str[state]);
psci_state_type_str[state_type],
psci_non_cpu_pd_nodes[idx].local_state);
}
for (idx = 0; idx < PLATFORM_CORE_COUNT; idx++) {
state = psci_get_state(idx, PSCI_CPU_PWR_LVL);
INFO(" CPU Node : MPID 0x%lx, parent_node %d, State %s\n",
state = psci_get_cpu_local_state_by_idx(idx);
state_type = find_local_state_type(state);
INFO(" CPU Node : MPID 0x%lx, parent_node %d,"
" State %s (0x%x)\n",
psci_cpu_pd_nodes[idx].mpidr,
psci_cpu_pd_nodes[idx].parent_node,
psci_state_str[state]);
psci_state_type_str[state_type],
psci_get_cpu_local_state_by_idx(idx));
}
#endif
}


@ -35,6 +35,7 @@
#include <platform.h>
#include <runtime_svc.h>
#include <std_svc.h>
#include <string.h>
#include "psci_private.h"
/*******************************************************************************
@ -93,72 +94,82 @@ int psci_cpu_suspend(unsigned int power_state,
unsigned long context_id)
{
int rc;
unsigned int target_pwrlvl, pstate_type;
unsigned int target_pwrlvl, is_power_down_state;
entry_point_info_t ep;
psci_power_state_t state_info = { {PSCI_LOCAL_STATE_RUN} };
plat_local_state_t cpu_pd_state;
/* Check SBZ bits in power state are zero */
if (psci_validate_power_state(power_state))
return PSCI_E_INVALID_PARAMS;
/* Sanity check the requested state */
target_pwrlvl = psci_get_pstate_pwrlvl(power_state);
if (target_pwrlvl > PLAT_MAX_PWR_LVL)
return PSCI_E_INVALID_PARAMS;
/* Validate the power_state using platform pm_ops */
if (psci_plat_pm_ops->validate_power_state) {
rc = psci_plat_pm_ops->validate_power_state(power_state);
if (rc != PSCI_E_SUCCESS) {
assert(rc == PSCI_E_INVALID_PARAMS);
return PSCI_E_INVALID_PARAMS;
}
/* Validate the power_state parameter */
rc = psci_validate_power_state(power_state, &state_info);
if (rc != PSCI_E_SUCCESS) {
assert(rc == PSCI_E_INVALID_PARAMS);
return rc;
}
/* Validate the entrypoint using platform pm_ops */
if (psci_plat_pm_ops->validate_ns_entrypoint) {
rc = psci_plat_pm_ops->validate_ns_entrypoint(entrypoint);
if (rc != PSCI_E_SUCCESS) {
assert(rc == PSCI_E_INVALID_PARAMS);
return PSCI_E_INVALID_PARAMS;
}
}
/* Determine the 'state type' in the 'power_state' parameter */
pstate_type = psci_get_pstate_type(power_state);
/*
* Ensure that we have a platform specific handler for entering
* a standby state.
* Get the value of the state type bit from the power state parameter.
*/
if (pstate_type == PSTATE_TYPE_STANDBY) {
if (!psci_plat_pm_ops->pwr_domain_standby)
is_power_down_state = psci_get_pstate_type(power_state);
/* Sanity check the requested suspend levels */
assert (psci_validate_suspend_req(&state_info, is_power_down_state)
== PSCI_E_SUCCESS);
target_pwrlvl = psci_find_target_suspend_lvl(&state_info);
/* Fast path for CPU standby.*/
if (is_cpu_standby_req(is_power_down_state, target_pwrlvl)) {
if (!psci_plat_pm_ops->cpu_standby)
return PSCI_E_INVALID_PARAMS;
psci_plat_pm_ops->pwr_domain_standby(power_state);
/*
* Set the state of the CPU power domain to the platform
* specific retention state and enter the standby state.
*/
cpu_pd_state = state_info.pwr_domain_state[PSCI_CPU_PWR_LVL];
psci_set_cpu_local_state(cpu_pd_state);
psci_plat_pm_ops->cpu_standby(cpu_pd_state);
/* Upon exit from standby, set the state back to RUN. */
psci_set_cpu_local_state(PSCI_LOCAL_STATE_RUN);
return PSCI_E_SUCCESS;
}
/*
* Verify and derive the re-entry information for
* the non-secure world from the non-secure state from
* where this call originated.
* If a power down state has been requested, we need to verify entry
* point and program entry information.
*/
rc = psci_get_ns_ep_info(&ep, entrypoint, context_id);
if (rc != PSCI_E_SUCCESS)
return rc;
if (is_power_down_state) {
if (psci_plat_pm_ops->validate_ns_entrypoint) {
rc = psci_plat_pm_ops->validate_ns_entrypoint(entrypoint);
if (rc != PSCI_E_SUCCESS) {
assert(rc == PSCI_E_INVALID_PARAMS);
return rc;
}
}
/* Save PSCI power state parameter for the core in suspend context */
psci_set_suspend_power_state(power_state);
/*
* Verify and derive the re-entry information for
* the non-secure world from the non-secure state from
* where this call originated.
*/
rc = psci_get_ns_ep_info(&ep, entrypoint, context_id);
if (rc != PSCI_E_SUCCESS)
return rc;
}
/*
* Do what is needed to enter the power down state. Upon success,
* enter the final wfi which will power down this CPU.
* enter the final wfi which will power down this CPU. This function
* might return if the power down was abandoned for any reason, e.g.
* arrival of an interrupt
*/
psci_cpu_suspend_start(&ep,
target_pwrlvl);
target_pwrlvl,
&state_info,
is_power_down_state);
/* Reset PSCI power state parameter for the core. */
psci_set_suspend_power_state(PSCI_INVALID_DATA);
return PSCI_E_SUCCESS;
}
@ -166,7 +177,7 @@ int psci_system_suspend(unsigned long entrypoint,
unsigned long context_id)
{
int rc;
unsigned int power_state;
psci_power_state_t state_info;
entry_point_info_t ep;
/* Validate the entrypoint using platform pm_ops */
@ -174,7 +185,7 @@ int psci_system_suspend(unsigned long entrypoint,
rc = psci_plat_pm_ops->validate_ns_entrypoint(entrypoint);
if (rc != PSCI_E_SUCCESS) {
assert(rc == PSCI_E_INVALID_PARAMS);
return PSCI_E_INVALID_PARAMS;
return rc;
}
}
@ -191,28 +202,25 @@ int psci_system_suspend(unsigned long entrypoint,
if (rc != PSCI_E_SUCCESS)
return rc;
/*
* Assert that the required pm_ops hook is implemented to ensure that
* the capability detected during psci_setup() is valid.
*/
assert(psci_plat_pm_ops->get_sys_suspend_power_state);
/* Query the psci_power_state for system suspend */
psci_query_sys_suspend_pwrstate(&state_info);
/* Ensure that the psci_power_state makes sense */
assert(psci_find_target_suspend_lvl(&state_info) == PLAT_MAX_PWR_LVL);
assert(psci_validate_suspend_req(&state_info, PSTATE_TYPE_POWERDOWN)
== PSCI_E_SUCCESS);
assert(is_local_state_off(state_info.pwr_domain_state[PLAT_MAX_PWR_LVL]));
/*
* Query the platform for the power_state required for system suspend
* Do what is needed to enter the system suspend state. This function
* might return if the power down was abandoned for any reason, e.g.
* arrival of an interrupt
*/
power_state = psci_plat_pm_ops->get_sys_suspend_power_state();
psci_cpu_suspend_start(&ep,
PLAT_MAX_PWR_LVL,
&state_info,
PSTATE_TYPE_POWERDOWN);
/* Save PSCI power state parameter for the core in suspend context */
psci_set_suspend_power_state(power_state);
/*
* Do what is needed to enter the power down state. Upon success,
* enter the final wfi which will power down this cpu.
*/
psci_cpu_suspend_start(&ep, PLAT_MAX_PWR_LVL);
/* Reset PSCI power state parameter for the core. */
psci_set_suspend_power_state(PSCI_INVALID_DATA);
return PSCI_E_SUCCESS;
}
@ -240,26 +248,18 @@ int psci_cpu_off(void)
int psci_affinity_info(unsigned long target_affinity,
unsigned int lowest_affinity_level)
{
unsigned int cpu_idx;
unsigned char cpu_pwr_domain_state;
unsigned int target_idx;
/* We don't support levels higher than PSCI_CPU_PWR_LVL */
if (lowest_affinity_level > PSCI_CPU_PWR_LVL)
return PSCI_E_INVALID_PARAMS;
/* Calculate the cpu index of the target */
cpu_idx = plat_core_pos_by_mpidr(target_affinity);
if (cpu_idx == -1)
target_idx = plat_core_pos_by_mpidr(target_affinity);
if (target_idx == -1)
return PSCI_E_INVALID_PARAMS;
cpu_pwr_domain_state = psci_get_state(cpu_idx, PSCI_CPU_PWR_LVL);
/* A suspended cpu is available & on for the OS */
if (cpu_pwr_domain_state == PSCI_STATE_SUSPEND) {
cpu_pwr_domain_state = PSCI_STATE_ON;
}
return cpu_pwr_domain_state;
return psci_get_aff_info_state_by_idx(target_idx);
}
int psci_migrate(unsigned long target_cpu)
@ -337,10 +337,9 @@ int psci_features(unsigned int psci_fid)
if (psci_fid == PSCI_CPU_SUSPEND_AARCH32 ||
psci_fid == PSCI_CPU_SUSPEND_AARCH64) {
/*
* The trusted firmware uses the original power state format
* and does not support OS Initiated Mode.
* The trusted firmware does not support OS Initiated Mode.
*/
return (FF_PSTATE_ORIG << FF_PSTATE_SHIFT) |
return (FF_PSTATE << FF_PSTATE_SHIFT) |
((!FF_SUPPORTS_OS_INIT_MODE) << FF_MODE_SUPPORT_SHIFT);
}


@ -36,6 +36,17 @@
#include <string.h>
#include "psci_private.h"
/******************************************************************************
* Construct the psci_power_state to request power OFF at all power levels.
******************************************************************************/
static void psci_set_power_off_state(psci_power_state_t *state_info)
{
int lvl;
for (lvl = PSCI_CPU_PWR_LVL; lvl <= PLAT_MAX_PWR_LVL; lvl++)
state_info->pwr_domain_state[lvl] = PLAT_MAX_OFF_STATE;
}
/******************************************************************************
* Top level handler which is called when a cpu wants to power itself down.
* It's assumed that along with turning the cpu power domain off, power
@ -52,7 +63,7 @@
int psci_do_cpu_off(int end_pwrlvl)
{
int rc, idx = plat_my_core_pos();
unsigned int max_phys_off_pwrlvl;
psci_power_state_t state_info;
/*
* This function must only be called on platforms where the
@ -79,29 +90,27 @@ int psci_do_cpu_off(int end_pwrlvl)
goto exit;
}
/*
* This function updates the state of each power domain instance
* corresponding to the cpu index in the range of power levels
* specified.
*/
psci_do_state_coordination(end_pwrlvl,
idx,
PSCI_STATE_OFF);
/* Construct the psci_power_state for CPU_OFF */
psci_set_power_off_state(&state_info);
max_phys_off_pwrlvl = psci_find_max_phys_off_pwrlvl(end_pwrlvl, idx);
assert(max_phys_off_pwrlvl != PSCI_INVALID_DATA);
/*
* This function is passed the requested state info and
* it returns the negotiated state info for each power level upto
* the end level specified.
*/
psci_do_state_coordination(end_pwrlvl, &state_info);
/*
* Arch. management. Perform the necessary steps to flush all
* cpu caches.
*/
psci_do_pwrdown_cache_maintenance(max_phys_off_pwrlvl);
psci_do_pwrdown_cache_maintenance(psci_find_max_off_lvl(&state_info));
/*
* Plat. management: Perform platform specific actions to turn this
* cpu off e.g. exit cpu coherency, program the power controller etc.
*/
psci_plat_pm_ops->pwr_domain_off(max_phys_off_pwrlvl);
psci_plat_pm_ops->pwr_domain_off(&state_info);
exit:
/*
@ -111,6 +120,16 @@ exit:
psci_release_pwr_domain_locks(end_pwrlvl,
idx);
/*
* Set the affinity info state to OFF. This writes directly to main
* memory as caches are disabled, so cache maintenance is required
* to ensure that later cached reads of aff_info_state return
* AFF_STATE_OFF.
*/
flush_cpu_data(psci_svc_cpu_data.aff_info_state);
psci_set_aff_info_state(AFF_STATE_OFF);
inv_cpu_data(psci_svc_cpu_data.aff_info_state);
/*
* Check if all actions needed to safely power down this cpu have
* successfully completed. Enter a wfi loop which will allow the


@ -44,18 +44,36 @@
* This function checks whether a cpu which has been requested to be turned on
* is OFF to begin with.
******************************************************************************/
static int cpu_on_validate_state(unsigned int psci_state)
static int cpu_on_validate_state(aff_info_state_t aff_state)
{
if (psci_state == PSCI_STATE_ON || psci_state == PSCI_STATE_SUSPEND)
if (aff_state == AFF_STATE_ON)
return PSCI_E_ALREADY_ON;
if (psci_state == PSCI_STATE_ON_PENDING)
if (aff_state == AFF_STATE_ON_PENDING)
return PSCI_E_ON_PENDING;
assert(psci_state == PSCI_STATE_OFF);
assert(aff_state == AFF_STATE_OFF);
return PSCI_E_SUCCESS;
}
/*******************************************************************************
* This function sets the aff_info_state in the per-cpu data of the CPU
* specified by cpu_idx
******************************************************************************/
static void psci_set_aff_info_state_by_idx(unsigned int cpu_idx,
aff_info_state_t aff_state)
{
set_cpu_data_by_index(cpu_idx,
psci_svc_cpu_data.aff_info_state,
aff_state);
/*
* Flush aff_info_state as it will be accessed with caches turned OFF.
*/
flush_cpu_data_by_index(cpu_idx, psci_svc_cpu_data.aff_info_state);
}
/*******************************************************************************
* Generic handler which is called to physically power on a cpu identified by
* its mpidr. It performs the generic, architectural, platform setup and state
@ -67,8 +85,8 @@ static int cpu_on_validate_state(unsigned int psci_state)
* platform handler as it can return error.
******************************************************************************/
int psci_cpu_on_start(unsigned long target_cpu,
entry_point_info_t *ep,
int end_pwrlvl)
entry_point_info_t *ep,
int end_pwrlvl)
{
int rc;
unsigned long psci_entrypoint;
@ -88,7 +106,7 @@ int psci_cpu_on_start(unsigned long target_cpu,
* Generic management: Ensure that the cpu is off to be
* turned on.
*/
rc = cpu_on_validate_state(psci_get_state(target_idx, PSCI_CPU_PWR_LVL));
rc = cpu_on_validate_state(psci_get_aff_info_state_by_idx(target_idx));
if (rc != PSCI_E_SUCCESS)
goto exit;
@ -101,13 +119,9 @@ int psci_cpu_on_start(unsigned long target_cpu,
psci_spd_pm->svc_on(target_cpu);
/*
* This function updates the state of each affinity instance
* corresponding to the mpidr in the range of power domain levels
* specified.
* Set the Affinity info state of the target cpu to ON_PENDING.
*/
psci_do_state_coordination(end_pwrlvl,
target_idx,
PSCI_STATE_ON_PENDING);
psci_set_aff_info_state_by_idx(target_idx, AFF_STATE_ON_PENDING);
/*
* Perform generic, architecture and platform specific handling.
@ -120,9 +134,8 @@ int psci_cpu_on_start(unsigned long target_cpu,
* of the target cpu to allow it to perform the necessary
* steps to power on.
*/
rc = psci_plat_pm_ops->pwr_domain_on(target_cpu,
psci_entrypoint,
MPIDR_AFFLVL0);
rc = psci_plat_pm_ops->pwr_domain_on((u_register_t)target_cpu,
psci_entrypoint);
assert(rc == PSCI_E_SUCCESS || rc == PSCI_E_INTERN_FAIL);
if (rc == PSCI_E_SUCCESS)
@@ -130,9 +143,7 @@ int psci_cpu_on_start(unsigned long target_cpu,
cm_init_context_by_index(target_idx, ep);
else
/* Restore the state on error. */
psci_do_state_coordination(end_pwrlvl,
target_idx,
PSCI_STATE_OFF);
psci_set_aff_info_state_by_idx(target_idx, AFF_STATE_OFF);
exit:
psci_spin_unlock_cpu(target_idx);
@@ -141,22 +152,19 @@ exit:
/*******************************************************************************
* The following function finishes an earlier power on request. It is
* called by the common finisher routine in psci_common.c.
* called by the common finisher routine in psci_common.c. The `state_info`
* is the psci_power_state from which this CPU has woken up.
******************************************************************************/
void psci_cpu_on_finish(unsigned int cpu_idx,
int max_off_pwrlvl)
psci_power_state_t *state_info)
{
/* Ensure we have been explicitly woken up by another cpu */
assert(psci_get_state(cpu_idx, PSCI_CPU_PWR_LVL)
== PSCI_STATE_ON_PENDING);
/*
* Plat. management: Perform the platform specific actions
* for this cpu e.g. enabling the gic or zeroing the mailbox
* register. The actual state of this cpu has already been
* changed.
*/
psci_plat_pm_ops->pwr_domain_on_finish(max_off_pwrlvl);
psci_plat_pm_ops->pwr_domain_on_finish(state_info);
/*
* Arch. management: Enable data cache and manage stack memory
@@ -179,6 +187,9 @@ void psci_cpu_on_finish(unsigned int cpu_idx,
psci_spin_lock_cpu(cpu_idx);
psci_spin_unlock_cpu(cpu_idx);
/* Ensure we have been explicitly woken up by another cpu */
assert(psci_get_aff_info_state() == AFF_STATE_ON_PENDING);
/*
* Call the cpu on finish handler registered by the Secure Payload
* Dispatcher to let it do any book-keeping. If the handler encounters an


@@ -80,12 +80,36 @@
define_psci_cap(PSCI_MIG_INFO_UP_CPU_AARCH64) | \
define_psci_cap(PSCI_SYSTEM_SUSPEND_AARCH64))
/*
* Helper macros to get/set the fields of PSCI per-cpu data.
*/
#define psci_set_aff_info_state(aff_state) \
set_cpu_data(psci_svc_cpu_data.aff_info_state, aff_state)
#define psci_get_aff_info_state() \
get_cpu_data(psci_svc_cpu_data.aff_info_state)
#define psci_get_aff_info_state_by_idx(idx) \
get_cpu_data_by_index(idx, psci_svc_cpu_data.aff_info_state)
#define psci_get_suspend_pwrlvl() \
get_cpu_data(psci_svc_cpu_data.target_pwrlvl)
#define psci_set_suspend_pwrlvl(target_lvl) \
set_cpu_data(psci_svc_cpu_data.target_pwrlvl, target_lvl)
#define psci_set_cpu_local_state(state) \
set_cpu_data(psci_svc_cpu_data.local_state, state)
#define psci_get_cpu_local_state() \
get_cpu_data(psci_svc_cpu_data.local_state)
#define psci_get_cpu_local_state_by_idx(idx) \
get_cpu_data_by_index(idx, psci_svc_cpu_data.local_state)
/*
* Helper macros for the CPU level spinlocks
*/
#define psci_spin_lock_cpu(idx) spin_lock(&psci_cpu_pd_nodes[idx].cpu_lock)
#define psci_spin_unlock_cpu(idx) spin_unlock(&psci_cpu_pd_nodes[idx].cpu_lock)
/* Helper macro to identify a CPU standby request in PSCI Suspend call */
#define is_cpu_standby_req(is_power_down_state, retn_lvl) \
(((!(is_power_down_state)) && ((retn_lvl) == 0)) ? 1 : 0)
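
The local-state helper macros above depend on how plat_local_state_t values are classified as RUN, RETENTION or OFF. The classification helpers themselves are defined in the generic PSCI headers and are not part of this excerpt; the sketch below only illustrates the assumed convention (zero means RUN, all RETENTION values are no larger than PLAT_MAX_RET_STATE, and all OFF values lie between PLAT_MAX_RET_STATE and PLAT_MAX_OFF_STATE). Only is_local_state_off() is actually referenced later in this diff; the *_sketch names are illustrative.

/*
 * Sketch only: the real classification helpers live in the generic PSCI
 * headers, outside this diff, and may differ in name and detail.
 */
#define is_local_state_run_sketch(state)	((state) == 0)
#define is_local_state_retn_sketch(state) \
			(((state) > 0) && ((state) <= PLAT_MAX_RET_STATE))
#define is_local_state_off_sketch(state) \
			(((state) > PLAT_MAX_RET_STATE) && \
			 ((state) <= PLAT_MAX_OFF_STATE))
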
/*******************************************************************************
* The following two data structures implement the power domain tree. The tree
* is used to track the state of all the nodes i.e. power domain instances
@@ -113,7 +137,8 @@ typedef struct non_cpu_pwr_domain_node {
*/
unsigned int parent_node;
unsigned char ref_count;
plat_local_state_t local_state;
unsigned char level;
#if USE_COHERENT_MEM
bakery_lock_t lock;
@@ -142,12 +167,12 @@
} cpu_pd_node_t;
typedef void (*pwrlvl_power_on_finisher_t)(unsigned int cpu_idx,
int max_off_pwrlvl);
psci_power_state_t *state_info);
/*******************************************************************************
* Data prototypes
******************************************************************************/
extern const plat_pm_ops_t *psci_plat_pm_ops;
extern const plat_psci_ops_t *psci_plat_pm_ops;
extern non_cpu_pd_node_t psci_non_cpu_pd_nodes[PSCI_NUM_NON_CPU_PWR_DOMAINS];
extern cpu_pd_node_t psci_cpu_pd_nodes[PLATFORM_CORE_COUNT];
extern uint32_t psci_caps;
@@ -161,28 +186,31 @@ extern const spd_pm_ops_t *psci_spd_pm;
* Function prototypes
******************************************************************************/
/* Private exported functions from psci_common.c */
unsigned short psci_get_state(unsigned int idx, int level);
unsigned short psci_get_phys_state(unsigned int idx, int level);
void psci_set_state(unsigned int idx, unsigned short state, int level);
int psci_validate_power_state(unsigned int power_state,
psci_power_state_t *state_info);
void psci_query_sys_suspend_pwrstate(psci_power_state_t *state_info);
int psci_validate_mpidr(unsigned long mpidr);
int get_power_on_target_pwrlvl(void);
void psci_init_req_local_pwr_states(void);
void psci_power_up_finish(int end_pwrlvl,
pwrlvl_power_on_finisher_t pon_handler);
pwrlvl_power_on_finisher_t power_on_handler);
int psci_get_ns_ep_info(entry_point_info_t *ep,
uint64_t entrypoint, uint64_t context_id);
void psci_get_parent_pwr_domain_nodes(unsigned int cpu_idx,
int end_lvl,
unsigned int node_index[]);
void psci_do_state_coordination(int end_pwrlvl,
unsigned int cpu_idx,
uint32_t state);
psci_power_state_t *state_info);
void psci_acquire_pwr_domain_locks(int end_pwrlvl,
unsigned int cpu_idx);
void psci_release_pwr_domain_locks(int end_pwrlvl,
unsigned int cpu_idx);
int psci_validate_suspend_req(const psci_power_state_t *state_info,
unsigned int is_power_down_state_req);
unsigned int psci_find_max_off_lvl(const psci_power_state_t *state_info);
unsigned int psci_find_target_suspend_lvl(const psci_power_state_t *state_info);
void psci_set_pwr_domains_to_run(uint32_t end_pwrlvl);
void psci_print_power_domain_map(void);
uint32_t psci_find_max_phys_off_pwrlvl(uint32_t end_pwrlvl,
unsigned int cpu_idx);
unsigned int psci_is_last_on_cpu(void);
int psci_spd_migrate_info(uint64_t *mpidr);
@@ -192,18 +220,19 @@ int psci_cpu_on_start(unsigned long target_cpu,
int end_pwrlvl);
void psci_cpu_on_finish(unsigned int cpu_idx,
int max_off_pwrlvl);
psci_power_state_t *state_info);
/* Private exported functions from psci_cpu_off.c */
int psci_do_cpu_off(int end_pwrlvl);
/* Private exported functions from psci_suspend.c */
/* Private exported functions from psci_pwrlvl_suspend.c */
void psci_cpu_suspend_start(entry_point_info_t *ep,
int end_pwrlvl);
void psci_cpu_suspend_finish(unsigned int cpu_idx,
int max_off_pwrlvl);
int end_pwrlvl,
psci_power_state_t *state_info,
unsigned int is_power_down_state_req);
void psci_set_suspend_power_state(unsigned int power_state);
void psci_cpu_suspend_finish(unsigned int cpu_idx,
psci_power_state_t *state_info);
/* Private exported functions from psci_helpers.S */
void psci_do_pwrdown_cache_maintenance(uint32_t pwr_level);
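
The prototypes above introduce psci_find_max_off_lvl() and psci_find_target_suspend_lvl() without showing their bodies. As a rough illustration of the intended semantics (an assumption based on how the result is used elsewhere in this patch, e.g. to drive power-down cache maintenance), the maximum OFF level can be located by scanning the per-level states from the top of the power domain tree:

/*
 * Sketch of the intended behaviour, not the actual implementation: return
 * the highest power level requested to enter an OFF state, or
 * PSCI_INVALID_DATA if no level is being turned off.
 */
static unsigned int find_max_off_lvl_sketch(const psci_power_state_t *state_info)
{
	int lvl;

	for (lvl = PLAT_MAX_PWR_LVL; lvl >= PSCI_CPU_PWR_LVL; lvl--) {
		if (is_local_state_off(state_info->pwr_domain_state[lvl]))
			return lvl;
	}

	return PSCI_INVALID_DATA;
}
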


@@ -61,26 +61,30 @@ static void psci_init_pwr_domain_node(int node_idx, int parent_idx, int level)
psci_non_cpu_pd_nodes[node_idx].level = level;
psci_lock_init(psci_non_cpu_pd_nodes, node_idx);
psci_non_cpu_pd_nodes[node_idx].parent_node = parent_idx;
psci_non_cpu_pd_nodes[node_idx].local_state =
PLAT_MAX_OFF_STATE;
} else {
psci_cpu_data_t *svc_cpu_data;
psci_cpu_pd_nodes[node_idx].parent_node = parent_idx;
/* Initialize with an invalid mpidr */
psci_cpu_pd_nodes[node_idx].mpidr = PSCI_INVALID_MPIDR;
/*
* Mark the cpu as OFF.
*/
set_cpu_data_by_index(node_idx,
psci_svc_cpu_data.psci_state,
PSCI_STATE_OFF);
svc_cpu_data =
&(_cpu_data_by_index(node_idx)->psci_svc_cpu_data);
/* Invalidate the suspend context for the node */
set_cpu_data_by_index(node_idx,
psci_svc_cpu_data.power_state,
PSCI_INVALID_DATA);
/* Set the Affinity Info for the cores as OFF */
svc_cpu_data->aff_info_state = AFF_STATE_OFF;
flush_cpu_data_by_index(node_idx, psci_svc_cpu_data);
/* Invalidate the suspend level for the cpu */
svc_cpu_data->target_pwrlvl = PSCI_INVALID_DATA;
/* Set the power state to OFF state */
svc_cpu_data->local_state = PLAT_MAX_OFF_STATE;
flush_dcache_range((uint64_t)svc_cpu_data,
sizeof(*svc_cpu_data));
cm_set_context_by_index(node_idx,
(void *) &psci_ns_context[node_idx],
@@ -233,14 +237,15 @@ int32_t psci_setup(void)
flush_dcache_range((uint64_t) &psci_cpu_pd_nodes,
sizeof(psci_cpu_pd_nodes));
/*
* Mark the current CPU and its parent power domains as ON. No need to lock
* as the system is UP on the primary at this stage of boot.
*/
psci_do_state_coordination(PLAT_MAX_PWR_LVL, plat_my_core_pos(),
PSCI_STATE_ON);
psci_init_req_local_pwr_states();
platform_setup_pm(&psci_plat_pm_ops);
/*
* Set the requested and target state of this CPU and all the higher
* power domain levels for this CPU to run.
*/
psci_set_pwr_domains_to_run(PLAT_MAX_PWR_LVL);
plat_setup_psci_ops(&psci_plat_pm_ops);
assert(psci_plat_pm_ops);
/* Initialize the psci capability */


@@ -42,82 +42,97 @@
#include "psci_private.h"
/*******************************************************************************
* This function saves the power state parameter passed in the current PSCI
* cpu_suspend call in the per-cpu data array.
* This function does generic and platform specific operations after a wake-up
* from standby/retention states at multiple power levels.
******************************************************************************/
void psci_set_suspend_power_state(unsigned int power_state)
static void psci_suspend_to_standby_finisher(unsigned int cpu_idx,
psci_power_state_t *state_info,
unsigned int end_pwrlvl)
{
set_cpu_data(psci_svc_cpu_data.power_state, power_state);
flush_cpu_data(psci_svc_cpu_data.power_state);
psci_acquire_pwr_domain_locks(end_pwrlvl,
cpu_idx);
/*
* Plat. management: Allow the platform to do operations
* on waking up from retention.
*/
psci_plat_pm_ops->pwr_domain_suspend_finish(state_info);
/*
* Set the requested and target state of this CPU and all the higher
* power domain levels for this CPU to run.
*/
psci_set_pwr_domains_to_run(end_pwrlvl);
psci_release_pwr_domain_locks(end_pwrlvl,
cpu_idx);
}
/*******************************************************************************
* This function gets the power level up to which the current cpu could be
* powered down during a cpu_suspend call. Returns PSCI_INVALID_DATA if the
* power state is invalid.
* This function does generic and platform specific suspend to power down
* operations.
******************************************************************************/
int psci_get_suspend_pwrlvl(void)
static void psci_suspend_to_pwrdown_start(int end_pwrlvl,
entry_point_info_t *ep,
psci_power_state_t *state_info)
{
unsigned int power_state;
/* Save PSCI target power level for the suspend finisher handler */
psci_set_suspend_pwrlvl(end_pwrlvl);
power_state = get_cpu_data(psci_svc_cpu_data.power_state);
/*
* Flush the target power level as it will be accessed on power up with
* Data cache disabled.
*/
flush_cpu_data(psci_svc_cpu_data.target_pwrlvl);
return ((power_state == PSCI_INVALID_DATA) ?
power_state : psci_get_pstate_pwrlvl(power_state));
}
/*
* Call the cpu suspend handler registered by the Secure Payload
* Dispatcher to let it do any book-keeping. If the handler encounters an
* error, it's expected to assert within
*/
if (psci_spd_pm && psci_spd_pm->svc_suspend)
psci_spd_pm->svc_suspend(0);
/*******************************************************************************
* This function gets the state id of the current cpu from the power state
* parameter saved in the per-cpu data array. Returns PSCI_INVALID_DATA if the
* power state saved is invalid.
******************************************************************************/
int psci_get_suspend_stateid(void)
{
unsigned int power_state;
/*
* Store the re-entry information for the non-secure world.
*/
cm_init_my_context(ep);
power_state = get_cpu_data(psci_svc_cpu_data.power_state);
return ((power_state == PSCI_INVALID_DATA) ?
power_state : psci_get_pstate_id(power_state));
}
/*******************************************************************************
* This function gets the state id of the cpu specified by the cpu index
* from the power state parameter saved in the per-cpu data array. Returns
* PSCI_INVALID_DATA if the power state saved is invalid.
******************************************************************************/
int psci_get_suspend_stateid_by_idx(unsigned long cpu_idx)
{
unsigned int power_state;
power_state = get_cpu_data_by_index(cpu_idx,
psci_svc_cpu_data.power_state);
return ((power_state == PSCI_INVALID_DATA) ?
power_state : psci_get_pstate_id(power_state));
/*
* Arch. management. Perform the necessary steps to flush all
* cpu caches. Currently we assume that the power level corresponds to
* the cache level.
* TODO: Introduce a mechanism to query the cache level to flush
* and the cpu-ops power down operation to perform from the platform.
*/
psci_do_pwrdown_cache_maintenance(psci_find_max_off_lvl(state_info));
}
/*******************************************************************************
* Top level handler which is called when a cpu wants to suspend its execution.
* It is assumed that along with suspending the cpu power domain, power domains
* at higher levels up to the target power level will be suspended as well.
* It finds the highest level where a domain has to be suspended by traversing
* the node information and then performs the generic, architectural and
* platform setup and state management required to suspend that power domain
* and the domains below it. e.g. For a cpu that's to be suspended, it could
* mean programming the power controller, whereas for a cluster that's to be
* suspended, it will call the platform specific code which will disable
* coherency at the interconnect level if the cpu is the last in the cluster
* and also program the power controller.
* at higher levels up to the target power level will be suspended as well. It
* coordinates with the platform to negotiate the target state for each
* power domain level up to the target power domain level. It then performs
* the generic, architectural and platform setup and state management required
* to suspend that power domain level and the power domain levels below it.
* e.g. For a cpu that's to be suspended, it could mean programming the
* power controller, whereas for a cluster that's to be suspended, it will call
* the platform specific code which will disable coherency at the interconnect
* level if the cpu is the last in the cluster and also program the power
* controller.
*
* All the required parameter checks are performed at the beginning; after
* the state transition has been done, no further error is expected and it is
* not possible to undo any of the actions taken beyond that point.
******************************************************************************/
void psci_cpu_suspend_start(entry_point_info_t *ep, int end_pwrlvl)
void psci_cpu_suspend_start(entry_point_info_t *ep,
int end_pwrlvl,
psci_power_state_t *state_info,
unsigned int is_power_down_state)
{
int skip_wfi = 0;
unsigned int max_phys_off_pwrlvl, idx = plat_my_core_pos();
unsigned int idx = plat_my_core_pos();
unsigned long psci_entrypoint;
/*
@@ -146,39 +161,20 @@ void psci_cpu_suspend_start(entry_point_info_t *ep, int end_pwrlvl)
}
/*
* Call the cpu suspend handler registered by the Secure Payload
* Dispatcher to let it do any book-keeping. If the handler encounters an
* error, it's expected to assert within
* This function is passed the requested state info and
* it returns the negotiated state info for each power level up to
* the end level specified.
*/
if (psci_spd_pm && psci_spd_pm->svc_suspend)
psci_spd_pm->svc_suspend(0);
psci_do_state_coordination(end_pwrlvl, state_info);
/*
* This function updates the state of each power domain instance
* corresponding to the cpu index in the range of power levels
* specified.
*/
psci_do_state_coordination(end_pwrlvl,
idx,
PSCI_STATE_SUSPEND);
psci_entrypoint = 0;
if (is_power_down_state) {
psci_suspend_to_pwrdown_start(end_pwrlvl, ep, state_info);
max_phys_off_pwrlvl = psci_find_max_phys_off_pwrlvl(end_pwrlvl,
idx);
assert(max_phys_off_pwrlvl != PSCI_INVALID_DATA);
/*
* Store the re-entry information for the non-secure world.
*/
cm_init_my_context(ep);
/* Set the secure world (EL3) re-entry point after BL1 */
psci_entrypoint = (unsigned long) psci_cpu_suspend_finish_entry;
/*
* Arch. management. Perform the necessary steps to flush all
* cpu caches.
*/
psci_do_pwrdown_cache_maintenance(max_phys_off_pwrlvl);
/* Set the secure world (EL3) re-entry point after BL1. */
psci_entrypoint =
(unsigned long) psci_cpu_suspend_finish_entry;
}
/*
* Plat. management: Allow the platform to perform the
@@ -186,8 +182,7 @@ void psci_cpu_suspend_start(entry_point_info_t *ep, int end_pwrlvl)
* platform defined mailbox with the psci entrypoint,
* program the power controller etc.
*/
psci_plat_pm_ops->pwr_domain_suspend(psci_entrypoint,
max_phys_off_pwrlvl);
psci_plat_pm_ops->pwr_domain_suspend(psci_entrypoint, state_info);
exit:
/*
@@ -195,23 +190,41 @@ exit:
* reverse order to which they were acquired.
*/
psci_release_pwr_domain_locks(end_pwrlvl,
idx);
if (!skip_wfi)
idx);
if (skip_wfi)
return;
if (is_power_down_state)
psci_power_down_wfi();
/*
* We will reach here if only retention/standby states have been
* requested at multiple power levels. This means that the cpu
* context will be preserved.
*/
wfi();
/*
* After we wake up from context retaining suspend, call the
* context retaining suspend finisher.
*/
psci_suspend_to_standby_finisher(idx, state_info, end_pwrlvl);
}
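
With this change the platform suspend handler receives the coordinated psci_power_state_t rather than a single power level. The fragment below is a hypothetical example of how a platform port might consume it; the plat_example_* helpers are placeholders invented for illustration, the const qualifier is an assumption, and the cluster is assumed to sit at PSCI_CPU_PWR_LVL + 1.

/* Hypothetical platform helpers, assumed only for this sketch. */
void plat_example_program_mailbox(unsigned int core, unsigned long ep);
void plat_example_core_pwrdwn(unsigned int core);
void plat_example_cluster_pwrdwn(unsigned int core);

/*
 * Example platform pwr_domain_suspend handler: act on each power level only
 * if the coordinated state for that level is an OFF state.
 */
static void plat_example_pwr_domain_suspend(unsigned long sec_entrypoint,
				const psci_power_state_t *state_info)
{
	unsigned int core = plat_my_core_pos();

	/* Program the warm-boot entrypoint for this core. */
	plat_example_program_mailbox(core, sec_entrypoint);

	/* Core level: power down only if an OFF state was coordinated. */
	if (is_local_state_off(state_info->pwr_domain_state[PSCI_CPU_PWR_LVL]))
		plat_example_core_pwrdwn(core);

	/*
	 * Cluster level: the coordinated state is OFF only when every core
	 * in the cluster has requested it, i.e. this is the last core down.
	 */
	if (is_local_state_off(state_info->pwr_domain_state[PSCI_CPU_PWR_LVL + 1]))
		plat_example_cluster_pwrdwn(core);
}
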
/*******************************************************************************
* The following functions finish an earlier suspend request. They
* are called by the common finisher routine in psci_common.c.
* are called by the common finisher routine in psci_common.c. The `state_info`
* is the psci_power_state from which this CPU has woken up.
******************************************************************************/
void psci_cpu_suspend_finish(unsigned int cpu_idx, int max_off_pwrlvl)
void psci_cpu_suspend_finish(unsigned int cpu_idx,
psci_power_state_t *state_info)
{
int32_t suspend_level;
uint64_t counter_freq;
/* Ensure we have been woken up from a suspended state */
assert(psci_get_state(cpu_idx, PSCI_CPU_PWR_LVL)
== PSCI_STATE_SUSPEND);
assert(psci_get_aff_info_state() == AFF_STATE_ON &&
is_local_state_off(state_info->pwr_domain_state[PSCI_CPU_PWR_LVL]));
/*
* Plat. management: Perform the platform specific actions
@@ -220,7 +233,7 @@ void psci_cpu_suspend_finish(unsigned int cpu_idx, int max_off_pwrlvl)
* wrong then assert as there is no way to recover from this
* situation.
*/
psci_plat_pm_ops->pwr_domain_suspend_finish(max_off_pwrlvl);
psci_plat_pm_ops->pwr_domain_suspend_finish(state_info);
/*
* Arch. management: Enable the data cache, manage stack memory and
@@ -244,8 +257,8 @@ void psci_cpu_suspend_finish(unsigned int cpu_idx, int max_off_pwrlvl)
psci_spd_pm->svc_suspend_finish(suspend_level);
}
/* Invalidate the suspend context for the node */
psci_set_suspend_power_state(PSCI_INVALID_DATA);
/* Invalidate the suspend level for the cpu */
psci_set_suspend_pwrlvl(PSCI_INVALID_DATA);
/*
* Generic management: Now we just need to retrieve the