Commit Graph

62 Commits

Jan Dabros bb9549babc aarch64: Fix stack pointer maintenance on EA handling path
EA handlers for exceptions taken from lower ELs invoke the el3_exit
function at the end. However, there was a bug in the SP maintenance
which resulted in el3_exit setting the runtime stack to the context.
This in turn caused memory corruption on consecutive EL3 entries.

Signed-off-by: Jan Dabros <jsd@semihalf.com>
Change-Id: I0424245c27c369c864506f4baa719968890ce659
2019-12-18 08:47:10 +01:00
Artsem Artsemenka db3ae8538b S-EL2 Support: Check for AArch64
Check that entry point information requesting S-EL2 specifies AArch64
as the execution state during context setup.

Signed-off-by: Artsem Artsemenka <artsem.artsemenka@arm.com>
Change-Id: I447263692fed6e55c1b076913e6eb73b1ea735b7
2019-12-06 17:42:45 +00:00
Achin Gupta 0376e7c4aa Add support for enabling S-EL2
This patch adds support for enabling S-EL2 if this EL is specified in the entry
point information being used to initialise a secure context. It is the caller's
responsibility to check if S-EL2 is available on the system before requesting
this EL through the entry point information.
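
A minimal sketch of the enable step, assuming the usual TF-A entry-point
and SCR_EL3 definitions (names such as GET_EL and SCR_EEL2_BIT are
assumptions here; the real code may differ):

    /* If the entry point asks for S-EL2, enable it for this secure
     * context via SCR_EL3.EEL2. Callers must already have checked
     * that S-EL2 is implemented. */
    if (GET_EL(ep->spsr) == MODE_EL2) {
            scr_el3 |= SCR_EEL2_BIT;
    }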

Signed-off-by: Achin Gupta <achin.gupta@arm.com>
Change-Id: I2752964f078ab528b2e80de71c7d2f35e60569e1
2019-12-06 17:42:45 +00:00
Soby Mathew c5235cae8e Merge "AArch32: Disable Secure Cycle Counter" into integration 2019-09-27 10:55:15 +00:00
Alexei Fedorov c3e8b0be9b AArch32: Disable Secure Cycle Counter
This patch changes the implementation for disabling the Secure Cycle
Counter. For ARMv8.5 the counter is disabled by setting the SDCR.SCCD
bit on CPU cold/warm boot. For earlier architectures the PMCR register
is saved/restored on Secure world entry/exit from/to Non-secure state,
and cycle counting is disabled by setting the PMCR.DP bit.
New ARMv8.5-PMU related definitions were added to the
'include/aarch32/arch.h' header file.
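
A rough sketch of the two paths described above, assuming AArch32
accessors and bit names along these lines (illustrative only, not the
exact patch):

    #if ARM_ARCH_AT_LEAST(8, 5)
            /* ARMv8.5-PMU: prohibit Secure cycle counting outright. */
            write_sdcr(read_sdcr() | SDCR_SCCD_BIT);
    #else
            /* Pre-v8.5: PMCR is saved/restored around world switches and
             * cycle counting is stopped while in Secure state. */
            write_pmcr(read_pmcr() | PMCR_DP_BIT);
    #endif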

Change-Id: Ia8845db2ebe8de940d66dff479225a5b879316f8
Signed-off-by: Alexei Fedorov <Alexei.Fedorov@arm.com>
2019-09-26 15:36:02 +00:00
Justin Chadwell 019b03a300 Fix MTE support from causing unused variable warnings
assert() calls are removed in release builds, and if an assert call is
the only use of a variable, an unused variable warning is triggered in
a release build. This patch fixes the problem when CTX_INCLUDE_MTE_REGS
is enabled by not using an intermediate variable to store the result of
get_armv8_5_mte_support().
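
A generic sketch of the pattern being fixed (identifiers are
illustrative, not the exact TF-A code):

    /* Before: 'mte' is only read inside assert(), which compiles away in
     * release builds and leaves an unused-variable warning. */
    unsigned int mte = get_armv8_5_mte_support();
    assert(mte != MTE_UNIMPLEMENTED);

    /* After: call the helper directly, so nothing is left unused. */
    assert(get_armv8_5_mte_support() != MTE_UNIMPLEMENTED);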

Change-Id: I529e10ec0b2c8650d2c3ab52c4f0cecc0b3a670e
Signed-off-by: Justin Chadwell <justin.chadwell@arm.com>
2019-09-20 09:17:55 +01:00
Sandrine Bailleux ea735643cb Merge changes from topic "db/unsigned_long" into integration
* changes:
  Unsigned long should not be used as per coding guidelines
  SCTLR and ACTLR are 32-bit for AArch32 and 64-bit for AArch64
2019-09-18 14:30:09 +00:00
Deepika Bhavnani eeb5a7b595 SCTLR and ACTLR are 32-bit for AArch32 and 64-bit for AArch64
AArch64 System register SCTLR_EL1[31:0] is architecturally mapped
to AArch32 System register SCTLR[31:0].
AArch64 System register ACTLR_EL1[31:0] is architecturally mapped
to AArch32 System register ACTLR[31:0].

`u_register_t` should be used when it is important to store the
contents of a register in its native size.
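
For example (sketch): `u_register_t` is 32-bit on AArch32 and 64-bit on
AArch64, so it always matches the width of the register being read:

    u_register_t sctlr = read_sctlr_el1();  /* read_sctlr() on AArch32 */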

Signed-off-by: Deepika Bhavnani <deepika.bhavnani@arm.com>
Change-Id: I0055422f8cc0454405e011f53c1c4ddcaceb5779
2019-09-13 23:51:02 +03:00
Alexei Fedorov ed108b5605 Refactor ARMv8.3 Pointer Authentication support code
This patch provides the following features and makes modifications
listed below:
- Individual APIAKey key generation for each CPU.
- New key generation on every BL31 warm boot and TSP CPU On event.
- Per-CPU storage of APIAKey added in percpu_data[]
  of cpu_data structure.
- `plat_init_apiakey()` function replaced with `plat_init_apkey()`
  which returns 128-bit value and uses Generic timer physical counter
  value to increase the randomness of the generated key.
  The new function can be used for generation of all ARMv8.3-PAuth keys.
- ARMv8.3-PAuth specific code placed in `lib/extensions/pauth`.
- New `pauth_init_enable_el1()` and `pauth_init_enable_el3()` functions
  generate, program and enable APIAKey_EL1 for EL1 and EL3 respectively;
  `pauth_disable_el1()` and `pauth_disable_el3()` functions disable
  PAuth for EL1 and EL3 respectively;
  `pauth_load_bl31_apiakey()` loads saved per-CPU APIAKey_EL1 from
  cpu-data structure.
- Combined `save_gp_pauth_registers()` function replaces calls to
  `save_gp_registers()` and `pauth_context_save()`;
  `restore_gp_pauth_registers()` replaces `pauth_context_restore()`
  and `restore_gp_registers()` calls.
- `restore_gp_registers_eret()` function removed with corresponding
  code placed in `el3_exit()`.
- Fixed the issue when `pauth_t pauth_ctx` structure allocated space
  for 12 uint64_t PAuth registers instead of 10 by removal of macro
  CTX_PACGAKEY_END from `include/lib/el3_runtime/aarch64/context.h`
  and assigning its value to CTX_PAUTH_REGS_END.
- Use of MODE_SP_ELX and MODE_SP_EL0 macro definitions
  in the `msr spsel` instruction instead of hard-coded values.
- Changes in documentation related to ARMv8.3-PAuth and ARMv8.5-BTI.

Change-Id: Id18b81cc46f52a783a7e6a09b9f149b6ce803211
Signed-off-by: Alexei Fedorov <Alexei.Fedorov@arm.com>
2019-09-13 14:11:59 +01:00
Justin Chadwell 9dd94382bd Enable MTE support in both secure and non-secure worlds
This patch adds support for the new Memory Tagging Extension arriving in
ARMv8.5. MTE support is now enabled by default on systems that support it
at EL0. To enable it at ELx for both the Non-secure and the Secure world,
the compiler flag CTX_INCLUDE_MTE_REGS adds the necessary register saving
and restoring in order to prevent register leakage between the worlds.

Change-Id: I2d4ea993d6b11654ea0d4757d00ca20d23acf36c
Signed-off-by: Justin Chadwell <justin.chadwell@arm.com>
2019-09-09 16:23:33 +01:00
Alexei Fedorov e290a8fcbc AArch64: Disable Secure Cycle Counter
This patch fixes an issue where secure world timing information
could be leaked because the Secure Cycle Counter was not disabled.
For ARMv8.5 the counter is disabled by setting the MDCR_EL3.SCCD
bit on CPU cold/warm boot.
For earlier architectures the PMCR_EL0 register is saved/restored
on secure world entry/exit from/to Non-secure state, and cycle
counting is disabled by setting the PMCR_EL0.DP bit.
The 'include/aarch64/arch.h' header file was tidied up and new
ARMv8.5-PMU related definitions were added.
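
A rough sketch of the two paths, assuming system register accessors and
bit names along these lines (illustrative only, not the exact patch):

    #if ARM_ARCH_AT_LEAST(8, 5)
            /* ARMv8.5-PMU: prohibit Secure cycle counting outright. */
            write_mdcr_el3(read_mdcr_el3() | MDCR_SCCD_BIT);
    #else
            /* Pre-v8.5: PMCR_EL0 is saved/restored around world switches
             * and cycle counting is stopped while in Secure state. */
            write_pmcr_el0(read_pmcr_el0() | PMCR_EL0_DP_BIT);
    #endif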

Change-Id: I6f56db6bc77504634a352388990ad925a69ebbfa
Signed-off-by: Alexei Fedorov <Alexei.Fedorov@arm.com>
2019-08-21 15:43:24 +01:00
Soby Mathew b7e398d64c Enable MTE support unilaterally for Normal World
This patch enables MTE for the Normal world if the CPU supports it.
Enabling MTE for the Secure world will be done later.

Change-Id: I9ef64460beaba15e9a9c20ab02da4fb2208b6f7d
Signed-off-by: Soby Mathew <soby.mathew@arm.com>
2019-07-12 09:27:25 +01:00
Sandrine Bailleux 3ca26bed7e Fix restoring APIBKey registers
Instruction key A was incorrectly restored in the instruction key B
registers.

Change-Id: I4cb81ac72180442c077898509cb696c9d992eda3
Signed-off-by: Sandrine Bailleux <sandrine.bailleux@arm.com>
2019-03-14 13:57:16 +01:00
Antonio Niño Díaz dbd0bcfe00
Merge pull request #1848 from antonio-nino-diaz-arm/an/docs
Minor changes to documentation and comments
2019-03-01 09:16:58 +00:00
Antonio Nino Diaz 73308618fe Minor changes to documentation and comments
Fix some typos and clarify some sentences.

Change-Id: Id276d1ced9a991b4eddc5c47ad9a825e6b29ef74
Signed-off-by: Antonio Nino Diaz <antonio.ninodiaz@arm.com>
2019-02-28 13:35:21 +00:00
Antonio Niño Díaz 64503b2f81
Merge pull request #1839 from loumay-arm/lm/a7x_errata
Cortex-A73/75/76 errata workaround
2019-02-28 10:19:24 +00:00
Antonio Nino Diaz b86048c40c Add support for pointer authentication
The previous commit added the infrastructure to load and save
ARMv8.3-PAuth registers during Non-secure <-> Secure world switches, but
didn't actually enable pointer authentication in the firmware.

This patch adds the functionality needed for platforms to provide
authentication keys for the firmware, and a new option (ENABLE_PAUTH) to
enable pointer authentication in the firmware itself. This option is
disabled by default, and it requires CTX_INCLUDE_PAUTH_REGS to be
enabled.
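
For example, a build enabling both options might look like this (the
platform and target are illustrative):

    make PLAT=fvp ARCH=aarch64 CTX_INCLUDE_PAUTH_REGS=1 ENABLE_PAUTH=1 all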

Change-Id: I35127ec271e1198d43209044de39fa712ef202a5
Signed-off-by: Antonio Nino Diaz <antonio.ninodiaz@arm.com>
2019-02-27 11:58:09 +00:00
Antonio Nino Diaz 5283962eba Add ARMv8.3-PAuth registers to CPU context
ARMv8.3-PAuth adds functionality that supports address authentication of
the contents of a register before that register is used as the target of
an indirect branch, or as a load.

This feature is supported only in AArch64 state.

This feature is mandatory in ARMv8.3 implementations.

This feature adds several registers to EL1. A new option called
CTX_INCLUDE_PAUTH_REGS has been added to select if the TF needs to save
them during Non-secure <-> Secure world switches. This option must be
enabled if the hardware has the registers or the values will be leaked
during world switches.

To prevent leaks, this patch also disables pointer authentication in the
Secure world if CTX_INCLUDE_PAUTH_REGS is 0. Any attempt to use it will
be trapped in EL3.
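
A minimal sketch of that policy in terms of the SCR_EL3 trap controls
(bit names assumed; the exact code may differ):

    /* PAuth instructions and key registers are left untrapped for the
     * Non-secure world, and for the Secure world only when the keys are
     * saved/restored across world switches; otherwise use traps to EL3. */
    if ((security_state == NON_SECURE) || (CTX_INCLUDE_PAUTH_REGS != 0)) {
            scr_el3 |= SCR_API_BIT | SCR_APK_BIT;
    }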

Change-Id: I27beba9907b9a86c6df1d0c5bf6180c972830855
Signed-off-by: Antonio Nino Diaz <antonio.ninodiaz@arm.com>
2019-02-27 11:08:59 +00:00
Antonio Nino Diaz 4d1ccf0ecc Cleanup context handling library
Minor style cleanup.

Change-Id: Ief19dece41a989e2e8157859a265701549f6c585
Signed-off-by: Antonio Nino Diaz <antonio.ninodiaz@arm.com>
2019-02-27 11:08:59 +00:00
Louis Mayencourt 5f5d1ed7d5 Add workaround for errata 764081 of Cortex-A75
The Implicit Error Synchronization Barrier (IESB) might not be correctly
generated in Cortex-A75 r0p0. To prevent this, IESBs are enabled at all
exception levels.

Change-Id: I2a1a568668a31e4f3f38d0fba1d632ad9939e5ad
Signed-off-by: Louis Mayencourt <louis.mayencourt@arm.com>
2019-02-26 15:53:57 +00:00
Paul Beesley 8aabea3358 Correct typographical errors
Corrects typos in core code, documentation files, drivers, Arm
platforms and services.

None of the corrections affect code; changes are limited to comments
and other documentation.

Change-Id: I5c1027b06ef149864f315ccc0ea473e2a16bfd1d
Signed-off-by: Paul Beesley <paul.beesley@arm.com>
2019-01-15 15:16:02 +00:00
Antonio Nino Diaz 09d40e0e08 Sanitise includes across codebase
Enforce full include path for includes. Deprecate old paths.

The following folders inside include/lib have been left unchanged:

- include/lib/cpus/${ARCH}
- include/lib/el3_runtime/${ARCH}

The reason for this change is that having a global namespace for
includes isn't a good idea. It defeats one of the advantages of having
folders and it introduces problems that are sometimes subtle (because
you may not know the header you are actually including if there are two
of them).

For example, this patch had to be created because two headers were
called the same way: e0ea0928d5 ("Fix gpio includes of mt8173 platform
to avoid collision."). More recently, this patch has had similar
problems: 46f9b2c3a2 ("drivers: add tzc380 support").

This problem was introduced in commit 4ecca33988 ("Move include and
source files to logical locations"). At that time, there weren't too
many headers so it wasn't a real issue. However, time has shown that
this creates problems.

Platforms that want to preserve the way they include headers may add the
removed paths to PLAT_INCLUDES, but this is discouraged.
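
For instance, an include moves from the flat form to the fully-qualified
form (paths are illustrative):

    /* Before: flat, global include namespace. */
    #include <context_mgmt.h>

    /* After: full path rooted at include/. */
    #include <lib/el3_runtime/context_mgmt.h>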

Change-Id: I39dc53ed98f9e297a5966e723d1936d6ccf2fc8f
Signed-off-by: Antonio Nino Diaz <antonio.ninodiaz@arm.com>
2019-01-04 10:43:17 +00:00
Antonio Nino Diaz a0fee7474f context_mgmt: Fix MISRA defects
The macro EL_IMPLEMENTED() has been deprecated in favour of the new
function el_implemented().

Change-Id: Ic9b1b81480b5e019b50a050e8c1a199991bf0ca9
Signed-off-by: Antonio Nino Diaz <antonio.ninodiaz@arm.com>
2018-11-01 14:15:39 +00:00
Antonio Nino Diaz 40daecc1be Fix MISRA defects in extension libs
No functional changes.

Change-Id: I2f28f20944f552447ac4e9e755493cd7c0ea1192
Signed-off-by: Antonio Nino Diaz <antonio.ninodiaz@arm.com>
2018-10-29 14:41:48 +00:00
Jeenu Viswambharan 3ff4aaaca4 AArch64: Enable lower ELs to use pointer authentication
Pointer authentication is an Armv8.3 feature that introduces
instructions that can be used to authenticate and verify pointers.

Pointer authentication instructions are allowed to be accessed from all
ELs but only when EL3 explicitly allows for it; otherwise, their usage
will trap to EL3. Since EL3 doesn't have trap handling in place, this
patch unconditionally disables all related traps to EL3 to avoid
potential misconfiguration leading to an unhandled EL3 exception.

Fixes ARM-software/tf-issues#629

Change-Id: I9bd2efe0dc714196f503713b721ffbf05672c14d
Signed-off-by: Jeenu Viswambharan <jeenu.viswambharan@arm.com>
2018-10-16 08:31:13 +01:00
Daniel Boulby 87c8513498 Mark BL31 initialization functions
Mark the initialization functions in BL31, such as context management,
EHF, RAS and PSCI, as __init so that they can be reclaimed by the
platform when no longer needed.

Change-Id: I7446aeee3dde8950b0f410cb766b7a2312c20130
Signed-off-by: Daniel Boulby <daniel.boulby@arm.com>
2018-10-03 11:47:30 +01:00
Julius Werner 24f671f3a9 context_mgmt: Fix HANDLE_EA_EL3_FIRST implementation
This patch fixes a bug in the context management code that causes it to
ignore the HANDLE_EA_EL3_FIRST compile-time option and instead always
configure SCR_EL3 to force all external aborts to trap into EL3. The
code used #ifdef to read a compile-time option declared with add_define
in the Makefile; however, those options are always defined (just set to
either 0 or 1), so #if is the correct syntax to check for them. Also
update the documentation to match.
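
In short, the distinction looks like this (sketch using the option and
bit names from this area of the code):

    /* Wrong: options declared with add_define are always defined (to 0
     * or 1), so this branch is always compiled in. */
    #ifdef HANDLE_EA_EL3_FIRST
            scr_el3 |= SCR_EA_BIT;
    #endif

    /* Right: compiled in only when the option is set to 1. */
    #if HANDLE_EA_EL3_FIRST
            scr_el3 |= SCR_EA_BIT;
    #endif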

This bug has existed since the Nov 2017 commit 76454abf4 (AArch64:
Introduce External Abort handling), which changed the
HANDLE_EA_EL3_FIRST option to use add_define.

Change-Id: I7189f41d0daee78fa2fcf4066323e663e1e04d3d
Signed-off-by: Julius Werner <jwerner@chromium.org>
2018-08-29 17:16:20 -07:00
Jeenu Viswambharan 5f83591880 AArch64: Enable MPAM for lower ELs
Memory Partitioning And Monitoring is an Armv8.4 feature that enables
various memory system components and resources to define partitions.
Software running at various ELs can then assign themselves to the
desired partition to control their performance aspects.

With this patch, when ENABLE_MPAM_FOR_LOWER_ELS is set to 1, EL3 allows
lower ELs to access their own MPAM registers without trapping to EL3.
This patch however doesn't make use of partitioning in EL3; platform
initialisation code should configure and use partitions in EL3 if
required.
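
A rough sketch of the EL3 side, assuming an MPAM3_EL3 accessor and bit
name along these lines (illustrative only):

    /* Enable MPAM and leave the lower-EL trap controls at zero so EL2/EL1
     * can program their own MPAM registers without trapping to EL3. */
    write_mpam3_el3(MPAM3_EL3_MPAMEN_BIT);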

Change-Id: I5a55b6771ccaa0c1cffc05543d2116b60cbbcdcd
Co-authored-by: James Morse <james.morse@arm.com>
Signed-off-by: Jeenu Viswambharan <jeenu.viswambharan@arm.com>
2018-08-20 07:32:19 +01:00
Dimitris Papastamos d003b19093
Merge pull request #1392 from dp-arm/dp/cve_2018_3639
Implement workaround for CVE-2018-3639 on Cortex A57/A72/A73 and A75
2018-05-29 09:28:05 +01:00
Antonio Nino Diaz 1634cae89d context_mgmt: Make cm_init_context_common public
This function can be currently accessed through the wrappers
cm_init_context_by_index() and cm_init_my_context(). However, they only
work on contexts that are associated to a CPU.

By making this function public, it is possible to set up a context that
isn't associated to any CPU. For consistency, it has been renamed to
cm_setup_context().

Change-Id: Ib2146105abc8137bab08745a8adb30ca2c4cedf4
Signed-off-by: Antonio Nino Diaz <antonio.ninodiaz@arm.com>
2018-05-23 13:50:18 +01:00
Dimitris Papastamos fe007b2e15 Add support for dynamic mitigation for CVE-2018-3639
Some CPUs may benefit from using a dynamic mitigation approach for
CVE-2018-3639.  A new SMC interface is defined to allow software
executing in lower ELs to enable or disable the mitigation for their
execution context.
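
A hypothetical lower-EL caller of that interface, assuming the
SMCCC_ARCH_WORKAROUND_2 function ID (0x80007FFF) and the convention that
w1 = 1 enables and w1 = 0 disables the mitigation:

    register uint64_t x0 __asm__("x0") = 0x80007fffULL; /* SMCCC_ARCH_WORKAROUND_2 */
    register uint64_t x1 __asm__("x1") = 1ULL;          /* 1 = enable mitigation */
    __asm__ volatile("smc #0" : "+r" (x0) : "r" (x1) : "x2", "x3", "memory");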

It should be noted that regardless of the state of the mitigation for
lower ELs, code executing in EL3 is always mitigated against
CVE-2018-3639.

NOTE: This change is a compatibility break for any platform using
the declare_cpu_ops_workaround_cve_2017_5715 macro.  Migrate to
the declare_cpu_ops_wa macro instead.

Change-Id: I3509a9337ad217bbd96de9f380c4ff8bf7917013
Signed-off-by: Dimitris Papastamos <dimitris.papastamos@arm.com>
2018-05-23 12:45:48 +01:00
Dimitris Papastamos 10df381100
Merge pull request #1376 from vwadekar/cm-init-actlr-el1
lib: el3_runtime: initialise actlr_el1 to hardware defaults
2018-05-15 18:40:46 +01:00
Varun Wadekar 2ab9617ef2 lib: el3_runtime: initialise actlr_el1 to hardware defaults
The context management library initialises the CPU context for the
secure/non-secure worlds to zero. This leads to zeros being stored
to the actual registers when we restore the CPU context during a
world switch. Denver CPUs don't expect zero to be written to the
implementation-defined actlr_el1 register at any point of time.
Writing a zero to some fields of this register results in an
UNDEFINED exception.

This patch bases the context actlr_el1 value on the actual hardware
register, to maintain parity with the expected settings.
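
A minimal sketch of the new initialisation, assuming the standard
context helpers (the exact code may differ):

    /* Seed the context with the hardware reset value instead of zero. */
    uint64_t actlr_elx = read_actlr_el1();
    write_ctx_reg(get_sysregs_ctx(ctx), CTX_ACTLR_EL1, actlr_elx);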

Change-Id: I1c806d7ff12daa7fd1e5c72825494b81454948f2
Signed-off-by: Varun Wadekar <vwadekar@nvidia.com>
2018-05-09 08:58:15 -07:00
Jeenu Viswambharan 1a7c1cfe70 RAS: Add fault injection support
The ARMv8.4 RAS extensions introduce architectural support for software
to inject faults into the system in order to test fault-handling
software. This patch introduces the build option FAULT_HANDLING_SUPPORT
to allow lower ELs to use registers in the Standard Error Record to
inject faults. The build option RAS_EXTENSION must also be enabled along
with fault injection.

This feature is intended for testing purposes only, and it is advisable
to keep it disabled for production images.

Change-Id: I6f7a4454b15aec098f9505a10eb188c2f928f7ea
Signed-off-by: Jeenu Viswambharan <jeenu.viswambharan@arm.com>
2018-05-04 08:33:17 +01:00
Jeenu Viswambharan 14c6016ad5 AArch64: Introduce RAS handling
RAS extensions are mandatory for ARMv8.2 CPUs, but are also optional
extensions to base ARMv8.0 architecture.

This patch adds build system support to enable RAS features in ARM
Trusted Firmware. A boolean build option RAS_EXTENSION is introduced for
this.

With RAS_EXTENSION, an Exception Synchronization Barrier (ESB) is
inserted at all EL3 vector entries and exits. ESBs will synchronize
pending external aborts before entering EL3, and therefore will contain
and attribute errors to lower EL execution. Any errors thus synchronized
are detected via the DISR_EL1 register.

When RAS_EXTENSION is set to 1, HANDLE_EA_EL3_FIRST must also be set to 1.

Change-Id: I38a19d84014d4d8af688bd81d61ba582c039383a
Signed-off-by: Jeenu Viswambharan <jeenu.viswambharan@arm.com>
2018-05-04 08:33:17 +01:00
Jeenu Viswambharan ef653d93cc AArch64: Refactor GP register restore to separate function
At present, the function that restores general purpose registers also
does ERET. Refactor the restore code to restore general purpose
registers without ERET to complement the save function.

The macro save_x18_to_x29_sp_el0 was used only once, and is therefore
removed, and its contents expanded inline for readability.

No functional changes, but with this patch:

  - The SMC return path will incur a branch-return and an additional
    register load.

  - The unknown SMC path restores registers x0 to x3.

Change-Id: I7a1a63e17f34f9cde810685d70a0ad13ca3b7c50
Signed-off-by: Jeenu Viswambharan <jeenu.viswambharan@arm.com>
2018-05-04 08:32:42 +01:00
Antonio Nino Diaz 085e80ec11 Rename 'smcc' to 'smccc'
When the source code says 'SMCC' it is talking about the SMC Calling
Convention. The correct acronym is SMCCC. This affects a few definitions
and file names.

Some files have been renamed (smcc.h, smcc_helpers.h and smcc_macros.S)
but the old files have been kept for compatibility, they include the
new ones with an ERROR_DEPRECATED guard.

Change-Id: I78f94052a502436fdd97ca32c0fe86bd58173f2f
Signed-off-by: Antonio Nino Diaz <antonio.ninodiaz@arm.com>
2018-03-21 10:49:27 +00:00
David Cunado 1a853370ff Enable SVE for Non-secure world
This patch adds a new build option, ENABLE_SVE_FOR_NS, which when set
to one makes EL3 check whether the Scalable Vector Extension (SVE) is
implemented when entering and exiting the Non-secure world.

If SVE is implemented, EL3 will do the following:

- Entry to Non-secure world: SIMD, FP and SVE functionality is enabled.

- Exit from Non-secure world: SIMD, FP and SVE functionality is
  disabled. As SIMD and FP registers are part of the SVE Z-registers
  then any use of SIMD / FP functionality would corrupt the SVE
  registers.

The build option defaults to 1. The SVE functionality is only supported
on AArch64, so the build option is set to zero when the target
architecture is AArch32.

This build option is not compatible with CTX_INCLUDE_FPREGS; an
assert will be raised on platforms where SVE is implemented and both
ENABLE_SVE_FOR_NS and CTX_INCLUDE_FPREGS are set to 1.

Also note this change prevents secure world use of FP&SIMD registers on
SVE-enabled platforms. Existing Secure-EL1 Payloads will not work on
such platforms unless ENABLE_SVE_FOR_NS is set to 0.

Additionally, on the first entry into the Non-secure world the SVE
functionality is enabled and the SVE Z-register length is set to the
maximum size allowed by the architecture. This includes the use case
where EL2 is implemented but not used.
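
A rough sketch of the enable path on entry to the Non-secure world,
assuming CPTR_EL3/ZCR_EL3 accessors and bit names along these lines
(illustrative only):

    /* Stop trapping SVE, SIMD and FP accesses to EL3, then allow the
     * maximum vector length supported by the implementation. */
    write_cptr_el3((read_cptr_el3() | CPTR_EZ_BIT) & ~TFP_BIT);
    write_zcr_el3(read_zcr_el3() | ZCR_EL3_LEN_MASK);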

Change-Id: Ie2d733ddaba0b9bef1d7c9765503155188fe7dae
Signed-off-by: David Cunado <david.cunado@arm.com>
2017-11-30 17:45:09 +00:00
Dimitris Papastamos ef69e1ea62 AMU: Implement support for aarch32
The `ENABLE_AMU` build option can be used to enable the
architecturally defined AMU counters.  At present, there is no support
for the auxiliary counter group.

Change-Id: Ifc7532ef836f83e629f2a146739ab61e75c4abc8
Signed-off-by: Dimitris Papastamos <dimitris.papastamos@arm.com>
2017-11-29 09:36:35 +00:00
Dimitris Papastamos 380559c1c3 AMU: Implement support for aarch64
The `ENABLE_AMU` build option can be used to enable the
architecturally defined AMU counters.  At present, there is no support
for the auxiliary counter group.

Change-Id: I7ea0c0a00327f463199d1b0a481f01dadb09d312
Signed-off-by: Dimitris Papastamos <dimitris.papastamos@arm.com>
2017-11-29 09:36:05 +00:00
Dimitris Papastamos 281a08cc64 Refactor Statistical Profiling Extensions implementation
Factor out SPE operations in a separate file.  Use the publish
subscribe framework to drain the SPE buffers before entering secure
world.  Additionally, enable SPE before entering normal world.

A side effect of this change is that the profiling buffers are now
only drained when a transition from normal world to secure world
happens.  Previously they were drained also on return from secure
world, which is unnecessary as SPE is not supported in S-EL1.

Change-Id: I17582c689b4b525770dbb6db098b3a0b5777b70a
Signed-off-by: Dimitris Papastamos <dimitris.papastamos@arm.com>
2017-11-20 09:55:01 +00:00
Dimitris Papastamos 0fd0f22298 Factor out extension enabling to a separate function
Factor out extension enabling to a separate function that is called
before exiting from EL3 for first entry into Non-secure world.

Change-Id: Ic21401ebba531134d08643c0a1ca9de0fc590a1b
Signed-off-by: Dimitris Papastamos <dimitris.papastamos@arm.com>
2017-11-20 09:55:01 +00:00
David Cunado 91089f360a Move FPEXC32_EL2 to FP Context
The FPEXC32_EL2 register controls SIMD and FP functionality when the
lower ELs are executing in AArch32 mode. It is architecturally mapped
to AArch32 system register FPEXC.

This patch removes FPEXC32_EL2 register from the System Register context
and adds it to the floating-point context. EL3 only saves / restores the
floating-point context if the build option CTX_INCLUDE_FPREGS is set to 1.

The rationale for this change is that if the Secure world is using FP
functionality and EL3 is not managing the FP context, then the Secure
world will save / restore the appropriate FP registers.

NOTE - this is a break in behaviour in the unlikely case that
CTX_INCLUDE_FPREGS is set to 0 and the platform contains an AArch32
Secure Payload that modifies FPEXC, but does not save and restore
this register.

Change-Id: Iab80abcbfe302752d52b323b4abcc334b585c184
Signed-off-by: David Cunado <david.cunado@arm.com>
2017-11-15 22:42:05 +00:00
Dimitris Papastamos 17b4c0dd0a aarch64: Add PubSub events to capture security state transitions
Add events that trigger before entry to normal/secure world.  The
events trigger after the normal/secure context has been restored.

Similarly add events that trigger after leaving normal/secure world.
The events trigger after the normal/secure context has been saved.

Change-Id: I1b48a7ea005d56b1f25e2b5313d77e67d2f02bc5
Signed-off-by: Dimitris Papastamos <dimitris.papastamos@arm.com>
2017-10-31 10:33:27 +00:00
David Cunado 3e61b2b543 Init and save / restore of PMCR_EL0 / PMCR
Currently TF does not initialise the PMCR_EL0 register in
the secure context or save/restore the register.

In particular, the DP field may not be set to one to prohibit
cycle counting in the secure state, even though event counting
is generally prohibited via the default setting of MDCR_EL3.SPME
to 0.

This patch initialises PMCR_EL0.DP to one in the secure state
to prohibit cycle counting and also initialises other fields
that have an architecturally UNKNOWN reset value.

Additionally, PMCR_EL0 is added to the list of registers that are
saved and restored during a world switch.

Similar changes are made for PMCR for the AArch32 execution state.

NOTE: secure world code at lower ELs that assumes other values in PMCR_EL0
will be impacted.

Change-Id: Iae40e8c0a196d74053accf97063ebc257b4d2f3a
Signed-off-by: David Cunado <david.cunado@arm.com>
2017-10-13 09:48:48 +01:00
davidcunado-arm 413115e152 Merge pull request #1019 from etienne-lms/log-size
CPU_DATA_LOG2SIZE depends on cache line size
2017-09-07 00:40:59 +01:00
Etienne Carriere 86606eb51e cpu log buffer size depends on cache line size
Platforms may use specific cache line sizes. Since CACHE_WRITEBACK_GRANULE
defines the platform-specific cache line size, it is used to define
CPU_DATA_SIZE, the size of the cpu_data structure, aligned on the cache
line size.

Introduce the assembly macro 'mov_imm' for AArch32 to simplify the
implementation of the function '_cpu_data_by_index'.

Change-Id: Ic2d49ffe0c3e51649425fd9c8c99559c582ac5a1
Signed-off-by: Etienne Carriere <etienne.carriere@linaro.org>
2017-09-01 10:22:20 +02:00
Ken Kuang 2e09d4f804 fix a typo about sctlr_el2
The typo caused write_sctlr_el2 to use the whole sctlr_el1 value except the EE bit.

The code did not "Use SCTLR_EL1.EE value to initialise sctlr_el2";
instead, it read out SCTLR_EL1, cleared the EE bit, and wrote the result
to sctlr_el2.

Signed-off-by: Ken Kuang <ken.kuang@spreadtrum.com>
2017-08-23 16:39:18 +08:00
dp-arm d832aee900 aarch64: Enable Statistical Profiling Extensions for lower ELs
SPE is only supported in Non-secure state. Accesses to SPE-specific
registers from S-EL1 will trap to EL3. During a world switch, before
`TTBR` is modified, the SPE profiling buffers are drained. This is to
avoid a potential invalid memory access in S-EL1.

SPE is architecturally specified only for AArch64.

Change-Id: I04a96427d9f9d586c331913d815fdc726855f6b0
Signed-off-by: dp-arm <dimitris.papastamos@arm.com>
2017-06-22 10:33:19 +01:00
David Cunado 18f2efd67d Fully initialise essential control registers
This patch updates the el3_arch_init_common macro so that it fully
initialises essential control registers rather than relying on hardware
to set the reset values.

The context management functions are also updated to fully initialise
the appropriate control registers when initialising the non-secure and
secure context structures and when preparing to leave EL3 for a lower
EL.

This gives better alignment with the ARM ARM which states that software
must initialise RES0 and RES1 fields with 0 / 1.

This patch also corrects the following typos:

"NASCR definitions" -> "NSACR definitions"

Change-Id: Ia8940b8351dc27bc09e2138b011e249655041cfc
Signed-off-by: David Cunado <david.cunado@arm.com>
2017-06-21 17:57:54 +01:00