Commit Graph

477 Commits

Author SHA1 Message Date
Jeenu Viswambharan 64ee263e20 DynamIQ: Enable MMU without using stack
Having an active stack while enabling the MMU has shown coherency problems.
This patch builds on top of translation library changes that introduce
MMU-enabling without using the stack.

Previously, with HW_ASSISTED_COHERENCY, data caches were disabled while
enabling the MMU only because of the active stack. Now that we can enable
the MMU without using the stack, we can enable both the MMU and data
caches at the same time.

NOTE: Since this feature depends on using translation table library v2,
disallow using translation table library v1 with HW_ASSISTED_COHERENCY.

Fixes ARM-software/tf-issues#566

Change-Id: Ie55aba0c23ee9c5109eb3454cb8fa45d74f8bbb2
Signed-off-by: Jeenu Viswambharan <jeenu.viswambharan@arm.com>
2018-06-27 11:31:30 +01:00
Jeenu Viswambharan 92bec97f5c xlat v1: Provide direct MMU-enabling stubs
An earlier patch split the MMU-enabling function for translation library
v2. Although we don't intend to introduce the exact same functionality for
xlat v1, this patch introduces stubs for directly enabling the MMU to
maintain API compatibility.

Change-Id: Id7d56e124c80af71de999fcda10f1734b50bca97
Signed-off-by: Jeenu Viswambharan <jeenu.viswambharan@arm.com>
2018-06-27 11:31:30 +01:00
Jeenu Viswambharan 0cc7aa8964 xlat v2: Split MMU setup and enable
At present, the function provided by the translation library to enable the
MMU constructs the appropriate values and programs them into the right
registers. The construction of the initial values, however, is only
required once, as both the primary and the secondaries program the same
values.

Additionally, the MMU-enabling function is written in C, which means
there is an active stack at the time of enabling the MMU. On some systems,
like Arm DynamIQ, having an active stack while enabling the MMU during
warm boot might lead to coherency problems.

This patch addresses both the above problems by:

  - Splitting the MMU-enabling function into two: one that sets up
    values to be programmed into the registers, and another one that
    takes the pre-computed values and writes to the appropriate
    registers. With this, the primary effectively calls both functions
    to have the MMU enabled, but secondaries only need to call the
    latter.

  - Rewriting the function that enables MMU in assembly so that it
    doesn't use stack.

This patch also fixes a number of MISRA issues along the way.
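
A rough C-level sketch of the resulting split (the names and structure
below are illustrative assumptions, not the library's actual API):

    /* Illustrative sketch only -- names and fields are assumptions. */
    #include <stdint.h>

    /* Register values computed once by the primary CPU and shared. */
    typedef struct {
        uint64_t mair;
        uint64_t tcr;
        uint64_t ttbr0;
    } mmu_cfg_t;

    static mmu_cfg_t mmu_cfg;

    /* Step 1 (C, primary only): compute and store the values. */
    void setup_mmu_cfg(uint64_t mair, uint64_t tcr, uint64_t ttbr0)
    {
        mmu_cfg.mair  = mair;
        mmu_cfg.tcr   = tcr;
        mmu_cfg.ttbr0 = ttbr0;
    }

    /* Step 2 (assembly in the actual patch, stack-less): every CPU loads
     * mmu_cfg, programs MAIR/TCR/TTBR0, then sets the MMU enable bit in
     * SCTLR. Secondaries call only this step. */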

Change-Id: I0faca97263a970ffe765f0e731a1417e43fbfc45
Signed-off-by: Jeenu Viswambharan <jeenu.viswambharan@arm.com>
2018-06-27 11:31:30 +01:00
Dimitris Papastamos 9dfd755303
Merge pull request #1437 from jeenu-arm/ras-remaining
SDEI dispatch changes to enable RAS use cases
2018-06-22 09:36:59 +01:00
Antonio Nino Diaz 3a1b7b108a xlat: Remove mmap_attr_t enum type
The values defined in this type are used in logical operations, which
goes against MISRA Rule 10.1: "Operands shall not be of an inappropriate
essential type".

Now, `unsigned int` is used instead. This also allows us to move the
dynamic mapping bit from 30 to 31. Using bit 31 was previously undefined
behaviour: an enum is signed by default, bit 31 corresponds to the sign
bit, and modifying the sign bit is undefined behaviour. Now bit 31 is free
to use as it was originally meant to be.

mmap_attr_t is now defined as an `unsigned int` for backwards
compatibility.
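
For illustration, flags defined over an unsigned type can safely occupy
bit 31 (the names below are hypothetical, not the library's):

    /* Hypothetical sketch: with unsigned flags, bit 31 is usable. */
    typedef unsigned int mmap_attr_t;         /* kept for backwards compatibility */

    #define MT_ATTR_BIT(n)   (1U << (n))
    #define MT_DYNAMIC_FLAG  MT_ATTR_BIT(31)  /* previously bit 30, while the
                                                 type was a (signed) enum */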

Change-Id: I6b31218c14b9c7fdabebe432de7fae6e90a97f34
Signed-off-by: Antonio Nino Diaz <antonio.ninodiaz@arm.com>
2018-06-22 08:36:21 +01:00
Jeenu Viswambharan e7b9473e15 BL31: Introduce jump primitives
This patch introduces setjmp() and longjmp() primitives to enable
standard setjmp/longjmp-style execution. Both APIs take a pointer to a
struct jmpbuf, which hosts the CPU registers saved and restored during
the jump.

As per the standard usage (a brief sketch follows below):

  - setjmp() returns 0 when a jump is set up, and a non-zero value when
    returning from a jump.

  - The caller of setjmp() must not return, or otherwise update the stack
    pointer, in the meantime.
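
A minimal sketch of the pattern these primitives enable, shown with the
hosted C equivalents (the BL31 primitives take a pointer to struct jmpbuf
rather than a jmp_buf):

    /* Sketch using the standard C library equivalents. */
    #include <setjmp.h>
    #include <stdio.h>

    static jmp_buf env;

    static void deep_call(void)
    {
        longjmp(env, 1);                 /* unwind back to the setjmp() point */
    }

    int main(void)
    {
        if (setjmp(env) == 0) {          /* 0: the jump has just been set up */
            deep_call();
        } else {
            printf("returned via longjmp\n");  /* non-zero: returned via jump */
        }
        return 0;
    }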

Change-Id: I4af1d32e490cfa547979631b762b4cba188d0551
Signed-off-by: Jeenu Viswambharan <jeenu.viswambharan@arm.com>
2018-06-21 16:15:23 +01:00
Antonio Nino Diaz 7febd83e22 xlat_v2: Fix descriptor debug print
The XN, PXN and UXN bits are part of the upper attributes, not the
lower attributes.
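
For reference, in an Armv8-A stage-1 block/page descriptor the
execute-never controls sit in the upper attributes (illustrative defines,
not the library's names):

    /* Illustrative bit positions only. */
    #define DESC_PXN_BIT     (1ULL << 53)  /* PXN (regimes with two VA ranges) */
    #define DESC_UXN_XN_BIT  (1ULL << 54)  /* UXN, or XN for EL2/EL3 regimes */
    #define DESC_LOWER_ATTRS 0x0000000000000FFCULL  /* lower attributes: bits 2-11 */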

Change-Id: Ia5e83f06f2a8de88b551f55f1d36d694918ccbc0
Signed-off-by: Antonio Nino Diaz <antonio.ninodiaz@arm.com>
2018-06-21 10:53:39 +01:00
Dimitris Papastamos b8dc3f146d
Merge pull request #1430 from dp-arm/dp/cpulib
cpulib: Add ISBs or comment why they are unneeded
2018-06-19 15:07:30 +01:00
Dimitris Papastamos c0b7606f91
Merge pull request #1420 from Yann-lms/mm_cursor_size_check
xlat_v2: add a check on mm_cursor->size to avoid infinite loop
2018-06-19 13:39:55 +01:00
Dimitris Papastamos bd5a76ac7c cpulib: Add ISBs or comment why they are unneeded
Change-Id: I18a41bb9fedda635c3c002a7f112578808410ef6
Signed-off-by: Dimitris Papastamos <dimitris.papastamos@arm.com>
2018-06-19 10:34:51 +01:00
Dimitris Papastamos 59c4346383
Merge pull request #1415 from antonio-nino-diaz-arm/an/spm-fixes
Minor fixes to SPM
2018-06-14 14:33:13 +01:00
Yann Gautier 75df62699b xlat_v2: add a check on mm_cursor->size to avoid infinite loop
The issue can occur if end_va is equal to the maximum architectural
address and mm_cursor points to the last entry of the mmap_region_t
table: {0}.
The first condition of the while loop will then be true, e.g. on AArch32
we have:
mm_cursor->base_va (=0) + mm_cursor->size (=0) - 1 == end_va (=0xFFFFFFFF)
and mm_cursor->size (=0) will be less than mm->size.

A check that mm_cursor->size != 0 should be added, as in the previous
while loop, to avoid this kind of infinite loop.
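
A simplified sketch of the fixed loop guard (the types and helper below
are illustrative, not the library's actual code):

    /* Sketch only: stop the walk at the terminating {0} entry. */
    #include <stddef.h>
    #include <stdint.h>

    typedef struct { uintptr_t base_va; size_t size; } region_t;

    static const region_t *skip_overlaps(const region_t *mm_cursor,
                                         uintptr_t end_va, size_t new_size)
    {
        while ((mm_cursor->size != 0U) &&
               (mm_cursor->base_va + mm_cursor->size - 1U == end_va) &&
               (mm_cursor->size < new_size)) {
            ++mm_cursor;
        }
        return mm_cursor;
    }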

fixes arm-software/tf-issues#594

Signed-off-by: Yann Gautier <yann.gautier@st.com>
2018-06-14 14:36:20 +02:00
Dimitris Papastamos 74a44dca29
Merge pull request #1399 from danielboulby-arm/db/MISRA
MISRA 5.1, 5.3 & 5.7 compliance changes
2018-06-13 13:32:14 +01:00
Antonio Nino Diaz a0b9bb79a0 xlat v2: Introduce xlat granule size helpers
The function xlat_arch_is_granule_size_supported() can be used to check
if a specific granule size is supported. In Armv8, AArch32 only supports
4 KiB pages. AArch64 supports 4 KiB, 16 KiB or 64 KiB depending on the
implementation, which is detected at runtime.

The function xlat_arch_get_max_supported_granule_size() returns the max
granule size supported by the implementation.

Even though right now they are only used by SPM, they may be useful in
other places in the future. This patch moves the code currently in SPM
to the xlat tables lib so that it can be reused.
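
A hedged usage sketch (the two function names come from the commit
message; their prototypes and the calling code are assumptions):

    /* Usage sketch only -- prototypes assumed. */
    #include <stdbool.h>
    #include <stddef.h>

    bool xlat_arch_is_granule_size_supported(size_t size);
    size_t xlat_arch_get_max_supported_granule_size(void);

    static size_t pick_granule_size(void)
    {
        /* Prefer 64 KiB pages when the implementation supports them. */
        if (xlat_arch_is_granule_size_supported(64U * 1024U))
            return 64U * 1024U;

        return xlat_arch_get_max_supported_granule_size();
    }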

Change-Id: If54624a5ecf20b9b9b7f38861b56383a03bbc8a4
Signed-off-by: Antonio Nino Diaz <antonio.ninodiaz@arm.com>
2018-06-13 09:19:41 +01:00
Daniel Boulby 776ff52a8d Fix MISRA Rule 5.7 Part 3
Rule 5.7: A tag name shall be a unique identifier

Follow the convention of shorter names for smaller scopes to fix
violations of MISRA rule 5.7.

Fixed For:
    make ARM_TSP_RAM_LOCATION=tdram LOG_LEVEL=50 PLAT=fvp SPD=opteed

Change-Id: I5fbb5d6ebddf169550eddb07ed880f5c8076bb76
Signed-off-by: Daniel Boulby <daniel.boulby@arm.com>
2018-06-12 13:21:36 +01:00
Daniel Boulby 4069292388 Fix MISRA Rule 5.7 Part 1
Rule 5.7: A tag name shall be a unique identifier

There were 2 amu_ctx struct type definitions:
    - In lib/extensions/amu/aarch64/amu.c
    - In lib/cpus/aarch64/cpuamu.c

Renamed the latter to cpuamu_ctx to avoid this name clash.

To avoid violating Rule 8.3, the function amu_ctxs() was also renamed (to
cpuamu_ctxs()) since it now returns a different type (cpuamu_ctx) than the
other amu_ctxs() function.

Fixed for:
    make LOG_LEVEL=50 PLAT=fvp

Change-Id: Ieeb7e390ec2900fd8b775bef312eda93804a43ed
Signed-off-by: Daniel Boulby <daniel.boulby@arm.com>
2018-06-12 13:21:36 +01:00
Daniel Boulby 7cb81945d5 Fix MISRA Rule 5.3 Part 4
Use a _ prefix for macro arguments to prevent those arguments from hiding
variables of the same name in the outer scope (an illustrative example
follows below).

Rule 5.3: An identifier declared in an inner scope shall not
          hide an identifier declared in an outer scope
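
An illustrative before/after (not taken from the tree):

    /* Before: the argument name can hide an identifier of the same name
     * in an outer scope (MISRA Rule 5.3). */
    #define ROUND_UP_OLD(addr, size)  (((addr) + (size) - 1) & ~((size) - 1))

    /* After: the _ prefix keeps macro arguments distinct. */
    #define ROUND_UP(_addr, _size)    (((_addr) + (_size) - 1) & ~((_size) - 1))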

Fixed For:
    make PLAT=fvp USE_COHERENT_MEM=0

Change-Id: If50c583d3b63799ee6852626b15be00c0f6b10a0
Signed-off-by: Daniel Boulby <daniel.boulby@arm.com>
2018-06-12 13:21:36 +01:00
Daniel Boulby 896a5902ec Fix MISRA Rule 5.3 Part 2
Use a _ prefix for macro arguments to prevent those arguments from hiding
variables of the same name in the outer scope.

Rule 5.3: An identifier declared in an inner scope shall not
          hide an identifier declared in an outer scope

Fixed For:
    make LOG_LEVEL=50 PLAT=fvp

Change-Id: I67b6b05cbad4aeca65ce52981b4679b340604708
Signed-off-by: Daniel Boulby <daniel.boulby@arm.com>
2018-06-12 13:21:36 +01:00
Dimitris Papastamos e109b0ffea
Merge pull request #1391 from jts-arm/misra
MISRA rule 21.15 fix
2018-06-12 13:01:35 +01:00
John Tsichritzis bdcd33a858 MISRA rule 21.15 fix
Rule 21.15: The pointer arguments to the Standard Library functions
    memcpy, memmove and memcmp shall be pointers to qualified or unqualified
    versions of compatible types.

    Basically that means that both pointer arguments must be of the same
    type. However, even if the pointers passed as arguments to the above
    functions are of the same type, Coverity still flags a violation if we
    do pointer arithmetic directly at the function call. Thus the pointer
    arithmetic operations were moved outside of the function arguments (an
    illustrative before/after follows below).
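
An illustrative before/after of the kind of change described (not the
actual diff):

    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>

    void copy_at_offset(uint8_t *dst, const uint8_t *src, size_t off, size_t len)
    {
        /* Before (flagged): memcpy(dst + off, src + off, len); */

        /* After: pointer arithmetic moved out of the argument list. */
        uint8_t *d = dst + off;
        const uint8_t *s = src + off;

        memcpy(d, s, len);
    }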

    First detected on the following configuration
            make PLAT=fvp LOG_LEVEL=50

    Change-Id: I8b912ec1bfa6f2d60857cb1bd453981fd7001b94
    Signed-off-by: John Tsichritzis <john.tsichritzis@arm.com>
2018-06-11 11:41:09 +01:00
Dimitris Papastamos 608529aa24
Merge pull request #1397 from dp-arm/dp/cortex-a76
Add support for Cortex-A76 and Cortex-Ares
2018-06-08 14:01:38 +01:00
Dimitris Papastamos d6b798097e Implement dynamic mitigation for CVE-2018-3639 on Cortex-A76
The Cortex-A76 implements SMCCC_ARCH_WORKAROUND_2 as defined in
"Firmware interfaces for mitigating cache speculation vulnerabilities
System Software on Arm Systems"[0].

Dynamic mitigation for CVE-2018-3639 is enabled/disabled by
setting/clearing bit 16 (Disable load pass store) of `CPUACTLR2_EL1`.

NOTE: The generic code that implements dynamic mitigation does not
currently implement the expected semantics when dispatching an SDEI
event to a lower EL.  This will be fixed in a separate patch.

[0] https://developer.arm.com/cache-speculation-vulnerability-firmware-specification
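
A rough C-level sketch of the bit manipulation involved (the register
accessors are hypothetical; the actual change lives in the Cortex-A76 CPU
support code):

    /* Sketch only -- accessor names are assumptions; CPUACTLR2_EL1 is an
     * IMPLEMENTATION DEFINED register. */
    #include <stdint.h>

    #define CPUACTLR2_DIS_LOAD_PASS_STORE  (1ULL << 16)

    uint64_t read_cpuactlr2_el1(void);        /* assumed helpers */
    void write_cpuactlr2_el1(uint64_t val);

    void cortex_a76_set_cve_2018_3639_mitigation(int enable)
    {
        uint64_t val = read_cpuactlr2_el1();

        if (enable)
            val |= CPUACTLR2_DIS_LOAD_PASS_STORE;   /* mitigated */
        else
            val &= ~CPUACTLR2_DIS_LOAD_PASS_STORE;

        write_cpuactlr2_el1(val);
    }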

Change-Id: I8fb2862b9ab24d55a0e9693e48e8be4df32afb5a
Signed-off-by: Dimitris Papastamos <dimitris.papastamos@arm.com>
2018-06-08 11:46:31 +01:00
Dimitris Papastamos 040b546e94 Implement Cortex-Ares 1043202 erratum workaround
The workaround uses the instruction patching feature of the Ares cpu.

Change-Id: I868fce0dc0e8e41853dcce311f01ee3867aabb59
Signed-off-by: Dimitris Papastamos <dimitris.papastamos@arm.com>
2018-06-08 11:46:31 +01:00
Dimitris Papastamos 08268e27ab Add AMU support for Cortex-Ares
Change-Id: Ia170c12d3929a616ba80eb7645c301066641f5cc
Signed-off-by: Dimitris Papastamos <dimitris.papastamos@arm.com>
2018-06-08 11:46:31 +01:00
Isla Mitchell abbffe98ed Add support for Cortex-Ares and Cortex-A76 CPUs
Both Cortex-Ares and Cortex-A76 CPUs use the ARM DynamIQ Shared Unit
(DSU). The power-down and power-up sequences are therefore mostly
managed in hardware, and the required software operations are simple.

Change-Id: I3a9447b5bdbdbc5ed845b20f6564d086516fa161
Signed-off-by: Isla Mitchell <isla.mitchell@arm.com>
2018-06-08 11:46:31 +01:00
Dimitris Papastamos 2b91536625 Fast path SMCCC_ARCH_WORKAROUND_1 calls from AArch32
When SMCCC_ARCH_WORKAROUND_1 is invoked from a lower EL running in
AArch32 state, ensure that the SMC call will take a shortcut in EL3.
This minimizes the time it takes to apply the mitigation in EL3.

When lower ELs run in AArch32, it is preferred that they execute the
`BPIALL` instruction to invalidate the BTB. However, on some cores the
`BPIALL` instruction may be a no-op, and those cores therefore benefit
from having the SMCCC_ARCH_WORKAROUND_1 call go through the fast path.

Change-Id: Ia38abd92efe2c4b4a8efa7b70f260e43c5bda8a5
Signed-off-by: Dimitris Papastamos <dimitris.papastamos@arm.com>
2018-06-07 14:34:45 +01:00
Dimitris Papastamos d003b19093
Merge pull request #1392 from dp-arm/dp/cve_2018_3639
Implement workaround for CVE-2018-3639 on Cortex A57/A72/A73 and A75
2018-05-29 09:28:05 +01:00
Antonio Nino Diaz 1634cae89d context_mgmt: Make cm_init_context_common public
This function can currently be accessed through the wrappers
cm_init_context_by_index() and cm_init_my_context(). However, they only
work on contexts that are associated with a CPU.

By making this function public, it is possible to set up a context that
isn't associated with any CPU. For consistency, it has been renamed to
cm_setup_context().
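
A hedged usage sketch (the prototype is inferred from the wrappers
described above; the headers and calling code are assumptions):

    /* Usage sketch only. */
    #include <context.h>
    #include <context_mgmt.h>

    static cpu_context_t extra_ctx;   /* a context not tied to any CPU */

    void prepare_extra_context(const entry_point_info_t *ep_info)
    {
        cm_setup_context(&extra_ctx, ep_info);
    }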

Change-Id: Ib2146105abc8137bab08745a8adb30ca2c4cedf4
Signed-off-by: Antonio Nino Diaz <antonio.ninodiaz@arm.com>
2018-05-23 13:50:18 +01:00
Dimitris Papastamos fe007b2e15 Add support for dynamic mitigation for CVE-2018-3639
Some CPUs may benefit from using a dynamic mitigation approach for
CVE-2018-3639. A new SMC interface is defined to allow software executing
in lower ELs to enable or disable the mitigation for their execution
context.

It should be noted that regardless of the state of the mitigation for
lower ELs, code executing in EL3 is always mitigated against
CVE-2018-3639.

NOTE: This change is a compatibility break for any platform using
the declare_cpu_ops_workaround_cve_2017_5715 macro.  Migrate to
the declare_cpu_ops_wa macro instead.
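
For reference, a lower-EL caller would toggle the mitigation roughly as
below; the function ID and argument convention follow the cache
speculation firmware interfaces specification, but treat the exact
details here as assumptions:

    /* Sketch only, assuming SMCCC_ARCH_WORKAROUND_2 = 0x80007FFF and that
     * x1 = 1 enables / 0 disables the mitigation for the calling context. */
    #define SMCCC_ARCH_WORKAROUND_2  0x80007FFFU

    static inline void arch_workaround_2(unsigned long enable)
    {
        register unsigned long x0 __asm__("x0") = SMCCC_ARCH_WORKAROUND_2;
        register unsigned long x1 __asm__("x1") = enable;

        __asm__ volatile("smc #0"
                         : "+r"(x0), "+r"(x1)
                         :
                         : "x2", "x3", "memory");
    }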

Change-Id: I3509a9337ad217bbd96de9f380c4ff8bf7917013
Signed-off-by: Dimitris Papastamos <dimitris.papastamos@arm.com>
2018-05-23 12:45:48 +01:00
Dimitris Papastamos e086570815 aarch32: Implement static workaround for CVE-2018-3639
Implement static mitigation for CVE-2018-3639 on
Cortex A57 and A72.

Change-Id: I83409a16238729b84142b19e258c23737cc1ddc3
Signed-off-by: Dimitris Papastamos <dimitris.papastamos@arm.com>
2018-05-23 12:45:48 +01:00
Dimitris Papastamos b8a25bbb0b Implement static workaround for CVE-2018-3639
For affected CPUs, this approach enables the mitigation during EL3
initialization, following every PE reset. No mechanism is provided to
disable the mitigation at runtime.

This approach permanently mitigates the entire software stack and no
additional mitigation code is required in other software components.

TF-A implements this approach for the following affected CPUs:

*   Cortex-A57 and Cortex-A72, by setting bit 55 (Disable load pass store) of
    `CPUACTLR_EL1` (`S3_1_C15_C2_0`).

*   Cortex-A73, by setting bit 3 of `S3_0_C15_C0_0` (not documented in the
    Technical Reference Manual (TRM)).

*   Cortex-A75, by setting bit 35 (reserved in TRM) of `CPUACTLR_EL1`
    (`S3_0_C15_C1_0`).

Additionally, a new SMC interface is implemented to allow software
executing in lower ELs to discover whether the system is mitigated
against CVE-2018-3639.

Refer to "Firmware interfaces for mitigating cache speculation
vulnerabilities System Software on Arm Systems"[0] for more
information.

[0] https://developer.arm.com/cache-speculation-vulnerability-firmware-specification

Change-Id: I084aa7c3bc7c26bf2df2248301270f77bed22ceb
Signed-off-by: Dimitris Papastamos <dimitris.papastamos@arm.com>
2018-05-23 12:45:48 +01:00
Dimitris Papastamos 2c3a10780d Rename symbols and files relating to CVE-2017-5715
This patch renames symbols and files relating to CVE-2017-5715 to make
it easier to introduce new symbols and files for new CVE mitigations.

Change-Id: I24c23822862ca73648c772885f1690bed043dbc7
Signed-off-by: Dimitris Papastamos <dimitris.papastamos@arm.com>
2018-05-23 12:45:48 +01:00
Dimitris Papastamos 2c893f50ac
Merge pull request #1378 from vwadekar/denver-cve-2017-5715
CVE-2017-5715 mitigation for Denver CPUs
2018-05-16 10:59:25 +01:00
Varun Wadekar b0301467bc Workaround for CVE-2017-5715 on NVIDIA Denver CPUs
Flush the indirect branch predictor and RSB on entry to EL3 by issuing
a newly added instruction for Denver CPUs. Support for this operation
can be determined by comparing bits 19:16 of ID_AFR0_EL1 with 0b0001.

To achieve this without performing any branch instruction, a per-cpu
vbar is installed which executes the workaround and then branches off
to the corresponding vector entry in the main vector table. A side
effect of this change is that the main vbar is configured before any
reset handling. This is to allow the per-cpu reset function to override
the vbar setting.
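
A sketch of the feature check described above (the helper is written
inline here for illustration; the actual workaround is installed from the
Denver CPU support code):

    /* Sketch: the flush operation is advertised when ID_AFR0_EL1[19:16]
     * reads 0b0001 (per the commit message). */
    #include <stdint.h>

    static inline uint64_t read_id_afr0_el1(void)
    {
        uint64_t val;

        __asm__ volatile("mrs %0, id_afr0_el1" : "=r"(val));
        return val;
    }

    static inline int denver_has_bp_flush_op(void)
    {
        return ((read_id_afr0_el1() >> 16) & 0xFU) == 0x1U;
    }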

Change-Id: Ief493cd85935bab3cfee0397e856db5101bc8011
Signed-off-by: Varun Wadekar <vwadekar@nvidia.com>
2018-05-15 15:53:50 -07:00
Dimitris Papastamos 10df381100
Merge pull request #1376 from vwadekar/cm-init-actlr-el1
lib: el3_runtime: initialise actlr_el1 to hardware defaults
2018-05-15 18:40:46 +01:00
Dimitris Papastamos a513506b07
Merge pull request #1373 from jeenu-arm/ras-support
RAS support
2018-05-15 15:34:20 +01:00
Varun Wadekar 2ab9617ef2 lib: el3_runtime: initialise actlr_el1 to hardware defaults
The context management library initialises the CPU context for the
secure/non-secure worlds to zero. This leads to zeros being stored to the
actual registers when we restore the CPU context during a world switch.
Denver CPUs don't expect zero to be written to the implementation-defined
actlr_el1 register at any point in time. Writing a zero to some fields of
this register results in an UNDEFINED exception.

This patch bases the context actlr_el1 value on the actual hardware
register, to maintain parity with the expected settings.
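
Conceptually, the saved actlr_el1 is now seeded from the live register
instead of zero (a sketch; the context-accessor side is omitted and the
names are assumptions):

    /* Sketch only. */
    #include <stdint.h>

    static inline uint64_t read_actlr_el1(void)
    {
        uint64_t val;

        __asm__ volatile("mrs %0, actlr_el1" : "=r"(val));
        return val;
    }

    /* Called during context initialisation: seed the saved value from
     * hardware rather than leaving it zeroed. */
    void seed_ctx_actlr_el1(uint64_t *ctx_actlr_el1)
    {
        *ctx_actlr_el1 = read_actlr_el1();
    }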

Change-Id: I1c806d7ff12daa7fd1e5c72825494b81454948f2
Signed-off-by: Varun Wadekar <vwadekar@nvidia.com>
2018-05-09 08:58:15 -07:00
Dimitris Papastamos 885ca54a75
Merge pull request #1377 from robertovargas-arm/compiler-warnings
Compiler warnings
2018-05-09 13:40:35 +01:00
Roberto Vargas a83a74d230 Don't use variables as tf_printf format strings
Using variables as format strings can create security problems when the
user can control those strings. Some compilers generate warnings in such
cases, even when the variables are constants and are not controlled by
the user.
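
An illustrative before/after (tf_printf is TF-A's lightweight printf; the
fix is to pass the variable as data, keeping the format string constant):

    #include <debug.h>   /* declares tf_printf() */

    void log_message(const char *msg)
    {
        /* Before (can trigger format-string warnings): tf_printf(msg); */

        /* After: the variable is data, the format string is a literal. */
        tf_printf("%s", msg);
    }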

Change-Id: I65dee1d1b66feab38cbf298290a86fa56e6cca40
Signed-off-by: Roberto Vargas <roberto.vargas@arm.com>
2018-05-09 11:26:36 +01:00
danh-arm 43d71452b2
Merge pull request #1354 from robertovargas-arm/mem_protect
ARM platforms: Demonstrate mem_protect from el3_runtime
2018-05-08 11:21:04 +01:00
Jeenu Viswambharan 1a7c1cfe70 RAS: Add fault injection support
The ARMv8.4 RAS extensions introduce architectural support for software
to inject faults into the system in order to test fault-handling
software. This patch introduces the build option FAULT_HANDLING_SUPPORT
to allow lower ELs to use registers in the Standard Error Record to
inject faults. The build option RAS_EXTENSION must also be enabled along
with fault injection.

This feature is intended for testing purposes only, and it is advisable
to keep it disabled in production images.

Change-Id: I6f7a4454b15aec098f9505a10eb188c2f928f7ea
Signed-off-by: Jeenu Viswambharan <jeenu.viswambharan@arm.com>
2018-05-04 08:33:17 +01:00
Jeenu Viswambharan ca6d918582 RAS: Allow individual interrupt registration
EHF currently allows registering interrupt handlers for defined priority
ranges. This is primarily targeted at various EL3 dispatchers
to own ranges of secure interrupt priorities in order to delegate
execution to lower ELs.

The RAS support added by earlier patches necessitates registering
handlers based on interrupt number so that error handling agents shall
receive and handle specific Error Recovery or Fault Handling interrupts
at EL3.

This patch introduces a macro, RAS_INTERRUPTS(), to declare an array of
interrupt numbers and handlers. Error handling agents can use this macro
to register handlers for individual RAS interrupts. The array is expected
to be sorted in increasing order of interrupt number.

As part of RAS initialisation, the list of all RAS interrupts is sorted
by ID so that, given an interrupt, its handler can be looked up with a
simple binary search.
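
A simplified sketch of the lookup this enables (the types and function
below are illustrative, not the framework's actual definitions):

    /* Sketch only: binary search over an array sorted by intr_number. */
    #include <stddef.h>
    #include <stdint.h>

    struct ras_intr {
        uint32_t intr_number;
        int (*handler)(uint32_t intr, void *cookie);
    };

    static const struct ras_intr *find_handler(const struct ras_intr *tbl,
                                               size_t count, uint32_t intr)
    {
        size_t lo = 0, hi = count;

        while (lo < hi) {
            size_t mid = lo + ((hi - lo) / 2);

            if (tbl[mid].intr_number == intr)
                return &tbl[mid];
            if (tbl[mid].intr_number < intr)
                lo = mid + 1;
            else
                hi = mid;
        }

        return NULL;   /* no handler registered for this interrupt */
    }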

For an error handling agent that wants to handle a RAS interrupt, the
platform must:

  - Define PLAT_RAS_PRI to be the priority of all RAS exceptions.

  - Enumerate interrupts to have the GIC driver program individual EL3
    interrupts to the required priority range. This is required by EHF
    even before this patch.

Documentation to follow.

Change-Id: I9471e4887ff541f8a7a63309e9cd8f771f76aeda
Signed-off-by: Jeenu Viswambharan <jeenu.viswambharan@arm.com>
2018-05-04 08:33:17 +01:00
Jeenu Viswambharan 362599eca4 RAS: Add support for node registration
Previous patches added frameworks for handling RAS errors. This patch
introduces features that the platform can use to enumerate and iterate
RAS nodes:

  - The REGISTER_RAS_NODES() macro can be used to expose an array of
    ras_node_info_t structures. Each ras_node_info_t describes a RAS
    node, along with a handler to probe the node for errors and, if the
    node did record an error, another handler to handle it.

  - The macro for_each_ras_node() can be used to iterate over the
    registered RAS nodes, probe for, and handle any errors.

The common platform EA handler has been amended using error handling
primitives introduced by both this and previous patches.

Change-Id: I2e13f65a88357bc48cd97d608db6c541fad73853
Signed-off-by: Jeenu Viswambharan <jeenu.viswambharan@arm.com>
2018-05-04 08:33:17 +01:00
Jeenu Viswambharan 30d81c36da RAS: Add helpers to access Standard Error Records
The ARMv8 RAS Extensions introduced Standard Error Records which are a
set of standard registers through which:

  - The platform can configure RAS node policy, e.g. the notification
    mechanism;

  - RAS nodes can record and expose error information for error handling
    agents.

Standard Error Records can be accessed via either memory-mapped or
System registers. This patch adds helper functions to access registers
and fields within an error record.

Change-Id: I6594ba799f4a1789d7b1e45b3e17fd40e7e0ba5c
Signed-off-by: Jeenu Viswambharan <jeenu.viswambharan@arm.com>
2018-05-04 08:33:17 +01:00
Jeenu Viswambharan 14c6016ad5 AArch64: Introduce RAS handling
RAS extensions are mandatory for ARMv8.2 CPUs, but are also an optional
extension to the base ARMv8.0 architecture.

This patch adds build system support to enable RAS features in ARM
Trusted Firmware. A boolean build option RAS_EXTENSION is introduced for
this.

With RAS_EXTENSION, an Exception Synchronization Barrier (ESB) is
inserted at all EL3 vector entries and exits. ESBs will synchronize
pending external aborts before entering EL3, and therefore will contain
and attribute errors to lower EL execution. Any errors thus synchronized
are detected via the DISR_EL1 register.

When RAS_EXTENSION is set to 1, HANDLE_EL3_EA_FIRST must also be set to 1.
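
For illustration, a synchronized error shows up through DISR_EL1 (a
sketch; the tree's actual handling is part of the EA/RAS framework):

    /* Sketch only: after an ESB, a deferred SError is reported via
     * DISR_EL1; bit 31 (the A bit) flags a valid deferred error.
     * (An ARMv8.2-aware assembler is assumed for the register name.) */
    #include <stdint.h>

    static inline uint64_t read_disr_el1(void)
    {
        uint64_t val;

        __asm__ volatile("mrs %0, disr_el1" : "=r"(val));
        return val;
    }

    static inline int deferred_error_pending(void)
    {
        return (int)((read_disr_el1() >> 31) & 1U);
    }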

Change-Id: I38a19d84014d4d8af688bd81d61ba582c039383a
Signed-off-by: Jeenu Viswambharan <jeenu.viswambharan@arm.com>
2018-05-04 08:33:17 +01:00
Jeenu Viswambharan ef653d93cc AArch64: Refactor GP register restore to separate function
At present, the function that restores general purpose registers also
does ERET. Refactor the restore code to restore general purpose
registers without ERET to complement the save function.

The macro save_x18_to_x29_sp_el0 was used only once; it is therefore
removed and its contents expanded inline for readability.

No functional changes, but with this patch:

  - The SMC return path will incur a branch-return and an additional
    register load.

  - The unknown SMC path restores registers x0 to x3.

Change-Id: I7a1a63e17f34f9cde810685d70a0ad13ca3b7c50
Signed-off-by: Jeenu Viswambharan <jeenu.viswambharan@arm.com>
2018-05-04 08:32:42 +01:00
danh-arm 0ef858bdad
Merge pull request #1370 from antonio-nino-diaz-arm/an/fix-parange
xlat: Have all values of PARange for 8.x architectures
2018-05-03 16:48:14 +01:00
Antonio Nino Diaz d3c4487cd5 xlat: Have all values of PARange for 8.x architectures
In AArch64, the field ID_AA64MMFR0_EL1.PARange has a different set of
allowed values depending on the architecture version.

Previously, we only compiled the Trusted Firmware with the values that
were allowed by the architecture. However, given that this field is
read-only, it is easier to compile the code with all values regardless
of the target architecture.

Change-Id: I57597ed103dd0189b1fb738a9ec5497391c10dd1
Signed-off-by: Antonio Nino Diaz <antonio.ninodiaz@arm.com>
2018-05-02 11:23:56 +01:00
Roberto Vargas 638b034cc3 ARM platforms: Demonstrate mem_protect from el3_runtime
Previously, mem_protect was only supported from BL2. This is not helpful
when TF-A BL2 is not used. This patch demonstrates mem_protect from the
el3_runtime firmware on Arm platforms, specifically when the
RESET_TO_BL31 or RESET_TO_SP_MIN flag is set, as BL2 may be absent in
these cases. The non-secure DRAM is temporarily mapped into the EL3 mmap
tables dynamically and the protected regions are then cleared. This
avoids the need to map the non-secure DRAM permanently to BL31/sp_min.

The stack size is also increased, because DYNAMIC_XLAT_TABLES requires a
bigger stack.

Change-Id: Ia44c594192ed5c5adc596c0cff2c7cc18c001fde
Signed-off-by: Roberto Vargas <roberto.vargas@arm.com>
2018-05-01 15:25:25 +01:00
Antonio Nino Diaz 01c0a38ef0 xlat: Set AP[1] to 1 when it is RES1
According to the ARMv8 ARM issue C.a:

    AP[1] is valid only for stage 1 of a translation regime that can
    support two VA ranges. It is RES 1 when stage 1 translations can
    support only one VA range.

This means that, even though this bit is ignored, it should be set to 1
in the EL3 and EL2 translation regimes.

For translation regimes consisting of EL0 and a higher Exception level,
this bit selects between control at EL0 or at the higher Exception level.
The regimes that support two VA ranges are EL1&0 and EL2&0 (the latter is
only available since ARMv8.1).

This fix has to be applied to both versions of the translation tables
library.
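
For illustration, AP[1] is bit 6 of a stage-1 block/page descriptor (the
macro names are illustrative):

    /* Illustrative only: AP[2:1] occupy descriptor bits [7:6]. */
    #define AP1_SHIFT  6
    #define AP1_RES1   (1ULL << AP1_SHIFT)   /* set for EL2/EL3 regimes, where
                                                AP[1] is RES1 */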

Change-Id: If19aaf588551bac7aeb6e9a686cf0c2068e7c181
Signed-off-by: Antonio Nino Diaz <antonio.ninodiaz@arm.com>
2018-04-26 12:59:08 +01:00