Commit Graph

412 Commits

Author SHA1 Message Date
Dimitris Papastamos 0767d50e69 AMU: Add configuration helpers for aarch64
Add some AMU helper functions to allow configuring, reading and
writing of the Group 0 and Group 1 counters.  Documentation for these
helpers will come in a separate patch.

Change-Id: I656e070d2dae830c22414f694aa655341d4e2c40
Signed-off-by: Dimitris Papastamos <dimitris.papastamos@arm.com>
2018-01-11 12:27:29 +00:00
Dimitris Papastamos 59902b7c4c AMU: Add plat interface to select which group 1 counters to enable
A new platform macro `PLAT_AMU_GROUP1_COUNTERS_MASK` controls which
group 1 counters should be enabled. The maximum number of group 1
counters supported by AMUv1 is 16 so the mask can be at most 0xffff.
If the platform does not define this mask, no group 1 counters are
enabled.

A related platform macro `PLAT_AMU_GROUP1_NR_COUNTERS` is used by
generic code to allocate an array to save and restore the counters on
CPU suspend.

Change-Id: I6d135badf4846292de931a43bb563077f42bb47b
Signed-off-by: Dimitris Papastamos <dimitris.papastamos@arm.com>
2018-01-11 12:27:27 +00:00
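As an illustration of the two platform macros described in the commit above, here is a hedged sketch of what a platform port's platform_def.h might contain; the values are made up for the example and are not part of the original patch.

    /* platform_def.h (illustrative values only) */

    /* Enable group 1 counters 0-3; AMUv1 allows at most 16 counters,
     * so the mask can be at most 0xffff. */
    #define PLAT_AMU_GROUP1_COUNTERS_MASK   0x000f

    /* Number of counters enabled above; used by generic code to size
     * the save/restore array used across CPU suspend. */
    #define PLAT_AMU_GROUP1_NR_COUNTERS     4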
Dimitris Papastamos 7593252cee Add PubSub events for CPU powerdown/powerup
The suspend hook is published at the start of a CPU powerdown
operation.  The resume hook is published at the end of a CPU powerup
operation.

Change-Id: I50c05e2dde0d33834095ac41b4fcea4c161bb434
Signed-off-by: Dimitris Papastamos <dimitris.papastamos@arm.com>
2018-01-11 10:33:41 +00:00
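A minimal sketch of how an EL3 component might hook these events, assuming the pubsub framework's SUBSCRIBE_TO_EVENT() macro and the event names shown here; treat the header name and identifiers as assumptions rather than the exact ones added by this commit.

    #include <pubsub_events.h>  /* assumed header exposing the pubsub framework */

    /* Published at the start of a CPU powerdown operation. */
    static void *my_pwrdown_handler(const void *arg)
    {
            /* e.g. save per-CPU component state before power is removed */
            return (void *)0;
    }

    /* Published at the end of a CPU powerup operation. */
    static void *my_pwrup_handler(const void *arg)
    {
            /* e.g. restore per-CPU component state after power is restored */
            return (void *)0;
    }

    /* Assumed event names; registration happens at link time. */
    SUBSCRIBE_TO_EVENT(psci_suspend_pwrdown_start, my_pwrdown_handler);
    SUBSCRIBE_TO_EVENT(psci_suspend_pwrdown_finish, my_pwrup_handler);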
Dimitris Papastamos 780edd86a0 Use PFR0 to identify need for mitigation of CVE-2017-5715
If the CSV2 field reads as 1 then branch targets trained in one
context cannot affect speculative execution in a different context.
In that case skip the workaround on Cortex A75.

Change-Id: I4d5504cba516a67311fb5f0657b08f72909cbd38
Signed-off-by: Dimitris Papastamos <dimitris.papastamos@arm.com>
2018-01-11 10:26:15 +00:00
Dimitris Papastamos a1781a211a Workaround for CVE-2017-5715 on Cortex A73 and A75
Invalidate the Branch Target Buffer (BTB) on entry to EL3 by
temporarily dropping into AArch32 Secure-EL1 and executing the
`BPIALL` instruction.

This is achieved by using 3 vector tables.  There is the runtime
vector table which is used to handle exceptions and 2 additional
tables which are required to implement this workaround.  The
additional tables are `vbar0` and `vbar1`.

The sequence of events for handling a single exception is
as follows:

1) Install vector table `vbar0` which saves the CPU context on entry
   to EL3 and sets up the Secure-EL1 context to execute in AArch32 mode
   with the MMU disabled and I$ enabled.  This is the default vector table.

2) Before doing an ERET into Secure-EL1, switch vbar to point to
   another vector table `vbar1`.  This is required to restore EL3 state
   when returning from the workaround, before proceeding with normal EL3
   exception handling.

3) While in Secure-EL1, the `BPIALL` instruction is executed and an
   SMC call back to EL3 is performed.

4) On entry to EL3 from Secure-EL1, the saved context from step 1) is
   restored.  The vbar is switched to point to `vbar0` in preparation to
   handle further exceptions.  Finally a branch to the runtime vector
   table entry is taken to complete the handling of the original
   exception.

This workaround is enabled by default on the affected CPUs.

NOTE
====

There are 4 different stubs in Secure-EL1.  Each stub corresponds to
an exception type such as Sync/IRQ/FIQ/SError.  Each stub will move a
different value in `R0` before doing an SMC call back into EL3.
Without this piece of information it would not be possible to know
what the original exception type was as we cannot use `ESR_EL3` to
distinguish between IRQs and FIQs.

Change-Id: I90b32d14a3735290b48685d43c70c99daaa4b434
Signed-off-by: Dimitris Papastamos <dimitris.papastamos@arm.com>
2018-01-11 10:26:15 +00:00
Dimitris Papastamos f62ad32269 Workaround for CVE-2017-5715 on Cortex A57 and A72
Invalidate the Branch Target Buffer (BTB) on entry to EL3 by disabling
and enabling the MMU.  To achieve this without performing any branch
instruction, a per-cpu vbar is installed which executes the workaround
and then branches off to the corresponding vector entry in the main
vector table.  A side effect of this change is that the main vbar is
configured before any reset handling.  This is to allow the per-cpu
reset function to override the vbar setting.

This workaround is enabled by default on the affected CPUs.

Change-Id: I97788d38463a5840a410e3cea85ed297a1678265
Signed-off-by: Dimitris Papastamos <dimitris.papastamos@arm.com>
2018-01-11 10:26:15 +00:00
Antonio Nino Diaz 96abc22b94 xlat v2: Correctly unmap regions on map error
`mm_cursor` doesn't have the needed data because the `memmove()` that
is called right before it overwrites that information. To get the
information about the region that was being mapped, `mm` has to be used
instead (as is done to fill the fields of `unmap_mm`).

If the incorrect information is read, this check isn't reliable and
`xlat_tables_unmap_region` may be requested to unmap memory that isn't
mapped at all, triggering assertions.

Change-Id: I602d4ac83095d4e5dac9deb34aa5d00d00e6c289
Signed-off-by: Antonio Nino Diaz <antonio.ninodiaz@arm.com>
2018-01-10 15:39:37 +00:00
davidcunado-arm 211d307c6b
Merge pull request #1178 from davidcunado-arm/dc/enable_sve
Enable SVE for Non-secure world
2017-12-11 12:29:47 +00:00
davidcunado-arm a852ec4605
Merge pull request #1168 from matt2048/master
Replace macro ASM_ASSERTION with macro ENABLE_ASSERTIONS
2017-12-04 22:39:40 +00:00
David Cunado 1a853370ff Enable SVE for Non-secure world
This patch adds a new build option, ENABLE_SVE_FOR_NS, which, when set
to one, causes EL3 to check whether the Scalable Vector Extension (SVE)
is implemented when entering and exiting the Non-secure world.

If SVE is implemented, EL3 will do the following:

- Entry to Non-secure world: SIMD, FP and SVE functionality is enabled.

- Exit from Non-secure world: SIMD, FP and SVE functionality is
  disabled. As the SIMD and FP registers are part of the SVE Z-registers,
  any use of SIMD / FP functionality would corrupt the SVE
  registers.

The build option defaults to 1. SVE functionality is only supported
on AArch64, so the build option is forced to zero when the target
architecture is AArch32.

This build option is not compatible with CTX_INCLUDE_FPREGS - an
assert will be raised on platforms where SVE is implemented and both
ENABLE_SVE_FOR_NS and CTX_INCLUDE_FPREGS are set to 1.

Also note this change prevents secure world use of FP&SIMD registers on
SVE-enabled platforms. Existing Secure-EL1 Payloads will not work on
such platforms unless ENABLE_SVE_FOR_NS is set to 0.

Additionally, on the first entry into the Non-secure world the SVE
functionality is enabled and the SVE Z-register length is set to the
maximum size allowed by the architecture. This includes the use case
where EL2 is implemented but not used.

Change-Id: Ie2d733ddaba0b9bef1d7c9765503155188fe7dae
Signed-off-by: David Cunado <david.cunado@arm.com>
2017-11-30 17:45:09 +00:00
Dimitris Papastamos ef69e1ea62 AMU: Implement support for aarch32
The `ENABLE_AMU` build option can be used to enable the
architecturally defined AMU counters.  At present, there is no support
for the auxiliary counter group.

Change-Id: Ifc7532ef836f83e629f2a146739ab61e75c4abc8
Signed-off-by: Dimitris Papastamos <dimitris.papastamos@arm.com>
2017-11-29 09:36:35 +00:00
Dimitris Papastamos 380559c1c3 AMU: Implement support for aarch64
The `ENABLE_AMU` build option can be used to enable the
architecturally defined AMU counters.  At present, there is no support
for the auxiliary counter group.

Change-Id: I7ea0c0a00327f463199d1b0a481f01dadb09d312
Signed-off-by: Dimitris Papastamos <dimitris.papastamos@arm.com>
2017-11-29 09:36:05 +00:00
Dimitris Papastamos 0319a97747 Implement support for the Activity Monitor Unit on Cortex A75
The Cortex A75 has 5 AMU counters.  The first three counters are fixed
and the remaining two are programmable.

A new build option is introduced, `ENABLE_AMU`.  When set, the fixed
counters will be enabled for use by lower ELs.  The programmable
counters are currently disabled.

Change-Id: I4bd5208799bb9ed7d2596e8b0bfc87abbbe18740
Signed-off-by: Dimitris Papastamos <dimitris.papastamos@arm.com>
2017-11-29 09:36:05 +00:00
davidcunado-arm 71f8a6a9b0
Merge pull request #1145 from etienne-lms/rfc-armv7-2
Support ARMv7 architectures
2017-11-23 23:41:24 +00:00
davidcunado-arm 1c64838d4b
Merge pull request #1164 from robertovargas-arm/psci-affinity
Flush the affinity data in psci_affinity_info
2017-11-23 10:18:06 +00:00
Matt Ma 5f70d8de5b Replace macro ASM_ASSERTION with macro ENABLE_ASSERTIONS
This patch replaces the macro ASM_ASSERTION with the macro
ENABLE_ASSERTIONS in ARM Cortex-A53/57/72 MPCore Processor
related files. There is a build error when ASM_ASSERTION is set
to 1 and ENABLE_ASSERTIONS is set to 0, because the function
asm_assert in common/aarch32/debug.S is guarded by the macro
ENABLE_ASSERTIONS but is called under the macro ASM_ASSERTION.

The Makefile also indicates that ENABLE_ASSERTIONS, rather than
ASM_ASSERTION, should be used.

Signed-off-by: Matt Ma <matt.ma@spreadtrum.com>
2017-11-23 09:44:07 +08:00
davidcunado-arm fe964ecf12
Merge pull request #1163 from antonio-nino-diaz-arm/an/parange
Add ARMv8.2 ID_AA64MMFR0_EL1.PARange value
2017-11-23 00:39:55 +00:00
Roberto Vargas 8fd307ffd6 Flush the affinity data in psci_affinity_info
There is an edge case where the cache maintenance done in
psci_do_cpu_off may not be seen by some cores. This case is handled in
psci_cpu_on_start but was not handled in psci_affinity_info.

Change-Id: I4d64f3d1ca9528e364aea8d04e2d254f201e1702
Signed-off-by: Roberto Vargas <roberto.vargas@arm.com>
2017-11-20 13:38:43 +00:00
Dimitris Papastamos 281a08cc64 Refactor Statistical Profiling Extensions implementation
Factor out SPE operations in a separate file.  Use the publish
subscribe framework to drain the SPE buffers before entering secure
world.  Additionally, enable SPE before entering normal world.

A side effect of this change is that the profiling buffers are now
only drained when a transition from normal world to secure world
happens.  Previously they were drained also on return from secure
world, which is unnecessary as SPE is not supported in S-EL1.

Change-Id: I17582c689b4b525770dbb6db098b3a0b5777b70a
Signed-off-by: Dimitris Papastamos <dimitris.papastamos@arm.com>
2017-11-20 09:55:01 +00:00
Dimitris Papastamos 0fd0f22298 Factor out extension enabling to a separate function
Factor out extension enabling to a separate function that is called
before exiting from EL3 for the first entry into the Non-secure world.

Change-Id: Ic21401ebba531134d08643c0a1ca9de0fc590a1b
Signed-off-by: Dimitris Papastamos <dimitris.papastamos@arm.com>
2017-11-20 09:55:01 +00:00
Antonio Nino Diaz 6504b2c5b0 Add ARMv8.2 ID_AA64MMFR0_EL1.PARange value
If an implementation of ARMv8.2 includes ARMv8.2-LPA, the value 0b0110
is permitted in ID_AA64MMFR0_EL1.PARange, which means that the Physical
Address range supported is 52 bits (4 PiB). It is a reserved value
otherwise.

Change-Id: Ie0147218e9650aa09f0034a9ee03c1cca8db908a
Signed-off-by: Antonio Nino Diaz <antonio.ninodiaz@arm.com>
2017-11-17 09:52:53 +00:00
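For reference, a small helper mapping ID_AA64MMFR0_EL1.PARange encodings to physical address widths, including the ARMv8.2-LPA value 0b0110 mentioned above; a sketch following the published encoding table, not code from the patch.

    #include <stdint.h>

    /* Translate an ID_AA64MMFR0_EL1.PARange encoding into an address
     * width in bits. Returns 0 for reserved encodings. */
    static unsigned int parange_to_bits(uint64_t parange)
    {
            switch (parange & 0xfU) {
            case 0x0U: return 32U;  /* 4 GiB */
            case 0x1U: return 36U;  /* 64 GiB */
            case 0x2U: return 40U;  /* 1 TiB */
            case 0x3U: return 42U;  /* 4 TiB */
            case 0x4U: return 44U;  /* 16 TiB */
            case 0x5U: return 48U;  /* 256 TiB */
            case 0x6U: return 52U;  /* 4 PiB, only with ARMv8.2-LPA */
            default:   return 0U;   /* reserved */
            }
    }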
David Cunado 91089f360a Move FPEXC32_EL2 to FP Context
The FPEXC32_EL2 register controls SIMD and FP functionality when the
lower ELs are executing in AArch32 mode. It is architecturally mapped
to AArch32 system register FPEXC.

This patch removes FPEXC32_EL2 register from the System Register context
and adds it to the floating-point context. EL3 only saves / restores the
floating-point context if the build option CTX_INCLUDE_FPREGS is set to 1.

The rationale for this change is that if the Secure world is using FP
functionality and EL3 is not managing the FP context, then the Secure
world will save / restore the appropriate FP registers.

NOTE - this is a break in behaviour in the unlikely case that
CTX_INCLUDE_FPREGS is set to 0 and the platform contains an AArch32
Secure Payload that modifies FPEXC, but does not save and restore
this register

Change-Id: Iab80abcbfe302752d52b323b4abcc334b585c184
Signed-off-by: David Cunado <david.cunado@arm.com>
2017-11-15 22:42:05 +00:00
davidcunado-arm 9500d5a438
Merge pull request #1148 from antonio-nino-diaz-arm/an/spm
Introduce Secure Partition Manager
2017-11-09 22:38:37 +00:00
Antonio Nino Diaz ad02a7596f xlat: Make function to calculate TCR PA bits public
This function can be useful to setup TCR_ELx by callers that don't use
the translation tables library to setup the system registers related
to them. By making it common, it can be reused whenever it is needed
without duplicating code.

Change-Id: Ibfada9e846d2a6cd113b1925ac911bb27327d375
Signed-off-by: Antonio Nino Diaz <antonio.ninodiaz@arm.com>
2017-11-08 18:05:14 +00:00
Etienne Carriere 1d791530d0 ARMv7: division support for missing __aeabi_*divmod
ARMv7-A architectures that do not support the Virtualization extensions
do not support instructions for the 32bit division. This change provides
a software implementation for 32bit division.

The division implementation is dumped from the OP-TEE project
http://github.com/OP-TEE/optee_os. The code was slightly modified
to pass trusted firmware checkpatch requirements and copyright is
given to the ARM trusted firmware initiative and its contributors.

Change-Id: Idae0c7b80a0d75eac9bd41ae121921d4c5af3fa3
Signed-off-by: Etienne Carriere <etienne.carriere@linaro.org>
2017-11-08 14:42:07 +01:00
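To illustrate the kind of routine involved, here is a simple shift-and-subtract 32-bit unsigned division in plain C; a generic sketch only, not the OP-TEE code imported by the patch.

    #include <stdint.h>

    /* Restoring division: computes num / den and num % den in software,
     * as needed on ARMv7-A cores without hardware divide instructions. */
    static uint32_t soft_udiv32(uint32_t num, uint32_t den, uint32_t *rem)
    {
            uint32_t quot = 0U;
            uint32_t r = 0U;
            int i;

            if (den == 0U) {        /* behaviour for division by zero is a choice */
                    *rem = 0U;
                    return 0U;
            }

            for (i = 31; i >= 0; i--) {
                    r = (r << 1) | ((num >> i) & 1U);
                    if (r >= den) {
                            r -= den;
                            quot |= (1U << i);
                    }
            }

            *rem = r;
            return quot;
    }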
Etienne Carriere 86e2683597 ARMv7 may not support Generic Timer Extension
If an ARMv7-based platform does not set ARM_CORTEX_Ax=yes, the
platform shall define ARMV7_SUPPORTS_GENERIC_TIMER to enable generic
timer support.

Signed-off-by: Etienne Carriere <etienne.carriere@linaro.org>
2017-11-08 14:41:47 +01:00
Etienne Carriere 51b992ecec ARMv7 may not support large page addressing
ARCH_SUPPORTS_LARGE_PAGE_ADDRESSING allows the build environment to
handle the specific case where the target ARMv7 core only supports the
32bit MMU descriptor mode.

If an ARMv7-based platform does not set ARM_CORTEX_Ax=yes, the
platform shall define ARMV7_SUPPORTS_LARGE_PAGE_ADDRESSING to enable
large page addressing support.

Signed-off-by: Etienne Carriere <etienne.carriere@linaro.org>
2017-11-08 13:53:47 +01:00
Etienne Carriere 1ca8d02316 ARMv7: introduce Cortex-A12
Signed-off-by: Etienne Carriere <etienne.carriere@linaro.org>
2017-11-08 13:49:55 +01:00
Etienne Carriere 778e411dc9 ARMv7: introduce Cortex-A17
Signed-off-by: Etienne Carriere <etienne.carriere@linaro.org>
2017-11-08 13:49:52 +01:00
Etienne Carriere 6ff43c2639 ARMv7: introduce Cortex-A7
Signed-off-by: Etienne Carriere <etienne.carriere@linaro.org>
2017-11-08 13:49:49 +01:00
Etienne Carriere d56a846121 ARMv7: introduce Cortex-A5
Signed-off-by: Etienne Carriere <etienne.carriere@linaro.org>
2017-11-08 13:49:45 +01:00
Etienne Carriere e3148c2b53 ARMv7: introduce Cortex-A9
As the Cortex-A9 needs to manually enable program flow prediction,
do not reset SCTLR[Z] at entry. The platform should enable it only
once the MMU is enabled.

Change-Id: I34e1ee2da73221903f7767f23bc6fc10ad01e3de
Signed-off-by: Etienne Carriere <etienne.carriere@linaro.org>
2017-11-08 13:49:43 +01:00
Etienne Carriere 10922e7ade ARMv7: introduce Cortex-A15
Signed-off-by: Etienne Carriere <etienne.carriere@linaro.org>
2017-11-08 13:49:40 +01:00
Etienne Carriere 0147bef523 ARMv7 does not support STL instruction
An SEV instruction also needs to be added to the ARMv7 spin_unlock,
which is implicit in ARMv8.

Signed-off-by: Etienne Carriere <etienne.carriere@linaro.org>
2017-11-08 13:49:29 +01:00
Soby Mathew bfc87a8dff Fix PSCI STAT time stamp collection
This patch includes various fixes for PSCI STAT functionality
relating to timestamp collection:

1. The PSCI stat accounting for retention states of higher level
power domains was done outside the locks, which could lead to
spurious values in some race conditions. This is now moved inside
the locks. Also, the redundant call to start the stat accounting
has been removed.

2. The timestamp wrap-around case when calculating residency did
not cater for AArch32. This is now fixed.

3. In the warm boot path, `plat_psci_stat_accounting_stop()` was
getting invoked prior to population of target power states. This
is now corrected.

Change-Id: I851526455304fb74ff0a724f4d5318cd89e19589
Signed-off-by: Soby Mathew <soby.mathew@arm.com>
2017-11-03 13:27:34 +00:00
Dimitris Papastamos 17b4c0dd0a aarch64: Add PubSub events to capture security state transitions
Add events that trigger before entry to normal/secure world.  The
events trigger after the normal/secure context has been restored.

Similarly add events that trigger after leaving normal/secure world.
The events trigger after the normal/secure context has been saved.

Change-Id: I1b48a7ea005d56b1f25e2b5313d77e67d2f02bc5
Signed-off-by: Dimitris Papastamos <dimitris.papastamos@arm.com>
2017-10-31 10:33:27 +00:00
Jeenu Viswambharan bd0c347781 PSCI: Publish CPU ON event
This allows other EL3 components to subscribe to CPU on events.

Update Firmware Design guide to list psci_cpu_on_finish as an available
event.

Change-Id: Ida774afe0f9cdce4021933fcc33a9527ba7aaae2
Signed-off-by: Jeenu Viswambharan <jeenu.viswambharan@arm.com>
2017-10-23 08:15:11 +01:00
davidcunado-arm 8b9f419e3e Merge pull request #1136 from antonio-nino-diaz-arm/an/xlat-get-set-attr
Add APIs to get and modify attributes of memory regions
2017-10-20 17:17:09 +01:00
davidcunado-arm ccd0c24cf8 Merge pull request #1127 from davidcunado-arm/dc/pmrc_init
Init and save / restore of PMCR_EL0 / PMCR
2017-10-17 13:53:17 +01:00
davidcunado-arm 5d2f87e850 Merge pull request #1126 from robertovargas-arm/psci-v1.1
Update PSCI to v1.1
2017-10-17 12:18:23 +01:00
Antonio Nino Diaz ec0c8fdacf Introduce functions to disable the MMU in EL1
The implementation is the same as those used to disable it in EL3.

Change-Id: Ibfe7e69034a691fbf57477c5a76a8cdca28f6b26
Signed-off-by: Antonio Nino Diaz <antonio.ninodiaz@arm.com>
2017-10-17 12:02:37 +01:00
Sandrine Bailleux 996d6b390d xlat: Introduce API to change memory attributes of a region
This patch introduces a new API in the translation tables library
(v2) that allows the caller to change the memory attributes of a
memory region. It may be used to change its execution permissions and
data access permissions.

As a prerequisite, the memory must be already mapped. Moreover, it
must be mapped at the finest granularity (currently 4 KB).

Change-Id: I242a8c6f0f3ef2b0a81a61e28706540462faca3c
Co-authored-by: Sandrine Bailleux <sandrine.bailleux@arm.com>
Co-authored-by: Antonio Nino Diaz <antonio.ninodiaz@arm.com>
Signed-off-by: Antonio Nino Diaz <antonio.ninodiaz@arm.com>
2017-10-17 12:02:36 +01:00
Sandrine Bailleux 1be910bb3d xlat: Introduce API to get memory attributes of a region
This patch introduces a new API in the translation tables library
(v2) that allows the caller to query the memory attributes of a memory
block or a memory page.

Change-Id: I45a8b39a53da39e7617cbac4bff5658dc1b20a11
Co-authored-by: Sandrine Bailleux <sandrine.bailleux@arm.com>
Co-authored-by: Antonio Nino Diaz <antonio.ninodiaz@arm.com>
Signed-off-by: Antonio Nino Diaz <antonio.ninodiaz@arm.com>
2017-10-17 12:02:36 +01:00
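A sketch of how the two new APIs from the commits above might be used together; the function names and signatures are assumptions based on these descriptions, not verified prototypes.

    #include <stddef.h>
    #include <stdint.h>

    /* Assumed prototypes of the new xlat v2 APIs. */
    int xlat_get_mem_attributes(uintptr_t base_va, uint32_t *attr);
    int xlat_change_mem_attributes(uintptr_t base_va, size_t size, uint32_t attr);

    /* Example: copy the attributes of one mapped 4 KB page onto another.
     * Both pages must already be mapped at 4 KB granularity. */
    static int copy_page_attributes(uintptr_t src_page_va, uintptr_t dst_page_va)
    {
            uint32_t attr;
            int ret = xlat_get_mem_attributes(src_page_va, &attr);

            if (ret != 0)
                    return ret;

            return xlat_change_mem_attributes(dst_page_va, 4096U, attr);
    }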
davidcunado-arm 4468442933 Merge pull request #1123 from robertovargas-arm/reset2
Integration of reset2 PSCI v1.1 functionality
2017-10-16 16:31:13 +01:00
Roberto Vargas 4ce9b8eaf6 mem_protect: Fix PSCI FEATURES API for MEM_PROTECT_CHECK
With this patch the PSCI_FEATURES API correctly reports availability
of the PSCI_MEM_PROTECT_CHECK API - PSCI_MEM_CHK_RANGE_AARCH64 is
added to the PSCI capabilities mask, PSCI_CAP_64BIT_MASK.

Change-Id: Ic90ee804deaadf0f948dc2d46ac5fe4121ef77ae
Signed-off-by: Roberto Vargas <roberto.vargas@arm.com>
2017-10-13 12:12:20 +01:00
David Cunado 3e61b2b543 Init and save / restore of PMCR_EL0 / PMCR
Currently TF does not initialise the PMCR_EL0 register in
the secure context or save/restore the register.

In particular, the DP field may not be set to one to prohibit
cycle counting in the secure state, even though event counting
generally is prohibited via the default setting of MDCR_EL3.SPME
to 0.

This patch initialises PMCR_EL0.DP to one in the secure state
to prohibit cycle counting and also initialises other fields
that have an architecturally UNKNOWN reset value.

Additionally, PMCR_EL0 is added to the list of registers that are
saved and restored during a world switch.

Similar changes are made for PMCR for the AArch32 execution state.

NOTE: secure world code at lower ELs that assumes other values in PMCR_EL0
will be impacted.

Change-Id: Iae40e8c0a196d74053accf97063ebc257b4d2f3a
Signed-off-by: David Cunado <david.cunado@arm.com>
2017-10-13 09:48:48 +01:00
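A hedged sketch of the secure-context value this change describes; the PMCR_EL0.DP bit position is taken from the Arm ARM (bit 5) and the helper name is made up.

    #include <stdint.h>

    #define PMCR_EL0_DP_BIT         (1ULL << 5)     /* disable cycle counter when
                                                       counting is prohibited */

    /* Illustrative: value used to initialise PMCR_EL0 in the secure context.
     * DP=1 prohibits cycle counting in the Secure state; the remaining
     * writable fields, whose reset values are architecturally UNKNOWN, are
     * left as 0. */
    static uint64_t secure_pmcr_el0_init(void)
    {
            return PMCR_EL0_DP_BIT;
    }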
Roberto Vargas 36a8f8fd47 reset2: Add PSCI system_reset2 function
This patch implements PSCI_SYSTEM_RESET2 API as defined in PSCI
v1.1 specification. The specification allows architectural and
vendor-specific resets via this API. In the current specification,
there is only one architectural reset, the warm reset. This reset is
intended to provide a fast reboot path that guarantees not to reset
system main memory.

Change-Id: I057bb81a60cd0fe56465dbb5791d8e1cca025bd3
Signed-off-by: Roberto Vargas <roberto.vargas@arm.com>
2017-10-13 08:08:22 +01:00
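From the caller's side, SYSTEM_RESET2 is reached through an SMC; a sketch of the call for the architectural warm reset. The SMC64 function ID and argument layout follow the PSCI v1.1 specification, while the smc() trampoline is a hypothetical helper.

    #include <stdint.h>

    #define PSCI_SYSTEM_RESET2_AARCH64      0xC4000012U
    #define PSCI_RESET2_SYSTEM_WARM_RESET   0U      /* architectural reset type 0 */

    /* Hypothetical SMC trampoline provided by the caller's environment. */
    extern uint64_t smc(uint64_t fid, uint64_t a1, uint64_t a2, uint64_t a3);

    static void request_warm_reset(void)
    {
            /* a1 = reset_type, a2 = cookie (unused for the architectural reset) */
            (void)smc(PSCI_SYSTEM_RESET2_AARCH64,
                      PSCI_RESET2_SYSTEM_WARM_RESET, 0U, 0U);
            /* On success this call does not return. */
    }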
davidcunado-arm 0f49d4968b Merge pull request #1117 from antonio-nino-diaz-arm/an/xlat-improvements
Improvements to the translation tables library v2
2017-10-09 23:09:29 +01:00
Antonio Nino Diaz 609c91917f xlat: Add support for EL0 and EL1 mappings
This patch introduces the ability of the xlat tables library to manage
EL0 and EL1 mappings from a higher exception level.

Attributes MT_USER and MT_PRIVILEGED have been added to allow the user
to specify the target EL in the EL1&0 translation regime.

The REGISTER_XLAT_CONTEXT2 macro is introduced to allow creating an
xlat_ctx_t that targets a given translation regime (EL1&0 or EL3).

A new member is added to xlat_ctx_t to represent the translation regime
the xlat_ctx_t manages. The execute_never mask member is removed as it
is computed from existing information.

Change-Id: I95e14abc3371d7a6d6a358cc54c688aa9975c110
Co-authored-by: Douglas Raillard <douglas.raillard@arm.com>
Co-authored-by: Sandrine Bailleux <sandrine.bailleux@arm.com>
Co-authored-by: Antonio Nino Diaz <antonio.ninodiaz@arm.com>
Signed-off-by: Antonio Nino Diaz <antonio.ninodiaz@arm.com>
2017-10-05 14:32:12 +01:00
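A sketch of how a mapping intended for an S-EL0 payload might be described with the new attributes, assuming the library's MAP_REGION_FLAT() helper; the addresses and sizes are invented for the example.

    #include <xlat_tables_v2.h>     /* xlat v2 library header */

    #define SEL0_BUF_BASE   0x80000000UL    /* illustrative addresses only */
    #define SEL0_BUF_SIZE   0x10000UL
    #define SEL1_CTL_BASE   0x80100000UL
    #define SEL1_CTL_SIZE   0x1000UL

    static const mmap_region_t sel1_el0_mmap[] = {
            /* Data buffer accessible from S-EL0. */
            MAP_REGION_FLAT(SEL0_BUF_BASE, SEL0_BUF_SIZE,
                            MT_MEMORY | MT_RW | MT_USER),
            /* Control page restricted to S-EL1. */
            MAP_REGION_FLAT(SEL1_CTL_BASE, SEL1_CTL_SIZE,
                            MT_MEMORY | MT_RW | MT_PRIVILEGED),
            {0}
    };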
Douglas Raillard b4ae615bd7 xlat: Introduce function xlat_arch_tlbi_va_regime()
Introduce a variant of the TLB invalidation helper function that
allows the targeted translation regime to be specified, rather than
defaulting to the current one.

This new function is useful in the context of EL3 software managing
translation tables for the S-EL1&0 translation regime, as then it
might need to invalidate S-EL1&0 TLB entries rather than EL3 ones.

Define a new enumeration to be able to represent translation regimes in
the xlat tables library.

Change-Id: Ibe4438dbea2d7a6e7470bfb68ff805d8bf6b07e5
Co-authored-by: Sandrine Bailleux <sandrine.bailleux@arm.com>
Co-authored-by: Douglas Raillard <douglas.raillard@arm.com>
Co-authored-by: Antonio Nino Diaz <antonio.ninodiaz@arm.com>
Signed-off-by: Antonio Nino Diaz <antonio.ninodiaz@arm.com>
2017-10-05 14:32:12 +01:00
Sandrine Bailleux f301da44fa xlat: Always compile TLB invalidation functions
TLB invalidation functions used to be conditionally compiled in.
They were enabled only when using the dynamic mapping feature,
because only then would we need to modify page tables on the fly.
Actually there are other use cases where invalidating TLBs is required.
When changing memory attributes in existing translation descriptors for
example. These other use cases do not necessarily depend on the dynamic
mapping feature.

This patch removes this dependency and always compiles TLB invalidation
functions in. If they're not used, they will be removed from the binary
at link-time anyway so there's no consequence on the memory footprint
if these functions are not called.

Change-Id: I1c33764ae900eb00073ee23b7d0d53d4efa4dd21
Signed-off-by: Sandrine Bailleux <sandrine.bailleux@arm.com>
2017-10-05 14:32:12 +01:00
Sandrine Bailleux fdb1964c34 xlat: Introduce MAP_REGION2() macro
The current implementation of the memory mapping API favours mapping
memory regions using the biggest possible block size in order to
reduce the number of translation tables needed.

In some cases, this behaviour might not be desirable. When translation
tables are edited at run-time, coarse-grain mappings like that might
need splitting into finer-grain tables. This operation has a
performance cost.

The MAP_REGION2() macro allows the caller to specify the granularity of
translation tables used for the initial mapping of a memory region.
This might increase performance for memory regions that are likely to
be edited in the future, at the expense of a potentially increased
memory footprint.

The Translation Tables Library Design Guide has been updated to
explain the use case for this macro. Also added a few intermediate
titles to make the guide easier to digest.

Change-Id: I04de9302e0ee3d326b8877043a9f638766b81b7b
Co-authored-by: Sandrine Bailleux <sandrine.bailleux@arm.com>
Co-authored-by: Antonio Nino Diaz <antonio.ninodiaz@arm.com>
Signed-off-by: Antonio Nino Diaz <antonio.ninodiaz@arm.com>
2017-10-05 14:32:12 +01:00
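A hedged sketch of the new macro in use: the extra trailing argument pins the mapping granularity so a region expected to be edited at run time starts out as 4 KB pages. The parameter order shown (PA, VA, size, attributes, granularity) is an assumption.

    #include <xlat_tables_v2.h>     /* xlat v2 library header */

    #define CTRL_BASE       0x1c000000UL    /* illustrative region */
    #define CTRL_SIZE       0x00200000UL

    static const mmap_region_t plat_mmap[] = {
            /* Force 4 KB page mappings up front so the library never has to
             * split a larger block when attributes are changed later, at the
             * cost of more translation tables now. */
            MAP_REGION2(CTRL_BASE, CTRL_BASE, CTRL_SIZE,
                        MT_DEVICE | MT_RW, PAGE_SIZE),
            {0}
    };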
davidcunado-arm c64d1345a8 Merge pull request #1109 from robertovargas-arm/mem_protect
Mem protect
2017-10-04 16:23:59 +01:00
Roberto Vargas 43cbaf0615 Add mem_region utility functions
This commit introduces a new type (mem_region_t) used to describe
memory regions and it adds two utility functions:

	- clear_mem_regions: This function clears (writes 0 to) a set
		of regions described by an array of mem_region_t.

	- mem_region_in_array_chk: This function checks if a
		region is covered by one of the regions described
		by an array of mem_region_t.

Change-Id: I12ce549f5e81dd15ac0981645f6e08ee7c120811
Signed-off-by: Roberto Vargas <roberto.vargas@arm.com>
2017-09-25 13:32:20 +01:00
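A sketch of how a platform might use the two helpers; the shape of mem_region_t and the exact signatures and return conventions are assumptions made for the example.

    #include <stddef.h>
    #include <stdint.h>

    /* Assumed shape of the type and helpers introduced above. */
    typedef struct mem_region {
            uintptr_t base;
            size_t nbytes;
    } mem_region_t;

    void clear_mem_regions(mem_region_t *tbl, size_t nregions);
    /* Assumed to return nonzero when [addr, addr+size) is covered. */
    int mem_region_in_array_chk(mem_region_t *tbl, size_t nregions,
                                uintptr_t addr, size_t size);

    /* Illustrative DRAM regions to be scrubbed on a mem_protect reset. */
    static mem_region_t dram_regions[] = {
            { 0x080000000UL, 0x07f000000UL },
            { 0x880000000UL, 0x180000000UL },
    };

    static void scrub_if_covered(uintptr_t addr, size_t size)
    {
            size_t n = sizeof(dram_regions) / sizeof(dram_regions[0]);

            if (mem_region_in_array_chk(dram_regions, n, addr, size) != 0)
                    clear_mem_regions(dram_regions, n);
    }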
Roberto Vargas d4c596be87 mem_protect: Add mem_protect API
This patch adds the generic code that links the psci smc handler
with the platform function that implements the mem_protect and
mem_check_range functionalities. These functions are  optional
APIs added in PSCI v1.1 (ARM DEN022D).

Change-Id: I3bac1307a5ce2c7a196ace76db8317e8d8c8bb3f
Signed-off-by: Roberto Vargas <roberto.vargas@arm.com>
2017-09-25 13:32:20 +01:00
Antonio Nino Diaz 3388b38dc3 Set TCR_EL1.EPD1 bit to 1
In the S-EL1&0 translation regime we aren't using the higher VA range,
whose translation table base address is held in TTBR1_EL1. The bit
TCR_EL1.EPD1 can be used to disable translations using TTBR1_EL1, but
the code wasn't setting it to 1. Additionally, other fields in TCR_EL1
associated with the higher VA range (TBI1, TG1, SH1, ORGN1, IRGN1 and
A1) weren't set correctly as they were left as 0. In particular, 0 is a
reserved value for TG1. Also, TTBR1_EL1 was not explicitly set and its
reset value is UNKNOWN.

Therefore memory accesses to the higher VA range would result in
unpredictable behaviour as a translation table walk would be attempted
using an UNKNOWN value in TTBR1_EL1.

On the FVP and Juno platforms accessing the higher VA range resulted in
a translation fault, but this may not always be the case on all
platforms.

This patch sets the bit TCR_EL1.EPD1 to 1 so that any kind of
unpredictable behaviour is prevented.

This bug only affects the AArch64 version of the code, the AArch32
version sets this bit to 1 as expected.

Change-Id: I481c000deda5bc33a475631301767b9e0474a303
Signed-off-by: Antonio Nino Diaz <antonio.ninodiaz@arm.com>
2017-09-21 11:57:11 +01:00
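A sketch of the fix in spirit: when building the TCR_EL1 value for the S-EL1&0 regime, explicitly disable walks through TTBR1_EL1. The bit position follows the Arm ARM (EPD1 is bit 23); the helper is hypothetical.

    #include <stdint.h>

    #define TCR_EPD1_BIT    (1ULL << 23)    /* disable TTBR1_EL1 table walks */

    /* Hypothetical helper: take the TCR_EL1 value computed for the lower VA
     * range and make the higher VA range unusable, so an access there can
     * never trigger a walk through an UNKNOWN TTBR1_EL1. */
    static uint64_t tcr_el1_disable_ttbr1(uint64_t tcr)
    {
            return tcr | TCR_EPD1_BIT;
    }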
Douglas Raillard df312c5a2b xlat: simplify mmap_add_region_check parameters (#1101)
Use a mmap_region_t as parameter instead of getting a parameter for each
structure member. This reduces the scope of changes when adding members
to mmap_region_t.

Also align on the convention of using mm_cursor as a variable name for
the currently inspected region when iterating on the region array.

Change-Id: If40bc4351b56c64b214e60dda27276d11ce9dbb3
Signed-off-by: Douglas Raillard <douglas.raillard@arm.com>
2017-09-21 08:42:21 +01:00
davidcunado-arm ea12986b87 Merge pull request #1099 from douglas-raillard-arm/dr/fix_mm_copy
xlat: fix mm copy when adding a region
2017-09-19 18:30:15 +01:00
davidcunado-arm 756f9bb86e Merge pull request #1094 from douglas-raillard-arm/dr/fix_mmap_add_dynamic_region
xlat: Use MAP_REGION macro as compatibility layer
2017-09-15 14:32:08 +01:00
Douglas Raillard 73addb728d xlat: fix mm copy when adding a region
mmap_add_region_ctx and mmap_add_dynamic_region_ctx copy each member
one by one and therefore clear any members they are not aware of.
Replace this with a structure assignment.

Change-Id: I7c70cb408c8a8eb551402a5d8d956c1fb7f32b55
Signed-off-by: Douglas Raillard <douglas.raillard@arm.com>
2017-09-14 10:37:32 +01:00
davidcunado-arm f18f5f9867 Merge pull request #1092 from jeenu-arm/errata-workarounds
Errata workarounds
2017-09-13 14:52:24 +01:00
davidcunado-arm 800a55ea7e Merge pull request #1087 from robertovargas-arm/psci_do_cpu_off
Reduce time lock in psci_do_cpu_off
2017-09-11 18:19:03 +01:00
Douglas Raillard 769d65da77 xlat: Use MAP_REGION macro as compatibility layer
Use the MAP_REGION macro to build the mmap_region_t argument in wrappers like
mmap_add_region(). Evolution of mmap_region_t might require adding
new members with a non-zero default value. Users of MAP_REGION are
protected against such evolution. This commit also protects users of
mmap_add_region() and mmap_add_dynamic_region() functions against these
evolutions.

Also make the MAP_REGION macro implementation more explicit and make it
a mmap_region_t compound literal, so that it is usable as a function
parameter on its own and cannot be used to initialise variables of a
different type.

Change-Id: I7bfc4689f6dd4dd23c895b65f628d8ee991fc161
Signed-off-by: Douglas Raillard <douglas.raillard@arm.com>
2017-09-11 15:47:33 +01:00
Eleanor Bonnici 6de9b3364b Cortex-A72: Implement workaround for erratum 859971
Erratum 859971 applies to revision r0p3 or earlier Cortex-A72 CPUs. The
recommended workaround is to disable instruction prefetch.

Change-Id: I7fde74ee2a8a23b2a8a1891b260f0eb909fad4bf
Signed-off-by: Eleanor Bonnici <Eleanor.bonnici@arm.com>
Signed-off-by: Jeenu Viswambharan <jeenu.viswambharan@arm.com>
2017-09-07 14:22:02 +01:00
Eleanor Bonnici 45b52c202f Cortex-A57: Implement workaround for erratum 859972
Erratum 859972 applies to revision r1p3 or earlier Cortex-A57 CPUs. The
recommended workaround is to disable instruction prefetch.

Change-Id: I56eeac0b753eb1432bd940083372ad6f7e93b16a
Signed-off-by: Eleanor Bonnici <Eleanor.bonnici@arm.com>
Signed-off-by: Jeenu Viswambharan <jeenu.viswambharan@arm.com>
2017-09-07 14:22:02 +01:00
davidcunado-arm 413115e152 Merge pull request #1019 from etienne-lms/log-size
CPU_DATA_LOG2SIZE depends on cache line size
2017-09-07 00:40:59 +01:00
Roberto Vargas 216e58a312 Reduce time lock in psci_do_cpu_off
psci_set_power_off_state only initializes a local variable, so there
isn't any reason why it should be done while the lock is held.

Change-Id: I1c62f4cd5d860d102532e5a5350152180d41d127
Signed-off-by: Roberto Vargas <roberto.vargas@arm.com>
2017-09-06 14:48:15 +01:00
Etienne Carriere 86606eb51e cpu log buffer size depends on cache line size
Platforms may use specific cache line sizes. Since CACHE_WRITEBACK_GRANULE
defines the platform-specific cache line size, it is used to define
CPU_DATA_SIZE, the size of the cpu data structure, aligned on the cache line size.

Introduce assembly macro 'mov_imm' for AArch32 to simplify implementation
of function '_cpu_data_by_index'.

Change-Id: Ic2d49ffe0c3e51649425fd9c8c99559c582ac5a1
Signed-off-by: Etienne Carriere <etienne.carriere@linaro.org>
2017-09-01 10:22:20 +02:00
danh-arm b15bab6bbc Merge pull request #1066 from islmit01/im/enable_cnp_bit
Enable CnP bit for ARMv8.2 CPUs
2017-08-30 14:34:57 +01:00
Eleanor Bonnici 80bcf98151 CPU: Correct names of implementation-defined aux regs
At present, various CPU register macros that refer to CPUACTLR are named
ACTLR. This patch fixes that.

The previous register names are retained, but guarded by the
ERROR_DEPRECATED macro, so as not to break platforms that continue using
the old names.

Change-Id: Ia872196d81803f8f390b887d149e0fd054df519b
Signed-off-by: Eleanor Bonnici <Eleanor.bonnici@arm.com>
2017-08-29 13:52:48 +01:00
davidcunado-arm 01ebe3d2c6 Merge pull request #1059 from kenkuang/intergration
fix a typo about sctlr_el2
2017-08-25 17:25:39 +01:00
Isla Mitchell 9fce2725a4 Enable CnP bit for ARMv8.2 CPUs
This patch enables the CnP (Common not Private) bit for secure page
tables so that multiple PEs in the same Inner Shareable domain can use
the same translation table entries for a given stage of translation in
a particular translation regime. This only takes effect when ARM
Trusted Firmware is built with ARM_ARCH_MINOR >= 2.

ARM Trusted Firmware Design has been updated to include a description
of this feature usage.

Change-Id: I698305f047400119aa1900d34c65368022e410b8
Signed-off-by: Isla Mitchell <isla.mitchell@arm.com>
2017-08-24 17:23:43 +01:00
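The CnP bit lives in bit 0 of the TTBR registers; a minimal sketch of OR-ing it into the table base when the build targets ARMv8.2 or later. The macro and helper names are assumptions.

    #include <stdint.h>

    #define TTBR_CNP_BIT    0x1ULL  /* Common not Private, TTBR bit 0 */

    /* Hypothetical: build the TTBR0_EL3 value for a given table base. With
     * ARM_ARCH_MINOR >= 2 the CnP bit is set so PEs in the same Inner
     * Shareable domain may share the resulting TLB entries. */
    static uint64_t make_ttbr0_el3(uint64_t tables_base)
    {
    #if ARM_ARCH_MAJOR > 8 || (ARM_ARCH_MAJOR == 8 && ARM_ARCH_MINOR >= 2)
            return tables_base | TTBR_CNP_BIT;
    #else
            return tables_base;
    #endif
    }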
Jeenu Viswambharan f45e232ab9 Add macro to test for minimum architecture version
The macro concisely expresses the requirement that the architecture
version be at least that given by its arguments. This would be useful when
extending Trusted Firmware functionality for future architecture revisions.

Replace similar usage in the current code base with the new macro.

Change-Id: I9dcd0aa71a663eabd02ed9632b8ce87611fa5a57
Signed-off-by: Jeenu Viswambharan <jeenu.viswambharan@arm.com>
2017-08-24 17:23:43 +01:00
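The macro most likely reduces to a major/minor comparison; a sketch consistent with the description above, though the definition in the tree may differ in detail.

    /* True when the architecture version selected by the build is at least
     * the one given by (_maj, _min). */
    #define ARM_ARCH_AT_LEAST(_maj, _min) \
            ((ARM_ARCH_MAJOR > (_maj)) || \
             ((ARM_ARCH_MAJOR == (_maj)) && (ARM_ARCH_MINOR >= (_min))))

    /* Example: guard an ARMv8.2-only feature. */
    #if ARM_ARCH_AT_LEAST(8, 2)
    /* ... 8.2-specific code path ... */
    #endif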
Alistair Francis 5722b78cdb psci_common: Resolve GCC static analysis false positive
Previously commit 555ebb34db8f3424c1b394df2f10ecf9c1f70901 attempted to fix this
GCC issue:

services/std_svc/psci/psci_common.c: In function 'psci_do_state_coordination':
services/std_svc/psci/psci_common.c:220:27: error: array subscript is above
array bounds [-Werror=array-bounds]
  psci_req_local_pwr_states[pwrlvl - 1][cpu_idx] = req_pwr_state;

This fix doesn't work as asserts aren't built in non-debug build flows.

Let's use GCC's #pragma option (documented here:
https://gcc.gnu.org/onlinedocs/gcc/Diagnostic-Pragmas.html) to avoid
this false positive instead.

Signed-off-by: Alistair Francis <alistair.francis@xilinx.com>
2017-08-23 14:04:59 -07:00
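The mechanism referred to is GCC's diagnostic pragmas; a reduced, self-contained illustration of the pattern, keeping -Warray-bounds enabled everywhere except around the one access GCC flags incorrectly.

    static int table[4][8];

    static void set_entry(unsigned int lvl, unsigned int idx, int val)
    {
    #pragma GCC diagnostic push
    #pragma GCC diagnostic ignored "-Warray-bounds"
            table[lvl - 1U][idx] = val;     /* callers guarantee lvl >= 1 */
    #pragma GCC diagnostic pop
    }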
Ken Kuang 2e09d4f804 fix a typo about sctlr_el2
which caused write_sctlr_el2 to be given the whole sctlr_el1 value except the EE bit

The code doesn't "Use SCTLR_EL1.EE value to initialise sctlr_el2";
instead it reads out SCTLR_EL1, clears the EE bit, and writes the result to sctlr_el2

Signed-off-by: Ken Kuang <ken.kuang@spreadtrum.com>
2017-08-23 16:39:18 +08:00
Summer Qin 54661cd248 Add Trusted OS extra image parsing support for ARM standard platforms
A Trusted OS may have extra images to be loaded. Load them one by one
and do the parsing. With this patch, ARM TF needs to load up to 3 images
for OP-TEE OS: the header, pager and paged images. The header image describes
OP-TEE OS and its images. The pager image includes pager code and data.
The paged image includes the paging parts, which use virtual memory.

Change-Id: Ia3bcfa6d8a3ed7850deb5729654daca7b00be394
Signed-off-by: Summer Qin <summer.qin@arm.com>
2017-08-09 18:06:05 +08:00
davidcunado-arm 3e0cba5283 Merge pull request #1021 from vwadekar/psci-early-suspend-handler
lib: psci: early suspend handler for platforms
2017-08-01 12:36:42 +01:00
davidcunado-arm 235581cfb7 Merge pull request #1045 from sandrine-bailleux-arm/sb/xlat-lib-ctx
Fix sign of variable in xlat_tables_print()
2017-08-01 10:44:38 +01:00
Sandrine Bailleux 664e69311e xlat lib v2: Fix sign of debug loop variable
This patch changes the sign of the loop variable used in
xlat_tables_print(). It needs to be unsigned because it is compared
against another unsigned int.

Change-Id: I2b3cee7990dd75e8ebd2701de3860ead7cad8dc8
Signed-off-by: Sandrine Bailleux <sandrine.bailleux@arm.com>
2017-08-01 09:18:51 +01:00
Varun Wadekar 1862d6203c lib: psci: early suspend handler for platforms
This patch adds an early suspend handler that executes with
SMP and data cache enabled. This handler allows platforms to
perform any early actions during the CPU suspend entry sequence.

This handler is optional and platforms can choose to implement it
depending on their needs. The `pwr_domain_suspend` handler still
exists and platforms can keep on using it without any side effects.

Signed-off-by: Varun Wadekar <vwadekar@nvidia.com>
2017-07-31 11:41:17 -07:00
davidcunado-arm ddc5bfdb6f Merge pull request #1035 from sandrine-bailleux-arm/sb/xlat-lib-ctx
Translation table library v2 improvements
2017-07-31 14:29:54 +01:00
davidcunado-arm d9f18155e0 Merge pull request #1033 from davidcunado-arm/dc/psci_flush
Address edge case for stale PSCI CPU data in cache
2017-07-31 08:45:44 +01:00
davidcunado-arm 881cf37438 Merge pull request #1031 from robertovargas-arm/assert_format
Use standard UNIX file:line format in assert
2017-07-26 12:31:18 +01:00
David Cunado 71341d2366 Address edge case for stale PSCI CPU data in cache
There is a theoretical edge case during CPU_ON where the cache
may contain stale data for the target CPU - this can occur
under the following conditions:

- the target CPU is in another cluster from the current CPU
- the target CPU was the last CPU to shut down on its cluster
- the cluster was removed from coherency as part of the CPU shutdown

In this case the cache maintenance that was performed as part of the
target CPU's shutdown was not seen by the current CPU's cluster. And
so the cache may contain stale data for the target CPU.

This patch adds a cache maintenance operation (flush) for the
cache-line containing the target CPU data - this ensures that the
target CPU data is read from main memory.

Change-Id: If8cfd42639b03174f60669429b7f7a757027d0fb
Signed-off-by: David Cunado <david.cunado@arm.com>
2017-07-26 11:59:00 +01:00
Sandrine Bailleux 0044231d43 xlat lib: Fix some types
Fix the type length and signedness of some of the constants and
variables used in the translation table library.

This patch supersedes Pull Request #1018:
https://github.com/ARM-software/arm-trusted-firmware/pull/1018

Change-Id: Ibd45faf7a4fb428a0bf71c752551d35800212fb2
Signed-off-by: Sandrine Bailleux <sandrine.bailleux@arm.com>
2017-07-26 09:28:23 +01:00
Sandrine Bailleux 7bba6884a0 Import ctzdi2.c from LLVM compiler-rt
When using __builtin_ctzll() in AArch32 code, the compiler may translate
that into a call to the __ctzdi2() function. In this case, the linking
phase fails because TF doesn't provide an implementation for it.

This patch imports the implementation of the __ctzdi2() function from
LLVM's compiler-rt project and hooks it into TF's build system. The
ctzdi2.c file is an unmodified copy from the master branch as of
July 19 2017 (SVN revision: 308480).

Change-Id: I96766a025ba28e1afc6ef6a5c4ef91d85fc8f32b
Signed-off-by: Sandrine Bailleux <sandrine.bailleux@arm.com>
2017-07-26 09:28:23 +01:00
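For context, __ctzdi2() counts trailing zero bits in a 64-bit value; a plain-C sketch of the behaviour, not the compiler-rt implementation that was imported.

    #include <stdint.h>

    /* Count trailing zeros of a 64-bit value, as __ctzdi2() does. The real
     * builtin is undefined for a zero input; here 64 is returned instead. */
    static int ctz64(uint64_t x)
    {
            int n = 0;

            if (x == 0U)
                    return 64;

            while ((x & 1U) == 0U) {
                    x >>= 1;
                    n++;
            }
            return n;
    }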
Sandrine Bailleux 347621bb47 xlat lib v2: Remove hard-coded virtual address space size
Previous patches have made it possible to specify the physical and
virtual address spaces sizes for each translation context. However,
there are still some places in the code where the physical (resp.
virtual) address space size is assumed to be PLAT_PHY_ADDR_SPACE_SIZE
(resp. PLAT_VIRT_ADDR_SPACE_SIZE).

This patch removes them and reads the relevant address space size
from the translation context itself instead. This information is now
passed in argument to the enable_mmu_arch() function, which needs it
to configure the TCR_ELx.T0SZ field (in AArch64) or the TTBCR.T0SZ
field (in AArch32) appropriately.

Change-Id: I20b0e68b03a143e998695d42911d9954328a06aa
Signed-off-by: Sandrine Bailleux <sandrine.bailleux@arm.com>
2017-07-26 09:28:23 +01:00
Sandrine Bailleux d83f357952 xlat lib v2: Refactor the functions enabling the MMU
This patch refactors both the AArch32 and AArch64 versions of the
function enable_mmu_arch().

In both versions, the code now computes the VMSA-related system
registers upfront and then programs them in one go (rather than
interleaving the two).

In the AArch64 version, this reduces the amount of code
generated by the C preprocessor and limits it to the actual differences
between EL1 and EL3.

In the AArch32 version, this patch also removes the function
enable_mmu_internal_secure() and moves its code directly inside
enable_mmu_arch(), as it was its only caller.

Change-Id: I35c09b6db4404916cbb2e2fd3fda2ad59f935954
Signed-off-by: Sandrine Bailleux <sandrine.bailleux@arm.com>
2017-07-26 09:28:23 +01:00
Sandrine Bailleux 99f6079891 xlat lib v2: Remove init_xlat_tables_arch() function
In both the AArch32 and AArch64 versions, this function used to check
the sanity of PLAT_PHY_ADDR_SPACE_SIZE against the
architectural maximum value. Instead, export the
xlat_arch_get_max_supported_pa() function and move the debug
assertion in AArch-agnostic code.

The AArch64 version also used to precalculate the TCR.PS field value, based
on the size of the physical address space. This is now done directly
by enable_mmu_arch(), which now receives the physical address space size
in argument.

Change-Id: Ie77ea92eb06db586f28784fdb479c6e27dd1acc1
Signed-off-by: Sandrine Bailleux <sandrine.bailleux@arm.com>
2017-07-26 09:28:23 +01:00
Sandrine Bailleux a9ad848ccf xlat lib v2: Expose *_ctx() APIs
In a previous patch, the xlat_ctx_t type has been made public.
This patch now makes the *_ctx() APIs public.

Each API now has a *_ctx() variant. Most of them were already implemented
and this patch just makes them public. However, some of them were missing
so this patch introduces them.

Now that all these APIs are public, there's no good reason for splitting
them across 2 files (xlat_tables_internal.c and xlat_tables_common.c).
Therefore, this patch moves all code into xlat_tables_internal.c and
removes xlat_tables_common.c. It removes it from the library's makefile
as well.

This last change introduces a compatibility break for platform ports
that specifically include the xlat_tables_common.c file instead of
including the library's Makefile. The UniPhier platform makefile has
been updated to now omit this file from the list of source files.

The prototype of mmap_add_region_ctx() has been slightly changed. The
mmap_region_t passed as an argument needs to be constant because the
function gets called from mmap_add(), which receives a constant region. The former
implementation of mmap_add() used to cast the const qualifier away,
which is not a good practice.

Also remove init_xlation_table(), which was a sub-function of
init_xlat_tables(). Now there's just init_xlat_tables() (and
init_xlat_tables_ctx()). Both names were too similar, which was
confusing. Besides, now that all the code is in a single file,
it's no longer needed to have 2 functions for that.

Change-Id: I4ed88c68e44561c3902fbebb89cb197279c5293b
Signed-off-by: Sandrine Bailleux <sandrine.bailleux@arm.com>
2017-07-26 09:20:05 +01:00
Sandrine Bailleux 55c84964f7 xlat lib v2: Export translation context as an opaque type
At the moment, the translation context type (xlat_ctx_t) is a private
type reserved for the internal usage of the translation table library.
All exported APIs (implemented in xlat_tables_common.c) are wrappers
over the internal implementations that use such a translation context.

These wrappers unconditionally pass the current translation context
representing the memory mappings of the executing BL image. This means
that the caller has no control over which translation context the
library functions act on.

As a first step to make this code more flexible, this patch exports
the 'xlat_ctx_t' type. Note that, although the declaration of this type
is now public, its definition stays private. A macro is introduced to
statically allocate and initialize such a translation context.

The library now internally uses this macro to allocate the default
translation context for the running BL image.

Change-Id: Icece1cde4813fac19452c782b682c758142b1489
Signed-off-by: Sandrine Bailleux <sandrine.bailleux@arm.com>
2017-07-25 13:09:00 +01:00
Sandrine Bailleux 8933c34bbc xlat lib: Reorganize architectural defs
Move the header files that provide translation tables architectural
definitions from the library v2 source files to the library include
directory. This allows to share these definitions between both
versions (v1 and v2) of the library.

Create a new header file that includes the AArch32 or AArch64
definitions based on the AARCH32 build flag, so that the library user
doesn't have to worry about handling it on their side.

Also repurpose some of the definitions the header files provide to
concentrate on the things that differ between AArch32 and AArch64.
As a result they now contain the following information:
 - the first table level that allows block descriptors;
 - the architectural limits of the virtual address space;
 - the initial lookup level to cover the entire address space.

Additionally, move the XLAT_TABLE_LEVEL_MIN macro from
xlat_tables_defs.h to the AArch32/AArch64 architectural definitions.

This new organisation eliminates duplicated information in the AArch32
and AArch64 versions. It also decouples these architectural files from
any platform-specific information. Previously, they were dependent on
the address space size, which is platform-specific.

Finally, for the v2 of the library, move the compatibility code for
ADDR_SPACE_SIZE into a C file as it is not needed outside of this
file. For v1, this code hasn't been changed and stays in a header
file because it's needed by several files.

Change-Id: If746c684acd80eebf918abd3ab6e8481d004ac68
Signed-off-by: Sandrine Bailleux <sandrine.bailleux@arm.com>
2017-07-25 13:09:00 +01:00
Sandrine Bailleux 0350bc6d05 xlat lib v2: Print some debug statistics
This patch adds some debug prints to display some statistics about page
tables usage. They are printed only if the LOG_LEVEL is at least 50
(i.e. VERBOSE).

Sample output for BL1:

VERBOSE:    Translation tables state:
VERBOSE:      Max allowed PA:  0xffffffff
VERBOSE:      Max allowed VA:  0xffffffff
VERBOSE:      Max mapped PA:   0x7fffffff
VERBOSE:      Max mapped VA:   0x7fffffff
VERBOSE:      Initial lookup level: 1
VERBOSE:      Entries @initial lookup level: 4
VERBOSE:      Used 4 sub-tables out of 5 (spare: 1)

Change-Id: If38956902e9616cdcd6065ecd140fe21482597ea
Signed-off-by: Sandrine Bailleux <sandrine.bailleux@arm.com>
2017-07-25 13:09:00 +01:00
Roberto Vargas d52be21f03 Use standard UNIX file:line format in assert
This format is understood by almost all the UNIX tools (vi, emacs, acme, ...),
and it allows these tools to jump directly to the line where the assert
failed.

Change-Id: I648fa93c7cc65f911a17dcad5e1a775ac1ae5ed4
Signed-off-by: Roberto Vargas <roberto.vargas@arm.com>
2017-07-19 05:57:40 +01:00
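A sketch of what the file:line convention looks like in an assert implementation; the reporting helper is hypothetical, the point being the "file:line" prefix that editors can parse and jump to.

    #include <stdio.h>
    #include <stdlib.h>

    /* Hypothetical reporting helper: emit "file:line" so tools such as vi,
     * emacs and acme can jump straight to the failing assertion. */
    static void assert_report(const char *file, unsigned int line)
    {
            printf("ASSERT: %s:%u\n", file, line);
            abort();
    }

    #define MY_ASSERT(e) \
            ((e) ? (void)0 : assert_report(__FILE__, __LINE__))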
Isla Mitchell 2a4b4b71ba Fix order of #includes
This fix modifies the order of system includes to meet the ARM TF coding
standard. There are some exceptions in order to retain header groupings,
to minimise changes to imported headers, and where headers appear within
#if and #ifndef statements.

Change-Id: I65085a142ba6a83792b26efb47df1329153f1624
Signed-off-by: Isla Mitchell <isla.mitchell@arm.com>
2017-07-12 14:45:31 +01:00
Douglas Raillard c2b8806fb6 Introduce TF_LDFLAGS
Use TF_LDFLAGS from the Makefiles, and still append LDFLAGS as well to
the compiler's invocation. This allows passing extra options from the
make command line using LDFLAGS.

Document new LDFLAGS Makefile option.

Change-Id: I88c5ac26ca12ac2b2d60a6f150ae027639991f27
Signed-off-by: Douglas Raillard <douglas.raillard@arm.com>
2017-06-28 15:03:05 +01:00
danh-arm 0d182a0b95 Merge pull request #1002 from douglas-raillard-arm/dr/fix_errata_a53
Apply workarounds for A53 Cat A Errata 835769 and 843419
2017-06-28 13:47:40 +01:00
danh-arm 267d4bf946 Merge pull request #1001 from davidcunado-arm/dc/fix-signed-comparisons
Resolve signed-unsigned comparison issues
2017-06-28 13:46:46 +01:00
danh-arm d70a7d0ce0 Merge pull request #978 from etienne-lms/minor-build
Minor build fixes
2017-06-28 13:46:19 +01:00
David Cunado 0dd4195114 Resolve signed-unsigned comparison issues
A recent commit 030567e6f5 added the U()/ULL()
macros to TF constants. This has caused some signed-unsigned comparison
warnings / errors in the TF static analysis.

This patch addresses these issues by migrating impacted variables from
signed ints to unsigned ints and vice versa where applicable.

Change-Id: I4b4c739a3fa64aaf13b69ad1702c66ec79247e53
Signed-off-by: David Cunado <david.cunado@arm.com>
2017-06-27 09:57:21 +01:00