Commit Graph

15 Commits

Author SHA1 Message Date
Justin Chadwell 9624c0a9e0 Fix Coverity #261967: infinite loop
Coverity has identified that the __aeabi_imod function will loop forever
if the denominator is not a power of 2, which is probably not the
desired behaviour.

The rest of the file provides the compiler support functions for
division on ARMv7 cores that do not implement hardware divide, which
the architecture permits. However, while most of the functions in the
file are documented and referenced in other places online,
__aeabi_uimod and __aeabi_imod are not. These two functions have
therefore been removed from the code base, which also removes the
Coverity error.
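
For reference, a minimal C sketch of a remainder loop that terminates
for any non-zero denominator (illustrative only, not the removed
routine):

    /* Illustrative only -- not the removed TF-A code.
     * den must be non-zero; division by zero is left to the caller. */
    static unsigned int soft_umod(unsigned int num, unsigned int den)
    {
        while (num >= den)
            num -= den;
        return num;
    }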

Change-Id: I20066d72365329a8b03a5536d865c4acaa2139ae
Signed-off-by: Justin Chadwell <justin.chadwell@arm.com>
2019-08-06 13:06:03 +01:00
Joel Hutton dd4cf2c745 Cortex-A9: erratum 794073 workaround
On Cortex-A9, an erratum can cause the processor to violate the rules
for speculative fetches when the MMU is off but branch prediction has
not been disabled. The workaround is to execute an Invalidate Entire
Branch Prediction Array (BPIALL) operation followed by a DSB.

See http://arminfo.emea.arm.com/help/topic/com.arm.doc.uan0009d/UAN0009_cortex_a9_errata_r4.pdf
for more details.
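
As an illustration, the sequence can be expressed with GCC-style
inline assembly (a sketch only, not the patch's own code, which is
written directly in assembly):

    /* BPIALL is CP15 c7, c5, 6: writing any value invalidates the
     * entire branch predictor array; the DSB ensures completion. */
    static inline void errata_794073_workaround(void)
    {
        __asm__ volatile("mcr p15, 0, %0, c7, c5, 6" : : "r"(0U));
        __asm__ volatile("dsb" : : : "memory");
    }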

Change-Id: I9146c1fa7563a79f4e15b6251617b9620a587c93
Signed-off-by: Joel Hutton <Joel.Hutton@arm.com>
2019-04-12 10:10:32 +00:00
Joel Hutton c554e1ad82 cache_helpers.s: fix mixed tabs and spaces
Change-Id: I8b7c7888d09200410e1a1c11a070c94dd8013ea7
Signed-off-by: Joel Hutton <Joel.Hutton@Arm.com>
2019-04-10 10:57:58 +01:00
Joel Hutton f999faca06 Add note about erratum 814220 for A7
On Cortex-A7, an L2 set/way cache maintenance operation can overtake
an L1 set/way cache maintenance operation. The mitigation is to
execute a `DSB` instruction before switching cache levels. The cache
cleaning code already does this, so only a comment was added.

Change-Id: Ia1ffb8ca8b6bbbba422ed6f6818671ef9fe02d90
Signed-off-by: Joel Hutton <Joel.Hutton@Arm.com>
2019-04-10 10:57:58 +01:00
Paul Beesley 8aabea3358 Correct typographical errors
Corrects typos in core code, documentation files, drivers, Arm
platforms and services.

None of the corrections affect code; changes are limited to comments
and other documentation.

Change-Id: I5c1027b06ef149864f315ccc0ea473e2a16bfd1d
Signed-off-by: Paul Beesley <paul.beesley@arm.com>
2019-01-15 15:16:02 +00:00
Antonio Nino Diaz 8422a8406b libc: armclang: Implement compiler printf symbols
armclang replaces calls to printf with calls to one of the symbols
__0printf, __1printf or __2printf. This patch adds new functions with
these names that internally call printf, so that the Trusted Firmware
can be compiled with this compiler.
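
A minimal sketch of the approach (the in-tree implementation may
differ in detail):

    #include <stdarg.h>
    #include <stdio.h>

    /* armclang emits calls to __0printf/__1printf/__2printf;
     * forward them to the ordinary printf machinery. */
    int __0printf(const char *fmt, ...)
    {
        va_list args;
        int count;

        va_start(args, fmt);
        count = vprintf(fmt, args);
        va_end(args);

        return count;
    }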

Change-Id: I06a0e3e5001232fe5b2577615666ddd66e81eef0
Signed-off-by: Antonio Nino Diaz <antonio.ninodiaz@arm.com>
2018-08-22 10:26:05 +01:00
Etienne Carriere 1d791530d0 ARMv7: division support for missing __aeabi_*divmod
ARMv7-A cores that do not implement the Virtualization Extensions do
not provide the 32-bit integer division instructions. This change
provides a software implementation of 32-bit division.

The division implementation is taken from the OP-TEE project
(http://github.com/OP-TEE/optee_os). The code was slightly modified to
pass the Trusted Firmware checkpatch requirements, and copyright is
given to the ARM Trusted Firmware initiative and its contributors.
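
For reference, a generic shift-and-subtract (restoring) 32-bit
unsigned division in C; this is illustrative only, not the imported
OP-TEE code:

    #include <stdint.h>

    /* Returns num / den and stores the remainder through rem.
     * den must be non-zero. Illustrative only. */
    static uint32_t soft_udiv(uint32_t num, uint32_t den, uint32_t *rem)
    {
        uint32_t quot = 0U, r = 0U;
        int i;

        for (i = 31; i >= 0; i--) {
            r = (r << 1) | ((num >> i) & 1U);
            if (r >= den) {
                r -= den;
                quot |= 1U << i;
            }
        }
        *rem = r;
        return quot;
    }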

Change-Id: Idae0c7b80a0d75eac9bd41ae121921d4c5af3fa3
Signed-off-by: Etienne Carriere <etienne.carriere@linaro.org>
2017-11-08 14:42:07 +01:00
Soby Mathew 3ec5204c49 Exit early if size zero for cache helpers
This patch enables the cache helper functions `flush_dcache_range`,
`clean_dcache_range` and `invalidate_dcache_range` to exit early if
the specified size argument is zero.
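
The idea, sketched in C (the real helpers are written in assembly):

    #include <stddef.h>
    #include <stdint.h>

    /* Sketch only: skip all maintenance when the range is empty. */
    void flush_dcache_range_sketch(uintptr_t addr, size_t size)
    {
        if (size == 0U)
            return;

        (void)addr;
        /* ... clean and invalidate each line in [addr, addr + size) ... */
    }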

Change-Id: I0b63e8f4bd3d47ec08bf2a0b0b9a7ff8a269a9b0
Signed-off-by: Soby Mathew <soby.mathew@arm.com>
2017-06-21 17:46:28 +01:00
dp-arm 82cb2c1ad9 Use SPDX license identifiers
To make software license auditing simpler, use SPDX[0] license
identifiers instead of duplicating the license text in every file.

NOTE: Files that have been imported from FreeBSD have not been modified.

[0]: https://spdx.org/
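
A typical file header after this change looks like the following
(BSD-3-Clause shown as an example identifier):

    /*
     * Copyright (c) 2017, ARM Limited and Contributors. All rights reserved.
     *
     * SPDX-License-Identifier: BSD-3-Clause
     */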

Change-Id: I80a00e1f641b8cc075ca5a95b10607ed9ed8761a
Signed-off-by: dp-arm <dimitris.papastamos@arm.com>
2017-05-03 09:39:28 +01:00
Antonio Nino Diaz 044bb2faab Remove build option `ASM_ASSERTION`
The build option `ENABLE_ASSERTIONS` should be used instead. That way
both C and ASM assertions can be enabled or disabled together.

All occurrences of `ASM_ASSERTION` in common code and ARM platforms have
been replaced by `ENABLE_ASSERTIONS`.

ASM_ASSERTION has been removed from the user guide.
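
Both C and assembly sources can now key off the same flag; a
hypothetical guard might look like this (CHECK_NONZERO is not from
the patch itself):

    #include <assert.h>

    #if ENABLE_ASSERTIONS
    #define CHECK_NONZERO(x)    assert((x) != 0U)
    #else
    #define CHECK_NONZERO(x)    ((void)0)
    #endif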

Change-Id: I51f1991f11b9b7ff83e787c9a3270c274748ec6f
Signed-off-by: Antonio Nino Diaz <antonio.ninodiaz@arm.com>
2017-04-20 09:58:28 +01:00
Douglas Raillard 355a5d0336 Replace ASM signed tests with unsigned
ge, lt, gt and le condition codes in assembly provide a signed test
whereas hs, lo, hi and ls provide the unsigned counterpart. Signed tests
should only be used when strictly necessary, as using them on logically
unsigned values can lead to inverting the test for high enough values.
All offsets, addresses and usually counters are actually unsigned
values, and should be tested as such.

Replace signed condition codes with their unsigned counterparts
wherever the signed form was unnecessary, so that the full range of
unsigned values can be used without the result being inverted for
large operands.
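
The C analogue of the pitfall, for illustration:

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint32_t offset = 0x90000000U;  /* top bit set */

        /* Unsigned compare (HS/LO/HI/LS): behaves as intended. */
        printf("unsigned: %d\n", offset >= 0x1000U);

        /* Signed compare (GE/LT/GT/LE on the same bits): inverted.
         * The cast is implementation-defined but wraps to a negative
         * value on typical two's-complement targets. */
        printf("signed:   %d\n", (int32_t)offset >= 0x1000);

        return 0;
    }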

Change-Id: I58b7e98d03e3a4476dfb45230311f296d224980a
Signed-off-by: Douglas Raillard <douglas.raillard@arm.com>
2017-03-20 10:38:43 +00:00
Douglas Raillard 308d359b26 Introduce unified API to zero memory
Introduce zeromem_dczva function on AArch64 that can handle unaligned
addresses and make use of DC ZVA instruction to zero a whole block at a
time. This zeroing takes place directly in the cache to speed it up
without doing external memory access.

Remove the zeromem16 function on AArch64 and replace it with an alias to
zeromem. This zeromem16 function is now deprecated.

Remove the 16-byte alignment constraint on __BSS_START__ in
firmware-design.md as it is no longer mandatory (it used to exist to
comply with the zeromem16 requirements).

Change the 16-byte alignment constraints in SP_MIN's linker script to
an 8-byte alignment constraint, as the AArch32 zeromem implementation
is now more efficient on 8-byte-aligned addresses.

Introduce zero_normalmem and zeromem helpers in platform agnostic header
that are implemented this way:
* AArch32:
	* zero_normalmem: zero using usual data access
	* zeromem: alias for zero_normalmem
* AArch64:
	* zero_normalmem: zero normal memory using DC ZVA instruction
	                  (needs MMU enabled)
	* zeromem: zero using usual data access

Usage guidelines: in most cases, zero_normalmem should be preferred.

There are 2 scenarios where zeromem (or memset) must be used instead:
* Code that must run with MMU disabled (which means all memory is
  considered device memory for data accesses).
* Code that fills device memory with null bytes.

Optionally, the following rule can be applied if performance is
important:
* Code zeroing small areas (few bytes) that are not secrets should use
  memset to take advantage of compiler optimizations.

  Note: Code zeroing security-related critical information should use
  zero_normalmem/zeromem instead of memset to avoid removal by
  compilers' optimizations in some cases or misbehaving versions of GCC.
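
Illustrative usage following these guidelines (the prototypes are
shown with a plain size type and the buffers are hypothetical; the
real header may differ):

    #include <stddef.h>
    #include <string.h>

    void zero_normalmem(void *mem, size_t length);
    void zeromem(void *mem, size_t length);

    static char key_material[64];   /* secret, in normal memory */
    static char early_buffer[32];   /* written while the MMU is off */

    void wipe_example(void)
    {
        /* Preferred: normal memory with the MMU enabled. */
        zero_normalmem(key_material, sizeof(key_material));

        /* MMU off (all accesses treated as device memory): zeromem. */
        zeromem(early_buffer, sizeof(early_buffer));

        /* Small, non-secret area: plain memset is fine. */
        memset(early_buffer, 0, 8);
    }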

Fixes ARM-software/tf-issues#408

Change-Id: Iafd9663fc1070413c3e1904e54091cf60effaa82
Signed-off-by: Douglas Raillard <douglas.raillard@arm.com>
2017-02-06 17:01:39 +00:00
Yatharth Kochar 9c1dceb106 AArch32: Add `memcpy4` function in assembly
At present the `el3_entrypoint_common` macro uses the `memcpy`
function defined in the lib/stdlib/mem.c file to copy data from ROM
to RAM for BL1. Depending on the compiler being used, the stack could
potentially be used in `memcpy` for storing local variables. Since
the stack is initialized much later in `el3_entrypoint_common`, this
may result in unknown behaviour.

This patch adds a `memcpy4` function definition in assembly so that
it can be used before the stack is initialized, and replaces `memcpy`
with `memcpy4` in the `el3_entrypoint_common` macro when copying data
from ROM to RAM for BL1.
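
A C model of the idea (the real `memcpy4` is assembly and its exact
signature may differ):

    #include <stddef.h>
    #include <stdint.h>

    /* Word-by-word copy; assumes dst, src and size are 4-byte
     * aligned. The assembly version additionally guarantees that no
     * stack is used, which C cannot. */
    void *memcpy4_sketch(void *dst, const void *src, size_t size)
    {
        uint32_t *d = dst;
        const uint32_t *s = src;

        while (size >= 4U) {
            *d++ = *s++;
            size -= 4U;
        }
        return dst;
    }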

Change-Id: I3357a0e8095f05f71bbbf0b185585d9499bfd5e0
2016-09-28 14:03:47 +01:00
Yatharth Kochar 1a0a3f0622 AArch32: Common changes needed for BL1/BL2
This patch adds common changes to support the AArch32 state in
BL1 and BL2. The changes are as follows:

* Added functions for disabling MMU from Secure state.
* Added AArch32 specific SMC function.
* Added semihosting support.
* Added reporting of unhandled exceptions.
* Added uniprocessor stack support.
* Added an `el3_entrypoint_common` macro that can be
  shared by the BL1 and BL32 (SP_MIN) BL stages. It is
  similar to the AArch64 counterpart, the main differences
  being the assembly instructions and the registers that
  are relevant to the AArch32 execution state.
* Enabled the `LOAD_IMAGE_V2` flag in the Makefile for
  `ARCH=aarch32` and added a check to make sure that the
  platform has not overridden it to disable it.

Change-Id: I33c6d8dfefb2e5d142fdfd06a0f4a7332962e1a3
2016-09-21 16:27:15 +01:00
Soby Mathew f24307dec4 AArch32: Add assembly helpers
This patch adds various assembly helpers for AArch32, such as:

* Cache management: functions to flush, invalidate and clean the
cache by MVA, plus helpers to perform cache operations by set/way.

* Stack management: macros to declare a stack and to get the stack
corresponding to the current CPU.

* Misc: macros to access coprocessor registers in AArch32 (see the
sketch below), macros to define functions in assembly, assert macros,
a generic `do_panic()` implementation and a function to zero a block
of memory.
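
As an illustration of the coprocessor access pattern in inline-asm
form (the patch itself provides assembler macros):

    /* Read SCTLR via its CP15 encoding: MRC p15, 0, <Rt>, c1, c0, 0. */
    static inline unsigned int read_sctlr_sketch(void)
    {
        unsigned int val;

        __asm__ volatile("mrc p15, 0, %0, c1, c0, 0" : "=r"(val));
        return val;
    }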

Change-Id: I7b78ca3f922c0eda39beb9786b7150e9193425be
2016-08-10 12:35:46 +01:00