Commit Graph

157 Commits

Author SHA1 Message Date
davidcunado-arm 73a9605197
Merge pull request #1282 from robertovargas-arm/misra-changes
Misra changes
2018-02-28 18:53:30 +00:00
Roberto Vargas ce3f9a6df7 Fix MISRA rule 8.8 in common code
Rule 8.8: The static storage class specifier shall be used
          in all declarations of objects and functions that
          have internal linkage.

Change-Id: I1e94371caaadebb2cec38d0ae0fa5c59e43369e0
Signed-off-by: Roberto Vargas <roberto.vargas@arm.com>
2018-02-28 17:19:55 +00:00
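
To illustrate what Rule 8.8 asks for, a minimal sketch (hypothetical
function, not taken from the TF sources): both the forward declaration
and the definition of an internal-linkage function carry `static`.

  /* Declaration: Rule 8.8 requires the static specifier here too. */
  static unsigned int count_set_bits(unsigned int value);

  /* Definition: static as well, so the symbol keeps internal linkage. */
  static unsigned int count_set_bits(unsigned int value)
  {
          unsigned int count = 0U;

          while (value != 0U) {
                  count += value & 1U;
                  value >>= 1U;
          }
          return count;
  }
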
davidcunado-arm c69145fc2a
Merge pull request #1286 from antonio-nino-diaz-arm/an/mmu-mismatch
Clarify comments in xlat tables lib and fix TLB-related issues
2018-02-28 01:26:21 +00:00
Antonio Nino Diaz 883d1b5d4a Add comments about mismatched TCR_ELx and xlat tables
When the MMU is enabled and the translation tables are mapped, data
read/writes to the translation tables are made using the attributes
specified in the translation tables themselves. However, the MMU
performs table walks with the attributes specified in TCR_ELx. They are
completely independent, so special care has to be taken to make sure
that they are the same.

This has to be done manually because it is not practical to have a test
in the code. Such a test would need to know the virtual memory region
that contains the translation tables and check that for all of the
tables the attributes match the ones in TCR_ELx. As the tables may not
even be mapped at all, this isn't a test that can be made generic.

The flags used by enable_mmu_xxx() have been moved to the same header
where the functions are.

Also, some comments in the linker scripts related to the translation
tables have been fixed.

Change-Id: I1754768bffdae75f53561b1c4a5baf043b45a304
Signed-off-by: Antonio Nino Diaz <antonio.ninodiaz@arm.com>
2018-02-27 09:55:01 +00:00
Soby Mathew 101d01e2a2 BL1: Deprecate the `bl1_init_bl2_mem_layout()` API
The `bl1_init_bl2_mem_layout()` API is now deprecated. The default weak
implementation of `bl1_plat_handle_post_image_load()` calculates the
BL2 memory layout and passes it to BL2 in x1 (r1 on AArch32). This
preserves compatibility with the deprecated API.

Change-Id: Id44bdc1f572dc42ee6ceef4036b3a46803689315
Signed-off-by: Soby Mathew <soby.mathew@arm.com>
2018-02-26 16:31:11 +00:00
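
A hedged sketch of what such a weak default could look like (helper
names and the placement of the derived layout are assumptions based on
the commit text, not the exact upstream code; the usual BL1/platform
headers are assumed to be included):

  #pragma weak bl1_plat_handle_post_image_load

  int bl1_plat_handle_post_image_load(unsigned int image_id)
  {
          meminfo_t *bl1_layout, *bl2_layout;
          image_desc_t *image_desc;

          if (image_id != BL2_IMAGE_ID)
                  return 0;

          /* Assumed helpers: image descriptor lookup and BL1's view of
           * secure RAM. */
          image_desc = bl1_plat_get_image_desc(BL2_IMAGE_ID);
          bl1_layout = bl1_plat_sec_mem_layout();

          /* Derive the BL2 layout via the deprecated API so existing
           * platforms keep working (placement is illustrative). */
          bl2_layout = (meminfo_t *)bl1_layout->total_base;
          bl1_init_bl2_mem_layout(bl1_layout, bl2_layout);

          /* BL2 receives the layout pointer in x1 (r1 on AArch32). */
          image_desc->ep_info.args.arg1 = (uintptr_t)bl2_layout;

          return 0;
  }
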
Soby Mathew 566034fc27 Add image_id to bl1_plat_handle_post/pre_image_load()
This patch adds an argument to the bl1_plat_post/pre_image_load() APIs
to make them more future-proof. The default implementations are moved
to the `plat_bl1_common.c` file.

These APIs are now invoked appropriately in the FWU code path, before
or after image loading by BL1, and are no longer restricted
to LOAD_IMAGE_V2.

The patch also reorganizes some common platform files. The previous
`plat_bl2_el3_common.c` and `platform_helpers_default.c` files are
merged into a new `plat_bl_common.c` file.

NOTE: The addition of an argument to the above-mentioned platform APIs
is not expected to have a significant impact because these APIs were
only recently added and are unlikely to be in use yet.

Change-Id: I0519caaee0f774dd33638ff63a2e597ea178c453
Signed-off-by: Soby Mathew <soby.mathew@arm.com>
2018-02-26 16:29:29 +00:00
Antonio Nino Diaz 6bf0e07930 Ensure the correct execution of TLBI instructions
After executing a TLBI, a DSB is needed to ensure completion of the
invalidation.

rk3328: The MMU is allowed to load TLB entries for as long as it is
enabled. Because of this, the correct place to execute a TLBI is right
after disabling the MMU.

Change-Id: I8280f248d10b49a8c354a4ccbdc8f8345ac4c170
Signed-off-by: Antonio Nino Diaz <antonio.ninodiaz@arm.com>
2018-02-21 13:54:55 +00:00
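
A generic sketch of the ordering being enforced (plain C with inline
assembly, not a specific TF helper): disable the MMU first, invalidate
the TLBs, then use a DSB to wait for the invalidation to complete.

  #include <stdint.h>

  static void disable_mmu_and_invalidate_tlb_el3(void)
  {
          uint64_t sctlr;

          /* Clear SCTLR_EL3.M (bit 0) to disable the MMU. */
          __asm__ volatile("mrs %0, sctlr_el3" : "=r"(sctlr));
          sctlr &= ~1ULL;
          __asm__ volatile("msr sctlr_el3, %0" : : "r"(sctlr));
          __asm__ volatile("isb");

          /* No new TLB entries can be allocated now; invalidate the TLB. */
          __asm__ volatile("tlbi alle3");

          /* The DSB ensures the invalidation has completed before moving on. */
          __asm__ volatile("dsb sy");
          __asm__ volatile("isb");
  }
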
davidcunado-arm 5b75b4a725
Merge pull request #1173 from etienne-lms/armv7-qemu
Support booting OP-TEE on AArch32/Armv7, with an example on Cortex-A15/QEMU
2018-02-07 11:57:19 +08:00
Etienne Carriere 3fe81dcf5d aarch32: use lr as bl32 boot argument on aarch32 only systems
Add 'lr_svc' as a boot parameter in AArch32 BL1. It is used by OP-TEE
and Trusty to get the non-secure entry point on AArch32 platforms.

This change does not apply to AArch64, where BL31, not BL32,
is in charge of booting the non-secure image (BL33).

Signed-off-by: Etienne Carriere <etienne.carriere@linaro.org>
2018-02-02 13:16:18 +01:00
Masahiro Yamada 11f001cb7f bl1: add bl1_plat_handle_{pre,post}_image_load()
Just like their bl2_ counterparts, add pre/post image load handlers for
BL1. No argument is needed, since BL2 is the only image loaded by BL1.

Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
2018-02-02 00:19:04 +09:00
Roberto Vargas 76d2673346 bl2-el3: Don't compile BL1 when BL2_AT_EL3 is defined in FVP
This patch modifies the makefiles to avoid defining BL1_SOURCES and
BL2_SOURCES in the TBBR makefiles, and lets the platform makefiles
define them if they actually need these images. In the BL2_AT_EL3
case, BL1 is usually not needed because the Boot ROM jumps directly
to BL2.

Change-Id: Ib6845a260633a22a646088629bcd7387fe35dcf9
Signed-off-by: Roberto Vargas <roberto.vargas@arm.com>
2018-01-18 16:25:18 +00:00
Antonio Nino Diaz a2aedac221 Replace magic numbers in linker scripts with PAGE_SIZE
When defining sections in linker scripts, they need to be aligned to
multiples of the page size. In most linker scripts this is done by
aligning to the hardcoded value 4096 instead of PAGE_SIZE.

This can be confusing when reading the codebase, as 4096 is also used
in places that aren't meant to be a multiple of the page size.

Change-Id: I36c6f461c7782437a58d13d37ec8b822a1663ec1
Signed-off-by: Antonio Nino Diaz <antonio.ninodiaz@arm.com>
2017-11-29 12:09:52 +00:00
Isla Mitchell 2a4b4b71ba Fix order of #includes
This fix modifies the order of system includes to meet the ARM TF coding
standard. Some exceptions are made to retain header groupings, to
minimise changes to imported headers, and where headers appear inside
#if and #ifndef blocks.

Change-Id: I65085a142ba6a83792b26efb47df1329153f1624
Signed-off-by: Isla Mitchell <isla.mitchell@arm.com>
2017-07-12 14:45:31 +01:00
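
For illustration, a hypothetical file header following that ordering
(system includes sorted alphabetically, with a conditional include kept
as one of the allowed exceptions; header names are illustrative):

  #include <arch.h>
  #include <arch_helpers.h>
  #include <assert.h>
  #include <debug.h>
  #include <platform_def.h>
  #include <string.h>

  #if USE_COHERENT_MEM
  #include <coherent_helpers.h>   /* hypothetical header, kept inside the #if */
  #endif
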
danh-arm d70a7d0ce0 Merge pull request #978 from etienne-lms/minor-build
Minor build fixes
2017-06-28 13:46:19 +01:00
davidcunado-arm ccf3911108 Merge pull request #994 from soby-mathew/sm/fwu_fix
Fix FWU and cache helper optimization
2017-06-26 09:54:24 +01:00
Etienne Carriere c04d59cf37 bl1: include bl1_private.h in aarch* files
This change avoids warnings when -Wmissing-prototypes is set or when
the sparse tool is used.

Signed-off-by: Yann Gautier <yann.gautier@st.com>
Signed-off-by: Etienne Carriere <etienne.carriere@st.com>
2017-06-23 09:38:06 +02:00
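
A small, hypothetical illustration of the warning class being silenced:
defining an externally visible function with no prototype in scope trips
-Wmissing-prototypes; including the header that declares it fixes that.

  #include "bl1_private.h"  /* assumed to declare bl1_arch_setup() */

  void bl1_arch_setup(void) /* prototype visible: no warning */
  {
          /* ... architecture-specific BL1 setup ... */
  }
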
Etienne Carriere 2ed7b71e43 context_mgmt: declare extern cm_set_next_context() for AArch32
This change avoids a warning when -Wmissing-prototypes is set while
compiling bl1_context_mgmt.c.

Reported-by: Yann Gautier <yann.gautier@st.com>
Signed-off-by: Etienne Carriere <etienne.carriere@st.com>
2017-06-23 09:37:49 +02:00
Etienne Carriere 550740833d bl: security_state should be of type unsigned int
security_state is either 0 or 1. Prevent a potential sign-conversion
error (with -Werror=sign-conversion set, it results in a build error).

Signed-off-by: Yann Gautier <yann.gautier@st.com>
Signed-off-by: Etienne Carriere <etienne.carriere@st.com>
2017-06-23 09:26:44 +02:00
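
A tiny, hypothetical example of the issue: with -Werror=sign-conversion,
passing a signed int where an unsigned int parameter is expected fails
the build, so security_state is declared unsigned int from the start.

  static void apply_state(unsigned int security_state)
  {
          (void)security_state;   /* placeholder body */
  }

  void example(void)
  {
          unsigned int security_state = 1U;  /* 0 = secure, 1 = non-secure */

          /* Had security_state been a plain int, this call would involve
           * an implicit int -> unsigned conversion and fail the build. */
          apply_state(security_state);
  }
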
David Cunado 18f2efd67d Fully initialise essential control registers
This patch updates the el3_arch_init_common macro so that it fully
initialises essential control registers rather than relying on hardware
to set the reset values.

The context management functions are also updated to fully initialise
the appropriate control registers when initialising the non-secure and
secure context structures and when preparing to leave EL3 for a lower
EL.

This gives better alignment with the ARM ARM, which states that software
must initialise RES0 and RES1 fields to 0 and 1 respectively.

This patch also corrects the following typos:

"NASCR definitions" -> "NSACR definitions"

Change-Id: Ia8940b8351dc27bc09e2138b011e249655041cfc
Signed-off-by: David Cunado <david.cunado@arm.com>
2017-06-21 17:57:54 +01:00
Soby Mathew ee05ae1680 Fix issues in FWU code
This patch fixes the following issues in Firmware Update (FWU) code:

1. The FWU layer maintains a list of loaded image ids and, while
   checking for image overlaps, INVALID_IMAGE_IDs were not skipped.
   The patch adds code to skip INVALID_IMAGE_IDs.

2. While resetting the state corresponding to an image, the code now
   resets the memory used by the image only if the image was previously
   copied via the IMAGE_COPY SMC. This prevents invalid zeroing of
   memory for images that are not copied but are directly authenticated
   via the IMAGE_AUTH SMC.

Change-Id: Idf18e69bcba7259411c88807bd0347d59d9afb8f
Signed-off-by: Soby Mathew <soby.mathew@arm.com>
2017-06-21 17:46:28 +01:00
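
A hedged sketch of the second fix (field and helper names are
assumptions for illustration, not the exact upstream code): memory is
wiped only when something was actually copied in.

  /* Resetting the FWU state of the image described by 'desc'. */
  if (desc->copied_size != 0U) {
          /* Data was brought in via IMAGE_COPY: clear the secure memory. */
          zero_normalmem((void *)(uintptr_t)desc->image_info.image_base,
                         desc->copied_size);
  }
  /* Images authenticated in place via IMAGE_AUTH are left untouched. */
  desc->copied_size = 0U;
  desc->state = IMAGE_STATE_RESET;
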
Antonio Nino Diaz 9d6fc3c321 FWU: Introduce FWU_SMC_IMAGE_RESET
This SMC serves as a means for the image loading state machine to go
from the COPYING, COPIED or AUTHENTICATED states to the RESET state.
Previously, this transition only happened when the authentication of an
image failed or when the execution of the image finished.

Documentation updated.

Change-Id: Ida6d4c65017f83ae5e27465ec36f54499c6534d9
Signed-off-by: Antonio Nino Diaz <antonio.ninodiaz@arm.com>
2017-06-01 14:52:12 +01:00
Antonio Nino Diaz 128daee298 FWU: Check for overlaps when loading images
Added checks to FWU_SMC_IMAGE_COPY to prevent loading data into a
memory region where another image's data is already loaded.

Without this check, if two images are configured to be loaded into
overlapping memory regions, one of them can be loaded and
authenticated while the copy function can still load data from the
second image on top of the first one. Since the first image is still
in the authenticated state, it can be executed, which could lead to
the execution of arbitrary, unauthenticated code from the second image.

Firmware update documentation updated.

Change-Id: Ib6871e569794c8e610a5ea59fe162ff5dcec526c
Signed-off-by: Antonio Nino Diaz <antonio.ninodiaz@arm.com>
2017-06-01 14:52:11 +01:00
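
The overlap test itself reduces to a standard interval check; a
self-contained sketch (names are illustrative, and the end addresses are
assumed to have been validated against overflow already):

  #include <stdbool.h>
  #include <stddef.h>
  #include <stdint.h>

  /* Two half-open ranges [base, base + size) overlap iff each one
   * starts before the other one ends. */
  static bool mem_regions_overlap(uintptr_t base1, size_t size1,
                                  uintptr_t base2, size_t size2)
  {
          uintptr_t end1 = base1 + size1;
          uintptr_t end2 = base2 + size2;

          return (base1 < end2) && (base2 < end1);
  }
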
dp-arm a440900803 AArch32: Add `TRUSTED_BOARD_BOOT` support
This patch adds `TRUSTED_BOARD_BOOT` support for AArch32 mode.

To build with this patch, "mbedtls/include/mbedtls/bignum.h"
needs to be modified to remove `#define MBEDTLS_HAVE_UDBL`
when `MBEDTLS_HAVE_INT32` is defined. This is a workaround
for "https://github.com/ARMmbed/mbedtls/issues/708".

NOTE: TBBR support on Juno AArch32 is not currently supported.

Change-Id: I86d80e30b9139adc4d9663f112801ece42deafcf
Signed-off-by: dp-arm <dimitris.papastamos@arm.com>
Co-Authored-By: Yatharth Kochar <yatharth.kochar@arm.com>
2017-05-15 16:34:27 +01:00
Soby Mathew b6285d64c1 AArch32: Rework SMC context save and restore mechanism
The current SMC context data structure `smc_ctx_t` and related helpers are
optimized for the case when an SMC call does not result in a world switch.
This was the case for the SP_MIN and BL1 cold boot flows. But the firmware
update use case requires a world switch as a result of an SMC, and the
current SMC context helpers were not helping very much in this regard.
Therefore this patch makes the following changes to improve this:

1. Add monitor stack pointer, `spmon` to `smc_ctx_t`

The C runtime stack pointer in monitor mode, `sp_mon`, is added to the
SMC context, and the `smc_ctx_t` pointer is cached in `sp_mon` prior
to exit from Monitor mode. This makes it easier to retrieve the
context when the next SMC call happens. As a result of this change,
the SMC context helpers no longer depend on the stack to save and
restore the registers.

This aligns it with the context save and restore mechanism in AArch64.

2. Add SCR in `smc_ctx_t`

Adding the SCR register to `smc_ctx_t` makes it easier to manage this
register state when switching between the non-secure and secure worlds
as a result of an SMC call.

Change-Id: I5e12a7056107c1701b457b8f7363fdbf892230bf
Signed-off-by: Soby Mathew <soby.mathew@arm.com>
Signed-off-by: dp-arm <dimitris.papastamos@arm.com>
2017-05-12 11:54:12 +01:00
davidcunado-arm d6104f5ab4 Merge pull request #927 from jeenu-arm/state-switch
Execution state switch
2017-05-11 16:04:52 +01:00
dp-arm 82cb2c1ad9 Use SPDX license identifiers
To make software license auditing simpler, use SPDX[0] license
identifiers instead of duplicating the license text in every file.

NOTE: Files that have been imported from FreeBSD have not been modified.

[0]: https://spdx.org/

Change-Id: I80a00e1f641b8cc075ca5a95b10607ed9ed8761a
Signed-off-by: dp-arm <dimitris.papastamos@arm.com>
2017-05-03 09:39:28 +01:00
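
After this change, a typical file header reduces to a copyright line
plus a single identifier, along these lines:

  /*
   * Copyright (c) 2017, ARM Limited and Contributors. All rights reserved.
   *
   * SPDX-License-Identifier: BSD-3-Clause
   */
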
Jeenu Viswambharan f4c8aa9054 Add macro to check whether the CPU implements an EL
Replace all instances of such checks with the new macro.

Change-Id: I0eec39b9376475a1a9707a3115de9d36f88f8a2a
Signed-off-by: Jeenu Viswambharan <jeenu.viswambharan@arm.com>
2017-05-02 16:11:12 +01:00
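
A hedged sketch of such a macro (the shift/mask names and exact shape
are assumptions): ID_AA64PFR0_EL1 carries one 4-bit field per exception
level, and a non-zero field means that EL is implemented.

  #define ID_AA64PFR0_EL2_SHIFT   8U      /* EL0=0, EL1=4, EL2=8, EL3=12 */
  #define ID_AA64PFR0_ELX_MASK    0xFULL

  #define EL_IMPLEMENTED(_n) \
          ((read_id_aa64pfr0_el1() >> ID_AA64PFR0_EL##_n##_SHIFT) & \
           ID_AA64PFR0_ELX_MASK)

  /* Usage: if (EL_IMPLEMENTED(2) != 0U) { ... set up EL2 registers ... } */
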
davidcunado-arm 79199f702e Merge pull request #907 from antonio-nino-diaz-arm/an/smc-ret0
tspd:FWU:Fix usage of SMC_RET0
2017-04-26 11:56:40 +01:00
Antonio Nino Diaz aa61368eb5 Control inclusion of helper code used for asserts
Many asserts depend on code that is conditionally compiled based on the
DEBUG define. This patch modifies the conditional inclusion of such code
so that it is based on the ENABLE_ASSERTIONS build option.

Change-Id: I6406674788aa7e1ad7c23d86ce94482ad3c382bd
Signed-off-by: Antonio Nino Diaz <antonio.ninodiaz@arm.com>
2017-04-20 09:59:12 +01:00
Antonio Nino Diaz 7a317a70d4 tspd:FWU:Fix usage of SMC_RET0
SMC_RET0 should only be used when the SMC code behaves like a function
that returns void. If the SMC code uses SMC_RET1 to return a value to
signify success but doesn't return anything in the error case (or the
other way around), SMC_RET1 should always be used to return clearly
identifiable values.

This patch fixes two cases in which the code used SMC_RET0 instead of
SMC_RET1.

It also introduces the SMC_OK define, to be used when an SMC must return
a value indicating success, in the same way SMC_UNK is used in case
of failure.

Change-Id: Ie4278b51559e4262aced13bbde4e844023270582
Signed-off-by: Antonio Nino Diaz <antonio.ninodiaz@arm.com>
2017-04-20 09:54:59 +01:00
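
A hedged fragment of the resulting pattern inside an SMC handler
(context abridged; 'fwu_done' is illustrative): both outcomes go through
SMC_RET1 with well-known values.

  if (!fwu_done)
          SMC_RET1(handle, SMC_UNK);  /* failure: recognisable error code */

  /* ... complete the FWU handling ... */
  SMC_RET1(handle, SMC_OK);           /* success: explicit value, not "no value" */
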
davidcunado-arm f07d3985b8 Merge pull request #885 from antonio-nino-diaz-arm/an/console-flush
Implement console_flush()
2017-04-12 22:23:44 +01:00
Douglas Raillard 51faada71a Add support for GCC stack protection
Introduce a new build option, ENABLE_STACK_PROTECTOR. It enables
compilation of all BL images with one of the GCC -fstack-protector-*
options.

A new platform function plat_get_stack_protector_canary() is introduced.
It returns a value that is used to initialize the canary for stack
corruption detection. Returning a random value will prevent an attacker
from predicting the value and greatly increase the effectiveness of the
protection.

A message is printed at the ERROR level when a stack corruption is
detected.

To be effective, the global data must be stored at an address
lower than the base of the stacks. Failure to do so would allow an
attacker to overwrite the canary as part of an attack which would void
the protection.

The FVP implementation of plat_get_stack_protector_canary() is weak, as
there is no real source of entropy on the FVP. It therefore relies on a
timer's value, which could be predictable.

Change-Id: Icaaee96392733b721fa7c86a81d03660d3c1bc06
Signed-off-by: Douglas Raillard <douglas.raillard@arm.com>
2017-03-31 13:58:48 +01:00
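
A hedged sketch of a platform hook in the spirit of the FVP one (the
constant is illustrative, the counter accessor is assumed to come from
arch_helpers.h, and the return type is simplified here):

  #include <stdint.h>

  uint64_t plat_get_stack_protector_canary(void)
  {
          /* A fixed per-build constant XORed with the physical counter;
           * this only adds run-to-run variation and stays predictable. */
          return 0x3d87a5c91ef04b62ULL ^ read_cntpct_el0();
  }
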
Antonio Nino Diaz 0b32628edd Flush console where necessary
Call console_flush() before execution either terminates or leaves an
exception level.

Fixes: ARM-software/tf-issues#123

Change-Id: I64eeb92effb039f76937ce89f877b68e355588e3
Signed-off-by: Antonio Nino Diaz <antonio.ninodiaz@arm.com>
2017-03-31 09:54:22 +01:00
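
The resulting pattern at a termination point looks roughly like this
(fragment; ERROR(), console_flush() and panic() as used in the TF tree):

  ERROR("Unrecoverable error, stopping here\n");
  console_flush();  /* make sure the message actually left the UART */
  panic();          /* execution does not return */
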
dp-arm 75311203d8 Move plat/common source file definitions to generic Makefiles
These source file definitions should be defined in generic
Makefiles so that all platforms can benefit. Ensure that the
symbols are properly marked as weak so they can be overridden
by platforms.

NOTE: This change is a potential compatibility break for
non-upstream platforms.

Change-Id: I7b892efa9f2d6d216931360dc6c436e1d10cffed
Signed-off-by: dp-arm <dimitris.papastamos@arm.com>
2017-03-20 14:58:25 +00:00
Douglas Raillard 308d359b26 Introduce unified API to zero memory
Introduce a zeromem_dczva function on AArch64 that can handle unaligned
addresses and makes use of the DC ZVA instruction to zero a whole block
at a time. This zeroing takes place directly in the cache to speed it up
without doing external memory accesses.

Remove the zeromem16 function on AArch64 and replace it with an alias to
zeromem. This zeromem16 function is now deprecated.

Remove the 16-byte alignment constraint on __BSS_START__ in
firmware-design.md as it is no longer mandatory (it existed to comply
with zeromem16 requirements).

Change the 16-byte alignment constraints in SP_MIN's linker script to an
8-byte alignment constraint, as the AArch32 zeromem implementation is now
more efficient on 8-byte aligned addresses.

Introduce zero_normalmem and zeromem helpers in platform agnostic header
that are implemented this way:
* AArch32:
	* zero_normalmem: zero using usual data access
	* zeromem: alias for zero_normalmem
* AArch64:
	* zero_normalmem: zero normal memory  using DC ZVA instruction
	                  (needs MMU enabled)
	* zeromem: zero using usual data access

Usage guidelines: in most cases, zero_normalmem should be preferred.

There are 2 scenarios where zeromem (or memset) must be used instead:
* Code that must run with MMU disabled (which means all memory is
  considered device memory for data accesses).
* Code that fills device memory with null bytes.

Optionally, the following rule can be applied if performance is
important:
* Code zeroing small areas (few bytes) that are not secrets should use
  memset to take advantage of compiler optimizations.

  Note: Code zeroing security-related critical information should use
  zero_normalmem/zeromem instead of memset to avoid removal by compiler
  optimizations in some cases or by misbehaving versions of GCC.

Fixes ARM-software/tf-issues#408

Change-Id: Iafd9663fc1070413c3e1904e54091cf60effaa82
Signed-off-by: Douglas Raillard <douglas.raillard@arm.com>
2017-02-06 17:01:39 +00:00
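
A short usage sketch of the guidelines above (the buffer names and the
device region are illustrative):

  /* Security-sensitive data, MMU enabled: prefer zero_normalmem(). */
  zero_normalmem(key_buffer, sizeof(key_buffer));

  /* Running with the MMU disabled, or clearing device memory: zeromem(). */
  zeromem((void *)SCRATCH_DEVICE_BASE, SCRATCH_DEVICE_SIZE);

  /* Small, non-secret scratch data: memset() is fine and lets the
   * compiler optimise freely. */
  memset(&tmp, 0, sizeof(tmp));
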
Jeenu Viswambharan 10bcd76157 Report errata workaround status to console
The errata reporting policy is as follows:

  - If an errata workaround is enabled:

    - If it applies (i.e. the CPU is affected by the errata), an INFO
      message is printed, confirming that the errata workaround has been
      applied.

    - If it does not apply, a VERBOSE message is printed, confirming
      that the errata workaround has been skipped.

  - If an errata workaround is not enabled, but would have applied had
    it been, a WARN message is printed, alerting that the errata
    workaround is missing.

The CPU errata messages are printed by both BL1 (primary CPU only) and
runtime firmware on debug builds, once for each CPU/errata combination.

Relevant output from Juno r1 console when ARM Trusted Firmware is built
with PLAT=juno LOG_LEVEL=50 DEBUG=1:

  VERBOSE: BL1: cortex_a57: errata workaround for 806969 was not applied
  VERBOSE: BL1: cortex_a57: errata workaround for 813420 was not applied
  INFO:    BL1: cortex_a57: errata workaround for disable_ldnp_overread was applied
  WARNING: BL1: cortex_a57: errata workaround for 826974 was missing!
  WARNING: BL1: cortex_a57: errata workaround for 826977 was missing!
  WARNING: BL1: cortex_a57: errata workaround for 828024 was missing!
  WARNING: BL1: cortex_a57: errata workaround for 829520 was missing!
  WARNING: BL1: cortex_a57: errata workaround for 833471 was missing!
  ...
  VERBOSE: BL31: cortex_a57: errata workaround for 806969 was not applied
  VERBOSE: BL31: cortex_a57: errata workaround for 813420 was not applied
  INFO:    BL31: cortex_a57: errata workaround for disable_ldnp_overread was applied
  WARNING: BL31: cortex_a57: errata workaround for 826974 was missing!
  WARNING: BL31: cortex_a57: errata workaround for 826977 was missing!
  WARNING: BL31: cortex_a57: errata workaround for 828024 was missing!
  WARNING: BL31: cortex_a57: errata workaround for 829520 was missing!
  WARNING: BL31: cortex_a57: errata workaround for 833471 was missing!
  ...
  VERBOSE: BL31: cortex_a53: errata workaround for 826319 was not applied
  INFO:    BL31: cortex_a53: errata workaround for disable_non_temporal_hint was applied

Also update documentation.

Change-Id: Iccf059d3348adb876ca121cdf5207bdbbacf2aba
Signed-off-by: Jeenu Viswambharan <jeenu.viswambharan@arm.com>
2017-01-30 14:53:19 +00:00
Sandrine Bailleux 949a52d24e Fix integer overflows in BL1 FWU code
Before adding a base address and a size to compute the end
address of an image to copy or authenticate, check that this
won't result in an integer overflow. If it does, the input
arguments are considered invalid.

As a result, bl1_plat_mem_check() can now safely assume the
end address (computed as the sum of the base address and size
of the memory region) doesn't overflow, as the validation is
done upfront in bl1_fwu_image_copy/auth(). A debug assertion
has been added nonetheless in the ARM implementation in order
to help catch such problems, should bl1_plat_mem_check()
be called in a different context in the future.

Fixes TFV-1: Malformed Firmware Update SMC can result in copy
of unexpectedly large data into secure memory

Change-Id: I8b8f8dd4c8777705722c7bd0e8b57addcba07e25
Signed-off-by: Sandrine Bailleux <sandrine.bailleux@arm.com>
Signed-off-by: Dan Handley <dan.handley@arm.com>
2016-12-20 11:43:10 +00:00
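
The check itself is simple; a self-contained sketch of the idea (names
are illustrative):

  #include <stdbool.h>
  #include <stddef.h>
  #include <stdint.h>

  /* Reject (base, size) pairs whose end address would wrap around. */
  static bool mem_range_is_valid(uintptr_t base, size_t size)
  {
          /* base + size overflows iff size exceeds what is left above base. */
          return size <= (UINTPTR_MAX - base);
  }
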
Sandrine Bailleux 1bfb706851 Add some debug assertions in BL1 FWU copy code
These debug assertions sanity check the state of the internal
FWU state machine data when resuming an incomplete image copy
operation.

Change-Id: I38a125b0073658c3e2b4b1bdc623ec221741f43e
Signed-off-by: Sandrine Bailleux <sandrine.bailleux@arm.com>
2016-12-20 11:43:10 +00:00
Sandrine Bailleux b38a9e5cd7 bl1_fwu_image_copy() refactoring
This patch refactors the code of the function handling a FWU_AUTH_COPY
SMC in BL1. All input validation has been moved upfront so it is now
shared between the RESET and COPYING states.

Change-Id: I6a86576b9ce3243c401c2474fe06f06687a70e2f
Signed-off-by: Sandrine Bailleux <sandrine.bailleux@arm.com>
Signed-off-by: Dan Handley <dan.handley@arm.com>
2016-12-20 11:43:10 +00:00
Sandrine Bailleux 9f1489e445 Minor refactoring of BL1 FWU code
This patch introduces no functional change; it only changes
the serial console output.

 - Improve accuracy of error messages by decoupling some
   error cases;

 - Improve comments;

 - Move declaration of 'mem_layout' local variable closer to
   where it is used and make it const;

 - Rename a local variable to clarify whether it is a source
   or a destination address (base_addr -> dest_addr).

Change-Id: I349fcf053e233f316310892211d49e35ef2c39d9
Signed-off-by: Sandrine Bailleux <sandrine.bailleux@arm.com>
Signed-off-by: Dan Handley <dan.handley@arm.com>
2016-12-20 11:43:10 +00:00
Yatharth Kochar 53d703a555 Enable TRUSTED_BOARD_BOOT support for LOAD_IMAGE_V2=1
This patch enables TRUSTED_BOARD_BOOT (Authentication and FWU)
support, for AArch64, when LOAD_IMAGE_V2 is enabled.

This patch also enables LOAD_IMAGE_V2 for ARM platforms.

Change-Id: I294a2eebce7a30b6784c80c9d4ac7752808ee3ad
Signed-off-by: Yatharth Kochar <yatharth.kochar@arm.com>
2016-12-14 14:37:53 +00:00
Jeenu Viswambharan a806dad58c Define and use no_ret macro where no return is expected
There are many instances in ARM Trusted Firmware where control is
transferred to functions from which return isn't expected. Such jumps
are made using the 'bl' instruction to provide the callee with the location
from which it was jumped to. Additionally, debuggers infer the caller by
examining where 'lr' register points to. If a 'bl' of the nature
described above falls at the end of an assembly function, 'lr' will be
left pointing to a location outside of the function range. This misleads
the debugger back trace.

This patch defines a 'no_ret' macro to be used when jumping to functions
from which return isn't expected. The macro ensures that the 'bl'
instruction is used for the jump and, for debug builds, also places a
'nop' instruction immediately thereafter (unless instructed otherwise)
so as to leave 'lr' pointing within the function range.

Change-Id: Ib34c69fc09197cfd57bc06e147cc8252910e01b0
Co-authored-by: Douglas Raillard <douglas.raillard@arm.com>
Signed-off-by: Jeenu Viswambharan <jeenu.viswambharan@arm.com>
2016-12-05 14:55:35 +00:00
Yatharth Kochar fabf3017cf AArch32: Fix detection of virtualization support
The Virtualization field in the ID_PFR1 register has only 2
valid values (0 or 1), but it was incorrectly checked against an
unrelated value tied to the SPSR register instead.

This patch fixes the detection of virtualization support in the
BL1 context management code by checking against the valid values.

Change-Id: If12592e343770e1da90f0f5fecf0a3376047ac29
2016-09-23 14:34:29 +01:00
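
A hedged sketch of the corrected check (field position per the Armv7/v8
AArch32 definition of ID_PFR1; the register accessor is an assumption):

  #define ID_PFR1_VIRTEXT_SHIFT   12U
  #define ID_PFR1_VIRTEXT_MASK    0xFU

  /* Returns 1 if the Virtualization Extensions are implemented, else 0. */
  static unsigned int virt_ext_present(void)
  {
          /* read_id_pfr1() wraps "MRC p15, 0, <Rt>, c0, c1, 1" (assumed). */
          return (read_id_pfr1() >> ID_PFR1_VIRTEXT_SHIFT) &
                 ID_PFR1_VIRTEXT_MASK;
  }
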
Yatharth Kochar f3b4914be3 AArch32: Add generic changes in BL1
This patch adds generic changes in BL1 to support the AArch32 state.
New AArch32-specific assembly/C files are introduced and
some files are moved to AArch32/64-specific folders.
BL1 for AArch64 is refactored but functionally identical.
BL1 executes in Secure Monitor mode in the AArch32 state.

NOTE: BL1 in AArch32 state ONLY handles BL1_RUN_IMAGE SMC.

Change-Id: I6e2296374c7efbf3cf2aa1a0ce8de0732d8c98a5
2016-09-21 16:27:27 +01:00
Yatharth Kochar 42019bf4e9 Changes for new version of image loading in BL1/BL2
This patch adds changes in BL1 and BL2 to use the new version
of image loading to load the BL images.

Following are the changes in BL1:
  -Use the new version of load_auth_image() to load BL2.
  -Modified `bl1_init_bl2_mem_layout()` to stop using
   `reserve_mem()` and to calculate `bl2_mem_layout`.
   `bl2_mem_layout` calculation now assumes that BL1 RW
   data is at the top of bl1_mem_layout, which is more
   restrictive than the previous BL1 behaviour.

Following are the changes in BL2:
  -`bl2_main.c` is refactored and all the functions
   for loading BLxx images are moved to `bl2_image_load.c`.
   `bl2_main.c` now calls a top-level `bl2_load_images()` to
   load all the images that are applicable in BL2.
  -Added a new file, `bl2_image_load_v2.c`, that uses the new
   version of image loading to load the BL images in BL2.

All the above changes are conditionally compiled using the
`LOAD_IMAGE_V2` flag.

Change-Id: Ic6dcde5a484495bdc05526d9121c59fa50c1bf23
2016-09-20 16:16:42 +01:00
davidcunado-arm f6ace15f9f Merge pull request #689 from yatharth-arm/yk/plat_report_expn
Remove looping around `plat_report_exception`
2016-08-31 14:36:20 +01:00
Yatharth Kochar 5bbc451eea Remove looping around `plat_report_exception`
This patch removes the tight loop that calls `plat_report_exception`
for unhandled exceptions in the AArch64 state.
The new behaviour is to call `plat_report_exception` only
once, followed by a call to `plat_panic_handler`.
This allows platforms to take platform-specific action when
there is an unhandled exception, instead of always spinning
in a tight loop.

Note: This is a subtle break in behaviour for platforms that
      expect `plat_report_exception` to be continuously executed
      when there is an unhandled exception.

Change-Id: Ie2453804b9b7caf9b010ee73e1a90eeb8384e4e8
2016-08-22 13:41:14 +01:00
Soby Mathew c45f627de4 Move SIZE_FROM_LOG2_WORDS macro to utils.h
This patch moves the macro SIZE_FROM_LOG2_WORDS(), defined in
`arch.h`, to `utils.h` as it is a utility macro.

Change-Id: Ia8171a226978f053a1ee4037f80142c0a4d21430
2016-08-09 17:33:57 +01:00
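
For reference, the macro converts a size expressed as log2 of a number
of 32-bit words into bytes; its likely shape and a worked value:

  #define SIZE_FROM_LOG2_WORDS(n)         (4 << (n))

  /* SIZE_FROM_LOG2_WORDS(3) == 4 << 3 == 32 bytes, i.e. 8 words of 4 bytes. */
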
Soby Mathew 532ed61838 Introduce `el3_runtime` and `PSCI` libraries
This patch moves the PSCI services and BL31 frameworks such as context
management and per-cpu data into new library components, `PSCI` and
`el3_runtime` respectively. This enables PSCI to be built independently of
BL31. A new `psci_lib.mk` makefile is introduced which adds the relevant
PSCI library sources and gets included by `bl31.mk`. Other changes
done as part of this patch are:

* The runtime services framework is now moved to the `common/` folder to
  enable reuse.
* The `asm_macros.S` and `assert_macros.S` helpers are moved to the
  architecture-specific folders.
* `plat_psci_common.c` is moved from the `plat/common/aarch64/` folder
  to the `plat/common` folder. The original file location now has a stub
  which just includes the file from the new location to maintain platform
  compatibility.

Most of the changes wouldn't affect platform builds as they just involve
changes to the generic bl1.mk and bl31.mk makefiles.

NOTE: THE `plat_psci_common.c` FILE HAS MOVED LOCATION AND THE STUB FILE AT
THE ORIGINAL LOCATION IS NOW DEPRECATED. PLATFORMS SHOULD MODIFY THEIR
MAKEFILES TO INCLUDE THE FILE FROM THE NEW LOCATION.

Change-Id: I6bd87d5b59424995c6a65ef8076d4fda91ad5e86
2016-07-18 17:52:15 +01:00
Sandrine Bailleux 5d1c104f9a Introduce SEPARATE_CODE_AND_RODATA build flag
At the moment, all BL images share a similar memory layout: they start
with their code section, followed by their read-only data section.
The two sections are contiguous in memory. Therefore, the end of the
code section and the beginning of the read-only data one might share
a memory page. This forces both to be mapped with the same memory
attributes. As the code needs to be executable, this means that the
read-only data stored on the same memory page as the code are
executable as well. This could potentially be exploited as part of
a security attack.

This patch introduces a new build flag called
SEPARATE_CODE_AND_RODATA, which isolates the code and read-only data
on separate memory pages. This in turn allows independent control of
the access permissions for the code and read-only data.

This has an impact on memory footprint, as padding bytes need to be
introduced between the code and read-only data to ensure the
segregation of the two. To limit the memory cost, the memory layout
of the read-only section has been changed in this case.

 - When SEPARATE_CODE_AND_RODATA=0, the layout is unchanged, i.e.
   the read-only section still looks like this (padding omitted):

   |        ...        |
   +-------------------+
   | Exception vectors |
   +-------------------+
   |  Read-only data   |
   +-------------------+
   |       Code        |
   +-------------------+ BLx_BASE

   In this case, the linker script provides the limits of the whole
   read-only section.

 - When SEPARATE_CODE_AND_RODATA=1, the exception vectors and
   read-only data are swapped, such that the code and exception
   vectors are contiguous, followed by the read-only data. This
   gives the following new layout (padding omitted):

   |        ...        |
   +-------------------+
   |  Read-only data   |
   +-------------------+
   | Exception vectors |
   +-------------------+
   |       Code        |
   +-------------------+ BLx_BASE

   In this case, the linker script now exports 2 sets of addresses
   instead: the limits of the code and the limits of the read-only
   data. Refer to the Firmware Design guide for more details. This
   provides platform code with a finer-grained view of the image
   layout and allows it to map these 2 regions with the appropriate
   access permissions.

Note that SEPARATE_CODE_AND_RODATA applies to all BL images.

Change-Id: I936cf80164f6b66b6ad52b8edacadc532c935a49
2016-07-08 14:55:11 +01:00
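
A hedged sketch of how platform code can consume the two sets of linker
symbols (the symbol and MT_* attribute names are assumptions following
the TF conventions of the time):

  #if SEPARATE_CODE_AND_RODATA
          /* Code: read-only and executable. */
          mmap_add_region(BL_CODE_BASE, BL_CODE_BASE,
                          BL_CODE_END - BL_CODE_BASE,
                          MT_CODE | MT_SECURE);
          /* Read-only data: read-only and never executable. */
          mmap_add_region(BL_RO_DATA_BASE, BL_RO_DATA_BASE,
                          BL_RO_DATA_END - BL_RO_DATA_BASE,
                          MT_RO_DATA | MT_SECURE);
  #else
          /* Single read-only region covering both code and rodata;
           * BL_RO_LIMIT stands for the end of that whole section. */
          mmap_add_region(BL_CODE_BASE, BL_CODE_BASE,
                          BL_RO_LIMIT - BL_CODE_BASE,
                          MT_MEMORY | MT_RO | MT_SECURE);
  #endif
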