Commit Graph

338 Commits

Author SHA1 Message Date
Jimmy Brisson d7b5f40823 Increase type widths to satisfy width requirements
Usually, C has no problem up-converting types to larger bit sizes. MISRA
rule 10.7 requires that you either not do this or be very explicit about
it. This resolves the following required rule:

    bl1/aarch64/bl1_context_mgmt.c:81:[MISRA C-2012 Rule 10.7 (required)]<None>
    The width of the composite expression "0U | ((mode & 3U) << 2U) | 1U |
    0x3c0U" (32 bits) is less that the right hand operand
    "18446744073709547519ULL" (64 bits).

This also resolves MISRA defects such as:

    bl2/aarch64/bl2arch_setup.c:18:[MISRA C-2012 Rule 12.2 (required)]
    In the expression "3U << 20", shifting more than 7 bits, the number
    of bits in the essential type of the left expression, "3U", is
    not allowed.

Further, MISRA requires that shifts do not overflow. The definition of
PAGE_SIZE was (1U << 12), and the essential type of 1U is only 8 bits
wide. This caused about 50 issues. This patch fixes the violation by
changing the definition to (1UL << 12). Since 1UL is 32 bits on AArch32,
this should not create any issues there.
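
A minimal C sketch of the two kinds of change described above (the names
are illustrative, not the exact code touched by this patch):

    #include <stdint.h>

    /* Shift a constant whose essential type is wide enough for the result. */
    #define PAGE_SIZE_SKETCH        (1UL << 12)     /* was (1U << 12) */

    /* Widen a narrow composite expression before it meets a 64-bit operand
     * (MISRA C-2012 Rule 10.7). */
    static inline uint64_t widen_before_use(uint32_t mode)
    {
            return (uint64_t)(((mode & 3U) << 2U) | 1U | 0x3c0U);
    }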

This patch also contains a fix for a build failure on the sun50i_a64
platform. Specifically, these MISRA fixes removed a single and
instruction,

    92407e73        and     x19, x19, #0xffffffff

from the cm_setup_context function. This caused a relocation in
psci_cpus_on_start to require a linker-generated stub, which increased
the size of the .text section and caused an alignment later on to go
over a page boundary and round up to the end of RAM before placing the
.data section. This section is of non-zero size and therefore causes a
link error.

The fix included in this patch reorders the functions at link time
without changing their ordering with respect to alignment.

Change-Id: I76b4b662c3d262296728a8b9aab7a33b02087f16
Signed-off-by: Jimmy Brisson <jimmy.brisson@arm.com>
2020-10-12 10:55:03 -05:00
Javier Almansa Sobrino 1994e56221 arm_fpga: Add support for unknown MPIDs
This patch allows the system to fall back to a default CPU library
in case the MPID does not match any of the supported ones.

This feature can be enabled by setting the SUPPORT_UNKNOWN_MPID build
option to 1 (enabled by default only on the arm_fpga platform).

This feature can be very dangerous on a production image and
therefore it MUST be disabled for Release images.
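
A hedged C sketch of the fallback idea (the names and table layout are
illustrative, not the actual arm_fpga code):

    #include <stddef.h>
    #include <stdint.h>

    struct cpu_lib {
            uint32_t midr;                  /* MPID/MIDR value this library matches */
            void (*reset_handler)(void);
    };

    extern struct cpu_lib known_cpus[];     /* supported CPU libraries, assumed  */
    extern size_t num_known_cpus;
    extern struct cpu_lib generic_cpu_lib;  /* default fallback library, assumed */

    static struct cpu_lib *select_cpu_lib(uint32_t midr)
    {
            for (size_t i = 0; i < num_known_cpus; i++) {
                    if (known_cpus[i].midr == midr)
                            return &known_cpus[i];
            }
    #if SUPPORT_UNKNOWN_MPID
            return &generic_cpu_lib;        /* unknown MPID: use the default   */
    #else
            return NULL;                    /* unknown MPID: treat as an error */
    #endif
    }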

Signed-off-by: Javier Almansa Sobrino <javier.almansasobrino@arm.com>
Change-Id: I0df7ef2b012d7d60a4fd5de44dea1fbbb46881ba
2020-09-25 15:45:50 +01:00
Leonardo Sandoval 327131c4c7 build_macros.mk: include assert and define loop macros
Loop macros make it easier for developers to include new variables to
assert or define, and also help code readability in makefiles.

Change-Id: I0d21d6e67b3eca8976c4d856ac8ccc02c8bb5ffa
Signed-off-by: Leonardo Sandoval <leonardo.sandoval@linaro.org>
2020-09-14 09:27:53 -05:00
Manish V Badarkhe 3b8456bd1c runtime_exceptions: Update AT speculative workaround
As per the latest mailing list communication [1], we decided to update
the AT speculative workaround implementation in order to disable page
table walks for lower ELs (EL1 or EL0) immediately after context
switching to EL3 from lower ELs.

The previous implementation of the AT speculative workaround is
available here: 45aecff00

The AT speculative workaround is updated as follows (a C sketch of
steps 2-4 follows the reference link below):
1. Avoid saving and restoring of SCTLR and TCR registers for EL1
   in the context save and restore routines respectively.
2. On EL3 entry, save the SCTLR and TCR registers for EL1.
3. On EL3 entry, update the EL1 system registers to disable stage 1
   page table walks for lower ELs (EL1 and EL0) and enable the EL1
   MMU.
4. On EL3 exit, restore the SCTLR and TCR registers for EL1 which
   were saved in step 2.

[1]:
https://lists.trustedfirmware.org/pipermail/tf-a/2020-July/000586.html
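
A hedged C sketch of steps 2-4; the actual implementation is assembly in
the EL3 context-management path, and the accessor and bit-macro names
below are assumptions in the style of TF-A's arch headers:

    #include <stdint.h>
    #include <arch.h>           /* TCR_EPD0_BIT, TCR_EPD1_BIT, SCTLR_M_BIT - assumed */
    #include <arch_helpers.h>   /* read_/write_ sysreg accessors, isb() - assumed    */

    static uint64_t saved_sctlr_el1, saved_tcr_el1;

    /* Steps 2 and 3: on entry to EL3 from a lower EL. */
    static void at_workaround_el3_entry(void)
    {
            saved_sctlr_el1 = read_sctlr_el1();
            saved_tcr_el1 = read_tcr_el1();

            /* Disable stage 1 walks for EL1/EL0 and keep the EL1 MMU enabled. */
            write_tcr_el1(saved_tcr_el1 | TCR_EPD0_BIT | TCR_EPD1_BIT);
            write_sctlr_el1(saved_sctlr_el1 | SCTLR_M_BIT);
            isb();
    }

    /* Step 4: on exit from EL3 back to a lower EL. */
    static void at_workaround_el3_exit(void)
    {
            write_sctlr_el1(saved_sctlr_el1);
            write_tcr_el1(saved_tcr_el1);
            isb();
    }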

Change-Id: Iee8de16f81dc970a8f492726f2ddd57e7bd9ffb5
Signed-off-by: Manish V Badarkhe <Manish.Badarkhe@arm.com>
2020-08-18 10:49:27 +01:00
Masahiro Yamada e8ad6168b0 linker_script: move .rela.dyn section to bl_common.ld.h
The .rela.dyn section is the same for BL2-AT-EL3, BL31, TSP.

Move it to the common header file.

I slightly changed the definition so that we can do "RELA_SECTION >RAM".
It still produces equivalent ELF images.

Please note I got rid of '.' from the VMA field. Otherwise, if the end
of the previous .data section is not 8-byte aligned, it fails to link.

aarch64-linux-gnu-ld.bfd: warning: changing start of section .rela.dyn by 4 bytes
aarch64-linux-gnu-ld.bfd: warning: changing start of section .rela.dyn by 4 bytes
aarch64-linux-gnu-ld.bfd: warning: changing start of section .rela.dyn by 4 bytes
make: *** [Makefile:1071: build/qemu/release/bl31/bl31.elf] Error 1

Change-Id: Iba7422d99c0374d4d9e97e6fd47bae129dba5cc9
Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
2020-06-29 15:51:50 +09:00
Masahiro Yamada caa3e7e0a4 linker_script: move .data section to bl_common.ld.h
Move the data section to the common header.

I slightly tweaked some scripts as follows:

[1] bl1.ld.S has ALIGN(16). I added a DATA_ALIGN macro, which is 1
    by default but overridden by bl1.ld.S. Currently, the ALIGN(16)
    of the .data section is redundant because commit 4128659076
    ("Fix boot failures on some builds linked with ld.lld.") padded
    out the previous section to work around an issue in LLD versions
    <= 10.0. This will be fixed in a future release of LLVM, so
    I am keeping the proper way to align the LMA.

[2] bl1.ld.S and bl2_el3.ld.S define __DATA_RAM_{START,END}__ instead
    of __DATA_{START,END}__. I put them out of the .data section.

[3] SORT_BY_ALIGNMENT() is missing in tsp.ld.S, sp_min.ld.S, and
    mediatek/mt6795/bl31.ld.S. This commit adds SORT_BY_ALIGNMENT()
    for all images, so the symbol order in those three will change,
    but I do not think it is a big deal.

Change-Id: I215bb23c319f045cd88e6f4e8ee2518c67f03692
Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
2020-04-25 20:09:08 +09:00
Masahiro Yamada a926a9f60a linker_script: move stacks section to bl_common.ld.h
The stacks section is the same for all BL linker scripts.

Move it to the common header file.

Change-Id: Ibd253488667ab4f69702d56ff9e9929376704f6c
Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
2020-04-24 19:19:06 +09:00
Masahiro Yamada a7739bc7b1 linker_script: move bss section to bl_common.ld.h
Move the bss section to the common header. This adds BAKERY_LOCK_NORMAL
and PMF_TIMESTAMP, which previously existed only in BL31. This is not
a big deal because unused data should not be compiled in the first
place. I believe this should be controlled by BL*_SOURCES in Makefiles,
not by linker scripts.

I investigated BL1, BL2, BL2U, BL31 for plat=fvp, and BL2-AT-EL3,
BL31, BL31 for plat=uniphier. I did not see any more unexpected
code additions.

The bss section has bigger alignment. I added BSS_ALIGN for this.

Currently, SORT_BY_ALIGNMENT() is missing in sp_min.ld.S, and with this
change, the BSS symbols in SP_MIN will be sorted by alignment.
This is not a big deal (or even better, in terms of image size).

Change-Id: I680ee61f84067a559bac0757f9d03e73119beb33
Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
2020-04-02 13:38:24 +09:00
Masahiro Yamada 0a0a7a9ac8 linker_script: replace common read-only data with RODATA_COMMON
The common section data are repeated in many linker scripts (often
twice in each script to support SEPARATE_CODE_AND_RODATA). When you
add a new read-only data section, you end up touching lots of
places.

After this commit, you will only need to touch bl_common.ld.h when
you add a new section to RODATA_COMMON.

Replace the series of RO sections with RODATA_COMMON, which contains
6 sections, some of which did not exist before.

This is not a big deal because unneeded data should not be compiled
in the first place. I believe this should be controlled by BL*_SOURCES
in Makefiles, not by linker scripts.

When I was working on this commit, the BL1 image size increased
due to the fconf_populator. Commit c452ba159c ("fconf: exclude
fconf_dyn_cfg_getter.c from BL1_SOURCES") fixed this issue.

I investigated BL1, BL2, BL2U, BL31 for plat=fvp, and BL2-AT-EL3,
BL31, BL31 for plat=uniphier. I did not see any more unexpected
code additions.

Change-Id: I5d14d60dbe3c821765bce3ae538968ef266f1460
Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
2020-04-02 13:30:17 +09:00
Masahiro Yamada 9fb288a03e linker_script: move more common code to bl_common.ld.h
These are mostly used to collect data from special structures, and
are repeated in many linker scripts.

To differentiate the alignment size between aarch32/aarch64, I added
a new macro STRUCT_ALIGN.

While I moved the PMF_SVC_DESCS, I dropped the #if ENABLE_PMF
conditional. As you can see in include/lib/pmf/pmf_helpers.h,
PMF_REGISTER_SERVICE* are no-ops when ENABLE_PMF=0, so the
pmf_svc_descs and pmf_timestamp_array data are not populated.
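
A hedged C sketch of why dropping the conditional is safe: when
ENABLE_PMF is 0 the registration macro expands to nothing, so no
descriptor is ever emitted into the section the linker script collects
(the shape below is illustrative, not the exact pmf_helpers.h
definition):

    struct pmf_svc_desc_sketch {
            const char *name;
    };

    #if ENABLE_PMF
    #define PMF_REGISTER_SERVICE_SKETCH(_name)                          \
            static const struct pmf_svc_desc_sketch _name##_desc        \
            __attribute__((used, section("pmf_svc_descs"))) = {         \
                    .name = #_name,                                     \
            }
    #else
    /* No-op: nothing is emitted, so the pmf_svc_descs section stays empty. */
    #define PMF_REGISTER_SERVICE_SKETCH(_name)
    #endif

    /* Usage (hypothetical): PMF_REGISTER_SERVICE_SKETCH(rt_instr); */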

Change-Id: I3f4ab7fa18f76339f1789103407ba76bda7e56d0
Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
2020-04-02 12:33:18 +09:00
Mark Dykes d2737fe1c6 Merge changes from topic "mp/enhanced_pal_hw" into integration
* changes:
  plat/arm/fvp: populate pwr domain descriptor dynamically
  fconf: Extract topology node properties from HW_CONFIG dtb
  fconf: necessary modifications to support fconf in BL31 & SP_MIN
  fconf: enhancements to firmware configuration framework
2020-03-12 15:54:28 +00:00
Madhukar Pappireddy 26d1e0c330 fconf: necessary modifications to support fconf in BL31 & SP_MIN
Necessary infrastructure added to integrate the fconf framework in BL31
& SP_MIN. Created a few populator() functions which parse the HW_CONFIG
device tree and registered them with the fconf framework. Many of the
changes are only applicable to the fvp platform. A hedged sketch of a
populator follows the list below.

This patch:
1. Adds necessary symbols and sections in BL31, SP_MIN linker script
2. Adds necessary memory map entry for translation in BL31, SP_MIN
3. Creates an abstraction layer for hardware configuration based on
   fconf framework
4. Adds necessary changes to build flow (makefiles)
5. Minimal callback to read the hw_config dtb for capturing properties
   related to the GIC (interrupt-controller node)
6. Updates the fconf documentation
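
A hedged C sketch of the populator idea; the function name is
hypothetical and the exact registration macro signature may differ from
the real fconf framework:

    #include <stdint.h>

    /* Parse the interrupt-controller node of the HW_CONFIG dtb and cache
     * the properties BL31/SP_MIN need later. */
    static int fconf_populate_gic_props_sketch(uintptr_t hw_config_dtb)
    {
            /* ... read properties with the libfdt helpers ... */
            (void)hw_config_dtb;
            return 0;
    }

    /* Registration would then be along the lines of:
     * FCONF_REGISTER_POPULATOR(gic_props, fconf_populate_gic_props_sketch);
     */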

Change-Id: Ib6292071f674ef093962b9e8ba0d322b7bf919af
Signed-off-by: Madhukar Pappireddy <madhukar.pappireddy@arm.com>
2020-03-11 11:24:55 -05:00
Mark Dykes f9ea3a6291 Merge "Fix crash dump for lower EL" into integration 2020-03-11 15:39:32 +00:00
Masahiro Yamada 665e71b8ea Factor xlat_table sections in linker scripts out into a header file
TF-A has so many linker scripts: at least one linker script for each BL
image, and some platforms have their own. They duplicate quite
similar code (and comments).

When we add changes to linker scripts, we end up touching many
files. This is not nice from a maintainability perspective.

When you look at the Linux kernel, the common code is macrofied in
include/asm-generic/vmlinux.lds.h, which is included from each arch
linker script, arch/*/kernel/vmlinux.lds.S.

TF-A can follow this approach. Let's factor out the common code into
include/common/bl_common.ld.h

As a starting point, this commit factors out the xlat_table section.

Change-Id: Ifa369e9b48e8e12702535d721cc2a16d12397895
Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
2020-03-11 11:31:46 +09:00
Alexei Fedorov b4292bc65e Fix crash dump for lower EL
This patch provides a fix for incorrect crash dump data for
lower EL when TF-A is built with HANDLE_EA_EL3_FIRST=1 option
which enables routing of External Aborts and SErrors to EL3.

Change-Id: I9d5e6775e6aad21db5b78362da6c3a3d897df977
Signed-off-by: Alexei Fedorov <Alexei.Fedorov@arm.com>
2020-03-06 14:17:35 +00:00
Olivier Deprez 63aa4094fb Merge changes from topic "spmd" into integration
* changes:
  SPMD: enable SPM dispatcher support
  SPMD: hook SPMD into standard services framework
  SPMD: add SPM dispatcher based upon SPCI Beta 0 spec
  SPMD: add support to run BL32 in TDRAM and BL31 in secure DRAM on Arm FVP
  SPMD: add support for an example SPM core manifest
  SPMD: add SPCI Beta 0 specification header file
2020-02-11 08:34:47 +00:00
Achin Gupta c3fb00d93e SPMD: enable SPM dispatcher support
This patch adds build system support to include the SPM dispatcher
when the SPD configuration option is spmd.

Signed-off-by: Achin Gupta <achin.gupta@arm.com>
Signed-off-by: Artsem Artsemenka <artsem.artsemenka@arm.com>
Change-Id: Ic1ae50ecd7403fcbcf1d318abdbd6ebdc642f732
2020-02-10 14:09:21 +00:00
Alexei Fedorov 68c76088d3 Make PAC demangling more generic
At the moment, address demangling is only used by the backtrace
functionality. However, at some point, other parts of the TF-A
codebase may want to use it.
The 'demangle_address' function is replaced with a single XPACI
instruction which is also added in 'do_crash_reporting()'.

Signed-off-by: Alexei Fedorov <Alexei.Fedorov@arm.com>
Change-Id: I4424dcd54d5bf0a5f9b2a0a84c4e565eec7329ec
2020-02-07 17:00:34 +00:00
Mark Dykes 235c8174ff Merge "Coverity: remove unnecessary header file includes" into integration 2020-02-04 17:15:57 +00:00
Sandrine Bailleux 9eac8e958e Merge changes from topic "mp/separate_nobits" into integration
* changes:
  plat/arm: Add support for SEPARATE_NOBITS_REGION
  Changes necessary to support SEPARATE_NOBITS_REGION feature
2020-02-04 16:37:09 +00:00
Zelalem e6937287e4 Coverity: remove unnecessary header file includes
This patch removes unnecessary header file includes
discovered by Coverity HFA option.

Change-Id: I2827c37c1c24866c87db0e206e681900545925d4
Signed-off-by: Zelalem <zelalem.aweke@arm.com>
2020-02-04 10:23:51 -06:00
Alexei Fedorov f69a5828b7 Merge "Use correct type when reading SCR register" into integration 2020-01-30 16:55:55 +00:00
Louis Mayencourt f1be00da0b Use correct type when reading SCR register
The Secure Configuration Register is 64 bits wide in AArch64 and 32
bits wide in AArch32. Use u_register_t instead of unsigned int to
reflect this.
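
A hedged sketch of the resulting pattern (accessor and bit names are in
the style of TF-A's arch headers and are assumptions here):

    #include <arch.h>           /* SCR_NS_BIT - assumed     */
    #include <arch_helpers.h>   /* read_scr_el3() - assumed */

    static int caller_is_non_secure(void)
    {
            /* u_register_t is 64-bit on AArch64 and 32-bit on AArch32,
             * matching the width of SCR on each architecture. */
            u_register_t scr = read_scr_el3();

            return (scr & SCR_NS_BIT) != 0U;
    }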

Change-Id: I51b69467baba36bf0cfaec2595dc8837b1566934
Signed-off-by: Louis Mayencourt <louis.mayencourt@arm.com>
2020-01-28 11:10:48 +00:00
Madhukar Pappireddy c367b75e85 Changes necessary to support SEPARATE_NOBITS_REGION feature
Since the BL31 PROGBITS and BL31 NOBITS sections are going to be
in non-adjacent memory regions, potentially far from each other,
some fixes are needed to support this completely.

1. The adr instruction only allows computing the effective address
of a location within a 1MB range of the PC. However, the adrp
instruction together with an add permits position-independent
addressing of any location within a 4GB range of the PC.

2. Since BL31 _RW_END_ marks the end of the BL31 image, care must be
taken that it is aligned to the page size, since we map this memory
region in BL31 using the xlat_v2 lib utils, which mandate alignment of
the image size to page granularity.

Change-Id: I3451cc030d03cb2032db3cc088f0c0e2c84bffda
Signed-off-by: Madhukar Pappireddy <madhukar.pappireddy@arm.com>
2020-01-27 15:33:24 -06:00
Masahiro Yamada 511046eaa2 BL31: discard .dynsym .dynstr .hash sections to make ENABLE_PIE work
When I tried ENABLE_PIE for my PLAT=uniphier platform, BL31 crashed
at its entry. When it is built with ENABLE_PIE=1, some sections are
inserted before the executable code.

$ make PLAT=uniphier CROSS_COMPILE=aarch64-linux-gnu- ENABLE_PIE=1 bl31
$ aarch64-linux-gnu-objdump -h build/uniphier/release/bl31/bl31.elf | head -n 13

build/uniphier/release/bl31/bl31.elf:     file format elf64-littleaarch64

Sections:
Idx Name          Size      VMA               LMA               File off  Algn
  0 .dynsym       000002a0  0000000081000000  0000000081000000  00010000  2**3
                  CONTENTS, ALLOC, LOAD, READONLY, DATA
  1 .dynstr       000002a0  00000000810002a0  00000000810002a0  000102a0  2**0
                  CONTENTS, ALLOC, LOAD, READONLY, DATA
  2 .hash         00000124  0000000081000540  0000000081000540  00010540  2**3
                  CONTENTS, ALLOC, LOAD, READONLY, DATA
  3 ro            0000699c  0000000081000664  0000000081000664  00010664  2**11
                  CONTENTS, ALLOC, LOAD, CODE

The previous stage loader generally jumps to the base address of
BL31, where no valid instruction exists.

I checked the linker script of Linux (arch/arm64/kernel/vmlinux.lds.S)
and U-Boot (arch/arm/cpu/armv8/u-boot.lds), both of which support
relocation. They simply discard those sections.

Do the same in TF-A too.

Change-Id: I6c33e9143856765d4ffa24f3924b0ab51a17cde9
Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
2020-01-24 22:34:25 +09:00
Anthony Steinhauser f461fe346b Prevent speculative execution past ERET
Even though ERET always causes a jump to another address, AArch64 CPUs
speculatively execute the following instructions as if the ERET
instruction were not a jump instruction.
The speculative execution does not cross privilege levels (to the jump
target as one would expect), but it continues at the kernel privilege
level as if the ERET instruction did not change the control flow, thus
executing anything that is accidentally linked after the ERET
instruction. Later, the results of this speculative execution are
always architecturally discarded; however, they can leak data through
microarchitectural side channels. This speculative execution is very
reliable (it seems to be unconditional) and it manages to complete even
relatively performance-heavy operations (e.g. multiple dependent
fetches from uncached memory).

This was fixed in Linux, FreeBSD, OpenBSD and OP-TEE OS:
679db70801
29fb48ace4
3a08873ece
abfd092aa1

It is demonstrated in a SafeSide example:
https://github.com/google/safeside/blob/master/demos/eret_hvc_smc_wrapper.cc
https://github.com/google/safeside/blob/master/kernel_modules/kmod_eret_hvc_smc/eret_hvc_smc_module.c

Signed-off-by: Anthony Steinhauser <asteinhauser@google.com>
Change-Id: Iead39b0b9fb4b8d8b5609daaa8be81497ba63a0f
2020-01-22 21:42:51 +00:00
Samuel Holland f8578e641b bl31: Split into two separate memory regions
Some platforms are extremely memory constrained and must split BL31
between multiple non-contiguous areas in SRAM. Allow the NOBITS
sections (.bss, stacks, page tables, and coherent memory) to be placed
in a separate region of RAM from the loaded firmware image.

Because the NOBITS region may be at a lower address than the rest of
BL31, __RW_{START,END}__ and __BL31_{START,END}__ cannot include this
region, or el3_entrypoint_common would attempt to invalidate the dcache
for the entire address space. New symbols __NOBITS_{START,END}__ are
added when SEPARATE_NOBITS_REGION is enabled, and the dcache for the
NOBITS region is invalidated separately.
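
A hedged C fragment of the idea; the real invalidation is done in the
entrypoint assembly, the symbol names follow the commit message, and
the helper prototype is an assumption:

    #include <stddef.h>
    #include <stdint.h>

    extern char __NOBITS_START__[], __NOBITS_END__[];   /* linker-provided      */
    void inv_dcache_range(uintptr_t addr, size_t size); /* TF-A helper, assumed */

    static void invalidate_nobits_region(void)
    {
    #if SEPARATE_NOBITS_REGION
            /* The NOBITS range is no longer covered by __RW_START__/__RW_END__,
             * so its dcache lines are invalidated on their own. */
            inv_dcache_range((uintptr_t)__NOBITS_START__,
                             (size_t)(__NOBITS_END__ - __NOBITS_START__));
    #endif
    }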

Signed-off-by: Samuel Holland <samuel@sholland.org>
Change-Id: Idedfec5e4dbee77e94f2fdd356e6ae6f4dc79d37
2019-12-29 12:00:40 -06:00
Mark Dykes be84a5b9af Merge "debugfs: add 9p device interface" into integration 2019-12-20 18:10:50 +00:00
Paul Beesley 442e092842 spm-mm: Rename component makefile
Change-Id: Idcd2a35cd2b30d77a7ca031f7e0172814bdb8cab
Signed-off-by: Paul Beesley <paul.beesley@arm.com>
2019-12-20 16:04:12 +00:00
Paul Beesley 538b002046 spm: Remove SPM Alpha 1 prototype and support files
The Secure Partition Manager (SPM) prototype implementation is
being removed. This is preparatory work for putting in place a
dispatcher component that, in turn, enables partition managers
at S-EL2 / S-EL1.

This patch removes:

- The core service files (std_svc/spm)
- The Resource Descriptor headers (include/services)
- SPRT protocol support and service definitions
- SPCI protocol support and service definitions

Change-Id: Iaade6f6422eaf9a71187b1e2a4dffd7fb8766426
Signed-off-by: Paul Beesley <paul.beesley@arm.com>
Signed-off-by: Artsem Artsemenka <artsem.artsemenka@arm.com>
2019-12-20 16:03:32 +00:00
Paul Beesley 3f3c341ae5 Remove dependency between SPM_MM and ENABLE_SPM build flags
There are two different implementations of Secure Partition
management in TF-A. One is based on the "Management Mode" (MM)
design, the other is based on the Secure Partition Client Interface
(SPCI) specification. Currently there is a dependency between their
build flags that shouldn't exist, making further development
harder than it should be. This patch removes that
dependency, making the two flags function independently.

Before: ENABLE_SPM=1 is required for using either implementation.
        By default, the SPCI-based implementation is enabled and
        this is overridden if SPM_MM=1.

After: ENABLE_SPM=1 enables the SPCI-based implementation.
       SPM_MM=1 enables the MM-based implementation.
       The two build flags are mutually exclusive.

Note that the name of the ENABLE_SPM flag remains a bit
ambiguous - this will be improved in a subsequent patch. For this
patch the intention was to leave the name as-is so that it is
easier to track the changes that were made.

Change-Id: I8e64ee545d811c7000f27e8dc8ebb977d670608a
Signed-off-by: Paul Beesley <paul.beesley@arm.com>
2019-12-20 16:03:02 +00:00
György Szing b8e17967bb Merge changes from topic "bs/pmf32" into integration
* changes:
  pmf: Make the runtime instrumentation work on AArch32
  SiP: Don't validate entrypoint if state switch is impossible
2019-12-20 10:33:43 +00:00
Jan Dabros bb9549babc aarch64: Fix stack pointer maintenance on EA handling path
EA handlers for exceptions taken from lower ELs invoke the el3_exit
function at the end. However, there was a bug in the SP maintenance
which resulted in el3_exit setting the runtime stack to the context.
This in turn caused memory corruption on consecutive EL3 entries.

Signed-off-by: Jan Dabros <jsd@semihalf.com>
Change-Id: I0424245c27c369c864506f4baa719968890ce659
2019-12-18 08:47:10 +01:00
Bence Szépkúti 0531ada537 pmf: Make the runtime instrumentation work on AArch32
Ported to AArch32 the pmf asm macros and the asm code in the bl31
entrypoint necessary for the instrumentation.

Since smc dispatch is handled by the bl32 payload on AArch32, we
provide this service only if AARCH32_SP=sp_min is set.

Signed-off-by: Bence Szépkúti <bence.szepkuti@arm.com>
Change-Id: Id33b7e9762ae86a4f4b40d7f1b37a90e5130c8ac
2019-12-17 16:08:04 +01:00
Olivier Deprez 0ca3913dd8 debugfs: add 9p device interface
The 9p interface provides abstraction layers allowing the software
that uses devices to be independent from the hardware.

This patch provides a file system abstraction to link drivers to their
devices and proposes a common interface to expose driver operations to
higher layers. This file system can be used to access and configure a
device by doing read/write operations.
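
A hedged C sketch of the idea (types and names are illustrative, not
the actual debugfs/9p API):

    #include <stddef.h>

    /* Driver-provided operations exposed through the file system. */
    struct dev_ops_sketch {
            int (*open)(const char *path);
            int (*read)(int fd, void *buf, size_t len);
            int (*write)(int fd, const void *buf, size_t len);
    };

    /* A node links a path in the namespace to a driver's operations, so
     * higher layers configure the device with plain read/write calls. */
    struct dev_node_sketch {
            const char *name;                   /* e.g. "/dev/uart0", hypothetical */
            const struct dev_ops_sketch *ops;
    };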

Signed-off-by: Ambroise Vincent <ambroise.vincent@arm.com>
Signed-off-by: Olivier Deprez <olivier.deprez@arm.com>
Change-Id: Ia9662393baf489855dc0c8f389fe4a0afbc9c255
2019-12-17 11:03:23 +01:00
Soby Mathew 7999904074 Merge "PIE: make call to GDT relocation fixup generalized" into integration 2019-12-12 14:25:47 +00:00
Manish Pandey da90359b78 PIE: make call to GDT relocation fixup generalized
When firmware is compiled as a Position Independent Executable it needs
to request GDT relocation fixup by passing the size of the memory
region to the el3_entrypoint_common macro.
The global descriptor table fixup will be done early on during the cold
boot process of the primary core.

Currently only BL31 supports PIE, but in the future, when BL2_AT_EL3 is
compiled as PIE, it can simply pass the fixup size to the common el3
entrypoint macro to fix up the GDT.

The reason for this patch was to overcome the bug introduced by SHA
330ead806, which called the fixup routine for each core, causing
re-initialization of global pointers and thus overwriting any changes
done by the previous core.

Change-Id: I55c792cc3ea9e7eef34c2e4653afd04572c4f055
Signed-off-by: Manish Pandey <manish.pandey2@arm.com>
2019-12-12 14:16:14 +00:00
Samuel Holland ebd6efae67 Reduce space lost to object alignment
Currently, sections within .text/.rodata/.data/.bss are emitted in the
order they are seen by the linker. This leads to wasted space when a
section with a larger alignment follows one with a smaller alignment.
We can avoid this wasted space by sorting the sections.

To take full advantage of this, we must disable generation of common
symbols, so "common" data can be sorted along with the rest of .bss.

An example of the improvement, from `make DEBUG=1 PLAT=sun50i_a64 bl31`:
  .text   => no change
  .rodata => 16 bytes saved
  .data   => 11 bytes saved
  .bss    => 576 bytes saved

As a side effect, the addition of `-fno-common` in TF_CFLAGS makes it
easier to spot bugs in header files.

Signed-off-by: Samuel Holland <samuel@sholland.org>
Change-Id: I073630a9b0b84e7302a7a500d4bb4b547be01d51
2019-12-04 02:59:30 -06:00
laurenw-arm 80942622fe Neoverse N1 Errata Workaround 1542419
The coherent I-cache is causing a prefetch violation: when the core
executes an instruction that has recently been modified, the core might
fetch a stale instruction, which violates the ordering of instruction
fetches.

The workaround includes an instruction sequence writing to
implementation-defined registers to trap all EL0 IC IVAU instructions
to EL3, and a trap handler that executes a TLB inner-shareable
invalidation to an arbitrary address followed by a DSB.

Signed-off-by: Lauren Wehrmeister <lauren.wehrmeister@arm.com>
Change-Id: Ic3b7cbb11cf2eaf9005523ef5578a372593ae4d6
2019-10-04 19:31:24 +03:00
Alexei Fedorov ed108b5605 Refactor ARMv8.3 Pointer Authentication support code
This patch provides the following features and makes modifications
listed below:
- Individual APIAKey key generation for each CPU.
- New key generation on every BL31 warm boot and TSP CPU On event.
- Per-CPU storage of APIAKey added in percpu_data[]
  of cpu_data structure.
- `plat_init_apiakey()` function replaced with `plat_init_apkey()`
  which returns a 128-bit value and uses the Generic Timer physical
  counter value to increase the randomness of the generated key.
  The new function can be used for generation of all ARMv8.3-PAuth keys
  (a hedged sketch follows this list).
- ARMv8.3-PAuth specific code placed in `lib/extensions/pauth`.
- New `pauth_init_enable_el1()` and `pauth_init_enable_el3()` functions
  generate, program and enable APIAKey_EL1 for EL1 and EL3 respectively;
  `pauth_disable_el1()` and `pauth_disable_el3()` functions disable
  PAuth for EL1 and EL3 respectively;
  `pauth_load_bl31_apiakey()` loads saved per-CPU APIAKey_EL1 from
  cpu-data structure.
- Combined `save_gp_pauth_registers()` function replaces calls to
  `save_gp_registers()` and `pauth_context_save()`;
  `restore_gp_pauth_registers()` replaces `pauth_context_restore()`
  and `restore_gp_registers()` calls.
- `restore_gp_registers_eret()` function removed with corresponding
  code placed in `el3_exit()`.
- Fixed the issue where the `pauth_t pauth_ctx` structure allocated
  space for 12 uint64_t PAuth registers instead of 10, by removing the
  macro CTX_PACGAKEY_END from `include/lib/el3_runtime/aarch64/context.h`
  and assigning its value to CTX_PAUTH_REGS_END.
- Use of MODE_SP_ELX and MODE_SP_EL0 macro definitions
  in the `msr spsel` instruction instead of hard-coded values.
- Changes in documentation related to ARMv8.3-PAuth and ARMv8.5-BTI.
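
A hedged C sketch of a platform key hook in the spirit described above;
the function name, seed, and the 2 x 64-bit return convention are all
illustrative, not the real `plat_init_apkey()`:

    #include <stdint.h>

    uint64_t read_cntpct_el0(void);     /* Generic Timer count accessor, assumed */

    /* Hypothetical per-platform seed. */
    static const uint64_t plat_apkey_seed[2] = {
            0x0123456789abcdefULL, 0xfedcba9876543210ULL
    };

    /* Mix the physical counter into the seed so every warm boot / CPU On
     * event gets a different 128-bit key. */
    static void plat_generate_apkey_sketch(uint64_t key[2])
    {
            uint64_t cnt = read_cntpct_el0();

            key[0] = plat_apkey_seed[0] ^ cnt;
            key[1] = plat_apkey_seed[1] ^ (cnt << 1);
    }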

Change-Id: Id18b81cc46f52a783a7e6a09b9f149b6ce803211
Signed-off-by: Alexei Fedorov <Alexei.Fedorov@arm.com>
2019-09-13 14:11:59 +01:00
Justin Chadwell 1f4619796a Add UBSAN support and handlers
This patch adds support for the Undefined Behaviour sanitizer. There
are two types of support offered: minimalistic trapping support, which
essentially crashes immediately on undefined behaviour, and full
support with full debug messages.

The full support relies on ubsan.c, which has been adapted from code
used by OP-TEE.
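
A hedged C sketch of the "full support" idea: the compiler emits calls
to __ubsan_handle_* hooks and a small runtime turns them into
diagnostics. The data layout and the ERROR()/panic() helpers below are
assumptions; real hook signatures vary by compiler version:

    #include <stdint.h>
    #include <common/debug.h>   /* TF-A's ERROR() and panic(), assumed */

    struct ubsan_source_location_sketch {
            const char *file;
            uint32_t line;
            uint32_t column;
    };

    struct ubsan_overflow_data_sketch {
            struct ubsan_source_location_sketch loc;
    };

    void __ubsan_handle_add_overflow(struct ubsan_overflow_data_sketch *data,
                                     uintptr_t lhs, uintptr_t rhs)
    {
            (void)lhs;
            (void)rhs;
            ERROR("UBSAN: addition overflow at %s:%u\n",
                  data->loc.file, data->loc.line);
            panic();
    }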

Change-Id: I417c810f4fc43dcb56db6a6a555bfd0b38440727
Signed-off-by: Justin Chadwell <justin.chadwell@arm.com>
2019-09-11 14:15:54 +01:00
Justin Chadwell 53d7e003fe Move assembly newline function into common debug code
Printing a newline is a relatively common thing for code to want to
do. Therefore, this patch moves this function into a common part of
the code that anyone can use.

Change-Id: I2cad699fde00ef8d2aabf8bf35742ddd88d090ba
Signed-off-by: Justin Chadwell <justin.chadwell@arm.com>
2019-08-29 12:00:59 +00:00
Paul Beesley 30560911dd Merge "AArch64: Disable Secure Cycle Counter" into integration 2019-08-23 11:26:57 +00:00
Alexei Fedorov e290a8fcbc AArch64: Disable Secure Cycle Counter
This patch fixes an issue where secure world timing information
can be leaked because the Secure Cycle Counter is not disabled.
For ARMv8.5 the counter gets disabled by setting the MDCR_EL3.SCCD
bit on CPU cold/warm boot.
For the earlier architectures the PMCR_EL0 register is saved/restored
on secure world entry/exit from/to the Non-secure state, and cycle
counting gets disabled by setting the PMCR_EL0.DP bit.
The 'include/aarch64/arch.h' header file was tidied up and new
ARMv8.5-PMU related definitions were added.
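
A hedged C sketch of the two cases described above; the real changes
are in the boot and context-management assembly, and the accessor/bit
names below are assumptions in the style of TF-A's arch headers:

    #include <arch.h>           /* MDCR_SCCD_BIT, PMCR_EL0_DP_BIT - assumed */
    #include <arch_helpers.h>   /* sysreg accessors, isb() - assumed        */

    static void disable_secure_cycle_counter_sketch(void)
    {
    #if ARM_ARCH_AT_LEAST(8, 5)
            /* ARMv8.5+: prohibit cycle counting in Secure state outright. */
            write_mdcr_el3(read_mdcr_el3() | MDCR_SCCD_BIT);
    #else
            /* Earlier architectures: stop the cycle counter while in Secure
             * state; PMCR_EL0 itself is saved/restored across world switches. */
            write_pmcr_el0(read_pmcr_el0() | PMCR_EL0_DP_BIT);
    #endif
            isb();
    }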

Change-Id: I6f56db6bc77504634a352388990ad925a69ebbfa
Signed-off-by: Alexei Fedorov <Alexei.Fedorov@arm.com>
2019-08-21 15:43:24 +01:00
Alexei Fedorov 6c6a470fc1 AArch64: Align crash reporting output
This patch modifies crash reporting for AArch64 to provide
aligned output of register dump and GIC registers.

Change-Id: I8743bf1d2d6d56086e735df43785ef28051c5fc3
Signed-off-by: Alexei Fedorov <Alexei.Fedorov@arm.com>
2019-08-15 14:23:27 +00:00
Imre Kis c424b91eb6 Fix BL31 crash reporting on AArch64 only machines
The AArch32 system registers are not listed if the platform supports
AArch64 only.

Change-Id: I087a10ae6e7cad1bb52775a344635dbac1f12679
Signed-off-by: Imre Kis <imre.kis@arm.com>
2019-07-22 14:55:34 +02:00
Alexei Fedorov 9fc59639e6 Add support for Branch Target Identification
This patch adds the functionality needed for platforms to provide
Branch Target Identification (BTI), an extension introduced to AArch64
in Armv8.5-A that adds a BTI instruction used to mark valid targets
for indirect branches. The patch sets the new GP bit [50] in the stage 1
Translation Table Block and Page entries to denote guarded EL3 code
pages, which causes the processor to trap instructions in protected
pages that try to perform an indirect branch to any instruction other
than BTI.
The BTI feature is selected by the BRANCH_PROTECTION option, which
supersedes the previous ENABLE_PAUTH option used for Armv8.3-A Pointer
Authentication and is disabled by default. Enabling BTI requires
compiler support and was tested with GCC versions 9.0.0, 9.0.1 and
10.0.0.
The assembly macros and helpers are modified to accommodate the BTI
instruction.
This is an experimental feature.
Note: the previous ENABLE_PAUTH build option to enable PAuth in EL3
is now an internal flag and the BRANCH_PROTECTION flag should be
used instead to enable Pointer Authentication.
Note: the USE_LIBROM=1 option is currently not supported.
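
A hedged C sketch of the translation-table side of this: OR the GP bit
(bit [50], per the text above) into stage 1 block/page descriptors for
guarded EL3 code pages. The macro and flag names are illustrative:

    #include <stdint.h>

    #define GP_BIT_SHIFT_SKETCH     50
    #define GP_BIT_SKETCH           (1ULL << GP_BIT_SHIFT_SKETCH)

    static uint64_t mark_guarded_page(uint64_t desc, int is_el3_code_page)
    {
    #if ENABLE_BTI  /* internal flag driven by BRANCH_PROTECTION, assumed */
            if (is_el3_code_page)
                    desc |= GP_BIT_SKETCH;
    #endif
            return desc;
    }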

Change-Id: Ifaf4438609b16647dc79468b70cd1f47a623362e
Signed-off-by: Alexei Fedorov <Alexei.Fedorov@arm.com>
2019-05-24 14:44:45 +01:00
Madhukar Pappireddy cc485e27ac Rework smc_unknown return code path in smc_handler
The intention of this patch is to leverage the existing el3_exit()
return routine for the smc_unknown return path rather than a custom set
of instructions. In order to leverage el3_exit(), the necessary
counteraction (i.e., saving the system registers apart from the GP
registers) must be performed. Hence a series of instructions which save
system registers (like SPSR_EL3, SCR_EL3, etc.) to the stack are moved
to the top of the group of instructions which essentially decode the
OEN from the SMC function identifier and obtain the specific service
handler in rt_svc_descs_array. This ensures that the control flow for
both known and unknown SMC calls will be similar.

Change-Id: I67f94cfcba176bf8aee1a446fb58a4e383905a87
Signed-off-by: Madhukar Pappireddy <madhukar.pappireddy@arm.com>
2019-05-21 08:19:37 +00:00
Alexei Fedorov 050136d485 Fix restoration of PAuth context
Replace the call to pauth_context_save() with pauth_context_restore()
in the case of an unknown SMC call.

Change-Id: Ib863d979faa7831052b33e8ac73913e2f661f9a0
Signed-off-by: Alexei Fedorov <Alexei.Fedorov@arm.com>
2019-04-05 13:51:08 +01:00
Louis Mayencourt 330ead8065 PIE: Fix reloc at the beginning of bl31 entrypoint
The relocation fixup code must be called at the beginning of bl31
entrypoint to ensure that CPU specific reset handlers are fixed up for
relocations.

Change-Id: Icb04eacb2d4c26c26b08b768d871d2c82777babb
Signed-off-by: Louis Mayencourt <louis.mayencourt@arm.com>
2019-03-25 10:54:59 +00:00