arm-trusted-firmware/bl2/aarch64/bl2_entrypoint.S

/*
* Copyright (c) 2013-2017, ARM Limited and Contributors. All rights reserved.
*
* SPDX-License-Identifier: BSD-3-Clause
*/
#include <arch.h>
#include <asm_macros.S>
#include <bl_common.h>
.globl bl2_entrypoint
func bl2_entrypoint
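/*
 * Note: in the standard boot flow BL1 enters BL2 here at Secure EL1
 * with the MMU and data cache disabled, which is why the system
 * registers programmed below are the *_el1 variants.
 */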
/*---------------------------------------------
* Save the extents of the tzram available to
* BL2 (passed in x1) for later use.
* x0 is not currently used.
* ---------------------------------------------
*/
mov x20, x1
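/*
 * x20 is a callee-saved register (AAPCS64), so the value saved here
 * survives the helper calls made below and is still intact when it
 * is handed to bl2_early_platform_setup.
 */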
/* ---------------------------------------------
* Set the exception vector to something sane.
* ---------------------------------------------
*/
adr x0, early_exceptions
msr vbar_el1, x0
isb
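/*
 * The ISB ensures the write to vbar_el1 has taken effect before
 * exceptions are unmasked below.
 */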
/* ---------------------------------------------
* Enable the SError interrupt now that the
* exception vectors have been set up.
* ---------------------------------------------
*/
msr daifclr, #DAIF_ABT_BIT
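/*
 * This clears the PSTATE.A mask bit, so SError (asynchronous abort)
 * exceptions are now taken through the early_exceptions vectors.
 */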
/* ---------------------------------------------
* Enable the instruction cache, stack pointer
* and data access alignment checks
* ---------------------------------------------
*/
mov x1, #(SCTLR_I_BIT | SCTLR_A_BIT | SCTLR_SA_BIT)
mrs x0, sctlr_el1
orr x0, x0, x1
msr sctlr_el1, x0
isb
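/*
 * The MMU and data cache remain disabled at this point; they are
 * typically enabled later by the platform in bl2_plat_arch_setup.
 */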
/* ---------------------------------------------
* Invalidate the RW memory used by the BL2
* image. This includes the data and NOBITS
* sections. This is done to safeguard against
* possible corruption of this memory by dirty
* cache lines in a system cache as a result of
* use by an earlier boot loader stage.
* ---------------------------------------------
*/
adr x0, __RW_START__
adr x1, __RW_END__
sub x1, x1, x0
bl inv_dcache_range
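/*
 * inv_dcache_range expects the base address in x0 and the length in
 * bytes in x1, hence the subtraction above.
 */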
/* ---------------------------------------------
* Zero out NOBITS sections. There are 2 of them:
* - the .bss section;
* - the coherent memory section.
* ---------------------------------------------
*/
ldr x0, =__BSS_START__
ldr x1, =__BSS_SIZE__
bl zeromem
#if USE_COHERENT_MEM
ldr x0, =__COHERENT_RAM_START__
ldr x1, =__COHERENT_RAM_UNALIGNED_SIZE__
bl zeromem
#endif
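/*
 * zeromem zeroes using plain data accesses. The DC ZVA based
 * zero_normalmem cannot be used here because the MMU is still
 * disabled, so all data accesses are treated as device memory.
 */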
/* --------------------------------------------
* Allocate a stack whose memory will be marked
* as Normal-IS-WBWA when the MMU is enabled.
* There is no risk of reading stale stack
* memory after enabling the MMU as only the
* primary cpu is running at the moment.
* --------------------------------------------
*/
bl plat_set_my_stack
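/*
 * plat_set_my_stack points the stack pointer at the top of this
 * CPU's platform-defined stack, making C function calls possible.
 */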
/* ---------------------------------------------
* Initialize the stack protector canary before
* any C code is called.
* ---------------------------------------------
*/
#if STACK_PROTECTOR_ENABLED
bl update_stack_protector_canary
#endif
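/*
 * The canary must hold its final value before any C function built
 * with stack protection runs, otherwise that function's epilogue
 * check would see a mismatch and treat it as stack corruption.
 */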
/* ---------------------------------------------
* Perform early platform setup and platform-
* specific early architectural setup, e.g.
* MMU setup.
* ---------------------------------------------
*/
mov x0, x20
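/*
 * Pass the tzram extents saved from x1 at entry as the first
 * argument to bl2_early_platform_setup.
 */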
bl bl2_early_platform_setup
bl bl2_plat_arch_setup
/* ---------------------------------------------
* Jump to main function.
* ---------------------------------------------
*/
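/*
 * bl2_main loads the images for the subsequent boot stages and, in
 * the standard flow, hands control back to BL1 via an SMC, so it is
 * not expected to return here.
 */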
bl bl2_main
/* ---------------------------------------------
* Should never reach this point.
* ---------------------------------------------
*/
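/*
 * no_ret marks a call to a function that is not expected to return;
 * the default plat_panic_handler implementation simply loops, though
 * platforms may override it.
 */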
no_ret plat_panic_handler
endfunc bl2_entrypoint