* This patch adds a root context procedure to restore/configure
the registers that matter during EL3 execution.
* The EL3/Root context is a simple restore operation that overwrites
the following bits (MDCR_EL3.SDD, SCR_EL3.{EA, SIF}, PMCR_EL0.DP,
PSTATE.DIT) while execution is in EL3.
* It ensures the EL3 world maintains its own settings, distinct
from the other worlds (NS/Realm/SWd). With this in place, the EL3
system register settings are no longer influenced by the settings of
incoming worlds. This allows the EL3/Root world to access features
for its own execution at EL3 (e.g. PAuth).
* It should be invoked on the cold and warm boot entry paths and also
in all the possible exception handlers routing to EL3 at runtime.
The cold and warm boot paths are handled by including the setup_el3_context
function in the "el3_entrypoint_common" macro, which gets invoked on
both entry paths.
* At runtime, the EL3 context is set up while preparing to enter
EL3 via the "prepare_el3_entry" routine.
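As an illustration of the overwrite set described above, here is a minimal C sketch,
assuming the usual TF-A sysreg accessors (read_scr_el3()/write_scr_el3() and friends)
and the bit macros from arch.h; the actual routine lives in the entry-path assembly
and also sets PSTATE.DIT directly:

  /* Sketch only: force the EL3/Root settings regardless of the incoming world. */
  static void setup_el3_root_context(void)
  {
      /* Disable Secure self-hosted debug (MDCR_EL3.SDD = 1). */
      write_mdcr_el3(read_mdcr_el3() | MDCR_SDD_BIT);
      /* Route external aborts to EL3 and restrict Secure instruction fetch
       * (SCR_EL3.EA = 1, SCR_EL3.SIF = 1). */
      write_scr_el3(read_scr_el3() | SCR_EA_BIT | SCR_SIF_BIT);
      /* Prohibit cycle counting in EL3 (PMCR_EL0.DP = 1). */
      write_pmcr_el0(read_pmcr_el0() | PMCR_EL0_DP_BIT);
      isb();
  }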
Change-Id: I5c090978c54a53bc1c119d1bc5fa77cd8813cdc2
Signed-off-by: Jayanth Dodderi Chidanand <jayanthdodderi.chidanand@arm.com>
Replace "adr" with "adr_l" to handle symbols or labels that exceeds 1MB
access range. This modification resolves the link error.
Change-Id: I9eba2e34c0a303b40e4c7b3ea7c5b113f4c6d989
Signed-off-by: Hsin-Hsiung Wang <hsin-hsiung.wang@mediatek.com>
For a feature to be used at lower ELs, EL3 generally needs to disable
the trap so that lower ELs can access the system registers associated
with the feature. Lower ELs generally check ID registers to dynamically
detect if a feature is present (in HW) or not, while EL3 firmware relies
statically on feature build macros to enable a feature.
If a lower EL accesses a system register for a feature that the EL3 FW is
unaware of, EL3 traps the access and panics. This happens mostly with
EL2, but sometimes VMs can also cause an EL3 panic.
To provide platforms with the capability to mitigate this problem, UNDEF
injection support has been introduced, which injects a synchronous
exception into the lower EL; the lower EL is expected to handle that
synchronous exception.
The current support is only provided for AArch64.
The implementation does the following on encountering a sys reg trap
(a hedged sketch follows the list):
- Get the target EL, which can be either EL2 or EL1.
- Update ELR_ELx with ELR_EL3, so that after UNDEF handling in the lower EL
control returns to the original location.
- Update ESR_ELx with EC_UNKNOWN.
- Update ELR_EL3 with the vector address of the lower EL's sync exception
handler, for one of the following entry types:
  - Current EL with SP0
  - Current EL with SPx
  - Lower EL using AArch64
- Re-create SPSR_EL3, which will be used to generate PSTATE at ERET.
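A hedged C sketch of the injection flow, assuming the usual TF-A context/sysreg
accessors; get_target_el(), vector_offset() and create_spsr() are hypothetical
helpers standing in for the target-EL, vector-entry-offset and SPSR re-creation
logic:

  /* Sketch only: inject an UNDEF (EC_UNKNOWN) into the lower EL on a sys reg trap. */
  static void inject_undef64(cpu_context_t *ctx)
  {
      unsigned int target_el = get_target_el();   /* hypothetical: EL2 or EL1 */
      uint64_t spsr_el3 = read_ctx_reg(get_el3state_ctx(ctx), CTX_SPSR_EL3);
      uint64_t elr_el3 = read_ctx_reg(get_el3state_ctx(ctx), CTX_ELR_EL3);

      if (target_el == MODE_EL2) {
          write_elr_el2(elr_el3);                 /* return here after UNDEF handling */
          write_esr_el2(EC_UNKNOWN << ESR_EC_SHIFT);
          elr_el3 = read_vbar_el2() + vector_offset(spsr_el3); /* SP0/SPx/lower-EL entry */
      } else {
          write_elr_el1(elr_el3);
          write_esr_el1(EC_UNKNOWN << ESR_EC_SHIFT);
          elr_el3 = read_vbar_el1() + vector_offset(spsr_el3);
      }

      write_ctx_reg(get_el3state_ctx(ctx), CTX_ELR_EL3, elr_el3);
      write_ctx_reg(get_el3state_ctx(ctx), CTX_SPSR_EL3,
                    create_spsr(spsr_el3, target_el));
  }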
Signed-off-by: Manish Pandey <manish.pandey2@arm.com>
Change-Id: I1b7bf6c043ce7aec1ee4fc1121c389b490b7bfb7
This patch makes the following changes to restrict handling of lower EL
EAs to the case where FFH mode is enabled:
- Compile ea_delegate.S only if FFH mode is enabled.
- For a sync exception from a lower EL, if the EC is not SMC or a sys reg
trap, it was assumed to be an EA, which is not correct. Move
the known sync exceptions (EL3 Impdef) out of the sync EA handler.
- Report unhandled exceptions if there is an SError from a lower EL in
KFH mode, as this is unexpected.
- Move code that is used for KFH mode out of ea_delegate.S.
Signed-off-by: Manish Pandey <manish.pandey2@arm.com>
Change-Id: I577089677d0ec8cde7c20952172bee955573d2ed
This patch removes the RAS_FFH_SUPPORT macro, which was the combination of
ENABLE_FEAT_RAS and HANDLE_EA_EL3_FIRST_NS. Instead, introduce an
internal macro FFH_SUPPORT, which gets enabled when a platform wants
to enable lower EL EA handling at EL3. The internal macro FFH_SUPPORT
is automatically enabled if HANDLE_EA_EL3_FIRST_NS is enabled.
FFH_SUPPORT, along with ENABLE_FEAT_RAS, is used in source files
to provide the equivalent check previously provided by RAS_FFH_SUPPORT.
In generic code we needed a macro that could abstract both the
HANDLE_EA_EL3_FIRST_NS and RAS_FFH_SUPPORT macros, which had limitations:
the former was tied to the NS world only, while the latter was tied to the
RAS feature.
This is to allow Secure/Realm world to have their own FFH macros
in future.
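Illustratively, a source-level check that previously used RAS_FFH_SUPPORT would now
be expressed with the two new macros (a sketch of the intent, not a literal diff from
this patch):

  /* Before: */
  #if RAS_FFH_SUPPORT
      /* RAS firmware-first handling path */
  #endif

  /* After: the same condition, built from the architectural and handling flags. */
  #if ENABLE_FEAT_RAS && FFH_SUPPORT
      /* RAS firmware-first handling path */
  #endif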
Signed-off-by: Manish Pandey <manish.pandey2@arm.com>
Change-Id: Ie5692ccbf462f5dcc3f005a5beea5aa35124ac73
For synchronization of errors at exception boundaries, TF-A uses the "esb"
instruction with FEAT_RAS, or "dsb" and "isb" otherwise. The problem
with the esb instruction is that, along with synchronizing errors, it might
also consume the error, which is not ideal in all scenarios. On the other
hand, we can't always use dsb as it is in the hot path.
The best way to solve the above problem is to use the FEAT_IESB
feature, which provides controls to insert an implicit Error
synchronization event at exception entry and exception return.
The assumption in TF-A is that if the RAS Extension is present then FEAT_IESB
will also be present and enabled.
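As a sketch, enabling the implicit error synchronization event amounts to setting
SCTLR_EL3.IESB once during EL3 setup (assuming the usual TF-A accessors and an
SCTLR_IESB_BIT definition as in arch.h):

  /* Sketch only: with FEAT_IESB, exception entry to and return from EL3
   * implicitly synchronize errors, so no explicit esb/dsb is needed there. */
  static void enable_iesb(void)
  {
      write_sctlr_el3(read_sctlr_el3() | SCTLR_IESB_BIT);
      isb();
  }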
Signed-off-by: Manish Pandey <manish.pandey2@arm.com>
Change-Id: Ie5861eec5da4028a116406bb4d1fea7dac232456
Vector entries into EL3 from lower ELs first check for any pending
async EAs from the lower EL before handling the original exception.
This happens when there is an error (EA) in the system which has not
yet been signaled to the PE while executing at a lower EL. During entry into
EL3 the errors (EA) are synchronized, causing an async EA to pend at EL3.
On detecting the pending EA (via ISR_EL1.A), EL3 either reflects it back
to the lower EL (KFH) or handles it in EL3 (FFH), based on the EA routing model.
In the case of Firmware First handling mode (FFH), EL3 handles the pended
EA first before returning to handle the original exception.
In the case of Kernel First handling mode (KFH), EL3 returns
to the lower EL without handling the original exception; on returning to the
lower EL, the EA will be pended there. In KFH mode there is a risk of back and
forth between EL3 and the lower EL if the EA is masked at the lower EL or the
priority of the EA is lower than that of the original exception. This is a
limitation of the current architecture, but it can be solved in the future if
EL3 gains the capability to inject a virtual SError.
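A minimal C sketch of the vector-entry check, assuming a read_isr_el1() accessor;
the two handler routines named here are hypothetical placeholders for the FFH and
KFH paths:

  #define ISR_A_BIT    (1U << 8)    /* ISR_EL1.A: pending SError/async EA */

  /* Sketch only: runs before the original lower-EL exception is handled. */
  static void check_pending_async_ea(void)
  {
      if ((read_isr_el1() & ISR_A_BIT) == 0U)
          return;

  #if FFH_SUPPORT
      handle_pending_async_ea();              /* hypothetical: consume the EA in EL3 */
  #else
      reflect_pending_async_ea_to_lower_el(); /* hypothetical: return so it pends at lower EL */
  #endif
  }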
Signed-off-by: Manish Pandey <manish.pandey2@arm.com>
Change-Id: I3a2a31de7cf454d9d690b1ef769432a5b24f6c11
Currently, EL3 context registers are duplicated per-world per-cpu.
Some registers have the same value across all CPUs, so this patch
moves these registers out into a per-world context to reduce
memory usage.
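For illustration, the shape of the change is roughly the following (field and
array-bound names are assumptions; the actual TF-A layout may differ):

  /* Sketch only: registers whose value is identical on every CPU move out of
   * the per-CPU cpu_context into a single per-world structure. */
  typedef struct per_world_context {
      uint64_t ctx_cptr_el3;
      uint64_t ctx_zcr_el3;
  } per_world_context_t;

  /* One instance per security state, instead of one per CPU per security state. */
  static per_world_context_t per_world_ctx[CPU_DATA_CONTEXT_NUM];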
Change-Id: I91294e3d5f4af21a58c23599af2bdbd2a747c54a
Signed-off-by: Elizabeth Ho <elizabeth.ho@arm.com>
Signed-off-by: Jayanth Dodderi Chidanand <jayanthdodderi.chidanand@arm.com>
The interrupt exception handler in the vector entry is used as an asm macro
(added as inline code) instead of a function call. Since we have limited
space (0x80 bytes) for a vector entry, there is a chance that it may overflow
in the future.
Signed-off-by: Manish Pandey <manish.pandey2@arm.com>
Change-Id: Ieb59f249c58b52e56e0217268fa4dc40b420f8d3
This change fixes the initial support for the SMCCCv1.3 SVE hint bit [1].
In the aarch64 SMC handler, the FID[16] bit is improperly extracted,
resulting in the corresponding flags bit always being set.
Fix this by doing the proper masking and setting it into the flags register.
[1] https://review.trustedfirmware.org/c/TF-A/trusted-firmware-a/+/17511
Signed-off-by: Olivier Deprez <olivier.deprez@arm.com>
Change-Id: I62b8e211d48a50f28e184ff27cd718f51d8d56bf
The current usage of RAS_EXTENSION in the TF-A codebase is to cater for two
things in TF-A:
1. Pull in the necessary framework and platform hooks for Firmware First
handling (FFH) of RAS errors.
2. Manage the FEAT_RAS extension when switching worlds.
FFH means that all the EAs from NS are trapped in EL3 first and signaled
to the NS world later, after the first handling is done in firmware. There is
an alternate way of handling RAS errors, viz. Kernel First handling (KFH).
Tying FEAT_RAS to the RAS_EXTENSION build flag was not correct, as the
feature is needed for proper KFH handling as well.
This patch breaks down the RAS_EXTENSION flag into a flag to denote the
CPU architecture, `ENABLE_FEAT_RAS`, which is used in context management
during world switch, and another flag, `RAS_FFH_SUPPORT`, to pull in the
required framework and platform hooks for FFH.
Proper support for KFH will be added in future patches.
BREAKING CHANGE: The previous RAS_EXTENSION is now deprecated. The
equivalent functionality can be achieved by the following
2 options:
- ENABLE_FEAT_RAS
- RAS_FFH_SUPPORT
Signed-off-by: Manish Pandey <manish.pandey2@arm.com>
Change-Id: I1abb9ab6622b8f1b15712b12f17612804d48a6ec
Reading back a RES0 bit does not necessarily mean it will be read as 0;
the Arm ARM explicitly warns against relying on this. The PMU initialisation
code tries to set such bits to 1 (in MDCR_EL3) regardless of whether
they are in use or are RES0, so checking their value could be wrong and
PMCR_EL0 might not end up being saved.
Save PMCR_EL0 unconditionally to prevent this. Remove the security state
change, as the outgoing state is not relevant to what the root world
context should look like.
Signed-off-by: Boyan Karatotev <boyan.karatotev@arm.com>
Change-Id: Id43667d37b0e2da3ded0beaf23fa0d4f9013f470
As per the SMCCC spec, Table 2.1, bits 23:17 must be zero (MBZ)
for all Fast Calls, when bit[31] == 1.
Add this check to ensure that SMC FIDs reaching the SMC handler
have these bits (23:17) cleared; if not, capture and report them
as unknown SMCs at the core.
Also, the C runtime stack is copied to the stack pointer well in
advance, to leverage the existing el3_exit routine for unknown SMCs.
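The check itself is small; a hedged C equivalent, with the masks derived from the
FID layout described above (macro names are illustrative):

  #include <stdbool.h>
  #include <stdint.h>

  #define FUNCID_FAST_CALL_BIT    (1U << 31)
  #define FUNCID_MBZ_MASK         (0x7FU << 17)   /* bits 23:17 */

  /* Sketch only: reject Fast Call FIDs with non-zero MBZ bits; callers would
   * then return SMC_UNK via the el3_exit path. */
  static bool smc_fid_mbz_ok(uint32_t smc_fid)
  {
      if ((smc_fid & FUNCID_FAST_CALL_BIT) != 0U &&
          (smc_fid & FUNCID_MBZ_MASK) != 0U)
          return false;
      return true;
  }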
Change-Id: I9972216db5ac164815011177945fb34dadc871b0
Signed-off-by: Jayanth Dodderi Chidanand <jayanthdodderi.chidanand@arm.com>
When we reach sysreg_handler64 from any trap handling, we are entering
this path from a lower EL and thus we should be calling the lower_el_panic
reporting mechanism to print the panic report.
Make report_elx_panic available through the assembly function elx_panic, which
can be used for reporting any lower_el_panic.
Change-Id: Ieb260cf20ea327a59db84198b2c6a6bfc9ca9537
Signed-off-by: Govindraj Raja <govindraj.raja@arm.com>
The current panic call invokes do_panic, which calls el3_panic, but panic now
handles only panics from EL3, with a clear separation to use lower_el_panic(),
which handles panics from lower ELs.
So now we can remove do_panic and just call el3_panic for all panics.
Change-Id: I739c69271b9fb15c1176050877a9b0c0394dc739
Signed-off-by: Govindraj Raja <govindraj.raja@arm.com>
handle_lower_el_async_ea and enter_lower_el_async_ea are the same except for
saving the x30 register; with the previous patch x30 is now freed before calling
these functions, so we don't need both of them.
This patch also unifies the naming convention, so now we have 3 handlers:
- handle_lower_el_ea_esb
- handle_lower_el_sync_ea
- handle_lower_el_async_ea
Signed-off-by: Manish Pandey <manish.pandey2@arm.com>
Change-Id: I63b584cf059bac80195aa334981d50fa6272cf49
Most of the macros/routines in a vector entry need a free scratch register.
Introduce a macro "save_x30" and call it right at the beginning of vector
entries where x30 is used. It is more explicit and less error prone.
Signed-off-by: Manish Pandey <manish.pandey2@arm.com>
Change-Id: I617f3d41a120739e5e3fe1c421c79ceb70c1188e
The following macros are removed:
- handle_async_ea: it duplicates the "check_and_unmask_ea" functionality.
- check_if_serror_from_EL3: this macro is small and called only once;
replace it with instructions at the caller.
Signed-off-by: Manish Pandey <manish.pandey2@arm.com>
Change-Id: Id7eec6263ec23cc8792139f491c563f616fd3618
At the moment we only handle SMC traps from lower ELs, but ignore any
other synchronous traps and just panic.
To cope with system register traps, which we might need to emulate,
introduce a C function to handle those traps, and wire it up to be
called from the exception handler.
We provide a dispatcher function (in C) that will call a platform-specific
implementation for certain (classes of) system registers.
For now this is empty.
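Conceptually, the dispatcher looks something like the sketch below (the platform
hook name is hypothetical; the ISS decoding shown is only indicative):

  /* Sketch only: called from the sync exception handler when the EC indicates a
   * trapped system register access from a lower EL. Returns non-zero if handled. */
  int handle_sysreg_trap(uint64_t esr_el3, cpu_context_t *ctx)
  {
      uint64_t iss = esr_el3 & 0x1ffffffULL;  /* ISS: encodes Op0/Op1/CRn/CRm/Rt/Op2 */

      /* Dispatch on (classes of) system registers; empty for now, so simply
       * delegate to a platform-specific implementation. */
      return plat_handle_sysreg_trap(iss, ctx);   /* hypothetical platform hook */
  }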
Signed-off-by: Andre Przywara <andre.przywara@arm.com>
Change-Id: If147bcb49472eb02791498700300926afbcf75ff
The SCR_EL3 register cannot be modified by lower ELs, which means it retains
the same value that is stored in the EL3 cpu context structure for the
given world. So, we should not save the register when entering EL3
from a lower EL, as we already have a copy of it in the cpu context.
During EL3 execution the SCR_EL3 value can be modified in the following cases:
1. Changes required for EL3 execution; these are temporary
and do not need to be saved.
2. Changes which affect lower EL execution; these changes need to be
written to the cpu context as well, and will be retrieved when SCR_EL3
is restored as part of exiting EL3.
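A hedged sketch of case 2, assuming the standard TF-A context accessors
(read_ctx_reg()/write_ctx_reg(), get_el3state_ctx(), CTX_SCR_EL3):

  /* Sketch only: a persistent SCR_EL3 change that must survive EL3 exit. */
  static void set_scr_el3_bit_for_lower_el(cpu_context_t *ctx, uint64_t bit)
  {
      uint64_t scr = read_ctx_reg(get_el3state_ctx(ctx), CTX_SCR_EL3) | bit;

      /* Update the saved copy; it is written to SCR_EL3 when exiting EL3. */
      write_ctx_reg(get_el3state_ctx(ctx), CTX_SCR_EL3, scr);

      /* A temporary change needed only for EL3 execution (case 1) would instead
       * call write_scr_el3() directly and skip the context update. */
  }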
Signed-off-by: Manish Pandey <manish.pandey2@arm.com>
Change-Id: I9cc984ddf50e27d09e361bd83b1b3c9f068cf2fd
SMCCCv1.3 introduces the SVE hint bit added to the SMC FID (bit 16)
denoting that the world issuing an SMC doesn't expect the callee to
preserve the SVE state (FFR, predicates, Zn vector bits greater than
127). Update the generic SMC handler to copy the SVE hint bit state
to SMC flags and mask out the bit by default for the services called
by the standard dispatcher. The SMCCC standard permits ignoring the
bit as long as the SVE state is preserved. In any case, a
callee must preserve the NEON state (FPCR/FPSR, Vn 128-bit vectors)
whatever the SVE hint bit state.
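A hedged C-level rendering of the handler change (the real code is in the aarch64
SMC handler assembly; the flag macro is hypothetical):

  #define FUNCID_SVE_HINT_SHIFT   16U
  #define FUNCID_SVE_HINT_MASK    (1U << FUNCID_SVE_HINT_SHIFT)

  /* Sketch only: copy FID[16] (SVE hint) into the flags passed to the service
   * handler, then clear it from the FID so standard-service dispatch is
   * unaffected. */
  static uint32_t extract_sve_hint(uint32_t smc_fid, uint64_t *flags)
  {
      if ((smc_fid & FUNCID_SVE_HINT_MASK) != 0U)
          *flags |= SMC_FLAG_SVE_HINT;        /* hypothetical flag bit */

      return smc_fid & ~FUNCID_SVE_HINT_MASK;
  }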
Signed-off-by: Olivier Deprez <olivier.deprez@arm.com>
Change-Id: I2b163ed83dc311b8f81f96b23c942829ae9fa1b5
During a transition to a higher EL, some of the PSTATE bits are not set
by hardware, which means that their state may be leaked from lower ELs.
This patch sets those bits to a default value upon entry to EL3.
This patch was tested using a debugger to check that the PSTATE values
are correctly set, as well as by adding a test in the next patch to
ensure the PSTATE in lower ELs is still maintained after this change.
Change-Id: Ie546acbca7b9aa3c86bd68185edded91b2a64ae5
Signed-off-by: Daniel Boulby <daniel.boulby@arm.com>
In the next patch we add an extra step of setting the PSTATE
registers to a known state on EL3 entry. In this patch we create
the function prepare_el3_entry to wrap the steps needed before
EL3 entry. For now this is only save_gp_pmcr_pauth_regs.
Change-Id: Ie26dc8d89bfaec308769165d2649e84d41be196c
Signed-off-by: Daniel Boulby <daniel.boulby@arm.com>
FEAT_RME introduces two additional security states:
the Root and Realm security states. This patch adds Realm
security state awareness to the SMCCC helpers and the entry point info
structure.
Signed-off-by: Zelalem Aweke <zelalem.aweke@arm.com>
Change-Id: I9cdefcc1aa71259b2de46e5fb62b28d658fa59bd
For SoCs which do not implement RAS, use DSB as a barrier to
synchronize pending external aborts at the entry and exit of
exception handlers. This is needed to isolate the SErrors to
appropriate context.
However, this introduces an unintended side effect, as discussed
in https://review.trustedfirmware.org/c/TF-A/trusted-firmware-a/+/3440
A summary of the side effect and a quick workaround is provided as
part of this patch and summarized here:
The explicit DSB at the entry of various exception vectors in BL31
for handling exceptions from lower ELs can inadvertently trigger an
SError exception in EL3 due to pending asynchronous aborts in lower
ELs. This ends up being handled by serror_sp_elx in EL3, which
ultimately panics and dies.
The workaround is to update a flag to indicate whether the exception
truly came from EL3. This flag is allocated in the cpu_context
structure. This is not a bullet-proof solution to the problem at hand,
because we assume the instructions following "isb" that help to update
the flag (lines 100-102 & 139-141) execute without causing further
exceptions.
Change-Id: I4d345b07d746a727459435ddd6abb37fda24a9bf
Signed-off-by: Madhukar Pappireddy <madhukar.pappireddy@arm.com>
As per the latest mailing list communication [1], we decided to
update the AT speculative workaround implementation in order to
disable the page table walk for lower ELs (EL1 or EL0) immediately
after context switching to EL3 from lower ELs.
The previous implementation of the AT speculative workaround is available
here: 45aecff00
The AT speculative workaround is updated as below (a sketch of steps 2
and 3 follows the reference link):
1. Avoid saving and restoring of the SCTLR and TCR registers for EL1
in the context save and restore routines respectively.
2. On EL3 entry, save the SCTLR and TCR registers for EL1.
3. On EL3 entry, update the EL1 system registers to disable stage 1
page table walk for lower ELs (EL1 and EL0) and enable the EL1
MMU.
4. On EL3 exit, restore the SCTLR and TCR registers for EL1 which
were saved in step 2.
[1]:
https://lists.trustedfirmware.org/pipermail/tf-a/2020-July/000586.html
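A hedged C sketch of the EL3-entry portion (steps 2 and 3 above), assuming the usual
TF-A context/sysreg accessors and the TCR_EL1 EPD0/EPD1 and SCTLR_EL1.M bit macros
from arch.h; the actual implementation sits in the context save/restore assembly:

  /* Sketch only: run on EL3 entry from a lower EL. */
  static void at_speculative_workaround_entry(cpu_context_t *ctx)
  {
      el1_sysregs_t *el1_ctx = get_el1_sysregs_ctx(ctx);

      /* Step 2: preserve the lower EL's translation controls. */
      write_ctx_reg(el1_ctx, CTX_SCTLR_EL1, read_sctlr_el1());
      write_ctx_reg(el1_ctx, CTX_TCR_EL1, read_tcr_el1());

      /* Step 3: disable stage 1 walks for EL1&0 while keeping the EL1 MMU
       * enabled, so a speculative AT cannot start a page table walk. */
      write_tcr_el1(read_tcr_el1() | TCR_EPD0_BIT | TCR_EPD1_BIT);
      write_sctlr_el1(read_sctlr_el1() | SCTLR_M_BIT);
      isb();
  }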
Change-Id: Iee8de16f81dc970a8f492726f2ddd57e7bd9ffb5
Signed-off-by: Manish V Badarkhe <Manish.Badarkhe@arm.com>
Since the BL31 PROGBITS and BL31 NOBITS sections are going to be
in non-adjacent memory regions, potentially far from each other,
some fixes are needed to support this completely.
1. The adr instruction only allows computing the effective address
of a location within a 1MB range of the PC. However, the adrp
instruction together with an add permits position-independent
addressing of any location within a 4GB range of the PC.
2. Since BL31 _RW_END_ marks the end of the BL31 image, care must be
taken that it is aligned to the page size, since we map this memory
region in BL31 using the xlat_v2 lib utils, which mandate alignment of
the image size to page granularity.
Change-Id: I3451cc030d03cb2032db3cc088f0c0e2c84bffda
Signed-off-by: Madhukar Pappireddy <madhukar.pappireddy@arm.com>
Even though ERET always causes a jump to another address, aarch64 CPUs
speculatively execute the following instructions as if the ERET
instruction were not a jump instruction.
The speculative execution does not cross privilege levels (to the jump
target, as one would expect), but it continues at the kernel privilege
level as if the ERET instruction did not change the control flow -
thus executing anything that is accidentally linked after the ERET
instruction. Later, the results of this speculative execution are
always architecturally discarded; however, they can leak data through
microarchitectural side channels. This speculative execution is very
reliable (it seems to be unconditional) and it manages to complete even
relatively performance-heavy operations (e.g. multiple dependent
fetches from uncached memory).
This was fixed in Linux, FreeBSD, OpenBSD and Optee OS:
679db7080129fb48ace43a08873eceabfd092aa1
It is demonstrated in a SafeSide example:
https://github.com/google/safeside/blob/master/demos/eret_hvc_smc_wrapper.cc
https://github.com/google/safeside/blob/master/kernel_modules/kmod_eret_hvc_smc/eret_hvc_smc_module.c
Signed-off-by: Anthony Steinhauser <asteinhauser@google.com>
Change-Id: Iead39b0b9fb4b8d8b5609daaa8be81497ba63a0f
This patch provides the following features and makes modifications
listed below:
- Individual APIAKey key generation for each CPU.
- New key generation on every BL31 warm boot and TSP CPU On event.
- Per-CPU storage of APIAKey added in percpu_data[]
of cpu_data structure.
- `plat_init_apiakey()` function replaced with `plat_init_apkey()`,
which returns a 128-bit value and uses the Generic Timer physical counter
value to increase the randomness of the generated key.
The new function can be used for generation of all ARMv8.3-PAuth keys
(see the sketch after this list).
- ARMv8.3-PAuth specific code placed in `lib/extensions/pauth`.
- New `pauth_init_enable_el1()` and `pauth_init_enable_el3()` functions
generate, program and enable APIAKey_EL1 for EL1 and EL3 respectively;
`pauth_disable_el1()` and `pauth_disable_el3()` functions disable
PAuth for EL1 and EL3 respectively;
`pauth_load_bl31_apiakey()` loads the saved per-CPU APIAKey_EL1 from
the cpu_data structure.
- Combined `save_gp_pauth_registers()` function replaces calls to
`save_gp_registers()` and `pauth_context_save()`;
`restore_gp_pauth_registers()` replaces `pauth_context_restore()`
and `restore_gp_registers()` calls.
- `restore_gp_registers_eret()` function removed with corresponding
code placed in `el3_exit()`.
- Fixed the issue where the `pauth_t pauth_ctx` structure allocated space
for 12 uint64_t PAuth registers instead of 10, by removing the macro
CTX_PACGAKEY_END from `include/lib/el3_runtime/aarch64/context.h`
and assigning its value to CTX_PAUTH_REGS_END.
- Use of MODE_SP_ELX and MODE_SP_EL0 macro definitions
in `msr spsel` instruction instead of hard-coded values.
- Changes in documentation related to ARMv8.3-PAuth and ARMv8.5-BTI.
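As an illustration of the `plat_init_apkey()` contract mentioned above, a hedged
sketch (assuming a read_cntpct_el0() counter accessor; the constants are placeholders,
not a real entropy source):

  /* Sketch only: return a 128-bit APIAKey seed mixed with the Generic Timer
   * physical counter so the key varies across warm boots / CPU-on events. */
  __uint128_t plat_init_apkey(void)
  {
      uint64_t counter = read_cntpct_el0();
      uint64_t key_lo = 0xdeadbeefcafebabeULL ^ counter;
      uint64_t key_hi = 0x0123456789abcdefULL ^ (counter << 1);

      return ((__uint128_t)key_hi << 64) | key_lo;
  }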
Change-Id: Id18b81cc46f52a783a7e6a09b9f149b6ce803211
Signed-off-by: Alexei Fedorov <Alexei.Fedorov@arm.com>
This patch adds support for the Undefined Behaviour sanitizer. There are
two types of support offered: minimalistic trapping support, which
essentially crashes immediately on undefined behaviour, and full support
with full debug messages.
The full support relies on ubsan.c, which has been adapted from code used
by OP-TEE.
Change-Id: I417c810f4fc43dcb56db6a6a555bfd0b38440727
Signed-off-by: Justin Chadwell <justin.chadwell@arm.com>
This patch fixes an issue where secure world timing information
can be leaked because the Secure Cycle Counter is not disabled.
For ARMv8.5 the counter gets disabled by setting the MDCR_EL3.SCCD
bit on CPU cold/warm boot.
For earlier architectures the PMCR_EL0 register is saved/restored
on secure world entry/exit from/to the Non-secure state, and cycle
counting gets disabled by setting the PMCR_EL0.DP bit.
The 'include/aarch64/arch.h' header file was tidied up and new
ARMv8.5-PMU related definitions were added.
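A hedged sketch of the two cases, assuming the usual TF-A accessors and the
MDCR_SCCD_BIT / PMCR_EL0_DP_BIT definitions from arch.h:

  /* Sketch only: stop the cycle counter from exposing Secure world timing. */
  static void disable_secure_cycle_counter(void)
  {
  #if ARM_ARCH_AT_LEAST(8, 5)
      /* ARMv8.5: prohibit Secure cycle counting outright at cold/warm boot. */
      write_mdcr_el3(read_mdcr_el3() | MDCR_SCCD_BIT);
  #else
      /* Earlier architectures: PMCR_EL0 is saved/restored across world switches;
       * disable cycle counting via PMCR_EL0.DP while in the Secure world. */
      write_pmcr_el0(read_pmcr_el0() | PMCR_EL0_DP_BIT);
  #endif
      isb();
  }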
Change-Id: I6f56db6bc77504634a352388990ad925a69ebbfa
Signed-off-by: Alexei Fedorov <Alexei.Fedorov@arm.com>
The intention of this patch is to leverage the existing el3_exit() return
routine for the smc_unknown return path rather than a custom set of instructions.
In order to leverage el3_exit(), the necessary counteraction (i.e., saving the
system registers apart from the GP registers) must be performed. Hence a series of
instructions which save system registers (like SPSR_EL3, SCR_EL3, etc.) to the stack
is moved above the group of instructions which essentially decode the OEN
from the SMC function identifier and obtain the specific service handler in
rt_svc_descs_array. This ensures that the control flow for both known and
unknown SMC calls will be similar.
Change-Id: I67f94cfcba176bf8aee1a446fb58a4e383905a87
Signed-off-by: Madhukar Pappireddy <madhukar.pappireddy@arm.com>
Replace call to pauth_context_save() with pauth_context_restore()
in case of unknown SMC call.
Change-Id: Ib863d979faa7831052b33e8ac73913e2f661f9a0
Signed-off-by: Alexei Fedorov <Alexei.Fedorov@arm.com>
The previous commit added the infrastructure to load and save
ARMv8.3-PAuth registers during Non-secure <-> Secure world switches, but
didn't actually enable pointer authentication in the firmware.
This patch adds the functionality needed for platforms to provide
authentication keys for the firmware, and a new option (ENABLE_PAUTH) to
enable pointer authentication in the firmware itself. This option is
disabled by default, and it requires CTX_INCLUDE_PAUTH_REGS to be
enabled.
Change-Id: I35127ec271e1198d43209044de39fa712ef202a5
Signed-off-by: Antonio Nino Diaz <antonio.ninodiaz@arm.com>
ARMv8.3-PAuth adds functionality that supports address authentication of
the contents of a register before that register is used as the target of
an indirect branch, or as a load.
This feature is supported only in AArch64 state.
This feature is mandatory in ARMv8.3 implementations.
This feature adds several registers to EL1. A new option called
CTX_INCLUDE_PAUTH_REGS has been added to select if the TF needs to save
them during Non-secure <-> Secure world switches. This option must be
enabled if the hardware has the registers or the values will be leaked
during world switches.
To prevent leaks, this patch also disables pointer authentication in the
Secure world if CTX_INCLUDE_PAUTH_REGS is 0. Any attempt to use it will
be trapped in EL3.
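For illustration, the trap arrangement could look roughly like the sketch below,
assuming the SCR_EL3.API/APK bit macros from arch.h (clearing them makes lower-EL
PAuth usage trap to EL3):

  /* Sketch only: compute the SCR_EL3 PAuth bits for a Secure world context.
   * With CTX_INCLUDE_PAUTH_REGS=0, SCR_EL3.API/APK stay 0, so any Secure world
   * use of PAuth instructions or key registers traps to EL3. */
  static uint64_t secure_scr_el3_pauth_bits(uint64_t scr_el3)
  {
  #if CTX_INCLUDE_PAUTH_REGS
      scr_el3 |= SCR_API_BIT | SCR_APK_BIT;   /* registers are context-switched; no trap */
  #else
      scr_el3 &= ~(SCR_API_BIT | SCR_APK_BIT);
  #endif
      return scr_el3;
  }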
Change-Id: I27beba9907b9a86c6df1d0c5bf6180c972830855
Signed-off-by: Antonio Nino Diaz <antonio.ninodiaz@arm.com>
This reverts commit 2f37046524 ("Add support for the SMC Calling
Convention 2.0").
SMCCC v2.0 is no longer required for SPM, and won't be needed in the
future. Removing it makes the SMC handling code less complicated.
The SPM implementation based on SPCI and SPRT was using it, but it has
been adapted to SMCCC v1.0.
Change-Id: I36795b91857b2b9c00437cfbfed04b3c1627f578
Signed-off-by: Antonio Nino Diaz <antonio.ninodiaz@arm.com>
Enforce full include path for includes. Deprecate old paths.
The following folders inside include/lib have been left unchanged:
- include/lib/cpus/${ARCH}
- include/lib/el3_runtime/${ARCH}
The reason for this change is that having a global namespace for
includes isn't a good idea. It defeats one of the advantages of having
folders and it introduces problems that are sometimes subtle (because
you may not know the header you are actually including if there are two
of them).
For example, this patch had to be created because two headers were
called the same way: e0ea0928d5 ("Fix gpio includes of mt8173 platform
to avoid collision."). More recently, this patch has had similar
problems: 46f9b2c3a2 ("drivers: add tzc380 support").
This problem was introduced in commit 4ecca33988 ("Move include and
source files to logical locations"). At that time, there weren't too
many headers so it wasn't a real issue. However, time has shown that
this creates problems.
Platforms that want to preserve the way they include headers may add the
removed paths to PLAT_INCLUDES, but this is discouraged.
Change-Id: I39dc53ed98f9e297a5966e723d1936d6ccf2fc8f
Signed-off-by: Antonio Nino Diaz <antonio.ninodiaz@arm.com>
Use the helper function `save_gp_registers` to save the register
state to cpu_context on entry to EL3 in the SMC handler. This has the
effect of also saving x0 - x3 into the cpu_context, which was
not done previously, but it unifies the register save sequence
in BL31.
Change-Id: I5753c942263a5f9178deda3dba896e3220f3dd83
Signed-off-by: Soby Mathew <soby.mathew@arm.com>
External Aborts while executing in EL3 are fatal in nature. This patch
allows the platform to define a handler for External Aborts received
while executing in EL3. A default implementation is added which falls
back to the platform unhandled exception handling.
Change-Id: I466f2c8113a33870f2c7d2d8f2bf20437d9fd354
Signed-off-by: Jeenu Viswambharan <jeenu.viswambharan@arm.com>
A new file ea_delegate.S is introduced, and all EA-related functions are
moved into it. This makes runtime_exceptions.S less crowded and reads
better.
No functional changes.
Change-Id: I64b653b3931984cffd420563f8e8d1ba263f329f
Signed-off-by: Jeenu Viswambharan <jeenu.viswambharan@arm.com>
Check_vector_size checks if the size of the vector fits
in the size reserved for it. This check creates problems in
the Clang assembler. A new macro, end_vector_entry, is added
and check_vector_size is deprecated.
This new macro fills the current exception vector until the next
exception vector. If the size of the current vector is bigger
than 32 instructions then it gives an error.
Change-Id: Ie8545cf1003a1e31656a1018dd6b4c28a4eaf671
Signed-off-by: Roberto Vargas <roberto.vargas@arm.com>
RAS extensions are mandatory for ARMv8.2 CPUs, but are also optional
extensions to the base ARMv8.0 architecture.
This patch adds build system support to enable RAS features in ARM
Trusted Firmware. A boolean build option RAS_EXTENSION is introduced for
this.
With RAS_EXTENSION, an Error Synchronization Barrier (ESB) is
inserted at all EL3 vector entries and exits. ESBs will synchronize pending
external aborts before entering EL3, and therefore will contain and
attribute errors to lower EL execution. Any errors thus synchronized are
detected via the DISR_EL1 register.
When RAS_EXTENSION is set to 1, HANDLE_EL3_EA_FIRST must also be set to 1.
Change-Id: I38a19d84014d4d8af688bd81d61ba582c039383a
Signed-off-by: Jeenu Viswambharan <jeenu.viswambharan@arm.com>
At present, any External Abort routed to EL3 is reported as an unhandled
exception and causes a panic. This patch enables ARM Trusted Firmware to
handle External Aborts routed to EL3.
With this patch, when an External Abort is received at EL3, its handling
is delegated to the plat_ea_handler() function. Platforms can provide their
own implementation of this function. This patch adds a weak definition
of the said function that prints out a message and just panics (see the
sketch below).
In order to support handling External Aborts at EL3, the build option
HANDLE_EA_EL3_FIRST must be set to 1.
Before this patch, HANDLE_EA_EL3_FIRST wasn't passed down to
compilation; this patch fixes that too.
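A hedged sketch of such a weak default (the exact parameter list is assumed for
illustration and may not match the real prototype):

  /* Sketch only: default EA handler; platforms override this weak definition. */
  #pragma weak plat_ea_handler

  void plat_ea_handler(unsigned int ea_reason, uint64_t syndrome, void *cookie,
                       void *handle, uint64_t flags)
  {
      ERROR("Unhandled External Abort received at EL3!\n");
      ERROR(" exception reason=%u syndrome=0x%lx\n", ea_reason,
            (unsigned long)syndrome);
      panic();
  }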
Change-Id: I4d07b7e65eb191ff72d63b909ae9512478cd01a1
Signed-off-by: Jeenu Viswambharan <jeenu.viswambharan@arm.com>
At present, the function that restores general purpose registers also
does ERET. Refactor the restore code to restore general purpose
registers without ERET to complement the save function.
The macro save_x18_to_x29_sp_el0 was used only once, and is therefore
removed, and its contents expanded inline for readability.
No functional changes, but with this patch:
- The SMC return path will incur a branch-return and an additional
register load.
- The unknown SMC path restores registers x0 to x3.
Change-Id: I7a1a63e17f34f9cde810685d70a0ad13ca3b7c50
Signed-off-by: Jeenu Viswambharan <jeenu.viswambharan@arm.com>
Due to differences in the bitfields of the SMC IDs, it is not possible
to support SMCCC 1.X and 2.0 at the same time.
The behaviour of `SMCCC_MAJOR_VERSION` has changed. Now, it is a build
option that specifies the major version of the SMCCC that the Trusted
Firmware supports. The only two allowed values are 1 and 2, and it
defaults to 1. The value of `SMCCC_MINOR_VERSION` is derived from it.
Note: Support for SMCCC v2.0 is an experimental feature to enable
prototyping of secure partition specifications. Support for this
convention is disabled by default and could be removed without notice.
Change-Id: I88abf9ccf08e9c66a13ce55c890edea54d9f16a7
Signed-off-by: Antonio Nino Diaz <antonio.ninodiaz@arm.com>
According to the SMC Calling Convention (ARM DEN0028B):
The Unknown SMC Function Identifier is a sign-extended value of
(-1) that is returned in R0, W0 or X0 register.
The value wasn't sign-extended because it was defined as a 32-bit
unsigned value (0xFFFFFFFF).
SMC_PREEMPT has been redefined as -2 for the same reason.
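Illustratively, with the identifier names used in this message, the definitions
change roughly as follows:

  /* Before: unsigned 32-bit, so X0 ended up as 0x00000000FFFFFFFF. */
  #define SMC_UNK         0xFFFFFFFF

  /* After: signed, so the value sign-extends across all 64 bits of X0. */
  #define SMC_UNK         -1
  #define SMC_PREEMPT     -2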
NOTE: This might be a compatibility break for some AArch64 platforms
that don't follow the previous version of the SMCCC (ARM DEN0028A)
correctly. That document specifies that only the bottom 32 bits of the
returned value must be checked. If a platform relies on the top 32 bits
of the result being 0 (so that SMC_UNK is 0x00000000FFFFFFFF), it will
have to fix its code to comply with the SMCCC.
Change-Id: I7f7b109f6b30c114fe570aa0ead3c335383cb54d
Signed-off-by: Antonio Nino Diaz <antonio.ninodiaz@arm.com>