Similar to the cpu_rev_var and cpu_get_rev_var functions, inline the
call_reset_handler helper. This skips the costly branch at no extra
cost, since this is the only place the handler is called.
While we're at it, drop the CPU_NO_RESET_FUNC option. The only cpus
that need it are virtual cpus, which can spare the tiny bit of
performance lost. The rest are real cores, which can now save on the
check for zero.
Now is also a good time to move the assert for a missing cpu into the
get_cpu_ops_ptr function so that it is a bit better encapsulated.
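A rough sketch of the inlined sequence, with register usage simplified
(CPU_RESET_FUNC is the cpu_ops offset from cpu_macros.S):

    bl      get_cpu_ops_ptr            /* now asserts internally on a missing cpu */
    ldr     x2, [x0, #CPU_RESET_FUNC]
    blr     x2                         /* no cbz check: real cores always provide a reset_func */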
Change-Id: Ia7c3dcd13b75e5d7c8bafad4698994ea65f42406
Signed-off-by: Boyan Karatotev <boyan.karatotev@arm.com>
The following changes have been made:
* Add new sysreg definitions and the ASM macro
  is_feat_sysreg128_present_asm (sketched below)
* Add registers TTBR0_EL2 and VTTBR_EL2 to EL3 crash handler output
* Use MRRS instead of MRS for registers TTBR0_EL1, TTBR0_EL2, TTBR1_EL1,
VTTBR_EL2 and PAR_EL1
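A minimal sketch of the new detection macro, assuming FEAT_SYSREG128 is
reported in ID_AA64ISAR2_EL1.SYSREG_128 (older assemblers may need the
S3_0_C0_C6_2 encoding of that register):

    /* Sets the Z flag if FEAT_SYSREG128 is not implemented */
    .macro  is_feat_sysreg128_present_asm reg:req
    mrs     \reg, ID_AA64ISAR2_EL1
    ands    \reg, \reg, #(0xf << 32)   /* SYSREG_128, bits [35:32] */
    .endm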
Change-Id: I0e20b2c35251f3afba2df794c1f8bc0c46c197ff
Signed-off-by: Igor Podgainõi <igor.podgainoi@arm.com>
Just like for SPE, we need to synchronise any in-flight TRBE trace data
before we change the context, to ensure everything goes where it was
intended to. If that is not done, the in-flight entries might use
now-incorrect pieces of context, as there are no implicit ordering
requirements.
Prior to root context, the buffer drain hooks would have done that. But
now that must happen much earlier. So add a tsb to prepare_el3_entry as
well.
Annoyingly, the barrier can be reordered relative to other instructions
by default (rule RCKVWP), so add an isb after the psb/tsb to ensure
they are ordered, at least as far as the context is concerned.
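A sketch of the resulting sequence (its exact placement inside
prepare_el3_entry is assumed here):

    psb     csync      /* SPE: complete in-flight samples (hint #17) */
    tsb     csync      /* TRBE: complete in-flight trace writes (hint #18) */
    isb                /* keep the barriers ordered before the context writes that follow */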
Then, drop the buffer draining hooks. Everything they need to do has
already been done by this point. One notable difference is that there
are no dsbs any more. Since EL3 does not access the buffers or the
feature-specific context, we don't need to wait for the writes to
finish.
Finally, drop a stray isb in the context saving macro. It is now
absorbed into root context, but was missed.
Change-Id: I30797a40ac7f91d0bb71ad271a1597e85092ccd5
Signed-off-by: Boyan Karatotev <boyan.karatotev@arm.com>
The chapter about FEAT_SPE (D16.4 specifically) states that "Sampling
is always disabled at EL3". That means that disabling sampling (writing
PMBLIMITR_EL1.E to 0) is redundant and can be removed. The only reason
we save/restore SPE context is because of that disable, so the
save/restore can be removed too.
There's the issue of draining the profiling buffer though. No new
samples will have been generated since entering EL3. However, old
samples might still be in-flight. Unless synchronised by a psb csync,
those might be affected by our extensive context mutation. Adding a psb
in prepare_el3_entry should cater for that. Note that prior to the
introduction of root context this was not a problem as context remained
unchanged and the hooks took care of the rest.
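A sketch of that addition (exact placement inside prepare_el3_entry is
assumed; psb csync is hint #17 and behaves as a NOP where FEAT_SPE is
not implemented):

    psb     csync      /* complete in-flight profiling samples before context is rewritten */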
Then, the only time we care about the buffer actually making it to
memory is when we exit coherency. On HW_ASSISTED_COHERENCY systems we
don't have to do anything; it should be handled for us. Systems without
it need a dsb to wait for the buffer writes to complete. There should
already be one in each cpu's powerdown hook, which should do the job.
While on the topic of barriers, the esb barrier is no longer used.
Remove it.
Change-Id: I9736fc7d109702c63e7d403dc9e2a4272828afb2
Signed-off-by: Boyan Karatotev <boyan.karatotev@arm.com>
Introduce the "adr_l" macro, which can reach symbols or labels beyond
the +/-1MB range of the "adr" instruction.
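The usual way to build such a macro is an adrp/add pair, which extends
the reach to +/-4GB; the actual implementation may differ in detail:

    .macro  adr_l, dst, sym
    adrp    \dst, \sym                 /* address of the 4KB page holding \sym */
    add     \dst, \dst, :lo12:\sym     /* fill in the low 12 bits */
    .endm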
Change-Id: Iab2a2a2f8a11a5e21e386f1001ba27a8de621132
Signed-off-by: Hsin-Hsiung Wang <hsin-hsiung.wang@mediatek.com>
As part of migrating the RAS extension to the feature detection
mechanism, the ENABLE_FEAT_RAS macro was allowed to use dynamic
detection (FEAT_STATE 2). However, this feature impacts execution of
EL3 and its presence needs to be known at compile time, so do not use
the dynamic detection part of the feature detection mechanism for it.
Signed-off-by: Manish Pandey <manish.pandey2@arm.com>
Change-Id: I23858f641f81fbd81b6b17504eb4a2cc65c1a752
For synchronization of errors at exception boundaries, TF-A uses the
"esb" instruction with FEAT_RAS, or "dsb" and "isb" otherwise. The
problem with the esb instruction is that, along with synchronizing
errors, it might also consume the error, which is not ideal in all
scenarios. On the other hand, we cannot always use dsb, as this is in
the hot path.
The best way to solve this is to use the FEAT_IESB feature, which
provides controls to insert an implicit error synchronization event at
exception entry and exception return.
TF-A assumes that if the RAS extension is present then FEAT_IESB will
also be present and enabled.
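A minimal sketch of turning the implicit synchronization on at EL3
(FEAT_IESB assumed present; IESB is bit 21 of SCTLR_EL3):

    mrs     x0, sctlr_el3
    orr     x0, x0, #(1 << 21)         /* SCTLR_EL3.IESB */
    msr     sctlr_el3, x0
    isb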
Signed-off-by: Manish Pandey <manish.pandey2@arm.com>
Change-Id: Ie5861eec5da4028a116406bb4d1fea7dac232456
On vector entry into EL3 from a lower EL, first check for any pending
async EAs from the lower EL before handling the original exception.
This situation arises when there is an error (EA) in the system which
has not yet been signaled to the PE while executing at the lower EL.
During entry into EL3 the errors (EA) are synchronized, causing the
async EA to pend at EL3.
On detecting the pending EA (via ISR_EL1.A), EL3 either reflects it
back to the lower EL (KFH) or handles it in EL3 (FFH), based on the EA
routing model.
In Firmware First handling mode (FFH), EL3 handles the pended EA first
before returning to handle the original exception.
In Kernel First handling mode (KFH), EL3 returns to the lower EL
without handling the original exception. On returning to the lower EL,
the EA will be pended there. In KFH mode there is a risk of bouncing
back and forth between EL3 and the lower EL if the EA is masked at the
lower EL or its priority is lower than that of the original exception.
This is a limitation of the current architecture, but could be solved
in the future if EL3 gains the capability to inject a virtual SError.
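A sketch of the check made on vector entry (register choice and branch
target are illustrative only):

    mrs     x0, isr_el1
    tbz     x0, #8, 1f                 /* ISR_EL1.A (bit 8): no async EA pending */
    /* FFH: handle the error in EL3; KFH: return so it pends at the lower EL */
    1: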
Signed-off-by: Manish Pandey <manish.pandey2@arm.com>
Change-Id: I3a2a31de7cf454d9d690b1ef769432a5b24f6c11
Debugging assembly is painful as it is, and having no useful stack
trace does not help. To enable stack traces, code must emit CFI
directives whenever the stack moves. Otherwise, the layout of the stack
frame is ambiguous, so the debugger gives up and shows nothing. The
compiler does this automatically for C, but not for assembly.
Add this information to the (currently unused) func_prologue macro.
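A sketch of the directives involved, assuming the enclosing func macro
opens the .cfi_startproc region:

    .macro  func_prologue
    stp     x29, x30, [sp, #-0x10]!
    .cfi_def_cfa_offset 0x10           /* sp just moved down by 16 bytes */
    .cfi_offset x29, -0x10             /* where the old frame pointer was saved */
    .cfi_offset x30, -0x8              /* where the return address was saved */
    mov     x29, sp
    .cfi_def_cfa_register x29          /* unwind via the frame pointer from here on */
    .endm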
Signed-off-by: Boyan Karatotev <boyan.karatotev@arm.com>
Change-Id: Ief5fd672285df8d9d90fa6a2214b5c6e45eddd81
The esb macro used in TF-A executes the "esb" instruction and is
guarded by the RAS_EXTENSION macro. RAS_EXTENSION, as it stands today,
is only enabled for platforms which want RAS errors to be handled in
firmware, while the esb instruction is available whenever the RAS
architecture feature is present, irrespective of how errors are
handled.
Since TF-A currently has no mechanism to detect whether RAS is present
in hardware, define this macro unconditionally.
This is harmless for non-RAS cores, as the instruction executes as a
NOP.
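For reference, the likely shape of the macro; "esb" sits in the hint
space, so spelling it as a hint also keeps older assemblers happy:

    .macro  esb
    hint    #16        /* "esb"; a NOP where FEAT_RAS is not implemented */
    .endm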
Signed-off-by: Manish Pandey <manish.pandey2@arm.com>
Change-Id: I556f2bcf5669c378bda05909525a0a4f96c7b336
The "sb" barrier instruction is a rather new addition to the AArch64
instruction set, so it is not recognised by all toolchains. On top of
that, the GNU assembler rejects this instruction unless a compatible
processor is selected:
asm_macros.S:223: Error: selected processor does not support `sb'
Provide an alternative encoding of the "sb" instruction by using a
system register write, as this is the encoding space that the barrier
instructions borrow from.
This results in the exact same opcode being generated, and any
disassembler will decode the instruction as "sb".
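A sketch of the described encoding (the macro name is assumed):

    .macro  sb_barrier_insn
    /* op0=0, op1=3, CRn=3 (barriers), CRm=0, op2=7, Rt=31 -> 0xd50330ff, i.e. "sb" */
    msr     S0_3_C3_C0_7, xzr
    .endm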
Change-Id: I5f44c8321e0cc04c784e02bd838e964602a96a8e
Signed-off-by: Andre Przywara <andre.przywara@arm.com>
Optimised the loop workaround for the Spectre_BHB mitigation:
1. Use a speculation barrier on cores implementing the SB instruction.
2. Use str/ldr instead of stp/ldp, as the loop only uses the X2
   register.
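A sketch of the optimised loop (macro and parameter names assumed):

    .macro  apply_cve_2022_23960_bhb_wa _bhb_loop_count
    str     x2, [sp, #-16]!            /* only x2 is live; sp stays 16-byte aligned */
    mov     x2, \_bhb_loop_count
    1:
    b       2f
    2:
    subs    x2, x2, #1
    bne     1b
    speculation_barrier                /* sb where implemented, dsb + isb otherwise */
    ldr     x2, [sp], #16
    .endm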
Signed-off-by: Bipin Ravi <bipin.ravi@arm.com>
Change-Id: I8ac53ea1e42407ad8004c1d59c05f791011f195d
BTI instructions are a part of the NOP space in earlier architecture
versions, so it's not inherently incorrect to enable BTI code
or instructions even if the target architecture does not support them.
This change reduces our reliance on architecture versions when checking
for features.
Change-Id: I79f884eec3d65978c61e72e4268021040fd6c96e
Signed-off-by: Chris Kay <chris.kay@arm.com>
The speculation barrier feature (`FEAT_SB`) was introduced with and
made mandatory in the Armv8.5-A extension. It was retroactively made
optional in prior extensions, but the checks in our code-base do not
reflect that, assuming that it is only available in Armv8.5-A or later.
This change introduces the `ENABLE_FEAT_SB` definition, which derives
support for the `sb` instruction in the assembler from the feature
flags passed to it. Note that we assume that if this feature is enabled
then all the cores in the system support it - enabling speculation
barriers for only a subset of the cores is unsupported.
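One way the flag can then be consumed in assembly (macro name and
fallback sequence assumed):

    .macro  speculation_barrier
    #if ENABLE_FEAT_SB
    sb
    #else
    dsb     sy
    isb
    #endif
    .endm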
Signed-off-by: Chris Kay <chris.kay@arm.com>
Change-Id: I978ed38829385b221b10ba56d49b78f4756e20ea
Even though ERET always causes a jump to another address, AArch64 CPUs
speculatively execute the instructions that follow it, as if the ERET
instruction were not a jump instruction.
The speculative execution does not cross privilege levels (to the jump
target, as one would expect), but it continues at the current privilege
level as if the ERET instruction did not change the control flow -
thus executing anything that is accidentally placed after the ERET
instruction. The results of this speculative execution are always
architecturally discarded later, but they can leak data through
microarchitectural side channels. This speculative execution is very
reliable (it appears to be unconditional) and it manages to complete
even relatively performance-heavy operations (e.g. multiple dependent
fetches from uncached memory).
This was fixed in Linux, FreeBSD, OpenBSD and Optee OS:
679db7080129fb48ace43a08873eceabfd092aa1
It is demonstrated in the SafeSide examples:
https://github.com/google/safeside/blob/master/demos/eret_hvc_smc_wrapper.cc
https://github.com/google/safeside/blob/master/kernel_modules/kmod_eret_hvc_smc/eret_hvc_smc_module.c
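A sketch of the corresponding mitigation on the TF-A side (macro name
assumed): never leave anything after an exception return that could be
executed speculatively.

    .macro  exception_return
    eret
    dsb     nsh        /* prevent speculation past the eret above */
    isb
    .endm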
Signed-off-by: Anthony Steinhauser <asteinhauser@google.com>
Change-Id: Iead39b0b9fb4b8d8b5609daaa8be81497ba63a0f
This patch adds the functionality needed for platforms to provide the
Branch Target Identification (BTI) extension, introduced to AArch64 in
Armv8.5-A, which adds the BTI instruction used to mark valid targets
for indirect branches. The patch sets the new GP bit [50] in the stage
1 Translation Table Block and Page entries to denote guarded EL3 code
pages; this causes the processor to trap an indirect branch into a
guarded page that lands on any instruction other than BTI.
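Purely to illustrate the bit involved (in the patch it is set while the
stage 1 descriptors are built):

    orr     x1, x1, #(1 << 50)         /* GP: mark a block/page descriptor as guarded */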
The BTI feature is selected by the BRANCH_PROTECTION option, which
supersedes the previous ENABLE_PAUTH option used for Armv8.3-A Pointer
Authentication, and is disabled by default. Enabling BTI requires
compiler support and was tested with GCC versions 9.0.0, 9.0.1 and
10.0.0.
The assembly macros and helpers are modified to accommodate the BTI
instruction.
This is an experimental feature.
Note: the previous ENABLE_PAUTH build option to enable PAuth in EL3 is
now an internal flag; the BRANCH_PROTECTION flag should be used instead
to enable Pointer Authentication.
Note: the USE_LIBROM=1 option is currently not supported.
Change-Id: Ifaf4438609b16647dc79468b70cd1f47a623362e
Signed-off-by: Alexei Fedorov <Alexei.Fedorov@arm.com>
The workaround for Cortex-A76 erratum #1286807 is implemented in this
patch.
Change-Id: I6c15af962ac99ce223e009f6d299cefb41043bed
Signed-off-by: Soby Mathew <soby.mathew@arm.com>
Enforce full include path for includes. Deprecate old paths.
The following folders inside include/lib have been left unchanged:
- include/lib/cpus/${ARCH}
- include/lib/el3_runtime/${ARCH}
The reason for this change is that having a global namespace for
includes isn't a good idea. It defeats one of the advantages of having
folders and it introduces problems that are sometimes subtle (because
you may not know the header you are actually including if there are two
of them).
For example, this patch had to be created because two headers had the
same name: e0ea0928d5 ("Fix gpio includes of mt8173 platform to avoid
collision."). More recently, this patch ran into a similar problem:
46f9b2c3a2 ("drivers: add tzc380 support").
This problem was introduced in commit 4ecca33988 ("Move include and
source files to logical locations"). At that time, there weren't too
many headers so it wasn't a real issue. However, time has shown that
this creates problems.
Platforms that want to preserve the way they include headers may add the
removed paths to PLAT_INCLUDES, but this is discouraged.
Change-Id: I39dc53ed98f9e297a5966e723d1936d6ccf2fc8f
Signed-off-by: Antonio Nino Diaz <antonio.ninodiaz@arm.com>
The architecture-dependent header files in include/lib/${ARCH} and
include/common/${ARCH} have been moved to /include/arch/${ARCH}.
Change-Id: I96f30fdb80b191a51448ddf11b1d4a0624c03394
Signed-off-by: Antonio Nino Diaz <antonio.ninodiaz@arm.com>
Renamed from include/common/aarch64/asm_macros.S