Commit graph

44 commits

Boyan Karatotev
b07c317f67 perf(cpus): inline the init_cpu_data_ptr function
Similar to the reset function inlining, inline this function too so we
avoid a costly branch at no extra cost.

Change-Id: I54cc399e570e9d0f373ae13c7224d32dbdfae1e5
Signed-off-by: Boyan Karatotev <boyan.karatotev@arm.com>
2025-02-24 09:36:11 +00:00
Boyan Karatotev
0d020822ae perf(cpus): inline the reset function
Similar to the cpu_rev_var and cpu_get_rev_var functions, inline the
call_reset_handler handler. This way we skip the costly branch at no
extra cost, as this is the only place where it is called.

While we're at it, drop the options for CPU_NO_RESET_FUNC. The only CPUs
that need it are virtual CPUs, which can spare the tiny performance loss.
The rest are real cores, which can save on the check for zero.

Now is a good time to put the assert for a missing cpu in the
get_cpu_ops_ptr function so that it's a bit better encapsulated.

Change-Id: Ia7c3dcd13b75e5d7c8bafad4698994ea65f42406
Signed-off-by: Boyan Karatotev <boyan.karatotev@arm.com>
2025-02-24 09:36:10 +00:00
Ye Li
86acbbe2d8 build(bl31): support separated memory for RW DATA
Update the linker script and init code to allow using a separate
memory region for RW DATA. The init code copies the RW DATA
from the image to its linked address.

On some NXP platforms, after the BL31 image has been verified,
the BL31 image space is locked/protected as read-only, so the
RW DATA and NOBITS sections need to be moved out of the BL31 image.

Signed-off-by: Ye Li <ye.li@nxp.com>
Reviewed-by: Peng Fan <peng.fan@nxp.com>
Signed-off-by: Jacky Bai <ping.bai@nxp.com>
Change-Id: I361d9a715890961bf30790a3325f8085a40c0c39
2024-11-05 17:24:41 +08:00
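
A minimal C sketch of the init-time copy described above: RW DATA is copied
from its load address inside the (now read-only) image to its separate linked
address. The linker symbol names and the helper are illustrative assumptions,
not the actual TF-A symbols.

    #include <string.h>

    /* Hypothetical linker-provided symbols: where .data is linked (the RW
     * region) and where it is stored inside the read-only image. */
    extern char __DATA_START__[], __DATA_END__[], __DATA_ROM_START__[];

    /* Copy RW DATA out of the locked image into its separate RW region,
     * before any code that relies on initialised data runs. */
    static void copy_rw_data(void)
    {
        size_t size = (size_t)(__DATA_END__ - __DATA_START__);

        memcpy(__DATA_START__, __DATA_ROM_START__, size);
    }
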
Jayanth Dodderi Chidanand
40e5f7a58f feat(context-mgmt): introduce EL3/root context
* This patch adds a root context procedure to restore/configure
  the registers which are of importance during EL3 execution.

* EL3/Root context is a simple restore operation that overwrites
  the following bits (MDCR_EL3.SDD, SCR_EL3.{EA, SIF}, PMCR_EL0.DP,
  PSTATE.DIT) while execution is in EL3.

* It ensures the EL3 world maintains its own settings, distinct
  from the other worlds (NS/Realm/SWd). With this in place, the EL3
  system register settings are no longer influenced by the settings of
  incoming worlds. This allows the EL3/Root world to access features
  for its own execution at EL3 (e.g. PAuth).

* It should be invoked on the cold and warm boot entry paths and also
  in all the possible exception handlers routing to EL3 at runtime.
  Cold and warm boot paths are handled by including the setup_el3_context
  function in the "el3_entrypoint_common" macro, which gets invoked on
  both entry paths.

* At runtime, the EL3 context is set up at the stage where we prepare
  to enter EL3, via the "prepare_el3_entry" routine.

Change-Id: I5c090978c54a53bc1c119d1bc5fa77cd8813cdc2
Signed-off-by: Jayanth Dodderi Chidanand <jayanthdodderi.chidanand@arm.com>
2024-10-15 13:25:58 +01:00
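
A C-level sketch of the root-context idea (the actual patch does this in
assembly on the el3_entrypoint_common and prepare_el3_entry paths); the
accessor and bit names follow TF-A conventions but are assumptions here.

    /* Sketch: re-assert the EL3-specific settings every time execution
     * enters EL3, so they cannot be influenced by the incoming world. */
    static void setup_el3_context_sketch(void)
    {
        /* MDCR_EL3.SDD: disable Secure self-hosted debug. */
        write_mdcr_el3(read_mdcr_el3() | MDCR_SDD_BIT);

        /* SCR_EL3.EA: route External Aborts/SErrors to EL3;
         * SCR_EL3.SIF: forbid Secure fetches from Non-secure memory. */
        write_scr_el3(read_scr_el3() | SCR_EA_BIT | SCR_SIF_BIT);

        /* PMCR_EL0.DP: prohibit cycle counting while in EL3. */
        write_pmcr_el0(read_pmcr_el0() | PMCR_EL0_DP_BIT);

        /* PSTATE.DIT: data-independent timing for EL3's own execution. */
        write_dit(DIT_BIT);
    }
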
Jayanth Dodderi Chidanand
59b7c0a03f feat(cm): add explicit context entries for ERRATA_SPECULATIVE_AT
* Currently, "ERRATA_SPECUALTIVE_AT" errata is enabled by default
  for few cores and they need context entries for saving and
  restoring EL1 regs "SCTLR_EL1 and TCR_EL1" registers at all times.

* This prevents the mechanism of decoupling EL1 and EL2 registers,
  as EL3 firmware shouldn't be handling both simultaneously.

* Depending on the build configuration either EL1 or EL2 context
  structures need to included, which would result in saving a good
  amount of context memory.

* In order to achieve this it's essential to have explicit context
  entries for registers supporting "ERRATA_SPECULATIVE_AT".

* This patch adds two context entries under "errata_speculative_at"
  structure to assist this errata and thereby allows decoupling
  EL1 and EL2 context structures.

Change-Id: Ia50626eea8fb64899a2e2d81622adbe07fe77d65
Signed-off-by: Jayanth Dodderi Chidanand <jayanthdodderi.chidanand@arm.com>
2024-07-26 15:36:31 +01:00
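
Roughly the shape of the dedicated entries the commit describes, as a sketch
(type and field names are illustrative, not the exact TF-A definitions):

    #include <stdint.h>

    /* With ERRATA_SPECULATIVE_AT, SCTLR_EL1 and TCR_EL1 must be saved and
     * restored on every EL3 entry/exit regardless of whether the EL1 or
     * the EL2 context structure is built in. Keeping them in a small
     * standalone structure lets the EL1 and EL2 contexts be decoupled. */
    typedef struct errata_speculative_at {
        uint64_t sctlr_el1;
        uint64_t tcr_el1;
    } errata_speculative_at_t;
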
Jayanth Dodderi Chidanand
123002f917 feat(cm): context switch MDCR_EL3 register
Currently the MDCR_EL3 register value is the same for all the
worlds (Non-secure, Secure, Realm and Root).

With this approach, feature enable/disable settings
remain the same across all the worlds. This is not ideal, as
there must be flexibility in controlling features as per
the requirements of each individual world.

The patch addresses this by giving MDCR_EL3 a per-world
value. Features with identical values for all the worlds are
grouped under the ``manage_extensions_common`` API.

Change-Id: Ibc068d985fe165d8cb6d0ffb84119bffd743b3d1
Signed-off-by: Jayanth Dodderi Chidanand <jayanthdodderi.chidanand@arm.com>
2024-06-25 13:50:32 +01:00
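
An illustrative sketch of the per-world arrangement described above (the type,
array and helper names are modelled on TF-A's style but are assumptions, not
the exact code):

    #include <stdint.h>

    #define PER_WORLD_CTX_COUNT    4U  /* NS, Secure, Realm, Root (illustrative) */

    /* One copy per world, shared by all CPUs, instead of one copy per CPU. */
    typedef struct per_world_context {
        uint64_t ctx_cptr_el3;
        uint64_t ctx_mdcr_el3;  /* added by this change */
    } per_world_context_t;

    static per_world_context_t per_world_context[PER_WORLD_CTX_COUNT];

    /* Program a world-specific MDCR_EL3 value; settings identical for every
     * world would sit in a manage_extensions_common()-style helper instead. */
    static void set_world_mdcr_el3(unsigned int world, uint64_t value)
    {
        per_world_context[world].ctx_mdcr_el3 = value;
    }
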
Sona Mathew
9e51f15ed1 chore: simplify the macro names in ENABLE_FEAT mechanism
Currently, the macros used to denote feature implementation
in hardware follow a random pattern, with a few macros having
the suffix SUPPORTED and a few using the suffix IMPLEMENTED.
This patch aligns the macro names uniformly, using the suffix
IMPLEMENTED across all the features, and removes unused macros
pertaining to the ENABLE_FEAT mechanism.

FEAT_SUPPORTED --> FEAT_IMPLEMENTED
FEAT_NOT_SUPPORTED --> FEAT_NOT_IMPLEMENTED

Change-Id: I61bb7d154b23f677b80756a4b6a81f74b10cd24f
Signed-off-by: Sona Mathew <sonarebecca.mathew@arm.com>
2024-05-02 08:53:01 -05:00
Manish Pandey
8815cdaf57 feat(spmd): initialize SCR_EL3.EEL2 bit at RESET
The SCR_EL3.EEL2 bit being enabled denotes that the system has S-EL2
present and enabled. Ideally this bit is constant throughout the lifetime
and should not be modified. Currently this bit is initialized in the
context management code, where each world's copy of the SCR_EL3 register
has this bit set to 1, but for the time between RESET and the first exit
to a lower EL this bit is zero.

Setting SCR_EL3.EEL2 along with the EA bit at RESET also helps in
mitigating against ERRATA_V2_3099206.

For details on Neoverse V2 erratum 3099206, refer to the SDEN document
below:
https://developer.arm.com/documentation/SDEN-2332927/latest

Signed-off-by: Manish Pandey <manish.pandey2@arm.com>
Change-Id: If8b2bdbb19bc65391a33dd34cc9824a0203ae4b1
2024-02-02 18:15:21 +00:00
Manish Pandey
970a4a8d8c fix(ras): restrict ENABLE_FEAT_RAS to have only two states
As part of migrating the RAS extension to the feature detection mechanism,
the macro ENABLE_FEAT_RAS was allowed to have dynamic detection (FEAT_STATE
2). However, this feature does impact the execution of EL3, and we need
to know about its presence at compile time, so do not use the
dynamic detection part of the feature detection mechanism.

Signed-off-by: Manish Pandey <manish.pandey2@arm.com>
Change-Id: I23858f641f81fbd81b6b17504eb4a2cc65c1a752
2023-11-01 11:11:38 +00:00
Manish Pandey
6597fcf169 feat(ras): use FEAT_IESB for error synchronization
For synchronization of errors at exception boundaries, TF-A uses the "esb"
instruction with FEAT_RAS, or "dsb" and "isb" otherwise. The problem
with the esb instruction is that, along with synchronizing errors, it might
also consume the error, which is not ideal in all scenarios. On the other
hand, we can't always use dsb as it is in the hot path.

The best way to solve the above problem is to use the FEAT_IESB
feature, which provides controls to insert an implicit error
synchronization event at exception entry and exception return.

The assumption in TF-A is that if the RAS extension is present then
FEAT_IESB will also be present and enabled.

Signed-off-by: Manish Pandey <manish.pandey2@arm.com>
Change-Id: Ie5861eec5da4028a116406bb4d1fea7dac232456
2023-11-01 11:11:29 +00:00
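
A hedged sketch of the FEAT_IESB alternative: with SCTLR_EL3.IESB set, the
hardware inserts an implicit error synchronization event at every exception
entry to and return from EL3. The bit and accessor names follow TF-A
conventions but are assumptions here, not the exact patch.

    #if ENABLE_FEAT_RAS
    static void enable_iesb_sketch(void)
    {
        /* Implicit synchronization replaces the explicit "esb" (which may
         * also consume the error) or the costly "dsb; isb" sequence on the
         * exception entry/exit paths. */
        write_sctlr_el3(read_sctlr_el3() | SCTLR_IESB_BIT);
    }
    #endif
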
Elizabeth Ho
461c0a5d92 refactor(cm): move EL3 registers to global context
Currently, EL3 context registers are duplicated per-world per-cpu.
Some registers have the same value across all CPUs, so this patch
moves these registers out into a per-world context to reduce
memory usage.

Change-Id: I91294e3d5f4af21a58c23599af2bdbd2a747c54a
Signed-off-by: Elizabeth Ho <elizabeth.ho@arm.com>
Signed-off-by: Jayanth Dodderi Chidanand <jayanthdodderi.chidanand@arm.com>
2023-10-31 11:18:42 +00:00
Boyan Karatotev
5c52d7e540 refactor(cm): remove world differentiation for EL2 context restore
The EL2 context save/restore functions have an optimisation to not
bother with the EL2 context when SEL2 is not in use. However, this
decision is made on the current value of SCR_EL3.EEL2, which is not
the value for the selected security state, but rather, for the
security state that came before it. This relies on the EEL2 bit's
value to propagate identically to all worlds.

This has an unintended side effect that for the first entry into
secure world, the restoring of the context is fully skipped, because
SCR_EL3 is only initialized after the call to the restoring routine
which means the EEL2 bit is not initialized (except when FEAT_RME
is present). This is inconsistent with normal and realm worlds which
always get their EL2 registers zeroed.

Remove this optimization to remove all the complexity with managing
the EEL2 bit's value. Instead unconditionally save/restore all
registers. It is worth noting that there is no performance penalty
in the case where SEL2 is empty with this change. This is because
SEL2 will never be entered, and as such no secure save/restore will
happen anyway, while normal world remains unchanged.

Removing the value management of the EEL2 bit causes the
CTX_ICC_SRE_EL2 register to be inaccessible in Secure world for some
configurations.
Make the SCR_EL3.NS workaround in cm_prepare_el3_exit_ns() generic
on every access to the register.

Signed-off-by: Boyan Karatotev <boyan.karatotev@arm.com>
Change-Id: I1f56d85814c5897b468e82d4bd4a08e3a90a7f8f
2023-10-05 17:44:49 +01:00
Boyan Karatotev
f0c96a2e35 refactor(cm): clean up SCR_EL3 and CPTR_EL3 initialization
As with MDCR_EL3, setting some bits of these registers is redundant at
reset since they do not matter for EL3 execution and the registers get
context switched so they get overwritten anyway.

The SCR_EL3.{TWE, TWI, SMD, API, APK} bits only affect lower ELs so
their place is in context management. The API and APK bits are a bit
special as they would get implicitly unset for secure world when
CTX_INCLUDE_PAUTH_REGS is unset. This is now explicit with their normal
world values always being set, as PAuth defaults to enabled. The same
sequence is also added to the realm world. The reasoning is the same as
for Secure world - PAuth will be enabled for NS, and unless explicitly
handled by firmware, it should not leak to realm.

The CPTR_EL3.{ESM, EZ, TAM} bits are set by the relevant
feat_enable()s in lib/extensions so they can be skipped too.

CPTR_EL3.TFP is special as it's needed for access to generic floating
point registers even when SVE is not present. So keep it but move to
context management.

This leaves CPTR_EL3.TCPAC which affects several extensions. This bit
was set centrally at reset, however the earliest need for it is in BL2.
So set it in cm_setup_context_common(). However, the context (and with it
CPTR_EL3) is only restored for BL31, which does not cover the BL2 case, so
always restore it.

Finally, setting CPTR_EL3 to a fresh RESET_VAL for each security state
prevents any bits from leaking between them.

Signed-off-by: Boyan Karatotev <boyan.karatotev@arm.com>
Signed-off-by: Jayanth Dodderi Chidanand <jayanthdodderi.chidanand@arm.com>
Change-Id: Ie7095e967bd4a6d6ca6acf314c7086d89fec8900
2023-10-05 17:42:23 +01:00
Boyan Karatotev
ece8f7d734 refactor(cm): set MDCR_EL3/CPTR_EL3 bits in respective feat_init_el3() only
These bits (MDCR_EL3.{NSTB, NSTBE, TTRF, TPM}, CPTR_EL3.TTA) only affect
EL2 (and lower) execution. Each feat_init_el3() is called long before
any lower EL has had a chance to execute, so setting the bits at reset
is redundant. Removing them from reset code also improves readability of
the immutable EL3 state.

Preserve the original intention for the TTA bit of "enabled for NS and
disabled everywhere else" (inferred from commit messages d4582d3088 and
2031d6166a and the comment). This is because CPTR_EL3 will be context
switched, and so every world will eventually get whatever NS has anyway.

Signed-off-by: Boyan Karatotev <boyan.karatotev@arm.com>
Change-Id: I3d24b45d3ea80882c8e450b2d9db9d5531facec1
2023-07-24 11:04:44 +01:00
Boyan Karatotev
83a4dae1af refactor(pmu): convert FEAT_MTPMU to C and move to persistent register init
The FEAT_MTPMU feature disable runs very early after reset. This means it
needs to be written in assembly, since the C runtime has not been
initialised yet.

However, there is no need for it to be initialised so soon. The PMU
state is only relevant after TF-A has relinquished control. The code
to do this is also very verbose and difficult to read. Delaying the
initialisation allows for it to happen with the rest of the PMU. Align
with FEAT_STATE in the process.

BREAKING CHANGE: This patch explicitly breaks the EL2 entry path. It is
currently unsupported.

Signed-off-by: Boyan Karatotev <boyan.karatotev@arm.com>
Change-Id: I2aa659d026fbdb75152469f6d19812ece3488c6f
2023-06-29 09:59:06 +01:00
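
A sketch of what the C version of the MTPMU disable can look like once it is
folded into the normal PMU setup. The register field is per the Arm ARM; the
function and macro names are assumptions.

    static void mtpmu_disable_sketch(void)
    {
        /* Only touch the control if FEAT_MTPMU is actually implemented. */
        if (!is_feat_mtpmu_supported()) {
            return;
        }

        /* MDCR_EL3.MTPME == 0 forces the effective value of
         * PMEVTYPER<n>_EL0.MT to 0, i.e. multithreaded event counting is
         * disabled from EL3 downwards. */
        write_mdcr_el3(read_mdcr_el3() & ~MDCR_MTPME_BIT);
    }
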
Boyan Karatotev
c73686a11c feat(pmu): introduce pmuv3 lib/extensions folder
The enablement code for the PMU is scattered and difficult to track
down. Factor out the feature into its own lib/extensions folder and
consolidate the implementation. Treat it as an architecturally
mandatory feature, as it currently is.

Additionally, do some cleanup on AArch64. Setting overflow bits in
PMCR_EL0 is irrelevant for firmware so don't do it. Then delay the PMU
initialisation until the context management stage which simplifies the
early environment assembly. One side effect is that the PMU might count
before this happens so reset all counters to 0 to prevent any leakage.

Finally, add an enable to manage_extensions_realm(), as the realm world uses
the PMU. This introduces the HPMN fixup to the realm world.

Signed-off-by: Boyan Karatotev <boyan.karatotev@arm.com>
Change-Id: Ie13a8625820ecc5fbfa467dc6ca18025bf6a9cd3
2023-06-29 09:59:06 +01:00
Andre Przywara
88727fc3ec refactor(cpufeat): enable FEAT_DIT for FEAT_STATE_CHECKED
At the moment we only support FEAT_DIT to be either unconditionally
compiled in, or not supported at all.

Add support for runtime detection (ENABLE_DIT=2) by splitting
is_armv8_4_dit_present() into an ID register reading function and a
second function to report the support status. That function considers
both build time settings and runtime information (if needed).

We use ENABLE_DIT on two occasions in assembly code, where we just set
the DIT bit in the DIT system register.
Protect those two cases by reading the CPU ID register when ENABLE_DIT
is set to 2.

Change the FVP platform default to the now supported dynamic
option (=2), so the right decision can be made by the code at runtime.

Change-Id: I506d352f18e23c60db8cdf08edb449f60adbe098
Signed-off-by: Andre Przywara <andre.przywara@arm.com>
2023-04-25 15:09:30 +01:00
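
A sketch of the split described above, assuming TF-A-style ID-register helpers
(ID_AA64PFR0_EL1.DIT lives in bits [51:48]); the function names are
illustrative, not the exact ones introduced by the patch.

    #include <stdbool.h>

    /* Raw hardware query: does the CPU report FEAT_DIT in ID_AA64PFR0_EL1? */
    static inline bool is_feat_dit_present(void)
    {
        return ((read_id_aa64pfr0_el1() >> ID_AA64PFR0_DIT_SHIFT) &
                ID_AA64PFR0_DIT_MASK) != 0U;
    }

    /* Support status: combines the build-time choice with the runtime check.
     * ENABLE_DIT == 0: compiled out, 1: assumed present, 2: detect at runtime. */
    static inline bool is_feat_dit_supported(void)
    {
        if (ENABLE_DIT == 0) {
            return false;
        }
        if (ENABLE_DIT == 1) {
            return true;
        }
        return is_feat_dit_present();
    }
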
Arvind Ram Prakash
42d4d3baac refactor(build): distinguish BL2 as TF-A entry point and BL2 running at EL3
BL2_AT_EL3 is an overloaded macro which has two uses:
	1. When BL2 is the entry point into TF-A (no BL1)
	2. When BL2 is running at the EL3 exception level
These two scenarios are not exactly the same, even though the first
implicitly means the second is true. To distinguish between these two use
cases we introduce new macros.
BL2_AT_EL3 is renamed to RESET_TO_BL2 to better convey both 1. and 2.
An additional macro, BL2_RUNS_AT_EL3, is added to cover all scenarios where
BL2 runs at EL3 (including four world systems).

BREAKING CHANGE: BL2_AT_EL3 renamed to RESET_TO_BL2 across the
repository.

Change-Id: I477e1d0f843b44b799c216670e028fcb3509fb72
Signed-off-by: Arvind Ram Prakash <arvind.ramprakash@arm.com>
Signed-off-by: Maksims Svecovs <maksims.svecovs@arm.com>
2023-03-15 11:43:14 +00:00
Bipin Ravi
dc2b8e8028 Merge changes from topic "panic_cleanup" into integration
* changes:
  refactor(bl31): use elx_panic for sysreg_handler64
  refactor(aarch64): rename do_panic and el3_panic
  refactor(aarch64): remove weak links to el3_panic
  refactor(aarch64): refactor usage of elx_panic
  refactor(aarch64): cleanup HANDLE_EA_EL3_FIRST_NS usage
2023-02-23 23:38:26 +01:00
Govindraj Raja
bd62ce98d2 refactor(aarch64): rename do_panic and el3_panic
The current panic call invokes do_panic, which calls el3_panic, but panic
now handles only panics from EL3, with a clear separation to use
lower_el_panic(), which handles panics from lower ELs.

So now we can remove do_panic and just call el3_panic for all panics.

Change-Id: I739c69271b9fb15c1176050877a9b0c0394dc739
Signed-off-by: Govindraj Raja <govindraj.raja@arm.com>
2023-02-21 17:26:01 +00:00
Manish Pandey
d87c0e277f refactor(el3_runtime): introduce save_x30 macro
Most of the macros/routines in vector entries need a free scratch register.
Introduce a macro "save_x30" and call it right at the beginning of vector
entries where x30 is used. It is more explicit and less error prone.

Signed-off-by: Manish Pandey <manish.pandey2@arm.com>
Change-Id: I617f3d41a120739e5e3fe1c421c79ceb70c1188e
2023-02-13 10:45:46 +00:00
Jiafei Pan
96a8ed14b7 feat(bl2): add support to separate no-loadable sections
Add a new option, SEPARATE_BL2_NOLOAD_REGION, to separate non-loadable
sections (.bss, stack, page tables) into a RAM region specified
by BL2_NOLOAD_START and BL2_NOLOAD_LIMIT.

Signed-off-by: Jiafei Pan <Jiafei.Pan@nxp.com>
Change-Id: I844ee0fc405474af0aff978d292c826fbe0a82fd
2022-03-27 23:24:24 +08:00
Daniel Boulby
7d33ffe4c1 fix(el3-runtime): set unset pstate bits to default
During a transition to a higher EL, some of the PSTATE bits are not set
by hardware, which means that their state may be leaked from lower ELs.
This patch sets those bits to a default value upon entry to EL3.

This patch was tested using a debugger to check that the PSTATE values
are correctly set, as well as by adding a test in the next patch to
ensure the PSTATE in lower ELs is still maintained after this change.

Change-Id: Ie546acbca7b9aa3c86bd68185edded91b2a64ae5
Signed-off-by: Daniel Boulby <daniel.boulby@arm.com>
2022-02-03 11:33:49 +00:00
johpow01
dc78e62d80 feat(sme): enable SME functionality
This patch adds two new compile time options to enable SME in TF-A:
ENABLE_SME_FOR_NS and ENABLE_SME_FOR_SWD for use in non-secure and
secure worlds respectively. Setting ENABLE_SME_FOR_NS=1 will enable
SME for non-secure worlds and trap SME, SVE, and FPU/SIMD instructions
in secure context. Setting ENABLE_SME_FOR_SWD=1 will disable these
traps, but support for SME context management does not yet exist in
SPM so building with SPD=spmd will fail.

The existing ENABLE_SVE_FOR_NS and ENABLE_SVE_FOR_SWD options cannot
be used with SME as it is a superset of SVE and will enable SVE and
FPU/SIMD along with SME.

Signed-off-by: John Powell <john.powell@arm.com>
Change-Id: Iaaac9d22fe37b4a92315207891da848a8fd0ed73
2021-11-12 10:38:00 -06:00
Zelalem Aweke
596d20d9e4 fix(pie): invalidate data cache in the entire image range if PIE is enabled
Currently on image entry, the data cache in the RW address range is
invalidated before MMU is enabled to safeguard against potential
stale data from the previous firmware stage. If PIE is enabled, however,
RO sections including the GOT may also be modified during PIE fixup.
Therefore, to be on the safe side, invalidate the entire image
region if PIE is enabled.

Signed-off-by: Zelalem Aweke <zelalem.aweke@arm.com>
Change-Id: I7ee2a324fe4377b026e32f9ab842617ad4e09d89
2021-10-19 21:30:56 +02:00
Zelalem Aweke
6c09af9f8b feat(rme): run BL2 in root world when FEAT_RME is enabled
This patch enables BL2 to run in root world (EL3) which is
needed as per the security model of RME-enabled systems.

Using the existing BL2_AT_EL3 TF-A build option is not convenient
because that option assumes TF-A BL1 doesn't exist, which is not
the case for RME-enabled systems. For the purposes of RME, we use
a normal BL1 image, but we also want to run BL2 in EL3 as normally as
possible. Therefore, rather than use the special bl2_entrypoint
function in bl2_el3_entrypoint.S, we use a new bl2_entrypoint
function (in bl2_rme_entrypoint.S) which doesn't need the reset or
mailbox initialization code seen in the el3_entrypoint_common macro.

The patch also cleans up bl2_el3_entrypoint.S, moving the
bl2_run_next_image function to its own file to avoid duplicating
code.

Signed-off-by: Zelalem Aweke <zelalem.aweke@arm.com>
Change-Id: I99821b4cd550cadcb701f4c0c4dc36da81c7ef55
2021-10-04 21:13:20 +02:00
Manish V Badarkhe
5de20ece38 feat(trf): initialize trap settings of trace filter control registers access
Trap bits for trace filter control register accesses are in an
architecturally UNKNOWN state at boot, hence:

1. Initialize the trap bits to one to prohibit trace filter control
   register accesses in lower ELs (EL2, EL1) in all security states
   when FEAT_TRF is implemented.
2. These bits are RES0 when FEAT_TRF is not implemented, so set
   them to zero to align with the Arm ARM recommendation
   that software must write RES0 bits as 0.

Change-Id: I1b7abf2170ece84ee585c91cda32d22b25c0fc34
Signed-off-by: Manish V Badarkhe <Manish.Badarkhe@arm.com>
2021-08-26 09:29:51 +01:00
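
A sketch of the init pattern shared by this and the two commits below, using
the trace filter (FEAT_TRF) case; the bit and helper names follow TF-A
conventions but are assumptions.

    static void trf_trap_init_sketch(void)
    {
        uint64_t mdcr_el3_val = read_mdcr_el3();

        if (is_feat_trf_present()) {
            /* TTRF = 1: trap lower-EL accesses to the trace filter
             * control registers (TRFCR_EL1/TRFCR_EL2) to EL3. */
            mdcr_el3_val |= MDCR_TTRF_BIT;
        } else {
            /* The bit is RES0 without FEAT_TRF; write it as 0 per the
             * Arm ARM guidance on RES0 bits. */
            mdcr_el3_val &= ~MDCR_TTRF_BIT;
        }

        write_mdcr_el3(mdcr_el3_val);
    }
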
Manish V Badarkhe
2031d6166a feat(sys_reg_trace): initialize trap settings of trace system registers access
Trap bits for trace system register accesses are in an architecturally
UNKNOWN state at boot, hence:

1. Initialize the trap bits to one to prohibit trace system register
   accesses in lower ELs (EL2, EL1) in all security states when system
   trace registers are implemented.
2. These bits are RES0 in the absence of system trace register support,
   so set them to zero to align with the Arm ARM recommendation
   that software must write RES0 bits as 0.

Change-Id: I4b6c15cda882325273492895d72568b29de89ca3
Signed-off-by: Manish V Badarkhe <Manish.Badarkhe@arm.com>
2021-08-26 09:29:51 +01:00
Manish V Badarkhe
40ff907470 feat(trbe): initialize trap settings of trace buffer control registers access
Trap bits for trace buffer control register accesses are in an
architecturally UNKNOWN state at boot, hence:

1. Initialize these bits to zero to prohibit trace buffer control
   register accesses in lower ELs (EL2, EL1) in all security states
   when FEAT_TRBE is implemented.
2. Also, these bits are RES0 when FEAT_TRBE is not implemented, so
   setting them to zero also aligns with the Arm ARM recommendation
   that software must write RES0 bits as 0.

Change-Id: If2752fd314881219f232f21d8e172a9c6d341ea1
Signed-off-by: Manish V Badarkhe <Manish.Badarkhe@arm.com>
2021-08-25 18:01:16 +01:00
Max Shvetsov
0c5e7d1ce3 feat(sve): enable SVE for the secure world
Enables SVE support for the secure world via ENABLE_SVE_FOR_SWD.
ENABLE_SVE_FOR_SWD defaults to 0 and has to be explicitly set by the
platform. SVE is configured during initial setup and then uses the EL3
context save/restore routine to switch between SVE configurations for
different contexts.
The reset value of CPTR_EL3 is changed to be the most restrictive by default.

Signed-off-by: Max Shvetsov <maksims.svecovs@arm.com>
Change-Id: I889fbbc2e435435d66779b73a2d90d1188bf4116
2021-06-28 13:24:24 +01:00
Alexei Fedorov
12f6c06497 fix(security): Set MDCR_EL3.MCCD bit
This patch adds setting MDCR_EL3.MCCD in 'el3_arch_init_common'
macro to disable cycle counting by PMCCNTR_EL0 in EL3 when
FEAT_PMUv3p7 is implemented. This fixes failing test
'Leak PMU CYCLE counter values from EL3 on PSCI suspend SMC'
on FVP models with 'has_v8_7_pmu_extension' parameter set to
1 or 2.

Signed-off-by: Alexei Fedorov <Alexei.Fedorov@arm.com>
Change-Id: I2ad3ef501b31ee11306f76cb5a61032ecfd0fbda
2021-05-14 12:19:54 +01:00
Javier Almansa Sobrino
0063dd1708 Add support for FEAT_MTPMU for Armv8.6
If FEAT_PMUv3 is implemented and PMEVTYPER<n>(_EL0).MT bit is implemented
as well, it is possible to control whether PMU counters take into account
events happening on other threads.

If FEAT_MTPMU is implemented, EL3 (or EL2) can override the MT bit,
leaving it at an effective state of 0 regardless of any write to it.

This patch introduces the DISABLE_MTPMU flag, which allows disabling
multithreaded event counting from EL3 (or EL2). The flag is disabled
by default so the behavior is consistent with those architectures
that do not implement FEAT_MTPMU.

Signed-off-by: Javier Almansa Sobrino <javier.almansasobrino@arm.com>
Change-Id: Iee3a8470ae8ba13316af1bd40c8d4aa86e0cb85e
2020-12-11 12:49:20 +00:00
Jimmy Brisson
d7b5f40823 Increase type widths to satisfy width requirements
Usually, C has no problem up-converting types to larger bit sizes. MISRA
rule 10.7 requires that you not do this, or be very explicit about this.
This resolves the following required rule:

    bl1/aarch64/bl1_context_mgmt.c:81:[MISRA C-2012 Rule 10.7 (required)]<None>
    The width of the composite expression "0U | ((mode & 3U) << 2U) | 1U |
    0x3c0U" (32 bits) is less that the right hand operand
    "18446744073709547519ULL" (64 bits).

This also resolves MISRA defects such as:

    bl2/aarch64/bl2arch_setup.c:18:[MISRA C-2012 Rule 12.2 (required)]
    In the expression "3U << 20", shifting more than 7 bits, the number
    of bits in the essential type of the left expression, "3U", is
    not allowed.

Further, MISRA requires that all shifts don't overflow. The definition of
PAGE_SIZE was (1U << 12), and 1U is 8 bits. This caused about 50 issues.
This fixes the violation by changing the definition to 1UL << 12. Since
this uses 32 bits, it should not create any issues for aarch32.

This patch also contains a fix for a build failure on the sun50i_a64
platform. Specifically, these MISRA fixes removed a single "and"
instruction,

    92407e73        and     x19, x19, #0xffffffff

from the cm_setup_context function, which caused a relocation in
psci_cpus_on_start to require a linker-generated stub. This increased the
size of the .text section and caused an alignment later on to go over a
page boundary and round up to the end of RAM before placing the .data
section. This section is of non-zero size and therefore causes a link
error.

The fix included here reorders the functions at link time
without changing their ordering with respect to alignment.

Change-Id: I76b4b662c3d262296728a8b9aab7a33b02087f16
Signed-off-by: Jimmy Brisson <jimmy.brisson@arm.com>
2020-10-12 10:55:03 -05:00
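
For instance, the PAGE_SIZE change mentioned above amounts to widening the
shifted constant (a sketch of the idea, not the exact diff):

    /* Before: the essential type of 1U is narrow, so "1U << 12" trips
     * MISRA C-2012 rules 10.7/12.2 when mixed with 64-bit operands. */
    #define PAGE_SIZE_OLD    (1U << 12)

    /* After: an unsigned long constant keeps the shift at the full register
     * width on AArch64 while remaining 32 bits on AArch32, so behaviour
     * there is unchanged. */
    #define PAGE_SIZE        (1UL << 12)
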
Manish V Badarkhe
3b8456bd1c runtime_exceptions: Update AT speculative workaround
As per the latest mailing list communication [1], we decided to
update the AT speculative workaround implementation in order to
disable the page table walk for lower ELs (EL1 or EL0) immediately
after context switching to EL3 from lower ELs.

The previous implementation of the AT speculative workaround is available
here: 45aecff00

AT speculative workaround is updated as below:
1. Avoid saving and restoring of SCTLR and TCR registers for EL1
   in context save and restore routine respectively.
2. On EL3 entry, save SCTLR and TCR registers for EL1.
3. On EL3 entry, update EL1 system registers to disable stage 1
   page table walk for lower ELs (EL1 and EL0) and enable EL1
   MMU.
4. On EL3 exit, restore SCTLR and TCR registers for EL1 which
   are saved in step 2.

[1]:
https://lists.trustedfirmware.org/pipermail/tf-a/2020-July/000586.html

Change-Id: Iee8de16f81dc970a8f492726f2ddd57e7bd9ffb5
Signed-off-by: Manish V Badarkhe <Manish.Badarkhe@arm.com>
2020-08-18 10:49:27 +01:00
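
A C-level sketch of steps 2 and 3 on EL3 entry (the real code runs in the
exception vectors in assembly); the register bit names follow TF-A
conventions and the save area is illustrative.

    #include <stdint.h>

    /* Illustrative per-CPU save area for step 2. */
    struct el1_at_wa_regs {
        uint64_t sctlr_el1;
        uint64_t tcr_el1;
    };

    static void at_workaround_on_el3_entry(struct el1_at_wa_regs *save)
    {
        /* Step 2: stash the incoming world's EL1 translation controls. */
        save->sctlr_el1 = read_sctlr_el1();
        save->tcr_el1 = read_tcr_el1();

        /* Step 3: disable stage 1 page table walks for EL1 and EL0
         * (EPD0/EPD1 = 1) while keeping the EL1 MMU enabled (M = 1). */
        write_tcr_el1(save->tcr_el1 | TCR_EPD0_BIT | TCR_EPD1_BIT);
        write_sctlr_el1(save->sctlr_el1 | SCTLR_M_BIT);
        isb();
    }
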
Varun Wadekar
1a04b2e536 Fix compilation error when ENABLE_PIE=1
This patch fixes compilation errors when ENABLE_PIE=1.

<snip>
bl31/aarch64/bl31_entrypoint.S: Assembler messages:
bl31/aarch64/bl31_entrypoint.S:61: Error: invalid operand (*UND* section) for `~'
bl31/aarch64/bl31_entrypoint.S:61: Error: invalid immediate
Makefile:1079: recipe for target 'build/tegra/t194/debug/bl31/bl31_entrypoint.o' failed
<snip>

Verified by setting 'ENABLE_PIE=1' for Tegra platform builds.

Signed-off-by: Varun Wadekar <vwadekar@nvidia.com>
Change-Id: Ifd184f89b86b4360fda86a6ce83fd8495f930bbc
2020-05-16 22:44:52 -07:00
Samuel Holland
f8578e641b bl31: Split into two separate memory regions
Some platforms are extremely memory constrained and must split BL31
between multiple non-contiguous areas in SRAM. Allow the NOBITS
sections (.bss, stacks, page tables, and coherent memory) to be placed
in a separate region of RAM from the loaded firmware image.

Because the NOBITS region may be at a lower address than the rest of
BL31, __RW_{START,END}__ and __BL31_{START,END}__ cannot include this
region, or el3_entrypoint_common would attempt to invalidate the dcache
for the entire address space. New symbols __NOBITS_{START,END}__ are
added when SEPARATE_NOBITS_REGION is enabled, and the dcache for the
NOBITS region is invalidated separately.

Signed-off-by: Samuel Holland <samuel@sholland.org>
Change-Id: Idedfec5e4dbee77e94f2fdd356e6ae6f4dc79d37
2019-12-29 12:00:40 -06:00
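
In C terms, the entrypoint cache maintenance described above becomes two
separate invalidations (a sketch; the real code is assembly in
el3_entrypoint_common, and the inv_dcache_range() usage here is illustrative):

    #include <stddef.h>
    #include <stdint.h>

    extern char __RW_START__[], __RW_END__[];
    #if SEPARATE_NOBITS_REGION
    extern char __NOBITS_START__[], __NOBITS_END__[];
    #endif

    static void invalidate_rw_regions_sketch(void)
    {
        /* Loaded image RW data: no longer assumed to span the NOBITS area,
         * so this stays bounded to the firmware image itself. */
        inv_dcache_range((uintptr_t)__RW_START__,
                         (size_t)(__RW_END__ - __RW_START__));

    #if SEPARATE_NOBITS_REGION
        /* NOBITS (.bss, stacks, page tables, coherent memory) may live in a
         * completely separate SRAM area, so invalidate it on its own. */
        inv_dcache_range((uintptr_t)__NOBITS_START__,
                         (size_t)(__NOBITS_END__ - __NOBITS_START__));
    #endif
    }
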
Manish Pandey
da90359b78 PIE: make call to GDT relocation fixup generalized
When firmware is compiled as a Position Independent Executable it needs
to request GOT (Global Offset Table) fixup by passing the size of the
memory region to the el3_entrypoint_common macro.
The GOT fixup will be done early during the cold boot
process of the primary core.

Currently only BL31 supports PIE, but in the future when BL2_AT_EL3 is
compiled as PIE, it can simply pass the fixup size to the common el3
entrypoint macro to fix up the GOT.

The reason for this patch was to overcome the bug introduced by SHA
330ead806, which called the fixup routine for each core, causing
re-initialization of global pointers and thus overwriting any changes
made by the previous core.

Change-Id: I55c792cc3ea9e7eef34c2e4653afd04572c4f055
Signed-off-by: Manish Pandey <manish.pandey2@arm.com>
2019-12-12 14:16:14 +00:00
Petre-Ionut Tudor
2a7adf2567 Explicitly disable the SPME bit in MDCR_EL3
Currently the MDCR_EL3 initialisation implicitly disables
MDCR_EL3.SPME by using mov_imm.

This patch makes the SPME bit more visible by explicitly
disabling it and documenting its use in different versions
of the architecture.

Signed-off-by: Petre-Ionut Tudor <petre-ionut.tudor@arm.com>
Change-Id: I221fdf314f01622f46ac5aa43388f59fa17a29b3
2019-10-07 11:50:07 +01:00
Lionel Debieve
0a12302c3f Add missing support for BL2_AT_EL3 in XIP memory
Add the missing flag for the aarch32 XIP memory mode. It was
previously added for aarch64 only.
Minor: correct the missing aarch64 flag.

Signed-off-by: Lionel Debieve <lionel.debieve@st.com>
Change-Id: Iac0a7581a1fd580aececa75f97deb894858f776f
2019-10-02 09:06:39 +02:00
Hadi Asyrafi
b90f207a1d Invalidate dcache build option for bl2 entry at EL3
Some platforms (e.g. Agilex) make use of CCU IPs which will only be
initialized during bl2_el3_early_platform_setup. Any operation on the
cache beforehand will crash the platform. Hence, this provides an
option to skip the data cache invalidation upon BL2 entry at EL3.

Signed-off-by: Hadi Asyrafi <muhammad.hadi.asyrafi.abdul.halim@intel.com>
Change-Id: I2c924ed0589a72d0034714c31be8fe57237d1f06
2019-09-12 12:36:31 +00:00
Alexei Fedorov
e290a8fcbc AArch64: Disable Secure Cycle Counter
This patch fixes an issue where secure world timing information
can be leaked because the Secure Cycle Counter is not disabled.
For ARMv8.5 the counter gets disabled by setting the MDCR_EL3.SCCD
bit on CPU cold/warm boot.
For earlier architectures the PMCR_EL0 register is saved/restored
on secure world entry/exit from/to the Non-secure state, and cycle
counting gets disabled by setting the PMCR_EL0.DP bit.
The 'include/aarch64/arch.h' header file was tidied up and new
ARMv8.5-PMU related definitions were added.

Change-Id: I6f56db6bc77504634a352388990ad925a69ebbfa
Signed-off-by: Alexei Fedorov <Alexei.Fedorov@arm.com>
2019-08-21 15:43:24 +01:00
Antonio Nino Diaz
5283962eba Add ARMv8.3-PAuth registers to CPU context
ARMv8.3-PAuth adds functionality that supports address authentication of
the contents of a register before that register is used as the target of
an indirect branch, or as a load.

This feature is supported only in AArch64 state.

This feature is mandatory in ARMv8.3 implementations.

This feature adds several registers to EL1. A new option called
CTX_INCLUDE_PAUTH_REGS has been added to select if the TF needs to save
them during Non-secure <-> Secure world switches. This option must be
enabled if the hardware has the registers or the values will be leaked
during world switches.

To prevent leaks, this patch also disables pointer authentication in the
Secure world if CTX_INCLUDE_PAUTH_REGS is 0. Any attempt to use it will
be trapped in EL3.

Change-Id: I27beba9907b9a86c6df1d0c5bf6180c972830855
Signed-off-by: Antonio Nino Diaz <antonio.ninodiaz@arm.com>
2019-02-27 11:08:59 +00:00
Antonio Nino Diaz
ed4fc6f026 Disable processor Cycle Counting in Secure state
In a system with ARMv8.5-PMU implemented:

- If EL3 is using AArch32, setting MDCR_EL3.SCCD to 1 disables counting
  in Secure state in PMCCNTR.

- If EL3 is using AArch64, setting SDCR.SCCD to 1 disables counting in
  Secure state in PMCCNTR_EL0.

So far this effect has been achieved by setting PMCR_EL0.DP (in AArch64)
or PMCR.DP (in AArch32) to 1 instead, but this isn't considered secure
as any EL can change that value.

Change-Id: I82cbb3e48f2e5a55c44d9c4445683c5881ef1f6f
Signed-off-by: Antonio Nino Diaz <antonio.ninodiaz@arm.com>
2019-02-18 17:03:16 +00:00
Antonio Nino Diaz
f5478dedf9 Reorganize architecture-dependent header files
The architecture-dependent header files in include/lib/${ARCH} and
include/common/${ARCH} have been moved to include/arch/${ARCH}.

Change-Id: I96f30fdb80b191a51448ddf11b1d4a0624c03394
Signed-off-by: Antonio Nino Diaz <antonio.ninodiaz@arm.com>
2019-01-04 10:43:16 +00:00
Renamed from include/common/aarch64/el3_common_macros.S