Since the introduction of the toolchain detection framework into the
build system, the toolchain(s) used for the build have been identified
at build system initialization.
This incurs a large cost on every build: for every toolchain requested
by the current makefile, we try to identify each tool in the list of
known tool classes, even if that tool doesn't actually see any use.
For the clean and check-like targets we worked around this by disabling
most of the toolchains if we detect these targets, but this is
inflexible and not very reliable, and it still means that when building
normal targets we are incurring that cost for all tools whether they are
used or not.
This change instead modifies the toolchain detection framework to only
initialize a tool for a given toolchain when it is first used. This does
mean that we can no longer warn about an incorrectly-configured
toolchain at the beginning of build system invocation, but it has the
advantage of substantially reducing build time and the complexity of
*using* the framework (at the cost of an increase in complexity in the
framework itself).
Change-Id: I7f3d06b2eb58c1b26a846791a13b0037f32c8013
Signed-off-by: Chris Kay <chris.kay@arm.com>
With the introduction of FEAT_STATE_CHECK_ASYMMETRIC, the asymmetry of
cores can be handled. FEAT_TCR2 is one of the features which can be
asymmetric across cores, and the respective support is added here.
Add a function to handle this asymmetry by re-checking the feature's
presence on the running core, as sketched below.
There are two possible cases:
- If the primary core has the feature and the secondary does not, then
  the feature is disabled.
- If the primary does not have the feature and the secondary has it,
  then the feature needs to be enabled on the secondary cores.
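A minimal sketch of the idea, assuming the tcr2_enable()/tcr2_disable()
context helpers and an is_feat_tcr2_present() ID-register check; the
wrapper name here is illustrative, not the exact one in the tree:

  /* Re-evaluate FEAT_TCR2 on the running core and fix up the NS
   * context stashed by the primary. Helper names are assumptions. */
  static void tcr2_handle_asymmetry(cpu_context_t *ctx)
  {
          if (is_feat_tcr2_present()) {
                  tcr2_enable(ctx);       /* this core has the feature */
          } else {
                  tcr2_disable(ctx);      /* this core lacks it */
          }
  }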
Signed-off-by: Jayanth Dodderi Chidanand <jayanthdodderi.chidanand@arm.com>
Change-Id: I73a70891d52268ddfa4effe40edf04115f5821ca
Currently, as part of the context_memory report, we explicitly
list the EL3, EL1 and EL2 registers and memory usage per CPU
for each world. The remaining entries in the cpu_context_t structure
are grouped together and listed as an "other" section.
This patch enhances the report by listing the remaining entries
(GPREGS, PAUTH_REGS) individually, providing a more detailed
overview of context memory consumption amongst the registers.
The patch has been tested in the CI with the following patch,
and the results are summarised there:
[https://review.trustedfirmware.org/c/ci/tf-a-ci-scripts/+/28849]
Change-Id: I16f210b605ddd7900600519520accf1ccd057bc7
Signed-off-by: Jayanth Dodderi Chidanand <jayanthdodderi.chidanand@arm.com>
* Currently, EL1 context is included in cpu_context_t by default
for all the build configurations.
As part of the cpu context structure, we hold a copy of EL1, EL2
system registers, per world per PE. This context structure is
enormous and will continue to grow bigger with the addition of
new features incorporating new registers.
* Ideally, EL3 should save and restore the system registers of its next
lower exception level, which is EL2 in the majority of configurations.
* This patch aims at optimising memory allocation in cases where
members of the context structure are unused: the EL1 system register
context is omitted when the lower EL is always EL2.
* "CTX_INCLUDE_EL2_REGS" is the internal build flag which gets set when
SPD=spmd and SPMD_SPM_AT_SEL2=1, or when ENABLE_RME=1.
It indicates that the EL2 system registers are context switched for
the respective build configuration, so there is no need to save and
restore the EL1 system registers as well.
Hence, this patch addresses the issue by removing the EL1 context
wherever possible while EL2 (CTX_INCLUDE_EL2_REGS) is enabled, thereby
saving memory.
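As an illustrative sketch only (member names and the exact layout in
the tree may differ), the intent is that the EL1 register block is
compiled out of cpu_context_t whenever CTX_INCLUDE_EL2_REGS is set:

  typedef struct cpu_context {
          gp_regs_t gpregs_ctx;
          el3_state_t el3state_ctx;
  #if CTX_INCLUDE_EL2_REGS
          el2_sysregs_t el2_sysregs_ctx;  /* lower EL is (S-)EL2 */
  #else
          el1_sysregs_t el1_sysregs_ctx;  /* lower EL is EL1 */
  #endif
          /* ... */
  } cpu_context_t;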
Change-Id: Ifddc497d3c810e22a15b1c227a731bcc133c2f4a
Signed-off-by: Jayanth Dodderi Chidanand <jayanthdodderi.chidanand@arm.com>
The SCTLR_EL1 and TCR_EL1 registers are included either as part of the
"ERRATA_SPECULATIVE_AT" errata or under the el1_sysregs_t context
structure.
The code that reads and writes these context entries is repetitive and
invoked in several places.
This section is refactored to bring it under a static procedure,
keeping the code neat and easier to maintain.
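A minimal sketch of the kind of static helper this refactor introduces;
the accessor and offset names below are illustrative assumptions, not
the exact ones used in the source:

  static void write_ctx_sctlr_tcr_el1(cpu_context_t *ctx,
                                      u_register_t sctlr, u_register_t tcr)
  {
  #if ERRATA_SPECULATIVE_AT
          /* Registers live in the dedicated errata context block. */
          write_ctx_reg(get_errata_speculative_at_ctx(ctx),
                        CTX_ERRATA_SPEC_AT_SCTLR_EL1, sctlr);
          write_ctx_reg(get_errata_speculative_at_ctx(ctx),
                        CTX_ERRATA_SPEC_AT_TCR_EL1, tcr);
  #else
          /* Registers live in the regular EL1 sysreg context. */
          write_el1_ctx_common(get_el1_sysregs_ctx(ctx), sctlr_el1, sctlr);
          write_el1_ctx_common(get_el1_sysregs_ctx(ctx), tcr_el1, tcr);
  #endif
  }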
Change-Id: Ib0d8c51bee09e1600c5baaa7f9745083dca9fee1
Signed-off-by: Jayanth Dodderi Chidanand <jayanthdodderi.chidanand@arm.com>
* changes:
feat(fvp): allow SIMD context to be put in TZC DRAM
docs(simd): introduce CTX_INCLUDE_SVE_REGS build flag
feat(fvp): add Cactus partition manifest for EL3 SPMC
chore(simd): remove unused macros and utilities for FP
feat(el3-spmc): support simd context management upon world switch
feat(trusty): switch to simd_ctx_save/restore apis
feat(pncd): switch to simd_ctx_save/restore apis
feat(spm-mm): switch to simd_ctx_save/restore APIs
feat(simd): add rules to rationalize simd ctxt mgmt
feat(simd): introduce simd context helper APIs
feat(simd): add routines to save, restore sve state
feat(simd): add sve state to simd ctxt struct
feat(simd): add data struct for simd ctxt management
Cortex-A720 erratum 2792132 is a Cat B erratum that is present
in revisions r0p0 and r0p1, and is fixed in r0p2.
The workaround is to set bit[26] of CPUACTLR2_EL1 to 1.
SDEN documentation:
https://developer.arm.com/documentation/SDEN2439421/latest
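Sketched in C with hypothetical accessors for the IMPLEMENTATION
DEFINED register (the actual workaround lives in the CPU's assembly
reset path), the effect of the workaround is:

  /* Hypothetical accessor names for CPUACTLR2_EL1. */
  uint64_t val = read_cortex_a720_cpuactlr2_el1();
  val |= BIT(26);                 /* erratum 2792132 workaround bit */
  write_cortex_a720_cpuactlr2_el1(val);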
Signed-off-by: Arvind Ram Prakash <arvind.ramprakash@arm.com>
Change-Id: I8d11fe65a2ab5f79244cc3395d0645f77256304c
This patch adds the common API to save and restore FP and SVE. When SVE
is enabled we save and restore SVE which automatically covers FP. If FP
is enabled while SVE is not, then we save and restore FP only.
The patch uses simd_ctx_t to save and restore both FP and SVE, which
means developers need not use the fp or sve routines directly. Once all
the calls to fpregs_context_* are replaced with simd_ctx_*, we can
remove the fp_regs_t data structure and macros (taken care of in a
following patch).
simd_ctx_t is currently allocated in a section of its own. This goes
into the BSS section by default, but platforms have the option of
relocating it to a different section by overriding it in plat.ld.S.
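For example, a world-switch path could use the helpers roughly as
follows; the exact signatures (security state plus an SVE-hint flag)
are assumptions for this sketch:

  /* Save the outgoing world's SIMD state, restore the incoming one's. */
  simd_ctx_save(NON_SECURE, false);
  simd_ctx_restore(SECURE);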
Signed-off-by: Madhukar Pappireddy <madhukar.pappireddy@arm.com>
Signed-off-by: Okash Khawaja <okash@google.com>
Change-Id: I090f8b8fa3862e527b6c40385249adc69256bf24
This adds assembly routines to save and restore SVE registers. In order
to share the code that saves and restores FPCR and FPSR between the FPU
and SVE paths, the patch converts the code for those registers into a
macro.
Since we will be using simd_ctx_t to save and restore the FPU state as
well, we use offsets in simd_ctx_t for FPSR and FPCR. Since simd_ctx_t
has the same structure at the beginning as fp_regs_t, those offsets
should be the same as the CTX_FP_* offsets when SVE is not enabled.
Note that the code also saves and restores the FPEXC32 register along
with FPSR and FPCR.
Signed-off-by: Madhukar Pappireddy <madhukar.pappireddy@arm.com>
Signed-off-by: Okash Khawaja <okash@google.com>
Change-Id: I120c02359794aa6bb6376a464a9afe98bd84ae60
* changes:
feat(tc): enable trbe errata flags for Cortex-A520 and X4
feat(cm): asymmetric feature support for trbe
refactor(errata-abi): move EXTRACT_PARTNUM to arch.h
feat(cpus): workaround for Cortex-A520(2938996) and Cortex-X4(2726228)
feat(tc): make SPE feature asymmetric
feat(cm): handle asymmetry for SPE feature
feat(cm): support for asymmetric feature among cores
feat(cpufeat): add new feature state for asymmetric features
This patch checks whether erratum 2938996 (Cortex-A520) or 2726228
(Cortex-X4) applies to a core and, if the core is affected, applies the
errata workaround, which disables TRBE.
Signed-off-by: Arvind Ram Prakash <arvind.ramprakash@arm.com>
Change-Id: I53b037839820c8b3a869f393588302a365d5b97c
This patch implements errata functions for two errata, both of which
disable TRBE as a workaround. This patch doesn't contain the functions
that disable TRBE; it only implements the helper functions used to
detect cores affected by erratum 2938996 (Cortex-A520) and erratum
2726228 (Cortex-X4).
Cortex-X4 SDEN documentation:
https://developer.arm.com/documentation/SDEN2432808/latest
Cortex-A520 SDEN Documentation:
https://developer.arm.com/documentation/SDEN-2444153/latest
Signed-off-by: Arvind Ram Prakash <arvind.ramprakash@arm.com>
Change-Id: I8f886a1c21698f546a0996c719cc27dc0a23633a
With the introduction of FEAT_STATE_CHECK_ASYMMETRIC, the asymmetry of
cores can be handled. SPE is one of the features which can be
asymmetric across cores.
Add a function to handle this asymmetry by re-checking the feature's
presence on the running core.
There are two possible cases:
- If the primary has the feature and the secondary does not, then the
  feature needs to be disabled.
- If the primary does not have the feature and the secondary has it,
  then the feature needs to be enabled.
Signed-off-by: Manish Pandey <manish.pandey2@arm.com>
Change-Id: Ibb2b88b5ef63b3efcb80801898ae8d8967e5c271
TF-A assumes that all the cores in a platform have architectural
feature parity; this is evident from the fact that the primary core
sets up the Non-secure context of the secondary cores.
With the changing landscape of platforms (e.g. big/little/mid cores),
we are seeing more and more platforms which have feature asymmetry
among cores.
There is also the scenario where a certain CPU erratum only applies to
one type of core and requires a feature to be disabled even though the
core supports it.
To handle these scenarios, introduce a hook in the warmboot path which
is called on the running CPU to override any feature disparity in the
NS context stashed by the primary. Note that re-checking features for
the Secure/Realm contexts is not required, as those contexts are
created on the running core itself.
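A hedged sketch of the shape of such a hook, invoked from the warmboot
path on the running CPU; the hook name and the SPE example are
illustrative only:

  void cm_handle_asymmetric_features(void)
  {
          cpu_context_t *ctx = cm_get_context(NON_SECURE);

          /* Re-evaluated on *this* core, not the primary. */
          if (is_feat_spe_supported()) {
                  spe_enable(ctx);
          } else {
                  spe_disable(ctx);
          }
  }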
Signed-off-by: Manish Pandey <manish.pandey2@arm.com>
Change-Id: I5a01dbda528fa8481a00fdd098b58a7463ed0e22
Introduce a new feature state CHECK_ASYMMETRIC to cater for features
which are asymmetric across cores. This state is useful for platforms
which have architecturally asymmetric cores (a feature is only present
in one type of core, e.g. big).
This state is similar to FEAT_STATE_CHECK (dynamic detection), except
that the feature state is also checked on each core during the warmboot
path, overriding the context (just for asymmetric features) which was
set up by the core executing the CPU_ON call.
Only the Non-secure context is re-checked, as the Secure and Realm
contexts are created on the same core.
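For reference, the feature-state values then look roughly like this;
the enumerator names below are illustrative of the pattern described
above:

  #define FEAT_STATE_DISABLED             0  /* feature compiled out */
  #define FEAT_STATE_ALWAYS               1  /* assumed to be present */
  #define FEAT_STATE_CHECK                2  /* detected dynamically */
  #define FEAT_STATE_CHECK_ASYMMETRIC     3  /* also re-checked per core */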
Signed-off-by: Manish Pandey <manish.pandey2@arm.com>
Change-Id: Ic78a0b6ca996e0d7881c43da1a6a0c422f528ef3
The problem that this resolves is a bit involved; the following
must be met at the same time for some function <to_be_wrapped>:
* to_be_wrapped must be specified as part of the romlib
* to_be_wrapped must _not_ be referenced by any translation unit
in TF-A
* to_be_wrapped must be referenced by a translation unit in a
dependent library, mbedtls for example.
Under these circumstances, to_be_wrapped will not be wrapped, and
will instead reference its original definition while simultaneously
residing in romlib.
This is a side effect of two issues with romlib prior to this patch:
1. to_be_wrapped is expected to be wrapped by duplicating its
   definition. This causes any configuration that links against both
   the base and wrapper functions to hit a link error (duplicate symbol
   definition).
2. to_be_wrapped is in its own translation unit. This causes the
   wrappers to be pulled in by TF-A only on an as-needed basis.
The duplicate function definitions can be worked around using the
linker's `--wrap` flag, which redirects all references to a symbol
to resolve to `__wrap_<symbol>` and the original symbol to be
available as `__real_<symbol>`. Most of the changes handle creating
these arguments and passing them to the linker.
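For illustration, with --wrap=to_be_wrapped the linker behaves as in
the generic GNU ld example below (not the romlib wrapper itself):

  /* Every reference to to_be_wrapped() now resolves to
   * __wrap_to_be_wrapped(), and the original definition remains
   * reachable as __real_to_be_wrapped(). */
  void __real_to_be_wrapped(void);

  void __wrap_to_be_wrapped(void)
  {
          /* The romlib wrapper would jump through the jump table; here
           * we simply forward to the original for illustration. */
          __real_to_be_wrapped();
  }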
Further, once you use the linker's wrap, you will encounter another
issue: if TF-A does not use a function, its wrapper is not present.
This causes link issues when a library, rather than TF-A, uses the
wrapper.
Note that this issue would have been resolved previously by ignoring
the wrapper and using the base definition.
This further issue is worked around by concatenating the assembly for
all of the wrappers into a single translation unit. It's possible to
work around this issue in a few other ways, including reordering the
libraries passed to the linker to place libwrapper.a last or grouping
the libraries so that symbols from later libraries may be resolved
with prior libraries.
I chose the translation unit concatenation approach as it revealed
that a jumptable has duplicate symbols within it.
Change-Id: Ie57b5ae69bde2fc8705bdc7a93fae3ddb5341ed9
Signed-off-by: Jimmy Brisson <jimmy.brisson@arm.com>
The function always checks the first parent of the current core
instead of parsing the tree topology to find the parent, at the
requested parent level, of the CPU. This is because the current loop
has no effect: it uses the fixed parameter 'my_idx' and returns the
FIRST parent of the CPU.
Also, it looks for the parent nodes in the array of CPU nodes, but they
actually live in a separate array.
This update parses the PSCI topology tree to find the parent, at the
requested parent level, of the CPU identified by my_idx.
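A hedged sketch of the corrected traversal; the node arrays and field
names follow the PSCI topology code but are shown here as assumptions:

  /* Walk up from the CPU node to the requested parent power level
   * instead of repeatedly reading the CPU's own first parent. */
  unsigned int parent_idx = psci_cpu_pd_nodes[my_idx].parent_node;

  for (unsigned int lvl = PSCI_CPU_PWR_LVL + 1U; lvl < parent_lvl; lvl++)
          parent_idx = psci_non_cpu_pd_nodes[parent_idx].parent_node;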
Fixes: 606b743007 ("feat(psci): add support for OS-initiated mode")
Change-Id: I96fb5ecc154a76b16adca5b5055217b8626c9e66
Signed-off-by: Charlie Bareham <charlie.bareham@arm.com>
Cortex-A720 erratum 2844092 is a Cat B erratum that is present
in revisions r0p0 and r0p1, and is fixed in r0p2.
The workaround is to set bit[11] of the CPUACTLR4_EL1 register.
SDEN documentation:
https://developer.arm.com/documentation/SDEN-2439421/latest
Change-Id: I3d8eacb26cba42774f1f31c3aae2a0e6fecec614
Signed-off-by: Sona Mathew <sonarebecca.mathew@arm.com>
Cortex-X4 erratum 2816013 is a Cat B erratum that applies
to all revisions <= r0p1 and is fixed in r0p2. This erratum
is only present when memory tagging is enabled.
The workaround is to set CPUACTLR5_EL1[14] to 1.
SDEN documentation:
https://developer.arm.com/documentation/SDEN-2432808/latest
Change-Id: I546044bde6e5eedd0abf61643d25e2dd2036df5c
Signed-off-by: Sona Mathew <sonarebecca.mathew@arm.com>
This patch adds trbe_disable(), which disables Trace Buffer access
from lower ELs in all security states. This function makes Secure
state the owner of the Trace Buffer, so accesses from EL2/EL1 generate
trap exceptions to EL3.
Signed-off-by: Arvind Ram Prakash <arvind.ramprakash@arm.com>
Change-Id: If3e3bd621684b3c28f44c3ed2fe3df30b143f8cd
Introduce a function to disable the SPE feature for Non-secure state
and apply the default setting of making Secure state the owner of the
profiling buffers, trapping accesses to the profiling and profiling
buffer control registers from lower ELs to EL3.
This functionality is required to handle asymmetric cores, where SPE
has to be disabled at runtime.
Signed-off-by: Manish Pandey <manish.pandey2@arm.com>
Change-Id: I2f99e922e8df06bfc900c153137aef7c9dcfd759
During CPU power down, we stop profiling by calling the spe_disable()
function. From TF-A's point of view, enable/disable means the
availability of the feature for lower ELs. In this case we are not
actually disabling the feature but stopping it before power down.
Signed-off-by: Manish Pandey <manish.pandey2@arm.com>
Change-Id: I6e3b39c5c35d330c51e7ac715446a8b36bf9531f
Currently, the EL1 part of the context structure (el1_sysregs_t)
is coupled with feature flags, reducing the context memory allocation
for platforms that don't enable/support all the architectural
features at once.
Similar to the EL2 context optimisation commit "d6af234", this patch
further improves this section by converting the assembly context-offset
entries into a C structure. It relies on the linker's garbage
collection to remove unreferenced structures from memory, and also aids
readability and future maintenance. Additionally, it eliminates the use
of #ifs in the 'context_mgmt.c' source file.
Change-Id: If6075931cec994bc89231241337eccc7042c5ede
Signed-off-by: Jayanth Dodderi Chidanand <jayanthdodderi.chidanand@arm.com>
* Currently, "ERRATA_SPECUALTIVE_AT" errata is enabled by default
for few cores and they need context entries for saving and
restoring EL1 regs "SCTLR_EL1 and TCR_EL1" registers at all times.
* This prevents the mechanism of decoupling EL1 and EL2 registers,
as EL3 firmware shouldn't be handling both simultaneously.
* Depending on the build configuration either EL1 or EL2 context
structures need to included, which would result in saving a good
amount of context memory.
* In order to achieve this it's essential to have explicit context
entries for registers supporting "ERRATA_SPECULATIVE_AT".
* This patch adds two context entries under "errata_speculative_at"
structure to assist this errata and thereby allows decoupling
EL1 and EL2 context structures.
Change-Id: Ia50626eea8fb64899a2e2d81622adbe07fe77d65
Signed-off-by: Jayanth Dodderi Chidanand <jayanthdodderi.chidanand@arm.com>
Errata printing is done directly via generic_errata_report.
This commit removes the unused \_cpu\()_errata_report
functions for all cores, and removes errata_func from cpu_ops.
Change-Id: I04fefbde5f0ff63b1f1cd17c864557a14070d68c
Signed-off-by: Ryan Everett <ryan.everett@arm.com>
In all non-trivial cases the CPU-specific errata functions
already call generic_errata_report; this cuts out the middleman
by directly calling generic_errata_report from
print_errata_status.
The CPU specific errata functions (cpu_ops->errata_func)
can now be removed from all cores, and this field can be
removed from cpu_ops.
Also removes the now unused old errata reporting
function and macros.
Change-Id: Ie4a4fd60429aca37cf434e79c0ce2992a5ff5d68
Signed-off-by: Ryan Everett <ryan.everett@arm.com>
The system register actlr_el2 can be set by the CPU or platform reset
handler. For example, on the Arm Total Compute platform, the
CLUSTERPMUEN bit of actlr_el2 is set in the platform reset handler to
enable write access to the DSU PMU registers from EL1. However, the
EL2 context is restored without having been saved beforehand during
the jump to the SPM and the next NS image, so the initialized value of
actlr_el2 is not retained.
To fix this issue, keep track of the actlr_el2 value during EL2 context
initialization. This applies to both the Secure and Non-secure security
states.
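A one-line sketch of the idea, assuming the EL2 context accessor naming
used elsewhere in the codebase:

  /* Seed the EL2 context with the value programmed by the reset
   * handlers so that later restores do not wipe it. */
  write_el2_ctx_common(get_el2_sysregs_ctx(ctx), actlr_el2,
                       read_actlr_el2());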
Change-Id: I1bd7b984216c042c056ad20c6724bedce5a6a3e2
Signed-off-by: Jagdish Gediya <jagdish.gediya@arm.com>
Signed-off-by: Leo Yan <leo.yan@arm.com>
According to the recent Firmware Handoff spec [1], "Register usage at
handoff boundary", the Transfer List's signature value was changed from
0x40_b10b (3 bytes) to 0x4a0f_b10b (4 bytes).
Following this update of the TL signature, the register value of x1/r1
should be:
In AArch32, the r1 value should be:
R1[23:0]: set to the TL signature (4a0f_b10b -> masked value: 0f_b10b)
R1[31:24]: version of the register convention == 1
and
In AArch64, the x1 value should be:
X1[31:0]: set to the TL signature (4a0f_b10b)
X1[39:32]: version of the register convention == 1
X1[63:40]: MBZ
(See [2] and [3]).
Therefore, separate mask and shift values are required for the
register-convention version field when setting each of r1/x1.
This patch fixes two problems:
1. The X1 value in AArch64 no longer matched the updated specification,
   because of the change in the length of the signature field.
2. An incorrect value was previously set in R1 in AArch32:
   the signature field length should be 24 bits, but the 32-bit
   signature was used.
This is a breaking change. It requires patches for other software
(u-boot [4], optee [5]).
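For illustration, the handoff register values described above can be
composed as follows; the macro names are assumptions for this sketch:

  #define TL_SIGNATURE            0x4a0fb10bULL   /* 4-byte signature */
  #define REG_CONVENTION_VER      1ULL

  /* AArch64: X1[31:0] = signature, X1[39:32] = version, X1[63:40] MBZ */
  uint64_t x1 = TL_SIGNATURE | (REG_CONVENTION_VER << 32);

  /* AArch32: R1[23:0] = masked signature, R1[31:24] = version */
  uint32_t r1 = (uint32_t)(TL_SIGNATURE & 0xffffffU) |
                (uint32_t)(REG_CONVENTION_VER << 24);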
Link: https://github.com/FirmwareHandoff/firmware_handoff [1]
Link: https://github.com/FirmwareHandoff/firmware_handoff/issues/32 [2]
Link: 5aa7aa1d3a [3]
Link: https://lists.denx.de/pipermail/u-boot/2024-July/558628.html [4]
Link: https://github.com/OP-TEE/optee_os/pull/6933 [5]
Signed-off-by: Levi Yun <yeoreum.yun@arm.com>
Change-Id: Ie417e054a7a4c192024a2679419e99efeded1705
In a system with RME enabled, the function
'xlat_get_mem_attributes_internal' fails to accurately report the
output PA space for Realm and Root memory because it does not
consider the 'nse' bit in the page table descriptor.
This patch resolves the issue by extracting the 'nse' bit value.
As a result, it ensures correct retrieval of attributes
in RME-enabled systems while leaving attribute retrieval for
non-RME systems unaffected.
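Sketched in C (the shift macros are assumptions), the output PA space
follows from the {NSE, NS} pair in the descriptor:

  /* {NSE, NS}: 00 = Secure, 01 = Non-secure, 10 = Root, 11 = Realm */
  unsigned int ns  = (unsigned int)((desc >> NS_SHIFT) & 1ULL);
  unsigned int nse = (unsigned int)((desc >> NSE_SHIFT) & 1ULL);
  unsigned int pas = (nse << 1U) | ns;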
Change-Id: If2d01545b921c9074f48c52a98027ff331e14237
Signed-off-by: Manish V Badarkhe <Manish.Badarkhe@arm.com>
This commit streamlines directory creation by introducing a single
pattern rule to automatically make directories for which there is a
dependency.
We currently use several macros to generate rules that create
directories when they are depended upon, which amounts to a significant
amount of code and a lot of redundancy. The rule introduced by this
change is a catch-all: any rule dependency on a path ending in a
forward slash automatically creates that directory.
Now, rules can rely on an unordered dependency (`|`) on `$$(@D)/` which,
when secondary expansion is enabled, expands to the directory of the
target being built, e.g.:
build/main.o: main.c | $$(@D)/ # automatically creates `build/`
Change-Id: I7e554efa2ac850e779bb302fd9c7fbb239886c9f
Signed-off-by: Chris Kay <chris.kay@arm.com>
Modify the log messages printed when an erratum workaround does not
need to be applied to a certain revision/variant of the CPU.
Change-Id: I8f60636320f617ecd4ed88ee1fbf7a3e3e4517ee
Signed-off-by: Sona Mathew <sonarebecca.mathew@arm.com>
This patch disables trapping to EL3 when the FEAT_FGT2-specific
trap registers are accessed, by setting the
SCR_EL3.FGTEn2 bit.
Signed-off-by: Arvind Ram Prakash <arvind.ramprakash@arm.com>
Change-Id: I6d2b614affb9067b2bc3d7bf0ae7d169d031592a
This patch enables FEAT_Debugv8p9 and prevents EL1/EL0 accesses to
the MDSELR_EL1 register from trapping to EL3, by setting
the MDCR_EL3.EBWE bit.
Signed-off-by: Arvind Ram Prakash <arvind.ramprakash@arm.com>
Change-Id: I3613af1dd8cb8c0d3c33dc959f170846c0b9695a
Currently, the MDCR_EL3 register value is the same for all the
worlds (Non-secure, Secure, Realm and Root).
With this approach, feature enable/disable settings
remain the same across all the worlds. This is not ideal, as
there must be flexibility in controlling features as per
the requirements of each individual world.
The patch addresses this by giving MDCR_EL3 a per-world
value. Features with identical values for all the worlds are
grouped under the ``manage_extensions_common`` API.
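An illustrative fragment of the per-world container; the field names
follow the existing per-world CPTR_EL3 handling and are assumptions:

  typedef struct per_world_context {
          uint64_t ctx_cptr_el3;  /* existing per-world value */
          uint64_t ctx_mdcr_el3;  /* new: per-world debug/trace controls */
  } per_world_context_t;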
Change-Id: Ibc068d985fe165d8cb6d0ffb84119bffd743b3d1
Signed-off-by: Jayanth Dodderi Chidanand <jayanthdodderi.chidanand@arm.com>