With the introduction of FEAT_RME, the MDCR_EL3 bits NSPB and NSPBE
depend on each other. The enable code relies on the register being
initialised to zero and omits to reset NSPBE. However, this is not
obvious. Reset the bit explicitly to document this.
Similarly, reset the STE bit, since it is part of the feature enablement.
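The reset is conceptually a simple read-modify-write of MDCR_EL3. A
minimal sketch, assuming TF-A's MDCR_NSPBE_BIT and MDCR_STE_BIT
definitions (the actual patch may differ):

    /* Clear the bits the enable code assumes to be 0. */
    u_register_t mdcr = read_mdcr_el3();
    mdcr &= ~(MDCR_NSPBE_BIT | MDCR_STE_BIT);
    write_mdcr_el3(mdcr);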
Signed-off-by: Boyan Karatotev <boyan.karatotev@arm.com>
Change-Id: I3714507bae10042cdccd2b7bc713b31d4cdeb02f
The FEAT_MTPMU feature disable runs very early after reset. This means
it needs to be written in assembly, since the C runtime has not been
initialised yet.
However, there is no need for it to be initialised so soon. The PMU
state is only relevant after TF-A has relinquished control. The code
to do this is also very verbose and difficult to read. Delaying the
initialisation allows for it to happen with the rest of the PMU. Align
with FEAT_STATE in the process.
BREAKING CHANGE: This patch explicitly breaks the EL2 entry path. It is
currently unsupported.
Signed-off-by: Boyan Karatotev <boyan.karatotev@arm.com>
Change-Id: I2aa659d026fbdb75152469f6d19812ece3488c6f
The enablement code for the PMU is scattered and difficult to track
down. Factor out the feature into its own lib/extensions folder and
consolidate the implementation. Treat it as an architecturally
mandatory feature, as it currently is.
Additionally, do some cleanup on AArch64. Setting overflow bits in
PMCR_EL0 is irrelevant for firmware so don't do it. Then delay the PMU
initialisation until the context management stage which simplifies the
early environment assembly. One side effect is that the PMU might count
before this happens so reset all counters to 0 to prevent any leakage.
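A hedged sketch of the counter reset, assuming TF-A's PMCR_EL0 bit
definitions:

    /* Reset the cycle counter (C) and all event counters (P) so any
     * counting done before context setup cannot leak. */
    write_pmcr_el0(read_pmcr_el0() | PMCR_EL0_C_BIT | PMCR_EL0_P_BIT);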
Finally, add an enable to manage_extensions_realm(), as the Realm world
uses the PMU. This introduces the HPMN fixup to the Realm world.
Signed-off-by: Boyan Karatotev <boyan.karatotev@arm.com>
Change-Id: Ie13a8625820ecc5fbfa467dc6ca18025bf6a9cd3
The cpu_macros.S file is loaded with lots of definitions for the cpu_ops
structure. However, since they are defined as .equ directives, they are
inaccessible to C code. Convert them to #defines, put them in order,
refactor them for readability, and extract them to a separate file to
make this possible.
This has the benefit of removing some AArch32/AArch64 differences and a lot of
duplicate code.
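To illustrate the conversion (the field offset here is hypothetical):

    /* Before, assembly-only:
     *     .equ CPU_MIDR, 0
     * After, visible to both assembly and C: */
    #define CPU_MIDR    U(0)    /* offset of the midr field in cpu_ops */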
Signed-off-by: Boyan Karatotev <boyan.karatotev@arm.com>
Change-Id: I72861794b6c9131285a9297d5918822ed718b228
When using the ENABLE_STACK_PROTECTOR=strong build option, the QEMU code
will try to use the RNDR CPU instructions to initialise the stack
canary. Since the instructions are defined for AArch64 only, this will
fail to build for AArch32.
And even though we now always return "false" when asked about the
availability of the RNDR instruction, the compiler will still leave the
reference to read_rndr() in, if optimisations are turned off (-O0).
Avoid this by providing a dummy read_rndr() implementation, that makes
the linker happy in any case.
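A minimal sketch of such a dummy, assuming the AArch32 feature check
always reports false so the function can never be reached at runtime:

    /* Dummy: satisfies the linker at -O0, never actually called. */
    static inline uint64_t read_rndr(void)
    {
        return 0ULL;
    }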
This fixes the QEMU build for AArch32 with ENABLE_STACK_PROTECTOR=strong.
Change-Id: Ibf450ba4a46167fdf3a14a527d338350ced8b5ba
Signed-off-by: Andre Przywara <andre.przywara@arm.com>
Many newer architecture features are defined for AArch64 only, so cannot
be used in an AArch32 build.
To avoid #ifdef-ing every single user, just provide trivial
implementations of the feature check functions is_feat_xxx_supported(),
which always return "false" in AArch32. The compiler will then optimise
out the dependent code automatically.
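A sketch of the pattern, with hypothetical feature names:

    /* AArch64-only features always report "not supported" on AArch32,
     * so the compiler can discard the dependent code. */
    static inline bool is_feat_sve_supported(void) { return false; }
    static inline bool is_feat_mte_supported(void) { return false; }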
Change-Id: I1e7d653fca0e676a11858efd953c2d623f2d5c9e
Signed-off-by: Andre Przywara <andre.przywara@arm.com>
FEAT_PAN is implemented in AArch32 as well; provide the helper functions
to query the feature availability at runtime.
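A hedged sketch, assuming PAN is reported in ID_MMFR3[19:16], that a
read_id_mmfr3() accessor exists, and that the build flag is named
ENABLE_FEAT_PAN:

    static inline unsigned int read_feat_pan_id_field(void)
    {
        return (read_id_mmfr3() >> 16) & 0xfU;
    }

    static inline bool is_feat_pan_supported(void)
    {
        if (ENABLE_FEAT_PAN == 0) {
            return false;
        }
        if (ENABLE_FEAT_PAN == 1) {
            return true;
        }
        return read_feat_pan_id_field() != 0U;
    }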
Change-Id: I375e3eb7b05955ea28a092ba99bb93302af48a0e
Signed-off-by: Andre Przywara <andre.przywara@arm.com>
This patch adds the errata management firmware interface for lower ELs
to discover details about CPU errata. Based on the CPU erratum
identifier the interface enables the OS to find the mitigation of an
erratum in EL3.
The ABI can only be present in a system that is compliant with SMCCCv1.1
or higher. This implements v1.0 of the errata ABI spec.
For details on all possible return values, refer to the design
documentation below:
ABI design documentation:
https://developer.arm.com/documentation/den0100/1-0?lang=en
Signed-off-by: Sona Mathew <SonaRebecca.Mathew@arm.com>
Change-Id: I70f0e2569cf92e6e02ad82e3e77874546232b89a
At the moment we only support FEAT_DIT to be either unconditionally
compiled in, or to be not supported at all.
Add support for runtime detection (ENABLE_DIT=2), by splitting
is_armv8_4_dit_present() into an ID register reading function and a
second function to report the support status. That function considers
both build time settings and runtime information (if needed).
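The split follows the usual FEAT_STATE pattern; a sketch with assumed
names:

    static inline bool is_feat_dit_supported(void)
    {
        if (ENABLE_DIT == 0) {
            return false;       /* compiled out */
        }
        if (ENABLE_DIT == 1) {
            return true;        /* assumed present at build time */
        }
        /* ENABLE_DIT == 2: consult the ID register at runtime. */
        return read_feat_dit_id_field() != 0U;
    }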
We use ENABLE_DIT on two occasions in assembly code, where we just set
the DIT bit in the DIT system register.
Protect those two cases by reading the CPU ID register when ENABLE_DIT
is set to 2.
Change the FVP platform default to the now supported dynamic
option (=2), so the right decision can be made by the code at runtime.
Change-Id: I506d352f18e23c60db8cdf08edb449f60adbe098
Signed-off-by: Andre Przywara <andre.przywara@arm.com>
The AMU extension code was using its own feature detection routines.
Replace them with the generic CPU feature handlers (defined in
arch_features.h), which get updated to cover the v1p1 variant as well.
Change-Id: I8540f1e745d7b02a25a6c6cdf2a39d6f5e21f2aa
Signed-off-by: Andre Przywara <andre.przywara@arm.com>
At the moment we only support access to the trace unit by system
registers (SYS_REG_TRACE) to be either unconditionally compiled in, or
to be not supported at all.
Add support for runtime detection (ENABLE_SYS_REG_TRACE_FOR_NS=2), by
adding is_feat_sys_reg_trace_supported(). That function considers both
build time settings and runtime information (if needed), and is used
before we access SYS_REG_TRACE related registers.
The FVP platform decided to compile in support unconditionally (=1),
even though this is an optional feature, which is not available with
the FVP model's default command line.
Change that to the now supported dynamic option (=2), so the right
decision can be made by the code at runtime.
Change-Id: I450a574a4f6bd9fc269887037049c94c906f54b2
Signed-off-by: Andre Przywara <andre.przywara@arm.com>
At the moment we only support FEAT_SPE to be either unconditionally
compiled in, or to be not supported at all.
Add support for runtime detection (ENABLE_SPE_FOR_NS=2), by splitting
is_armv8_2_feat_spe_present() into an ID register reading function and
a second function to report the support status. That function considers
both build time settings and runtime information (if needed), and is
used before we access SPE related registers.
Previously SPE was enabled unconditionally for all platforms; change
this now to the runtime detection version.
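Callers then guard SPE accesses on the new check; a hedged usage sketch
(spe_enable() stands in for the existing enable path):

    if (is_feat_spe_supported()) {
        spe_enable(el2_unused);
    }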
Change-Id: I830c094107ce6a398bf1f4aef7ffcb79d4f36552
Signed-off-by: Andre Przywara <andre.przywara@arm.com>
BL2_AT_EL3 is an overloaded macro which has two uses:
1. When BL2 is the entry point into TF-A (no BL1)
2. When BL2 is running at the EL3 exception level
These two scenarios are not exactly the same, even though the first
implicitly means the second is true. To distinguish between these two
use cases we introduce new macros.
BL2_AT_EL3 is renamed to RESET_TO_BL2 to better convey both 1. and 2.
An additional macro, BL2_RUNS_AT_EL3, is added to cover all scenarios
where BL2 runs at EL3 (including four-world systems).
BREAKING CHANGE: BL2_AT_EL3 renamed to RESET_TO_BL2 across the
repository.
Change-Id: I477e1d0f843b44b799c216670e028fcb3509fb72
Signed-off-by: Arvind Ram Prakash <arvind.ramprakash@arm.com>
Signed-off-by: Maksims Svecovs <maksims.svecovs@arm.com>
At the moment we only support FEAT_TRF to be either unconditionally
compiled in, or to be not supported at all.
Add support for runtime detection (ENABLE_TRF_FOR_NS=2), by splitting
is_feat_trf_present() into an ID register reading function and a second
function to report the support status. That function considers both
build time settings and runtime information (if needed), and is used
before we access TRF related registers.
Also move the context saving code from assembly to C, and use the new
is_feat_trf_supported() function to guard its execution.
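A hedged sketch of the guarded save (the context field and accessor
names are assumptions, and the real code handles more registers):

    if (is_feat_trf_supported()) {
        ctx->trfcr_el2 = read_trfcr_el2();
    }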
The FVP platform decided to compile in support unconditionally (=1),
even though FEAT_TRF is an ARMv8.4 feature, so it is not available with
the FVP model's default command line.
Change that to the now supported dynamic option (=2), so the right
decision can be made by the code at runtime.
Change-Id: Ia97b01adbe24970a4d837afd463dc5506b7295a3
Signed-off-by: Andre Przywara <andre.przywara@arm.com>
Some MISRA tests complain about our code that isolates CPU ID register
fields: the ID registers (and associated masks) are 64 bits wide, but
the eventual field is only ever 4 bits wide, so we use an unsigned
int to represent it. MISRA dislikes the differing widths here.
Since the code to extract a feature field from a CPU ID register is very
schematic already, provide a wrapper macro to make this more readable,
and do the proper casting in one central place on the way.
While at it, use the same macro for the AArch32 feature detection side.
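A sketch of such a wrapper; the real name and width handling may differ:

    /* Extract a 4-bit ID register field, casting once, centrally. */
    #define ISOLATE_FIELD(reg, shift) \
        ((unsigned int)(((uint64_t)(reg) >> (shift)) & 0xfULL))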
Change-Id: Ie102a9e7007a386f5879ec65e159ff041504a4ee
Signed-off-by: Andre Przywara <andre.przywara@arm.com>
For easier debugging on AArch32, in case of an abort it is useful to
access the DFSR, IFSR, DFAR and IFAR CP15 registers.
Change-Id: Ie6b5a2882cd701f76e9d455ec43bd4b0fbe3cc78
Signed-off-by: Yann Gautier <yann.gautier@st.com>
This patch adds two helper functions:
- plat_ic_raise_ns_sgi to raise a NS SGI
- plat_ic_raise_s_el1_sgi to raise a S-EL1 SGI
Signed-off-by: Florian Lugou <florian.lugou@provenrun.com>
Change-Id: I6f262dd1da1d77fec3f850eb74189e726b8e24da
Add a new option SEPARATE_BL2_NOLOAD_REGION to separate non-loadable
sections (.bss, stack, page tables) into a RAM region specified
by BL2_NOLOAD_START and BL2_NOLOAD_LIMIT.
Signed-off-by: Jiafei Pan <Jiafei.Pan@nxp.com>
Change-Id: I844ee0fc405474af0aff978d292c826fbe0a82fd
FEAT_CCIDX modifies the register fields in CCSIDR/CCSIDR2 (AArch32)
and CCSIDR_EL1 (AArch64). This patch adds a check to the do_dcsw_op
function to use the right register format rather than assuming
that FEAT_CCIDX is not implemented.
Signed-off-by: John Powell <john.powell@arm.com>
Change-Id: I12cd00cd7b5889525d4d2750281a751dd74ef5dc
This change removes the `AMU_GROUP0_COUNTERS_MASK` and
`AMU_GROUP0_MAX_COUNTERS` preprocessor definitions, instead retrieving
the number of group 0 counters dynamically through `AMCGCR_EL0.CG0NC`.
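A hedged sketch of the dynamic query, assuming these field names:

    static inline unsigned int amu_group0_num_counters(void)
    {
        return (unsigned int)((read_amcgcr_el0() >> AMCGCR_EL0_CG0NC_SHIFT) &
                              AMCGCR_EL0_CG0NC_MASK);
    }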
Change-Id: I70e39c30fbd5df89b214276fac79cc8758a89f72
Signed-off-by: Chris Kay <chris.kay@arm.com>
This change introduces a small set of register getters and setters to
avoid having to repeatedly mask and shift in complex code.
Change-Id: Ia372f60c5efb924cd6eeceb75112e635ad13d942
Signed-off-by: Chris Kay <chris.kay@arm.com>
Currently on image entry, the data cache in the RW address range is
invalidated before MMU is enabled to safeguard against potential
stale data from a previous firmware stage. If PIE is enabled, however,
RO sections including the GOT may also be modified during PIE fixup.
Therefore, to be on the safe side, invalidate the entire image
region if PIE is enabled.
Signed-off-by: Zelalem Aweke <zelalem.aweke@arm.com>
Change-Id: I7ee2a324fe4377b026e32f9ab842617ad4e09d89
Introduced a build flag 'ENABLE_TRF_FOR_NS' to enable trace filter
control register access in NS-EL2, or NS-EL1 (when NS-EL2 is
implemented but unused).
Change-Id: If3f53b8173a5573424b9a405a4bd8c206ffdeb8c
Signed-off-by: Manish V Badarkhe <Manish.Badarkhe@arm.com>
The trap bits for trace filter control register accesses are in an
architecturally UNKNOWN state at boot, hence:
1. Initialise the trap bits to one to prohibit trace filter control
register accesses from lower ELs (EL2, EL1) in all security states
when FEAT_TRF is implemented (see the sketch below).
2. These bits are RES0 when FEAT_TRF is not implemented, so set them
to zero to align with the Arm ARM recommendation that software
must write RES0 bits as 0s.
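A hedged sketch of the intent, assuming TF-A's MDCR_TTRF_BIT definition
and a hypothetical is_feat_trf_present() check:

    u_register_t mdcr = read_mdcr_el3();
    if (is_feat_trf_present()) {
        mdcr |= MDCR_TTRF_BIT;      /* trap lower-EL TRF accesses */
    } else {
        mdcr &= ~MDCR_TTRF_BIT;     /* RES0: write as 0 */
    }
    write_mdcr_el3(mdcr);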
Change-Id: I1b7abf2170ece84ee585c91cda32d22b25c0fc34
Signed-off-by: Manish V Badarkhe <Manish.Badarkhe@arm.com>
Introduced a build flag 'ENABLE_SYS_REG_TRACE_FOR_NS' to enable trace
system register access in NS-EL2, or NS-EL1 (when NS-EL2 is
implemented but unused).
Change-Id: Idc1acede4186e101758cbf7bed5af7b634d7d18d
Signed-off-by: Manish V Badarkhe <Manish.Badarkhe@arm.com>
The trap bits for trace system register accesses are in an
architecturally UNKNOWN state at boot, hence:
1. Initialise the trap bits to one to prohibit trace system register
accesses from lower ELs (EL2, EL1) in all security states when
system trace registers are implemented.
2. These bits are RES0 in the absence of system trace register
support, so set them to zero to align with the Arm ARM
recommendation that software must write RES0 bits as 0s.
Change-Id: I4b6c15cda882325273492895d72568b29de89ca3
Signed-off-by: Manish V Badarkhe <Manish.Badarkhe@arm.com>
Only BL32 (SP_min) is supported at the moment; BL1 and BL2_AT_EL3 are
just stubbed with _pie_fixup_size=0.
The changes are an adaptation for AARCH32 on what has been done for
PIE support on AARCH64.
The RELA_SECTION is redefined for AARCH32, as the created section is
.rel.dyn and the symbols are .rel*.
Change-Id: I92bafe70e6b77735f6f890f32f2b637b98cf01b9
Signed-off-by: Yann Gautier <yann.gautier@st.com>
The use of end addresses is preferred over the size of sections.
This was done for some AARCH64 files for PIE with commit [1],
and some extra explanations can be found in its commit message.
Align the missing AARCH64 files.
For AARCH32 files, this is required to prepare PIE support introduction.
[1] f1722b693d ("PIE: Use PC relative adrp/adr for symbol reference")
Change-Id: I8f1c06580182b10c680310850f72904e58a54d7d
Signed-off-by: Yann Gautier <yann.gautier@st.com>
The speculation barrier feature (`FEAT_SB`) was introduced with and
made mandatory in the Armv8.5-A extension. It was retroactively made
optional in prior extensions, but the checks in our code-base do not
reflect that, assuming that it is only available in Armv8.5-A or later.
This change introduces the `ENABLE_FEAT_SB` definition, which derives
support for the `sb` instruction in the assembler from the feature
flags passed to it. Note that we assume that if this feature is enabled
then all the cores in the system support it - enabling speculation
barriers for only a subset of the cores is unsupported.
Signed-off-by: Chris Kay <chris.kay@arm.com>
Change-Id: I978ed38829385b221b10ba56d49b78f4756e20ea
ARMv8.6 adds virtual offset registers to support virtualization of the
event counters in EL1 and EL0. This patch enables support for this
feature in EL3 firmware.
Signed-off-by: John Powell <john.powell@arm.com>
Change-Id: I7ee1f3d9f554930bf5ef6f3d492e932e6d95b217
If FEAT_PMUv3 is implemented and PMEVTYPER<n>(_EL0).MT bit is implemented
as well, it is possible to control whether PMU counters take into account
events happening on other threads.
If FEAT_MTPMU is implemented, EL3 (or EL2) can override the MT bit
leaving it to effective state of 0 regardless of any write to it.
This patch introduces the DISABLE_MTPMU flag, which allows disabling
multithreaded event counting from EL3 (or EL2). The flag is disabled
by default so the behavior is consistent with those architectures
that do not implement FEAT_MTPMU.
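A hedged sketch of the EL3 side, assuming an MDCR_MTPME_BIT definition:

    #if DISABLE_MTPMU
        /* MDCR_EL3.MTPME == 0 forces the effective PMEVTYPER<n>_EL0.MT
         * bit to 0, regardless of what lower ELs write to it. */
        write_mdcr_el3(read_mdcr_el3() & ~MDCR_MTPME_BIT);
    #endif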
Signed-off-by: Javier Almansa Sobrino <javier.almansasobrino@arm.com>
Change-Id: Iee3a8470ae8ba13316af1bd40c8d4aa86e0cb85e
When issuing barrier instructions like DSB or DMB, we must make sure
that the compiler does not undermine our efforts to fence off
instructions. Currently the compiler is free to move the barrier
instruction around with respect to earlier or later memory access
statements, which is not what we want.
Add a compiler barrier to the inline assembly statement in our
DEFINE_SYSOP_TYPE_FUNC macro, to make sure memory accesses are not
reordered by the compiler.
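The result looks roughly like this sketch of the macro, with the added
"memory" clobber acting as the compiler barrier:

    #define DEFINE_SYSOP_TYPE_FUNC(_op, _type)        \
    static inline void _op ## _type(void)             \
    {                                                 \
        __asm__ (#_op " " #_type : : : "memory");     \
    }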
This is in line with Linux' definition:
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/arch/arm64/include/asm/barrier.h
Since those instructions share a definition, apart from DSB and DMB this
now also covers some TLBI instructions. Having a compiler barrier there
is also useful, although we probably have stronger barriers in place
already.
Change-Id: If6fe97b13a562643a643efc507cb4aad29daa5b6
Reported-by: Alexandru Elisei <alexandru.elisei@arm.com>
Signed-off-by: Andre Przywara <andre.przywara@arm.com>
Usually, C has no problem up-converting types to larger bit sizes. MISRA
rule 10.7 requires that you not do this, or that you be very explicit
about it.
This resolves the following required rule:
bl1/aarch64/bl1_context_mgmt.c:81:[MISRA C-2012 Rule 10.7 (required)]<None>
The width of the composite expression "0U | ((mode & 3U) << 2U) | 1U |
0x3c0U" (32 bits) is less that the right hand operand
"18446744073709547519ULL" (64 bits).
This also resolves MISRA defects such as:
bl2/aarch64/bl2arch_setup.c:18:[MISRA C-2012 Rule 12.2 (required)]
In the expression "3U << 20", shifting more than 7 bits, the number
of bits in the essential type of the left expression, "3U", is
not allowed.
Further, MISRA requires that all shifts don't overflow. The definition of
PAGE_SIZE was (1U << 12), and the essential type of 1U is 8 bits. This
caused about 50 issues.
This fixes the violation by changing the definition to 1UL << 12. Since
this uses 32 bits, it should not create any issues for aarch32.
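The change itself is a single suffix:

    /* before: the (8-bit) essential type of 1U overflows a 12-bit shift */
    #define PAGE_SIZE (1U << 12)
    /* after */
    #define PAGE_SIZE (1UL << 12)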
This patch also contains a fix for a build failure in the sun50i_a64
platform. Specifically, these MISRA fixes removed a single AND
instruction,
92407e73 and x19, x19, #0xffffffff
from the cm_setup_context function, which caused a relocation in
psci_cpus_on_start to require a linker-generated stub. This increased
the size of the .text section and caused an alignment later on to go
over a page boundary and round up to the end of RAM before placing the
.data section. This section is of non-zero size and therefore causes a
link error.
The fix included here reorders the functions at link time without
changing their ordering with respect to alignment.
Change-Id: I76b4b662c3d262296728a8b9aab7a33b02087f16
Signed-off-by: Jimmy Brisson <jimmy.brisson@arm.com>
This patch fixes a bug where AMUv1 group 1 counters were always
assumed to be implemented, without checking for their presence, which
caused an exception otherwise (a sketch of the presence check follows
the list below).
The AMU extension code was also modified as listed below:
- Added detection of AMUv1 for ARMv8.6
- 'PLAT_AMU_GROUP1_NR_COUNTERS' build option is removed and
number of group1 counters 'AMU_GROUP1_NR_COUNTERS' is now
calculated based on 'AMU_GROUP1_COUNTERS_MASK' value
- Added bit field definitions and access functions for
AMCFGR_EL0/AMCFGR and AMCGCR_EL0/AMCGCR registers
- Unification of amu.c AArch64 and AArch32 source files
- Bug fixes and TF-A coding style compliant changes.
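A hedged sketch of the presence check, assuming these field names:

    static inline bool amu_group1_supported(void)
    {
        /* AMCFGR_EL0.NCG > 0 means an auxiliary counter group exists. */
        return ((read_amcfgr_el0() >> AMCFGR_EL0_NCG_SHIFT) &
                AMCFGR_EL0_NCG_MASK) != 0U;
    }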
Change-Id: I14e407be62c3026ebc674ec7045e240ccb71e1fb
Signed-off-by: Alexei Fedorov <Alexei.Fedorov@arm.com>
By writing 0 to CLUSTERPWRDN DSU register bit 0, we send an
advisory to the power controller that cluster power is not required
when all cores are powered down.
The AArch32 CLUSTERPWRDN register is architecturally mapped to the
AArch64 CLUSTERPWRDN_EL1 register.
Change-Id: Ie6e67c1c7d811fa25c51e2e405ca7f59bd20c81b
Signed-off-by: Madhukar Pappireddy <madhukar.pappireddy@arm.com>
bakery_lock_normal.c uses the raw register accessor read_sctlr(_el3)
to check whether the dcache is enabled.
Using is_dcache_enabled() is cleaner, and a good abstraction for
library code like this.
A problem is that is_dcache_enabled() is declared in the local header,
lib/xlat_tables_v2/xlat_tables_private.h.
I searched for a good place to declare this helper. Moving it to
arch_helpers.h, close to the cache operation helpers, looks good enough
to me.
I also changed the type of 'is_cached' to bool for consistency,
and to avoid MISRA warnings.
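For reference, a minimal sketch of the helper at EL3 (the real one
dispatches on the current exception level):

    static inline bool is_dcache_enabled(void)
    {
        return (read_sctlr_el3() & SCTLR_C_BIT) != 0U;
    }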
Change-Id: I9b016f67bc8eade25c316aa9c0db0fa4cd375b79
Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
Attempts to address MISRA compliance issues in BL1, BL2, and BL31 code.
Mainly issues like not using boolean expressions in conditionals,
conflicting variable names, ignoring return values without (void), adding
explicit casts, etc.
Change-Id: If1fa18ab621b9c374db73fa6eaa6f6e5e55c146a
Signed-off-by: John Powell <john.powell@arm.com>
aarch32 CPUs speculatively execute instructions following an ERET
as if it were not a jump instruction. This could lead to
cache-based side channel vulnerabilities. The software fix is
to place barrier instructions following ERET.
The counterpart patch for aarch64 is merged:
https://git.trustedfirmware.org/TF-A/trusted-firmware-a.git/commit/?id=f461fe346b728d0e88142fd7b8f2816415af18bc
Change-Id: I2aa3105bee0b92238f389830b3a3b8650f33af3d
Signed-off-by: Madhukar Pappireddy <madhukar.pappireddy@arm.com>
At each BL entry point, the registers r9 to r12 are used to save info from
the previous BL parameters put in r0 to r3. But zeromem uses r12, leading
to a corruption of arg3. Therefore this change copies r12 to r7 before
the zeromem() call and restores r12 afterwards. It may be better to save it
in r7 in el3_arch_init_common and not at the entrypoint as r7 could be used
in other functions, especially platform ones.
This is a fix for Task T661.
Change-Id: Icc11990c69b5d4c542d08aca1a77b1f754b61a53
Signed-off-by: Yann Gautier <yann.gautier@st.com>
From AArch64 state, arguments are passed in registers W0-W7(X0-X7)
and results are returned in W0-W7(X0-X7) for SMC32(SMC64) calls.
From AArch32 state, arguments are passed in registers R0-R7 and
results are returned in registers R0-R7 for SMC32 calls.
Most of the functions and macros already existed to support using
up to 8 registers for passing/returning parameters/results. Added a
few helper macros for SMC calls from AArch32 state.
Link to the specification:
https://developer.arm.com/docs/den0028/c
Change-Id: I87976b42454dc3fc45c8343e9640aa78210e9741
Signed-off-by: Madhukar Pappireddy <madhukar.pappireddy@arm.com>
Add the missing flag for aarch32 XIP memory mode. It was
previously added in aarch64 only.
Minor: correct the missing aarch64 flag.
Signed-off-by: Lionel Debieve <lionel.debieve@st.com>
Change-Id: Iac0a7581a1fd580aececa75f97deb894858f776f
This patch changes implementation for disabling Secure Cycle
Counter. For ARMv8.5 the counter gets disabled by setting
SDCR.SCCD bit on CPU cold/warm boot. For the earlier
architectures PMCR register is saved/restored on secure
world entry/exit from/to Non-secure state, and cycle counting
gets disabled by setting PMCR.DP bit.
New ARMv8.5-PMU related definitions were added to the
'include/aarch32/arch.h' header file.
Change-Id: Ia8845db2ebe8de940d66dff479225a5b879316f8
Signed-off-by: Alexei Fedorov <Alexei.Fedorov@arm.com>