Merge changes from topic "ar/asymmetricSupport" into integration

* changes:
  feat(tc): enable trbe errata flags for Cortex-A520 and X4
  feat(cm): asymmetric feature support for trbe
  refactor(errata-abi): move EXTRACT_PARTNUM to arch.h
  feat(cpus): workaround for Cortex-A520(2938996) and Cortex-X4(2726228)
  feat(tc): make SPE feature asymmetric
  feat(cm): handle asymmetry for SPE feature
  feat(cm): support for asymmetric feature among cores
  feat(cpufeat): add new feature state for asymmetric features
Manish V Badarkhe, 2024-08-19 11:56:49 +02:00, committed by TrustedFirmware Code Review
commit 553b70c3ef
17 changed files with 218 additions and 20 deletions

bl31/bl31.mk

@@ -42,6 +42,7 @@ BL31_SOURCES += bl31/bl31_main.c \
 				bl31/bl31_context_mgmt.c		\
 				bl31/bl31_traps.c			\
 				common/runtime_svc.c			\
+				lib/cpus/errata_common.c		\
 				lib/cpus/aarch64/dsu_helpers.S		\
 				plat/common/aarch64/platform_mp_stack.S	\
 				services/arm_arch_svc/arm_arch_svc_setup.c	\

docs/components/context-management-library.rst

@@ -98,14 +98,15 @@ feature set, and thereby save and restore the configuration associated with them

 4. **Dynamic discovery of Feature enablement by EL3**

-TF-A supports three states for feature enablement at EL3, to make them available
+TF-A supports four states for feature enablement at EL3, to make them available
 for lower exception levels.

 .. code:: c

         #define FEAT_STATE_DISABLED		0
         #define FEAT_STATE_ENABLED		1
         #define FEAT_STATE_CHECK		2
+        #define FEAT_STATE_CHECK_ASYMMETRIC	3

 A pattern is established for feature enablement behavior.
 Each feature must support the 3 possible values with rigid semantics.
@@ -119,7 +120,26 @@ Each feature must support the 3 possible values with rigid semantics.

 - **FEAT_STATE_CHECK** - same as ``FEAT_STATE_ALWAYS`` except that the feature's
   existence will be checked at runtime. Default on dynamic platforms (example: FVP).

+- **FEAT_STATE_CHECK_ASYMMETRIC** - same as ``FEAT_STATE_CHECK`` except that the
+  feature's existence is asymmetric across cores, so its presence must also be
+  checked during the warmboot path. Note that only a limited number of features
+  can be asymmetric.
+
+.. note::
+   Only a limited number of features can be ``FEAT_STATE_CHECK_ASYMMETRIC``,
+   because operating systems are designed for SMP systems. There are no clear
+   guidelines on what kind of mismatch is allowed, but the following pointers
+   can help in making a decision:
+
+   - All mandatory features must be symmetric.
+   - Any feature that impacts the generation of page tables must be symmetric.
+   - Any feature access which does not trap to EL3 should be symmetric.
+   - Features related to profiling, debug and trace could be asymmetric.
+   - Migration of vCPUs/tasks between CPUs should not cause an error.
+
+   Whenever asymmetric support is added for a feature, TF-A needs to add
+   feature-specific code in the context management code.
+
 .. note::
    ``FEAT_RAS`` is an exception here, as it impacts the execution of EL3 and
    it is essential to know its presence at compile time. Refer to ``ENABLE_FEAT``
    macro under :ref:`Build Options` section for more details.
@@ -498,4 +518,4 @@ Realm worlds.
 .. |Context Init WarmBoot| image:: ../resources/diagrams/context_init_warmboot.png
 .. _Trustzone for AArch64: https://developer.arm.com/documentation/102418/0101/TrustZone-in-the-processor/Switching-between-Security-states
 .. _Security States with RME: https://developer.arm.com/documentation/den0126/0100/Security-states
 .. _lib/el3_runtime/(aarch32/aarch64): https://git.trustedfirmware.org/TF-A/trusted-firmware-a.git/tree/lib/el3_runtime
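
For reference, the dynamic-detection pattern that these states feed into looks roughly like the sketch below. It is modelled on the feature helpers in include/arch/aarch64/arch_features.h, but the "xyz" names are placeholders for illustration, not a real TF-A feature:

    #include <stdbool.h>
    #include <stdint.h>

    #define FEAT_STATE_DISABLED		0
    #define FEAT_STATE_ALWAYS		1
    #define FEAT_STATE_CHECK		2
    #define FEAT_STATE_CHECK_ASYMMETRIC	3

    /* Placeholder build option; in TF-A this comes from the build system. */
    #define ENABLE_FEAT_XYZ		FEAT_STATE_CHECK

    /* Placeholder for reading the feature's ID register field. */
    extern uint64_t read_feat_xyz_id_field(void);

    static inline bool is_feat_xyz_supported(void)
    {
    	if (ENABLE_FEAT_XYZ == FEAT_STATE_DISABLED) {
    		return false;	/* support code compiled out */
    	}

    	if (ENABLE_FEAT_XYZ == FEAT_STATE_ALWAYS) {
    		return true;	/* platform guarantees the feature */
    	}

    	/*
    	 * FEAT_STATE_CHECK and FEAT_STATE_CHECK_ASYMMETRIC both read the
    	 * ID register at runtime; the asymmetric state is additionally
    	 * re-evaluated per core in the warmboot path (see
    	 * cm_handle_asymmetric_features() further down).
    	 */
    	return read_feat_xyz_id_field() != 0U;
    }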

docs/design/cpu-specific-build-macros.rst

@@ -826,6 +826,10 @@ For Cortex-X4, the following errata build flags are defined :
   feature is enabled and can assist the Kernel in the process of
   mitigation of the erratum.

+- ``ERRATA_X4_2726228``: This applies erratum 2726228 workaround to Cortex-X4
+  CPU. This needs to be enabled for revisions r0p0 and r0p1. It is fixed in
+  r0p2.
+
 - ``ERRATA_X4_2740089``: This applies errata 2740089 workaround to Cortex-X4
   CPU. This needs to be enabled for revisions r0p0 and r0p1. It is fixed
   in r0p2.
@@ -899,6 +903,10 @@ For Cortex-A520, the following errata build flags are defined :
   Cortex-A520 CPU. This needs to be enabled for revisions r0p0 and r0p1.
   It is still open.

+- ``ERRATA_A520_2938996``: This applies errata 2938996 workaround to
+  Cortex-A520 CPU. This needs to be enabled for revisions r0p0 and r0p1.
+  It is fixed in r0p2.
+
 For Cortex-A715, the following errata build flags are defined :

 - ``ERRATA_A715_2331818``: This applies errata 2331818 workaround to

include/arch/aarch64/arch.h

@@ -24,6 +24,9 @@
 #define MIDR_PN_MASK		U(0xfff)
 #define MIDR_PN_SHIFT		U(0x4)

+/* Extracts the CPU part number from MIDR for checking CPU match */
+#define EXTRACT_PARTNUM(x)	((x >> MIDR_PN_SHIFT) & MIDR_PN_MASK)
+
 /*******************************************************************************
  * MPIDR macros
  ******************************************************************************/

include/common/feat_detect.h

@@ -11,8 +11,9 @@
 void detect_arch_features(void);

 /* Macro Definitions */
 #define FEAT_STATE_DISABLED		0
 #define FEAT_STATE_ALWAYS		1
 #define FEAT_STATE_CHECK		2
+#define FEAT_STATE_CHECK_ASYMMETRIC	3

 #endif /* FEAT_DETECT_H */

include/lib/cpus/aarch64/cortex_a520.h

@@ -28,4 +28,15 @@
 #define CORTEX_A520_CPUPWRCTLR_EL1			S3_0_C15_C2_7
 #define CORTEX_A520_CPUPWRCTLR_EL1_CORE_PWRDN_BIT	U(1)

+#ifndef __ASSEMBLER__
+#if ERRATA_A520_2938996
+long check_erratum_cortex_a520_2938996(long cpu_rev);
+#else
+static inline long check_erratum_cortex_a520_2938996(long cpu_rev)
+{
+	return 0;
+}
+#endif /* ERRATA_A520_2938996 */
+#endif /* __ASSEMBLER__ */
+
 #endif /* CORTEX_A520_H */

include/lib/cpus/aarch64/cortex_x4.h

@@ -34,4 +34,15 @@
 #define CORTEX_X4_CPUACTLR5_EL1		S3_0_C15_C8_0
 #define CORTEX_X4_CPUACTLR5_EL1_BIT_14	(ULL(1) << 14)

+#ifndef __ASSEMBLER__
+#if ERRATA_X4_2726228
+long check_erratum_cortex_x4_2726228(long cpu_rev);
+#else
+static inline long check_erratum_cortex_x4_2726228(long cpu_rev)
+{
+	return 0;
+}
+#endif /* ERRATA_X4_2726228 */
+#endif /* __ASSEMBLER__ */
+
 #endif /* CORTEX_X4_H */
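
Note the shape of the guard in both headers: when the erratum flag is off, the check collapses to a static inline stub returning 0 (the ERRATA_NOT_APPLIES status), so callers such as check_if_affected_core() below can invoke it unconditionally and the compiler folds the call away.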

include/lib/cpus/errata.h

@@ -25,12 +25,21 @@
 #define ERRATUM_MITIGATED	ERRATUM_CHOSEN + ERRATUM_CHOSEN_SIZE
 #define ERRATUM_ENTRY_SIZE	ERRATUM_MITIGATED + ERRATUM_MITIGATED_SIZE

+/* Errata status */
+#define ERRATA_NOT_APPLIES	0
+#define ERRATA_APPLIES		1
+#define ERRATA_MISSING		2
+
 #ifndef __ASSEMBLER__
 #include <lib/cassert.h>

 void print_errata_status(void);
 void errata_print_msg(unsigned int status, const char *cpu, const char *id);

+#if ERRATA_A520_2938996 || ERRATA_X4_2726228
+unsigned int check_if_affected_core(void);
+#endif
+
 /*
  * NOTE that this structure will be different on AArch32 and AArch64. The
  * uintptr_t will reflect the change and the alignment will be correct in both.
@@ -74,11 +83,6 @@ CASSERT(sizeof(struct erratum_entry) == ERRATUM_ENTRY_SIZE,

 #endif /* __ASSEMBLER__ */

-/* Errata status */
-#define ERRATA_NOT_APPLIES	0
-#define ERRATA_APPLIES		1
-#define ERRATA_MISSING		2
-
 /* Macro to get CPU revision code for checking errata version compatibility. */
 #define CPU_REV(r, p)	((r << 4) | p)

include/lib/el3_runtime/context_mgmt.h

@@ -44,6 +44,7 @@ void cm_init_context_by_index(unsigned int cpu_idx,
 void cm_manage_extensions_el3(void);
 void manage_extensions_nonsecure_per_world(void);
 void cm_el3_arch_init_per_world(per_world_context_t *per_world_ctx);
+void cm_handle_asymmetric_features(void);
 #endif

 #if CTX_INCLUDE_EL2_REGS
@@ -95,6 +96,7 @@ void *cm_get_next_context(void);
 void cm_set_next_context(void *context);
 static inline void cm_manage_extensions_el3(void) {}
 static inline void manage_extensions_nonsecure_per_world(void) {}
+static inline void cm_handle_asymmetric_features(void) {}
 #endif /* __aarch64__ */

 #endif /* CONTEXT_MGMT_H */

lib/cpus/aarch64/cortex_a520.S

@@ -1,5 +1,5 @@
 /*
- * Copyright (c) 2021-2023, Arm Limited. All rights reserved.
+ * Copyright (c) 2021-2024, Arm Limited. All rights reserved.
  *
  * SPDX-License-Identifier: BSD-3-Clause
  */
@@ -11,6 +11,9 @@
 #include <cpu_macros.S>
 #include <plat_macros.S>

+/* .global erratum_cortex_a520_2938996_wa */
+.global check_erratum_cortex_a520_2938996
+
 /* Hardware handled coherency */
 #if HW_ASSISTED_COHERENCY == 0
 #error "Cortex A520 must be compiled with HW_ASSISTED_COHERENCY enabled"
@@ -32,6 +35,25 @@ workaround_reset_start cortex_a520, ERRATUM(2858100), ERRATA_A520_2858100
 workaround_reset_end cortex_a520, ERRATUM(2858100)

 check_erratum_ls cortex_a520, ERRATUM(2858100), CPU_REV(0, 1)

+workaround_runtime_start cortex_a520, ERRATUM(2938996), ERRATA_A520_2938996, CORTEX_A520_MIDR
+workaround_runtime_end cortex_a520, ERRATUM(2938996)
+
+check_erratum_custom_start cortex_a520, ERRATUM(2938996)
+
+	/* This erratum needs to be enabled for r0p0 and r0p1.
+	 * Check if revision is less than or equal to r0p1.
+	 */
+#if ERRATA_A520_2938996
+	mov	x1, #1
+	b	cpu_rev_var_ls
+#else
+	mov	x0, #ERRATA_MISSING
+#endif
+	ret
+check_erratum_custom_end cortex_a520, ERRATUM(2938996)
+
 /* ----------------------------------------------------
  * HW will do the cache maintenance while powering down
  * ----------------------------------------------------
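
As a reading aid, here is a C sketch of what the custom check above computes. It assumes TF-A's helper convention that cpu_rev_var_ls receives the revision-variant in x0 and a threshold in x1, and returns ERRATA_APPLIES when the revision is less than or equal to that threshold:

    #include <lib/cpus/errata.h>  /* ERRATA_APPLIES, ERRATA_NOT_APPLIES, CPU_REV */

    /* rev_var is ((variant << 4) | revision), matching CPU_REV(). */
    static long check_erratum_2938996_sketch(long rev_var)
    {
    	/* "mov x1, #1" above encodes CPU_REV(0, 1), i.e. r0p1. */
    	return (rev_var <= CPU_REV(0, 1)) ? ERRATA_APPLIES : ERRATA_NOT_APPLIES;
    }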

lib/cpus/aarch64/cortex_x4.S

@@ -22,10 +22,30 @@
 #error "Cortex X4 supports only AArch64. Compile with CTX_INCLUDE_AARCH32_REGS=0"
 #endif

+.global check_erratum_cortex_x4_2726228
+
 #if WORKAROUND_CVE_2022_23960
 	wa_cve_2022_23960_bhb_vector_table CORTEX_X4_BHB_LOOP_COUNT, cortex_x4
 #endif /* WORKAROUND_CVE_2022_23960 */

+workaround_runtime_start cortex_x4, ERRATUM(2726228), ERRATA_X4_2726228, CORTEX_X4_MIDR
+workaround_runtime_end cortex_x4, ERRATUM(2726228)
+
+check_erratum_custom_start cortex_x4, ERRATUM(2726228)
+
+	/* This erratum needs to be enabled for r0p0 and r0p1.
+	 * Check if revision is less than or equal to r0p1.
+	 */
+#if ERRATA_X4_2726228
+	mov	x1, #1
+	b	cpu_rev_var_ls
+#else
+	mov	x0, #ERRATA_MISSING
+#endif
+	ret
+check_erratum_custom_end cortex_x4, ERRATUM(2726228)
+
 workaround_runtime_start cortex_x4, ERRATUM(2740089), ERRATA_X4_2740089
 	/* dsb before isb of power down sequence */
 	dsb	sy

lib/cpus/cpu-ops.mk

@@ -823,6 +823,10 @@ CPU_FLAG_LIST += ERRATA_X3_2779509
 # cpu and is fixed in r0p1.
 CPU_FLAG_LIST += ERRATA_X4_2701112

+# Flag to apply erratum 2726228 workaround during warmboot. This erratum
+# applies to all revisions <= r0p1 of the Cortex-X4 cpu, it is fixed in r0p2.
+CPU_FLAG_LIST += ERRATA_X4_2726228
+
 # Flag to apply erratum 2740089 workaround during powerdown. This erratum
 # applies to all revisions <= r0p1 of the Cortex-X4 cpu, it is fixed in r0p2.
 CPU_FLAG_LIST += ERRATA_X4_2740089
@@ -896,6 +900,10 @@ CPU_FLAG_LIST += ERRATA_A520_2630792
 # applies to revision r0p0 and r0p1 of the Cortex-A520 cpu and is still open.
 CPU_FLAG_LIST += ERRATA_A520_2858100

+# Flag to apply erratum 2938996 workaround during reset. This erratum
+# applies to revision r0p0 and r0p1 of the Cortex-A520 cpu and is fixed in r0p2.
+CPU_FLAG_LIST += ERRATA_A520_2938996
+
 # Flag to apply erratum 2331132 workaround during reset. This erratum applies
 # to revisions r0p0, r0p1 and r0p2. It is still open.
 CPU_FLAG_LIST += ERRATA_V2_2331132

lib/cpus/errata_common.c (new file, 30 lines)

@@ -0,0 +1,30 @@
+/*
+ * Copyright (c) 2024, Arm Limited and Contributors. All rights reserved.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause
+ */
+
+/* Runtime C routines for errata workarounds and common routines */
+
+#include <arch.h>
+#include <arch_helpers.h>
+#include <cortex_a520.h>
+#include <cortex_x4.h>
+#include <lib/cpus/cpu_ops.h>
+#include <lib/cpus/errata.h>
+
+#if ERRATA_A520_2938996 || ERRATA_X4_2726228
+unsigned int check_if_affected_core(void)
+{
+	uint32_t midr_val = read_midr();
+	long rev_var = cpu_get_rev_var();
+
+	if (EXTRACT_PARTNUM(midr_val) == EXTRACT_PARTNUM(CORTEX_A520_MIDR)) {
+		return check_erratum_cortex_a520_2938996(rev_var);
+	} else if (EXTRACT_PARTNUM(midr_val) == EXTRACT_PARTNUM(CORTEX_X4_MIDR)) {
+		return check_erratum_cortex_x4_2726228(rev_var);
+	}
+
+	return ERRATA_NOT_APPLIES;
+}
+#endif
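
check_if_affected_core() is consumed by cm_handle_asymmetric_features() in context_mgmt.c (below), where a core matching an affected Cortex-A520 or Cortex-X4 revision gets TRBE disabled in its warmboot path.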

lib/el3_runtime/aarch64/context.S

@@ -142,7 +142,7 @@ endfunc fpregs_context_restore
 	 * always enable DIT in EL3
 	 */
 #if ENABLE_FEAT_DIT
-#if ENABLE_FEAT_DIT == 2
+#if ENABLE_FEAT_DIT >= 2
 	mrs	x8, id_aa64pfr0_el1
 	and	x8, x8, #(ID_AA64PFR0_DIT_MASK << ID_AA64PFR0_DIT_SHIFT)
 	cbz	x8, 1f
@@ -166,8 +166,7 @@ endfunc fpregs_context_restore
 .macro restore_mpam3_el3
 #if ENABLE_FEAT_MPAM
-#if ENABLE_FEAT_MPAM == 2
+#if ENABLE_FEAT_MPAM >= 2
 	mrs	x8, id_aa64pfr0_el1
 	lsr	x8, x8, #(ID_AA64PFR0_MPAM_SHIFT)
 	and	x8, x8, #(ID_AA64PFR0_MPAM_MASK)
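
With FEAT_STATE_CHECK_ASYMMETRIC defined as 3, the change from == 2 to >= 2 keeps these runtime ID-register checks active for both FEAT_STATE_CHECK and the new asymmetric state.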

lib/el3_runtime/aarch64/context_mgmt.c

@@ -19,6 +19,8 @@
 #include <common/debug.h>
 #include <context.h>
 #include <drivers/arm/gicv3.h>
+#include <lib/cpus/cpu_ops.h>
+#include <lib/cpus/errata.h>
 #include <lib/el3_runtime/context_mgmt.h>
 #include <lib/el3_runtime/cpu_data.h>
 #include <lib/el3_runtime/pubsub_events.h>
@@ -1523,6 +1525,45 @@ void cm_el2_sysregs_context_restore(uint32_t security_state)
 }
 #endif /* CTX_INCLUDE_EL2_REGS */

+#if IMAGE_BL31
+/*********************************************************************************
+ * This function allows architecture features to be asymmetric among cores.
+ * TF-A assumes that all cores in the platform have architecture feature parity,
+ * and hence a core's context may be set up on a different core (e.g. the
+ * primary sets up the context for secondary cores). This assumption may not
+ * hold for systems where cores do not conform to the same Arch version, or
+ * where a CPU erratum requires a certain feature to be disabled only on a
+ * given core.
+ *
+ * This function is called on secondary cores to override any disparity in the
+ * context set up by the primary; it is invoked during the warmboot path.
+ *********************************************************************************/
+void cm_handle_asymmetric_features(void)
+{
+#if ENABLE_SPE_FOR_NS == FEAT_STATE_CHECK_ASYMMETRIC
+	cpu_context_t *spe_ctx = cm_get_context(NON_SECURE);
+
+	assert(spe_ctx != NULL);
+
+	if (is_feat_spe_supported()) {
+		spe_enable(spe_ctx);
+	} else {
+		spe_disable(spe_ctx);
+	}
+#endif
+#if ERRATA_A520_2938996 || ERRATA_X4_2726228
+	cpu_context_t *trbe_ctx = cm_get_context(NON_SECURE);
+
+	assert(trbe_ctx != NULL);
+
+	if (check_if_affected_core() == ERRATA_APPLIES) {
+		if (is_feat_trbe_supported()) {
+			trbe_disable(trbe_ctx);
+		}
+	}
+#endif
+}
+#endif
+
 /*******************************************************************************
  * This function is used to exit to Non-secure world. If CTX_INCLUDE_EL2_REGS
  * is enabled, it restores EL1 and EL2 sysreg contexts instead of directly
@@ -1531,6 +1572,18 @@ void cm_el2_sysregs_context_restore(uint32_t security_state)
  ******************************************************************************/
 void cm_prepare_el3_exit_ns(void)
 {
+#if IMAGE_BL31
+	/*
+	 * Check for and handle architecture feature asymmetry among cores.
+	 *
+	 * In the warmboot path, a secondary core's context is initialized on
+	 * the core which made the CPU_ON SMC call. If there is feature
+	 * asymmetry between these cores, handle it here.
+	 * For symmetric cores this is an empty function.
+	 */
+	cm_handle_asymmetric_features();
+#endif
+
 #if CTX_INCLUDE_EL2_REGS
 #if ENABLE_ASSERTIONS
 	cpu_context_t *ctx = cm_get_context(NON_SECURE);
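
The SPE branch in cm_handle_asymmetric_features() above is the template for any future asymmetric feature: re-run the feature check on the booting core and patch the NS context that the primary initialised. A hypothetical sketch for some feature "xyz" (the xyz_* identifiers are placeholders, not TF-A API) would slot into that function as:

    #if ENABLE_FEAT_XYZ == FEAT_STATE_CHECK_ASYMMETRIC
    	cpu_context_t *xyz_ctx = cm_get_context(NON_SECURE);

    	assert(xyz_ctx != NULL);

    	/*
    	 * The primary core populated this context from its own ID registers;
    	 * re-evaluate on the current core and override the enable state.
    	 */
    	if (is_feat_xyz_supported()) {
    		xyz_enable(xyz_ctx);
    	} else {
    		xyz_disable(xyz_ctx);
    	}
    #endif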

plat/arm/board/tc/platform.mk

@@ -34,6 +34,7 @@ ENABLE_AMU_AUXILIARY_COUNTERS	:=	1
 ENABLE_MPMM			:=	1
 ENABLE_MPMM_FCONF		:=	1
 ENABLE_FEAT_MTE2		:=	2
+ENABLE_SPE_FOR_NS		:=	3

 CTX_INCLUDE_AARCH32_REGS	:=	0
@@ -109,6 +110,9 @@ endif
 # CPU libraries for TARGET_PLATFORM=2
 ifeq (${TARGET_PLATFORM}, 2)
+ERRATA_A520_2938996	:=	1
+ERRATA_X4_2726228	:=	1
 TC_CPU_SOURCES	+=	lib/cpus/aarch64/cortex_a520.S \
 			lib/cpus/aarch64/cortex_a720.S \
 			lib/cpus/aarch64/cortex_x4.S
@@ -116,6 +120,8 @@ endif
 # CPU libraries for TARGET_PLATFORM=3
 ifeq (${TARGET_PLATFORM}, 3)
+ERRATA_A520_2938996	:=	1
 TC_CPU_SOURCES	+=	lib/cpus/aarch64/cortex_a520.S \
 			lib/cpus/aarch64/cortex_a725.S \
 			lib/cpus/aarch64/cortex_x925.S

services/std_svc/errata_abi/ (errata ABI CPU header)

@@ -8,6 +8,7 @@
 #define ERRATA_CPUSPEC_H

 #include <stdint.h>
+#include <arch.h>
 #include <arch_helpers.h>

 #if __aarch64__
@@ -31,8 +32,6 @@
 /* Default values for unused memory in the array */
 #define UNDEF_ERRATA		{UINT_MAX, UCHAR_MAX, UCHAR_MAX}

-#define EXTRACT_PARTNUM(x)	((x >> MIDR_PN_SHIFT) & MIDR_PN_MASK)
-
 #define RXPX_RANGE(x, y, z)	(((x >= y) && (x <= z)) ? true : false)

 /*