In RESET_TO_BL31, the SPMC manifest base address that is utilized by
bl32_image_ep_info has to be statically defined as DT is not available.
Common arm code sets this to the top of SRAM using macros but it can be
different for some platforms. Hence, introduce the macro
PLAT_ARM_SPMC_MANIFEST_BASE, which platforms can redefine to suit their
use case. Platforms that use arm_def.h keep the existing value from the
Arm common code.
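A minimal sketch of the idea (the guard style and the example address
are illustrative, not the actual arm_def.h contents):

    /* Default from Arm common code, overridable by the platform: */
    #ifndef PLAT_ARM_SPMC_MANIFEST_BASE
    #define PLAT_ARM_SPMC_MANIFEST_BASE	UL(0x04000000)	/* example only */
    #endif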
Signed-off-by: Rakshit Goyal <rakshit.goyal@arm.com>
Change-Id: I4491749ad2b5794e06c9bd11ff61e2e64f21a948
* changes:
feat(tc): get entropy with PSA Crypto API
feat(psa): add interface with RSE for retrieving entropy
fix(psa): guard Crypto APIs with CRYPTO_SUPPORT
feat(tc): enable trng
feat(tc): initialize the RSE communication in earlier phase
Add the AP/RSE interface for reading entropy, and update the
documentation for the API.
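For illustration, a minimal sketch of pulling entropy through the PSA
Crypto API (the surrounding TRNG plumbing and error handling are
omitted):

    #include <psa/crypto.h>

    static int plat_get_entropy(uint8_t *buf, size_t len)
    {
        psa_status_t status = psa_generate_random(buf, len);

        return (status == PSA_SUCCESS) ? 0 : -1;
    }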
Change-Id: I61492d6b5d824a01ffeadc92f9d41ca841ba3367
Signed-off-by: Leo Yan <leo.yan@arm.com>
Signed-off-by: Icen Zeyada <Icen.Zeyada2@arm.com>
Building the Crypto APIs requires external headers, e.g. the MbedTLS
headers. Without the CRYPTO_SUPPORT configuration these external
dependencies are not set up, so building the Crypto APIs fails.
Guard the Crypto APIs with the CRYPTO_SUPPORT configuration to make
sure the code is built only when Crypto support is enabled.
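The guard can live in the build system or at the C level; as a sketch
of the intent (the prototype below is hypothetical):

    #if CRYPTO_SUPPORT
    /* Only reference PSA Crypto when the dependency is actually set up. */
    psa_status_t rse_platform_get_entropy(uint8_t *data, size_t size);
    #endif /* CRYPTO_SUPPORT */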
Change-Id: Iffe1220b0e6272586c46432b4f8d0512cb39b0b5
Signed-off-by: Leo Yan <leo.yan@arm.com>
Neoverse-V3 erratum 3701767 that applies to r0p0, r0p1, r0p2 is
still Open.
The workaround is for EL3 software that performs context save/restore
on a change of Security state to use a value of SCR_EL3.NS when
accessing ICH_VMCR_EL2 that reflects the Security state that owns the
data being saved or restored.
SDEN documentation:
https://developer.arm.com/documentation/SDEN-2891958/latest/
Change-Id: I5be0de881f408a9e82a07b8459d79490e9065f94
Signed-off-by: Govindraj Raja <govindraj.raja@arm.com>
Neoverse-N3 erratum 3699563 that applies to r0p0 is still Open.
The workaround is for EL3 software that performs context save/restore
on a change of Security state to use a value of SCR_EL3.NS when
accessing ICH_VMCR_EL2 that reflects the Security state that owns the
data being saved or restored.
SDEN documentation:
https://developer.arm.com/documentation/SDEN-3050973/latest/
Change-Id: I77aaf8ae0afff3adde9a85f4a1a13ac9d1daf0af
Signed-off-by: Govindraj Raja <govindraj.raja@arm.com>
Neoverse-N2 erratum 3701773 that applies to r0p0, r0p1, r0p2 and r0p3
is still Open.
The workaround is for EL3 software that performs context save/restore
on a change of Security state to use a value of SCR_EL3.NS when
accessing ICH_VMCR_EL2 that reflects the Security state that owns the
data being saved or restored.
SDEN documentation:
https://developer.arm.com/documentation/SDEN-1982442/latest/
Change-Id: If95bd67363228c8083724b31f630636fb27f3b61
Signed-off-by: Govindraj Raja <govindraj.raja@arm.com>
Cortex-X925 erratum 3701747 applies to r0p0 and r0p1 and is still
Open.
The workaround is for EL3 software that performs context save/restore
on a change of Security state to use a value of SCR_EL3.NS when
accessing ICH_VMCR_EL2 that reflects the Security state that owns the
data being saved or restored.
SDEN documentation:
https://developer.arm.com/documentation/109180/latest/
Change-Id: I080296666f89276b3260686c2bdb8de63fc174c1
Signed-off-by: Govindraj Raja <govindraj.raja@arm.com>
Cortex-X4 erratum 3701758 that applies to r0p0, r0p1, r0p2 and r0p3
is still Open.
The workaround is for EL3 software that performs context save/restore
on a change of Security state to use a value of SCR_EL3.NS when
accessing ICH_VMCR_EL2 that reflects the Security state that owns the
data being saved or restored.
SDEN documentation:
https://developer.arm.com/documentation/109148/latest/
Change-Id: I4ee941d1e7653de7a12d69f538ca05f7f9f9961d
Signed-off-by: Govindraj Raja <govindraj.raja@arm.com>
Cortex-X3 erratum 3701769 that applies to r0p0, r1p0, r1p1 and r1p2
is still Open.
The workaround is for EL3 software that performs context save/restore
on a change of Security state to use a value of SCR_EL3.NS when
accessing ICH_VMCR_EL2 that reflects the Security state that owns the
data being saved or restored.
SDEN documentation:
https://developer.arm.com/documentation/SDEN-2055130/latest/
Change-Id: Ifd722e1bb8616ada2ad158297a7ca80b19a3370b
Signed-off-by: Govindraj Raja <govindraj.raja@arm.com>
Cortex-X2 erratum 3701772 that applies to r0p0, r1p0, r2p0, r2p1
is still Open.
The workaround is for EL3 software that performs context save/restore
on a change of Security state to use a value of SCR_EL3.NS when
accessing ICH_VMCR_EL2 that reflects the Security state that owns the
data being saved or restored.
SDEN documentation:
https://developer.arm.com/documentation/SDEN-1775100/latest/
Change-Id: I2ffc5e7d7467f1bcff8b895fea52a1daa7d14495
Signed-off-by: Govindraj Raja <govindraj.raja@arm.com>
Cortex-A725 erratum 3699564 applies to r0p0 and r0p1 and is
fixed in r0p2.
The workaround is for EL3 software that performs context save/restore
on a change of Security state to use a value of SCR_EL3.NS when
accessing ICH_VMCR_EL2 that reflects the Security state that owns the
data being saved or restored.
SDEN documentation:
https://developer.arm.com/documentation/SDEN-2832921/latest
Change-Id: Ifad1f6c3f5b74060273f897eb5e4b79dd9f088f7
Signed-off-by: Govindraj Raja <govindraj.raja@arm.com>
Cortex-A720-AE erratum 3699562 applies to r0p0 and is still
Open.
The workaround is for EL3 software that performs context save/restore
on a change of Security state to use a value of SCR_EL3.NS when
accessing ICH_VMCR_EL2 that reflects the Security state that owns the
data being saved or restored.
SDEN documentation:
https://developer.arm.com/documentation/SDEN-3090091/latest/
Change-Id: Ib830470747822cac916750c01684a65cb5efc15b
Signed-off-by: Govindraj Raja <govindraj.raja@arm.com>
Cortex-A720 erratum 3699561 that applies to all revisions <= r0p2
and is still Open.
The workaround is for EL3 software that performs context save/restore
on a change of Security state to use a value of SCR_EL3.NS when
accessing ICH_VMCR_EL2 that reflects the Security state that owns the
data being saved or restored.
SDEN documentation:
https://developer.arm.com/documentation/SDEN-2439421/latest/
Change-Id: I7ea3aaf3e7bf6b4f3648f6872e505a41247b14ba
Signed-off-by: Govindraj Raja <govindraj.raja@arm.com>
Cortex-A715 erratum 3699560 that applies to all revisions <= r1p3
and is still Open.
The workaround is for EL3 software that performs context save/restore
on a change of Security state to use a value of SCR_EL3.NS when
accessing ICH_VMCR_EL2 that reflects the Security state that owns the
data being saved or restored.
SDEN documentation:
https://developer.arm.com/documentation/SDEN-2148827/latest/
Change-Id: I183aa921b4b6f715d64eb6b70809de2566017d31
Signed-off-by: Govindraj Raja <govindraj.raja@arm.com>
Cortex-A710 erratum 3701772 that applies to all revisions <= r2p1
and is still Open.
The workaround is for EL3 software that performs context save/restore
on a change of Security state to use a value of SCR_EL3.NS when
accessing ICH_VMCR_EL2 that reflects the Security state that owns the
data being saved or restored.
SDEN documentation:
https://developer.arm.com/documentation/SDEN-1775101/latest/
Change-Id: I997c9cfaa75321f22b4f690c4d3f234c0b51c670
Signed-off-by: Govindraj Raja <govindraj.raja@arm.com>
When ICH_VMCR_EL2.VBPR1 is written in Secure state (SCR_EL3.NS==0)
and then subsequently read in Non-secure state (SCR_EL3.NS==1), a
wrong value might be returned. The same issue exists in the opposite
direction.
Add a workaround in EL3 software that performs context save/restore
on a change of Security state: use a value of SCR_EL3.NS when
accessing ICH_VMCR_EL2 that reflects the Security state that owns the
data being saved or restored. For example, EL3 software should set
SCR_EL3.NS to 1 when saving or restoring the value of ICH_VMCR_EL2 for
Non-secure (or Realm) state, and should clear SCR_EL3.NS to 0 when
saving or restoring the value of ICH_VMCR_EL2 for Secure state.
SDEN documentation:
https://developer.arm.com/documentation/SDEN-1775101/latest/
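As a conceptual sketch of the save path for Non-secure (or Realm)
owned data (the ICH_VMCR_EL2 accessor name is an assumption):

    static uint64_t save_ich_vmcr_for_ns(void)
    {
        u_register_t scr = read_scr_el3();
        uint64_t vmcr;

        write_scr_el3(scr | SCR_NS_BIT);  /* data owner is Non-secure/Realm */
        isb();
        vmcr = read_ich_vmcr_el2();       /* accessor name assumed */
        write_scr_el3(scr);               /* put the original SCR_EL3.NS back */
        isb();

        return vmcr;
    }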
Change-Id: I9f0403601c6346276e925f02eab55908b009d957
Signed-off-by: Govindraj Raja <govindraj.raja@arm.com>
- errata.h uses the incorrect header guard macro ERRATA_REPORT_H; fix this.
- Group the errata function utilities.
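For reference, the corrected guard would look like this (guard name
inferred from the file name):

    #ifndef ERRATA_H
    #define ERRATA_H
    /* ... */
    #endif /* ERRATA_H */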
Change-Id: I6a4a8ec6546adb41e24d8885cb445fa8be830148
Signed-off-by: Govindraj Raja <govindraj.raja@arm.com>
To allow for generic handling of a wakeup, this hook is no longer
expected to call wfi itself. Update the name everywhere to reflect this
expectation so that future platform implementers don't get misled.
Change-Id: Ic33f0b6da74592ad6778fd802c2f0b85223af614
Signed-off-by: Boyan Karatotev <boyan.karatotev@arm.com>
Newer cores in upcoming platforms may refuse to power down. The PSCI
library is already prepared for this so convert platform code to also
allow this. This is simple - drop the `wfi` + panic and let common code
deal with the fallout. The end result will be the same (sans the
message) except the platform will have fewer responsibilities. The only
exception is for cores being signalled to power off gracefully ahead of
system reset. That path must also be terminal so replace the end with
the same psci_pwrdown_cpu_end() to behave the same as the generic
implementation. It will handle wakeups and panic, hoping that the system
gets reset from under it. The dmb is upgraded to a dsb, so there is no
functional change.
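A sketch of what the converted graceful-off path looks like (the
platform helper name and the exact psci_pwrdown_cpu_end() signature
are assumptions):

    static void plat_cpu_graceful_off(void)
    {
        plat_do_power_management();   /* hypothetical platform-specific steps */

        /* Replaces the old dmb + wfi + panic; the common helper uses a dsb,
         * copes with spurious wakeups and panics if the core comes back. */
        psci_pwrdown_cpu_end();
    }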
Change-Id: I381f96bec8532bda6ccdac65de57971aac42e7e8
Signed-off-by: Boyan Karatotev <boyan.karatotev@arm.com>
Travis' and Gelas' TRMs tell us to disable SME (set PSTATE.{ZA, SM} to
0) when we're attempting to power down. What they don't tell us is that
if this isn't done, the powerdown request will be rejected. On the
CPU_OFF path that's not a problem - we can force SVCR to 0 and be
certain the core will power off.
On the suspend to powerdown path, however, we cannot do this. The TRM
also tells us that the sequence could be aborted on, e.g., GIC
interrupts. If this were to happen when we have overwritten SVCR to 0,
the caller would experience a loss of context upon return. We know
that at least Linux may call into PSCI with SVCR != 0. One option is
to save the entire SME context, which would be quite expensive just to
work around this. Another option is to downgrade the request to a
normal suspend when SME was left on. This option is better, as this is
expected to happen rarely enough to ignore the wasted power, and we
don't want to burden the generic (correct) path with needless context
management.
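A minimal sketch of the decision (the SVCR accessor name is assumed to
exist alongside the SME helpers):

    static bool sme_ctx_is_live(void)
    {
        /* SVCR.{SM,ZA} != 0 means the caller still has streaming/ZA state. */
        return is_feat_sme_supported() && (read_svcr() != 0ULL);
    }

If this returns true on the suspend-to-powerdown path, the request is
downgraded to a normal suspend instead of forcing SVCR to 0.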
Signed-off-by: Boyan Karatotev <boyan.karatotev@arm.com>
Change-Id: I698fa8490ebf51461f6aa8bba84f9827c5c46ad4
The simplistic view of a core's powerdown sequence is that power is
atomically cut upon calling `wfi`. However, it turns out that it has
lots to do - it has to talk to the interconnect to exit coherency, clean
caches, check for RAS errors, etc. These take significant amounts of
time and are certainly not atomic. As such there is a significant window
of opportunity for external events to happen. Many of these steps are
not destructive to context, so theoretically, the core can just "give
up" half way (or roll certain actions back) and carry on running. The
point in this sequence after which roll back is not possible is called
the point of no return.
One of these actions is the checking for RAS errors. It is possible for
one to happen during this lengthy sequence, or at least remain
undiscovered until that point. If the core were to continue powerdown
when that happens, there would be no (easy) way to inform anyone about
it. Rejecting the powerdown and letting software handle the error is the
best way to implement this.
Arm cores since at least the A510 have included this exact feature. So
far it hasn't been deemed necessary to account for it in firmware due to
the low likelihood of this happening. However, events like GIC wakeup
requests are much more probable. Older cores will powerdown and
immediately power back up when this happens. Travis and Gelas include a
feature similar to the RAS case above, called powerdown abandon. The
idea is that this will improve the latency to service the interrupt by
saving on work which the core and software need to do.
So far firmware has relied on the `wfi` being the point of no return and
if it doesn't explicitly detect a pending interrupt quite early on, it
will embark on a sequence that it expects to end with shutdown. To
accommodate the wfi not being a point of no return, we must undo all of
the system management we did, just like in the warm boot entrypoint.
To achieve that, the pwr_domain_pwr_down_wfi hook must not be terminal.
Most recent platforms do some platform management and finish on the
standard `wfi`, followed by a panic or an endless loop as this is
expected to not return. To make this generic, any platform that wishes
to support wakeups must instead let common code call
`psci_power_down_wfi()` right after. Besides wakeups, this lets common
code handle powerdown errata better as well.
Then, the CPU_OFF case is simple - PSCI does not allow it to return. So
the best that can be done is to attempt the `wfi` a few times (the
choice of 32 is arbitrary) in the hope that the wakeup is transient. If
it isn't, the only choice is to panic, as the system is likely to be in
a bad state, eg. interrupts weren't routed away. The same applies for
SYSTEM_OFF, SYSTEM_RESET, and SYSTEM_RESET2. There the panic won't
matter as the system is going offline one way or another. The RAS case
will be considered in a separate patch.
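A sketch of the CPU_OFF behaviour described above (the retry count
follows the commit text; it is not an architectural requirement):

    for (unsigned int i = 0U; i < 32U; i++) {
        wfi();  /* should not return; a transient wakeup may bring us back */
    }
    /* Interrupts were evidently not routed away - nothing sane left to do. */
    panic();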
Now, the CPU_SUSPEND case is more involved. First, to powerdown it must
wipe its context as it is not written on warm boot. But it cannot be
overwritten in case of a wakeup. To avoid the catch 22, save a copy that
will only be used if powerdown fails. That is about 500 bytes on the
stack so it hopefully doesn't tip anyone over any limits. In future that
can be avoided by having a core manage its own context.
Second, when the core wakes up, it must undo anything it did to prepare
for poweroff, which for the cores we care about, is writing
CPUPWRCTLR_EL1.CORE_PWRDN_EN. The least intrusive way of doing this,
from the cpu library's point of view, is to simply call the power off
hook again and have the hook toggle the bit. If more complex sequences
are needed in the future, their direction can be decided based on the
value of this bit.
Third, do the actual "resume". Most of the logic is already there for
the retention suspend, so that only needs a small touch up to apply to
the powerdown case as well. The missing bit is the powerdown specific
state management. Luckily, the warmboot entrypoint does exactly that
already too, so steal that and we're done.
All of this is hidden behind a FEAT_PABANDON flag since it has a large
memory and runtime cost that we don't want to burden non-pabandon cores
with.
Finally, do some function renaming to better reflect their purpose and
make names a little bit more consistent.
Change-Id: I2405b59300c2e24ce02e266f91b7c51474c1145f
Signed-off-by: Boyan Karatotev <boyan.karatotev@arm.com>
This function doesn't return, and its callers (which don't return
either) rely on this. Drop the dead attribute and add a panic() after
it to make
this expectation explicit. Calling `wfi` in the powerdown sequence is
terminal so even if the function was made to return, there would be no
functional change.
This is useful for a following patch that makes psci_power_down_wfi()
return.
Change-Id: I62ca1ee058b1eaeb046966c795081e01bf45a2eb
Signed-off-by: Boyan Karatotev <boyan.karatotev@arm.com>
The workarounds introduced in the three patches starting at
888eafa00b assumed that any powerdown
request will be (forced to be) terminal. This assumption can no longer
be the case for new CPUs so there is a need to revisit these older
cores. Since we may wake up, we now need to respect the workaround's
recommendation that the workaround needs to be reverted on wakeup. So do
exactly that.
Introduce a new helper to toggle bits in assembly. This allows us to
call the workaround twice, with the first call setting the workaround
and the second undoing it. This is also used for Gelas' and Travis'
powerdown routines, so that the same function can be called again on
wakeup.
Also fix the condition in the cpu helper macro, as it was subtly wrong.
Change-Id: Iff9e5251dc9d8670d085d88c070f78991955e7c3
Signed-off-by: Boyan Karatotev <boyan.karatotev@arm.com>
Introduce a new helper to toggle bits in assembly. It allows us to call
a workaround function twice, with the first call setting the workaround
and the second undoing it, so the (errata) workaround functions can be
used to both apply and undo the mitigation.
This is applied to the functions whose undo part will be required in
follow-up patches.
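Conceptually (shown in C for readability; the actual helper is an
assembly macro and the accessor/bit names below are illustrative):

    static void core_pwrdn_en_toggle(void)
    {
        uint64_t val = read_cpupwrctlr_el1();  /* hypothetical accessor */

        val ^= CORE_PWRDN_EN_BIT;  /* XOR: first call sets, second call clears */
        write_cpupwrctlr_el1(val);
    }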
Change-Id: I058bad58f5949b2d5fe058101410e33b6be1b8ba
Signed-off-by: Boyan Karatotev <boyan.karatotev@arm.com>
This patch implements SMCCC_ARCH_WORKAROUND_4 and
allows discovery through SMCCC_ARCH_FEATURES.
This mechanism is enabled if CVE_2024_7881 [1] is enabled by the
platform. If the CVE_2024_7881 mitigation is implemented, the discovery
call returns 0; if not, it returns -1 (SMC_ARCH_CALL_NOT_SUPPORTED).
For more information about SMCCC_ARCH_WORKAROUND_4 [2], please
refer to the SMCCC Specification reference provided below.
[1]: https://developer.arm.com/Arm%20Security%20Center/Arm%20CPU%20Vulnerability%20CVE-2024-7881
[2]: https://developer.arm.com/documentation/den0028/latest
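A sketch of the discovery side (the per-CPU check helper and the build
flag name are assumptions):

    static int32_t smccc_arch_features_wa4(void)
    {
    #if WORKAROUND_CVE_2024_7881
        if (check_wa_cve_2024_7881() == 0) {  /* hypothetical per-CPU check */
            return 0;                         /* mitigation implemented */
        }
    #endif
        return SMC_ARCH_CALL_NOT_SUPPORTED;   /* -1 */
    }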
Signed-off-by: Arvind Ram Prakash <arvind.ramprakash@arm.com>
Change-Id: I1b1ffaa1f806f07472fd79d5525f81764d99bc79
This patch adds a new cpu ops function, extra4, and a new macro for
CVE-2024-7881 [1]. The new macro, declare_cpu_ops_wa_4, adds support
for the new CVE check function.
[1]: https://developer.arm.com/Arm%20Security%20Center/Arm%20CPU%20Vulnerability%20CVE-2024-7881
Signed-off-by: Arvind Ram Prakash <arvind.ramprakash@arm.com>
Change-Id: I417389f040c6ead7f96f9b720d29061833f43d37
The EXTLLC bit, in CPUECTLR_EL1 for non-Gelas CPUs and in CPUECTLR2_EL1
for the Gelas CPU, enables the external last-level cache in the system.
An external LLC is present on TC4 systems in the MCN, but it is not
enabled in the CPU registers, so enable it.
On TC4, Gelas and non-Gelas CPUs use different bits to enable EXTLLC,
so take care of that as well.
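Illustrative only (the accessors, bit names and CPU check below are
assumptions):

    if (cpu_is_gelas()) {
        write_cpuectlr2_el1(read_cpuectlr2_el1() | EXTLLC_BIT_GELAS);
    } else {
        write_cpuectlr_el1(read_cpuectlr_el1() | EXTLLC_BIT);
    }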
Change-Id: Ic6a74b4af110a3c34d19131676e51901ea2bf6e3
Signed-off-by: Jagdish Gediya <jagdish.gediya@arm.com>
Signed-off-by: Icen.Zeyada <Icen.Zeyada2@arm.com>
This patch adds support for Local Chip Addressing (LCA). In a multi-chip
system, enabling LCA allows each GIC Distributor to maintain its own
version of the routing table. This feature is activated when the
GICD_CFGID.LCA bit is set to 1.
The existing `gic600_multichip_data` data structure did not account for
the LCA feature. To support LCA:
- `rt_owner_base` is replaced by `base_addrs[]`. This is required
because each GICD in the system needs to be configured independently,
and their base addresses must be passed to the driver.
- `chip_addrs` is changed from a 1D to a 2D array to store the routing
table for each chip's GICD. The entries in `chip_addrs` are
configuration dependent, as the GIC specification does not enforce
this.
On a multi-chip platform with chip count N where LCA is enabled by
default, the `gic600_multichip_data` structure should contain all copies
of the routing table (N*N entries). On platforms where LCA is not
supported, only the first sub-array with N entries is required. The
function signature of `gic600_multichip_init` remains unchanged, but if
the LCA feature is enabled, the driver will expect the routing table
configuration in the described format.
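Illustrative shape of the updated structure (unrelated fields are
omitted and the array bound name is an assumption):

    struct gic600_multichip_data {
        uintptr_t base_addrs[GIC600_MAX_MULTICHIP];  /* one GICD base per chip */
        uint64_t chip_addrs[GIC600_MAX_MULTICHIP][GIC600_MAX_MULTICHIP];
                                     /* per-GICD copy of the routing table */
        /* ... remaining fields unchanged ... */
    };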
Change-Id: I8830c2cf90db6a0cae78e99914cd32c637284a2b
Signed-off-by: Jerry Wang <Jerry.Wang4@arm.com>
Replace the dummy implementation of clk_ops.get_rate with a basic
version that only handles the oscillator objects. Subsequent commits
will add more objects to this list.
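A minimal sketch of the new behaviour (the prototype and helper names
are assumptions):

    static unsigned long s32cc_get_rate(unsigned long id)
    {
        if (is_osc(id)) {                 /* hypothetical oscillator check */
            return get_osc_frequency(id); /* fixed rate of the oscillator */
        }

        return 0UL;  /* other object types are added by later commits */
    }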
Change-Id: I8c1bbbfa6b116fdcf5a1f1353bdb52b474bac831
Signed-off-by: Ghennadi Procopciuc <ghennadi.procopciuc@nxp.com>
* changes:
perf(psci): pass my_core_pos around instead of calling it repeatedly
refactor(psci): move timestamp collection to psci_pwrdown_cpu
refactor(psci): factor common code out of the standby finisher
refactor(psci): don't use PSCI_INVALID_PWR_LVL to signal OFF state
docs(psci): drop outdated cache maintenance comment
FEAT_MOPS, mandatory from Armv8.8, is typically managed in EL2.
However, in configurations where NS_EL2 is not enabled,
EL3 must set the HCRX_EL2.MSCEn bit to 1 to enable the feature.
This patch ensures FEAT_MOPS is enabled by setting HCRX_EL2.MSCEn to 1.
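A sketch of the intent (the init-value name and the placement of the
write in EL3 context setup are assumptions):

    static uint64_t build_hcrx_el2(void)
    {
        uint64_t hcrx = HCRX_EL2_INIT_VAL;  /* name assumed */

        /* With NS-EL2 disabled, EL3 owns HCRX_EL2, so enable FEAT_MOPS here. */
        if (is_feat_mops_supported()) {
            hcrx |= HCRX_EL2_MSCEn_BIT;
        }

        return hcrx;
    }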
Change-Id: Ic4960e0cc14a44279156b79ded50de475b3b21c5
Signed-off-by: Arvind Ram Prakash <arvind.ramprakash@arm.com>
Initializing all early clocks before the MMU is enabled can impact boot
time. Therefore, splitting the setup into A53 clocks and peripheral
clocks can be beneficial, with the peripheral clocks configured after
fully initializing the MMU.
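A hypothetical ordering sketch (the function names are illustrative,
not the actual NXP hooks):

    void plat_early_setup(void)
    {
        s32cc_init_core_clocks();    /* A53 clocks: needed before the MMU is on */
        enable_mmu_el3(0U);
        s32cc_map_clk_modules();     /* dynamic MMU regions for the clock IPs */
        s32cc_init_periph_clocks();  /* remaining peripheral clocks */
    }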
Change-Id: I19644227b66effab8e2c43e64e057ea0c8625ebc
Signed-off-by: Ghennadi Procopciuc <ghennadi.procopciuc@nxp.com>
Add all clock modules as entries in MMU using dynamic regions.
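For example, with the existing dynamic xlat API (requires
PLAT_XLAT_TABLES_DYNAMIC; the base, size and attributes below are
placeholders):

    #include <lib/xlat_tables/xlat_tables_v2.h>

    static int map_clock_module(uintptr_t base, size_t size)
    {
        return mmap_add_dynamic_region(base, base, size,
                                       MT_DEVICE | MT_RW | MT_SECURE);
    }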
Change-Id: I56f724ced4bd024554c7b38afd14ea420de80cc6
Signed-off-by: Ghennadi Procopciuc <ghennadi.procopciuc@nxp.com>
On some platforms plat_my_core_pos is a nontrivial function that takes a
bit of time and the compiler really doesn't like to inline. In the PSCI
library, at least, we have no need to keep repeatedly calling it and we
can instead pass it around as an argument. This saves on a lot of
redundant calls, speeding the library up a bit.
Change-Id: I137f69bea80d7cac90d7a20ffe98e1ba8d77246f
Signed-off-by: Boyan Karatotev <boyan.karatotev@arm.com>
The target_pwrlvl field in the psci cpu data struct only stores the
highest power domain that a CPU_SUSPEND call affected, and is used to
resume those same domains on warm reset. If the cpu is otherwise OFF
(never turned on or CPU_OFF), then this needs to be the highest power
level because we don't know the highest level that will be off.
So skip the invalidation and always keep the field to the maximum value.
During suspend the field will be lowered to the appropriate value and
then put back after wakeup.
Also do this in the suspend-to-standby path, since the field will have
been written before the sleep and might otherwise end up incorrect.
Change-Id: I614272ec387e1d83023c94700780a0f538a9a6b6
Signed-off-by: Boyan Karatotev <boyan.karatotev@arm.com>
* changes:
feat(el3_spmc): ffa error handling in direct msg
feat(ff-a): support FFA_MSG_SEND_DIRECT_REQ2/RESP2
feat(ff-a): add FFA_MEM_PERM_GET/SET_SMC64
feat(el3-spmc): support Hob list to boot S-EL0 SP
feat(synquacer): add support Hob creation
fix(fvp): exclude extend memory map TZC regions
feat(fvp): add StandaloneMm manifest in fvp
feat(spm): use xfer list with Hob list in SPM_MM
StandaloneMm, which is an S-EL0 partition, uses
FFA_MSG_SEND_DIRECT_REQ2/RESP2 to handle multiple services.
For this, add support for FFA_MSG_SEND_DIRECT_REQ2/RESP2 in el3_spmc,
restricted to using 8 registers, although FF-A v1.2 defines
FFA_MSG_SEND_DIRECT_REQ2/RESP2 as able to pass/return up to 18
registers.
Signed-off-by: Levi Yun <yeoreum.yun@arm.com>
Change-Id: I8ab1c332d269d9d131330bb2debd10d75bdba1ee
* changes:
feat(qemu): hand off TPM event log via TL
feat(handoff): common API for TPM event log handoff
feat(handoff): transfer entry ID for TPM event log
fix(qemu): fix register convention in BL31 for qemu
fix(handoff): fix register convention in opteed
SMCCC_ARCH_FEATURE_AVAILABILITY [1] is a call to query firmware about
the features it is aware of and enables. This is useful when a feature
is not enabled at EL3, eg due to an older FW image, but it is present in
hardware. In those cases, the EL1 ID registers do not reflect the usable
feature set and this call should provide the necessary information to
remedy that.
The call itself is very lightweight - effectively a sanitised read of
the relevant system register. Bits that are not relevant to feature
enablement are masked out and active low bits are converted to active
high.
The implementation is also very simple. All relevant, irrelevant, and
inverted bits are combined into bitmasks at build time. Then at runtime
the masks are unconditionally applied to produce the right result. This
assumes that context managers will make sure that disabled features
do not have their bits set and the registers are context switched if
any fields in them make enablement ambiguous.
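Conceptually, for the SCR_EL3 case (the mask name is illustrative; the
real mask is generated at build time as described above):

    #define SCR_EL3_FEAT_BITS	(SCR_APK_BIT | SCR_FIEN_BIT /* | ... */)

    static u_register_t scr_feature_availability(const cpu_context_t *ctx)
    {
        u_register_t scr = read_ctx_reg(get_el3state_ctx(ctx), CTX_SCR_EL3);

        return scr & SCR_EL3_FEAT_BITS;  /* disabled features read back as 0 */
    }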
Features that are not yet supported in TF-A have not been added. On
debug builds, calling this function will fail an assert if any bits that
are not expected are set. In combination with CI, this should allow
this feature to stay up to date as new architectural features are
added.
If a call for MPAM3_EL3 is made when MPAM is not enabled, the call
will return INVALID_PARAM, while if MPAM is set to FEAT_STATE_CHECK it
will return zero. This should be fairly consistent with feature
detection.
The bitmask is meant to be interpreted as the logical AND of the
relevant ID registers. It would be permissible for this to return 1
while the ID returns 0. Despite this, this implementation takes steps
not to. In the general case, the two should match exactly.
Finally, it is not entirely clear whether this call should reply to
SMC32 requests. It does not, as the return values are all 64 bits.
[1]: https://developer.arm.com/documentation/den0028/galp1/?lang=en
Co-developed-by: Charlie Bareham <charlie.bareham@arm.com>
Signed-off-by: Boyan Karatotev <boyan.karatotev@arm.com>
Change-Id: I1a74e7d0b3459b1396961b8fa27f84e3f0ad6a6f
In preparation for SMCCC_ARCH_FEATURE_AVAILABILITY, it is useful for
context to be directly related to the underlying system. Currently,
certain bits like SCR_EL3.APK are always set with the understanding that
they will only take effect if the feature is present.
However, that is problematic for SMCCC_ARCH_FEATURE_AVAILABILITY (an
SMCCC call to report which features firmware enables), as simply reading
the enable bit may contradict the ID register, like the APK bit above
for a system with no Pauth present.
This patch is to clean up these cases. Add a check for PAuth's presence
so that the APK bit remains unset if not present. Also move SPE and TRBE
enablement to only the NS context. They already only enable the features
for NS only and disable them for Secure and Realm worlds. This change
only makes these worlds' context read 0 for easy bitmasking.
There's only a single snag on SPE and TRBE. Currently, their fields have
the same values and any world asymmetry is handled by hardware. Since we
don't want to do that, the buffers' ownership will change if we just set
the fields to 0 for non-NS worlds. Doing that, however, exposes Secure
state to a potential denial of service attack - a malicious NS can
enable profiling and call an SMC. Then, the owning security state will
change and since no SPE/TRBE registers are context switched, Secure state will
start generating records. Always have NS world own the buffers to
prevent this.
Finally, get rid of manage_extensions_common() as it's just a level of
indirection to enable a single feature.
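A sketch of the PAuth part (the exact spot in context setup is an
assumption):

    if (is_armv8_3_pauth_present()) {
        scr_el3 |= SCR_API_BIT | SCR_APK_BIT;  /* only set when PAuth exists */
    }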
Change-Id: I487bd4c70ac3e2105583917a0e5499e0ee248ed9
Signed-off-by: Boyan Karatotev <boyan.karatotev@arm.com>
Create a common BL2 API to add a TE for TPM event log.
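A sketch of what such an API could look like (the tag constant is an
assumption; transfer_list_add() is the existing transfer list helper):

    int bl2_add_tpm_event_log_te(struct transfer_list_header *tl,
                                 const void *evlog, uint32_t evlog_size)
    {
        return (transfer_list_add(tl, TL_TAG_TPM_EVLOG /* assumed */,
                                  evlog_size, evlog) != NULL) ? 0 : -1;
    }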
Change-Id: I459e70f40069aa9ea0625977e0bad8ec316439e6
Signed-off-by: Raymond Mao <raymond.mao@linaro.org>
SPE and TRBE don't have an outright EL3 disable; there are only
constraints on what's allowed. Since we only enable them for NS at the
moment, we want NS to own the buffers even when the feature should be
"disabled" for a world. This means that when we're running in NS
everything is as normal but when running in S/RL then tracing is
prohibited (since the buffers are owned by NS). This allows us to fiddle
with context a bit more without having to context switch registers.
Change-Id: Ie1dc7c00e4cf9bcc746f02ae43633acca32d3758
Signed-off-by: Boyan Karatotev <boyan.karatotev@arm.com>