More often than not, Arm-based systems include some revision of a GIC.
There are two ways of adding support for it in platform code - calling
the top-level helpers from plat/arm/common/arm_gicvX.c or using the
driver directly. Both of these methods allow for a high degree of
customisation - most functions are defined to be weak and there are no
calls to any of them in generic code.
As it turns out, requirements around those GICs are largely the same.
Platforms that use arm_gicvX.c call the helpers identically to one
another. Platforms that use the driver directly tend to end up with
calls that look a lot like the arm_gicvX.c helpers, and the weakness of
the functions is never exercised.
All of this results in a lot of code duplication to do what is
essentially the same thing. Even though it's not a lot of code, when
multiplied among many platforms it becomes significant and makes
refactoring it quite difficult. It's also bug-prone since the steps are
a little convoluted and things are likely to work even with subtle
errors (see 50009f6117).
So promote as much of the GIC handling as possible to common code. Do
the setup in bl31_main() and have every PSCI method do the state
management directly instead of delegating it to the platform hooks. We
can base this implementation on arm_gicvX.c since its helpers already
offer logical names and have worked quite well so far with minimal
changes.
The main benefit of doing this is reduced code duplication. If we assume
that, outside of some platform setup, GIC management is identical, then
a platform can add support by telling the build system, regardless of
GIC revision. The other benefit is performance - BL31 and PSCI already
know the core_pos and they can pass it as an argument instead of having
to call plat_my_core_pos(). Now, the only platform specific GIC actions
necessary are the saving and restoring of context on entering and
exiting a power domain. The PSCI library does not keep track of this, so
it is unable to perform it itself. The routines themselves are also
provided.
For compatibility all of this is hidden behind a build flag. Platforms
are encouraged to adopt this driver, but it would not be practical to
convert and validate every GIC based platform.
This patch renames the functions in question to follow the
gic_<function>() convention. This allows the names to be version
agnostic.
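As a rough sketch of the intended shape (the names follow the new
gic_<function>() convention, but the exact signatures here are
illustrative rather than a definitive API):

  /* bl31_main(), common code - no platform GIC hooks required */
  gic_init(plat_my_core_pos());
  gic_cpuif_enable(plat_my_core_pos());

  /* PSCI power-down paths, common code, core_pos already known */
  gic_cpuif_disable(core_pos);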
Finally, drop the weak definitions - they are unused, likely to remain
so, and can be added back if the need arises.
Change-Id: I5b5267f4b72f633fb1096400ec8e4b208694135f
Signed-off-by: Boyan Karatotev <boyan.karatotev@arm.com>
prepare_el3_entry() is meant to be the one-stop shop for all the context
we must fiddle with to enter EL3 proper. However, PAuth is the one
exception, happening right after. Absorb it into prepare_el3_entry(),
handling the BL1/BL31 difference.
This is also a good time to move the key saving into the enable
function, to centralise things. With this it becomes apparent that saving
keys just before CPU_SUSPEND is redundant as they will be reinitialised
when the core wakes up.
Note that the key loading, now in save_gp_pmcr_pauth_regs, does not end
in an isb. The effects of the key change are not needed until the isb in
the caller, so an extra one here is unnecessary.
Change-Id: Idd286bea91140c106ab4c933c5c44b0bc2050ca2
Signed-off-by: Boyan Karatotev <boyan.karatotev@arm.com>
Currently, the calling core (meaning the core which received the call to
CPU_ON or the powerdown path of CPU_SUSPEND on the same core) is in
charge of initialising the context for the waking core (the warmboot
entrypoint for both). This is convenient because the calling core can
write the context while in coherency and the waking core will only need
the context after it has entered coherency. This avoids any cache
maintenance and makes communication simple.
However, this has 3 main problems:
a) asymmetric feature support is problematic - the calling core has no
way of knowing the feature set of the waking core. If the two
diverge, the architectural feature discovery via ID registers breaks
down. We've thus far "fixed" this on a case-by-case basis, which
doesn't scale and introduces redundancy.
b) powerdown abandon (pabandon) introduces a contradiction - the calling
core has to initialise the context for when the core wakes up, but
should the core not powerdown it needs its old context intact. The only
way to work around this is by keeping two copies of context which
incurs a runtime and memory overhead.
c) cm_prepare_el3_exit[_ns]() doesn't have access to the entrypoint but needs
it to make initialisation decisions. We can infer some of this from
registers that have already been written but this is awkwardly
limiting for what we can do. This also necessitates the split from
the context initialisation.
We can solve all three by making each core fully own its own context.
The calling core then only writes entrypoint information and nothing
else. The waking core then initialises its own context as it sees fit,
with full knowledge of the whole picture.
The only tricky bit is cache coherency - the waking core has to be able
to coherently observe its new entrypoint. Calling cores will write to
the shared region with coherent caches on. If we make sure to read the
context only after the waking core has entered coherency, then we can
avoid cache operations and let hardware handle everything.
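A sketch of the intended split (psci_set/get_warm_entrypoint are
hypothetical helpers, used only to illustrate the flow):

  /* Calling core: publish only the entrypoint, nothing else. */
  void psci_cpu_on_prepare(unsigned int target_idx,
                           const entry_point_info_t *ep)
  {
          psci_set_warm_entrypoint(target_idx, ep);
  }

  /* Waking core: runs after the MMU is on, i.e. in coherency. */
  void psci_warmboot_setup(void)
  {
          const entry_point_info_t *ep =
                  psci_get_warm_entrypoint(plat_my_core_pos());

          /* Full knowledge of this core's ID registers is available here. */
          cm_init_my_context(ep);
  }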
We can skip the spsr check for FEAT_TCR2 as it doesn't make a
difference. We can also skip enabling it twice from generic code.
Signed-off-by: Boyan Karatotev <boyan.karatotev@arm.com>
Signed-off-by: Manish Pandey <manish.pandey2@arm.com>
Change-Id: I86e7fe8b698191fc3b469e5ced1fd010f8754b0e
When building with LTO, GCC is uncomfortable that these variables are
uninitialised and complains that they may be used before they are
initialised. Set them to 0 as there are plenty of asserts to make sure
these branches cannot be taken.
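The pattern is simply (the variable name here is hypothetical):

  unsigned int parent_idx = 0U;   /* was: unsigned int parent_idx; */

which satisfies GCC's -Wmaybe-uninitialized analysis under LTO even
though the asserts already guarantee the value is written before use.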
Change-Id: Ic1f05e77252e93bdafab033dcb24ad42856ebf9a
Signed-off-by: Boyan Karatotev <boyan.karatotev@arm.com>
* changes:
fix(psci): avoid altering function parameters
fix(services): avoid altering function parameters
fix(common): ignore the unused function return value
fix(psci): modify variable conflicting with external function
fix(delay-timer): create unique variable name
This corrects the MISRA violation C2012-17.8:
A function parameter should not be modified.
A local variable is declared and used to process the value
from the argument.
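For example (hypothetical function, showing the pattern only):

  int set_state(unsigned int state)
  {
          unsigned int local_state = state; /* work on a copy, not the
                                             * parameter itself */

          if (local_state > MAX_STATE) {
                  local_state = MAX_STATE;
          }

          return do_set_state(local_state);
  }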
Change-Id: Ia757db4903132794623dbf92ff8cecc9b40f170d
Signed-off-by: Nithin G <nithing@amd.com>
Signed-off-by: Maheedhar Bollapalli <maheedharsai.bollapalli@amd.com>
This corrects the MISRA violation C2012-5.8:
Identifiers that define objects or functions with
external linkage shall be unique.
Modify the variable name to prevent a conflict with the
external function declaration.
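For example (names hypothetical, showing the pattern only):

  /* Before: clashes with an external function of the same name */
  uint32_t timer_delay;

  /* After: renamed so the external-linkage identifier is unique */
  uint32_t timer_delay_value;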
Change-Id: I2f109242b6dd3b3c5e9289881e3dd5466c74fcb5
Signed-off-by: Nithin G <nithing@amd.com>
Signed-off-by: Maheedhar Bollapalli <maheedharsai.bollapalli@amd.com>
This corrects the MISRA violation C2012-8.13:
A pointer should point to a const-qualified type whenever possible.
Added a const qualifier to pointers in the function arguments.
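For example (hypothetical prototype, showing the pattern only):

  /* Before */
  int parse_config(struct config *cfg);

  /* After: the function only reads through cfg */
  int parse_config(const struct config *cfg);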
Change-Id: Id3d4aa528f275973a37c0b9af04495632cb2dda3
Signed-off-by: Nithin G <nithing@amd.com>
Signed-off-by: Maheedhar Bollapalli <maheedharsai.bollapalli@amd.com>
This corrects the MISRA violation C2012-15.6:
The body of an iteration-statement or a selection-statement shall
be a compound-statement.
Enclosed the statement body within curly braces.
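For example (showing the pattern only):

  /* Before */
  if (rc != 0)
          return rc;

  /* After */
  if (rc != 0) {
          return rc;
  }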
Change-Id: I8b656f59b445e914dd3f47e3dde83735481a3640
Signed-off-by: Nithin G <nithing@amd.com>
Signed-off-by: Maheedhar Bollapalli <maheedharsai.bollapalli@amd.com>
The current code is incredibly resilient to updates to the spec and
has worked quite well so far. However, recent implementations expose a
weakness in that this is rather slow. A large part of it is written in
assembly, making it opaque to the compiler for optimisations.
Future-proofing requires reading registers that are effectively
`volatile`, making it even harder for the compiler, as well as adding
lots of implicit barriers, making it hard for the microarchitecture to
optimise as well.
We can make a few assumptions, checked by a few well placed asserts, and
remove a lot of this burden. For a start, at the moment there are 4
group 0 counters with static assignments. Contexting them is a trivial
affair that doesn't need a loop. Similarly, there can only be up to 16
group 1 counters. Contexting them is a bit harder, but we can do it with
a single branch and a fall-through switch. If/when both of these
change, we have a pair of asserts and the feature detection mechanism to
guard us against pretending that we support something we don't.
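The group 1 save then boils down to a single fall-through switch,
roughly like the following trimmed sketch (the accessor and context
names are placeholders):

  switch (nr_group1_counters) {
  case 16U:
          ctx->group1[15] = read_group1_counter(15U);
          /* fallthrough */
  /* ... cases 15 down to 2 follow the same pattern ... */
  case 1U:
          ctx->group1[0] = read_group1_counter(0U);
          break;
  default:
          break;
  }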
We can drop contexting of the offset registers. They are fully
accessible by EL2, so it is EL2's responsibility to preserve them on
powerdown.
Another small thing we can do is pass the core_pos into the hook. The
caller already knows which core we're running on, so we don't need to
call this non-trivial function again.
Finally, knowing this, we don't really need the auxiliary AMUs to be
described by the device tree. Linux doesn't care at the moment, and any
information we need for EL3 can be neatly placed in a simple array.
All of this, combined with lifting the actual saving out of assembly,
reduces the instructions to save the context from 180 to 40, including a
lot fewer branches. The code is also much shorter and easier to read.
Also propagate to aarch32 so that the two don't diverge too much.
Change-Id: Ib62e6e9ba5be7fb9fb8965c8eee148d5598a5361
Signed-off-by: Boyan Karatotev <boyan.karatotev@arm.com>
To allow for generic handling of a wakeup, this hook is no longer
expected to call wfi itself. Update the name everywhere to reflect this
expectation so that future platform implementers don't get misled.
Change-Id: Ic33f0b6da74592ad6778fd802c2f0b85223af614
Signed-off-by: Boyan Karatotev <boyan.karatotev@arm.com>
There is now a convenient place at the end of the function to jump to when
we shouldn't power down. This eliminates the need for skip_wfi.
Change-Id: I8d1a88c77463e8ee6765b185adbe3d23d9fce101
Signed-off-by: Boyan Karatotev <boyan.karatotev@arm.com>
Travis' and Gelas' TRMs tell us to disable SME (set PSTATE.{ZA, SM} to
0) when we're attempting to power down. What they don't tell us is that
if this isn't done, the powerdown request will be rejected. On the
CPU_OFF path that's not a problem - we can force SVCR to 0 and be
certain the core will power off.
On the suspend to powerdown path, however, we cannot do this. The TRM
also tells us that the sequence could be aborted by e.g. GIC
interrupts. If this were to happen when we have overwritten SVCR to 0,
upon a return to the caller they would experience a loss of context. We
know that at least Linux may call into PSCI with SVCR != 0. One option
is to save the entire SME context, which would be quite expensive just
to work around this. Another option is to downgrade the request to a
normal suspend when SME was left on. The latter is better, as this is
expected to happen rarely enough to ignore the wasted power, and we
don't want to burden the generic (correct) path with needless context
management.
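On the suspend path the downgrade amounts to something like this (the
helper names are placeholders for illustration):

  /* If the caller left SME live (PSTATE.{ZA, SM} != 0), don't risk an
   * abandoned powerdown corrupting it - fall back to a normal suspend. */
  if (is_feat_sme_supported() && (read_svcr() != 0U)) {
          downgrade_suspend_to_standby(state_info);
  }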
Signed-off-by: Boyan Karatotev <boyan.karatotev@arm.com>
Change-Id: I698fa8490ebf51461f6aa8bba84f9827c5c46ad4
The simplistic view of a core's powerdown sequence is that power is
atomically cut upon calling `wfi`. However, it turns out that it has
lots to do - it has to talk to the interconnect to exit coherency, clean
caches, check for RAS errors, etc. These take significant amounts of
time and are certainly not atomic. As such there is a significant window
of opportunity for external events to happen. Many of these steps are
not destructive to context, so theoretically, the core can just "give
up" half way (or roll certain actions back) and carry on running. The
point in this sequence after which roll back is not possible is called
the point of no return.
One of these actions is the checking for RAS errors. It is possible for
one to happen during this lengthy sequence, or at least remain
undiscovered until that point. If the core were to continue powerdown
when that happens, there would be no (easy) way to inform anyone about
it. Rejecting the powerdown and letting software handle the error is the
best way to implement this.
Arm cores since at least the A510 have included this exact feature. So
far it hasn't been deemed necessary to account for it in firmware due to
the low likelihood of this happening. However, events like GIC wakeup
requests are much more probable. Older cores will power down and
immediately power back up when this happens. Travis and Gelas include a
feature similar to the RAS case above, called powerdown abandon. The
idea is that this will improve the latency to service the interrupt by
saving on work which the core and software need to do.
So far firmware has relied on the `wfi` being the point of no return:
if it doesn't explicitly detect a pending interrupt quite early on, it
will embark on a sequence that it expects to end with shutdown. To
accommodate the `wfi` no longer being a point of no return, we must undo
all of the system management we did, just like in the warm boot
entrypoint.
To achieve that, the pwr_domain_pwr_down_wfi hook must not be terminal.
Most recent platforms do some platform management and finish on the
standard `wfi`, followed by a panic or an endless loop as this is
expected to not return. To make this generic, any platform that wishes
to support wakeups must instead let common code call
`psci_power_down_wfi()` right after. Besides wakeups, this lets common
code handle powerdown errata better as well.
Then, the CPU_OFF case is simple - PSCI does not allow it to return. So
the best that can be done is to attempt the `wfi` a few times (the
choice of 32 is arbitrary) in the hope that the wakeup is transient. If
it isn't, the only choice is to panic, as the system is likely to be in
a bad state, e.g. interrupts weren't routed away. The same applies for
SYSTEM_OFF, SYSTEM_RESET, and SYSTEM_RESET2. There the panic won't
matter as the system is going offline one way or another. The RAS case
will be considered in a separate patch.
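For CPU_OFF that amounts to roughly:

  /* Retry in case the wakeup was transient; 32 is an arbitrary bound. */
  for (unsigned int i = 0U; i < 32U; i++) {
          wfi();
  }

  /* Still running - interrupts were presumably not routed away. */
  panic();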
Now, the CPU_SUSPEND case is more involved. First, to powerdown it must
wipe its context as it is not written on warm boot. But it cannot be
overwritten in case of a wakeup. To avoid the catch-22, save a copy that
will only be used if powerdown fails. That is about 500 bytes on the
stack so it hopefully doesn't tip anyone over any limits. In future that
can be avoided by having a core manage its own context.
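Schematically (a sketch only; the copy is consumed solely on the abandon
path):

  cpu_context_t ctx_copy;         /* ~500 bytes on the stack */

  memcpy(&ctx_copy, cm_get_context(NON_SECURE), sizeof(ctx_copy));
  /* ...attempt the powerdown; if it is abandoned, put the old one back... */
  memcpy(cm_get_context(NON_SECURE), &ctx_copy, sizeof(ctx_copy));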
Second, when the core wakes up, it must undo anything it did to prepare
for poweroff, which for the cores we care about means writing
CPUPWRCTLR_EL1.CORE_PWRDN_EN. The least intrusive way of doing this, as
far as the cpu library is concerned, is to simply call the power off
hook again and have the hook toggle the bit. If more complex sequences
are needed in the future, their direction can be decided based on the
value of this bit.
Third, do the actual "resume". Most of the logic is already there for
the retention suspend, so that only needs a small touch up to apply to
the powerdown case as well. The missing bit is the powerdown specific
state management. Luckily, the warmboot entrypoint does exactly that
already too, so steal that and we're done.
All of this is hidden behind a FEAT_PABANDON flag since it has a large
memory and runtime cost that we don't want to burden non pabandon cores
with.
Finally, do some function renaming to better reflect their purpose and
make names a little bit more consistent.
Change-Id: I2405b59300c2e24ce02e266f91b7c51474c1145f
Signed-off-by: Boyan Karatotev <boyan.karatotev@arm.com>
The workarounds introduced in the three patches starting at
888eafa00b assumed that any powerdown
request will be (forced to be) terminal. This assumption can no longer
be the case for new CPUs so there is a need to revisit these older
cores. Since we may wake up, we now need to respect the workaround's
recommendation that the workaround needs to be reverted on wakeup. So do
exactly that.
Introduce a new helper to toggle bits in assembly. This allows us to
call the workaround twice, with the first call setting the workaround
and the second undoing it. This is also used for gelas' and travis'
powerdown routines, so that the same function can be called again on
wakeup.
Also fix the condition in the cpu helper macro, as it was subtly wrong.
Change-Id: Iff9e5251dc9d8670d085d88c070f78991955e7c3
Signed-off-by: Boyan Karatotev <boyan.karatotev@arm.com>
On some platforms plat_my_core_pos is a nontrivial function that takes a
bit of time and the compiler really doesn't like to inline. In the PSCI
library, at least, we have no need to keep repeatedly calling it and we
can instead pass it around as an argument. This saves on a lot of
redundant calls, speeding the library up a bit.
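The change is mechanical, along the lines of (hypothetical helper,
showing the pattern only):

  /* Before: every helper rediscovers its own position */
  void psci_do_thing(void)
  {
          unsigned int idx = plat_my_core_pos();

          psci_spin_lock_cpu(idx);
  }

  /* After: the caller passes it down once */
  void psci_do_thing(unsigned int idx)
  {
          psci_spin_lock_cpu(idx);
  }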
Change-Id: I137f69bea80d7cac90d7a20ffe98e1ba8d77246f
Signed-off-by: Boyan Karatotev <boyan.karatotev@arm.com>
psci_pwrdown_cpu has two callers, both of which save timestamps meant to
measure how much time the cache maintenance operations take. Move the
timestamp collection inside to save on a bit of code duplication.
Change-Id: Ia2e7168faf7773d99b696cbdb6c98db7b58e31cf
Signed-off-by: Boyan Karatotev <boyan.karatotev@arm.com>
psci_suspend_to_standby_finisher and psci_cpu_suspend_finish do mostly
the same stuff, besides the system management associated with their
respective wakeup paths. So bring the wake from standby path in line
with the wake from reset path - caller acquires locks and manages
context. This way both behave in vaguely the same way. We can also bring
their names in line so it's more apparent how they are different.
This is in preparation for cores waking from sleep, coming in another
patch. No functional change is expected.
Change-Id: I0e569d12f65d231606080faa0149d22efddc386d
Signed-off-by: Boyan Karatotev <boyan.karatotev@arm.com>
The target_pwrlvl field in the psci cpu data struct only stores the
highest power domain that a CPU_SUSPEND call affected, and is used to
resume those same domains on warm reset. If the cpu is otherwise OFF
(never turned on or CPU_OFF), then this needs to be the highest power
level because we don't know the highest level that will be off.
So skip the invalidation and always keep the field to the maximum value.
During suspend the field will be lowered to the appropriate value and
then put back after wakeup.
Also do that in the suspend-to-standby path, since the field will have
been written before the sleep and might otherwise end up incorrect.
Change-Id: I614272ec387e1d83023c94700780a0f538a9a6b6
Signed-off-by: Boyan Karatotev <boyan.karatotev@arm.com>
The comment was written when cache maintenance had to be considered when
calling this function. But that argument was dropped a while back and
this comment no longer makes any sense.
Change-Id: Ib68293f23cc3edca3010164dfe8866956b8e1a63
Signed-off-by: Boyan Karatotev <boyan.karatotev@arm.com>
In the chapter about FEAT_SPE (D16.4 specifically) it is stated that
"Sampling is always disabled at EL3". That means that disabling sampling
(writing PMBLIMITR_EL1.E to 0) is redundant and can be removed. The only
reason we save/restore SPE context is because of that disable, so those
can be removed too.
There's the issue of draining the profiling buffer though. No new
samples will have been generated since entering EL3. However, old
samples might still be in-flight. Unless synchronised by a psb csync,
those might be affected by our extensive context mutation. Adding a psb
in prepare_el3_entry should cater for that. Note that prior to the
introduction of root context this was not a problem as context remained
unchanged and the hooks took care of the rest.
Then, the only time we care about the buffer actually making it to
memory is when we exit coherency. On HW_ASSISTED_COHERENCY systems we
don't have to do anything, it should be handled for us. Systems without
it need a dsb to wait for them to complete. There is already one in
each cpu's powerdown hook, which should work.
While on the topic of barriers, the esb barrier is no longer used.
Remove it.
Change-Id: I9736fc7d109702c63e7d403dc9e2a4272828afb2
Signed-off-by: Boyan Karatotev <boyan.karatotev@arm.com>
The errata framework has a helper to invoke workarounds, complete with a
cpu rev_var check. We can use that directly instead of the
apply_cpu_pwr_dwn_errata to save on some code, as well as an extra
branch. It's also more readable.
Also, apply_erratum invocations in cpu files don't need to check the
rev_var, as that was already done by the cpu_ops dispatcher for us to
end up in the file.
Finally, X2 erratum 2768515 only applies in the powerdown sequence, i.e.
at runtime. It doesn't achieve anything at reset, so we can label it
accordingly.
Change-Id: I02f9dd7d0619feb54c870938ea186be5e3a6ca7b
Signed-off-by: Boyan Karatotev <boyan.karatotev@arm.com>
The function always checks the first parent of the current core instead
of parsing the tree topology to find the parent at the parent level of
the CPU. This is because the current loop has no effect: it uses the
fixed parameter 'my_idx' and returns the FIRST parent of the CPU.
Also, it looks for the parent nodes in the array of CPU nodes, but they
are actually in a separate array.
This update allows the PSCI topology tree to be parsed to find the
parent, at the parent level, of the CPU identified by my_idx.
Fixes: 606b743007 ("feat(psci): add support for OS-initiated mode")
Change-Id: I96fb5ecc154a76b16adca5b5055217b8626c9e66
Signed-off-by: Charlie Bareham <charlie.bareham@arm.com>
During CPU power down, we stop profiling by calling the spe_disable()
function. From TF-A's point of view, enable/disable refers to the
availability of the feature for lower ELs. In this case we are not
actually disabling the feature but stopping it before power down.
Signed-off-by: Manish Pandey <manish.pandey2@arm.com>
Change-Id: I6e3b39c5c35d330c51e7ac715446a8b36bf9531f
Update parent_idx support in psci_validate_state_coordination() as is
done in psci_do_state_coordination(). The modified loop verifies the
targeted state for the whole branch up to end_pwrlvl in the topology for
the current cpu.
Fixes: 606b743007 ("feat(psci): add support for OS-initiated mode")
Change-Id: I14420f64a18b543eb4e10a1279f51cc17558c13c
Signed-off-by: Patrick Delaunay <patrick.delaunay@foss.st.com>
Resolve an issue where, with optimization enabled for TF-A using
-Og, a compile failure is seen in the PSCI module.
Change-Id: Id9afb5c56a6937e7040b20cd01080c190c8276d5
Signed-off-by: Mark Dykes <mark.dykes@arm.com>
The spe_disable() function disables profiling and flushes all the
buffers, and hence needs to be called on the power-off/suspend path.
It needs to be invoked because the SPE feature writes to memory as part
of regular operation, and not disabling it before exiting coherency
could potentially cause issues.
Currently, this is handled only for the FVP. Other platforms need to
replicate this behaviour, which is covered as part of this patch.
Calling it from generic PSCI library code, before the platform specific
actions to turn off the CPUs, makes it applicable to all platforms which
have ported the PSCI library.
Change-Id: I90b24c59480357e2ebfa3dfc356c719ca935c13d
Signed-off-by: Jayanth Dodderi Chidanand <jayanthdodderi.chidanand@arm.com>
Adding a new API under the PSCI library for managing all the
architectural features required during power-off or suspend cases.
Change-Id: I1659560daa43b9344dd0cc0d9b311129b4e9a9c7
Signed-off-by: Jayanth Dodderi Chidanand <jayanthdodderi.chidanand@arm.com>
There are various SMC calls which pass mpidr as an argument which is
currently tested at random places in SMC call path.
To make the mpidr validation check consistent across SMC calls, do
this check as part of SMC argument validation.
This patch introduces a helper function is_valid_mpidr() to validate
the mpidr, and calls it as part of validating SMC arguments at the start
of SMC handlers (which expect an mpidr as an argument).
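A sketch of the shape of the check (the real implementation may differ
in detail):

  static inline bool is_valid_mpidr(u_register_t mpidr)
  {
          return plat_core_pos_by_mpidr(mpidr) >= 0;
  }

  /* at the top of any SMC handler that takes an mpidr */
  if (!is_valid_mpidr(target_cpu)) {
          return PSCI_E_INVALID_PARAMS;
  }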
Signed-off-by: Manish Pandey <manish.pandey2@arm.com>
Change-Id: I11ea50e22caf17896cf4b2059b87029b2ba136b1
Align entire TF-A to use Arm in copyright header.
Change-Id: Ief9992169efdab61d0da6bd8c5180de7a4bc2244
Signed-off-by: Govindraj Raja <govindraj.raja@arm.com>
This involves replacing:
* the reset_func with the standard cpu_reset_func_{start,end} to apply
errata automatically
* the <cpu>_errata_report with the errata_report_shim to report errata
automatically
...and for each erratum:
* the prologue with the workaround_<type>_start to do the checks and
framework registration automatically
* the epilogue with the workaround_<type>_end
* the checker function with the check_erratum_<type> to make it more
descriptive
It is important to note that the errata workaround sequences remain
unchanged and preserve their git blame.
Note: cortex_a510.S is applicable to and used only by the arm_fpga
platform. However, to test the ported changes, the steps below were
carried out on the FVP and the obtained results have been verified.
Testing was conducted by:
* Building for release with all errata flags enabled and running the
script in change 19136 to compare the objdump output for each erratum.
* Testing via the script was not complete, as it directed us to verify
the check and workaround functions of a few errata manually.
* Manual comparison of disassembly of converted functions with non-
converted functions
aarch64-none-elf-objdump -D <trusted-firmware-a with conversion>/build/../release/bl31/bl31.elf
vs
aarch64-none-elf-objdump -D <trusted-firmware-a clean repo>/build/fvp/release/bl31/bl31.elf
* Manual comparison of the disassembly of both files (bl31.elf)
ensured the ported changes were identical and hence verified.
* Build for release with all errata flags enabled and run default
tftf tests.
CROSS_COMPILE=aarch64-none-elf- \
make PLAT=fvp \
ARCH=aarch64 \
DEBUG=0 \
HW_ASSISTED_COHERENCY=1 \
USE_COHERENT_MEM=0 \
CTX_INCLUDE_AARCH32_REGS=0 \
ERRATA_A510_1922240=1 \
ERRATA_A510_2288014=1 \
ERRATA_A510_2042739=1 \
ERRATA_A510_2041909=1 \
ERRATA_A510_2250311=1 \
ERRATA_A510_2218950=1 \
ERRATA_A510_2172148=1 \
ERRATA_A510_2347730=1 \
ERRATA_A510_2371937=1 \
ERRATA_A510_2666669=1 \
ERRATA_A510_2684597=1 \
ERRATA_DSU_2313941=1 \
BL33=/home/jaychi01/tf_a/tf-a-tests/build/fvp/release/tftf.bin \
fip all -j12
* Build for debug with all errata enabled and step through ArmDS
at reset to ensure that if Errata are applicable then the
workaround functions are entered precisely.
Change-Id: Icf7aa25c0b3b30f5e2ad6db83953f7f4f0b201d9
Signed-off-by: Jayanth Dodderi Chidanand <jayanthdodderi.chidanand@arm.com>
PSCI on and suspend wakeup both end with a cm_prepare_el3_exit_ns() call.
Since they are equivalent to the caller, move the call to just after the
*_finish calls to deduplicate it.
Signed-off-by: Boyan Karatotev <boyan.karatotev@arm.com>
Change-Id: I05c16dc6613aba357d20cc39cc43aab803d675e0
manage_extensions_nonsecure() is problematic because it updates both
context and in-place registers (unlike its secure/realm counterparts).
The in-place register updates make it particularly tricky, as those
never change for the lifetime of TF-A. However, they are only set when
exiting to NS world. As such, all of TF-A's execution before that
operates under a different context. This is inconsistent and could cause
problems.
This patch introduces a real manage_extensions_nonsecure() which only
operates on the context structure. It also introduces
cm_manage_extensions_el3(), which only operates in-place on registers
that are not context switched. It is called in BL31's entrypoints so that all
of TF-A executes with the same environment once all features have been
converted.
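Schematically (the feature hooks named here are placeholders):

  /* Per-world, context-only: may be called at any time. */
  static void manage_extensions_nonsecure(cpu_context_t *ctx)
  {
          feat_x_enable(ctx);     /* writes the saved context only */
  }

  /* EL3-resident registers that are never context switched: called once
   * from the BL31 entrypoints. */
  void cm_manage_extensions_el3(void)
  {
          feat_y_init_el3();      /* writes the live register in place */
  }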
Signed-off-by: Boyan Karatotev <boyan.karatotev@arm.com>
Change-Id: Ic579f86c41026d2054863ef44893e0ba4c591da9
This patch adds a new optional member `pwr_domain_validate_suspend` to
the `plat_psci_ops_t` structure that allows a platform to optionally
perform platform specific validations in OS-initiated mode. This is
conditionally compiled into the build depending on the value of the
`PSCI_OS_INIT_MODE` build option.
In https://review.trustedfirmware.org/c/TF-A/trusted-firmware-a/+/17682,
the return type of the `pwr_domain_suspend` handler was updated from
`void` to `int` to allow a platform to optionally perform platform
specific validations in OS-initiated mode. However, when an error code
other than `PSCI_E_SUCCESS` is returned, the current exit path does not
undo the operations in `psci_suspend_to_pwrdown_start`, and as a result,
the system ends up in an unexpected state.
The fix in this patch prevents the need to undo the operations in
`psci_suspend_to_pwrdown_start`, by allowing the platform to first
perform any necessary platform specific validations before the PSCI
generic code proceeds to the point of no return where the CPU_SUSPEND
request is expected to complete successfully.
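As a rough sketch of how a platform might use the new hook (the body and
helper are illustrative):

  #if PSCI_OS_INIT_MODE
  static int plat_validate_suspend(const psci_power_state_t *state_info)
  {
          /* Refuse here, before the framework commits to the suspend. */
          if (!plat_can_enter_suspend(state_info)) {
                  return PSCI_E_DENIED;
          }

          return PSCI_E_SUCCESS;
  }
  #endif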
Change-Id: I05d92c7ea3f5364da09af630d44d78252185db20
Signed-off-by: Wing Li <wingers@google.com>
The ERRATA_XXX macros, used in cpu_helpers.S, are necessary for the
check_errata_xxx family of functions. The CPU_REV should be used in the
cpu files but for whatever reason the values have been hard-coded so far
(at the cost of readability). It's evident this file is not strictly for
status reporting.
The new purpose of this file is to make it a one-stop-shop for all
things errata.
Signed-off-by: Boyan Karatotev <boyan.karatotev@arm.com>
Change-Id: I1ce22dd36df5aa0bcfc5f2772251f91af8703dfb
Commit 66327414fb ("fix(psci): potential array overflow with cpu on")
changed an assert in the PSCI library's psci_cpu_on_start() function to
a runtime error message, followed by a panic. This does not seem right
for two reasons:
- We must not panic() based on conditions influenced by lower EL
callers. If non-secure world provides illegal arguments to a PSCI
call, we can easily detect this and return -PSCI_E_INVALID_PARAMS, as
the PSCI spec demands. In fact this is done already, which brings us
to the next reason:
- psci_cpu_on_start() is effectively a function private to the PSCI
library: its prototype is in psci_private.h. It's just not static
because it lives in a different code file from the main PSCI code.
We check for illegal MPID values already in psci_cpu_on(), and return
an error value to the caller, as we should. This function is the ONLY
caller of psci_cpu_on_start(), so there is no way we get an illegal
target_cpu argument into this function. An assert() is thus the proper
way to check for this.
Mostly revert the patch mentioned above, just extending the assert so
that it does also check for not exceeding the array boundaries.
To harden the code, add a check against PLATFORM_MAX_CORE_COUNT in
psci_validate_mpidr(), and return with the proper PSCI error code if
this number is exceeded.
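Roughly (sketch of the hardened check):

  static int psci_validate_mpidr(u_register_t mpidr)
  {
          int pos = plat_core_pos_by_mpidr(mpidr);

          if ((pos < 0) || (pos >= PLATFORM_MAX_CORE_COUNT)) {
                  return PSCI_E_INVALID_PARAMS;
          }

          return PSCI_E_SUCCESS;
  }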
This also fixes the sun50i_a64 build with DEBUG=1, which exceeded an
SRAM limit due to the error message.
Change-Id: I48fc58d96b0173da5b934750f4cadf7884ef5e42
Signed-off-by: Andre Przywara <andre.przywara@arm.com>
Move the runtime errata source file into the PSCI library, as PSCI is
the only component directly dependent on it, and it doesn't require
internal access to the CPUs library.
Change-Id: I92826714d49b1b0131f62c158543b4c167ab9aa8
Signed-off-by: Chris Kay <chris.kay@arm.com>
This patch introduces the 'pwr_domain_off_early' hook for
platforms wanting to perform housekeeping steps before the
PSCI framework starts the CPU power off sequence. Platforms
might also want to use this opportunity to ensure that the
CPU off sequence can proceed.
The PSCI framework expects a return code of PSCI_E_DENIED
if the platform wants to halt the CPU off sequence.
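For example, a platform might implement something like this (the helper
is hypothetical):

  static int plat_pwr_domain_off_early(const psci_power_state_t *target_state)
  {
          /* e.g. refuse to power off while critical work is in flight */
          if (plat_cpu_off_not_allowed()) {
                  return PSCI_E_DENIED;
          }

          return PSCI_E_SUCCESS;
  }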
Signed-off-by: Varun Wadekar <vwadekar@nvidia.com>
Change-Id: I6980e84fc4d6cb80537a178d0d3d26fb28a13853
Fix coverity finding in psci_cpu_on, in which target_idx is directly
assigned the return value from plat_core_pos_by_mpidr. If the latter
returns a negative or large positive value, it can trigger an
out-of-bounds access of the psci_cpu_pd_nodes array.
>>>> CID 382009: (OVERRUN)
>>>> Overrunning callee's array of size 8 by passing argument "target_idx" (which evaluates to 4294967295) in call to "psci_spin_lock_cpu".
> 80 psci_spin_lock_cpu(target_idx);
>>>> CID 382009: (OVERRUN)
>>>> Overrunning callee's array of size 8 by passing argument "target_idx" (which evaluates to 4294967295) in call to "psci_spin_unlock_cpu".
> 160 psci_spin_unlock_cpu(target_idx);
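The fix amounts to validating the return value before using it as an
index, roughly:

  int ret = plat_core_pos_by_mpidr(target_cpu);

  if ((ret < 0) || (ret >= PLATFORM_CORE_COUNT)) {
          return PSCI_E_INVALID_PARAMS;
  }

  target_idx = (unsigned int)ret;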
Signed-off-by: Olivier Deprez <olivier.deprez@arm.com>
Change-Id: Ibc46934e9ca7fdcaeebd010e5c6954dcf2dcf8c7
The PSCI function dispatcher switch/case is split up between 32-bit and
64-bit function IDs, based on bit 30 of the encoding. This bit just
encodes the maximum size of the arguments, not necessarily whether they
are used from AArch64 or AArch32. So while some functions exist in both
worlds (CPU_ON, for instance), some functions take no or only 32-bit
arguments (CPU_OFF, PSCI_FEATURES), so they only exist as a 32-bit
function call.
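For reference, the dispatcher split (bit 30 selects the calling
convention) is along these lines:

  if (((smc_fid >> FUNCID_CC_SHIFT) & FUNCID_CC_MASK) == SMC_64) {
          /* SMC64 IDs only: e.g. CPU_ON_AARCH64, CPU_SUSPEND_AARCH64 */
  } else {
          /* SMC32 IDs: CPU_OFF, PSCI_FEATURES, PSCI_SET_SUSPEND_MODE, ... */
  }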
Commit b88a4416b5 ("feat(psci): add support for PSCI_SET_SUSPEND_MODE"
, gerrit ID Iebf65f5f7846aef6b8643ad6082db99b4dcc4bef) and commit
9a70e69e05 ("feat(psci): update PSCI_FEATURES", gerrit ID
I5da8a989b53419ad2ab55b73ddeee6e882c25554) introduced two "case"
sections for 32-bit function IDs in the 64-bit branch, which will never
trigger. The one small extra case caused the sun50i_a64 DEBUG build to
go beyond its RAM limit.
Remove the redundant switch/case blocks, to make sun50i_a64 build
again.
Change-Id: Ic65b7403d128837296a0c3af42c6f23f9f57778e
Signed-off-by: Andre Przywara <andre.przywara@arm.com>
This patch updates the PSCI_FEATURES handler to indicate support for
OS-initiated mode per section 5.15.2 of the PSCI spec (DEN0022D.b) based
on the value of `FF_SUPPORTS_OS_INIT_MODE`, which is conditionally
enabled by the `PSCI_OS_INIT_MODE` build option.
Change-Id: I5da8a989b53419ad2ab55b73ddeee6e882c25554
Signed-off-by: Wing Li <wingers@google.com>
This patch adds a `psci_validate_state_coordination` function that is
called by `psci_cpu_suspend_start` in OS-initiated mode.
This function validates the request per sections 4.2.3.2, 5.4.5, and 6.3
of the PSCI spec (DEN0022D.b):
- The requested power states are consistent with the system's state
- The calling core is the last running core at the requested power level
This function differs from `psci_do_state_coordination` in that:
- The `psci_req_local_pwr_states` map is not modified if the request
were to be denied
- The `state_info` argument is never modified since it contains the
power states requested by the calling OS
This is conditionally compiled into the build depending on the value of
the `PSCI_OS_INIT_MODE` build option.
Change-Id: I667041c842d2856e9d128c98db4d5ae4e4552df3
Signed-off-by: Wing Li <wingers@google.com>
This patch adds a PSCI_SET_SUSPEND_MODE handler that validates the
request per section 5.20.2 of the PSCI spec (DEN0022D.b), and updates
the suspend mode to the requested mode.
This is conditionally compiled into the build depending on the value of
the `PSCI_OS_INIT_MODE` build option.
Change-Id: Iebf65f5f7846aef6b8643ad6082db99b4dcc4bef
Signed-off-by: Wing Li <wingers@google.com>
Some of our specialized sections are not prefixed with the conventional
period. The compiler uses input section names to derive certain other
section names (e.g. `.rela.text`, `.relacpu_ops`), and these can be
difficult to select in linker scripts when there is a lack of a
delimiter.
This change introduces the period prefix to all specialized section
names.
BREAKING-CHANGE: All input and output linker section names have been
prefixed with the period character, e.g. `cpu_ops` -> `.cpu_ops`.
Change-Id: I51c13c5266d5975fbd944ef4961328e72f82fc1c
Signed-off-by: Chris Kay <chris.kay@arm.com>
Cortex-A510 erratum 2684597 is a Cat B erratum that applies to revisions
r0p0, r0p1, r0p2, r0p3, r1p0, r1p1 and r1p2. It is fixed in r1p3. The
workaround is to execute a TSB CSYNC and DSB before executing WFI for
power down.
SDENs can be found here:
https://developer.arm.com/documentation/SDEN1873361/latest
https://developer.arm.com/documentation/SDEN1873351/latest
Change-Id: Ic0b24b600bc013eb59c797401fbdc9bda8058d6d
Signed-off-by: Harrison Mutai <harrison.mutai@arm.com>
A processing element should never return from a wfi; however, due to a
hardware bug, certain CPUs may wake up because of an external event.
This patch tightens the behaviour of the common power down sequence: it
ensures the routine never returns by entering a wfi loop at its end. It
aligns with the behaviour of the platform implementations.
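i.e. the common routine now ends with an unreachable loop, roughly:

  /* Should never get here; if the wfi falls through, simply retry. */
  for (;;) {
          wfi();
  }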
Change-Id: I36d8b0c64eccb71035bf164b4cd658d66ed7beb4
Signed-off-by: Harrison Mutai <harrison.mutai@arm.com>