Commit graph

143 commits

Maheedhar Bollapalli
c7b0a28d32 fix(psci): add missing curly braces
This corrects the MISRA violation C2012-15.6:
The body of an iteration-statement or a selection-statement shall
be a compound-statement.
Enclosed statement body within the curly braces.

Change-Id: I8b656f59b445e914dd3f47e3dde83735481a3640
Signed-off-by: Nithin G <nithing@amd.com>
Signed-off-by: Maheedhar Bollapalli <maheedharsai.bollapalli@amd.com>
2025-03-07 13:22:18 +01:00
Boyan Karatotev
83ec7e452c perf(amu): greatly simplify AMU context management
The current code is incredibly resilient to updates to the spec and
has worked quite well so far. However, recent implementations expose a
weakness in that this is rather slow. A large part of it is written in
assembly, making it opaque to the compiler for optimisations. The
future-proofing requires reading registers that are effectively
`volatile`, making it even harder for the compiler, as well as adding
lots of implicit barriers, which make it hard for the microarchitecture
to optimise as well.

We can make a few assumptions, checked by a few well placed asserts, and
remove a lot of this burden. For a start, at the moment there are 4
group 0 counters with static assignments. Contexting them is a trivial
affair that doesn't need a loop. Similarly, there can only be up to 16
group 1 counters. Contexting them is a bit harder, but we can make do with
a single branch into a falling-through switch. If/when both of these
change, we have a pair of asserts and the feature detection mechanism to
guard us against pretending that we support something we don't.

We can drop contexting of the offset registers. They are fully
accessible by EL2 and as such are its responsibility to preserve on
powerdown.

Another small thing we can do, is pass the core_pos into the hook.
The caller already knows which core we're running on, we don't need to
call this non-trivial function again.

Finally, knowing this, we don't really need the auxiliary AMUs to be
described by the device tree. Linux doesn't care at the moment, and any
information we need for EL3 can be neatly placed in a simple array.

All of this, combined with lifting the actual saving out of assembly,
reduces the instructions to save the context from 180 to 40, including a
lot fewer branches. The code is also much shorter and easier to read.

Also propagate to aarch32 so that the two don't diverge too much.

Change-Id: Ib62e6e9ba5be7fb9fb8965c8eee148d5598a5361
Signed-off-by: Boyan Karatotev <boyan.karatotev@arm.com>
2025-02-25 08:50:46 +00:00
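
A rough illustration of the falling-through switch described above, as a sketch
only: the accessor and function names are hypothetical and only four counters
are shown instead of sixteen; the real code reads a distinct system register
per counter.

    #include <assert.h>
    #include <stdint.h>

    /* Hypothetical accessor: in practice each group 1 counter is its own
     * system register with its own read function, which is why a plain
     * indexed loop is not an option. */
    extern uint64_t read_group1_counter(unsigned int idx);

    /* Save up to four group 1 counters with a single branch into a
     * falling-through switch instead of a loop. */
    static void amu_save_group1(uint64_t *ctx, unsigned int nr_counters)
    {
        assert(nr_counters <= 4U);

        switch (nr_counters) {
        case 4U:
            ctx[3] = read_group1_counter(3U);
            /* fallthrough */
        case 3U:
            ctx[2] = read_group1_counter(2U);
            /* fallthrough */
        case 2U:
            ctx[1] = read_group1_counter(1U);
            /* fallthrough */
        case 1U:
            ctx[0] = read_group1_counter(0U);
            break;
        default:
            break;
        }
    }
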
Boyan Karatotev
db5fe4f493 chore(docs): drop the "wfi" from pwr_domain_pwr_down_wfi
To allow for generic handling of a wakeup, this hook is no longer
expected to call wfi itself. Update the name everywhere to reflect this
expectation so that future platform implementers don't get misled.

Change-Id: Ic33f0b6da74592ad6778fd802c2f0b85223af614
Signed-off-by: Boyan Karatotev <boyan.karatotev@arm.com>
2025-02-03 14:29:47 +00:00
Boyan Karatotev
dc0bf48669 chore(psci): drop skip_wfi variable
There is now a convenient place at the end of the function to jump to when
we shouldn't power down. This eliminates the need for skip_wfi.

Change-Id: I8d1a88c77463e8ee6765b185adbe3d23d9fce101
Signed-off-by: Boyan Karatotev <boyan.karatotev@arm.com>
2025-02-03 14:29:47 +00:00
Boyan Karatotev
45c7328c0b fix(cpus): avoid SME related loss of context on powerdown
Travis' and Gelas' TRMs tell us to disable SME (set PSTATE.{ZA, SM} to
0) when we're attempting to power down. What they don't tell us is that
if this isn't done, the powerdown request will be rejected. On the
CPU_OFF path that's not a problem - we can force SVCR to 0 and be
certain the core will power off.

On the suspend to powerdown path, however, we cannot do this. The TRM
also tells us that the sequence could be aborted by, e.g., GIC
interrupts. If this were to happen when we have overwritten SVCR to 0,
upon a return to the caller they would experience a loss of context. We
know that at least Linux may call into PSCI with SVCR != 0. One option
is to save the entire SME context, which would be quite expensive just for
a workaround. Another option is to downgrade the request to a normal
suspend when SME was left on. This option is better, as this is expected
to happen rarely enough that the wasted power can be ignored, and we don't
want to burden the generic (correct) path with needless context management.

Signed-off-by: Boyan Karatotev <boyan.karatotev@arm.com>
Change-Id: I698fa8490ebf51461f6aa8bba84f9827c5c46ad4
2025-02-03 14:29:47 +00:00
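
A sketch of the downgrade idea from the message above; sme_state_is_live() and
downgrade_to_standby() are hypothetical stand-ins for the real SVCR check and
the PSCI request handling.

    #include <stdbool.h>

    /* Hypothetical helpers: true when the caller left SVCR.{ZA, SM} != 0,
     * and a routine that turns the powerdown request into a normal
     * (retention) suspend. */
    extern bool sme_state_is_live(void);
    extern void downgrade_to_standby(void);

    static void suspend_prepare_sme(void)
    {
        /*
         * A powerdown could be abandoned after SVCR has been forced to 0,
         * losing the caller's SME context. Rather than saving the whole
         * SME context for this rare case, downgrade to a normal suspend.
         */
        if (sme_state_is_live()) {
            downgrade_to_standby();
        }
    }
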
Boyan Karatotev
2b5e00d4ea feat(psci): allow cores to wake up from powerdown
The simplistic view of a core's powerdown sequence is that power is
atomically cut upon calling `wfi`. However, it turns out that it has
lots to do - it has to talk to the interconnect to exit coherency, clean
caches, check for RAS errors, etc. These take significant amounts of
time and are certainly not atomic. As such there is a significant window
of opportunity for external events to happen. Many of these steps are
not destructive to context, so theoretically, the core can just "give
up" half way (or roll certain actions back) and carry on running. The
point in this sequence after which roll back is not possible is called
the point of no return.

One of these actions is the checking for RAS errors. It is possible for
one to happen during this lengthy sequence, or at least remain
undiscovered until that point. If the core were to continue powerdown
when that happens, there would be no (easy) way to inform anyone about
it. Rejecting the powerdown and letting software handle the error is the
best way to implement this.

Arm cores since at least the a510 have included this exact feature. So
far it hasn't been deemed necessary to account for it in firmware due to
the low likelihood of this happening. However, events like GIC wakeup
requests are much more probable. Older cores will powerdown and
immediately power back up when this happens. Travis and Gelas include a
feature similar to the RAS case above, called powerdown abandon. The
idea is that this will improve the latency to service the interrupt by
saving on work which the core and software need to do.

So far firmware has relied on the `wfi` being the point of no return and
if it doesn't explicitly detect a pending interrupt quite early on, it
will embark on a sequence that it expects to end with shutdown. To
accommodate for it not being a point of no return, we must undo all of
the system management we did, just like in the warm boot entrypoint.

To achieve that, the pwr_domain_pwr_down_wfi hook must not be terminal.
Most recent platforms do some platform management and finish on the
standard `wfi`, followed by a panic or an endless loop as this is
expected to not return. To make this generic, any platform that wishes
to support wakeups must instead let common code call
`psci_power_down_wfi()` right after. Besides wakeups, this lets common
code handle powerdown errata better as well.

Then, the CPU_OFF case is simple - PSCI does not allow it to return. So
the best that can be done is to attempt the `wfi` a few times (the
choice of 32 is arbitrary) in the hope that the wakeup is transient. If
it isn't, the only choice is to panic, as the system is likely to be in
a bad state, e.g. interrupts weren't routed away. The same applies for
SYSTEM_OFF, SYSTEM_RESET, and SYSTEM_RESET2. There the panic won't
matter as the system is going offline one way or another. The RAS case
will be considered in a separate patch.

Now, the CPU_SUSPEND case is more involved. First, to powerdown it must
wipe its context as it is not written on warm boot. But it cannot be
overwritten in case of a wakeup. To avoid the catch 22, save a copy that
will only be used if powerdown fails. That is about 500 bytes on the
stack so it hopefully doesn't tip anyone over any limits. In future that
can be avoided by having a core manage its own context.

Second, when the core wakes up, it must undo anything it did to prepare
for poweroff, which for the cores we care about is writing
CPUPWRCTLR_EL1.CORE_PWRDN_EN. The least intrusive way of doing this, from
the cpu library's point of view, is to simply call the power off hook again
and have the hook toggle the bit. If more complex sequences are needed in
the future, their direction can be decided based on the value of this bit.

Third, do the actual "resume". Most of the logic is already there for
the retention suspend, so that only needs a small touch up to apply to
the powerdown case as well. The missing bit is the powerdown specific
state management. Luckily, the warmboot entrypoint does exactly that
already too, so steal that and we're done.

All of this is hidden behind a FEAT_PABANDON flag since it has a large
memory and runtime cost that we don't want to burden non pabandon cores
with.

Finally, do some function renaming to better reflect their purpose and
make names a little bit more consistent.

Change-Id: I2405b59300c2e24ce02e266f91b7c51474c1145f
Signed-off-by: Boyan Karatotev <boyan.karatotev@arm.com>
2025-02-03 14:29:47 +00:00
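
A sketch of the bounded retry described for the CPU_OFF path; the wrapper and
the wfi()/panic() declarations here are illustrative stand-ins for the PSCI
library's own helpers.

    /* Hypothetical stand-ins for the architectural wait-for-interrupt and
     * the firmware panic handler. */
    extern void wfi(void);
    extern void panic(void);

    /* CPU_OFF must not return: retry the terminal wfi a bounded number of
     * times (32 is arbitrary) in case the wakeup is transient, then give
     * up, since e.g. interrupt routing can no longer be trusted. */
    static void cpu_off_power_down(void)
    {
        for (unsigned int i = 0U; i < 32U; i++) {
            wfi();
        }
        panic();
    }
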
Boyan Karatotev
cc94e71b3a refactor(cpus): undo errata mitigations
The workarounds introduced in the three patches starting at
888eafa00b assumed that any powerdown
request will be (forced to be) terminal. This assumption can no longer
be the case for new CPUs so there is a need to revisit these older
cores. Since we may wake up, we now need to respect the workaround's
recommendation that the workaround needs to be reverted on wakeup. So do
exactly that.

Introduce a new helper to toggle bits in assembly. This allows us to
call the workaround twice, with the first call setting the workaround
and the second undoing it. This is also used for gelas' and travis'
powerdown routines, so that the same function can be called again.

Also fix the condition in the cpu helper macro, as it was subtly wrong.

Change-Id: Iff9e5251dc9d8670d085d88c070f78991955e7c3
Signed-off-by: Boyan Karatotev <boyan.karatotev@arm.com>
2025-02-03 07:37:58 +00:00
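
The toggle idea in C terms; the actual helper is an assembly macro, so this is
only the concept.

    #include <stdint.h>

    /* XOR-based toggle: calling the same workaround routine twice with the
     * same mask first sets the bits (arming the powerdown workaround) and
     * then clears them again on wakeup. */
    static inline uint64_t toggle_bits(uint64_t reg, uint64_t mask)
    {
        return reg ^ mask;
    }
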
Boyan Karatotev
3b8021058a perf(psci): pass my_core_pos around instead of calling it repeatedly
On some platforms plat_my_core_pos is a nontrivial function that takes a
bit of time and the compiler really doesn't like to inline. In the PSCI
library, at least, we have no need to keep repeatedly calling it and we
can instead pass it around as an argument. This saves on a lot of
redundant calls, speeding the library up a bit.

Change-Id: I137f69bea80d7cac90d7a20ffe98e1ba8d77246f
Signed-off-by: Boyan Karatotev <boyan.karatotev@arm.com>
2025-01-14 10:02:00 +00:00
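
In rough terms the change looks like this; the helper names are illustrative
and only plat_my_core_pos() is the real platform API.

    /* Platform API that can be slow and that the compiler rarely inlines. */
    extern unsigned int plat_my_core_pos(void);

    /* Before: every internal helper recomputes the core position. */
    static void psci_helper_old(void)
    {
        unsigned int idx = plat_my_core_pos();
        (void)idx;
    }

    /* After: the entry point computes it once and passes it down. */
    static void psci_helper_new(unsigned int my_idx)
    {
        (void)my_idx;
    }
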
Boyan Karatotev
9b1e800ef0 refactor(psci): move timestamp collection to psci_pwrdown_cpu
psci_pwrdown_cpu has two callers, both of which save timestamps meant to
measure how much time the cache maintenance operations take. Move the
timestamp collection inside to save on a bit of code duplication.

Change-Id: Ia2e7168faf7773d99b696cbdb6c98db7b58e31cf
Signed-off-by: Boyan Karatotev <boyan.karatotev@arm.com>
2025-01-14 10:01:37 +00:00
Boyan Karatotev
44ee7714a2 refactor(psci): factor common code out of the standby finisher
psci_suspend_to_standby_finisher and psci_cpu_suspend_finish do mostly
the same stuff, besides the system management associated with their
respective wakeup paths. So bring the wake from standby path in line
with the wake from reset path - caller acquires locks and manages
context. This way both behave in vaguely the same way. We can also bring
their names in line so it's more apparent how they are different.

This is in preparation for cores waking from sleep, coming in another
patch. No functional change is expected.

Change-Id: I0e569d12f65d231606080faa0149d22efddc386d
Signed-off-by: Boyan Karatotev <boyan.karatotev@arm.com>
2025-01-14 10:01:34 +00:00
Boyan Karatotev
0c836554b2 refactor(psci): don't use PSCI_INVALID_PWR_LVL to signal OFF state
The target_pwrlvl field in the psci cpu data struct only stores the
highest power domain level that a CPU_SUSPEND call affected, and is used to
resume those same domains on warm reset. If the cpu is otherwise OFF
(never turned on or CPU_OFF), then this needs to be the highest power
level because we don't know the highest level that will be off.

So skip the invalidation and always keep the field at the maximum value.
During suspend the field will be lowered to the appropriate value and
then put back after wakeup.

Also do that in the suspend-to-standby path as well, since the field will
have been written before the sleep and might otherwise end up incorrect.

Change-Id: I614272ec387e1d83023c94700780a0f538a9a6b6
Signed-off-by: Boyan Karatotev <boyan.karatotev@arm.com>
2025-01-14 09:23:49 +00:00
Boyan Karatotev
39fba640de docs(psci): drop outdated cache maintenance comment
The comment was written when cache maintenance had to be considered when
calling this function. But that argument was dropped a while back and
this comment no longer makes any sense.

Change-Id: Ib68293f23cc3edca3010164dfe8866956b8e1a63
Signed-off-by: Boyan Karatotev <boyan.karatotev@arm.com>
2025-01-14 09:23:49 +00:00
Boyan Karatotev
f808873372 fix(spe): add a psb before updating context and remove context saving
In the chapter about FEAT_SPE (D16.4 specifically) it is stated that
"Sampling is always disabled at EL3". That means that disabling sampling
(writing PMBLIMITR_EL1.E to 0) is redundant and can be removed. The only
reason we save/restore SPE context is because of that disable, so those
can be removed too.

There's the issue of draining the profiling buffer though. No new
samples will have been generated since entering EL3. However, old
samples might still be in-flight. Unless synchronised by a psb csync,
those might be affected by our extensive context mutation. Adding a psb
in prepare_el3_entry should cater for that. Note that prior to the
introduction of root context this was not a problem as context remained
unchanged and the hooks took care of the rest.

Then, the only time we care about the buffer actually making it to
memory is when we exit coherency. On HW_ASSISTED_COHERENCY systems we
don't have to do anything, it should be handled for us. Systems without
it need a dsb to wait for them to complete. There should be one already
in each cpu's powerdown hook which should work.

While on the topic of barriers, the esb barrier is no longer used.
Remove it.

Change-Id: I9736fc7d109702c63e7d403dc9e2a4272828afb2
Signed-off-by: Boyan Karatotev <boyan.karatotev@arm.com>
2024-12-16 15:14:30 +00:00
Boyan Karatotev
db9ee83432 chore(cpus): optimise runtime errata applications
The errata framework has a helper to invoke workarounds, complete with a
cpu rev_var check. We can use that directly instead of the
apply_cpu_pwr_dwn_errata to save on some code, as well as an extra
branch. It's also more readable.

Also, apply_erratum invocations in cpu files don't need to check the
rev_var, as that was already done by the cpu_ops dispatcher for us to end
up in the file.

Finally, X2 erratum 2768515 only applies in the powerdown sequence, i.e.
at runtime. It doesn't achieve anything at reset, so we can label it
accordingly.

Change-Id: I02f9dd7d0619feb54c870938ea186be5e3a6ca7b
Signed-off-by: Boyan Karatotev <boyan.karatotev@arm.com>
2024-10-16 13:59:57 +01:00
Charlie Bareham
01959a1656 fix(psci): fix parent parsing in psci_is_last_cpu_to_idle_at_pwrlvl
The function always checks the first parent of the current core instead
of parsing the tree topology to find the parent at the requested parent
power level of the CPU. This is because the current loop has no effect:
it uses the fixed parameter 'my_idx' and so returns the FIRST parent of
the CPU. Also, it looks for the parent nodes in the array of CPU nodes,
but they are actually in a separate array.

This update parses the PSCI topology tree to find the parent at the
requested parent power level of the CPU identified by my_idx.

Fixes: 606b743007 ("feat(psci): add support for OS-initiated mode")
Change-Id: I96fb5ecc154a76b16adca5b5055217b8626c9e66
Signed-off-by: Charlie Bareham <charlie.bareham@arm.com>
2024-08-06 09:20:29 +01:00
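
A sketch of the intended traversal; the node types below are simplified and the
real definitions live in the PSCI library's private header.

    /* Simplified stand-ins for the PSCI topology arrays and level macro. */
    struct pd_node { unsigned int parent_node; };
    extern struct pd_node psci_cpu_pd_nodes[];
    extern struct pd_node psci_non_cpu_pd_nodes[];
    #define PSCI_CPU_PWR_LVL 0U

    /* Find the ancestor of the CPU at index my_idx at power level
     * parent_lvl: switch to the non-CPU node array after the first step
     * and re-read the parent index at every level, which is what the
     * broken loop failed to do. */
    static unsigned int parent_at_level(unsigned int my_idx, unsigned int parent_lvl)
    {
        unsigned int parent_idx = psci_cpu_pd_nodes[my_idx].parent_node;

        for (unsigned int lvl = PSCI_CPU_PWR_LVL + 1U; lvl < parent_lvl; lvl++) {
            parent_idx = psci_non_cpu_pd_nodes[parent_idx].parent_node;
        }

        return parent_idx;
    }
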
Manish Pandey
4de07b4be7 chore(spe): rename spe_disable() to spe_stop()
During CPU power down, we stop profiling by calling the spe_disable()
function. From TF-A's point of view, enable/disable refers to the
availability of the feature for lower ELs. In this case we are not actually
disabling the feature but stopping it before power down.

Signed-off-by: Manish Pandey <manish.pandey2@arm.com>
Change-Id: I6e3b39c5c35d330c51e7ac715446a8b36bf9531f
2024-07-29 20:34:04 +01:00
Patrick Delaunay
412d92fdfd fix(psci): fix parent_idx in psci_validate_state_coordination
Update parent_idx handling in psci_validate_state_coordination() to match
psci_do_state_coordination(). The modified loop verifies the targeted
state for the whole branch up to end_pwrlvl in the topology for the
current cpu.

Fixes: 606b743007 ("feat(psci): add support for OS-initiated mode")
Change-Id: I14420f64a18b543eb4e10a1279f51cc17558c13c
Signed-off-by: Patrick Delaunay <patrick.delaunay@foss.st.com>
2024-05-08 10:09:07 +01:00
Mark Dykes
152ad112d7 fix(cc): code coverage optimization fix
Resolve an issue where, with TF-A optimisation enabled using -Og, a
compile failure is seen in the PSCI module.

Change-Id: Id9afb5c56a6937e7040b20cd01080c190c8276d5
Signed-off-by: Mark Dykes <mark.dykes@arm.com>
2024-04-08 13:38:01 -05:00
Jayanth Dodderi Chidanand
777f1f6897 fix(spe): invoke spe_disable during power domain off/suspend
The spe_disable function disables profiling and flushes all the buffers,
and hence needs to be called on the power-off/suspend path. It needs to
be invoked because the SPE feature writes to memory as part of regular
operation, and not disabling it before exiting coherency could
potentially cause issues.

Currently, this is handled only for the FVP. Other platforms need to
replicate this behaviour, and that is covered as part of this patch.

Calling it from generic psci library code, before the platform specific
actions to turn off the CPUs, will make it applicable for all the
platforms which have ported the PSCI library.

Change-Id: I90b24c59480357e2ebfa3dfc356c719ca935c13d
Signed-off-by: Jayanth Dodderi Chidanand <jayanthdodderi.chidanand@arm.com>
2024-02-02 20:06:28 +00:00
Jayanth Dodderi Chidanand
160e8434ba feat(psci): add psci_do_manage_extensions API
Add a new API under the PSCI library for managing all the architectural
features required during power off or suspend.

Change-Id: I1659560daa43b9344dd0cc0d9b311129b4e9a9c7
Signed-off-by: Jayanth Dodderi Chidanand <jayanthdodderi.chidanand@arm.com>
2024-02-02 20:06:28 +00:00
Manish Pandey
e60c18471f fix(smccc): ensure that mpidr passed through SMC is valid
There are various SMC calls which pass mpidr as an argument, and it is
currently validated at scattered places in the SMC call path.
To make the mpidr validation check consistent across SMC calls, do
this check as part of SMC argument validation.

This patch introduces a helper function, is_valid_mpidr(), to validate
mpidr and calls it as part of validating SMC arguments at the start of
SMC handlers (which expect mpidr as an argument).

Signed-off-by: Manish Pandey <manish.pandey2@arm.com>
Change-Id: I11ea50e22caf17896cf4b2059b87029b2ba136b1
2023-11-06 20:43:38 +00:00
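
A minimal sketch of such a helper, assuming it simply leans on the existing
platform lookup; the real implementation and types (u_register_t) may differ.

    #include <stdbool.h>

    /* Platform API: returns a negative value when the MPIDR does not map
     * to any core on this platform. */
    extern int plat_core_pos_by_mpidr(unsigned long mpidr);

    /* An mpidr received through an SMC is valid iff the platform can
     * resolve it to a core position. */
    static bool is_valid_mpidr(unsigned long mpidr)
    {
        return plat_core_pos_by_mpidr(mpidr) >= 0;
    }
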
Govindraj Raja
4c700c1563 chore: update to use Arm word across TF-A
Align entire TF-A to use Arm in copyright header.

Change-Id: Ief9992169efdab61d0da6bd8c5180de7a4bc2244
Signed-off-by: Govindraj Raja <govindraj.raja@arm.com>
2023-08-08 15:12:30 +01:00
Jayanth Dodderi Chidanand
ed6d4a3b48 refactor(cpus): convert the Cortex-A510 to use the errata framework
This involves replacing:
 * the reset_func with the standard cpu_reset_func_{start,end} to apply
   errata automatically
 * the <cpu>_errata_report with the errata_report_shim to report errata
   automatically
...and for each erratum:
 * the prologue with the workaround_<type>_start to do the checks and
   framework registration automatically
 * the epilogue with the workaround_<type>_end
 * the checker function with the check_erratum_<type> to make it more
   descriptive

It is important to note that the errata workaround sequences remain
unchanged and preserve their git blame.

Note: cortex_a510.S is applicable to and used only by the arm_fpga platform.

However, to test the ported changes, the steps below were carried out on
the FVP and the obtained results have been verified.

Testing was conducted by:
 * Building for release with all errata flags enabled and running script
   in change 19136 to compare output of objdump for each errata.

 * Testing via the script was not complete, as it directed us to verify
   the check and workaround functions of a few errata manually.

 * Manual comparison of disassembly of converted functions with non-
   converted functions

   aarch64-none-elf-objdump -D <trusted-firmware-a with conversion>/build/../release/bl31/bl31.elf
     vs
   aarch64-none-elf-objdump -D <trusted-firmware-a clean repo>/build/fvp/release/bl31/bl31.elf

 * Manual comparison of the disassembly of both files (bl31.elf)
   ensured the ported changes were identical and hence verified.

 * Build for release with all errata flags enabled and run default
   tftf tests.

   CROSS_COMPILE=aarch64-none-elf- \
   make PLAT=fvp \
   ARCH=aarch64 \
   DEBUG=0 \
   HW_ASSISTED_COHERENCY=1 \
   USE_COHERENT_MEM=0 \
   CTX_INCLUDE_AARCH32_REGS=0 \
   ERRATA_A510_1922240=1 \
   ERRATA_A510_2288014=1 \
   ERRATA_A510_2042739=1 \
   ERRATA_A510_2041909=1 \
   ERRATA_A510_2250311=1 \
   ERRATA_A510_2218950=1 \
   ERRATA_A510_2172148=1 \
   ERRATA_A510_2347730=1 \
   ERRATA_A510_2371937=1 \
   ERRATA_A510_2666669=1 \
   ERRATA_A510_2684597=1 \
   ERRATA_DSU_2313941=1 \
   BL33=/home/jaychi01/tf_a/tf-a-tests/build/fvp/release/tftf.bin \
   fip all -j12

 * Build for debug with all errata enabled and step through ArmDS
   at reset to ensure that if Errata are applicable then the
   workaround functions are entered precisely.

Change-Id: Icf7aa25c0b3b30f5e2ad6db83953f7f4f0b201d9
Signed-off-by: Jayanth Dodderi Chidanand <jayanthdodderi.chidanand@arm.com>
2023-07-27 09:35:12 +01:00
Boyan Karatotev
e07e7392a1 refactor(psci): extract cm_prepare_el3_exit_ns() to a common location
PSCI on and suspend wakeup both end with a cm_prepare_el3_exit_ns() call.
Since they are equivalent to the caller, move the call to just after the
*_finish calls to deduplicate it.

Signed-off-by: Boyan Karatotev <boyan.karatotev@arm.com>
Change-Id: I05c16dc6613aba357d20cc39cc43aab803d675e0
2023-07-24 11:04:44 +01:00
Boyan Karatotev
24a70738b2 refactor(cm): introduce a real manage_extensions_nonsecure()
manage_extensions_nonsecure() is problematic because it updates both
context and in-place registers (unlike its secure/realm counterparts).
The in-place register updates make it particularly tricky, as those
never change for the lifetime of TF-A. However, they are only set when
exiting to NS world. As such, all of TF-A's execution before that
operates under a different context. This is inconsistent and could cause
problems.

This patch introduces a real manage_extensions_nonsecure() which only
operates on the context structure. It also introduces a
cm_manage_extensions_el3() which only operates on in-place registers that
are not context switched. It is called in BL31's entrypoints so that all
of TF-A executes with the same environment once all features have been
converted.

Signed-off-by: Boyan Karatotev <boyan.karatotev@arm.com>
Change-Id: Ic579f86c41026d2054863ef44893e0ba4c591da9
2023-06-29 09:59:06 +01:00
Manish Pandey
f4d011b0f0 Merge changes from topic "psci-osi" into integration
* changes:
  fix(psci): add optional pwr_domain_validate_suspend to plat_psci_ops_t
  fix(sc7280): update pwr_domain_suspend
  fix(fvp): update pwr_domain_suspend
2023-06-12 10:22:50 +02:00
Wing Li
d34886140c fix(psci): add optional pwr_domain_validate_suspend to plat_psci_ops_t
This patch adds a new optional member `pwr_domain_validate_suspend` to
the `plat_psci_ops_t` structure that allows a platform to optionally
perform platform specific validations in OS-initiated mode. This is
conditionally compiled into the build depending on the value of the
`PSCI_OS_INIT_MODE` build option.

In https://review.trustedfirmware.org/c/TF-A/trusted-firmware-a/+/17682,
the return type of the `pwr_domain_suspend` handler was updated from
`void` to `int` to allow a platform to optionally perform platform
specific validations in OS-initiated mode. However, when an error code
other than `PSCI_E_SUCCESS` is returned, the current exit path does not
undo the operations in `psci_suspend_to_pwrdown_start`, and as a result,
the system ends up in an unexpected state.

The fix in this patch prevents the need to undo the operations in
`psci_suspend_to_pwrdown_start`, by allowing the platform to first
perform any necessary platform specific validations before the PSCI
generic code proceeds to the point of no return where the CPU_SUSPEND
request is expected to complete successfully.

Change-Id: I05d92c7ea3f5364da09af630d44d78252185db20
Signed-off-by: Wing Li <wingers@google.com>
2023-05-31 23:54:19 -07:00
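
A sketch of how a platform might populate the new optional member; the types
and error code are simplified stand-ins, the platform handler is hypothetical,
and PSCI_OS_INIT_MODE is normally provided by the build system.

    #define PSCI_OS_INIT_MODE 1
    #define PSCI_E_SUCCESS    0

    /* Simplified stand-ins for the real PSCI types. */
    typedef struct psci_power_state psci_power_state_t;
    typedef struct plat_psci_ops {
        int (*pwr_domain_validate_suspend)(const psci_power_state_t *target_state);
        /* ... the other PSCI handlers ... */
    } plat_psci_ops_t;

    #if PSCI_OS_INIT_MODE
    /* Hypothetical platform handler: perform the OS-initiated-mode checks
     * here, before the PSCI generic code passes the point of no return. */
    static int my_validate_suspend(const psci_power_state_t *target_state)
    {
        (void)target_state;
        return PSCI_E_SUCCESS;
    }
    #endif

    static const plat_psci_ops_t my_psci_ops = {
    #if PSCI_OS_INIT_MODE
        .pwr_domain_validate_suspend = my_validate_suspend,
    #endif
    };
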
Boyan Karatotev
6bb96fa6d6 refactor(cpus): rename errata_report.h to errata.h
The ERRATA_XXX macros, used in cpu_helpers.S, are necessary for the
check_errata_xxx family of functions. The CPU_REV should be used in the
cpu files but for whatever reason the values have been hard-coded so far
(at the cost of readability). It's evident this file is not strictly for
status reporting.

The new purpose of this file is to make it a one-stop-shop for all
things errata.

Signed-off-by: Boyan Karatotev <boyan.karatotev@arm.com>
Change-Id: I1ce22dd36df5aa0bcfc5f2772251f91af8703dfb
2023-05-30 09:31:15 +01:00
Manish Pandey
8700c6f784 Merge "fix(psci): do not panic on illegal MPIDR" into integration 2023-05-10 18:56:46 +02:00
Andre Przywara
8a6d0d262a fix(psci): do not panic on illegal MPIDR
Commit 66327414fb ("fix(psci): potential array overflow with cpu on")
changed an assert in the PSCI library's psci_cpu_on_start() function to
a runtime error message, followed by a panic. This does not seem right
for two reasons:
- We must not panic() triggered by conditions influenced by lower EL
  callers. If non-secure world provides illegal arguments to a PSCI
  call, we can easily detect this and return -PSCI_E_INVALID_PARAMS, as
  the PSCI spec demands. In fact this is done already, which brings us
  to the next reason:
- psci_cpu_on_start() is effectively a function private to the PSCI
  library: its prototype is in psci_private.h. It's just not static
  because it lives in a different code file from the main PSCI code.
  We check for illegal MPID values already in psci_cpu_on(), and return
  an error value to the caller, as we should. This function is the ONLY
  caller of psci_cpu_on_start(), so there is no way we get an illegal
  target_cpu argument into this function. An assert() is thus the proper
  way to check for this.

Mostly revert the patch mentioned above, just extending the assert so
that it does also check for not exceeding the array boundaries.
To harden the code, add a check against PLATFORM_MAX_CORE_COUNT in
psci_validate_mpidr(), and return with the proper PSCI error code if
this number is exceeded.

This also fixes the sun50i_a64 build with DEBUG=1, which exceeded an
SRAM limit due to the error message.

Change-Id: I48fc58d96b0173da5b934750f4cadf7884ef5e42
Signed-off-by: Andre Przywara <andre.przywara@arm.com>
2023-05-03 17:00:31 +01:00
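
The split the message argues for, as a sketch: runtime validation with a proper
PSCI error for caller-provided values, and an assert for the library-internal
path. The function bodies and constants here are illustrative.

    #include <assert.h>

    #define PLATFORM_CORE_COUNT   8U       /* normally from platform_def.h */
    #define PSCI_E_SUCCESS        0
    #define PSCI_E_INVALID_PARAMS (-2)

    extern int plat_core_pos_by_mpidr(unsigned long mpidr);

    /* Caller-facing path: an illegal MPIDR from a lower EL is a runtime
     * condition, answered with PSCI_E_INVALID_PARAMS rather than a panic. */
    static int psci_validate_mpidr_sketch(unsigned long mpidr)
    {
        int pos = plat_core_pos_by_mpidr(mpidr);

        if ((pos < 0) || ((unsigned int)pos >= PLATFORM_CORE_COUNT)) {
            return PSCI_E_INVALID_PARAMS;
        }
        return PSCI_E_SUCCESS;
    }

    /* Library-internal path: by the time psci_cpu_on_start() runs the index
     * has already been validated, so an out-of-range value is a programming
     * error and an assert extended to the array bound is enough. */
    static void psci_cpu_on_start_sketch(unsigned int target_idx)
    {
        assert(target_idx < PLATFORM_CORE_COUNT);
        /* ... continue with the power-on sequence ... */
    }
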
Chris Kay
11ccf5d99a build(psci): move runtime_errata.S to PSCI
Move the runtime errata source file into the PSCI library, as PSCI is
the only component directly dependent on it, and it doesn't require
internal access to the CPUs library.

Change-Id: I92826714d49b1b0131f62c158543b4c167ab9aa8
Signed-off-by: Chris Kay <chris.kay@arm.com>
2023-05-03 15:36:08 +02:00
Varun Wadekar
6cf4ae979a feat(psci): introduce 'pwr_domain_off_early' hook
This patch introduces the 'pwr_domain_off_early' hook for platforms
wanting to perform housekeeping steps before the PSCI framework starts
the CPU power off sequence. Platforms might also want to use this
opportunity to ensure that the CPU off sequence can proceed.

The PSCI framework expects a return code of PSCI_E_DENIED,
if the platform wants to halt the CPU off sequence.

Signed-off-by: Varun Wadekar <vwadekar@nvidia.com>
Change-Id: I6980e84fc4d6cb80537a178d0d3d26fb28a13853
2023-04-26 09:53:10 +01:00
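
A sketch of a platform handler for the new hook; the check is hypothetical and
the real hook takes a const psci_power_state_t * rather than a void pointer.

    #define PSCI_E_SUCCESS 0
    #define PSCI_E_DENIED  (-3)

    /* Hypothetical platform check standing in for whatever housekeeping
     * must succeed before this core may be turned off. */
    extern int my_core_can_be_turned_off(void);

    /* Return PSCI_E_DENIED to halt the CPU off sequence, PSCI_E_SUCCESS to
     * let the framework continue. */
    static int my_pwr_domain_off_early(const void *target_state)
    {
        (void)target_state;

        if (my_core_can_be_turned_off() == 0) {
            return PSCI_E_DENIED;
        }
        return PSCI_E_SUCCESS;
    }
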
Olivier Deprez
66327414fb fix(psci): potential array overflow with cpu on
Fix coverity finding in psci_cpu_on, in which target_idx is directly
assigned the return value from plat_core_pos_by_mpidr. If the latter
returns a negative or large positive value, it can trigger an out of
bounds overflow for the psci_cpu_pd_nodes array.

>>>>    CID 382009:    (OVERRUN)
>>>>    Overrunning callee's array of size 8 by passing argument "target_idx" (which evaluates to 4294967295) in call to "psci_spin_lock_cpu".
> 80         psci_spin_lock_cpu(target_idx);

>>>>    CID 382009:    (OVERRUN)
>>>>    Overrunning callee's array of size 8 by passing argument "target_idx" (which evaluates to 4294967295) in call to "psci_spin_unlock_cpu".
> 160         psci_spin_unlock_cpu(target_idx);

Signed-off-by: Olivier Deprez <olivier.deprez@arm.com>
Change-Id: Ibc46934e9ca7fdcaeebd010e5c6954dcf2dcf8c7
2023-04-11 17:59:38 +02:00
Andre Przywara
ad27f4b5d9 fix(psci): remove unreachable switch/case blocks
The PSCI function dispatcher switch/case is split up between 32-bit and
64-bit function IDs, based on bit 30 of the encoding. This bit just
encodes the maximum size of the arguments, not necessarily whether they
are used from AArch64 or AArch32. So while some functions exist in both
worlds (CPU_ON, for instance), some functions take no or only 32-bit
arguments (CPU_OFF, PSCI_FEATURES), so they only exist as a 32-bit
function call.

Commit b88a4416b5 ("feat(psci): add support for PSCI_SET_SUSPEND_MODE"
, gerrit ID Iebf65f5f7846aef6b8643ad6082db99b4dcc4bef) and commit
9a70e69e05 ("feat(psci): update PSCI_FEATURES", gerrit ID
I5da8a989b53419ad2ab55b73ddeee6e882c25554) introduced two "case"
sections for 32-bit function IDs in the 64-bit branch, which will never
trigger. The one small extra case caused the sun50i_a64 DEBUG build to
go beyond its RAM limit.

Removed the redundant switch/case blocks, to make sun50i_a64 build
again.

Change-Id: Ic65b7403d128837296a0c3af42c6f23f9f57778e
Signed-off-by: Andre Przywara <andre.przywara@arm.com>
2023-04-04 12:39:36 +02:00
Wing Li
9a70e69e05 feat(psci): update PSCI_FEATURES
This patch updates the PSCI_FEATURES handler to indicate support for
OS-initiated mode per section 5.15.2 of the PSCI spec (DEN0022D.b) based
on the value of `FF_SUPPORTS_OS_INIT_MODE`, which is conditionally
enabled by the `PSCI_OS_INIT_MODE` build option.

Change-Id: I5da8a989b53419ad2ab55b73ddeee6e882c25554
Signed-off-by: Wing Li <wingers@google.com>
2023-03-20 22:20:35 -07:00
Wing Li
606b743007 feat(psci): add support for OS-initiated mode
This patch adds a `psci_validate_state_coordination` function that is
called by `psci_cpu_suspend_start` in OS-initiated mode.

This function validates the request per sections 4.2.3.2, 5.4.5, and 6.3
of the PSCI spec (DEN0022D.b):
- The requested power states are consistent with the system's state
- The calling core is the last running core at the requested power level

This function differs from `psci_do_state_coordination` in that:
- The `psci_req_local_pwr_states` map is not modified if the request
  were to be denied
- The `state_info` argument is never modified since it contains the
  power states requested by the calling OS

This is conditionally compiled into the build depending on the value of
the `PSCI_OS_INIT_MODE` build option.

Change-Id: I667041c842d2856e9d128c98db4d5ae4e4552df3
Signed-off-by: Wing Li <wingers@google.com>
2023-03-20 22:20:35 -07:00
Wing Li
b88a4416b5 feat(psci): add support for PSCI_SET_SUSPEND_MODE
This patch adds a PSCI_SET_SUSPEND_MODE handler that validates the
request per section 5.20.2 of the PSCI spec (DEN0022D.b), and updates
the suspend mode to the requested mode.

This is conditionally compiled into the build depending on the value of
the `PSCI_OS_INIT_MODE` build option.

Change-Id: Iebf65f5f7846aef6b8643ad6082db99b4dcc4bef
Signed-off-by: Wing Li <wingers@google.com>
2023-03-20 22:20:35 -07:00
Chris Kay
da04341ed5 build: always prefix section names with .
Some of our specialized sections are not prefixed with the conventional
period. The compiler uses input section names to derive certain other
section names (e.g. `.rela.text`, `.relacpu_ops`), and these can be
difficult to select in linker scripts when there is a lack of a
delimiter.

This change introduces the period prefix to all specialized section
names.

BREAKING-CHANGE: All input and output linker section names have been
 prefixed with the period character, e.g. `cpu_ops` -> `.cpu_ops`.

Change-Id: I51c13c5266d5975fbd944ef4961328e72f82fc1c
Signed-off-by: Chris Kay <chris.kay@arm.com>
2023-02-20 18:29:33 +00:00
Harrison Mutai
aea4ccf8d9 fix(cpus): workaround for Cortex-A510 erratum 2684597
Cortex-A510 erratum 2684597 is a Cat B erratum that applies to revisions
r0p0, r0p1, r0p2, r0p3, r1p0, r1p1 and r1p2. It is fixed in r1p3. The
workaround is to execute a TSB CSYNC and DSB before executing WFI for
power down.

SDEN can be found here:
https://developer.arm.com/documentation/SDEN1873361/latest
https://developer.arm.com/documentation/SDEN1873351/latest

Change-Id: Ic0b24b600bc013eb59c797401fbdc9bda8058d6d
Signed-off-by: Harrison Mutai <harrison.mutai@arm.com>
2023-01-25 09:40:33 +00:00
Harrison Mutai
695a48b5b4 fix(psci): tighten psci_power_down_wfi behaviour
A processing element should never return from a wfi; however, due to a
hardware bug, certain CPUs may wake up because of an external event.
This patch tightens the behaviour of the common power down sequence: it
ensures the routine never returns by entering a wfi loop at its end. This
aligns with the behaviour of the platform implementations.

Change-Id: I36d8b0c64eccb71035bf164b4cd658d66ed7beb4
Signed-off-by: Harrison Mutai <harrison.mutai@arm.com>
2023-01-23 17:25:40 +00:00
Jayanth Dodderi Chidanand
b41b082464 refactor(psci): unify psci_is_last_on_cpu and psci_is_last_on_cpu_safe
"psci_is_last_on_cpu" and "psci_is_last_on_cpu_safe" modules perform
mostly similar functionalities, verifying whether the current CPU
is the only active core and other cores have been turned off.

However, psci_is_last_on_cpu_safe function differs from the other with:
1. Safe API locks the power domain

This patch removes the section duplicating the functionality
and ensures that "psci_is_last_on_cpu api",is reused in
"psci_is_last_on_cpu_safe" procedure.

Signed-off-by: Jayanth Dodderi Chidanand <jayanthdodderi.chidanand@arm.com>
Change-Id: Ie372519e423898d7afa5427cdd77a7f9d3369587
2022-09-29 16:37:34 +01:00
Pranav Madhu
65bbb9358b refactor(psci): move psci_do_pwrdown_sequence() out of private header
Move the psci_do_pwrdown_sequence() function declaration from PSCI
private header to the common header. psci_do_pwrdown_sequence() is
required to support warm reset, where each CPU needs to execute the
powerdown sequence.

Change-Id: I298e7a120be814941fa91c0b001002a080e56263
Signed-off-by: Pranav Madhu <pranav.madhu@arm.com>
2022-09-15 18:09:56 +05:30
Manish V Badarkhe
0551aac563 fix(psci): fix MISRA failure - Memory - illegal accesses
Fixed below MISRA failure -
>>>     CID 379362:  Memory - illegal accesses  (OVERRUN)
>>>     Overrunning array "psci_non_cpu_pd_nodes" of 5 16-byte
>>>     elements at element index 5 (byte offset 95) using index
>>>     "i" (which evaluates to 5).

Change-Id: Ie88fc555e48b06563372bfe4e51f16b13c0a020b
Signed-off-by: Manish V Badarkhe <Manish.Badarkhe@arm.com>
2022-07-22 13:46:20 +02:00
Lucian Paul-Trifu
ce14a12f8b feat(psci): add a helper function to ensure that non-boot PEs are offline
Introduce a helper function that ensures that non-boot PEs are offline.
This function will be used by the DRTM implementation to ensure that the
system is running with only a single PE.

Signed-off-by: Manish V Badarkhe <manish.badarkhe@arm.com>
Signed-off-by: Lucian Paul-Trifu <lucian.paultrifu@gmail.com>
Signed-off-by: Manish Pandey <manish.pandey2@arm.com>
Change-Id: I521ebefa49297026b02554629b1710a232148e01
2022-07-20 19:52:42 +01:00
Zelalem Aweke
8b95e84870 refactor(context mgmt): add cm_prepare_el3_exit_ns function
As part of the RFC:
https://review.trustedfirmware.org/c/TF-A/trusted-firmware-a/+/13651,
this patch adds the 'cm_prepare_el3_exit_ns' function. The function is
a wrapper to 'cm_prepare_el3_exit' function for Non-secure state.

When EL2 sysregs context exists (CTX_INCLUDE_EL2_REGS is
enabled) EL1 and EL2 sysreg values are restored from the context
instead of directly updating the registers.

Signed-off-by: Zelalem Aweke <zelalem.aweke@arm.com>
Change-Id: I9b071030576bb05500d54090e2a03b3f125d1653
2022-04-12 17:42:11 +02:00
Yann Gautier
b9338eee7f fix(psci): correct parent_node type in messages
As parent_node is unsigned, we have to use %u and not %d.
This avoids warning when -Wformat-signedness is enabled.

Signed-off-by: Yann Gautier <yann.gautier@st.com>
Change-Id: I5ab7acb33227d720b2c8a4ec013435442b219a44
2022-02-15 18:09:51 +01:00
Samuel Holland
a1d5ac6a5a feat(psci): require validate_power_state to expose CPU_SUSPEND
psci_cpu_suspend unconditionally calls psci_validate_power_state, which
asserts that the platform implements ops->validate_power_state. To avoid
a failure at runtime, do not expose CPU_SUSPEND unless that callback is
implemented. This also allows a platform to provide SYSTEM_SUSPEND
without providing CPU_SUSPEND.

Signed-off-by: Samuel Holland <samuel@sholland.org>
Change-Id: I5dafb7845f482ab3af03a9de562def41dd70189e
2021-10-15 14:13:54 +02:00
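
The gist of the capability gating, as a sketch with simplified types and an
illustrative capability bit; the real code computes its capability mask during
PSCI setup.

    #include <stddef.h>
    #include <stdint.h>

    /* Simplified stand-in for the platform ops structure. */
    struct plat_psci_ops_sketch {
        int (*validate_power_state)(unsigned int power_state, void *req_state);
    };
    #define CAP_CPU_SUSPEND (1U << 1)   /* illustrative bit value */

    static uint32_t compute_psci_caps(const struct plat_psci_ops_sketch *ops)
    {
        uint32_t caps = 0U;

        /* Only advertise CPU_SUSPEND when validate_power_state is
         * implemented, so the runtime assert can never fire;
         * SYSTEM_SUSPEND can still be offered on its own. */
        if (ops->validate_power_state != NULL) {
            caps |= CAP_CPU_SUSPEND;
        }
        return caps;
    }
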
Graeme Gregory
a86865ac42 PSCI: fix limit of 256 CPUs caused by cast to unsigned char
In psci_setup.c psci_init_pwr_domain_node() takes an unsigned
char as node_idx which limits it to initialising only the first
256 CPUs. As the calling function does not check for a limit of 256,
this is a bug, so change the unsigned char to uint16_t and change the
cast at the calling site in
populate_power_domain_tree().

Also update the non_cpu_pwr_domain_node structure lock_index
to uint16_t and update the function signature for psci_lock_init()
appropriately.

Finally add a define PSCI_MAX_CPUS_INDEX to psci_private.h and add
a CASSERT to psci_setup.c to make sure PLATFORM_CORE_COUNT cannot
exceed the index value.

Signed-off-by: Graeme Gregory <graeme@nuviainc.com>
Change-Id: I9e26842277db7483fd698b46bbac62aa86e71b45
2020-12-22 07:39:51 +00:00
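
A sketch of the compile-time guard described in the last paragraph; the
CASSERT() definition and core count below are illustrative stand-ins for the
TF-A versions.

    #include <stdint.h>

    /* Compile-time assertion in the style of TF-A's CASSERT() macro. */
    #define CASSERT(cond, msg) typedef char msg[(cond) ? 1 : -1]

    #define PLATFORM_CORE_COUNT 8        /* normally from platform_def.h */

    /* With node_idx and lock_index widened to uint16_t, the largest
     * representable CPU index is UINT16_MAX; make sure the configured core
     * count can never exceed it. */
    #define PSCI_MAX_CPUS_INDEX UINT16_MAX
    CASSERT(PLATFORM_CORE_COUNT <= (PSCI_MAX_CPUS_INDEX + 1U), assert_platform_core_count_fits);
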
Joanna Farley
943aff0c16 Merge "Increase type widths to satisfy width requirements" into integration 2020-10-18 14:51:00 +00:00
Jimmy Brisson
d7b5f40823 Increase type widths to satisfy width requirements
Usually, C has no problem up-converting types to larger bit sizes. MISRA
rule 10.7 requires that you not do this, or be very explicit about this.
This resolves the following required rule:

    bl1/aarch64/bl1_context_mgmt.c:81:[MISRA C-2012 Rule 10.7 (required)]<None>
    The width of the composite expression "0U | ((mode & 3U) << 2U) | 1U |
    0x3c0U" (32 bits) is less that the right hand operand
    "18446744073709547519ULL" (64 bits).

This also resolves MISRA defects such as:

    bl2/aarch64/bl2arch_setup.c:18:[MISRA C-2012 Rule 12.2 (required)]
    In the expression "3U << 20", shifting more than 7 bits, the number
    of bits in the essential type of the left expression, "3U", is
    not allowed.

Further, MISRA requires that all shifts don't overflow. The definition of
PAGE_SIZE was (1U << 12), and 1U is 8 bits. This caused about 50 issues.
This fixes the violation by changing the definition to 1UL << 12. Since
this uses 32 bits, it should not create any issues for aarch32.

This patch also contains a fix for a build failure in the sun50i_a64
platform. Specifically, these MISRA fixes removed a single `and`
instruction,

    92407e73        and     x19, x19, #0xffffffff

from the cm_setup_context function, which caused a relocation in
psci_cpus_on_start to require a linker-generated stub. This increased the
size of the .text section and caused an alignment later on to go over a
page boundary and round up to the end of RAM before placing the .data
section. This section is of non-zero size and therefore causes a link
error.

The fix included in this patch reorders the functions at link time
without changing their ordering with respect to alignment.

Change-Id: I76b4b662c3d262296728a8b9aab7a33b02087f16
Signed-off-by: Jimmy Brisson <jimmy.brisson@arm.com>
2020-10-12 10:55:03 -05:00
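
The PAGE_SIZE change in isolation, as a sketch grounded in the description
above (the old definition is kept under a different name purely for
comparison):

    /* Before: the essential type of 1U is only 8 bits wide under MISRA, so
     * shifting by 12 (rule 12.2) and comparing against 64-bit operands
     * (rule 10.7) are both flagged. */
    #define PAGE_SIZE_OLD (1U << 12)

    /* After: doing the shift on an unsigned long satisfies MISRA and, being
     * 32 bits on AArch32, changes nothing there. */
    #define PAGE_SIZE     (1UL << 12)
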