Commit graph

9 commits

Author SHA1 Message Date
Okash Khawaja
92f8be8fd1 fix(cpus): remove plat_can_cmo check for aarch32
We don't need CONDITIONAL_CMO for aarch32, so let's remove it.

Signed-off-by: Okash Khawaja <okash@google.com>
Change-Id: I256959d7005df21a850ff7791c8188ea01f5c53b
2022-11-14 15:31:17 +01:00
Okash Khawaja
04c7303b9c feat(cpus): make cache ops conditional
When a core is in debug recovery mode, its caches are not invalidated
upon reset, so the L1 and L2 cache contents from before the reset remain
observable after it. Similarly, the debug recovery mode of a DynamIQ
cluster ensures that the contents of the shared L3 cache are also not
invalidated upon transition to the On mode.

Booting cores in debug recovery mode means booting with caches disabled
and preserving the caches until a point where software can dump them and
retrieve their contents. TF-A, however, unconditionally cleans and
invalidates caches at multiple points during boot. This can lead to
memory corruption as well as to the loss of the cache contents that were
to be used for debugging.

This patch fixes this by calling a platform hook before performing CMOs
in the helper routines in cache_helpers.S. The platform hook,
plat_can_cmo, is an assembly routine which must not clobber x2 and x3
and must avoid using the stack. The whole check is guarded by
`CONDITIONAL_CMO`, which can be set at compile time.
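
As an illustration, a platform might implement the hook along these
lines. This is a minimal sketch under the constraints above; the symbol
plat_debug_recovery_flag and the return convention (non-zero in x0
meaning CMOs may proceed) are assumptions, not taken from this change.

  .globl  plat_can_cmo

  /*
   * Hypothetical AArch64 sketch: report in x0 whether cache maintenance
   * operations are allowed, i.e. whether a platform-defined
   * debug-recovery flag is clear. As required of this hook, x2 and x3
   * are not clobbered and the stack is not used.
   */
  plat_can_cmo:
          ldr     x1, =plat_debug_recovery_flag   /* assumed platform symbol */
          ldr     w1, [x1]
          cmp     w1, #0
          cset    x0, eq                          /* x0 = 1 when the flag is clear */
          ret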

Signed-off-by: Okash Khawaja <okash@google.com>
Change-Id: I172e999e4acd0f872c24056e647cc947ee54b193
2022-11-10 12:14:05 +00:00
johpow01
d0ec1cc437 feat(ccidx): update the do_dcsw_op function to support FEAT_CCIDX
FEAT_CCIDX changes the layout of the register fields in CCSIDR/CCSIDR2
(AArch32) and CCSIDR_EL1 (AArch64). This patch adds a check to the
do_dcsw_op function so that it uses the correct register format rather
than assuming that FEAT_CCIDX is not implemented.
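
For illustration, a sketch of the field selection on the AArch64 side
(this is not the actual do_dcsw_op code; the field positions are those
documented for ID_AA64MMFR2_EL1 and CCSIDR_EL1):

  /*
   * Illustrative AArch64 sketch: extract NumSets for the cache level
   * currently selected in CSSELR_EL1, using the CCSIDR_EL1 layout that
   * matches whether FEAT_CCIDX is implemented.
   */
          mrs     x1, id_aa64mmfr2_el1
          ubfx    x1, x1, #20, #4         /* ID_AA64MMFR2_EL1.CCIDX */
          mrs     x0, ccsidr_el1
          cbz     x1, 1f
          ubfx    x0, x0, #32, #24        /* FEAT_CCIDX: NumSets is bits [55:32] */
          b       2f
  1:      ubfx    x0, x0, #13, #15        /* legacy: NumSets is bits [27:13] */
  2:                                      /* x0 = (number of sets - 1) */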

Signed-off-by: John Powell <john.powell@arm.com>
Change-Id: I12cd00cd7b5889525d4d2750281a751dd74ef5dc
2021-12-14 12:48:08 -06:00
Joel Hutton
c554e1ad82 cache_helpers.S: fix mixed tabs and spaces
Change-Id: I8b7c7888d09200410e1a1c11a070c94dd8013ea7
Signed-off-by: Joel Hutton <Joel.Hutton@Arm.com>
2019-04-10 10:57:58 +01:00
Joel Hutton
f999faca06 Add note about erratum 814220 for A7
On Cortex-A7, an L2 set/way cache maintenance operation can overtake
an L1 set/way cache maintenance operation. The mitigation is to execute
a `DSB` instruction before changing the cache level being operated on.
The cache cleaning code already does this, so only a comment was added.
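
The pattern in question, roughly (illustrative only, not the file's
actual set/way loop):

          /* ... set/way maintenance loop for the current cache level ... */
          dsb     sy      /* complete L1 maintenance before any L2 operation
                             is issued (Cortex-A7 erratum 814220) */
          /* ... select and process the next cache level ... */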

Change-Id: Ia1ffb8ca8b6bbbba422ed6f6818671ef9fe02d90
Signed-off-by: Joel Hutton <Joel.Hutton@Arm.com>
2019-04-10 10:57:58 +01:00
Soby Mathew
3ec5204c49 Exit early if size zero for cache helpers
This patch enables the cache helper functions `flush_dcache_range`,
`clean_dcache_range` and `invalidate_dcache_range` to exit early
if the size argument specified is zero.
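
A minimal sketch of the early exit (illustrative only; the real helpers
derive the cache line size from CTR_EL0 rather than assuming 64 bytes):

  /*
   * Illustrative AArch64 sketch, not the real flush_dcache_range:
   * x0 = base address, x1 = size in bytes.
   */
  flush_dcache_range_sketch:
          cbz     x1, 1f                  /* size == 0: nothing to do */
          add     x1, x0, x1              /* x1 = end address */
          bic     x0, x0, #63             /* align base to an assumed 64-byte line */
  0:      dc      civac, x0               /* clean+invalidate line by VA to PoC */
          add     x0, x0, #64
          cmp     x0, x1
          b.lo    0b
          dsb     sy
  1:      ret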

Change-Id: I0b63e8f4bd3d47ec08bf2a0b0b9a7ff8a269a9b0
Signed-off-by: Soby Mathew <soby.mathew@arm.com>
2017-06-21 17:46:28 +01:00
dp-arm
82cb2c1ad9 Use SPDX license identifiers
To make software license auditing simpler, use SPDX[0] license
identifiers instead of duplicating the license text in every file.

NOTE: Files that have been imported from FreeBSD have not been modified.

[0]: https://spdx.org/
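
For example, a BSD-3-Clause licensed source file keeps its copyright
line and carries a single identifier instead of the full license text,
along these lines (the copyright line shown is illustrative):

  /*
   * Copyright (c) 2017, ARM Limited and Contributors. All rights reserved.
   *
   * SPDX-License-Identifier: BSD-3-Clause
   */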

Change-Id: I80a00e1f641b8cc075ca5a95b10607ed9ed8761a
Signed-off-by: dp-arm <dimitris.papastamos@arm.com>
2017-05-03 09:39:28 +01:00
Douglas Raillard
355a5d0336 Replace ASM signed tests with unsigned
The ge, lt, gt and le condition codes in assembly provide signed tests,
whereas hs, lo, hi and ls provide their unsigned counterparts. Signed
tests should only be used when strictly necessary, as using them on
logically unsigned values can invert the result for sufficiently large
values. Offsets, addresses and, usually, counters are unsigned values
and should be tested as such.

Replace the occurrences of signed condition codes with unsigned tests
where the signed behaviour was unnecessary, since the unsigned tests
allow the full range of unsigned values to be used without the result
being inverted for large operands.
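
A small AArch64 illustration of the difference, using hypothetical
values (this routine is not from the patch):

  /*
   * Illustrative sketch: return 1 in x0 if the address in x0 is below
   * the limit in x1. With x0 = 0x9000000000000000 and x1 = 0x1000, a
   * signed test ('lt') would wrongly report "below", because x0 is
   * negative when interpreted as a signed value.
   */
  addr_is_below_sketch:
          cmp     x0, x1
          cset    x0, lo          /* unsigned "lower", not the signed 'lt' */
          ret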

Change-Id: I58b7e98d03e3a4476dfb45230311f296d224980a
Signed-off-by: Douglas Raillard <douglas.raillard@arm.com>
2017-03-20 10:38:43 +00:00
Soby Mathew
f24307dec4 AArch32: Add assembly helpers
This patch adds various assembly helpers for AArch32, such as:

* cache management: Functions to flush, invalidate and clean the
cache by MVA, as well as helpers to perform cache operations by
set/way (see the sketch after this list).

* stack management: Macros to declare a stack and to get the stack
corresponding to the current CPU.

* Misc: Macros to access coprocessor registers in AArch32, macros to
define functions in assembly, assert macros, a generic `do_panic()`
implementation and a function to zero a block of memory.
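
A minimal AArch32 sketch of the by-MVA operation the cache helpers are
built around (illustrative only; the real helpers read the line size
from CTR and loop over an address range):

  /*
   * Illustrative AArch32 sketch: clean and invalidate the data cache
   * line containing the address in r0, then synchronise.
   */
          mcr     p15, 0, r0, c7, c14, 1  /* DCCIMVAC: clean+invalidate by MVA to PoC */
          dsb     sy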

Change-Id: I7b78ca3f922c0eda39beb9786b7150e9193425be
2016-08-10 12:35:46 +01:00