Merge pull request from MISL-EBU-System-SW/marvell-support-v6

Marvell support for Armada 8K SoC family
This commit is contained in:
danh-arm 2018-07-19 17:11:32 +01:00 committed by GitHub
commit ba0248b52d
No known key found for this signature in database
GPG key ID: 4AEE18F83AFDEB23
116 changed files with 17487 additions and 9 deletions
.gitignore
acknowledgements.rst
docs/marvell
drivers
include
maintainers.rst
make_helpers
plat/marvell

1
.gitignore vendored

@ -18,6 +18,7 @@ tools/cert_create/src/*.o
tools/cert_create/src/**/*.o
tools/cert_create/cert_create
tools/cert_create/cert_create.exe
tools/doimage/doimage
# GNU GLOBAL files
GPATH


@ -14,5 +14,7 @@ Xilinx, Inc.
NXP Semiconductors
Marvell International Ltd.
Individuals
-----------

181
docs/marvell/build.txt Normal file

@ -0,0 +1,181 @@
TF-A Build Instructions
=======================
This section describes how to compile the ARM Trusted Firmware (TF-A) project for Marvell's platforms.
Build Instructions
------------------
(1) Set the cross compiler::
> export CROSS_COMPILE=/path/to/toolchain/aarch64-linux-gnu-
(2) Set path for FIP images:
Set the U-Boot image path (relative to the TF-A root, or an absolute path)::
> export BL33=path/to/u-boot.bin
For example: if the U-Boot project (and its images) is located at ~/project/u-boot,
BL33 should be ~/project/u-boot/u-boot.bin
.. note::
u-boot.bin should be used and not u-boot-spl.bin
Set the MSS/SCP image path (mandatory only for Armada80x0 and Armada8xxy)::
> export SCP_BL2=path/to/mrvl_scp_bl2*.img
(3) Armada-37x0 build requires WTP tools installation.
See below in the section "Tools Installation for Armada37x0 Builds".
Install the ARM 32-bit cross compiler, which is required for building the WTMI image for the CM3::
> sudo apt-get install gcc-arm-linux-gnueabi
(4) Clean previous build residuals (if any)::
> make distclean
(5) Build TF-A:
There are several build options:
- DEBUG: The default is no debug information (=0). In order to enable it, use DEBUG=1.
- LOG_LEVEL: Defines the level of logging which will be printed to the default output port.
LOG_LEVEL_NONE 0
LOG_LEVEL_ERROR 10
LOG_LEVEL_NOTICE 20
LOG_LEVEL_WARNING 30
LOG_LEVEL_INFO 40
LOG_LEVEL_VERBOSE 50
- USE_COHERENT_MEM: This flag determines whether to include the coherent memory region in the
BL memory map or not.
- LLC_ENABLE: Flag defining the LLC (L3) cache state. The cache is enabled by default (LLC_ENABLE=1).
- MARVELL_SECURE_BOOT: Build a trusted (=1) or non-trusted (=0) image; the default is non-trusted.
- BLE_PATH:
Points to BLE (Binary ROM extension) sources folder. Only required for A8K and A8K+ builds.
The parameter is optional, its default value is "ble".
- MV_DDR_PATH:
For A7/8K, use this parameter to point to mv_ddr driver sources to allow BLE build. For A37x0,
it is used for ddr_tool build.
Usage example: MV_DDR_PATH=path/to/mv_ddr
The parameter is optional for A7/8K, when this parameter is not set, the mv_ddr
sources are expected to be located at: drivers/marvell/mv_ddr. However, the parameter
is necessary for A37x0.
- DDR_TOPOLOGY: For Armada37x0 only, the DDR topology map index/name, default is 0.
Supported Options:
- DDR3 1CS (0): DB-88F3720-DDR3-Modular (512MB); EspressoBIN (512MB)
- DDR4 1CS (1): DB-88F3720-DDR4-Modular (512MB)
- DDR3 2CS (2): EspressoBIN (1GB)
- DDR4 2CS (3): DB-88F3720-DDR4-Modular (4GB)
- DDR3 1CS (4): DB-88F3720-DDR3-Modular (1GB)
- CUSTOMER (CUST): Customer board, DDR3 1CS 512MB
- CLOCKSPRESET: For Armada37x0 only, the clock tree configuration preset including CPU and DDR frequency,
default is CPU_800_DDR_800.
- CPU_600_DDR_600 - CPU at 600 MHz, DDR at 600 MHz
- CPU_800_DDR_800 - CPU at 800 MHz, DDR at 800 MHz
- CPU_1000_DDR_800 - CPU at 1000 MHz, DDR at 800 MHz
- CPU_1200_DDR_750 - CPU at 1200 MHz, DDR at 750 MHz
- BOOTDEV: For Armada37x0 only, the flash boot device; the default is SPINOR.
Currently, Armada37x0 supports SPINOR, SPINAND, EMMCNORM and SATA:
- SPINOR - SPI NOR flash boot
- SPINAND - SPI NAND flash boot
- EMMCNORM - eMMC Download Mode
Download boot loader or program code from eMMC flash into CM3 or CA53
Requires full initialization and command sequence
- SATA - SATA device boot
- PARTNUM: For Armada37x0 only, the boot partition number; the default is 0. To boot from eMMC, the value
should match the U-Boot parameter CONFIG_SYS_MMC_ENV_PART, whose default value is 1.
For details about CONFIG_SYS_MMC_ENV_PART, please refer to the U-Boot build instructions.
- WTMI_IMG: For Armada37x0 only, the path of the WTMI image. It can point to an image which does
nothing, an image which supports EFUSE, or a customized CM3 firmware binary. The default image
is wtmi.bin, built from the sources in the WTP folder (see the next option). If the default
image is OK, this option can be skipped.
- WTP: For Armada37x0 only, use this parameter to point to wtptools source code directory, which
can be found as a3700_utils.zip in the release.
Usage example: WTP=/path/to/a3700_utils
- CP_NUM: Total number of CP (South Bridge) chips wired to the interconnected APs.
When the parameter is omitted, the build uses the default number of CPs, which is 2.
The parameter is valid for the Armada 8K-plus SoC family (PLAT=a8xxy) and results in a build of images
suitable for the a8xxY SoC, where "Y" is the number of connected CPs and "xx" is the number of CPU cores.
Valid values for CP_NUM are in the range of 0 to 8.
The CPs defined by this parameter are evenly distributed across the interconnected APs, which in turn
are dynamically detected. For instance, if CP_NUM=6 and TF-A detects 2 interconnected
APs, each AP is assumed to have 3 attached CPs. With the same number of APs and CP_NUM=3, AP0
will have 2 CPs connected and AP1 just a single CP.
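The even-distribution rule above can be sketched as plain arithmetic; cp_per_ap below is a hypothetical helper written for illustration only, not a function in the TF-A sources:

```c
#include <assert.h>

/* Hypothetical helper (not part of the TF-A build system) illustrating
 * the CP_NUM distribution rule: CPs are spread evenly across the
 * detected APs, with any remainder going to the lower-numbered APs. */
static int cp_per_ap(int cp_num, int ap_count, int ap_index)
{
	int base = cp_num / ap_count;	/* every AP gets at least this many */
	int extra = cp_num % ap_count;	/* remainder goes to AP0, AP1, ... */

	return base + (ap_index < extra ? 1 : 0);
}
```

With 2 detected APs this reproduces the examples above: CP_NUM=6 gives 3 CPs per AP, while CP_NUM=3 gives 2 CPs to AP0 and 1 to AP1.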
For example, in order to build the image in debug mode with log level up to 'notice' level run::
> make DEBUG=1 USE_COHERENT_MEM=0 LOG_LEVEL=20 PLAT=<MARVELL_PLATFORM> all fip
And if we want to build an Armada37x0 image in debug mode with log level up to 'notice' level,
with the CPU preset to 1000 MHz and DDR3 to 800 MHz, the DDR3 2CS topology, booting from
SPI NOR flash partition 0, and a non-trusted (WTP) image, the command line is as follows::
> make DEBUG=1 USE_COHERENT_MEM=0 LOG_LEVEL=20 SECURE=0 CLOCKSPRESET=CPU_1000_DDR_800 \
DDR_TOPOLOGY=2 BOOTDEV=SPINOR PARTNUM=0 PLAT=a3700 all fip
Supported MARVELL_PLATFORM are:
- a3700
- a70x0
- a70x0_amc (for AMC board)
- a70x0_cust (for customers)
- a80x0
- a80x0_mcbin (for MacchiatoBin)
Special Build Flags
--------------------
- PLAT_RECOVERY_IMAGE_ENABLE: When this option is set, the secondary recovery function is enabled in the
TF-A build. In order to build the UART recovery image, this option should be disabled for a70x0 and a80x0
because of a hardware limitation (boot from the secondary image can interrupt the UART recovery process).
This macro definition is set in the plat/marvell/a8k/common/include/platform_def.h file
(for more information about build options, please refer to section 'Summary of build options' in TF-A user-guide:
https://github.com/ARM-software/arm-trusted-firmware/blob/master/docs/user-guide.md)
Build output
-------------
Marvell's TF-A compilation generates 7 files:
- ble.bin - BLE (Binary ROM extension) image
- bl1.bin - BL1 image
- bl2.bin - BL2 image
- bl31.bin - BL31 image
- fip.bin - FIP image (contains BL2, BL31 & BL33 (U-Boot) images)
- boot-image.bin - TF-A image (contains BL1 and FIP images)
- flash-image.bin - Image which contains boot-image.bin and SPL image; should be placed on the boot flash/device.
Tools Installation for Armada37x0 Builds
-----------------------------------------
Install a cross GNU ARM tool chain for building the WTMI binary.
Any cross GNU ARM tool chain that is able to build ARM Cortex M3 binaries
is suitable.
On Debian/Ubuntu hosts the default GNU ARM tool chain can be installed
using the following command::
> sudo apt-get install gcc-arm-linux-gnueabi
If required, the default tool chain prefix "arm-linux-gnueabi-" can be
overridden using the environment variable CROSS_CM3.
Example for BASH shell::
> export CROSS_CM3=/opt/arm-cross/bin/arm-linux-gnueabi


@ -0,0 +1,47 @@
Address decoding flow and address translation units of Marvell Armada 8K SoC family
+--------------------------------------------------------------------------------------------------+
| +-------------+ +--------------+ |
| | Memory +----- DRAM CS | |
|+------------+ +-----------+ +-----------+ | Controller | +--------------+ |
|| AP DMA | | | | | +-------------+ |
|| SD/eMMC | | CA72 CPUs | | AP MSS | +-------------+ |
|| MCI-0/1 | | | | | | Memory | |
|+------+-----+ +--+--------+ +--------+--+ +------------+ | Controller | +-------------+ |
| | | | | +----- Translaton | |AP | |
| | | | | | +-------------+ |Configuration| |
| | | +-----+ +-------------------------Space | |
| | | +-------------+ | CCU | +-------------+ |
| | | | MMU +---------+ Windows | +-----------+ +-------------+ |
| | +-| translation | | Lookup +---- +--------- AP SPI | |
| | +-------------+ | | | | +-------------+ |
| | +-------------+ | | | IO | +-------------+ |
| +------------| SMMU +---------+ | | Windows +--------- AP MCI0/1 | |
| | translation | +------------+ | Lookup | +-------------+ |
| +---------+---+ | | +-------------+ |
| - | | +--------- AP STM | |
| +----------------- | | +-------------+ |
| AP | | +-+---------+ |
+---------------------------------------------------------------|----------------------------------+
+-------------|-------------------------------------------------|----------------------------------+
| CP | +-------------+ +------+-----+ +-------------------+ |
| | | | | +------- SB CFG Space | |
| | | DIOB | | | +-------------------+ |
| | | Windows ----------------- IOB | +-------------------+ |
| | | Control | | Windows +------| SB PCIe-0 - PCIe2 | |
| | | | | Lookup | +-------------------+ |
| | +------+------+ | | +-------------------+ |
| | | | +------+ SB NAND | |
| | | +------+-----+ +-------------------+ |
| | | | |
| | | | |
| +------------------+ +------------+ +------+-----+ +-------------------+ |
| | Network Engine | | | | +------- SB SPI-0/SPI-1 | |
| | Security Engine | | PCIe, MSS | | RUNIT | +-------------------+ |
| | SATA, USB | | DMA | | Windows | +-------------------+ |
| | SD/eMMC | | | | Lookup +------- SB Device Bus | |
| | TDM, I2C | | | | | +-------------------+ |
| +------------------+ +------------+ +------------+ |
| |
+--------------------------------------------------------------------------------------------------+


@ -0,0 +1,45 @@
AMB - AXI MBUS address decoding
-------------------------------
AXI to M-bridge decoding unit driver for Marvell Armada 8K and 8K+ SoCs.
- The Runit offers a second level of address windows lookup. It is used to map transactions towards
the CD BootROM, SPI0, SPI1 and Device bus (NOR).
- The Runit contains eight configurable windows. Each window defines a contiguous
address space and the properties associated with that address space.
Unit Bank ATTR
Device-Bus DEV_BOOT_CS 0x2F
DEV_CS0 0x3E
DEV_CS1 0x3D
DEV_CS2 0x3B
DEV_CS3 0x37
SPI-0 SPI_A_CS0 0x1E
SPI_A_CS1 0x5E
SPI_A_CS2 0x9E
SPI_A_CS3 0xDE
SPI_A_CS4 0x1F
SPI_A_CS5 0x5F
SPI_A_CS6 0x9F
SPI_A_CS7 0xDF
SPI1 SPI_B_CS0 0x1A
SPI_B_CS1 0x5A
SPI_B_CS2 0x9A
SPI_B_CS3 0xDA
BOOT_ROM BOOT_ROM 0x1D
UART UART 0x01
Mandatory functions:
- marvell_get_amb_memory_map
returns the AMB windows configuration and the number of windows
Mandatory structures:
amb_memory_map - Array that includes the configuration of the windows.
Every window/entry is a struct with 2 parameters:
- Base address of the window
- Attribute of the window
Examples:
struct addr_map_win amb_memory_map[] = {
{0xf900, AMB_DEV_CS0_ID},
};
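A minimal sketch of the mandatory provider is shown below. The signature is assumed from the driver's call site in drivers/marvell/amb_adec.c, and struct addr_map_win plus AMB_DEV_CS0_ID are redefined locally so the sketch is self-contained; the real definitions live in the platform headers:

```c
#include <stdint.h>

/* Local stand-ins, for illustration only -- the real definitions live in
 * the platform headers. The AMB doc example uses 2 fields per window. */
struct addr_map_win {
	uint64_t base_addr;
	uint32_t target_id;
};

#define AMB_DEV_CS0_ID	0x3E	/* DEV_CS0 attribute from the table above */

static struct addr_map_win amb_memory_map[] = {
	{0xf900, AMB_DEV_CS0_ID},
};

/* Sketch of the mandatory provider: hand the driver the window array and
 * its length. The signature is assumed from the driver's call site. */
int marvell_get_amb_memory_map(struct addr_map_win **win, uint32_t *size,
			       uintptr_t base)
{
	(void)base;	/* the AP base address is unused in this sketch */
	*win = amb_memory_map;
	*size = sizeof(amb_memory_map) / sizeof(amb_memory_map[0]);
	return 0;
}
```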


@ -0,0 +1,23 @@
Marvell CCU address decoding bindings
=====================================
CCU configuration driver (1st stage address translation) for Marvell Armada 8K and 8K+ SoCs.
The CCU node includes a description of the address decoding configuration.
Mandatory functions:
- marvell_get_ccu_memory_map
returns the CCU windows configuration and the number of windows
of the specific AP.
Mandatory structures:
ccu_memory_map - Array that includes the configuration of the windows.
Every window/entry is a struct with 3 parameters:
- Base address of the window
- Size of the window
- Target-ID of the window
Example:
struct addr_map_win ccu_memory_map[] = {
{0x00000000f2000000, 0x00000000e000000, IO_0_TID}, /* IO window */
};
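For reference, the CCU driver added in this patch set (drivers/marvell/ccu.c) encodes each window's start and end into the ALR/AHR registers by shifting addresses right by ADDRESS_SHIFT and masking. A small sketch of that arithmetic, with the constants copied from the driver:

```c
#include <stdint.h>

/* Constants copied from drivers/marvell/ccu.c:
 * physical base of a window = {AddrLow[19:0], 20'h0} */
#define ADDRESS_SHIFT	(20 - 4)
#define ADDRESS_MASK	(0xFFFFFFF0)

/* Low (start) register value for a window */
static uint32_t ccu_alr(uint64_t base_addr)
{
	return (uint32_t)((base_addr >> ADDRESS_SHIFT) & ADDRESS_MASK);
}

/* High (end) register value for a window */
static uint32_t ccu_ahr(uint64_t base_addr, uint64_t win_size)
{
	uint64_t end_addr = base_addr + win_size - 1;

	return (uint32_t)((end_addr >> ADDRESS_SHIFT) & ADDRESS_MASK);
}
```

For the IO window in the example above, ccu_alr(0xf2000000) yields 0xf200 and ccu_ahr(0xf2000000, 0xe000000) yields 0xfff0, matching the driver's ALR/AHR programming.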


@ -0,0 +1,35 @@
Marvell IO WIN address decoding bindings
========================================
IO Window configuration driver (2nd stage address translation) for Marvell Armada 8K and 8K+ SoCs.
The IO WIN includes a description of the address decoding configuration.
Transactions that are decoded by CCU windows as IO peripheral have an additional
layer of decoding. This additional address decoding layer defines one of the
following targets:
0x0 = BootRom
0x1 = STM (Serial Trace Macro-cell, a programmer's port into trace stream)
0x2 = SPI direct access
0x3 = PCIe registers
0x4 = MCI Port
0x5 = PCIe port
Mandatory functions:
- marvell_get_io_win_memory_map
returns the IO windows configuration and the number of windows
of the specific AP.
Mandatory structures:
io_win_memory_map - Array that includes the configuration of the windows.
Every window/entry is a struct with 3 parameters:
- Base address of the window
- Size of the window
- Target-ID of the window
Example:
struct addr_map_win io_win_memory_map[] = {
{0x00000000fe000000, 0x000000001f00000, PCIE_PORT_TID}, /* PCIe window, 31MB, for PCIe port */
{0x00000000ffe00000, 0x000000000100000, PCIE_REGS_TID}, /* PCIe-REG window, 1MB, for PCIe registers */
{0x00000000f6000000, 0x000000000100000, MCIPHY_TID}, /* MCI window, 1MB, for PHY registers */
};


@ -0,0 +1,40 @@
Marvell IOB address decoding bindings
=====================================
IO bridge configuration driver (3rd stage address translation) for Marvell Armada 8K and 8K+ SoCs.
The IOB includes a description of the address decoding configuration.
IOB supports up to n (in CP110, n=24) windows for external memory transactions.
When a transaction passes through the IOB, its address is compared to each of
the enabled windows. If there is a hit and it passes the security checks, it is
advanced to the target port.
Mandatory functions:
- marvell_get_iob_memory_map
returns the IOB windows configuration and the number of windows
Mandatory structures:
iob_memory_map - Array that includes the configuration of the windows.
Every window/entry is a struct with 3 parameters:
- Base address of the window
- Size of the window
- Target-ID of the window
Target ID options:
- 0x0 = Internal configuration space
- 0x1 = MCI0
- 0x2 = PEX1_X1
- 0x3 = PEX2_X1
- 0x4 = PEX0_X4
- 0x5 = NAND flash
- 0x6 = RUNIT (NOR/SPI/BootRom)
- 0x7 = MCI1
Example:
struct addr_map_win iob_memory_map[] = {
{0x00000000f7000000, 0x0000000001000000, PEX1_TID}, /* PEX1_X1 window */
{0x00000000f8000000, 0x0000000001000000, PEX2_TID}, /* PEX2_X1 window */
{0x00000000f6000000, 0x0000000001000000, PEX0_TID}, /* PEX0_X4 window */
{0x00000000f9000000, 0x0000000001000000, NAND_TID} /* NAND window */
};
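Since every transaction address is compared against all enabled windows, the windows in a map should not overlap each other. The following checker is purely illustrative (it is not part of the IOB driver) and uses a locally defined 3-parameter descriptor matching the form described above:

```c
#include <stdint.h>

/* Local stand-in for the TF-A window descriptor, for illustration only */
struct addr_map_win {
	uint64_t base_addr;
	uint64_t win_size;
	uint32_t target_id;
};

/* Illustrative sanity check (not part of the IOB driver): returns 1 if
 * any two windows in the map cover a common address range, else 0. */
static int windows_overlap(const struct addr_map_win *map, int count)
{
	for (int i = 0; i < count; i++) {
		for (int j = i + 1; j < count; j++) {
			uint64_t end_i = map[i].base_addr + map[i].win_size;
			uint64_t end_j = map[j].base_addr + map[j].win_size;

			if (map[i].base_addr < end_j &&
			    map[j].base_addr < end_i)
				return 1;
		}
	}
	return 0;
}
```

The four example windows above (each 16MB, at 0xf6000000-0xf9000000) pass this check; they tile adjacent ranges without overlapping.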

66
docs/marvell/porting.txt Normal file

@ -0,0 +1,66 @@
TF-A Porting Guide
==================
This section describes how to port TF-A to a customer board, assuming that the SoC being used is already supported
in TF-A.
Source Code Structure
---------------------
- The customer platform specific code shall reside under "plat/marvell/<soc family>/<soc>_cust"
(e.g. 'plat/marvell/a8k/a7040_cust').
- The platform name for build purposes is called "<soc>_cust" (e.g. a7040_cust).
- The build system will reuse all files from within the soc directory, and take only the porting
files from the customer platform directory.
Files that require porting are located at "plat/marvell/<soc family>/<soc>_cust" directory.
Armada-70x0/Armada-80x0 Porting
-------------------------------
- SoC Physical Address Map (marvell_plat_config.c):
- This file describes the SoC physical memory mapping to be used for the CCU, IOWIN, AXI-MBUS and IOB
address decode units (Refer to the functional spec for more details).
- In most cases, using the default address decode windows should work OK.
- In cases where a special physical address map is needed (e.g. Special size for PCIe MEM windows,
large memory mapped SPI flash...), then porting of the SoC memory map is required.
- Note: For a detailed information on how CCU, IOWIN, AXI-MBUS & IOB work, please refer to the SoC functional spec,
and under "docs/marvell/misc/mvebu-[ccu/iob/amb/io-win].txt" files.
- boot loader recovery (marvell_plat_config.c):
- Background:
The boot ROM can skip the current image and choose to boot from the next position if a specific value
(0xDEADB002) is returned by the BLE main function. This feature is used for boot loader recovery
by booting from a valid flash-image saved at the next position on flash (e.g. address 2M in SPI flash).
Supported options to implement the skip request are:
- GPIO
- I2C
- User defined
- Porting:
Under marvell_plat_config.c, implement struct skip_image that includes specific board parameters.
.. warning:: To disable this feature, make sure the struct skip_image is not implemented.
- Example:
In A7040-DB specific implementation (plat/marvell/a8k/a70x0/board/marvell_plat_config.c),
the image skip is implemented using GPIO: mpp 33 (SW5).
Before resetting the board make sure there is a valid image on the next flash address:
- tftp [valid address] flash-image.bin
- sf update [valid address] 0x2000000 [size]
Press reset and keep pressing the button connected to the chosen GPIO pin. A skip-image request
message is printed on the screen and the boot ROM boots from the saved image at the next position.
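The exact layout of struct skip_image is defined by the platform code (see the A7040-DB implementation in plat/marvell/a8k/a70x0/board/marvell_plat_config.c). Purely as a hypothetical illustration of a GPIO-based configuration, with invented field names, it might look like:

```c
/* Hypothetical layout, for illustration only -- the real struct
 * skip_image is defined by the Marvell platform code. */
struct skip_image {
	enum { SKIP_GPIO, SKIP_I2C, SKIP_USER_DEFINED } detection_method;
	struct {
		int num;		/* MPP/GPIO pin number */
		int pressed_state;	/* pin level meaning "skip requested" */
	} gpio;
};

/* A7040-DB-style example: skip request signalled via MPP 33 (SW5) */
static const struct skip_image board_skip_image = {
	.detection_method = SKIP_GPIO,
	.gpio = {
		.num = 33,
		.pressed_state = 1,
	},
};
```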
- DDR Porting (dram_port.c):
- This file defines the dram topology and parameters of the target board.
- The DDR code is part of the BLE component, which is an extension of ARM Trusted Firmware (TF-A).
- The DDR driver, called mv_ddr, is released separately from the TF-A sources.
- The BLE and consequently, the DDR init code is executed at the early stage of the boot process.
- Each supported platform of the TF-A has its own DDR porting file called dram_port.c located at
``atf/plat/marvell/a8k/<platform>/board`` directory.
- Please refer to '<path_to_mv_ddr_sources>/doc/porting_guide.txt' for detailed porting description.
- The build target directory is "build/<platform>/release/ble".


@ -372,7 +372,6 @@ static int fip_file_read(io_entity_t *entity, uintptr_t buffer, size_t length,
uintptr_t backend_handle;
assert(entity != NULL);
assert(buffer != (uintptr_t)NULL);
assert(length_read != NULL);
assert(entity->info != (uintptr_t)NULL);


@ -9,6 +9,7 @@
#include <io_driver.h>
#include <io_memmap.h>
#include <io_storage.h>
#include <platform_def.h>
#include <string.h>
#include <utils.h>
@ -169,7 +170,6 @@ static int memmap_block_read(io_entity_t *entity, uintptr_t buffer,
size_t pos_after;
assert(entity != NULL);
assert(buffer != (uintptr_t)NULL);
assert(length_read != NULL);
fp = (file_state_t *) entity->info;
@ -197,7 +197,6 @@ static int memmap_block_write(io_entity_t *entity, const uintptr_t buffer,
size_t pos_after;
assert(entity != NULL);
assert(buffer != (uintptr_t)NULL);
assert(length_written != NULL);
fp = (file_state_t *) entity->info;


@ -8,6 +8,7 @@
#include <io_driver.h>
#include <io_semihosting.h>
#include <io_storage.h>
#include <platform_def.h>
#include <semihosting.h>
@ -133,7 +134,6 @@ static int sh_file_read(io_entity_t *entity, uintptr_t buffer, size_t length,
long file_handle;
assert(entity != NULL);
assert(buffer != (uintptr_t)NULL);
assert(length_read != NULL);
file_handle = (long)entity->info;
@ -158,7 +158,6 @@ static int sh_file_write(io_entity_t *entity, const uintptr_t buffer,
size_t bytes = length;
assert(entity != NULL);
assert(buffer != (uintptr_t)NULL);
assert(length_written != NULL);
file_handle = (long)entity->info;


@ -279,7 +279,7 @@ int io_read(uintptr_t handle,
size_t *length_read)
{
int result = -ENODEV;
assert(is_valid_entity(handle) && (buffer != (uintptr_t)NULL));
assert(is_valid_entity(handle));
io_entity_t *entity = (io_entity_t *)handle;
@ -299,7 +299,7 @@ int io_write(uintptr_t handle,
size_t *length_written)
{
int result = -ENODEV;
assert(is_valid_entity(handle) && (buffer != (uintptr_t)NULL));
assert(is_valid_entity(handle));
io_entity_t *entity = (io_entity_t *)handle;

156
drivers/marvell/amb_adec.c Normal file

@ -0,0 +1,156 @@
/*
* Copyright (C) 2018 Marvell International Ltd.
*
* SPDX-License-Identifier: BSD-3-Clause
* https://spdx.org/licenses
*/
/* AXI to M-Bridge decoding unit driver for Marvell Armada 8K and 8K+ SoCs */
#include <a8k_common.h>
#include <debug.h>
#include <mmio.h>
#include <mvebu.h>
#include <mvebu_def.h>
#if LOG_LEVEL >= LOG_LEVEL_INFO
#define DEBUG_ADDR_MAP
#endif
/* common defines */
#define WIN_ENABLE_BIT (0x1)
#define MVEBU_AMB_ADEC_OFFSET (0x70ff00)
#define AMB_WIN_CR_OFFSET(win) (amb_base + 0x0 + (0x8 * win))
#define AMB_ATTR_OFFSET 8
#define AMB_ATTR_MASK 0xFF
#define AMB_SIZE_OFFSET 16
#define AMB_SIZE_MASK 0xFF
#define AMB_WIN_BASE_OFFSET(win) (amb_base + 0x4 + (0x8 * win))
#define AMB_BASE_OFFSET 16
#define AMB_BASE_ADDR_MASK ((1 << (32 - AMB_BASE_OFFSET)) - 1)
#define AMB_WIN_ALIGNMENT_64K (0x10000)
#define AMB_WIN_ALIGNMENT_1M (0x100000)
uintptr_t amb_base;
static void amb_check_win(struct addr_map_win *win, uint32_t win_num)
{
uint32_t base_addr;
/* make sure the base address is in 16-bit range */
if (win->base_addr > AMB_BASE_ADDR_MASK) {
WARN("Window %d: base address is too big 0x%llx\n",
win_num, win->base_addr);
win->base_addr = AMB_BASE_ADDR_MASK;
WARN("Set the base address to 0x%llx\n", win->base_addr);
}
base_addr = win->base_addr << AMB_BASE_OFFSET;
/* for AMB The base is always 1M aligned */
/* check if address is aligned to 1M */
if (IS_NOT_ALIGN(base_addr, AMB_WIN_ALIGNMENT_1M)) {
win->base_addr = ALIGN_UP(base_addr, AMB_WIN_ALIGNMENT_1M);
WARN("Window %d: base address unaligned to 0x%x\n",
win_num, AMB_WIN_ALIGNMENT_1M);
WARN("Align up the base address to 0x%llx\n", win->base_addr);
}
/* size parameter validity check */
if (!IS_POWER_OF_2(win->win_size)) {
WARN("Window %d: window size is not power of 2 (0x%llx)\n",
win_num, win->win_size);
win->win_size = ROUND_UP_TO_POW_OF_2(win->win_size);
WARN("Rounding size to 0x%llx\n", win->win_size);
}
}
static void amb_enable_win(struct addr_map_win *win, uint32_t win_num)
{
uint32_t ctrl, base, size;
/*
* size is 64KB granularity.
* The number of ones specifies the size of the
* window in 64 KB granularity. 0 is 64KB
*/
size = (win->win_size / AMB_WIN_ALIGNMENT_64K) - 1;
ctrl = (size << AMB_SIZE_OFFSET) | (win->target_id << AMB_ATTR_OFFSET);
base = win->base_addr << AMB_BASE_OFFSET;
mmio_write_32(AMB_WIN_BASE_OFFSET(win_num), base);
mmio_write_32(AMB_WIN_CR_OFFSET(win_num), ctrl);
/* enable window after configuring window size (and attributes) */
ctrl |= WIN_ENABLE_BIT;
mmio_write_32(AMB_WIN_CR_OFFSET(win_num), ctrl);
}
#ifdef DEBUG_ADDR_MAP
static void dump_amb_adec(void)
{
uint32_t ctrl, base, win_id, attr;
uint32_t size, size_count;
/* Dump all AMB windows */
tf_printf("bank attribute base size\n");
tf_printf("--------------------------------------------\n");
for (win_id = 0; win_id < AMB_MAX_WIN_ID; win_id++) {
ctrl = mmio_read_32(AMB_WIN_CR_OFFSET(win_id));
if (ctrl & WIN_ENABLE_BIT) {
base = mmio_read_32(AMB_WIN_BASE_OFFSET(win_id));
attr = (ctrl >> AMB_ATTR_OFFSET) & AMB_ATTR_MASK;
size_count = (ctrl >> AMB_SIZE_OFFSET) & AMB_SIZE_MASK;
size = (size_count + 1) * AMB_WIN_ALIGNMENT_64K;
tf_printf("amb 0x%04x 0x%08x 0x%08x\n",
attr, base, size);
}
}
}
#endif
int init_amb_adec(uintptr_t base)
{
struct addr_map_win *win;
uint32_t win_id, win_reg;
uint32_t win_count;
INFO("Initializing AXI to MBus Bridge Address decoding\n");
/* Get the base address of the AMB address decoding */
amb_base = base + MVEBU_AMB_ADEC_OFFSET;
/* Get the array of the windows and its size */
marvell_get_amb_memory_map(&win, &win_count, base);
if (win_count <= 0)
INFO("no windows configurations found\n");
if (win_count > AMB_MAX_WIN_ID) {
INFO("number of windows is bigger than %d\n", AMB_MAX_WIN_ID);
return 0;
}
/* disable all AMB windows */
for (win_id = 0; win_id < AMB_MAX_WIN_ID; win_id++) {
win_reg = mmio_read_32(AMB_WIN_CR_OFFSET(win_id));
win_reg &= ~WIN_ENABLE_BIT;
mmio_write_32(AMB_WIN_CR_OFFSET(win_id), win_reg);
}
/* enable relevant windows */
for (win_id = 0; win_id < win_count; win_id++, win++) {
amb_check_win(win, win_id);
amb_enable_win(win, win_id);
}
#ifdef DEBUG_ADDR_MAP
dump_amb_adec();
#endif
INFO("Done AXI to MBus Bridge Address decoding Initializing\n");
return 0;
}

109
drivers/marvell/cache_llc.c Normal file

@ -0,0 +1,109 @@
/*
* Copyright (C) 2018 Marvell International Ltd.
*
* SPDX-License-Identifier: BSD-3-Clause
* https://spdx.org/licenses
*/
/* LLC driver is the Last Level Cache (L3C) driver
* for Marvell SoCs in AP806, AP807, and AP810
*/
#include <arch_helpers.h>
#include <assert.h>
#include <cache_llc.h>
#include <ccu.h>
#include <mmio.h>
#include <mvebu_def.h>
#define CCU_HTC_CR(ap_index) (MVEBU_CCU_BASE(ap_index) + 0x200)
#define CCU_SET_POC_OFFSET 5
extern void ca72_l2_enable_unique_clean(void);
void llc_cache_sync(int ap_index)
{
mmio_write_32(LLC_SYNC(ap_index), 0);
/* Atomic write, no need to wait */
}
void llc_flush_all(int ap_index)
{
mmio_write_32(L2X0_CLEAN_INV_WAY(ap_index), LLC_WAY_MASK);
llc_cache_sync(ap_index);
}
void llc_clean_all(int ap_index)
{
mmio_write_32(L2X0_CLEAN_WAY(ap_index), LLC_WAY_MASK);
llc_cache_sync(ap_index);
}
void llc_inv_all(int ap_index)
{
mmio_write_32(L2X0_INV_WAY(ap_index), LLC_WAY_MASK);
llc_cache_sync(ap_index);
}
void llc_disable(int ap_index)
{
llc_flush_all(ap_index);
mmio_write_32(LLC_CTRL(ap_index), 0);
dsbishst();
}
void llc_enable(int ap_index, int excl_mode)
{
uint32_t val;
dsbsy();
llc_inv_all(ap_index);
dsbsy();
val = LLC_CTRL_EN;
if (excl_mode)
val |= LLC_EXCLUSIVE_EN;
mmio_write_32(LLC_CTRL(ap_index), val);
dsbsy();
}
int llc_is_exclusive(int ap_index)
{
uint32_t reg;
reg = mmio_read_32(LLC_CTRL(ap_index));
if ((reg & (LLC_CTRL_EN | LLC_EXCLUSIVE_EN)) ==
(LLC_CTRL_EN | LLC_EXCLUSIVE_EN))
return 1;
return 0;
}
void llc_runtime_enable(int ap_index)
{
uint32_t reg;
reg = mmio_read_32(LLC_CTRL(ap_index));
if (reg & LLC_CTRL_EN)
return;
INFO("Enabling LLC\n");
/*
* Enable L2 UniqueClean evictions with data
* Note: this configuration assumes that LLC is configured
* in exclusive mode.
* Later on in the code this assumption will be validated
*/
ca72_l2_enable_unique_clean();
llc_enable(ap_index, 1);
/* Set point of coherency to DDR.
* This is required by units which have SW cache coherency
*/
reg = mmio_read_32(CCU_HTC_CR(ap_index));
reg |= (0x1 << CCU_SET_POC_OFFSET);
mmio_write_32(CCU_HTC_CR(ap_index), reg);
}

361
drivers/marvell/ccu.c Normal file

@ -0,0 +1,361 @@
/*
* Copyright (C) 2018 Marvell International Ltd.
*
* SPDX-License-Identifier: BSD-3-Clause
* https://spdx.org/licenses
*/
/* CCU unit device driver for Marvell AP806, AP807 and AP810 SoCs */
#include <a8k_common.h>
#include <ccu.h>
#include <debug.h>
#include <mmio.h>
#include <mvebu.h>
#include <mvebu_def.h>
#if LOG_LEVEL >= LOG_LEVEL_INFO
#define DEBUG_ADDR_MAP
#endif
/* common defines */
#define WIN_ENABLE_BIT (0x1)
/* Physical address of the base of the window = {AddrLow[19:0],20h0} */
#define ADDRESS_SHIFT (20 - 4)
#define ADDRESS_MASK (0xFFFFFFF0)
#define CCU_WIN_ALIGNMENT (0x100000)
#define IS_DRAM_TARGET(tgt) ((((tgt) == DRAM_0_TID) || \
((tgt) == DRAM_1_TID) || \
((tgt) == RAR_TID)) ? 1 : 0)
/* For storage of CR, SCR, ALR, AHR and GCR */
static uint32_t ccu_regs_save[MVEBU_CCU_MAX_WINS * 4 + 1];
#ifdef DEBUG_ADDR_MAP
static void dump_ccu(int ap_index)
{
uint32_t win_id, win_cr, alr, ahr;
uint8_t target_id;
uint64_t start, end;
/* Dump all AP windows */
tf_printf("\tbank target start end\n");
tf_printf("\t----------------------------------------------------\n");
for (win_id = 0; win_id < MVEBU_CCU_MAX_WINS; win_id++) {
win_cr = mmio_read_32(CCU_WIN_CR_OFFSET(ap_index, win_id));
if (win_cr & WIN_ENABLE_BIT) {
target_id = (win_cr >> CCU_TARGET_ID_OFFSET) &
CCU_TARGET_ID_MASK;
alr = mmio_read_32(CCU_WIN_ALR_OFFSET(ap_index,
win_id));
ahr = mmio_read_32(CCU_WIN_AHR_OFFSET(ap_index,
win_id));
start = ((uint64_t)alr << ADDRESS_SHIFT);
end = (((uint64_t)ahr + 0x10) << ADDRESS_SHIFT);
tf_printf("\tccu %02x 0x%016llx 0x%016llx\n",
target_id, start, end);
}
}
win_cr = mmio_read_32(CCU_WIN_GCR_OFFSET(ap_index));
target_id = (win_cr >> CCU_GCR_TARGET_OFFSET) & CCU_GCR_TARGET_MASK;
tf_printf("\tccu GCR %d - all other transactions\n", target_id);
}
#endif
void ccu_win_check(struct addr_map_win *win)
{
/* check if address is aligned to 1M */
if (IS_NOT_ALIGN(win->base_addr, CCU_WIN_ALIGNMENT)) {
win->base_addr = ALIGN_UP(win->base_addr, CCU_WIN_ALIGNMENT);
NOTICE("%s: Align up the base address to 0x%llx\n",
__func__, win->base_addr);
}
/* size parameter validity check */
if (IS_NOT_ALIGN(win->win_size, CCU_WIN_ALIGNMENT)) {
win->win_size = ALIGN_UP(win->win_size, CCU_WIN_ALIGNMENT);
NOTICE("%s: Aligning size to 0x%llx\n",
__func__, win->win_size);
}
}
void ccu_enable_win(int ap_index, struct addr_map_win *win, uint32_t win_id)
{
uint32_t ccu_win_reg;
uint32_t alr, ahr;
uint64_t end_addr;
if ((win_id == 0) || (win_id > MVEBU_CCU_MAX_WINS)) {
ERROR("Enabling wrong CCU window %d!\n", win_id);
return;
}
end_addr = (win->base_addr + win->win_size - 1);
alr = (uint32_t)((win->base_addr >> ADDRESS_SHIFT) & ADDRESS_MASK);
ahr = (uint32_t)((end_addr >> ADDRESS_SHIFT) & ADDRESS_MASK);
mmio_write_32(CCU_WIN_ALR_OFFSET(ap_index, win_id), alr);
mmio_write_32(CCU_WIN_AHR_OFFSET(ap_index, win_id), ahr);
ccu_win_reg = WIN_ENABLE_BIT;
ccu_win_reg |= (win->target_id & CCU_TARGET_ID_MASK)
<< CCU_TARGET_ID_OFFSET;
mmio_write_32(CCU_WIN_CR_OFFSET(ap_index, win_id), ccu_win_reg);
}
static void ccu_disable_win(int ap_index, uint32_t win_id)
{
uint32_t win_reg;
if ((win_id == 0) || (win_id > MVEBU_CCU_MAX_WINS)) {
ERROR("Disabling wrong CCU window %d!\n", win_id);
return;
}
win_reg = mmio_read_32(CCU_WIN_CR_OFFSET(ap_index, win_id));
win_reg &= ~WIN_ENABLE_BIT;
mmio_write_32(CCU_WIN_CR_OFFSET(ap_index, win_id), win_reg);
}
/* Insert/Remove temporary window for using the out-of reset default
* CPx base address to access the CP configuration space prior to
* the further base address update in accordance with address mapping
* design.
*
* NOTE: Use the same window array for insertion and removal of
* temporary windows.
*/
void ccu_temp_win_insert(int ap_index, struct addr_map_win *win, int size)
{
uint32_t win_id;
for (int i = 0; i < size; i++) {
win_id = MVEBU_CCU_MAX_WINS - 1 - i;
ccu_win_check(win);
ccu_enable_win(ap_index, win, win_id);
win++;
}
}
/*
* NOTE: Use the same window array for insertion and removal of
* temporary windows.
*/
void ccu_temp_win_remove(int ap_index, struct addr_map_win *win, int size)
{
uint32_t win_id;
for (int i = 0; i < size; i++) {
uint64_t base;
uint32_t target;
win_id = MVEBU_CCU_MAX_WINS - 1 - i;
target = mmio_read_32(CCU_WIN_CR_OFFSET(ap_index, win_id));
target >>= CCU_TARGET_ID_OFFSET;
target &= CCU_TARGET_ID_MASK;
base = mmio_read_32(CCU_WIN_ALR_OFFSET(ap_index, win_id));
base <<= ADDRESS_SHIFT;
if ((win->target_id != target) || (win->base_addr != base)) {
ERROR("%s: Trying to remove bad window-%d!\n",
__func__, win_id);
continue;
}
ccu_disable_win(ap_index, win_id);
win++;
}
}
/* Returns current DRAM window target (DRAM_0_TID, DRAM_1_TID, RAR_TID)
* NOTE: Call only once for each AP.
* The AP0 DRAM window is located at index 2 only at the BL31 execution start.
* Then it is relocated to index 1 to match the rest of the APs' DRAM settings.
* Calling this function after relocation will produce wrong results on AP0
*/
static uint32_t ccu_dram_target_get(int ap_index)
{
/* On BLE stage the AP0 DRAM window is opened by the BootROM at index 2.
* All the rest of detected APs will use window at index 1.
* The AP0 DRAM window is moved from index 2 to 1 during
* init_ccu() execution.
*/
const uint32_t win_id = (ap_index == 0) ? 2 : 1;
uint32_t target;
target = mmio_read_32(CCU_WIN_CR_OFFSET(ap_index, win_id));
target >>= CCU_TARGET_ID_OFFSET;
target &= CCU_TARGET_ID_MASK;
return target;
}
void ccu_dram_target_set(int ap_index, uint32_t target)
{
/* On BLE stage the AP0 DRAM window is opened by the BootROM at index 2.
* All the rest of detected APs will use window at index 1.
* The AP0 DRAM window is moved from index 2 to 1
* during init_ccu() execution.
*/
const uint32_t win_id = (ap_index == 0) ? 2 : 1;
uint32_t dram_cr;
dram_cr = mmio_read_32(CCU_WIN_CR_OFFSET(ap_index, win_id));
dram_cr &= ~(CCU_TARGET_ID_MASK << CCU_TARGET_ID_OFFSET);
dram_cr |= (target & CCU_TARGET_ID_MASK) << CCU_TARGET_ID_OFFSET;
mmio_write_32(CCU_WIN_CR_OFFSET(ap_index, win_id), dram_cr);
}
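`ccu_dram_target_set()` uses the read-modify-write idiom found throughout these drivers: clear the field with its mask shifted to its offset, then OR in the new value. A hedged sketch of the same pattern against a plain variable instead of an MMIO register (the field names and width below are illustrative, not driver definitions):

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative 4-bit field at bit 8 of a 32-bit register image. */
#define FIELD_OFFSET 8
#define FIELD_MASK   0xFu

/* Clear the field, then insert the new value; all other bits are
 * left untouched. */
static uint32_t field_set(uint32_t reg, uint32_t val)
{
	reg &= ~(FIELD_MASK << FIELD_OFFSET);
	reg |= (val & FIELD_MASK) << FIELD_OFFSET;
	return reg;
}

static uint32_t field_get(uint32_t reg)
{
	return (reg >> FIELD_OFFSET) & FIELD_MASK;
}
```

Masking the value before shifting (as the driver does with `target & CCU_TARGET_ID_MASK`) guards against a caller passing a value wider than the field.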
/* Setup CCU DRAM window and enable it */
void ccu_dram_win_config(int ap_index, struct addr_map_win *win)
{
#if IMAGE_BLE /* BLE */
/* On BLE stage the AP0 DRAM window is opened by the BootROM at index 2.
* Since the BootROM does not access DRAM at the BLE stage,
* the DRAM window can be temporarily disabled.
*/
const uint32_t win_id = (ap_index == 0) ? 2 : 1;
#else /* end of BLE */
/* At the init_ccu() execution stage, the DRAM windows of all APs
* are arranged at index 1.
* AP0 still has the old BootROM DRAM window at index 2, so
* window-1 can be safely disabled without breaking DRAM access.
*/
const uint32_t win_id = 1;
#endif
ccu_disable_win(ap_index, win_id);
/* enable write secure (and clear read secure) */
mmio_write_32(CCU_WIN_SCR_OFFSET(ap_index, win_id),
CCU_WIN_ENA_WRITE_SECURE);
ccu_win_check(win);
ccu_enable_win(ap_index, win, win_id);
}
/* Save content of CCU window + GCR */
static void ccu_save_win_range(int ap_id, int win_first,
int win_last, uint32_t *buffer)
{
int win_id, idx;
/* Save CCU */
for (idx = 0, win_id = win_first; win_id <= win_last; win_id++) {
buffer[idx++] = mmio_read_32(CCU_WIN_CR_OFFSET(ap_id, win_id));
buffer[idx++] = mmio_read_32(CCU_WIN_SCR_OFFSET(ap_id, win_id));
buffer[idx++] = mmio_read_32(CCU_WIN_ALR_OFFSET(ap_id, win_id));
buffer[idx++] = mmio_read_32(CCU_WIN_AHR_OFFSET(ap_id, win_id));
}
buffer[idx] = mmio_read_32(CCU_WIN_GCR_OFFSET(ap_id));
}
/* Restore content of CCU window + GCR */
static void ccu_restore_win_range(int ap_id, int win_first,
int win_last, uint32_t *buffer)
{
int win_id, idx;
/* Restore CCU */
for (idx = 0, win_id = win_first; win_id <= win_last; win_id++) {
mmio_write_32(CCU_WIN_CR_OFFSET(ap_id, win_id), buffer[idx++]);
mmio_write_32(CCU_WIN_SCR_OFFSET(ap_id, win_id), buffer[idx++]);
mmio_write_32(CCU_WIN_ALR_OFFSET(ap_id, win_id), buffer[idx++]);
mmio_write_32(CCU_WIN_AHR_OFFSET(ap_id, win_id), buffer[idx++]);
}
mmio_write_32(CCU_WIN_GCR_OFFSET(ap_id), buffer[idx]);
}
void ccu_save_win_all(int ap_id)
{
ccu_save_win_range(ap_id, 0, MVEBU_CCU_MAX_WINS - 1, ccu_regs_save);
}
void ccu_restore_win_all(int ap_id)
{
ccu_restore_win_range(ap_id, 0, MVEBU_CCU_MAX_WINS - 1, ccu_regs_save);
}
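Note the buffer layout implied by the save/restore pair: four words per window (CR, SCR, ALR, AHR) followed by one GCR word, so the backing array (`ccu_regs_save`) must hold at least `4 * MVEBU_CCU_MAX_WINS + 1` entries. A sketch of the round trip over a simulated register file (all names and sizes below are illustrative):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define MAX_WINS     4 /* illustrative window count */
#define REGS_PER_WIN 4 /* CR, SCR, ALR, AHR per window */
#define BUF_WORDS    (MAX_WINS * REGS_PER_WIN + 1) /* + 1 for GCR */

static uint32_t win_regs[BUF_WORDS]; /* simulated CCU register file */

static void ccu_save(uint32_t *buf)
{
	memcpy(buf, win_regs, sizeof(win_regs));
}

static void ccu_restore(const uint32_t *buf)
{
	memcpy(win_regs, buf, sizeof(win_regs));
}

/* Save, clobber (as a low-power state would), restore, then verify. */
static int roundtrip_ok(void)
{
	uint32_t buf[BUF_WORDS];
	int i;

	for (i = 0; i < BUF_WORDS; i++)
		win_regs[i] = 0xA0000000u + (uint32_t)i;
	ccu_save(buf);
	memset(win_regs, 0, sizeof(win_regs));
	ccu_restore(buf);
	for (i = 0; i < BUF_WORDS; i++)
		if (win_regs[i] != 0xA0000000u + (uint32_t)i)
			return 0;
	return 1;
}
```

If the buffer is undersized by even one word, the trailing GCR entry is the value that gets silently corrupted, since it is written at index `4 * num_wins`.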
int init_ccu(int ap_index)
{
struct addr_map_win *win, *dram_win;
uint32_t win_id, win_reg;
uint32_t win_count, array_id;
uint32_t dram_target;
#if IMAGE_BLE
/* In the BootROM context, CCU Window-1
* has the SRAM_TID target and should not be disabled
*/
const uint32_t win_start = 2;
#else
const uint32_t win_start = 1;
#endif
INFO("Initializing CCU Address decoding\n");
/* Get the array of the windows and fill the map data */
marvell_get_ccu_memory_map(ap_index, &win, &win_count);
if (win_count <= 0) {
INFO("No window configurations found\n");
} else if (win_count > (MVEBU_CCU_MAX_WINS - 1)) {
ERROR("CCU mem map array is larger than the number of available windows (%d)\n",
MVEBU_CCU_MAX_WINS);
win_count = MVEBU_CCU_MAX_WINS;
}
/* The GCR must point to DRAM before all CCU windows are disabled,
* in order to secure normal access to the DRAM region that TF-A is
* running from. Once all CCU windows are set, including the dedicated
* DRAM window, the GCR can be switched to the target defined by the
* platform configuration.
*/
dram_target = ccu_dram_target_get(ap_index);
win_reg = (dram_target & CCU_GCR_TARGET_MASK) << CCU_GCR_TARGET_OFFSET;
mmio_write_32(CCU_WIN_GCR_OFFSET(ap_index), win_reg);
/* If the DRAM window was already configured at the BLE stage,
* only the window target is considered valid; the address range should
* be updated according to the platform configuration.
*/
for (dram_win = win, array_id = 0; array_id < win_count;
array_id++, dram_win++) {
if (IS_DRAM_TARGET(dram_win->target_id)) {
dram_win->target_id = dram_target;
break;
}
}
/* Disable all AP CCU windows
* Window-0 is always bypassed since it already contains
* data allowing access to the internal configuration space
*/
for (win_id = win_start; win_id < MVEBU_CCU_MAX_WINS; win_id++) {
ccu_disable_win(ap_index, win_id);
/* enable write secure (and clear read secure) */
mmio_write_32(CCU_WIN_SCR_OFFSET(ap_index, win_id),
CCU_WIN_ENA_WRITE_SECURE);
}
/* win_id is the index of the current ccu window
* array_id is the index of the current memory map window entry
*/
for (win_id = win_start, array_id = 0;
((win_id < MVEBU_CCU_MAX_WINS) && (array_id < win_count));
win_id++) {
ccu_win_check(win);
ccu_enable_win(ap_index, win, win_id);
win++;
array_id++;
}
/* Get & set the default target according to board topology */
win_reg = (marvell_get_ccu_gcr_target(ap_index) & CCU_GCR_TARGET_MASK)
<< CCU_GCR_TARGET_OFFSET;
mmio_write_32(CCU_WIN_GCR_OFFSET(ap_index), win_reg);
#ifdef DEBUG_ADDR_MAP
dump_ccu(ap_index);
#endif
INFO("Done CCU Address decoding initialization\n");
return 0;
}

473
drivers/marvell/comphy.h Normal file

@ -0,0 +1,473 @@
/*
* Copyright (C) 2018 Marvell International Ltd.
*
* SPDX-License-Identifier: BSD-3-Clause
* https://spdx.org/licenses
*/
/* Driver for the COMPHY unit that is part of Marvell A8K SoCs */
#ifndef _COMPHY_H_
#define _COMPHY_H_
/* COMPHY registers */
#define COMMON_PHY_CFG1_REG 0x0
#define COMMON_PHY_CFG1_PWR_UP_OFFSET 1
#define COMMON_PHY_CFG1_PWR_UP_MASK \
(0x1 << COMMON_PHY_CFG1_PWR_UP_OFFSET)
#define COMMON_PHY_CFG1_PIPE_SELECT_OFFSET 2
#define COMMON_PHY_CFG1_PIPE_SELECT_MASK \
(0x1 << COMMON_PHY_CFG1_PIPE_SELECT_OFFSET)
#define COMMON_PHY_CFG1_PWR_ON_RESET_OFFSET 13
#define COMMON_PHY_CFG1_PWR_ON_RESET_MASK \
(0x1 << COMMON_PHY_CFG1_PWR_ON_RESET_OFFSET)
#define COMMON_PHY_CFG1_CORE_RSTN_OFFSET 14
#define COMMON_PHY_CFG1_CORE_RSTN_MASK \
(0x1 << COMMON_PHY_CFG1_CORE_RSTN_OFFSET)
#define COMMON_PHY_PHY_MODE_OFFSET 15
#define COMMON_PHY_PHY_MODE_MASK \
(0x1 << COMMON_PHY_PHY_MODE_OFFSET)
#define COMMON_SELECTOR_PHY_OFFSET 0x140
#define COMMON_SELECTOR_PIPE_OFFSET 0x144
#define COMMON_PHY_SD_CTRL1 0x148
#define COMMON_PHY_SD_CTRL1_COMPHY_0_4_PORT_OFFSET 0
#define COMMON_PHY_SD_CTRL1_COMPHY_0_4_PORT_MASK 0xFFFF
#define COMMON_PHY_SD_CTRL1_PCIE_X4_EN_OFFSET 24
#define COMMON_PHY_SD_CTRL1_PCIE_X4_EN_MASK \
(0x1 << COMMON_PHY_SD_CTRL1_PCIE_X4_EN_OFFSET)
#define COMMON_PHY_SD_CTRL1_PCIE_X2_EN_OFFSET 25
#define COMMON_PHY_SD_CTRL1_PCIE_X2_EN_MASK \
(0x1 << COMMON_PHY_SD_CTRL1_PCIE_X2_EN_OFFSET)
#define DFX_DEV_GEN_CTRL12 0x80
#define DFX_DEV_GEN_PCIE_CLK_SRC_OFFSET 7
#define DFX_DEV_GEN_PCIE_CLK_SRC_MASK \
(0x3 << DFX_DEV_GEN_PCIE_CLK_SRC_OFFSET)
/* HPIPE register */
#define HPIPE_PWR_PLL_REG 0x4
#define HPIPE_PWR_PLL_REF_FREQ_OFFSET 0
#define HPIPE_PWR_PLL_REF_FREQ_MASK \
(0x1f << HPIPE_PWR_PLL_REF_FREQ_OFFSET)
#define HPIPE_PWR_PLL_PHY_MODE_OFFSET 5
#define HPIPE_PWR_PLL_PHY_MODE_MASK \
(0x7 << HPIPE_PWR_PLL_PHY_MODE_OFFSET)
#define HPIPE_DFE_REG0 0x01C
#define HPIPE_DFE_RES_FORCE_OFFSET 15
#define HPIPE_DFE_RES_FORCE_MASK \
(0x1 << HPIPE_DFE_RES_FORCE_OFFSET)
#define HPIPE_G2_SET_1_REG 0x040
#define HPIPE_G2_SET_1_G2_RX_SELMUPI_OFFSET 0
#define HPIPE_G2_SET_1_G2_RX_SELMUPI_MASK \
(0x7 << HPIPE_G2_SET_1_G2_RX_SELMUPI_OFFSET)
#define HPIPE_G2_SET_1_G2_RX_SELMUPP_OFFSET 3
#define HPIPE_G2_SET_1_G2_RX_SELMUPP_MASK \
(0x7 << HPIPE_G2_SET_1_G2_RX_SELMUPP_OFFSET)
#define HPIPE_G2_SET_1_G2_RX_SELMUFI_OFFSET 6
#define HPIPE_G2_SET_1_G2_RX_SELMUFI_MASK \
(0x3 << HPIPE_G2_SET_1_G2_RX_SELMUFI_OFFSET)
#define HPIPE_G3_SETTINGS_1_REG 0x048
#define HPIPE_G3_RX_SELMUPI_OFFSET 0
#define HPIPE_G3_RX_SELMUPI_MASK \
(0x7 << HPIPE_G3_RX_SELMUPI_OFFSET)
#define HPIPE_G3_RX_SELMUPF_OFFSET 3
#define HPIPE_G3_RX_SELMUPF_MASK \
(0x7 << HPIPE_G3_RX_SELMUPF_OFFSET)
#define HPIPE_G3_SETTING_BIT_OFFSET 13
#define HPIPE_G3_SETTING_BIT_MASK \
(0x1 << HPIPE_G3_SETTING_BIT_OFFSET)
#define HPIPE_INTERFACE_REG 0x94
#define HPIPE_INTERFACE_GEN_MAX_OFFSET 10
#define HPIPE_INTERFACE_GEN_MAX_MASK \
(0x3 << HPIPE_INTERFACE_GEN_MAX_OFFSET)
#define HPIPE_INTERFACE_DET_BYPASS_OFFSET 12
#define HPIPE_INTERFACE_DET_BYPASS_MASK \
(0x1 << HPIPE_INTERFACE_DET_BYPASS_OFFSET)
#define HPIPE_INTERFACE_LINK_TRAIN_OFFSET 14
#define HPIPE_INTERFACE_LINK_TRAIN_MASK \
(0x1 << HPIPE_INTERFACE_LINK_TRAIN_OFFSET)
#define HPIPE_VDD_CAL_CTRL_REG 0x114
#define HPIPE_EXT_SELLV_RXSAMPL_OFFSET 5
#define HPIPE_EXT_SELLV_RXSAMPL_MASK \
(0x1f << HPIPE_EXT_SELLV_RXSAMPL_OFFSET)
#define HPIPE_PCIE_REG0 0x120
#define HPIPE_PCIE_IDLE_SYNC_OFFSET 12
#define HPIPE_PCIE_IDLE_SYNC_MASK \
(0x1 << HPIPE_PCIE_IDLE_SYNC_OFFSET)
#define HPIPE_PCIE_SEL_BITS_OFFSET 13
#define HPIPE_PCIE_SEL_BITS_MASK \
(0x3 << HPIPE_PCIE_SEL_BITS_OFFSET)
#define HPIPE_LANE_ALIGN_REG 0x124
#define HPIPE_LANE_ALIGN_OFF_OFFSET 12
#define HPIPE_LANE_ALIGN_OFF_MASK \
(0x1 << HPIPE_LANE_ALIGN_OFF_OFFSET)
#define HPIPE_MISC_REG 0x13C
#define HPIPE_MISC_CLK100M_125M_OFFSET 4
#define HPIPE_MISC_CLK100M_125M_MASK \
(0x1 << HPIPE_MISC_CLK100M_125M_OFFSET)
#define HPIPE_MISC_ICP_FORCE_OFFSET 5
#define HPIPE_MISC_ICP_FORCE_MASK \
(0x1 << HPIPE_MISC_ICP_FORCE_OFFSET)
#define HPIPE_MISC_TXDCLK_2X_OFFSET 6
#define HPIPE_MISC_TXDCLK_2X_MASK \
(0x1 << HPIPE_MISC_TXDCLK_2X_OFFSET)
#define HPIPE_MISC_CLK500_EN_OFFSET 7
#define HPIPE_MISC_CLK500_EN_MASK \
(0x1 << HPIPE_MISC_CLK500_EN_OFFSET)
#define HPIPE_MISC_REFCLK_SEL_OFFSET 10
#define HPIPE_MISC_REFCLK_SEL_MASK \
(0x1 << HPIPE_MISC_REFCLK_SEL_OFFSET)
#define HPIPE_SAMPLER_N_PROC_CALIB_CTRL_REG 0x16C
#define HPIPE_SMAPLER_OFFSET 12
#define HPIPE_SMAPLER_MASK (0x1 << HPIPE_SMAPLER_OFFSET)
#define HPIPE_PWR_CTR_DTL_REG 0x184
#define HPIPE_PWR_CTR_DTL_FLOOP_EN_OFFSET 2
#define HPIPE_PWR_CTR_DTL_FLOOP_EN_MASK \
(0x1 << HPIPE_PWR_CTR_DTL_FLOOP_EN_OFFSET)
#define HPIPE_FRAME_DET_CONTROL_REG 0x220
#define HPIPE_FRAME_DET_LOCK_LOST_TO_OFFSET 12
#define HPIPE_FRAME_DET_LOCK_LOST_TO_MASK \
(0x1 << HPIPE_FRAME_DET_LOCK_LOST_TO_OFFSET)
#define HPIPE_TX_TRAIN_CTRL_0_REG 0x268
#define HPIPE_TX_TRAIN_P2P_HOLD_OFFSET 15
#define HPIPE_TX_TRAIN_P2P_HOLD_MASK \
(0x1 << HPIPE_TX_TRAIN_P2P_HOLD_OFFSET)
#define HPIPE_TX_TRAIN_CTRL_REG 0x26C
#define HPIPE_TX_TRAIN_CTRL_G1_OFFSET 0
#define HPIPE_TX_TRAIN_CTRL_G1_MASK \
(0x1 << HPIPE_TX_TRAIN_CTRL_G1_OFFSET)
#define HPIPE_TX_TRAIN_CTRL_GN1_OFFSET 1
#define HPIPE_TX_TRAIN_CTRL_GN1_MASK \
(0x1 << HPIPE_TX_TRAIN_CTRL_GN1_OFFSET)
#define HPIPE_TX_TRAIN_CTRL_G0_OFFSET 2
#define HPIPE_TX_TRAIN_CTRL_G0_MASK \
(0x1 << HPIPE_TX_TRAIN_CTRL_G0_OFFSET)
#define HPIPE_TX_TRAIN_CTRL_4_REG 0x278
#define HPIPE_TRX_TRAIN_TIMER_OFFSET 0
#define HPIPE_TRX_TRAIN_TIMER_MASK \
(0x3FF << HPIPE_TRX_TRAIN_TIMER_OFFSET)
#define HPIPE_TX_TRAIN_CTRL_5_REG 0x2A4
#define HPIPE_TX_TRAIN_START_SQ_EN_OFFSET 11
#define HPIPE_TX_TRAIN_START_SQ_EN_MASK \
(0x1 << HPIPE_TX_TRAIN_START_SQ_EN_OFFSET)
#define HPIPE_TX_TRAIN_START_FRM_DET_EN_OFFSET 12
#define HPIPE_TX_TRAIN_START_FRM_DET_EN_MASK \
(0x1 << HPIPE_TX_TRAIN_START_FRM_DET_EN_OFFSET)
#define HPIPE_TX_TRAIN_START_FRM_LOCK_EN_OFFSET 13
#define HPIPE_TX_TRAIN_START_FRM_LOCK_EN_MASK \
(0x1 << HPIPE_TX_TRAIN_START_FRM_LOCK_EN_OFFSET)
#define HPIPE_TX_TRAIN_WAIT_TIME_EN_OFFSET 14
#define HPIPE_TX_TRAIN_WAIT_TIME_EN_MASK \
(0x1 << HPIPE_TX_TRAIN_WAIT_TIME_EN_OFFSET)
#define HPIPE_TX_TRAIN_REG 0x31C
#define HPIPE_TX_TRAIN_CHK_INIT_OFFSET 4
#define HPIPE_TX_TRAIN_CHK_INIT_MASK \
(0x1 << HPIPE_TX_TRAIN_CHK_INIT_OFFSET)
#define HPIPE_TX_TRAIN_COE_FM_PIN_PCIE3_OFFSET 7
#define HPIPE_TX_TRAIN_COE_FM_PIN_PCIE3_MASK \
(0x1 << HPIPE_TX_TRAIN_COE_FM_PIN_PCIE3_OFFSET)
#define HPIPE_CDR_CONTROL_REG 0x418
#define HPIPE_CDR_RX_MAX_DFE_ADAPT_0_OFFSET 14
#define HPIPE_CDR_RX_MAX_DFE_ADAPT_0_MASK \
(0x3 << HPIPE_CDR_RX_MAX_DFE_ADAPT_0_OFFSET)
#define HPIPE_CDR_RX_MAX_DFE_ADAPT_1_OFFSET 12
#define HPIPE_CDR_RX_MAX_DFE_ADAPT_1_MASK \
(0x3 << HPIPE_CDR_RX_MAX_DFE_ADAPT_1_OFFSET)
#define HPIPE_CDR_MAX_DFE_ADAPT_0_OFFSET 9
#define HPIPE_CDR_MAX_DFE_ADAPT_0_MASK \
(0x7 << HPIPE_CDR_MAX_DFE_ADAPT_0_OFFSET)
#define HPIPE_CDR_MAX_DFE_ADAPT_1_OFFSET 6
#define HPIPE_CDR_MAX_DFE_ADAPT_1_MASK \
(0x7 << HPIPE_CDR_MAX_DFE_ADAPT_1_OFFSET)
#define HPIPE_TX_TRAIN_CTRL_11_REG 0x438
#define HPIPE_TX_STATUS_CHECK_MODE_OFFSET 6
#define HPIPE_TX_TX_STATUS_CHECK_MODE_MASK \
(0x1 << HPIPE_TX_STATUS_CHECK_MODE_OFFSET)
#define HPIPE_TX_NUM_OF_PRESET_OFFSET 10
#define HPIPE_TX_NUM_OF_PRESET_MASK \
(0x7 << HPIPE_TX_NUM_OF_PRESET_OFFSET)
#define HPIPE_TX_SWEEP_PRESET_EN_OFFSET 15
#define HPIPE_TX_SWEEP_PRESET_EN_MASK \
(0x1 << HPIPE_TX_SWEEP_PRESET_EN_OFFSET)
#define HPIPE_G2_SETTINGS_4_REG 0x44C
#define HPIPE_G2_DFE_RES_OFFSET 8
#define HPIPE_G2_DFE_RES_MASK (0x3 << HPIPE_G2_DFE_RES_OFFSET)
#define HPIPE_G3_SETTING_3_REG 0x450
#define HPIPE_G3_FFE_CAP_SEL_OFFSET 0
#define HPIPE_G3_FFE_CAP_SEL_MASK \
(0xf << HPIPE_G3_FFE_CAP_SEL_OFFSET)
#define HPIPE_G3_FFE_RES_SEL_OFFSET 4
#define HPIPE_G3_FFE_RES_SEL_MASK \
(0x7 << HPIPE_G3_FFE_RES_SEL_OFFSET)
#define HPIPE_G3_FFE_SETTING_FORCE_OFFSET 7
#define HPIPE_G3_FFE_SETTING_FORCE_MASK \
(0x1 << HPIPE_G3_FFE_SETTING_FORCE_OFFSET)
#define HPIPE_G3_FFE_DEG_RES_LEVEL_OFFSET 12
#define HPIPE_G3_FFE_DEG_RES_LEVEL_MASK \
(0x3 << HPIPE_G3_FFE_DEG_RES_LEVEL_OFFSET)
#define HPIPE_G3_FFE_LOAD_RES_LEVEL_OFFSET 14
#define HPIPE_G3_FFE_LOAD_RES_LEVEL_MASK \
(0x3 << HPIPE_G3_FFE_LOAD_RES_LEVEL_OFFSET)
#define HPIPE_G3_SETTING_4_REG 0x454
#define HPIPE_G3_DFE_RES_OFFSET 8
#define HPIPE_G3_DFE_RES_MASK (0x3 << HPIPE_G3_DFE_RES_OFFSET)
#define HPIPE_DFE_CONTROL_REG 0x470
#define HPIPE_DFE_TX_MAX_DFE_ADAPT_OFFSET 14
#define HPIPE_DFE_TX_MAX_DFE_ADAPT_MASK \
(0x3 << HPIPE_DFE_TX_MAX_DFE_ADAPT_OFFSET)
#define HPIPE_DFE_CTRL_28_REG 0x49C
#define HPIPE_DFE_CTRL_28_PIPE4_OFFSET 7
#define HPIPE_DFE_CTRL_28_PIPE4_MASK \
(0x1 << HPIPE_DFE_CTRL_28_PIPE4_OFFSET)
#define HPIPE_G3_SETTING_5_REG 0x548
#define HPIPE_G3_SETTING_5_G3_ICP_OFFSET 0
#define HPIPE_G3_SETTING_5_G3_ICP_MASK \
(0xf << HPIPE_G3_SETTING_5_G3_ICP_OFFSET)
#define HPIPE_LANE_STATUS1_REG 0x60C
#define HPIPE_LANE_STATUS1_PCLK_EN_OFFSET 0
#define HPIPE_LANE_STATUS1_PCLK_EN_MASK \
(0x1 << HPIPE_LANE_STATUS1_PCLK_EN_OFFSET)
#define HPIPE_LANE_CFG4_REG 0x620
#define HPIPE_LANE_CFG4_DFE_EN_SEL_OFFSET 3
#define HPIPE_LANE_CFG4_DFE_EN_SEL_MASK \
(0x1 << HPIPE_LANE_CFG4_DFE_EN_SEL_OFFSET)
#define HPIPE_LANE_EQU_CONFIG_0_REG 0x69C
#define HPIPE_CFG_EQ_FS_OFFSET 0
#define HPIPE_CFG_EQ_FS_MASK (0x3f << HPIPE_CFG_EQ_FS_OFFSET)
#define HPIPE_CFG_EQ_LF_OFFSET 6
#define HPIPE_CFG_EQ_LF_MASK (0x3f << HPIPE_CFG_EQ_LF_OFFSET)
#define HPIPE_CFG_PHY_RC_EP_OFFSET 12
#define HPIPE_CFG_PHY_RC_EP_MASK \
(0x1 << HPIPE_CFG_PHY_RC_EP_OFFSET)
#define HPIPE_LANE_EQ_CFG1_REG 0x6a0
#define HPIPE_CFG_UPDATE_POLARITY_OFFSET 12
#define HPIPE_CFG_UPDATE_POLARITY_MASK \
(0x1 << HPIPE_CFG_UPDATE_POLARITY_OFFSET)
#define HPIPE_LANE_EQ_CFG2_REG 0x6a4
#define HPIPE_CFG_EQ_BUNDLE_DIS_OFFSET 14
#define HPIPE_CFG_EQ_BUNDLE_DIS_MASK \
(0x1 << HPIPE_CFG_EQ_BUNDLE_DIS_OFFSET)
#define HPIPE_LANE_PRESET_CFG0_REG 0x6a8
#define HPIPE_CFG_CURSOR_PRESET0_OFFSET 0
#define HPIPE_CFG_CURSOR_PRESET0_MASK \
(0x3f << HPIPE_CFG_CURSOR_PRESET0_OFFSET)
#define HPIPE_CFG_CURSOR_PRESET1_OFFSET 6
#define HPIPE_CFG_CURSOR_PRESET1_MASK \
(0x3f << HPIPE_CFG_CURSOR_PRESET1_OFFSET)
#define HPIPE_LANE_PRESET_CFG1_REG 0x6ac
#define HPIPE_CFG_CURSOR_PRESET2_OFFSET 0
#define HPIPE_CFG_CURSOR_PRESET2_MASK \
(0x3f << HPIPE_CFG_CURSOR_PRESET2_OFFSET)
#define HPIPE_CFG_CURSOR_PRESET3_OFFSET 6
#define HPIPE_CFG_CURSOR_PRESET3_MASK \
(0x3f << HPIPE_CFG_CURSOR_PRESET3_OFFSET)
#define HPIPE_LANE_PRESET_CFG2_REG 0x6b0
#define HPIPE_CFG_CURSOR_PRESET4_OFFSET 0
#define HPIPE_CFG_CURSOR_PRESET4_MASK \
(0x3f << HPIPE_CFG_CURSOR_PRESET4_OFFSET)
#define HPIPE_CFG_CURSOR_PRESET5_OFFSET 6
#define HPIPE_CFG_CURSOR_PRESET5_MASK \
(0x3f << HPIPE_CFG_CURSOR_PRESET5_OFFSET)
#define HPIPE_LANE_PRESET_CFG3_REG 0x6b4
#define HPIPE_CFG_CURSOR_PRESET6_OFFSET 0
#define HPIPE_CFG_CURSOR_PRESET6_MASK \
(0x3f << HPIPE_CFG_CURSOR_PRESET6_OFFSET)
#define HPIPE_CFG_CURSOR_PRESET7_OFFSET 6
#define HPIPE_CFG_CURSOR_PRESET7_MASK \
(0x3f << HPIPE_CFG_CURSOR_PRESET7_OFFSET)
#define HPIPE_LANE_PRESET_CFG4_REG 0x6b8
#define HPIPE_CFG_CURSOR_PRESET8_OFFSET 0
#define HPIPE_CFG_CURSOR_PRESET8_MASK \
(0x3f << HPIPE_CFG_CURSOR_PRESET8_OFFSET)
#define HPIPE_CFG_CURSOR_PRESET9_OFFSET 6
#define HPIPE_CFG_CURSOR_PRESET9_MASK \
(0x3f << HPIPE_CFG_CURSOR_PRESET9_OFFSET)
#define HPIPE_LANE_PRESET_CFG5_REG 0x6bc
#define HPIPE_CFG_CURSOR_PRESET10_OFFSET 0
#define HPIPE_CFG_CURSOR_PRESET10_MASK \
(0x3f << HPIPE_CFG_CURSOR_PRESET10_OFFSET)
#define HPIPE_CFG_CURSOR_PRESET11_OFFSET 6
#define HPIPE_CFG_CURSOR_PRESET11_MASK \
(0x3f << HPIPE_CFG_CURSOR_PRESET11_OFFSET)
#define HPIPE_LANE_PRESET_CFG6_REG 0x6c0
#define HPIPE_CFG_PRE_CURSOR_PRESET0_OFFSET 0
#define HPIPE_CFG_PRE_CURSOR_PRESET0_MASK \
(0x3f << HPIPE_CFG_PRE_CURSOR_PRESET0_OFFSET)
#define HPIPE_CFG_POST_CURSOR_PRESET0_OFFSET 6
#define HPIPE_CFG_POST_CURSOR_PRESET0_MASK \
(0x3f << HPIPE_CFG_POST_CURSOR_PRESET0_OFFSET)
#define HPIPE_LANE_PRESET_CFG7_REG 0x6c4
#define HPIPE_CFG_PRE_CURSOR_PRESET1_OFFSET 0
#define HPIPE_CFG_PRE_CURSOR_PRESET1_MASK \
(0x3f << HPIPE_CFG_PRE_CURSOR_PRESET1_OFFSET)
#define HPIPE_CFG_POST_CURSOR_PRESET1_OFFSET 6
#define HPIPE_CFG_POST_CURSOR_PRESET1_MASK \
(0x3f << HPIPE_CFG_POST_CURSOR_PRESET1_OFFSET)
#define HPIPE_LANE_PRESET_CFG8_REG 0x6c8
#define HPIPE_CFG_PRE_CURSOR_PRESET2_OFFSET 0
#define HPIPE_CFG_PRE_CURSOR_PRESET2_MASK \
(0x3f << HPIPE_CFG_PRE_CURSOR_PRESET2_OFFSET)
#define HPIPE_CFG_POST_CURSOR_PRESET2_OFFSET 6
#define HPIPE_CFG_POST_CURSOR_PRESET2_MASK \
(0x3f << HPIPE_CFG_POST_CURSOR_PRESET2_OFFSET)
#define HPIPE_LANE_PRESET_CFG9_REG 0x6cc
#define HPIPE_CFG_PRE_CURSOR_PRESET3_OFFSET 0
#define HPIPE_CFG_PRE_CURSOR_PRESET3_MASK \
(0x3f << HPIPE_CFG_PRE_CURSOR_PRESET3_OFFSET)
#define HPIPE_CFG_POST_CURSOR_PRESET3_OFFSET 6
#define HPIPE_CFG_POST_CURSOR_PRESET3_MASK \
(0x3f << HPIPE_CFG_POST_CURSOR_PRESET3_OFFSET)
#define HPIPE_LANE_PRESET_CFG10_REG 0x6d0
#define HPIPE_CFG_PRE_CURSOR_PRESET4_OFFSET 0
#define HPIPE_CFG_PRE_CURSOR_PRESET4_MASK \
(0x3f << HPIPE_CFG_PRE_CURSOR_PRESET4_OFFSET)
#define HPIPE_CFG_POST_CURSOR_PRESET4_OFFSET 6
#define HPIPE_CFG_POST_CURSOR_PRESET4_MASK \
(0x3f << HPIPE_CFG_POST_CURSOR_PRESET4_OFFSET)
#define HPIPE_LANE_PRESET_CFG11_REG 0x6d4
#define HPIPE_CFG_PRE_CURSOR_PRESET5_OFFSET 0
#define HPIPE_CFG_PRE_CURSOR_PRESET5_MASK \
(0x3f << HPIPE_CFG_PRE_CURSOR_PRESET5_OFFSET)
#define HPIPE_CFG_POST_CURSOR_PRESET5_OFFSET 6
#define HPIPE_CFG_POST_CURSOR_PRESET5_MASK \
(0x3f << HPIPE_CFG_POST_CURSOR_PRESET5_OFFSET)
#define HPIPE_LANE_PRESET_CFG12_REG 0x6d8
#define HPIPE_CFG_PRE_CURSOR_PRESET6_OFFSET 0
#define HPIPE_CFG_PRE_CURSOR_PRESET6_MASK \
(0x3f << HPIPE_CFG_PRE_CURSOR_PRESET6_OFFSET)
#define HPIPE_CFG_POST_CURSOR_PRESET6_OFFSET 6
#define HPIPE_CFG_POST_CURSOR_PRESET6_MASK \
(0x3f << HPIPE_CFG_POST_CURSOR_PRESET6_OFFSET)
#define HPIPE_LANE_PRESET_CFG13_REG 0x6dc
#define HPIPE_CFG_PRE_CURSOR_PRESET7_OFFSET 0
#define HPIPE_CFG_PRE_CURSOR_PRESET7_MASK \
(0x3f << HPIPE_CFG_PRE_CURSOR_PRESET7_OFFSET)
#define HPIPE_CFG_POST_CURSOR_PRESET7_OFFSET 6
#define HPIPE_CFG_POST_CURSOR_PRESET7_MASK \
(0x3f << HPIPE_CFG_POST_CURSOR_PRESET7_OFFSET)
#define HPIPE_LANE_PRESET_CFG14_REG 0x6e0
#define HPIPE_CFG_PRE_CURSOR_PRESET8_OFFSET 0
#define HPIPE_CFG_PRE_CURSOR_PRESET8_MASK \
(0x3f << HPIPE_CFG_PRE_CURSOR_PRESET8_OFFSET)
#define HPIPE_CFG_POST_CURSOR_PRESET8_OFFSET 6
#define HPIPE_CFG_POST_CURSOR_PRESET8_MASK \
(0x3f << HPIPE_CFG_POST_CURSOR_PRESET8_OFFSET)
#define HPIPE_LANE_PRESET_CFG15_REG 0x6e4
#define HPIPE_CFG_PRE_CURSOR_PRESET9_OFFSET 0
#define HPIPE_CFG_PRE_CURSOR_PRESET9_MASK \
(0x3f << HPIPE_CFG_PRE_CURSOR_PRESET9_OFFSET)
#define HPIPE_CFG_POST_CURSOR_PRESET9_OFFSET 6
#define HPIPE_CFG_POST_CURSOR_PRESET9_MASK \
(0x3f << HPIPE_CFG_POST_CURSOR_PRESET9_OFFSET)
#define HPIPE_LANE_PRESET_CFG16_REG 0x6e8
#define HPIPE_CFG_PRE_CURSOR_PRESET10_OFFSET 0
#define HPIPE_CFG_PRE_CURSOR_PRESET10_MASK \
(0x3f << HPIPE_CFG_PRE_CURSOR_PRESET10_OFFSET)
#define HPIPE_CFG_POST_CURSOR_PRESET10_OFFSET 6
#define HPIPE_CFG_POST_CURSOR_PRESET10_MASK \
(0x3f << HPIPE_CFG_POST_CURSOR_PRESET10_OFFSET)
#define HPIPE_LANE_EQ_REMOTE_SETTING_REG 0x6f8
#define HPIPE_LANE_CFG_FOM_DIRN_OVERRIDE_OFFSET 0
#define HPIPE_LANE_CFG_FOM_DIRN_OVERRIDE_MASK \
(0x1 << HPIPE_LANE_CFG_FOM_DIRN_OVERRIDE_OFFSET)
#define HPIPE_LANE_CFG_FOM_ONLY_MODE_OFFFSET 1
#define HPIPE_LANE_CFG_FOM_ONLY_MODE_MASK \
(0x1 << HPIPE_LANE_CFG_FOM_ONLY_MODE_OFFFSET)
#define HPIPE_LANE_CFG_FOM_PRESET_VECTOR_OFFSET 2
#define HPIPE_LANE_CFG_FOM_PRESET_VECTOR_MASK \
(0xf << HPIPE_LANE_CFG_FOM_PRESET_VECTOR_OFFSET)
#define HPIPE_RST_CLK_CTRL_REG 0x704
#define HPIPE_RST_CLK_CTRL_PIPE_RST_OFFSET 0
#define HPIPE_RST_CLK_CTRL_PIPE_RST_MASK \
(0x1 << HPIPE_RST_CLK_CTRL_PIPE_RST_OFFSET)
#define HPIPE_RST_CLK_CTRL_FIXED_PCLK_OFFSET 2
#define HPIPE_RST_CLK_CTRL_FIXED_PCLK_MASK \
(0x1 << HPIPE_RST_CLK_CTRL_FIXED_PCLK_OFFSET)
#define HPIPE_RST_CLK_CTRL_PIPE_WIDTH_OFFSET 3
#define HPIPE_RST_CLK_CTRL_PIPE_WIDTH_MASK \
(0x1 << HPIPE_RST_CLK_CTRL_PIPE_WIDTH_OFFSET)
#define HPIPE_RST_CLK_CTRL_CORE_FREQ_SEL_OFFSET 9
#define HPIPE_RST_CLK_CTRL_CORE_FREQ_SEL_MASK \
(0x1 << HPIPE_RST_CLK_CTRL_CORE_FREQ_SEL_OFFSET)
#define HPIPE_CLK_SRC_LO_REG 0x70c
#define HPIPE_CLK_SRC_LO_BUNDLE_PERIOD_SEL_OFFSET 1
#define HPIPE_CLK_SRC_LO_BUNDLE_PERIOD_SEL_MASK \
(0x1 << HPIPE_CLK_SRC_LO_BUNDLE_PERIOD_SEL_OFFSET)
#define HPIPE_CLK_SRC_LO_BUNDLE_PERIOD_SCALE_OFFSET 2
#define HPIPE_CLK_SRC_LO_BUNDLE_PERIOD_SCALE_MASK \
(0x3 << HPIPE_CLK_SRC_LO_BUNDLE_PERIOD_SCALE_OFFSET)
#define HPIPE_CLK_SRC_LO_PLL_RDY_DL_OFFSET 5
#define HPIPE_CLK_SRC_LO_PLL_RDY_DL_MASK \
(0x7 << HPIPE_CLK_SRC_LO_PLL_RDY_DL_OFFSET)
#define HPIPE_CLK_SRC_HI_REG 0x710
#define HPIPE_CLK_SRC_HI_LANE_STRT_OFFSET 0
#define HPIPE_CLK_SRC_HI_LANE_STRT_MASK \
(0x1 << HPIPE_CLK_SRC_HI_LANE_STRT_OFFSET)
#define HPIPE_CLK_SRC_HI_LANE_BREAK_OFFSET 1
#define HPIPE_CLK_SRC_HI_LANE_BREAK_MASK \
(0x1 << HPIPE_CLK_SRC_HI_LANE_BREAK_OFFSET)
#define HPIPE_CLK_SRC_HI_LANE_MASTER_OFFSET 2
#define HPIPE_CLK_SRC_HI_LANE_MASTER_MASK \
(0x1 << HPIPE_CLK_SRC_HI_LANE_MASTER_OFFSET)
#define HPIPE_CLK_SRC_HI_MODE_PIPE_OFFSET 7
#define HPIPE_CLK_SRC_HI_MODE_PIPE_MASK \
(0x1 << HPIPE_CLK_SRC_HI_MODE_PIPE_OFFSET)
#define HPIPE_GLOBAL_PM_CTRL 0x740
#define HPIPE_GLOBAL_PM_RXDLOZ_WAIT_OFFSET 0
#define HPIPE_GLOBAL_PM_RXDLOZ_WAIT_MASK \
(0xFF << HPIPE_GLOBAL_PM_RXDLOZ_WAIT_OFFSET)
#endif /* _COMPHY_H_ */
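Unlike the CCU driver's unshifted masks, the `*_MASK` macros in this header are pre-shifted (each already includes its `*_OFFSET`), so a caller clears with the mask directly and shifts only the value. A hedged sketch of that idiom, reusing two of the definitions above with a plain variable standing in for the MMIO register:

```c
#include <assert.h>
#include <stdint.h>

/* Reproduced from the header above. */
#define HPIPE_PWR_PLL_REF_FREQ_OFFSET 0
#define HPIPE_PWR_PLL_REF_FREQ_MASK \
	(0x1f << HPIPE_PWR_PLL_REF_FREQ_OFFSET)
#define HPIPE_PWR_PLL_PHY_MODE_OFFSET 5
#define HPIPE_PWR_PLL_PHY_MODE_MASK \
	(0x7 << HPIPE_PWR_PLL_PHY_MODE_OFFSET)

/* Read-modify-write with a pre-shifted mask: clear with the mask,
 * then OR in the shifted (and re-masked) value. */
static uint32_t reg_set_field(uint32_t reg, uint32_t mask,
			      uint32_t offset, uint32_t val)
{
	reg &= ~mask;
	reg |= (val << offset) & mask;
	return reg;
}
```

The helper name `reg_set_field` is illustrative; the actual COMPHY driver performs the same clear-then-set sequence inline on the result of `mmio_read_32()`.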


@ -0,0 +1,775 @@
/*
* Copyright (C) 2018 Marvell International Ltd.
*
* SPDX-License-Identifier: BSD-3-Clause
* https://spdx.org/licenses
*/
/* Marvell CP110 SoC COMPHY unit driver */
#ifndef _PHY_COMPHY_CP110_H
#define _PHY_COMPHY_CP110_H
#define SD_ADDR(base, lane) (base + 0x1000 * lane)
#define HPIPE_ADDR(base, lane) (SD_ADDR(base, lane) + 0x800)
#define COMPHY_ADDR(base, lane) (base + 0x28 * lane)
#define MAX_NUM_OF_FFE 8
#define RX_TRAINING_TIMEOUT 500
/* Comphy registers */
#define COMMON_PHY_CFG1_REG 0x0
#define COMMON_PHY_CFG1_PWR_UP_OFFSET 1
#define COMMON_PHY_CFG1_PWR_UP_MASK \
(0x1 << COMMON_PHY_CFG1_PWR_UP_OFFSET)
#define COMMON_PHY_CFG1_PIPE_SELECT_OFFSET 2
#define COMMON_PHY_CFG1_PIPE_SELECT_MASK \
(0x1 << COMMON_PHY_CFG1_PIPE_SELECT_OFFSET)
#define COMMON_PHY_CFG1_CORE_RSTN_OFFSET 13
#define COMMON_PHY_CFG1_CORE_RSTN_MASK \
(0x1 << COMMON_PHY_CFG1_CORE_RSTN_OFFSET)
#define COMMON_PHY_CFG1_PWR_ON_RESET_OFFSET 14
#define COMMON_PHY_CFG1_PWR_ON_RESET_MASK \
(0x1 << COMMON_PHY_CFG1_PWR_ON_RESET_OFFSET)
#define COMMON_PHY_PHY_MODE_OFFSET 15
#define COMMON_PHY_PHY_MODE_MASK \
(0x1 << COMMON_PHY_PHY_MODE_OFFSET)
#define COMMON_PHY_CFG6_REG 0x14
#define COMMON_PHY_CFG6_IF_40_SEL_OFFSET 18
#define COMMON_PHY_CFG6_IF_40_SEL_MASK \
(0x1 << COMMON_PHY_CFG6_IF_40_SEL_OFFSET)
#define COMMON_SELECTOR_PHY_REG_OFFSET 0x140
#define COMMON_SELECTOR_PIPE_REG_OFFSET 0x144
#define COMMON_SELECTOR_COMPHY_MASK 0xf
#define COMMON_SELECTOR_COMPHYN_FIELD_WIDTH 4
#define COMMON_SELECTOR_COMPHYN_SATA 0x4
#define COMMON_SELECTOR_PIPE_COMPHY_PCIE 0x4
#define COMMON_SELECTOR_PIPE_COMPHY_USBH 0x1
#define COMMON_SELECTOR_PIPE_COMPHY_USBD 0x2
/* SGMII/HS-SGMII/SFI/RXAUI */
#define COMMON_SELECTOR_COMPHY0_1_2_NETWORK 0x1
#define COMMON_SELECTOR_COMPHY3_RXAUI 0x1
#define COMMON_SELECTOR_COMPHY3_SGMII 0x2
#define COMMON_SELECTOR_COMPHY4_PORT1 0x1
#define COMMON_SELECTOR_COMPHY4_ALL_OTHERS 0x2
#define COMMON_SELECTOR_COMPHY5_RXAUI 0x2
#define COMMON_SELECTOR_COMPHY5_SGMII 0x1
#define COMMON_PHY_SD_CTRL1 0x148
#define COMMON_PHY_SD_CTRL1_COMPHY_0_PORT_OFFSET 0
#define COMMON_PHY_SD_CTRL1_COMPHY_1_PORT_OFFSET 4
#define COMMON_PHY_SD_CTRL1_COMPHY_2_PORT_OFFSET 8
#define COMMON_PHY_SD_CTRL1_COMPHY_3_PORT_OFFSET 12
#define COMMON_PHY_SD_CTRL1_COMPHY_0_3_PORT_MASK 0xFFFF
#define COMMON_PHY_SD_CTRL1_COMPHY_0_1_PORT_MASK 0xFF
#define COMMON_PHY_SD_CTRL1_PCIE_X4_EN_OFFSET 24
#define COMMON_PHY_SD_CTRL1_PCIE_X4_EN_MASK \
(0x1 << COMMON_PHY_SD_CTRL1_PCIE_X4_EN_OFFSET)
#define COMMON_PHY_SD_CTRL1_PCIE_X2_EN_OFFSET 25
#define COMMON_PHY_SD_CTRL1_PCIE_X2_EN_MASK \
(0x1 << COMMON_PHY_SD_CTRL1_PCIE_X2_EN_OFFSET)
#define COMMON_PHY_SD_CTRL1_RXAUI1_OFFSET 26
#define COMMON_PHY_SD_CTRL1_RXAUI1_MASK \
(0x1 << COMMON_PHY_SD_CTRL1_RXAUI1_OFFSET)
#define COMMON_PHY_SD_CTRL1_RXAUI0_OFFSET 27
#define COMMON_PHY_SD_CTRL1_RXAUI0_MASK \
(0x1 << COMMON_PHY_SD_CTRL1_RXAUI0_OFFSET)
/* DFX register */
#define DFX_BASE (0x400000)
#define DFX_DEV_GEN_CTRL12_REG (0x280)
#define DFX_DEV_GEN_PCIE_CLK_SRC_MUX (0x3)
#define DFX_DEV_GEN_PCIE_CLK_SRC_OFFSET 7
#define DFX_DEV_GEN_PCIE_CLK_SRC_MASK \
(0x3 << DFX_DEV_GEN_PCIE_CLK_SRC_OFFSET)
/* SerDes IP registers */
#define SD_EXTERNAL_CONFIG0_REG 0
#define SD_EXTERNAL_CONFIG0_SD_PU_PLL_OFFSET 1
#define SD_EXTERNAL_CONFIG0_SD_PU_PLL_MASK \
(1 << SD_EXTERNAL_CONFIG0_SD_PU_PLL_OFFSET)
#define SD_EXTERNAL_CONFIG0_SD_PHY_GEN_RX_OFFSET 3
#define SD_EXTERNAL_CONFIG0_SD_PHY_GEN_RX_MASK \
(0xf << SD_EXTERNAL_CONFIG0_SD_PHY_GEN_RX_OFFSET)
#define SD_EXTERNAL_CONFIG0_SD_PHY_GEN_TX_OFFSET 7
#define SD_EXTERNAL_CONFIG0_SD_PHY_GEN_TX_MASK \
(0xf << SD_EXTERNAL_CONFIG0_SD_PHY_GEN_TX_OFFSET)
#define SD_EXTERNAL_CONFIG0_SD_PU_RX_OFFSET 11
#define SD_EXTERNAL_CONFIG0_SD_PU_RX_MASK \
(1 << SD_EXTERNAL_CONFIG0_SD_PU_RX_OFFSET)
#define SD_EXTERNAL_CONFIG0_SD_PU_TX_OFFSET 12
#define SD_EXTERNAL_CONFIG0_SD_PU_TX_MASK \
(1 << SD_EXTERNAL_CONFIG0_SD_PU_TX_OFFSET)
#define SD_EXTERNAL_CONFIG0_HALF_BUS_MODE_OFFSET 14
#define SD_EXTERNAL_CONFIG0_HALF_BUS_MODE_MASK \
(1 << SD_EXTERNAL_CONFIG0_HALF_BUS_MODE_OFFSET)
#define SD_EXTERNAL_CONFIG0_MEDIA_MODE_OFFSET 15
#define SD_EXTERNAL_CONFIG0_MEDIA_MODE_MASK \
(0x1 << SD_EXTERNAL_CONFIG0_MEDIA_MODE_OFFSET)
#define SD_EXTERNAL_CONFIG1_REG 0x4
#define SD_EXTERNAL_CONFIG1_RESET_IN_OFFSET 3
#define SD_EXTERNAL_CONFIG1_RESET_IN_MASK \
(0x1 << SD_EXTERNAL_CONFIG1_RESET_IN_OFFSET)
#define SD_EXTERNAL_CONFIG1_RX_INIT_OFFSET 4
#define SD_EXTERNAL_CONFIG1_RX_INIT_MASK \
(0x1 << SD_EXTERNAL_CONFIG1_RX_INIT_OFFSET)
#define SD_EXTERNAL_CONFIG1_RESET_CORE_OFFSET 5
#define SD_EXTERNAL_CONFIG1_RESET_CORE_MASK \
(0x1 << SD_EXTERNAL_CONFIG1_RESET_CORE_OFFSET)
#define SD_EXTERNAL_CONFIG1_RF_RESET_IN_OFFSET 6
#define SD_EXTERNAL_CONFIG1_RF_RESET_IN_MASK \
(0x1 << SD_EXTERNAL_CONFIG1_RF_RESET_IN_OFFSET)
#define SD_EXTERNAL_CONFIG2_REG 0x8
#define SD_EXTERNAL_CONFIG2_PIN_DFE_EN_OFFSET 4
#define SD_EXTERNAL_CONFIG2_PIN_DFE_EN_MASK \
(0x1 << SD_EXTERNAL_CONFIG2_PIN_DFE_EN_OFFSET)
#define SD_EXTERNAL_CONFIG2_SSC_ENABLE_OFFSET 7
#define SD_EXTERNAL_CONFIG2_SSC_ENABLE_MASK \
(0x1 << SD_EXTERNAL_CONFIG2_SSC_ENABLE_OFFSET)
#define SD_EXTERNAL_STATUS_REG 0xc
#define SD_EXTERNAL_STATUS_START_RX_TRAINING_OFFSET 7
#define SD_EXTERNAL_STATUS_START_RX_TRAINING_MASK \
(1 << SD_EXTERNAL_STATUS_START_RX_TRAINING_OFFSET)
#define SD_EXTERNAL_STATUS0_REG 0x18
#define SD_EXTERNAL_STATUS0_PLL_TX_OFFSET 2
#define SD_EXTERNAL_STATUS0_PLL_TX_MASK \
(0x1 << SD_EXTERNAL_STATUS0_PLL_TX_OFFSET)
#define SD_EXTERNAL_STATUS0_PLL_RX_OFFSET 3
#define SD_EXTERNAL_STATUS0_PLL_RX_MASK \
(0x1 << SD_EXTERNAL_STATUS0_PLL_RX_OFFSET)
#define SD_EXTERNAL_STATUS0_RX_INIT_OFFSET 4
#define SD_EXTERNAL_STATUS0_RX_INIT_MASK \
(0x1 << SD_EXTERNAL_STATUS0_RX_INIT_OFFSET)
#define SD_EXTERNAL_STATAUS1_REG 0x1c
#define SD_EXTERNAL_STATAUS1_REG_RX_TRAIN_COMP_OFFSET 0
#define SD_EXTERNAL_STATAUS1_REG_RX_TRAIN_COMP_MASK \
(1 << SD_EXTERNAL_STATAUS1_REG_RX_TRAIN_COMP_OFFSET)
#define SD_EXTERNAL_STATAUS1_REG_RX_TRAIN_FAILED_OFFSET 1
#define SD_EXTERNAL_STATAUS1_REG_RX_TRAIN_FAILED_MASK \
(1 << SD_EXTERNAL_STATAUS1_REG_RX_TRAIN_FAILED_OFFSET)
/* HPIPE registers */
#define HPIPE_PWR_PLL_REG 0x4
#define HPIPE_PWR_PLL_REF_FREQ_OFFSET 0
#define HPIPE_PWR_PLL_REF_FREQ_MASK \
(0x1f << HPIPE_PWR_PLL_REF_FREQ_OFFSET)
#define HPIPE_PWR_PLL_PHY_MODE_OFFSET 5
#define HPIPE_PWR_PLL_PHY_MODE_MASK \
(0x7 << HPIPE_PWR_PLL_PHY_MODE_OFFSET)
#define HPIPE_CAL_REG1_REG 0xc
#define HPIPE_CAL_REG_1_EXT_TXIMP_OFFSET 10
#define HPIPE_CAL_REG_1_EXT_TXIMP_MASK \
(0x1f << HPIPE_CAL_REG_1_EXT_TXIMP_OFFSET)
#define HPIPE_CAL_REG_1_EXT_TXIMP_EN_OFFSET 15
#define HPIPE_CAL_REG_1_EXT_TXIMP_EN_MASK \
(0x1 << HPIPE_CAL_REG_1_EXT_TXIMP_EN_OFFSET)
#define HPIPE_SQUELCH_FFE_SETTING_REG 0x18
#define HPIPE_SQUELCH_THRESH_IN_OFFSET 8
#define HPIPE_SQUELCH_THRESH_IN_MASK \
(0xf << HPIPE_SQUELCH_THRESH_IN_OFFSET)
#define HPIPE_SQUELCH_DETECTED_OFFSET 14
#define HPIPE_SQUELCH_DETECTED_MASK \
(0x1 << HPIPE_SQUELCH_DETECTED_OFFSET)
#define HPIPE_DFE_REG0 0x1c
#define HPIPE_DFE_RES_FORCE_OFFSET 15
#define HPIPE_DFE_RES_FORCE_MASK \
(0x1 << HPIPE_DFE_RES_FORCE_OFFSET)
#define HPIPE_DFE_F3_F5_REG 0x28
#define HPIPE_DFE_F3_F5_DFE_EN_OFFSET 14
#define HPIPE_DFE_F3_F5_DFE_EN_MASK \
(0x1 << HPIPE_DFE_F3_F5_DFE_EN_OFFSET)
#define HPIPE_DFE_F3_F5_DFE_CTRL_OFFSET 15
#define HPIPE_DFE_F3_F5_DFE_CTRL_MASK \
(0x1 << HPIPE_DFE_F3_F5_DFE_CTRL_OFFSET)
#define HPIPE_G1_SET_0_REG 0x34
#define HPIPE_G1_SET_0_G1_TX_AMP_OFFSET 1
#define HPIPE_G1_SET_0_G1_TX_AMP_MASK \
(0x1f << HPIPE_G1_SET_0_G1_TX_AMP_OFFSET)
#define HPIPE_G1_SET_0_G1_TX_AMP_ADJ_OFFSET 6
#define HPIPE_G1_SET_0_G1_TX_AMP_ADJ_MASK \
(0x1 << HPIPE_G1_SET_0_G1_TX_AMP_ADJ_OFFSET)
#define HPIPE_G1_SET_0_G1_TX_EMPH1_OFFSET 7
#define HPIPE_G1_SET_0_G1_TX_EMPH1_MASK \
(0xf << HPIPE_G1_SET_0_G1_TX_EMPH1_OFFSET)
#define HPIPE_G1_SET_0_G1_TX_EMPH1_EN_OFFSET 11
#define HPIPE_G1_SET_0_G1_TX_EMPH1_EN_MASK \
(0x1 << HPIPE_G1_SET_0_G1_TX_EMPH1_EN_OFFSET)
#define HPIPE_G1_SET_1_REG 0x38
#define HPIPE_G1_SET_1_G1_RX_SELMUPI_OFFSET 0
#define HPIPE_G1_SET_1_G1_RX_SELMUPI_MASK \
(0x7 << HPIPE_G1_SET_1_G1_RX_SELMUPI_OFFSET)
#define HPIPE_G1_SET_1_G1_RX_SELMUPP_OFFSET 3
#define HPIPE_G1_SET_1_G1_RX_SELMUPP_MASK \
(0x7 << HPIPE_G1_SET_1_G1_RX_SELMUPP_OFFSET)
#define HPIPE_G1_SET_1_G1_RX_SELMUFI_OFFSET 6
#define HPIPE_G1_SET_1_G1_RX_SELMUFI_MASK \
(0x3 << HPIPE_G1_SET_1_G1_RX_SELMUFI_OFFSET)
#define HPIPE_G1_SET_1_G1_RX_SELMUFF_OFFSET 8
#define HPIPE_G1_SET_1_G1_RX_SELMUFF_MASK \
(0x3 << HPIPE_G1_SET_1_G1_RX_SELMUFF_OFFSET)
#define HPIPE_G1_SET_1_G1_RX_DFE_EN_OFFSET 10
#define HPIPE_G1_SET_1_G1_RX_DFE_EN_MASK \
(0x1 << HPIPE_G1_SET_1_G1_RX_DFE_EN_OFFSET)
#define HPIPE_G1_SET_1_G1_RX_DIGCK_DIV_OFFSET 11
#define HPIPE_G1_SET_1_G1_RX_DIGCK_DIV_MASK \
(0x3 << HPIPE_G1_SET_1_G1_RX_DIGCK_DIV_OFFSET)
#define HPIPE_G2_SET_0_REG 0x3c
#define HPIPE_G2_SET_0_G2_TX_AMP_OFFSET 1
#define HPIPE_G2_SET_0_G2_TX_AMP_MASK \
(0x1f << HPIPE_G2_SET_0_G2_TX_AMP_OFFSET)
#define HPIPE_G2_SET_0_G2_TX_AMP_ADJ_OFFSET 6
#define HPIPE_G2_SET_0_G2_TX_AMP_ADJ_MASK \
(0x1 << HPIPE_G2_SET_0_G2_TX_AMP_ADJ_OFFSET)
#define HPIPE_G2_SET_0_G2_TX_EMPH1_OFFSET 7
#define HPIPE_G2_SET_0_G2_TX_EMPH1_MASK \
(0xf << HPIPE_G2_SET_0_G2_TX_EMPH1_OFFSET)
#define HPIPE_G2_SET_0_G2_TX_EMPH1_EN_OFFSET 11
#define HPIPE_G2_SET_0_G2_TX_EMPH1_EN_MASK \
(0x1 << HPIPE_G2_SET_0_G2_TX_EMPH1_EN_OFFSET)
#define HPIPE_G2_SET_1_REG 0x40
#define HPIPE_G2_SET_1_G2_RX_SELMUPI_OFFSET 0
#define HPIPE_G2_SET_1_G2_RX_SELMUPI_MASK \
(0x7 << HPIPE_G2_SET_1_G2_RX_SELMUPI_OFFSET)
#define HPIPE_G2_SET_1_G2_RX_SELMUPP_OFFSET 3
#define HPIPE_G2_SET_1_G2_RX_SELMUPP_MASK \
(0x7 << HPIPE_G2_SET_1_G2_RX_SELMUPP_OFFSET)
#define HPIPE_G2_SET_1_G2_RX_SELMUFI_OFFSET 6
#define HPIPE_G2_SET_1_G2_RX_SELMUFI_MASK \
(0x3 << HPIPE_G2_SET_1_G2_RX_SELMUFI_OFFSET)
#define HPIPE_G2_SET_1_G2_RX_SELMUFF_OFFSET 8
#define HPIPE_G2_SET_1_G2_RX_SELMUFF_MASK \
(0x3 << HPIPE_G2_SET_1_G2_RX_SELMUFF_OFFSET)
#define HPIPE_G2_SET_1_G2_RX_DFE_EN_OFFSET 10
#define HPIPE_G2_SET_1_G2_RX_DFE_EN_MASK \
(0x1 << HPIPE_G2_SET_1_G2_RX_DFE_EN_OFFSET)
#define HPIPE_G2_SET_1_G2_RX_DIGCK_DIV_OFFSET 11
#define HPIPE_G2_SET_1_G2_RX_DIGCK_DIV_MASK \
(0x3 << HPIPE_G2_SET_1_G2_RX_DIGCK_DIV_OFFSET)
#define HPIPE_G3_SET_0_REG 0x44
#define HPIPE_G3_SET_0_G3_TX_AMP_OFFSET 1
#define HPIPE_G3_SET_0_G3_TX_AMP_MASK \
(0x1f << HPIPE_G3_SET_0_G3_TX_AMP_OFFSET)
#define HPIPE_G3_SET_0_G3_TX_AMP_ADJ_OFFSET 6
#define HPIPE_G3_SET_0_G3_TX_AMP_ADJ_MASK \
(0x1 << HPIPE_G3_SET_0_G3_TX_AMP_ADJ_OFFSET)
#define HPIPE_G3_SET_0_G3_TX_EMPH1_OFFSET 7
#define HPIPE_G3_SET_0_G3_TX_EMPH1_MASK \
(0xf << HPIPE_G3_SET_0_G3_TX_EMPH1_OFFSET)
#define HPIPE_G3_SET_0_G3_TX_EMPH1_EN_OFFSET 11
#define HPIPE_G3_SET_0_G3_TX_EMPH1_EN_MASK \
(0x1 << HPIPE_G3_SET_0_G3_TX_EMPH1_EN_OFFSET)
#define HPIPE_G3_SET_0_G3_TX_SLEW_RATE_SEL_OFFSET 12
#define HPIPE_G3_SET_0_G3_TX_SLEW_RATE_SEL_MASK \
(0x7 << HPIPE_G3_SET_0_G3_TX_SLEW_RATE_SEL_OFFSET)
#define HPIPE_G3_SET_0_G3_TX_SLEW_CTRL_EN_OFFSET 15
#define HPIPE_G3_SET_0_G3_TX_SLEW_CTRL_EN_MASK \
(0x1 << HPIPE_G3_SET_0_G3_TX_SLEW_CTRL_EN_OFFSET)
#define HPIPE_G3_SET_1_REG 0x48
#define HPIPE_G3_SET_1_G3_RX_SELMUPI_OFFSET 0
#define HPIPE_G3_SET_1_G3_RX_SELMUPI_MASK \
(0x7 << HPIPE_G3_SET_1_G3_RX_SELMUPI_OFFSET)
#define HPIPE_G3_SET_1_G3_RX_SELMUPF_OFFSET 3
#define HPIPE_G3_SET_1_G3_RX_SELMUPF_MASK \
(0x7 << HPIPE_G3_SET_1_G3_RX_SELMUPF_OFFSET)
#define HPIPE_G3_SET_1_G3_RX_SELMUFI_OFFSET 6
#define HPIPE_G3_SET_1_G3_RX_SELMUFI_MASK \
(0x3 << HPIPE_G3_SET_1_G3_RX_SELMUFI_OFFSET)
#define HPIPE_G3_SET_1_G3_RX_SELMUFF_OFFSET 8
#define HPIPE_G3_SET_1_G3_RX_SELMUFF_MASK \
(0x3 << HPIPE_G3_SET_1_G3_RX_SELMUFF_OFFSET)
#define HPIPE_G3_SET_1_G3_RX_DFE_EN_OFFSET 10
#define HPIPE_G3_SET_1_G3_RX_DFE_EN_MASK \
(0x1 << HPIPE_G3_SET_1_G3_RX_DFE_EN_OFFSET)
#define HPIPE_G3_SET_1_G3_RX_DIGCK_DIV_OFFSET 11
#define HPIPE_G3_SET_1_G3_RX_DIGCK_DIV_MASK \
(0x3 << HPIPE_G3_SET_1_G3_RX_DIGCK_DIV_OFFSET)
#define HPIPE_G3_SET_1_G3_SAMPLER_INPAIRX2_EN_OFFSET 13
#define HPIPE_G3_SET_1_G3_SAMPLER_INPAIRX2_EN_MASK \
(0x1 << HPIPE_G3_SET_1_G3_SAMPLER_INPAIRX2_EN_OFFSET)
#define HPIPE_PHY_TEST_CONTROL_REG 0x54
#define HPIPE_PHY_TEST_PATTERN_SEL_OFFSET 4
#define HPIPE_PHY_TEST_PATTERN_SEL_MASK \
(0xf << HPIPE_PHY_TEST_PATTERN_SEL_OFFSET)
#define HPIPE_PHY_TEST_RESET_OFFSET 14
#define HPIPE_PHY_TEST_RESET_MASK \
(0x1 << HPIPE_PHY_TEST_RESET_OFFSET)
#define HPIPE_PHY_TEST_EN_OFFSET 15
#define HPIPE_PHY_TEST_EN_MASK \
(0x1 << HPIPE_PHY_TEST_EN_OFFSET)
#define HPIPE_PHY_TEST_DATA_REG 0x6c
#define HPIPE_PHY_TEST_DATA_OFFSET 0
#define HPIPE_PHY_TEST_DATA_MASK \
(0xffff << HPIPE_PHY_TEST_DATA_OFFSET)
#define HPIPE_LOOPBACK_REG 0x8c
#define HPIPE_LOOPBACK_SEL_OFFSET 1
#define HPIPE_LOOPBACK_SEL_MASK \
(0x7 << HPIPE_LOOPBACK_SEL_OFFSET)
#define HPIPE_CDR_LOCK_OFFSET 7
#define HPIPE_CDR_LOCK_MASK \
(0x1 << HPIPE_CDR_LOCK_OFFSET)
#define HPIPE_CDR_LOCK_DET_EN_OFFSET 8
#define HPIPE_CDR_LOCK_DET_EN_MASK \
(0x1 << HPIPE_CDR_LOCK_DET_EN_OFFSET)
#define HPIPE_INTERFACE_REG 0x94
#define HPIPE_INTERFACE_GEN_MAX_OFFSET 10
#define HPIPE_INTERFACE_GEN_MAX_MASK \
(0x3 << HPIPE_INTERFACE_GEN_MAX_OFFSET)
#define HPIPE_INTERFACE_DET_BYPASS_OFFSET 12
#define HPIPE_INTERFACE_DET_BYPASS_MASK \
(0x1 << HPIPE_INTERFACE_DET_BYPASS_OFFSET)
#define HPIPE_INTERFACE_LINK_TRAIN_OFFSET 14
#define HPIPE_INTERFACE_LINK_TRAIN_MASK \
(0x1 << HPIPE_INTERFACE_LINK_TRAIN_OFFSET)
#define HPIPE_G1_SET_2_REG 0xf4
#define HPIPE_G1_SET_2_G1_TX_EMPH0_OFFSET 0
#define HPIPE_G1_SET_2_G1_TX_EMPH0_MASK \
(0xf << HPIPE_G1_SET_2_G1_TX_EMPH0_OFFSET)
#define HPIPE_G1_SET_2_G1_TX_EMPH0_EN_OFFSET 4
#define HPIPE_G1_SET_2_G1_TX_EMPH0_EN_MASK \
(0x1 << HPIPE_G1_SET_2_G1_TX_EMPH0_EN_OFFSET)
#define HPIPE_G2_SET_2_REG 0xf8
#define HPIPE_G2_TX_SSC_AMP_OFFSET 9
#define HPIPE_G2_TX_SSC_AMP_MASK \
(0x7f << HPIPE_G2_TX_SSC_AMP_OFFSET)
#define HPIPE_VDD_CAL_0_REG 0x108
#define HPIPE_CAL_VDD_CONT_MODE_OFFSET 15
#define HPIPE_CAL_VDD_CONT_MODE_MASK \
(0x1 << HPIPE_CAL_VDD_CONT_MODE_OFFSET)
#define HPIPE_VDD_CAL_CTRL_REG 0x114
#define HPIPE_EXT_SELLV_RXSAMPL_OFFSET 5
#define HPIPE_EXT_SELLV_RXSAMPL_MASK \
(0x1f << HPIPE_EXT_SELLV_RXSAMPL_OFFSET)
#define HPIPE_PCIE_REG0 0x120
#define HPIPE_PCIE_IDLE_SYNC_OFFSET 12
#define HPIPE_PCIE_IDLE_SYNC_MASK \
(0x1 << HPIPE_PCIE_IDLE_SYNC_OFFSET)
#define HPIPE_PCIE_SEL_BITS_OFFSET 13
#define HPIPE_PCIE_SEL_BITS_MASK \
(0x3 << HPIPE_PCIE_SEL_BITS_OFFSET)
#define HPIPE_LANE_ALIGN_REG 0x124
#define HPIPE_LANE_ALIGN_OFF_OFFSET 12
#define HPIPE_LANE_ALIGN_OFF_MASK \
(0x1 << HPIPE_LANE_ALIGN_OFF_OFFSET)
#define HPIPE_MISC_REG 0x13C
#define HPIPE_MISC_CLK100M_125M_OFFSET 4
#define HPIPE_MISC_CLK100M_125M_MASK \
(0x1 << HPIPE_MISC_CLK100M_125M_OFFSET)
#define HPIPE_MISC_ICP_FORCE_OFFSET 5
#define HPIPE_MISC_ICP_FORCE_MASK \
(0x1 << HPIPE_MISC_ICP_FORCE_OFFSET)
#define HPIPE_MISC_TXDCLK_2X_OFFSET 6
#define HPIPE_MISC_TXDCLK_2X_MASK \
(0x1 << HPIPE_MISC_TXDCLK_2X_OFFSET)
#define HPIPE_MISC_CLK500_EN_OFFSET 7
#define HPIPE_MISC_CLK500_EN_MASK \
(0x1 << HPIPE_MISC_CLK500_EN_OFFSET)
#define HPIPE_MISC_REFCLK_SEL_OFFSET 10
#define HPIPE_MISC_REFCLK_SEL_MASK \
(0x1 << HPIPE_MISC_REFCLK_SEL_OFFSET)
#define HPIPE_RX_CONTROL_1_REG 0x140
#define HPIPE_RX_CONTROL_1_RXCLK2X_SEL_OFFSET 11
#define HPIPE_RX_CONTROL_1_RXCLK2X_SEL_MASK \
(0x1 << HPIPE_RX_CONTROL_1_RXCLK2X_SEL_OFFSET)
#define HPIPE_RX_CONTROL_1_CLK8T_EN_OFFSET 12
#define HPIPE_RX_CONTROL_1_CLK8T_EN_MASK \
(0x1 << HPIPE_RX_CONTROL_1_CLK8T_EN_OFFSET)
#define HPIPE_PWR_CTR_REG 0x148
#define HPIPE_PWR_CTR_RST_DFE_OFFSET 0
#define HPIPE_PWR_CTR_RST_DFE_MASK \
(0x1 << HPIPE_PWR_CTR_RST_DFE_OFFSET)
#define HPIPE_PWR_CTR_SFT_RST_OFFSET 10
#define HPIPE_PWR_CTR_SFT_RST_MASK \
(0x1 << HPIPE_PWR_CTR_SFT_RST_OFFSET)
#define HPIPE_SPD_DIV_FORCE_REG 0x154
#define HPIPE_TXDIGCK_DIV_FORCE_OFFSET 7
#define HPIPE_TXDIGCK_DIV_FORCE_MASK \
(0x1 << HPIPE_TXDIGCK_DIV_FORCE_OFFSET)
#define HPIPE_SPD_DIV_FORCE_RX_SPD_DIV_OFFSET 8
#define HPIPE_SPD_DIV_FORCE_RX_SPD_DIV_MASK \
(0x3 << HPIPE_SPD_DIV_FORCE_RX_SPD_DIV_OFFSET)
#define HPIPE_SPD_DIV_FORCE_RX_SPD_DIV_FORCE_OFFSET 10
#define HPIPE_SPD_DIV_FORCE_RX_SPD_DIV_FORCE_MASK \
(0x1 << HPIPE_SPD_DIV_FORCE_RX_SPD_DIV_FORCE_OFFSET)
#define HPIPE_SPD_DIV_FORCE_TX_SPD_DIV_OFFSET 13
#define HPIPE_SPD_DIV_FORCE_TX_SPD_DIV_MASK \
(0x3 << HPIPE_SPD_DIV_FORCE_TX_SPD_DIV_OFFSET)
#define HPIPE_SPD_DIV_FORCE_TX_SPD_DIV_FORCE_OFFSET 15
#define HPIPE_SPD_DIV_FORCE_TX_SPD_DIV_FORCE_MASK \
(0x1 << HPIPE_SPD_DIV_FORCE_TX_SPD_DIV_FORCE_OFFSET)
#define HPIPE_SAMPLER_N_PROC_CALIB_CTRL_REG 0x16C
#define HPIPE_RX_SAMPLER_OS_GAIN_OFFSET 6
#define HPIPE_RX_SAMPLER_OS_GAIN_MASK \
(0x3 << HPIPE_RX_SAMPLER_OS_GAIN_OFFSET)
#define HPIPE_SMAPLER_OFFSET 12
#define HPIPE_SMAPLER_MASK \
(0x1 << HPIPE_SMAPLER_OFFSET)
#define HPIPE_TX_REG1_REG 0x174
#define HPIPE_TX_REG1_TX_EMPH_RES_OFFSET 5
#define HPIPE_TX_REG1_TX_EMPH_RES_MASK \
(0x3 << HPIPE_TX_REG1_TX_EMPH_RES_OFFSET)
#define HPIPE_TX_REG1_SLC_EN_OFFSET 10
#define HPIPE_TX_REG1_SLC_EN_MASK \
(0x3f << HPIPE_TX_REG1_SLC_EN_OFFSET)
#define HPIPE_PWR_CTR_DTL_REG 0x184
#define HPIPE_PWR_CTR_DTL_SQ_DET_EN_OFFSET 0
#define HPIPE_PWR_CTR_DTL_SQ_DET_EN_MASK \
(0x1 << HPIPE_PWR_CTR_DTL_SQ_DET_EN_OFFSET)
#define HPIPE_PWR_CTR_DTL_SQ_PLOOP_EN_OFFSET 1
#define HPIPE_PWR_CTR_DTL_SQ_PLOOP_EN_MASK \
(0x1 << HPIPE_PWR_CTR_DTL_SQ_PLOOP_EN_OFFSET)
#define HPIPE_PWR_CTR_DTL_FLOOP_EN_OFFSET 2
#define HPIPE_PWR_CTR_DTL_FLOOP_EN_MASK \
(0x1 << HPIPE_PWR_CTR_DTL_FLOOP_EN_OFFSET)
#define HPIPE_PWR_CTR_DTL_CLAMPING_SEL_OFFSET 4
#define HPIPE_PWR_CTR_DTL_CLAMPING_SEL_MASK \
(0x7 << HPIPE_PWR_CTR_DTL_CLAMPING_SEL_OFFSET)
#define HPIPE_PWR_CTR_DTL_INTPCLK_DIV_FORCE_OFFSET 10
#define HPIPE_PWR_CTR_DTL_INTPCLK_DIV_FORCE_MASK \
(0x1 << HPIPE_PWR_CTR_DTL_INTPCLK_DIV_FORCE_OFFSET)
#define HPIPE_PWR_CTR_DTL_CLK_MODE_OFFSET 12
#define HPIPE_PWR_CTR_DTL_CLK_MODE_MASK \
(0x3 << HPIPE_PWR_CTR_DTL_CLK_MODE_OFFSET)
#define HPIPE_PWR_CTR_DTL_CLK_MODE_FORCE_OFFSET 14
#define HPIPE_PWR_CTR_DTL_CLK_MODE_FORCE_MASK \
(0x1 << HPIPE_PWR_CTR_DTL_CLK_MODE_FORCE_OFFSET)
#define HPIPE_PHASE_CONTROL_REG 0x188
#define HPIPE_OS_PH_OFFSET_OFFSET 0
#define HPIPE_OS_PH_OFFSET_MASK \
(0x7f << HPIPE_OS_PH_OFFSET_OFFSET)
#define HPIPE_OS_PH_OFFSET_FORCE_OFFSET 7
#define HPIPE_OS_PH_OFFSET_FORCE_MASK \
(0x1 << HPIPE_OS_PH_OFFSET_FORCE_OFFSET)
#define HPIPE_OS_PH_VALID_OFFSET 8
#define HPIPE_OS_PH_VALID_MASK \
(0x1 << HPIPE_OS_PH_VALID_OFFSET)
#define HPIPE_SQ_GLITCH_FILTER_CTRL 0x1c8
#define HPIPE_SQ_DEGLITCH_WIDTH_P_OFFSET 0
#define HPIPE_SQ_DEGLITCH_WIDTH_P_MASK \
(0xf << HPIPE_SQ_DEGLITCH_WIDTH_P_OFFSET)
#define HPIPE_SQ_DEGLITCH_WIDTH_N_OFFSET 4
#define HPIPE_SQ_DEGLITCH_WIDTH_N_MASK \
(0xf << HPIPE_SQ_DEGLITCH_WIDTH_N_OFFSET)
#define HPIPE_SQ_DEGLITCH_EN_OFFSET 8
#define HPIPE_SQ_DEGLITCH_EN_MASK \
(0x1 << HPIPE_SQ_DEGLITCH_EN_OFFSET)
#define HPIPE_FRAME_DETECT_CTRL_0_REG 0x214
#define HPIPE_TRAIN_PAT_NUM_OFFSET 0x7
#define HPIPE_TRAIN_PAT_NUM_MASK \
(0x1FF << HPIPE_TRAIN_PAT_NUM_OFFSET)
#define HPIPE_FRAME_DETECT_CTRL_3_REG 0x220
#define HPIPE_PATTERN_LOCK_LOST_TIMEOUT_EN_OFFSET 12
#define HPIPE_PATTERN_LOCK_LOST_TIMEOUT_EN_MASK \
(0x1 << HPIPE_PATTERN_LOCK_LOST_TIMEOUT_EN_OFFSET)
#define HPIPE_DME_REG 0x228
#define HPIPE_DME_ETHERNET_MODE_OFFSET 7
#define HPIPE_DME_ETHERNET_MODE_MASK \
(0x1 << HPIPE_DME_ETHERNET_MODE_OFFSET)
#define HPIPE_TX_TRAIN_CTRL_0_REG 0x268
#define HPIPE_TX_TRAIN_P2P_HOLD_OFFSET 15
#define HPIPE_TX_TRAIN_P2P_HOLD_MASK \
(0x1 << HPIPE_TX_TRAIN_P2P_HOLD_OFFSET)
#define HPIPE_TX_TRAIN_CTRL_REG 0x26C
#define HPIPE_TX_TRAIN_CTRL_G1_OFFSET 0
#define HPIPE_TX_TRAIN_CTRL_G1_MASK \
(0x1 << HPIPE_TX_TRAIN_CTRL_G1_OFFSET)
#define HPIPE_TX_TRAIN_CTRL_GN1_OFFSET 1
#define HPIPE_TX_TRAIN_CTRL_GN1_MASK \
(0x1 << HPIPE_TX_TRAIN_CTRL_GN1_OFFSET)
#define HPIPE_TX_TRAIN_CTRL_G0_OFFSET 2
#define HPIPE_TX_TRAIN_CTRL_G0_MASK \
(0x1 << HPIPE_TX_TRAIN_CTRL_G0_OFFSET)
#define HPIPE_TX_TRAIN_CTRL_4_REG 0x278
#define HPIPE_TRX_TRAIN_TIMER_OFFSET 0
#define HPIPE_TRX_TRAIN_TIMER_MASK \
(0x3FF << HPIPE_TRX_TRAIN_TIMER_OFFSET)
#define HPIPE_TX_TRAIN_CTRL_5_REG 0x2A4
#define HPIPE_RX_TRAIN_TIMER_OFFSET 0
#define HPIPE_RX_TRAIN_TIMER_MASK \
(0x3ff << HPIPE_RX_TRAIN_TIMER_OFFSET)
#define HPIPE_TX_TRAIN_START_SQ_EN_OFFSET 11
#define HPIPE_TX_TRAIN_START_SQ_EN_MASK \
(0x1 << HPIPE_TX_TRAIN_START_SQ_EN_OFFSET)
#define HPIPE_TX_TRAIN_START_FRM_DET_EN_OFFSET 12
#define HPIPE_TX_TRAIN_START_FRM_DET_EN_MASK \
(0x1 << HPIPE_TX_TRAIN_START_FRM_DET_EN_OFFSET)
#define HPIPE_TX_TRAIN_START_FRM_LOCK_EN_OFFSET 13
#define HPIPE_TX_TRAIN_START_FRM_LOCK_EN_MASK \
(0x1 << HPIPE_TX_TRAIN_START_FRM_LOCK_EN_OFFSET)
#define HPIPE_TX_TRAIN_WAIT_TIME_EN_OFFSET 14
#define HPIPE_TX_TRAIN_WAIT_TIME_EN_MASK \
(0x1 << HPIPE_TX_TRAIN_WAIT_TIME_EN_OFFSET)
#define HPIPE_TX_TRAIN_REG 0x31C
#define HPIPE_TX_TRAIN_CHK_INIT_OFFSET 4
#define HPIPE_TX_TRAIN_CHK_INIT_MASK \
(0x1 << HPIPE_TX_TRAIN_CHK_INIT_OFFSET)
#define HPIPE_TX_TRAIN_COE_FM_PIN_PCIE3_OFFSET 7
#define HPIPE_TX_TRAIN_COE_FM_PIN_PCIE3_MASK \
(0x1 << HPIPE_TX_TRAIN_COE_FM_PIN_PCIE3_OFFSET)
#define HPIPE_TX_TRAIN_16BIT_AUTO_EN_OFFSET 8
#define HPIPE_TX_TRAIN_16BIT_AUTO_EN_MASK \
(0x1 << HPIPE_TX_TRAIN_16BIT_AUTO_EN_OFFSET)
#define HPIPE_TX_TRAIN_PAT_SEL_OFFSET 9
#define HPIPE_TX_TRAIN_PAT_SEL_MASK \
(0x1 << HPIPE_TX_TRAIN_PAT_SEL_OFFSET)
#define HPIPE_SAVED_DFE_VALUES_REG 0x328
#define HPIPE_SAVED_DFE_VALUES_SAV_F0D_OFFSET 10
#define HPIPE_SAVED_DFE_VALUES_SAV_F0D_MASK \
(0x3f << HPIPE_SAVED_DFE_VALUES_SAV_F0D_OFFSET)
#define HPIPE_CDR_CONTROL_REG 0x418
#define HPIPE_CDR_RX_MAX_DFE_ADAPT_0_OFFSET 14
#define HPIPE_CDR_RX_MAX_DFE_ADAPT_0_MASK \
(0x3 << HPIPE_CDR_RX_MAX_DFE_ADAPT_0_OFFSET)
#define HPIPE_CDR_RX_MAX_DFE_ADAPT_1_OFFSET 12
#define HPIPE_CDR_RX_MAX_DFE_ADAPT_1_MASK \
(0x3 << HPIPE_CDR_RX_MAX_DFE_ADAPT_1_OFFSET)
#define HPIPE_CDR_MAX_DFE_ADAPT_0_OFFSET 9
#define HPIPE_CDR_MAX_DFE_ADAPT_0_MASK \
(0x7 << HPIPE_CDR_MAX_DFE_ADAPT_0_OFFSET)
#define HPIPE_CDR_MAX_DFE_ADAPT_1_OFFSET 6
#define HPIPE_CDR_MAX_DFE_ADAPT_1_MASK \
(0x7 << HPIPE_CDR_MAX_DFE_ADAPT_1_OFFSET)
#define HPIPE_TX_TRAIN_CTRL_11_REG 0x438
#define HPIPE_TX_STATUS_CHECK_MODE_OFFSET 6
#define HPIPE_TX_TX_STATUS_CHECK_MODE_MASK \
(0x1 << HPIPE_TX_STATUS_CHECK_MODE_OFFSET)
#define HPIPE_TX_NUM_OF_PRESET_OFFSET 10
#define HPIPE_TX_NUM_OF_PRESET_MASK \
(0x7 << HPIPE_TX_NUM_OF_PRESET_OFFSET)
#define HPIPE_TX_SWEEP_PRESET_EN_OFFSET 15
#define HPIPE_TX_SWEEP_PRESET_EN_MASK \
(0x1 << HPIPE_TX_SWEEP_PRESET_EN_OFFSET)
#define HPIPE_G1_SETTINGS_3_REG 0x440
#define HPIPE_G1_SETTINGS_3_G1_FFE_CAP_SEL_OFFSET 0
#define HPIPE_G1_SETTINGS_3_G1_FFE_CAP_SEL_MASK \
(0xf << HPIPE_G1_SETTINGS_3_G1_FFE_CAP_SEL_OFFSET)
#define HPIPE_G1_SETTINGS_3_G1_FFE_RES_SEL_OFFSET 4
#define HPIPE_G1_SETTINGS_3_G1_FFE_RES_SEL_MASK \
(0x7 << HPIPE_G1_SETTINGS_3_G1_FFE_RES_SEL_OFFSET)
#define HPIPE_G1_SETTINGS_3_G1_FFE_SETTING_FORCE_OFFSET 7
#define HPIPE_G1_SETTINGS_3_G1_FFE_SETTING_FORCE_MASK \
(0x1 << HPIPE_G1_SETTINGS_3_G1_FFE_SETTING_FORCE_OFFSET)
#define HPIPE_G1_SETTINGS_3_G1_FBCK_SEL_OFFSET 9
#define HPIPE_G1_SETTINGS_3_G1_FBCK_SEL_MASK \
(0x1 << HPIPE_G1_SETTINGS_3_G1_FBCK_SEL_OFFSET)
#define HPIPE_G1_SETTINGS_3_G1_FFE_DEG_RES_LEVEL_OFFSET 12
#define HPIPE_G1_SETTINGS_3_G1_FFE_DEG_RES_LEVEL_MASK \
(0x3 << HPIPE_G1_SETTINGS_3_G1_FFE_DEG_RES_LEVEL_OFFSET)
#define HPIPE_G1_SETTINGS_3_G1_FFE_LOAD_RES_LEVEL_OFFSET 14
#define HPIPE_G1_SETTINGS_3_G1_FFE_LOAD_RES_LEVEL_MASK \
(0x3 << HPIPE_G1_SETTINGS_3_G1_FFE_LOAD_RES_LEVEL_OFFSET)
#define HPIPE_G1_SETTINGS_4_REG 0x444
#define HPIPE_G1_SETTINGS_4_G1_DFE_RES_OFFSET 8
#define HPIPE_G1_SETTINGS_4_G1_DFE_RES_MASK \
(0x3 << HPIPE_G1_SETTINGS_4_G1_DFE_RES_OFFSET)
#define HPIPE_G2_SETTINGS_4_REG 0x44c
#define HPIPE_G2_DFE_RES_OFFSET 8
#define HPIPE_G2_DFE_RES_MASK \
(0x3 << HPIPE_G2_DFE_RES_OFFSET)
#define HPIPE_G3_SETTING_3_REG 0x450
#define HPIPE_G3_FFE_CAP_SEL_OFFSET 0
#define HPIPE_G3_FFE_CAP_SEL_MASK \
(0xf << HPIPE_G3_FFE_CAP_SEL_OFFSET)
#define HPIPE_G3_FFE_RES_SEL_OFFSET 4
#define HPIPE_G3_FFE_RES_SEL_MASK \
(0x7 << HPIPE_G3_FFE_RES_SEL_OFFSET)
#define HPIPE_G3_FFE_SETTING_FORCE_OFFSET 7
#define HPIPE_G3_FFE_SETTING_FORCE_MASK \
(0x1 << HPIPE_G3_FFE_SETTING_FORCE_OFFSET)
#define HPIPE_G3_FFE_DEG_RES_LEVEL_OFFSET 12
#define HPIPE_G3_FFE_DEG_RES_LEVEL_MASK \
(0x3 << HPIPE_G3_FFE_DEG_RES_LEVEL_OFFSET)
#define HPIPE_G3_FFE_LOAD_RES_LEVEL_OFFSET 14
#define HPIPE_G3_FFE_LOAD_RES_LEVEL_MASK \
(0x3 << HPIPE_G3_FFE_LOAD_RES_LEVEL_OFFSET)
#define HPIPE_G3_SETTING_4_REG 0x454
#define HPIPE_G3_DFE_RES_OFFSET 8
#define HPIPE_G3_DFE_RES_MASK (0x3 << HPIPE_G3_DFE_RES_OFFSET)
#define HPIPE_TX_PRESET_INDEX_REG 0x468
#define HPIPE_TX_PRESET_INDEX_OFFSET 0
#define HPIPE_TX_PRESET_INDEX_MASK \
(0xf << HPIPE_TX_PRESET_INDEX_OFFSET)
#define HPIPE_DFE_CONTROL_REG 0x470
#define HPIPE_DFE_TX_MAX_DFE_ADAPT_OFFSET 14
#define HPIPE_DFE_TX_MAX_DFE_ADAPT_MASK \
(0x3 << HPIPE_DFE_TX_MAX_DFE_ADAPT_OFFSET)
#define HPIPE_DFE_CTRL_28_REG 0x49C
#define HPIPE_DFE_CTRL_28_PIPE4_OFFSET 7
#define HPIPE_DFE_CTRL_28_PIPE4_MASK \
(0x1 << HPIPE_DFE_CTRL_28_PIPE4_OFFSET)
#define HPIPE_G1_SETTING_5_REG 0x538
#define HPIPE_G1_SETTING_5_G1_ICP_OFFSET 0
#define HPIPE_G1_SETTING_5_G1_ICP_MASK \
(0xf << HPIPE_G1_SETTING_5_G1_ICP_OFFSET)
#define HPIPE_G3_SETTING_5_REG 0x548
#define HPIPE_G3_SETTING_5_G3_ICP_OFFSET 0
#define HPIPE_G3_SETTING_5_G3_ICP_MASK \
(0xf << HPIPE_G3_SETTING_5_G3_ICP_OFFSET)
#define HPIPE_LANE_CONFIG0_REG 0x600
#define HPIPE_LANE_CONFIG0_TXDEEMPH0_OFFSET 0
#define HPIPE_LANE_CONFIG0_TXDEEMPH0_MASK \
(0x1 << HPIPE_LANE_CONFIG0_TXDEEMPH0_OFFSET)
#define HPIPE_LANE_STATUS1_REG 0x60C
#define HPIPE_LANE_STATUS1_PCLK_EN_OFFSET 0
#define HPIPE_LANE_STATUS1_PCLK_EN_MASK \
(0x1 << HPIPE_LANE_STATUS1_PCLK_EN_OFFSET)
#define HPIPE_LANE_CFG4_REG 0x620
#define HPIPE_LANE_CFG4_DFE_CTRL_OFFSET 0
#define HPIPE_LANE_CFG4_DFE_CTRL_MASK \
(0x7 << HPIPE_LANE_CFG4_DFE_CTRL_OFFSET)
#define HPIPE_LANE_CFG4_DFE_EN_SEL_OFFSET 3
#define HPIPE_LANE_CFG4_DFE_EN_SEL_MASK \
(0x1 << HPIPE_LANE_CFG4_DFE_EN_SEL_OFFSET)
#define HPIPE_LANE_CFG4_DFE_OVER_OFFSET 6
#define HPIPE_LANE_CFG4_DFE_OVER_MASK \
(0x1 << HPIPE_LANE_CFG4_DFE_OVER_OFFSET)
#define HPIPE_LANE_CFG4_SSC_CTRL_OFFSET 7
#define HPIPE_LANE_CFG4_SSC_CTRL_MASK \
(0x1 << HPIPE_LANE_CFG4_SSC_CTRL_OFFSET)
#define HPIPE_LANE_EQ_REMOTE_SETTING_REG 0x6f8
#define HPIPE_LANE_CFG_FOM_DIRN_OVERRIDE_OFFSET 0
#define HPIPE_LANE_CFG_FOM_DIRN_OVERRIDE_MASK \
(0x1 << HPIPE_LANE_CFG_FOM_DIRN_OVERRIDE_OFFSET)
#define HPIPE_LANE_CFG_FOM_ONLY_MODE_OFFFSET 1
#define HPIPE_LANE_CFG_FOM_ONLY_MODE_MASK \
(0x1 << HPIPE_LANE_CFG_FOM_ONLY_MODE_OFFFSET)
#define HPIPE_LANE_CFG_FOM_PRESET_VECTOR_OFFSET 2
#define HPIPE_LANE_CFG_FOM_PRESET_VECTOR_MASK \
(0xf << HPIPE_LANE_CFG_FOM_PRESET_VECTOR_OFFSET)
#define HPIPE_LANE_EQU_CONFIG_0_REG 0x69C
#define HPIPE_CFG_PHY_RC_EP_OFFSET 12
#define HPIPE_CFG_PHY_RC_EP_MASK \
(0x1 << HPIPE_CFG_PHY_RC_EP_OFFSET)
#define HPIPE_LANE_EQ_CFG1_REG 0x6a0
#define HPIPE_CFG_UPDATE_POLARITY_OFFSET 12
#define HPIPE_CFG_UPDATE_POLARITY_MASK \
(0x1 << HPIPE_CFG_UPDATE_POLARITY_OFFSET)
#define HPIPE_LANE_EQ_CFG2_REG 0x6a4
#define HPIPE_CFG_EQ_BUNDLE_DIS_OFFSET 14
#define HPIPE_CFG_EQ_BUNDLE_DIS_MASK \
(0x1 << HPIPE_CFG_EQ_BUNDLE_DIS_OFFSET)
#define HPIPE_RST_CLK_CTRL_REG 0x704
#define HPIPE_RST_CLK_CTRL_PIPE_RST_OFFSET 0
#define HPIPE_RST_CLK_CTRL_PIPE_RST_MASK \
(0x1 << HPIPE_RST_CLK_CTRL_PIPE_RST_OFFSET)
#define HPIPE_RST_CLK_CTRL_FIXED_PCLK_OFFSET 2
#define HPIPE_RST_CLK_CTRL_FIXED_PCLK_MASK \
(0x1 << HPIPE_RST_CLK_CTRL_FIXED_PCLK_OFFSET)
#define HPIPE_RST_CLK_CTRL_PIPE_WIDTH_OFFSET 3
#define HPIPE_RST_CLK_CTRL_PIPE_WIDTH_MASK \
(0x1 << HPIPE_RST_CLK_CTRL_PIPE_WIDTH_OFFSET)
#define HPIPE_RST_CLK_CTRL_CORE_FREQ_SEL_OFFSET 9
#define HPIPE_RST_CLK_CTRL_CORE_FREQ_SEL_MASK \
(0x1 << HPIPE_RST_CLK_CTRL_CORE_FREQ_SEL_OFFSET)
#define HPIPE_TST_MODE_CTRL_REG 0x708
#define HPIPE_TST_MODE_CTRL_MODE_MARGIN_OFFSET 2
#define HPIPE_TST_MODE_CTRL_MODE_MARGIN_MASK \
(0x1 << HPIPE_TST_MODE_CTRL_MODE_MARGIN_OFFSET)
#define HPIPE_CLK_SRC_LO_REG 0x70c
#define HPIPE_CLK_SRC_LO_BUNDLE_PERIOD_SEL_OFFSET 1
#define HPIPE_CLK_SRC_LO_BUNDLE_PERIOD_SEL_MASK \
(0x1 << HPIPE_CLK_SRC_LO_BUNDLE_PERIOD_SEL_OFFSET)
#define HPIPE_CLK_SRC_LO_BUNDLE_PERIOD_SCALE_OFFSET 2
#define HPIPE_CLK_SRC_LO_BUNDLE_PERIOD_SCALE_MASK \
(0x3 << HPIPE_CLK_SRC_LO_BUNDLE_PERIOD_SCALE_OFFSET)
#define HPIPE_CLK_SRC_LO_PLL_RDY_DL_OFFSET 5
#define HPIPE_CLK_SRC_LO_PLL_RDY_DL_MASK \
(0x7 << HPIPE_CLK_SRC_LO_PLL_RDY_DL_OFFSET)
#define HPIPE_CLK_SRC_HI_REG 0x710
#define HPIPE_CLK_SRC_HI_LANE_STRT_OFFSET 0
#define HPIPE_CLK_SRC_HI_LANE_STRT_MASK \
(0x1 << HPIPE_CLK_SRC_HI_LANE_STRT_OFFSET)
#define HPIPE_CLK_SRC_HI_LANE_BREAK_OFFSET 1
#define HPIPE_CLK_SRC_HI_LANE_BREAK_MASK \
(0x1 << HPIPE_CLK_SRC_HI_LANE_BREAK_OFFSET)
#define HPIPE_CLK_SRC_HI_LANE_MASTER_OFFSET 2
#define HPIPE_CLK_SRC_HI_LANE_MASTER_MASK \
(0x1 << HPIPE_CLK_SRC_HI_LANE_MASTER_OFFSET)
#define HPIPE_CLK_SRC_HI_MODE_PIPE_OFFSET 7
#define HPIPE_CLK_SRC_HI_MODE_PIPE_MASK \
(0x1 << HPIPE_CLK_SRC_HI_MODE_PIPE_OFFSET)
#define HPIPE_GLOBAL_MISC_CTRL 0x718
#define HPIPE_GLOBAL_PM_CTRL 0x740
#define HPIPE_GLOBAL_PM_RXDLOZ_WAIT_OFFSET 0
#define HPIPE_GLOBAL_PM_RXDLOZ_WAIT_MASK \
(0xFF << HPIPE_GLOBAL_PM_RXDLOZ_WAIT_OFFSET)
/* General defines */
#define PLL_LOCK_TIMEOUT 15000
#endif /* _PHY_COMPHY_CP110_H */

File diff suppressed because it is too large


@@ -0,0 +1,19 @@
/*
* Copyright (C) 2018 Marvell International Ltd.
*
* SPDX-License-Identifier: BSD-3-Clause
* https://spdx.org/licenses
*/
/* Marvell CP110 SoC COMPHY unit driver */
int mvebu_cp110_comphy_is_pll_locked(uint64_t comphy_base,
uint64_t comphy_index);
int mvebu_cp110_comphy_power_off(uint64_t comphy_base,
uint64_t comphy_index);
int mvebu_cp110_comphy_power_on(uint64_t comphy_base,
uint64_t comphy_index, uint64_t comphy_mode);
int mvebu_cp110_comphy_xfi_rx_training(uint64_t comphy_base,
uint8_t comphy_index);
int mvebu_cp110_comphy_digital_reset(uint64_t comphy_base, uint8_t comphy_index,
uint32_t comphy_mode, uint32_t command);

227
drivers/marvell/gwin.c Normal file

@@ -0,0 +1,227 @@
/*
* Copyright (C) 2018 Marvell International Ltd.
*
* SPDX-License-Identifier: BSD-3-Clause
* https://spdx.org/licenses
*/
/* GWIN unit device driver for Marvell AP810 SoC */
#include <a8k_common.h>
#include <debug.h>
#include <gwin.h>
#include <mmio.h>
#include <mvebu.h>
#include <mvebu_def.h>
#if LOG_LEVEL >= LOG_LEVEL_INFO
#define DEBUG_ADDR_MAP
#endif
/* common defines */
#define WIN_ENABLE_BIT (0x1)
#define WIN_TARGET_MASK (0xF)
#define WIN_TARGET_SHIFT (0x8)
#define WIN_TARGET(tgt) (((tgt) & WIN_TARGET_MASK) \
<< WIN_TARGET_SHIFT)
/* Bits[43:26] of the physical address are the window base,
* which is aligned to 64MB
*/
#define ADDRESS_RSHIFT (26)
#define ADDRESS_LSHIFT (10)
#define GWIN_ALIGNMENT_64M (0x4000000)
/* AP registers */
#define GWIN_CR_OFFSET(ap, win) (MVEBU_GWIN_BASE(ap) + 0x0 + \
(0x10 * (win)))
#define GWIN_ALR_OFFSET(ap, win) (MVEBU_GWIN_BASE(ap) + 0x8 + \
(0x10 * (win)))
#define GWIN_AHR_OFFSET(ap, win) (MVEBU_GWIN_BASE(ap) + 0xc + \
(0x10 * (win)))
#define CCU_GRU_CR_OFFSET(ap) (MVEBU_CCU_GRU_BASE(ap))
#define CCR_GRU_CR_GWIN_MBYPASS (1 << 1)
static void gwin_check(struct addr_map_win *win)
{
/* The base is always 64M aligned */
if (IS_NOT_ALIGN(win->base_addr, GWIN_ALIGNMENT_64M)) {
win->base_addr &= ~(GWIN_ALIGNMENT_64M - 1);
NOTICE("%s: Aligning the base address to 0x%llx\n",
__func__, win->base_addr);
}
/* size parameter validity check */
if (IS_NOT_ALIGN(win->win_size, GWIN_ALIGNMENT_64M)) {
win->win_size = ALIGN_UP(win->win_size, GWIN_ALIGNMENT_64M);
NOTICE("%s: Aligning window size to 0x%llx\n",
__func__, win->win_size);
}
}
static void gwin_enable_window(int ap_index, struct addr_map_win *win,
uint32_t win_num)
{
uint32_t alr, ahr;
uint64_t end_addr;
if ((win->target_id & WIN_TARGET_MASK) != win->target_id) {
ERROR("target ID %d is invalid\n", win->target_id);
return;
}
/* calculate 64bit end-address */
end_addr = (win->base_addr + win->win_size - 1);
alr = (uint32_t)((win->base_addr >> ADDRESS_RSHIFT) << ADDRESS_LSHIFT);
ahr = (uint32_t)((end_addr >> ADDRESS_RSHIFT) << ADDRESS_LSHIFT);
/* write start address and end address for GWIN */
mmio_write_32(GWIN_ALR_OFFSET(ap_index, win_num), alr);
mmio_write_32(GWIN_AHR_OFFSET(ap_index, win_num), ahr);
/* write the target ID and enable the window */
mmio_write_32(GWIN_CR_OFFSET(ap_index, win_num),
WIN_TARGET(win->target_id) | WIN_ENABLE_BIT);
}
static void gwin_disable_window(int ap_index, uint32_t win_num)
{
uint32_t win_reg;
win_reg = mmio_read_32(GWIN_CR_OFFSET(ap_index, win_num));
win_reg &= ~WIN_ENABLE_BIT;
mmio_write_32(GWIN_CR_OFFSET(ap_index, win_num), win_reg);
}
/* Insert/Remove temporary window for using the out-of reset default
* CPx base address to access the CP configuration space prior to
* the further base address update in accordance with address mapping
* design.
*
* NOTE: Use the same window array for insertion and removal of
* temporary windows.
*/
void gwin_temp_win_insert(int ap_index, struct addr_map_win *win, int size)
{
uint32_t win_id;
for (int i = 0; i < size; i++) {
win_id = MVEBU_GWIN_MAX_WINS - i - 1;
gwin_check(win);
gwin_enable_window(ap_index, win, win_id);
win++;
}
}
/*
* NOTE: Use the same window array for insertion and removal of
* temporary windows.
*/
void gwin_temp_win_remove(int ap_index, struct addr_map_win *win, int size)
{
uint32_t win_id;
for (int i = 0; i < size; i++) {
uint64_t base;
uint32_t target;
win_id = MVEBU_GWIN_MAX_WINS - i - 1;
target = mmio_read_32(GWIN_CR_OFFSET(ap_index, win_id));
target >>= WIN_TARGET_SHIFT;
target &= WIN_TARGET_MASK;
base = mmio_read_32(GWIN_ALR_OFFSET(ap_index, win_id));
base >>= ADDRESS_LSHIFT;
base <<= ADDRESS_RSHIFT;
if (win->target_id != target) {
ERROR("%s: Trying to remove bad window-%d!\n",
__func__, win_id);
continue;
}
gwin_disable_window(ap_index, win_id);
win++;
}
}
#ifdef DEBUG_ADDR_MAP
static void dump_gwin(int ap_index)
{
uint32_t win_num;
/* Dump all GWIN windows */
tf_printf("\tbank target start end\n");
tf_printf("\t----------------------------------------------------\n");
for (win_num = 0; win_num < MVEBU_GWIN_MAX_WINS; win_num++) {
uint32_t cr;
uint64_t alr, ahr;
cr = mmio_read_32(GWIN_CR_OFFSET(ap_index, win_num));
/* Window enabled */
if (cr & WIN_ENABLE_BIT) {
alr = mmio_read_32(GWIN_ALR_OFFSET(ap_index, win_num));
alr = (alr >> ADDRESS_LSHIFT) << ADDRESS_RSHIFT;
ahr = mmio_read_32(GWIN_AHR_OFFSET(ap_index, win_num));
ahr = (ahr >> ADDRESS_LSHIFT) << ADDRESS_RSHIFT;
tf_printf("\tgwin %d 0x%016llx 0x%016llx\n",
(cr >> 8) & 0xF, alr, ahr);
}
}
}
#endif
int init_gwin(int ap_index)
{
struct addr_map_win *win;
uint32_t win_id;
uint32_t win_count;
uint32_t win_reg;
INFO("Initializing GWIN Address decoding\n");
/* Get the array of the windows and its size */
marvell_get_gwin_memory_map(ap_index, &win, &win_count);
if (win_count == 0) {
INFO("no windows configurations found\n");
return 0;
}
if (win_count > MVEBU_GWIN_MAX_WINS) {
ERROR("number of windows is bigger than %d\n",
MVEBU_GWIN_MAX_WINS);
return 0;
}
/* disable all windows */
for (win_id = 0; win_id < MVEBU_GWIN_MAX_WINS; win_id++)
gwin_disable_window(ap_index, win_id);
/* enable relevant windows */
for (win_id = 0; win_id < win_count; win_id++, win++) {
gwin_check(win);
gwin_enable_window(ap_index, win, win_id);
}
/* The GWIN Miss feature has not been verified, therefore any access
 * towards a remote AP must be accompanied by a proper configuration of
 * the GWIN registers group; hence the GWIN Miss feature is set to
 * Bypass mode. Make sure all GWIN regions are defined correctly so
 * that no GWIN miss occurs.
 * JIRA-AURORA2-1630
 */
INFO("Update GWIN miss bypass\n");
win_reg = mmio_read_32(CCU_GRU_CR_OFFSET(ap_index));
win_reg |= CCR_GRU_CR_GWIN_MBYPASS;
mmio_write_32(CCU_GRU_CR_OFFSET(ap_index), win_reg);
#ifdef DEBUG_ADDR_MAP
dump_gwin(ap_index);
#endif
INFO("Done initializing GWIN address decoding\n");
return 0;
}


@@ -0,0 +1,613 @@
/*
* Copyright (C) 2018 Marvell International Ltd.
*
* SPDX-License-Identifier: BSD-3-Clause
* https://spdx.org/licenses
*/
/* This driver provides I2C support for Marvell A8K and compatible SoCs */
#include <a8k_i2c.h>
#include <debug.h>
#include <delay_timer.h>
#include <errno.h>
#include <mmio.h>
#include <mvebu_def.h>
#if LOG_LEVEL >= LOG_LEVEL_VERBOSE
#define DEBUG_I2C
#endif
#define CONFIG_SYS_TCLK 250000000
#define CONFIG_SYS_I2C_SPEED 100000
#define CONFIG_SYS_I2C_SLAVE 0x0
#define I2C_TIMEOUT_VALUE 0x500
#define I2C_MAX_RETRY_CNT 1000
#define I2C_CMD_WRITE 0x0
#define I2C_CMD_READ 0x1
#define I2C_DATA_ADDR_7BIT_OFFS 0x1
#define I2C_DATA_ADDR_7BIT_MASK (0xFF << I2C_DATA_ADDR_7BIT_OFFS)
#define I2C_CONTROL_ACK 0x00000004
#define I2C_CONTROL_IFLG 0x00000008
#define I2C_CONTROL_STOP 0x00000010
#define I2C_CONTROL_START 0x00000020
#define I2C_CONTROL_TWSIEN 0x00000040
#define I2C_CONTROL_INTEN 0x00000080
#define I2C_STATUS_START 0x08
#define I2C_STATUS_REPEATED_START 0x10
#define I2C_STATUS_ADDR_W_ACK 0x18
#define I2C_STATUS_DATA_W_ACK 0x28
#define I2C_STATUS_LOST_ARB_DATA_ADDR_TRANSFER 0x38
#define I2C_STATUS_ADDR_R_ACK 0x40
#define I2C_STATUS_DATA_R_ACK 0x50
#define I2C_STATUS_DATA_R_NAK 0x58
#define I2C_STATUS_LOST_ARB_GENERAL_CALL 0x78
#define I2C_STATUS_IDLE 0xF8
#define I2C_UNSTUCK_TRIGGER 0x1
#define I2C_UNSTUCK_ONGOING 0x2
#define I2C_UNSTUCK_ERROR 0x4
struct marvell_i2c_regs {
uint32_t slave_address;
uint32_t data;
uint32_t control;
union {
uint32_t status; /* when reading */
uint32_t baudrate; /* when writing */
} u;
uint32_t xtnd_slave_addr;
uint32_t reserved[2];
uint32_t soft_reset;
uint8_t reserved2[0xa0 - 0x20];
uint32_t unstuck;
};
static struct marvell_i2c_regs *base;
static int marvell_i2c_lost_arbitration(uint32_t *status)
{
*status = mmio_read_32((uintptr_t)&base->u.status);
if ((*status == I2C_STATUS_LOST_ARB_DATA_ADDR_TRANSFER) ||
(*status == I2C_STATUS_LOST_ARB_GENERAL_CALL))
return -EAGAIN;
return 0;
}
static void marvell_i2c_interrupt_clear(void)
{
uint32_t reg;
reg = mmio_read_32((uintptr_t)&base->control);
reg &= ~(I2C_CONTROL_IFLG);
mmio_write_32((uintptr_t)&base->control, reg);
/* Wait for 1 us for the clear to take effect */
udelay(1);
}
static int marvell_i2c_interrupt_get(void)
{
uint32_t reg;
/* get the interrupt flag bit */
reg = mmio_read_32((uintptr_t)&base->control);
return (reg & I2C_CONTROL_IFLG) != 0;
}
static int marvell_i2c_wait_interrupt(void)
{
uint32_t timeout = 0;
while (!marvell_i2c_interrupt_get() && (timeout++ < I2C_TIMEOUT_VALUE))
;
if (timeout >= I2C_TIMEOUT_VALUE)
return -ETIMEDOUT;
return 0;
}
static int marvell_i2c_start_bit_set(void)
{
int is_int_flag = 0;
uint32_t status;
if (marvell_i2c_interrupt_get())
is_int_flag = 1;
/* set start bit */
mmio_write_32((uintptr_t)&base->control,
mmio_read_32((uintptr_t)&base->control) |
I2C_CONTROL_START);
/* in case that the int flag was set before i.e. repeated start bit */
if (is_int_flag) {
VERBOSE("%s: repeated start Bit\n", __func__);
marvell_i2c_interrupt_clear();
}
if (marvell_i2c_wait_interrupt()) {
ERROR("Start bit clear timeout\n");
return -ETIMEDOUT;
}
/* check that start bit went down */
if ((mmio_read_32((uintptr_t)&base->control) &
I2C_CONTROL_START) != 0) {
ERROR("Start bit did not go down\n");
return -EPERM;
}
/* check the status */
if (marvell_i2c_lost_arbitration(&status)) {
ERROR("%s - %d: Lost arbitration, got status %x\n",
__func__, __LINE__, status);
return -EAGAIN;
}
if ((status != I2C_STATUS_START) &&
(status != I2C_STATUS_REPEATED_START)) {
ERROR("Got status %x after setting the start bit.\n", status);
return -EPERM;
}
return 0;
}
static int marvell_i2c_stop_bit_set(void)
{
int timeout;
uint32_t status;
/* Generate stop bit */
mmio_write_32((uintptr_t)&base->control,
mmio_read_32((uintptr_t)&base->control) |
I2C_CONTROL_STOP);
marvell_i2c_interrupt_clear();
timeout = 0;
/* Read control register, check the control stop bit */
while ((mmio_read_32((uintptr_t)&base->control) & I2C_CONTROL_STOP) &&
(timeout++ < I2C_TIMEOUT_VALUE))
;
if (timeout >= I2C_TIMEOUT_VALUE) {
ERROR("Stop bit clear timeout\n");
return -ETIMEDOUT;
}
/* check that stop bit went down */
if ((mmio_read_32((uintptr_t)&base->control) & I2C_CONTROL_STOP) != 0) {
ERROR("Stop bit did not go down\n");
return -EPERM;
}
/* check the status */
if (marvell_i2c_lost_arbitration(&status)) {
ERROR("%s - %d: Lost arbitration, got status %x\n",
__func__, __LINE__, status);
return -EAGAIN;
}
if (status != I2C_STATUS_IDLE) {
ERROR("Got status %x after setting the stop bit.\n", status);
return -EPERM;
}
return 0;
}
static int marvell_i2c_address_set(uint8_t chain, int command)
{
uint32_t reg, status;
reg = (chain << I2C_DATA_ADDR_7BIT_OFFS) & I2C_DATA_ADDR_7BIT_MASK;
reg |= command;
mmio_write_32((uintptr_t)&base->data, reg);
udelay(1);
marvell_i2c_interrupt_clear();
if (marvell_i2c_wait_interrupt()) {
ERROR("Interrupt timeout after sending the address\n");
return -ETIMEDOUT;
}
/* check the status */
if (marvell_i2c_lost_arbitration(&status)) {
ERROR("%s - %d: Lost arbitration, got status %x\n",
__func__, __LINE__, status);
return -EAGAIN;
}
if (((status != I2C_STATUS_ADDR_R_ACK) && (command == I2C_CMD_READ)) ||
((status != I2C_STATUS_ADDR_W_ACK) && (command == I2C_CMD_WRITE))) {
/* only in debug, since at boot we try to read the SPD
 * of both DRAM modules, and we don't want error messages
 * in case a DIMM doesn't exist.
 */
INFO("%s: ERROR - status %x addr in %s mode.\n", __func__,
status, (command == I2C_CMD_WRITE) ? "Write" : "Read");
return -EPERM;
}
return 0;
}
/*
 * The I2C module contains a clock divider to generate the SCL clock.
 * This function calculates and sets the <N> and <M> fields in the I2C Baud
 * Rate Register to obtain the given 'requested_speed'.
 * The requested_speed will be equal to:
 * CONFIG_SYS_TCLK / (10 * (M + 1) * (2 << N))
 * Where M is the value represented by bits[6:3] and N is the value represented
 * by bits[2:0] of the "I2C Baud Rate Register".
 * Therefore the maximum M which can be set is 15 (4 bits) and the maximum N
 * is 7 (3 bits), so the lowest possible baudrate is:
 * CONFIG_SYS_TCLK / (10 * (15 + 1) * (2 << 7)), which equals
 * CONFIG_SYS_TCLK / 40960. Assuming CONFIG_SYS_TCLK = 250MHz, the lowest
 * possible frequency is ~6.1kHz.
 */
static unsigned int marvell_i2c_bus_speed_set(unsigned int requested_speed)
{
unsigned int n, m, freq, margin, min_margin = 0xffffffff;
unsigned int actual_n = 0, actual_m = 0;
int val;
/* Calculate N and M for the TWSI clock baud rate */
for (n = 0; n < 8; n++) {
for (m = 0; m < 16; m++) {
freq = CONFIG_SYS_TCLK / (10 * (m + 1) * (2 << n));
val = requested_speed - freq;
margin = (val > 0) ? val : -val;
if ((freq <= requested_speed) &&
(margin < min_margin)) {
min_margin = margin;
actual_n = n;
actual_m = m;
}
}
}
VERBOSE("%s: actual_n = %u, actual_m = %u\n",
__func__, actual_n, actual_m);
/* Set the baud rate */
mmio_write_32((uintptr_t)&base->u.baudrate, (actual_m << 3) | actual_n);
return 0;
}
#ifdef DEBUG_I2C
static int marvell_i2c_probe(uint8_t chip)
{
int ret = 0;
ret = marvell_i2c_start_bit_set();
if (ret != 0) {
marvell_i2c_stop_bit_set();
ERROR("%s - %d: %s", __func__, __LINE__,
"marvell_i2c_start_bit_set failed\n");
return -EPERM;
}
ret = marvell_i2c_address_set(chip, I2C_CMD_WRITE);
if (ret != 0) {
marvell_i2c_stop_bit_set();
ERROR("%s - %d: %s", __func__, __LINE__,
"marvell_i2c_address_set failed\n");
return -EPERM;
}
marvell_i2c_stop_bit_set();
VERBOSE("%s: successful I2C probe\n", __func__);
return ret;
}
#endif
/* regular i2c transaction */
static int marvell_i2c_data_receive(uint8_t *p_block, uint32_t block_size)
{
uint32_t reg, status, block_size_read = block_size;
/* Wait for cause interrupt */
if (marvell_i2c_wait_interrupt()) {
ERROR("Start clear bit timeout\n");
return -ETIMEDOUT;
}
while (block_size_read) {
if (block_size_read == 1) {
reg = mmio_read_32((uintptr_t)&base->control);
reg &= ~(I2C_CONTROL_ACK);
mmio_write_32((uintptr_t)&base->control, reg);
}
marvell_i2c_interrupt_clear();
if (marvell_i2c_wait_interrupt()) {
ERROR("Start clear bit timeout\n");
return -ETIMEDOUT;
}
/* check the status */
if (marvell_i2c_lost_arbitration(&status)) {
ERROR("%s - %d: Lost arbitration, got status %x\n",
__func__, __LINE__, status);
return -EAGAIN;
}
if ((status != I2C_STATUS_DATA_R_ACK) &&
(block_size_read != 1)) {
ERROR("Status %x in read transaction\n", status);
return -EPERM;
}
if ((status != I2C_STATUS_DATA_R_NAK) &&
(block_size_read == 1)) {
ERROR("Status %x in Rd Terminate\n", status);
return -EPERM;
}
/* read the data */
*p_block = (uint8_t) mmio_read_32((uintptr_t)&base->data);
VERBOSE("%s: place %d read %x\n", __func__,
block_size - block_size_read, *p_block);
p_block++;
block_size_read--;
}
return 0;
}
static int marvell_i2c_data_transmit(uint8_t *p_block, uint32_t block_size)
{
uint32_t status, block_size_write = block_size;
if (marvell_i2c_wait_interrupt()) {
ERROR("Start clear bit timeout\n");
return -ETIMEDOUT;
}
while (block_size_write) {
/* write the data */
mmio_write_32((uintptr_t)&base->data, (uint32_t) *p_block);
VERBOSE("%s: index = %d, data = %x\n", __func__,
block_size - block_size_write, *p_block);
p_block++;
block_size_write--;
marvell_i2c_interrupt_clear();
if (marvell_i2c_wait_interrupt()) {
ERROR("Start clear bit timeout\n");
return -ETIMEDOUT;
}
/* check the status */
if (marvell_i2c_lost_arbitration(&status)) {
ERROR("%s - %d: Lost arbitration, got status %x\n",
__func__, __LINE__, status);
return -EAGAIN;
}
if (status != I2C_STATUS_DATA_W_ACK) {
ERROR("Status %x in write transaction\n", status);
return -EPERM;
}
}
return 0;
}
static int marvell_i2c_target_offset_set(uint8_t chip, uint32_t addr, int alen)
{
uint8_t off_block[2];
uint32_t off_size;
if (alen == 2) { /* 2-byte addresses support */
off_block[0] = (addr >> 8) & 0xff;
off_block[1] = addr & 0xff;
off_size = 2;
} else { /* 1-byte addresses support */
off_block[0] = addr & 0xff;
off_size = 1;
}
VERBOSE("%s: off_size = %x addr1 = %x addr2 = %x\n", __func__,
off_size, off_block[0], off_block[1]);
return marvell_i2c_data_transmit(off_block, off_size);
}
static int marvell_i2c_unstuck(int ret)
{
uint32_t v;
if (ret != -ETIMEDOUT)
return ret;
VERBOSE("Trying to \"unstuck i2c\"... ");
i2c_init(base);
mmio_write_32((uintptr_t)&base->unstuck, I2C_UNSTUCK_TRIGGER);
do {
v = mmio_read_32((uintptr_t)&base->unstuck);
} while (v & I2C_UNSTUCK_ONGOING);
if (v & I2C_UNSTUCK_ERROR) {
VERBOSE("failed - soft reset i2c\n");
ret = -EPERM;
} else {
VERBOSE("ok\n");
i2c_init(base);
ret = -EAGAIN;
}
return ret;
}
/*
* API Functions
*/
void i2c_init(void *i2c_base)
{
/* The I2C speed and slave address are not taken as parameters;
 * the working speed and slave address are provided by plat_def.h
 */
base = (struct marvell_i2c_regs *)i2c_base;
/* Reset the I2C logic */
mmio_write_32((uintptr_t)&base->soft_reset, 0);
udelay(200);
marvell_i2c_bus_speed_set(CONFIG_SYS_I2C_SPEED);
/* Enable the I2C and slave */
mmio_write_32((uintptr_t)&base->control,
I2C_CONTROL_TWSIEN | I2C_CONTROL_ACK);
/* set the I2C slave address */
mmio_write_32((uintptr_t)&base->xtnd_slave_addr, 0);
mmio_write_32((uintptr_t)&base->slave_address, CONFIG_SYS_I2C_SLAVE);
/* unmask I2C interrupt */
mmio_write_32((uintptr_t)&base->control,
mmio_read_32((uintptr_t)&base->control) |
I2C_CONTROL_INTEN);
udelay(10);
}
/*
* i2c_read: - Read multiple bytes from an i2c device
*
* The higher level routines take into account that this function is only
* called with len < page length of the device (see configuration file)
*
* @chip: address of the chip which is to be read
* @addr: i2c data address within the chip
* @alen: length of the i2c data address (1..2 bytes)
* @buffer: where to write the data
 * @len: how many bytes to read
* @return: 0 in case of success
*/
int i2c_read(uint8_t chip, uint32_t addr, int alen, uint8_t *buffer, int len)
{
int ret = 0;
uint32_t counter = 0;
#ifdef DEBUG_I2C
marvell_i2c_probe(chip);
#endif
do {
if (ret != -EAGAIN && ret) {
ERROR("i2c transaction failed, after %d retries\n",
counter);
marvell_i2c_stop_bit_set();
return ret;
}
/* wait for 1 us for the interrupt clear to take effect */
if (counter > 0)
udelay(1);
counter++;
ret = marvell_i2c_start_bit_set();
if (ret) {
ret = marvell_i2c_unstuck(ret);
continue;
}
/* if EEPROM device */
if (alen != 0) {
ret = marvell_i2c_address_set(chip, I2C_CMD_WRITE);
if (ret)
continue;
ret = marvell_i2c_target_offset_set(chip, addr, alen);
if (ret)
continue;
ret = marvell_i2c_start_bit_set();
if (ret)
continue;
}
ret = marvell_i2c_address_set(chip, I2C_CMD_READ);
if (ret)
continue;
ret = marvell_i2c_data_receive(buffer, len);
if (ret)
continue;
ret = marvell_i2c_stop_bit_set();
} while ((ret == -EAGAIN) && (counter < I2C_MAX_RETRY_CNT));
if (counter == I2C_MAX_RETRY_CNT) {
ERROR("I2C transactions failed, got EAGAIN %d times\n",
I2C_MAX_RETRY_CNT);
ret = -EPERM;
}
mmio_write_32((uintptr_t)&base->control,
mmio_read_32((uintptr_t)&base->control) |
I2C_CONTROL_ACK);
udelay(1);
return ret;
}
/*
* i2c_write: - Write multiple bytes to an i2c device
*
* The higher level routines take into account that this function is only
* called with len < page length of the device (see configuration file)
*
* @chip: address of the chip which is to be written
* @addr: i2c data address within the chip
* @alen: length of the i2c data address (1..2 bytes)
* @buffer: where to find the data to be written
 * @len: how many bytes to write
* @return: 0 in case of success
*/
int i2c_write(uint8_t chip, uint32_t addr, int alen, uint8_t *buffer, int len)
{
int ret = 0;
uint32_t counter = 0;
do {
if (ret != -EAGAIN && ret) {
ERROR("i2c transaction failed\n");
marvell_i2c_stop_bit_set();
return ret;
}
/* wait for 1 us for the interrupt clear to take effect */
if (counter > 0)
udelay(1);
counter++;
ret = marvell_i2c_start_bit_set();
if (ret) {
ret = marvell_i2c_unstuck(ret);
continue;
}
ret = marvell_i2c_address_set(chip, I2C_CMD_WRITE);
if (ret)
continue;
/* if EEPROM device */
if (alen != 0) {
ret = marvell_i2c_target_offset_set(chip, addr, alen);
if (ret)
continue;
}
ret = marvell_i2c_data_transmit(buffer, len);
if (ret)
continue;
ret = marvell_i2c_stop_bit_set();
} while ((ret == -EAGAIN) && (counter < I2C_MAX_RETRY_CNT));
if (counter == I2C_MAX_RETRY_CNT) {
ERROR("I2C transactions failed, got EAGAIN %d times\n",
I2C_MAX_RETRY_CNT);
ret = -EPERM;
}
udelay(1);
return ret;
}

drivers/marvell/io_win.c (new file, 267 lines)
/*
* Copyright (C) 2018 Marvell International Ltd.
*
* SPDX-License-Identifier: BSD-3-Clause
* https://spdx.org/licenses
*/
/* IO Window unit device driver for Marvell AP806, AP807 and AP810 SoCs */
#include <a8k_common.h>
#include <debug.h>
#include <io_win.h>
#include <mmio.h>
#include <mvebu.h>
#include <mvebu_def.h>
#if LOG_LEVEL >= LOG_LEVEL_INFO
#define DEBUG_ADDR_MAP
#endif
/* common defines */
#define WIN_ENABLE_BIT (0x1)
/* Physical address of the base of the window = {Addr[19:0],20`h0} */
#define ADDRESS_SHIFT (20 - 4)
#define ADDRESS_MASK (0xFFFFFFF0)
#define IO_WIN_ALIGNMENT_1M (0x100000)
#define IO_WIN_ALIGNMENT_64K (0x10000)
/* AP registers */
#define IO_WIN_ALR_OFFSET(ap, win) (MVEBU_IO_WIN_BASE(ap) + 0x0 + \
(0x10 * win))
#define IO_WIN_AHR_OFFSET(ap, win) (MVEBU_IO_WIN_BASE(ap) + 0x8 + \
(0x10 * win))
#define IO_WIN_CR_OFFSET(ap, win) (MVEBU_IO_WIN_BASE(ap) + 0xC + \
(0x10 * win))
/* For storage of CR, ALR, AHR and GCR */
static uint32_t io_win_regs_save[MVEBU_IO_WIN_MAX_WINS * 3 + 1];
static void io_win_check(struct addr_map_win *win)
{
/* for IO windows the base is always 1M aligned */
/* check if the address is aligned to 1M */
if (IS_NOT_ALIGN(win->base_addr, IO_WIN_ALIGNMENT_1M)) {
win->base_addr = ALIGN_UP(win->base_addr, IO_WIN_ALIGNMENT_1M);
NOTICE("%s: Align up the base address to 0x%llx\n",
__func__, win->base_addr);
}
/* size parameter validity check */
if (IS_NOT_ALIGN(win->win_size, IO_WIN_ALIGNMENT_1M)) {
win->win_size = ALIGN_UP(win->win_size, IO_WIN_ALIGNMENT_1M);
NOTICE("%s: Aligning size to 0x%llx\n",
__func__, win->win_size);
}
}
static void io_win_enable_window(int ap_index, struct addr_map_win *win,
uint32_t win_num)
{
uint32_t alr, ahr;
uint64_t end_addr;
if (win->target_id < 0 || win->target_id >= MVEBU_IO_WIN_MAX_WINS) {
ERROR("target ID = %d, is invalid\n", win->target_id);
return;
}
if ((win_num == 0) || (win_num > MVEBU_IO_WIN_MAX_WINS)) {
ERROR("Enabling wrong IOW window %d!\n", win_num);
return;
}
/* calculate the end-address */
end_addr = (win->base_addr + win->win_size - 1);
alr = (uint32_t)((win->base_addr >> ADDRESS_SHIFT) & ADDRESS_MASK);
alr |= WIN_ENABLE_BIT;
ahr = (uint32_t)((end_addr >> ADDRESS_SHIFT) & ADDRESS_MASK);
/* write start address and end address for IO window */
mmio_write_32(IO_WIN_ALR_OFFSET(ap_index, win_num), alr);
mmio_write_32(IO_WIN_AHR_OFFSET(ap_index, win_num), ahr);
/* write window target */
mmio_write_32(IO_WIN_CR_OFFSET(ap_index, win_num), win->target_id);
}
static void io_win_disable_window(int ap_index, uint32_t win_num)
{
uint32_t win_reg;
if ((win_num == 0) || (win_num > MVEBU_IO_WIN_MAX_WINS)) {
ERROR("Disabling wrong IOW window %d!\n", win_num);
return;
}
win_reg = mmio_read_32(IO_WIN_ALR_OFFSET(ap_index, win_num));
win_reg &= ~WIN_ENABLE_BIT;
mmio_write_32(IO_WIN_ALR_OFFSET(ap_index, win_num), win_reg);
}
/* Insert/Remove temporary window for using the out-of-reset default
* CPx base address to access the CP configuration space prior to
* the further base address update in accordance with address mapping
* design.
*
* NOTE: Use the same window array for insertion and removal of
* temporary windows.
*/
void iow_temp_win_insert(int ap_index, struct addr_map_win *win, int size)
{
uint32_t win_id;
for (int i = 0; i < size; i++) {
win_id = MVEBU_IO_WIN_MAX_WINS - i - 1;
io_win_check(win);
io_win_enable_window(ap_index, win, win_id);
win++;
}
}
/*
* NOTE: Use the same window array for insertion and removal of
* temporary windows.
*/
void iow_temp_win_remove(int ap_index, struct addr_map_win *win, int size)
{
uint32_t win_id;
/* Start from the last window and do not touch Win0 */
for (int i = 0; i < size; i++) {
uint64_t base;
uint32_t target;
win_id = MVEBU_IO_WIN_MAX_WINS - i - 1;
target = mmio_read_32(IO_WIN_CR_OFFSET(ap_index, win_id));
base = mmio_read_32(IO_WIN_ALR_OFFSET(ap_index, win_id));
base &= ~WIN_ENABLE_BIT;
base <<= ADDRESS_SHIFT;
if ((win->target_id != target) || (win->base_addr != base)) {
ERROR("%s: Trying to remove bad window-%d!\n",
__func__, win_id);
continue;
}
io_win_disable_window(ap_index, win_id);
win++;
}
}
#ifdef DEBUG_ADDR_MAP
static void dump_io_win(int ap_index)
{
uint32_t trgt_id, win_id;
uint32_t alr, ahr;
uint64_t start, end;
/* Dump all IO windows */
tf_printf("\tbank target start end\n");
tf_printf("\t----------------------------------------------------\n");
for (win_id = 0; win_id < MVEBU_IO_WIN_MAX_WINS; win_id++) {
alr = mmio_read_32(IO_WIN_ALR_OFFSET(ap_index, win_id));
if (alr & WIN_ENABLE_BIT) {
alr &= ~WIN_ENABLE_BIT;
ahr = mmio_read_32(IO_WIN_AHR_OFFSET(ap_index, win_id));
trgt_id = mmio_read_32(IO_WIN_CR_OFFSET(ap_index,
win_id));
start = ((uint64_t)alr << ADDRESS_SHIFT);
end = (((uint64_t)ahr + 0x10) << ADDRESS_SHIFT);
tf_printf("\tio-win %d 0x%016llx 0x%016llx\n",
trgt_id, start, end);
}
}
tf_printf("\tio-win gcr is %x\n",
mmio_read_32(MVEBU_IO_WIN_BASE(ap_index) +
MVEBU_IO_WIN_GCR_OFFSET));
}
#endif
static void iow_save_win_range(int ap_id, int win_first, int win_last,
uint32_t *buffer)
{
int win_id, idx;
/* Save IOW */
for (idx = 0, win_id = win_first; win_id <= win_last; win_id++) {
buffer[idx++] = mmio_read_32(IO_WIN_CR_OFFSET(ap_id, win_id));
buffer[idx++] = mmio_read_32(IO_WIN_ALR_OFFSET(ap_id, win_id));
buffer[idx++] = mmio_read_32(IO_WIN_AHR_OFFSET(ap_id, win_id));
}
buffer[idx] = mmio_read_32(MVEBU_IO_WIN_BASE(ap_id) +
MVEBU_IO_WIN_GCR_OFFSET);
}
static void iow_restore_win_range(int ap_id, int win_first, int win_last,
uint32_t *buffer)
{
int win_id, idx;
/* Restore IOW */
for (idx = 0, win_id = win_first; win_id <= win_last; win_id++) {
mmio_write_32(IO_WIN_CR_OFFSET(ap_id, win_id), buffer[idx++]);
mmio_write_32(IO_WIN_ALR_OFFSET(ap_id, win_id), buffer[idx++]);
mmio_write_32(IO_WIN_AHR_OFFSET(ap_id, win_id), buffer[idx++]);
}
mmio_write_32(MVEBU_IO_WIN_BASE(ap_id) + MVEBU_IO_WIN_GCR_OFFSET,
buffer[idx++]);
}
void iow_save_win_all(int ap_id)
{
iow_save_win_range(ap_id, 0, MVEBU_IO_WIN_MAX_WINS - 1,
io_win_regs_save);
}
void iow_restore_win_all(int ap_id)
{
iow_restore_win_range(ap_id, 0, MVEBU_IO_WIN_MAX_WINS - 1,
io_win_regs_save);
}
int init_io_win(int ap_index)
{
struct addr_map_win *win;
uint32_t win_id, win_reg;
uint32_t win_count;
INFO("Initializing IO WIN Address decoding\n");
/* Get the array of the windows and its size */
marvell_get_io_win_memory_map(ap_index, &win, &win_count);
if (win_count <= 0)
INFO("no windows configurations found\n");
if (win_count > MVEBU_IO_WIN_MAX_WINS) {
INFO("number of windows is bigger than %d\n",
MVEBU_IO_WIN_MAX_WINS);
return 0;
}
/* Get the default target id to set the GCR */
win_reg = marvell_get_io_win_gcr_target(ap_index);
mmio_write_32(MVEBU_IO_WIN_BASE(ap_index) + MVEBU_IO_WIN_GCR_OFFSET,
win_reg);
/* disable all IO windows */
for (win_id = 1; win_id < MVEBU_IO_WIN_MAX_WINS; win_id++)
io_win_disable_window(ap_index, win_id);
/* enable the relevant windows, starting from win_id = 1 because
 * index 0 is dedicated to the BootROM
 */
for (win_id = 1; win_id <= win_count; win_id++, win++) {
io_win_check(win);
io_win_enable_window(ap_index, win, win_id);
}
#ifdef DEBUG_ADDR_MAP
dump_io_win(ap_index);
#endif
INFO("Done IO WIN Address decoding initialization\n");
return 0;
}

drivers/marvell/iob.c (new file, 195 lines)
/*
* Copyright (C) 2016 - 2018 Marvell International Ltd.
*
* SPDX-License-Identifier: BSD-3-Clause
* https://spdx.org/licenses
*/
/* IOB unit device driver for Marvell CP110 and CP115 SoCs */
#include <a8k_common.h>
#include <arch_helpers.h>
#include <debug.h>
#include <iob.h>
#include <mmio.h>
#include <mvebu.h>
#include <mvebu_def.h>
#if LOG_LEVEL >= LOG_LEVEL_INFO
#define DEBUG_ADDR_MAP
#endif
#define MVEBU_IOB_OFFSET (0x190000)
#define MVEBU_IOB_MAX_WINS 16
/* common defines */
#define WIN_ENABLE_BIT (0x1)
/* Physical address of the base of the window = {AddrLow[19:0],20`h0} */
#define ADDRESS_SHIFT (20 - 4)
#define ADDRESS_MASK (0xFFFFFFF0)
#define IOB_WIN_ALIGNMENT (0x100000)
/* IOB registers */
#define IOB_WIN_CR_OFFSET(win) (iob_base + 0x0 + (0x20 * win))
#define IOB_TARGET_ID_OFFSET (8)
#define IOB_TARGET_ID_MASK (0xF)
#define IOB_WIN_SCR_OFFSET(win) (iob_base + 0x4 + (0x20 * win))
#define IOB_WIN_ENA_CTRL_WRITE_SECURE (0x1)
#define IOB_WIN_ENA_CTRL_READ_SECURE (0x2)
#define IOB_WIN_ENA_WRITE_SECURE (0x4)
#define IOB_WIN_ENA_READ_SECURE (0x8)
#define IOB_WIN_ALR_OFFSET(win) (iob_base + 0x8 + (0x20 * win))
#define IOB_WIN_AHR_OFFSET(win) (iob_base + 0xC + (0x20 * win))
uintptr_t iob_base;
static void iob_win_check(struct addr_map_win *win, uint32_t win_num)
{
/* check if address is aligned to the size */
if (IS_NOT_ALIGN(win->base_addr, IOB_WIN_ALIGNMENT)) {
win->base_addr = ALIGN_UP(win->base_addr, IOB_WIN_ALIGNMENT);
ERROR("Window %d: base address unaligned to 0x%x\n",
win_num, IOB_WIN_ALIGNMENT);
tf_printf("Align up the base address to 0x%llx\n",
win->base_addr);
}
/* size parameter validity check */
if (IS_NOT_ALIGN(win->win_size, IOB_WIN_ALIGNMENT)) {
win->win_size = ALIGN_UP(win->win_size, IOB_WIN_ALIGNMENT);
ERROR("Window %d: window size unaligned to 0x%x\n", win_num,
IOB_WIN_ALIGNMENT);
tf_printf("Aligning size to 0x%llx\n", win->win_size);
}
}
static void iob_enable_win(struct addr_map_win *win, uint32_t win_id)
{
uint32_t iob_win_reg;
uint32_t alr, ahr;
uint64_t end_addr;
end_addr = (win->base_addr + win->win_size - 1);
alr = (uint32_t)((win->base_addr >> ADDRESS_SHIFT) & ADDRESS_MASK);
ahr = (uint32_t)((end_addr >> ADDRESS_SHIFT) & ADDRESS_MASK);
mmio_write_32(IOB_WIN_ALR_OFFSET(win_id), alr);
mmio_write_32(IOB_WIN_AHR_OFFSET(win_id), ahr);
iob_win_reg = WIN_ENABLE_BIT;
iob_win_reg |= (win->target_id & IOB_TARGET_ID_MASK)
<< IOB_TARGET_ID_OFFSET;
mmio_write_32(IOB_WIN_CR_OFFSET(win_id), iob_win_reg);
}
#ifdef DEBUG_ADDR_MAP
static void dump_iob(void)
{
uint32_t win_id, win_cr, alr, ahr;
uint8_t target_id;
uint64_t start, end;
char *iob_target_name[IOB_MAX_TID] = {
"CFG ", "MCI0 ", "PEX1 ", "PEX2 ",
"PEX0 ", "NAND ", "RUNIT", "MCI1 " };
/* Dump all IOB windows */
tf_printf("bank id target start end\n");
tf_printf("----------------------------------------------------\n");
for (win_id = 0; win_id < MVEBU_IOB_MAX_WINS; win_id++) {
win_cr = mmio_read_32(IOB_WIN_CR_OFFSET(win_id));
if (win_cr & WIN_ENABLE_BIT) {
target_id = (win_cr >> IOB_TARGET_ID_OFFSET) &
IOB_TARGET_ID_MASK;
alr = mmio_read_32(IOB_WIN_ALR_OFFSET(win_id));
start = ((uint64_t)alr << ADDRESS_SHIFT);
if (win_id != 0) {
ahr = mmio_read_32(IOB_WIN_AHR_OFFSET(win_id));
end = (((uint64_t)ahr + 0x10) << ADDRESS_SHIFT);
} else {
/* Window #0 size is hardcoded to 16MB, as it's
* reserved for CP configuration space.
*/
end = start + (16 << 20);
}
tf_printf("iob %02d %s 0x%016llx 0x%016llx\n",
win_id, iob_target_name[target_id],
start, end);
}
}
}
#endif
void iob_cfg_space_update(int ap_idx, int cp_idx, uintptr_t base,
uintptr_t new_base)
{
debug_enter();
iob_base = base + MVEBU_IOB_OFFSET;
NOTICE("Change the base address of AP%d-CP%d to %lx\n",
ap_idx, cp_idx, new_base);
mmio_write_32(IOB_WIN_ALR_OFFSET(0), new_base >> ADDRESS_SHIFT);
iob_base = new_base + MVEBU_IOB_OFFSET;
/* Make sure the address was configured by the CPU before
* any possible access to the CP.
*/
dsb();
debug_exit();
}
int init_iob(uintptr_t base)
{
struct addr_map_win *win;
uint32_t win_id, win_reg;
uint32_t win_count;
INFO("Initializing IOB Address decoding\n");
/* Get the base address of the address decoding MBUS */
iob_base = base + MVEBU_IOB_OFFSET;
/* Get the array of the windows and fill the map data */
marvell_get_iob_memory_map(&win, &win_count, base);
if (win_count <= 0) {
INFO("no windows configurations found\n");
return 0;
} else if (win_count > (MVEBU_IOB_MAX_WINS - 1)) {
ERROR("IOB mem map array > than max available windows (%d)\n",
MVEBU_IOB_MAX_WINS);
win_count = MVEBU_IOB_MAX_WINS;
}
/* disable all IOB windows, starting from win_id = 1 because
 * the internal register window cannot be disabled
 */
for (win_id = 1; win_id < MVEBU_IOB_MAX_WINS; win_id++) {
win_reg = mmio_read_32(IOB_WIN_CR_OFFSET(win_id));
win_reg &= ~WIN_ENABLE_BIT;
mmio_write_32(IOB_WIN_CR_OFFSET(win_id), win_reg);
win_reg = ~IOB_WIN_ENA_CTRL_WRITE_SECURE;
win_reg &= ~IOB_WIN_ENA_CTRL_READ_SECURE;
win_reg &= ~IOB_WIN_ENA_WRITE_SECURE;
win_reg &= ~IOB_WIN_ENA_READ_SECURE;
mmio_write_32(IOB_WIN_SCR_OFFSET(win_id), win_reg);
}
for (win_id = 1; win_id < win_count + 1; win_id++, win++) {
iob_win_check(win, win_id);
iob_enable_win(win, win_id);
}
#ifdef DEBUG_ADDR_MAP
dump_iob();
#endif
INFO("Done IOB Address decoding initialization\n");
return 0;
}

drivers/marvell/mci.c (new file, 832 lines)
/*
* Copyright (C) 2018 Marvell International Ltd.
*
* SPDX-License-Identifier: BSD-3-Clause
* https://spdx.org/licenses
*/
/* MCI bus driver for Marvell ARMADA 8K and 8K+ SoCs */
#include <debug.h>
#include <delay_timer.h>
#include <mmio.h>
#include <mci.h>
#include <mvebu.h>
#include <mvebu_def.h>
#include <plat_marvell.h>
/* /HB /Units /Direct_regs /Direct regs
* /Configuration Register Write/Read Data Register
*/
#define MCI_WRITE_READ_DATA_REG(mci_index) \
MVEBU_MCI_REG_BASE_REMAP(mci_index)
/* /HB /Units /Direct_regs /Direct regs
* /Configuration Register Access Command Register
*/
#define MCI_ACCESS_CMD_REG(mci_index) \
(MVEBU_MCI_REG_BASE_REMAP(mci_index) + 0x4)
/* Access Command fields :
* bit[3:0] - Sub command: 1 => Peripheral Config Register Read,
* 0 => Peripheral Config Register Write,
* 2 => Peripheral Assign ID request,
* 3 => Circular Config Write
* bit[5] - 1 => Local (same chip access) 0 => Remote
* bit[15:8] - Destination hop ID. Put Global ID (GID) here (see scheme below).
* bit[23:22] - 0x3 IHB PHY REG address space, 0x0 IHB Controller space
 * bit[21:16] - Low 6 bits of offset. High 2 bits are taken from bit[28:27]
 * of IHB_PHY_CTRL
* (must be set before any PHY register access occurs):
* /IHB_REG /IHB_REGInterchip Hopping Bus Registers
* /IHB Version Control Register
*
* ixi_ihb_top IHB PHY
* AXI ----------------------------- -------------
* <--| axi_hb_top | ihb_pipe_top |-->| |
* -->| GID=1 | GID=0 |<--| |
* ----------------------------- -------------
*/
#define MCI_INDIRECT_CTRL_READ_CMD 0x1
#define MCI_INDIRECT_CTRL_ASSIGN_CMD 0x2
#define MCI_INDIRECT_CTRL_CIRCULAR_CMD 0x3
#define MCI_INDIRECT_CTRL_LOCAL_PKT (1 << 5)
#define MCI_INDIRECT_CTRL_CMD_DONE_OFFSET 6
#define MCI_INDIRECT_CTRL_CMD_DONE \
(1 << MCI_INDIRECT_CTRL_CMD_DONE_OFFSET)
#define MCI_INDIRECT_CTRL_DATA_READY_OFFSET 7
#define MCI_INDIRECT_CTRL_DATA_READY \
(1 << MCI_INDIRECT_CTRL_DATA_READY_OFFSET)
#define MCI_INDIRECT_CTRL_HOPID_OFFSET 8
#define MCI_INDIRECT_CTRL_HOPID(id) \
(((id) & 0xFF) << MCI_INDIRECT_CTRL_HOPID_OFFSET)
#define MCI_INDIRECT_CTRL_REG_CHIPID_OFFSET 16
#define MCI_INDIRECT_REG_CTRL_ADDR(reg_num) \
(reg_num << MCI_INDIRECT_CTRL_REG_CHIPID_OFFSET)
/* Hop ID values */
#define GID_IHB_PIPE 0
#define GID_AXI_HB 1
#define GID_IHB_EXT 2
#define MCI_DID_GLOBAL_ASSIGNMENT_REQUEST_REG 0x2
/* Target MCi Local ID (LID, which is = self DID) */
#define MCI_DID_GLOBAL_ASSIGN_REQ_MCI_LOCAL_ID(val) (((val) & 0xFF) << 16)
/* Bits [15:8]: Number of MCis on chip of target MCi */
#define MCI_DID_GLOBAL_ASSIGN_REQ_MCI_COUNT(val) (((val) & 0xFF) << 8)
/* Bits [7:0]: Number of hops on chip of target MCi */
#define MCI_DID_GLOBAL_ASSIGN_REQ_HOPS_NUM(val) (((val) & 0xFF) << 0)
/* IHB_REG domain registers */
/* /HB /Units /IHB_REG /IHB_REGInterchip Hopping Bus Registers/
* Rx Memory Configuration Register (RX_MEM_CFG)
*/
#define MCI_CTRL_RX_MEM_CFG_REG_NUM 0x0
#define MCI_CTRL_RX_TX_MEM_CFG_RQ_THRESH(val) (((val) & 0xFF) << 24)
#define MCI_CTRL_RX_TX_MEM_CFG_PQ_THRESH(val) (((val) & 0xFF) << 16)
#define MCI_CTRL_RX_TX_MEM_CFG_NQ_THRESH(val) (((val) & 0xFF) << 8)
#define MCI_CTRL_RX_TX_MEM_CFG_DELTA_THRESH(val) (((val) & 0xF) << 4)
#define MCI_CTRL_RX_TX_MEM_CFG_RTC(val) (((val) & 0x3) << 2)
#define MCI_CTRL_RX_TX_MEM_CFG_WTC(val) (((val) & 0x3) << 0)
#define MCI_CTRL_RX_MEM_CFG_REG_DEF_CP_VAL \
(MCI_CTRL_RX_TX_MEM_CFG_RQ_THRESH(0x07) | \
MCI_CTRL_RX_TX_MEM_CFG_PQ_THRESH(0x3f) | \
MCI_CTRL_RX_TX_MEM_CFG_NQ_THRESH(0x3f) | \
MCI_CTRL_RX_TX_MEM_CFG_DELTA_THRESH(0xf) | \
MCI_CTRL_RX_TX_MEM_CFG_RTC(1) | \
MCI_CTRL_RX_TX_MEM_CFG_WTC(1))
#define MCI_CTRL_RX_MEM_CFG_REG_DEF_AP_VAL \
(MCI_CTRL_RX_TX_MEM_CFG_RQ_THRESH(0x3f) | \
MCI_CTRL_RX_TX_MEM_CFG_PQ_THRESH(0x03) | \
MCI_CTRL_RX_TX_MEM_CFG_NQ_THRESH(0x3f) | \
MCI_CTRL_RX_TX_MEM_CFG_DELTA_THRESH(0xf) | \
MCI_CTRL_RX_TX_MEM_CFG_RTC(1) | \
MCI_CTRL_RX_TX_MEM_CFG_WTC(1))
/* /HB /Units /IHB_REG /IHB_REGInterchip Hopping Bus Registers/
* Tx Memory Configuration Register (TX_MEM_CFG)
*/
#define MCI_CTRL_TX_MEM_CFG_REG_NUM 0x1
/* field mapping for TX mem config register
* are the same as for RX register - see register above
*/
#define MCI_CTRL_TX_MEM_CFG_REG_DEF_VAL \
(MCI_CTRL_RX_TX_MEM_CFG_RQ_THRESH(0x20) | \
MCI_CTRL_RX_TX_MEM_CFG_PQ_THRESH(0x20) | \
MCI_CTRL_RX_TX_MEM_CFG_NQ_THRESH(0x20) | \
MCI_CTRL_RX_TX_MEM_CFG_DELTA_THRESH(2) | \
MCI_CTRL_RX_TX_MEM_CFG_RTC(1) | \
MCI_CTRL_RX_TX_MEM_CFG_WTC(1))
/* /HB /Units /IHB_REG /IHB_REGInterchip Hopping Bus Registers
* /IHB Link CRC Control
*/
/* MCi Link CRC Control Register (MCi_CRC_CTRL) */
#define MCI_LINK_CRC_CTRL_REG_NUM 0x4
/* /HB /Units /IHB_REG /IHB_REGInterchip Hopping Bus Registers
* /IHB Status Register
*/
/* MCi Status Register (MCi_STS) */
#define MCI_CTRL_STATUS_REG_NUM 0x5
#define MCI_CTRL_STATUS_REG_PHY_READY (1 << 12)
#define MCI_CTRL_STATUS_REG_LINK_PRESENT (1 << 15)
#define MCI_CTRL_STATUS_REG_PHY_CID_VIO_OFFSET 24
#define MCI_CTRL_STATUS_REG_PHY_CID_VIO_MASK \
(0xF << MCI_CTRL_STATUS_REG_PHY_CID_VIO_OFFSET)
/* Expected successful Link result, including reserved bit */
#define MCI_CTRL_PHY_READY (MCI_CTRL_STATUS_REG_PHY_READY | \
MCI_CTRL_STATUS_REG_LINK_PRESENT | \
MCI_CTRL_STATUS_REG_PHY_CID_VIO_MASK)
/* /HB /Units /IHB_REG /IHB_REGInterchip Hopping Bus Registers/
* MCi PHY Speed Settings Register (MCi_PHY_SETTING)
*/
#define MCI_CTRL_MCI_PHY_SETTINGS_REG_NUM 0x8
#define MCI_CTRL_MCI_PHY_SET_DLO_FIFO_FULL_TRESH(val) (((val) & 0xF) << 28)
#define MCI_CTRL_MCI_PHY_SET_PHY_MAX_SPEED(val) (((val) & 0xF) << 12)
#define MCI_CTRL_MCI_PHY_SET_PHYCLK_SEL(val) (((val) & 0xF) << 8)
#define MCI_CTRL_MCI_PHY_SET_REFCLK_FREQ_SEL(val) (((val) & 0xF) << 4)
#define MCI_CTRL_MCI_PHY_SET_AUTO_LINK_EN(val) (((val) & 0x1) << 1)
#define MCI_CTRL_MCI_PHY_SET_REG_DEF_VAL \
(MCI_CTRL_MCI_PHY_SET_DLO_FIFO_FULL_TRESH(0x3) | \
MCI_CTRL_MCI_PHY_SET_PHY_MAX_SPEED(0x3) | \
MCI_CTRL_MCI_PHY_SET_PHYCLK_SEL(0x2) | \
MCI_CTRL_MCI_PHY_SET_REFCLK_FREQ_SEL(0x1))
#define MCI_CTRL_MCI_PHY_SET_REG_DEF_VAL2 \
(MCI_CTRL_MCI_PHY_SET_DLO_FIFO_FULL_TRESH(0x3) | \
MCI_CTRL_MCI_PHY_SET_PHY_MAX_SPEED(0x3) | \
MCI_CTRL_MCI_PHY_SET_PHYCLK_SEL(0x5) | \
MCI_CTRL_MCI_PHY_SET_REFCLK_FREQ_SEL(0x1))
/* /HB /Units /IHB_REG /IHB_REGInterchip Hopping Bus Registers
* /IHB Mode Config
*/
#define MCI_CTRL_IHB_MODE_CFG_REG_NUM 0x25
#define MCI_CTRL_IHB_MODE_HBCLK_DIV(val) ((val) & 0xFF)
#define MCI_CTRL_IHB_MODE_CHUNK_MOD_OFFSET 8
#define MCI_CTRL_IHB_MODE_CHUNK_MOD \
(1 << MCI_CTRL_IHB_MODE_CHUNK_MOD_OFFSET)
#define MCI_CTRL_IHB_MODE_FWD_MOD_OFFSET 9
#define MCI_CTRL_IHB_MODE_FWD_MOD \
(1 << MCI_CTRL_IHB_MODE_FWD_MOD_OFFSET)
#define MCI_CTRL_IHB_MODE_SEQFF_FINE_MOD(val) (((val) & 0xF) << 12)
#define MCI_CTRL_IHB_MODE_RX_COMB_THRESH(val) (((val) & 0xFF) << 16)
#define MCI_CTRL_IHB_MODE_TX_COMB_THRESH(val) (((val) & 0xFF) << 24)
#define MCI_CTRL_IHB_MODE_CFG_REG_DEF_VAL \
(MCI_CTRL_IHB_MODE_HBCLK_DIV(6) | \
MCI_CTRL_IHB_MODE_FWD_MOD | \
MCI_CTRL_IHB_MODE_SEQFF_FINE_MOD(0xF) | \
MCI_CTRL_IHB_MODE_RX_COMB_THRESH(0x3f) | \
MCI_CTRL_IHB_MODE_TX_COMB_THRESH(0x40))
/* AXI_HB registers */
#define MCI_AXI_ACCESS_DATA_REG_NUM 0x0
#define MCI_AXI_ACCESS_PCIE_MODE 1
#define MCI_AXI_ACCESS_CACHE_CHECK_OFFSET 5
#define MCI_AXI_ACCESS_CACHE_CHECK \
(1 << MCI_AXI_ACCESS_CACHE_CHECK_OFFSET)
#define MCI_AXI_ACCESS_FORCE_POST_WR_OFFSET 6
#define MCI_AXI_ACCESS_FORCE_POST_WR \
(1 << MCI_AXI_ACCESS_FORCE_POST_WR_OFFSET)
#define MCI_AXI_ACCESS_DISABLE_CLK_GATING_OFFSET 9
#define MCI_AXI_ACCESS_DISABLE_CLK_GATING \
(1 << MCI_AXI_ACCESS_DISABLE_CLK_GATING_OFFSET)
/* /HB /Units /HB_REG /HB_REGHopping Bus Registers
* /Window 0 Address Mask Register
*/
#define MCI_HB_CTRL_WIN0_ADDRESS_MASK_REG_NUM 0x2
/* /HB /Units /HB_REG /HB_REGHopping Bus Registers
* /Window 0 Destination Register
*/
#define MCI_HB_CTRL_WIN0_DESTINATION_REG_NUM 0x3
#define MCI_HB_CTRL_WIN0_DEST_VALID_FLAG(val) (((val) & 0x1) << 16)
#define MCI_HB_CTRL_WIN0_DEST_ID(val) (((val) & 0xFF) << 0)
/* /HB /Units /HB_REG /HB_REGHopping Bus Registers /Tx Control Register */
#define MCI_HB_CTRL_TX_CTRL_REG_NUM 0xD
#define MCI_HB_CTRL_TX_CTRL_PCIE_MODE_OFFSET 24
#define MCI_HB_CTRL_TX_CTRL_PCIE_MODE \
(1 << MCI_HB_CTRL_TX_CTRL_PCIE_MODE_OFFSET)
#define MCI_HB_CTRL_TX_CTRL_PRI_TH_QOS(val) (((val) & 0xF) << 12)
#define MCI_HB_CTRL_TX_CTRL_MAX_RD_CNT(val) (((val) & 0x1F) << 6)
#define MCI_HB_CTRL_TX_CTRL_MAX_WR_CNT(val) (((val) & 0x1F) << 0)
/* /HB /Units /IHB_REG /IHB_REGInterchip Hopping Bus Registers
* /IHB Version Control Register
*/
#define MCI_PHY_CTRL_REG_NUM 0x7
#define MCI_PHY_CTRL_MCI_MINOR 0x8 /* BITS [3:0] */
#define MCI_PHY_CTRL_MCI_MAJOR_OFFSET 4
#define MCI_PHY_CTRL_MCI_MAJOR \
(1 << MCI_PHY_CTRL_MCI_MAJOR_OFFSET)
#define MCI_PHY_CTRL_MCI_SLEEP_REQ_OFFSET 11
#define MCI_PHY_CTRL_MCI_SLEEP_REQ \
(1 << MCI_PHY_CTRL_MCI_SLEEP_REQ_OFFSET)
/* Host=1 / Device=0 PHY mode */
#define MCI_PHY_CTRL_MCI_PHY_MODE_OFFSET 24
#define MCI_PHY_CTRL_MCI_PHY_MODE_HOST \
(1 << MCI_PHY_CTRL_MCI_PHY_MODE_OFFSET)
/* Register=1 / PWM=0 interface */
#define MCI_PHY_CTRL_MCI_PHY_REG_IF_MODE_OFFSET 25
#define MCI_PHY_CTRL_MCI_PHY_REG_IF_MODE \
(1 << MCI_PHY_CTRL_MCI_PHY_REG_IF_MODE_OFFSET)
/* PHY code InReset=1 */
#define MCI_PHY_CTRL_MCI_PHY_RESET_CORE_OFFSET 26
#define MCI_PHY_CTRL_MCI_PHY_RESET_CORE \
(1 << MCI_PHY_CTRL_MCI_PHY_RESET_CORE_OFFSET)
#define MCI_PHY_CTRL_PHY_ADDR_MSB_OFFSET 27
#define MCI_PHY_CTRL_PHY_ADDR_MSB(addr) \
(((addr) & 0x3) << \
MCI_PHY_CTRL_PHY_ADDR_MSB_OFFSET)
#define MCI_PHY_CTRL_PIDI_MODE_OFFSET 31
#define MCI_PHY_CTRL_PIDI_MODE \
(1 << MCI_PHY_CTRL_PIDI_MODE_OFFSET)
/* Number of times to poll for the MCI link ready after MCI configuration.
 * Normally takes 34-35 successive reads
 */
#define LINK_READY_TIMEOUT 100
enum mci_register_type {
MCI_REG_TYPE_PHY = 0,
MCI_REG_TYPE_CTRL,
};
enum {
MCI_CMD_WRITE,
MCI_CMD_READ
};
/* Write wrapper callback for debug:
* will print written data in case LOG_LEVEL >= 40
*/
static void mci_mmio_write_32(uintptr_t addr, uint32_t value)
{
VERBOSE("Write:\t0x%x = 0x%x\n", (uint32_t)addr, value);
mmio_write_32(addr, value);
}
/* Read wrapper callback for debug:
 * prints the read data when LOG_LEVEL >= 40
 */
static uint32_t mci_mmio_read_32(uintptr_t addr)
{
uint32_t value;
value = mmio_read_32(addr);
VERBOSE("Read:\t0x%x = 0x%x\n", (uint32_t)addr, value);
return value;
}
/* MCI indirect access command completion polling:
 * Each write/read command done via the MCI indirect registers must be
 * polled for command completion status.
 *
 * Returns 1 in case of error.
 * Returns 0 in case the command completed successfully.
 */
static int mci_poll_command_completion(int mci_index, int command_type)
{
uint32_t mci_cmd_value = 0, retry_count = 100, ret = 0;
uint32_t completion_flags = MCI_INDIRECT_CTRL_CMD_DONE;
debug_enter();
/* Read commands require validating that requested data is ready */
if (command_type == MCI_CMD_READ)
completion_flags |= MCI_INDIRECT_CTRL_DATA_READY;
do {
/* wait 1 ms before each polling */
mdelay(1);
mci_cmd_value = mci_mmio_read_32(MCI_ACCESS_CMD_REG(mci_index));
} while (((mci_cmd_value & completion_flags) != completion_flags) &&
(--retry_count > 0));
if (retry_count == 0) {
ERROR("%s: MCI command timeout (command status = 0x%x)\n",
__func__, mci_cmd_value);
ret = 1;
}
debug_exit();
return ret;
}
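The retry-counter idiom in this polling loop is easy to get wrong: with an unsigned post-decrement in the loop condition, the counter wraps past zero on exhaustion, so a subsequent `retry_count == 0` timeout check can never fire, whereas a pre-decrement exits with the counter holding exactly zero. A stand-alone demonstration (plain C, no hardware access; both helpers are hypothetical models, not part of the driver):

```c
#include <assert.h>
#include <stdint.h>

/* Model of a polling loop that always times out (flags never ready),
 * using a post-decrement in the loop condition: on exhaustion the
 * unsigned counter wraps to UINT32_MAX, so "retry_count == 0" misses. */
static uint32_t post_dec_exit_value(uint32_t retries)
{
	uint32_t retry_count = retries;

	do {
		/* poll would happen here */
	} while (retry_count-- > 0);

	return retry_count; /* UINT32_MAX, not 0 */
}

/* Same loop with a pre-decrement: exits holding exactly 0,
 * so the timeout check works as intended. */
static uint32_t pre_dec_exit_value(uint32_t retries)
{
	uint32_t retry_count = retries;

	do {
		/* poll would happen here */
	} while (--retry_count > 0);

	return retry_count; /* exactly 0 on timeout */
}
```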
int mci_read(int mci_idx, uint32_t cmd, uint32_t *value)
{
int rval;
mci_mmio_write_32(MCI_ACCESS_CMD_REG(mci_idx), cmd);
rval = mci_poll_command_completion(mci_idx, MCI_CMD_READ);
*value = mci_mmio_read_32(MCI_WRITE_READ_DATA_REG(mci_idx));
return rval;
}
int mci_write(int mci_idx, uint32_t cmd, uint32_t data)
{
mci_mmio_write_32(MCI_WRITE_READ_DATA_REG(mci_idx), data);
mci_mmio_write_32(MCI_ACCESS_CMD_REG(mci_idx), cmd);
return mci_poll_command_completion(mci_idx, MCI_CMD_WRITE);
}
/* Perform 3 configurations in one command: PCIe mode,
 * queue separation and the cache bit
 */
static int mci_axi_set_pcie_mode(int mci_index)
{
uint32_t reg_data, ret = 1;
debug_enter();
/* This configuration makes the MCI IP behave consistently with the AXI
 * protocol. It should be configured on one side only (e.g. locally at
 * the AP). The IP takes care of performing the same configuration on
 * the other side (e.g. remotely at the CP).
 */
mci_mmio_write_32(MCI_WRITE_READ_DATA_REG(mci_index),
MCI_AXI_ACCESS_PCIE_MODE |
MCI_AXI_ACCESS_CACHE_CHECK |
MCI_AXI_ACCESS_FORCE_POST_WR |
MCI_AXI_ACCESS_DISABLE_CLK_GATING);
mci_mmio_write_32(MCI_ACCESS_CMD_REG(mci_index),
MCI_INDIRECT_REG_CTRL_ADDR(
MCI_AXI_ACCESS_DATA_REG_NUM) |
MCI_INDIRECT_CTRL_HOPID(GID_AXI_HB) |
MCI_INDIRECT_CTRL_LOCAL_PKT |
MCI_INDIRECT_CTRL_CIRCULAR_CMD);
/* if Write command was successful, verify PCIe mode */
if (mci_poll_command_completion(mci_index, MCI_CMD_WRITE) == 0) {
/* Verify the PCIe mode selected */
mci_mmio_write_32(MCI_ACCESS_CMD_REG(mci_index),
MCI_INDIRECT_REG_CTRL_ADDR(
MCI_HB_CTRL_TX_CTRL_REG_NUM) |
MCI_INDIRECT_CTRL_HOPID(GID_AXI_HB) |
MCI_INDIRECT_CTRL_LOCAL_PKT |
MCI_INDIRECT_CTRL_READ_CMD);
/* if read was completed, verify PCIe mode */
if (mci_poll_command_completion(mci_index, MCI_CMD_READ) == 0) {
reg_data = mci_mmio_read_32(
MCI_WRITE_READ_DATA_REG(mci_index));
if (reg_data & MCI_HB_CTRL_TX_CTRL_PCIE_MODE)
ret = 0;
}
}
debug_exit();
return ret;
}
/* Reduce sequence FIFO timer expiration threshold */
static int mci_axi_set_fifo_thresh(int mci_index)
{
uint32_t reg_data, ret = 0;
debug_enter();
/* This configuration reduces the sequence FIFO timer expiration
 * threshold (to 0x7 instead of 0xA).
 * In MCI version 1.6 this configuration prevents possible functional
 * issues.
 * In version 1.82 it prevents performance degradation.
 */
/* Configure local AP side */
reg_data = MCI_PHY_CTRL_PIDI_MODE |
MCI_PHY_CTRL_MCI_PHY_REG_IF_MODE |
MCI_PHY_CTRL_MCI_PHY_MODE_HOST |
MCI_PHY_CTRL_MCI_MAJOR |
MCI_PHY_CTRL_MCI_MINOR;
mci_mmio_write_32(MCI_WRITE_READ_DATA_REG(mci_index), reg_data);
mci_mmio_write_32(MCI_ACCESS_CMD_REG(mci_index),
MCI_INDIRECT_REG_CTRL_ADDR(MCI_PHY_CTRL_REG_NUM) |
MCI_INDIRECT_CTRL_LOCAL_PKT);
ret |= mci_poll_command_completion(mci_index, MCI_CMD_WRITE);
/* Reduce the threshold */
mci_mmio_write_32(MCI_WRITE_READ_DATA_REG(mci_index),
MCI_CTRL_IHB_MODE_CFG_REG_DEF_VAL);
mci_mmio_write_32(MCI_ACCESS_CMD_REG(mci_index),
MCI_INDIRECT_REG_CTRL_ADDR(
MCI_CTRL_IHB_MODE_CFG_REG_NUM) |
MCI_INDIRECT_CTRL_LOCAL_PKT);
ret |= mci_poll_command_completion(mci_index, MCI_CMD_WRITE);
/* Exit PIDI mode */
reg_data = MCI_PHY_CTRL_MCI_PHY_REG_IF_MODE |
MCI_PHY_CTRL_MCI_PHY_MODE_HOST |
MCI_PHY_CTRL_MCI_MAJOR |
MCI_PHY_CTRL_MCI_MINOR;
mci_mmio_write_32(MCI_WRITE_READ_DATA_REG(mci_index), reg_data);
mci_mmio_write_32(MCI_ACCESS_CMD_REG(mci_index),
MCI_INDIRECT_REG_CTRL_ADDR(MCI_PHY_CTRL_REG_NUM) |
MCI_INDIRECT_CTRL_LOCAL_PKT);
ret |= mci_poll_command_completion(mci_index, MCI_CMD_WRITE);
/* Configure remote CP side */
reg_data = MCI_PHY_CTRL_PIDI_MODE |
MCI_PHY_CTRL_MCI_MAJOR |
MCI_PHY_CTRL_MCI_MINOR |
MCI_PHY_CTRL_MCI_PHY_REG_IF_MODE;
mci_mmio_write_32(MCI_WRITE_READ_DATA_REG(mci_index), reg_data);
mci_mmio_write_32(MCI_ACCESS_CMD_REG(mci_index),
MCI_INDIRECT_REG_CTRL_ADDR(MCI_PHY_CTRL_REG_NUM) |
MCI_CTRL_IHB_MODE_FWD_MOD);
ret |= mci_poll_command_completion(mci_index, MCI_CMD_WRITE);
/* Reduce the threshold */
mci_mmio_write_32(MCI_WRITE_READ_DATA_REG(mci_index),
MCI_CTRL_IHB_MODE_CFG_REG_DEF_VAL);
mci_mmio_write_32(MCI_ACCESS_CMD_REG(mci_index),
MCI_INDIRECT_REG_CTRL_ADDR(
MCI_CTRL_IHB_MODE_CFG_REG_NUM) |
MCI_INDIRECT_CTRL_HOPID(GID_IHB_EXT));
ret |= mci_poll_command_completion(mci_index, MCI_CMD_WRITE);
/* Exit PIDI mode */
reg_data = MCI_PHY_CTRL_MCI_MAJOR |
MCI_PHY_CTRL_MCI_MINOR |
MCI_PHY_CTRL_MCI_PHY_REG_IF_MODE;
mci_mmio_write_32(MCI_WRITE_READ_DATA_REG(mci_index), reg_data);
mci_mmio_write_32(MCI_ACCESS_CMD_REG(mci_index),
MCI_INDIRECT_REG_CTRL_ADDR(MCI_PHY_CTRL_REG_NUM) |
MCI_CTRL_IHB_MODE_FWD_MOD);
ret |= mci_poll_command_completion(mci_index, MCI_CMD_WRITE);
debug_exit();
return ret;
}
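Each configuration step above ORs its completion status into `ret`, so every step still runs after an earlier failure while any single error makes the final return value nonzero (best-effort configuration). A minimal stand-alone model of this aggregation pattern (the helper is illustrative, not part of the driver):

```c
#include <assert.h>
#include <stddef.h>

/* OR-accumulate the status of a sequence of steps: all steps execute,
 * and the result is nonzero if any step failed. */
static int run_all_steps(const int *step_status, size_t n)
{
	int ret = 0;
	size_t i;

	for (i = 0; i < n; i++)
		ret |= step_status[i];

	return ret;
}
```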
/* Configure:
* 1. AP & CP TX thresholds and delta configurations
* 2. DLO & DLI FIFO full threshold
* 3. RX thresholds and delta configurations
* 4. CP AR and AW outstanding
* 5. AP AR and AW outstanding
*/
static int mci_axi_set_fifo_rx_tx_thresh(int mci_index)
{
uint32_t ret = 0;
debug_enter();
/* AP TX thresholds and delta configurations (IHB_reg 0x1) */
mci_mmio_write_32(MCI_WRITE_READ_DATA_REG(mci_index),
MCI_CTRL_TX_MEM_CFG_REG_DEF_VAL);
mci_mmio_write_32(MCI_ACCESS_CMD_REG(mci_index),
MCI_INDIRECT_REG_CTRL_ADDR(
MCI_CTRL_TX_MEM_CFG_REG_NUM) |
MCI_INDIRECT_CTRL_LOCAL_PKT);
ret |= mci_poll_command_completion(mci_index, MCI_CMD_WRITE);
/* CP TX thresholds and delta configurations (IHB_reg 0x1) */
mci_mmio_write_32(MCI_WRITE_READ_DATA_REG(mci_index),
MCI_CTRL_TX_MEM_CFG_REG_DEF_VAL);
mci_mmio_write_32(MCI_ACCESS_CMD_REG(mci_index),
MCI_INDIRECT_REG_CTRL_ADDR(
MCI_CTRL_TX_MEM_CFG_REG_NUM) |
MCI_INDIRECT_CTRL_HOPID(GID_IHB_EXT));
ret |= mci_poll_command_completion(mci_index, MCI_CMD_WRITE);
/* AP DLO & DLI FIFO full threshold & Auto-Link enable (IHB_reg 0x8) */
mci_mmio_write_32(MCI_WRITE_READ_DATA_REG(mci_index),
MCI_CTRL_MCI_PHY_SET_REG_DEF_VAL |
MCI_CTRL_MCI_PHY_SET_AUTO_LINK_EN(1));
mci_mmio_write_32(MCI_ACCESS_CMD_REG(mci_index),
MCI_INDIRECT_REG_CTRL_ADDR(
MCI_CTRL_MCI_PHY_SETTINGS_REG_NUM) |
MCI_INDIRECT_CTRL_LOCAL_PKT);
ret |= mci_poll_command_completion(mci_index, MCI_CMD_WRITE);
/* CP DLO & DLI FIFO full threshold (IHB_reg 0x8) */
mci_mmio_write_32(MCI_WRITE_READ_DATA_REG(mci_index),
MCI_CTRL_MCI_PHY_SET_REG_DEF_VAL);
mci_mmio_write_32(MCI_ACCESS_CMD_REG(mci_index),
MCI_INDIRECT_REG_CTRL_ADDR(
MCI_CTRL_MCI_PHY_SETTINGS_REG_NUM) |
MCI_INDIRECT_CTRL_HOPID(GID_IHB_EXT));
ret |= mci_poll_command_completion(mci_index, MCI_CMD_WRITE);
/* AP RX thresholds and delta configurations (IHB_reg 0x0) */
mci_mmio_write_32(MCI_WRITE_READ_DATA_REG(mci_index),
MCI_CTRL_RX_MEM_CFG_REG_DEF_AP_VAL);
mci_mmio_write_32(MCI_ACCESS_CMD_REG(mci_index),
MCI_INDIRECT_REG_CTRL_ADDR(
MCI_CTRL_RX_MEM_CFG_REG_NUM) |
MCI_INDIRECT_CTRL_LOCAL_PKT);
ret |= mci_poll_command_completion(mci_index, MCI_CMD_WRITE);
/* CP RX thresholds and delta configurations (IHB_reg 0x0) */
mci_mmio_write_32(MCI_WRITE_READ_DATA_REG(mci_index),
MCI_CTRL_RX_MEM_CFG_REG_DEF_CP_VAL);
mci_mmio_write_32(MCI_ACCESS_CMD_REG(mci_index),
MCI_INDIRECT_REG_CTRL_ADDR(
MCI_CTRL_RX_MEM_CFG_REG_NUM) |
MCI_INDIRECT_CTRL_HOPID(GID_IHB_EXT));
ret |= mci_poll_command_completion(mci_index, MCI_CMD_WRITE);
/* AP AR & AW maximum AXI outstanding request cfg (HB_reg 0xd) */
mci_mmio_write_32(MCI_WRITE_READ_DATA_REG(mci_index),
MCI_HB_CTRL_TX_CTRL_PRI_TH_QOS(8) |
MCI_HB_CTRL_TX_CTRL_MAX_RD_CNT(3) |
MCI_HB_CTRL_TX_CTRL_MAX_WR_CNT(3));
mci_mmio_write_32(MCI_ACCESS_CMD_REG(mci_index),
MCI_INDIRECT_REG_CTRL_ADDR(
MCI_HB_CTRL_TX_CTRL_REG_NUM) |
MCI_INDIRECT_CTRL_HOPID(GID_AXI_HB) |
MCI_INDIRECT_CTRL_LOCAL_PKT);
ret |= mci_poll_command_completion(mci_index, MCI_CMD_WRITE);
/* CP AR & AW maximum AXI outstanding request cfg (HB_reg 0xd) */
mci_mmio_write_32(MCI_WRITE_READ_DATA_REG(mci_index),
MCI_HB_CTRL_TX_CTRL_PRI_TH_QOS(8) |
MCI_HB_CTRL_TX_CTRL_MAX_RD_CNT(0xB) |
MCI_HB_CTRL_TX_CTRL_MAX_WR_CNT(0x11));
mci_mmio_write_32(MCI_ACCESS_CMD_REG(mci_index),
MCI_INDIRECT_REG_CTRL_ADDR(
MCI_HB_CTRL_TX_CTRL_REG_NUM) |
MCI_INDIRECT_CTRL_HOPID(GID_IHB_EXT) |
MCI_INDIRECT_CTRL_HOPID(GID_AXI_HB));
ret |= mci_poll_command_completion(mci_index, MCI_CMD_WRITE);
debug_exit();
return ret;
}
/* Configure MCI to allow read & write transactions to arrive at the same
 * time. Without the configuration below, MCI won't send a response to the
 * CPU for transactions which arrived simultaneously, leading to a CPU
 * hang. The configuration below enables MCI to pass transactions from/to
 * the CP/AP.
 */
static int mci_enable_simultaneous_transactions(int mci_index)
{
uint32_t ret = 0;
debug_enter();
/* ID assignment (assigning global ID offset to CP) */
mci_mmio_write_32(MCI_WRITE_READ_DATA_REG(0),
MCI_DID_GLOBAL_ASSIGN_REQ_MCI_LOCAL_ID(2) |
MCI_DID_GLOBAL_ASSIGN_REQ_MCI_COUNT(2) |
MCI_DID_GLOBAL_ASSIGN_REQ_HOPS_NUM(2));
mci_mmio_write_32(MCI_ACCESS_CMD_REG(0),
MCI_INDIRECT_REG_CTRL_ADDR(
MCI_DID_GLOBAL_ASSIGNMENT_REQUEST_REG) |
MCI_INDIRECT_CTRL_ASSIGN_CMD);
ret |= mci_poll_command_completion(mci_index, MCI_CMD_WRITE);
/* Assigning dest. ID=3 to all transactions entering from AXI at AP */
mci_mmio_write_32(MCI_WRITE_READ_DATA_REG(0),
MCI_HB_CTRL_WIN0_DEST_VALID_FLAG(1) |
MCI_HB_CTRL_WIN0_DEST_ID(3));
mci_mmio_write_32(MCI_ACCESS_CMD_REG(0),
MCI_INDIRECT_REG_CTRL_ADDR(
MCI_HB_CTRL_WIN0_DESTINATION_REG_NUM) |
MCI_INDIRECT_CTRL_HOPID(GID_AXI_HB) |
MCI_INDIRECT_CTRL_LOCAL_PKT);
ret |= mci_poll_command_completion(mci_index, MCI_CMD_WRITE);
/* Assigning dest. ID=1 to all transactions entering from AXI at CP */
mci_mmio_write_32(MCI_WRITE_READ_DATA_REG(0),
MCI_HB_CTRL_WIN0_DEST_VALID_FLAG(1) |
MCI_HB_CTRL_WIN0_DEST_ID(1));
mci_mmio_write_32(MCI_ACCESS_CMD_REG(0),
MCI_INDIRECT_REG_CTRL_ADDR(
MCI_HB_CTRL_WIN0_DESTINATION_REG_NUM) |
MCI_INDIRECT_CTRL_HOPID(GID_IHB_EXT) |
MCI_INDIRECT_CTRL_HOPID(GID_AXI_HB));
ret |= mci_poll_command_completion(mci_index, MCI_CMD_WRITE);
/* Set the end address for all transactions entering from AXI at the AP.
 * This matches any AXI address, so every transaction receives
 * destination ID=3.
 */
mci_mmio_write_32(MCI_WRITE_READ_DATA_REG(0), 0xffffffff);
mci_mmio_write_32(MCI_ACCESS_CMD_REG(0),
MCI_INDIRECT_REG_CTRL_ADDR(
MCI_HB_CTRL_WIN0_ADDRESS_MASK_REG_NUM) |
MCI_INDIRECT_CTRL_HOPID(GID_AXI_HB) |
MCI_INDIRECT_CTRL_LOCAL_PKT);
ret |= mci_poll_command_completion(mci_index, MCI_CMD_WRITE);
/* Set the end address for all transactions entering from AXI at the CP.
 * This matches any AXI address, so every transaction receives
 * destination ID=1.
 */
mci_mmio_write_32(MCI_WRITE_READ_DATA_REG(0), 0xffffffff);
mci_mmio_write_32(MCI_ACCESS_CMD_REG(0),
MCI_INDIRECT_REG_CTRL_ADDR(
MCI_HB_CTRL_WIN0_ADDRESS_MASK_REG_NUM) |
MCI_INDIRECT_CTRL_HOPID(GID_IHB_EXT) |
MCI_INDIRECT_CTRL_HOPID(GID_AXI_HB));
ret |= mci_poll_command_completion(mci_index, MCI_CMD_WRITE);
debug_exit();
return ret;
}
/* Check if MCI simultaneous transactions were already enabled.
 * Currently the bootrom performs this MCI configuration only when the
 * boot source is SAR_MCIX4; in other cases it should be done at this
 * stage. Note that when booting from UART, the bootrom flow is different
 * and this MCI initialization is skipped even if the boot source is
 * SAR_MCIX4. Therefore the verification is based on the relevant MCI
 * register content: if the register contains 0x0, the bootrom did not
 * perform the required MCI configuration.
 *
 * Returns:
 * 0 - configuration already done
 * 1 - configuration missing
 */
static _Bool mci_simulatenous_trans_missing(int mci_index)
{
uint32_t reg, ret;
/* read 'Window 0 Destination ID assignment' from HB register 0x3
* (TX_CFG_W0_DST_ID) to check whether ID assignment was already
* performed by BootROM.
*/
debug_enter();
mci_mmio_write_32(MCI_ACCESS_CMD_REG(0),
MCI_INDIRECT_REG_CTRL_ADDR(
MCI_HB_CTRL_WIN0_DESTINATION_REG_NUM) |
MCI_INDIRECT_CTRL_HOPID(GID_AXI_HB) |
MCI_INDIRECT_CTRL_LOCAL_PKT |
MCI_INDIRECT_CTRL_READ_CMD);
ret = mci_poll_command_completion(mci_index, MCI_CMD_READ);
reg = mci_mmio_read_32(MCI_WRITE_READ_DATA_REG(mci_index));
if (ret)
ERROR("Failed to verify MCI simultaneous read/write status\n");
debug_exit();
/* The default ID assignment is 0, so if the register doesn't contain
 * zeros, the bootrom already performed the required configuration.
 */
return (reg == 0);
}
/* For the A1 revision, configure the MCI link for performance improvement:
 * - Set MCI to support read/write transactions arriving at the same time
 * - Switch AXI to PCIe mode
 * - Reduce the sequence FIFO threshold
 * - Configure RX/TX FIFO thresholds
 *
 * Note:
 * We don't exit on an error code from any subroutine, trying (best
 * effort) to complete the MCI configuration.
 * (If we exited, the bootloader would surely fail to boot.)
 */
int mci_configure(int mci_index)
{
int rval;
debug_enter();
/* According to design guidelines, the MCI simultaneous transaction
 * support shouldn't be enabled more than once - therefore make sure
 * that it wasn't already enabled in the bootrom.
 */
if (mci_simulatenous_trans_missing(mci_index)) {
VERBOSE("Enabling MCI simultaneous transaction\n");
/* set MCI to support read/write transactions
* to arrive at the same time
*/
rval = mci_enable_simultaneous_transactions(mci_index);
if (rval)
ERROR("Failed to set MCI simultaneous read/write\n");
} else
VERBOSE("Skip MCI ID assignment - already done by bootrom\n");
/* Configure MCI for more consistent behavior with AXI protocol */
rval = mci_axi_set_pcie_mode(mci_index);
if (rval)
ERROR("Failed to set MCI to AXI PCIe mode\n");
/* reduce FIFO global threshold */
rval = mci_axi_set_fifo_thresh(mci_index);
if (rval)
ERROR("Failed to set MCI FIFO global threshold\n");
/* configure RX/TX FIFO thresholds */
rval = mci_axi_set_fifo_rx_tx_thresh(mci_index);
if (rval)
ERROR("Failed to set MCI RX/TX FIFO threshold\n");
debug_exit();
return 1;
}
int mci_get_link_status(void)
{
uint32_t cmd, data;
cmd = (MCI_INDIRECT_REG_CTRL_ADDR(MCI_CTRL_STATUS_REG_NUM) |
MCI_INDIRECT_CTRL_LOCAL_PKT | MCI_INDIRECT_CTRL_READ_CMD);
if (mci_read(0, cmd, &data)) {
ERROR("Failed to read status register\n");
return -1;
}
/* Check if the link is ready */
if (data != MCI_CTRL_PHY_READY) {
ERROR("Bad link status %x\n", data);
return -1;
}
return 0;
}
void mci_turn_link_down(void)
{
uint32_t cmd, data;
int rval = 0;
debug_enter();
/* Turn off auto-link */
cmd = (MCI_INDIRECT_REG_CTRL_ADDR(MCI_CTRL_MCI_PHY_SETTINGS_REG_NUM) |
MCI_INDIRECT_CTRL_LOCAL_PKT);
data = (MCI_CTRL_MCI_PHY_SET_REG_DEF_VAL2 |
MCI_CTRL_MCI_PHY_SET_AUTO_LINK_EN(0));
rval = mci_write(0, cmd, data);
if (rval)
ERROR("Failed to turn off auto-link\n");
/* Reset AP PHY */
cmd = (MCI_INDIRECT_REG_CTRL_ADDR(MCI_PHY_CTRL_REG_NUM) |
MCI_INDIRECT_CTRL_LOCAL_PKT);
data = (MCI_PHY_CTRL_MCI_MINOR |
MCI_PHY_CTRL_MCI_MAJOR |
MCI_PHY_CTRL_MCI_PHY_MODE_HOST |
MCI_PHY_CTRL_MCI_PHY_RESET_CORE);
rval = mci_write(0, cmd, data);
if (rval)
ERROR("Failed to reset AP PHY\n");
/* Clear all status & CRC values */
cmd = (MCI_INDIRECT_REG_CTRL_ADDR(MCI_LINK_CRC_CTRL_REG_NUM) |
MCI_INDIRECT_CTRL_LOCAL_PKT);
data = 0x0;
mci_write(0, cmd, data);
cmd = (MCI_INDIRECT_REG_CTRL_ADDR(MCI_CTRL_STATUS_REG_NUM) |
MCI_INDIRECT_CTRL_LOCAL_PKT);
data = 0x0;
rval = mci_write(0, cmd, data);
if (rval)
ERROR("Failed to clear MCI link status\n");
/* Wait 5ms before un-reset the PHY */
mdelay(5);
/* Un-reset AP PHY */
cmd = (MCI_INDIRECT_REG_CTRL_ADDR(MCI_PHY_CTRL_REG_NUM) |
MCI_INDIRECT_CTRL_LOCAL_PKT);
data = (MCI_PHY_CTRL_MCI_MINOR | MCI_PHY_CTRL_MCI_MAJOR |
MCI_PHY_CTRL_MCI_PHY_MODE_HOST);
rval = mci_write(0, cmd, data);
if (rval)
ERROR("Failed to un-reset AP PHY\n");
debug_exit();
}
void mci_turn_link_on(void)
{
uint32_t cmd, data;
int rval = 0;
debug_enter();
/* Turn on auto-link */
cmd = (MCI_INDIRECT_REG_CTRL_ADDR(MCI_CTRL_MCI_PHY_SETTINGS_REG_NUM) |
MCI_INDIRECT_CTRL_LOCAL_PKT);
data = (MCI_CTRL_MCI_PHY_SET_REG_DEF_VAL2 |
MCI_CTRL_MCI_PHY_SET_AUTO_LINK_EN(1));
rval = mci_write(0, cmd, data);
if (rval)
ERROR("Failed to turn on auto-link\n");
debug_exit();
}
/* Initialize MCI for performance improvements */
int mci_initialize(int mci_index)
{
int ret;
debug_enter();
INFO("MCI%d initialization:\n", mci_index);
ret = mci_configure(mci_index);
debug_exit();
return ret;
}


@ -0,0 +1,237 @@
/*
* Copyright (C) 2018 Marvell International Ltd.
*
* SPDX-License-Identifier: BSD-3-Clause
* https://spdx.org/licenses
*/
/* AP807 Marvell SoC driver */
#include <ap_setup.h>
#include <cache_llc.h>
#include <ccu.h>
#include <debug.h>
#include <io_win.h>
#include <mci.h>
#include <mmio.h>
#include <mvebu_def.h>
#define SMMU_sACR (MVEBU_SMMU_BASE + 0x10)
#define SMMU_sACR_PG_64K (1 << 16)
#define CCU_GSPMU_CR (MVEBU_CCU_BASE(MVEBU_AP0) \
+ 0x3F0)
#define GSPMU_CPU_CONTROL (0x1 << 0)
#define CCU_HTC_CR (MVEBU_CCU_BASE(MVEBU_AP0) \
+ 0x200)
#define CCU_SET_POC_OFFSET 5
#define DSS_CR0 (MVEBU_RFU_BASE + 0x100)
#define DVM_48BIT_VA_ENABLE (1 << 21)
/* Secure MoChi incoming access */
#define SEC_MOCHI_IN_ACC_REG (MVEBU_RFU_BASE + 0x4738)
#define SEC_MOCHI_IN_ACC_IHB0_EN (1)
#define SEC_MOCHI_IN_ACC_IHB1_EN (1 << 3)
#define SEC_MOCHI_IN_ACC_IHB2_EN (1 << 6)
#define SEC_MOCHI_IN_ACC_PIDI_EN (1 << 9)
#define SEC_IN_ACCESS_ENA_ALL_MASTERS (SEC_MOCHI_IN_ACC_IHB0_EN | \
SEC_MOCHI_IN_ACC_IHB1_EN | \
SEC_MOCHI_IN_ACC_IHB2_EN | \
SEC_MOCHI_IN_ACC_PIDI_EN)
/* SYSRST_OUTn Config definitions */
#define MVEBU_SYSRST_OUT_CONFIG_REG (MVEBU_MISC_SOC_BASE + 0x4)
#define WD_MASK_SYS_RST_OUT (1 << 2)
/* DSS PHY for DRAM */
#define DSS_SCR_REG (MVEBU_RFU_BASE + 0x208)
#define DSS_PPROT_OFFS 4
#define DSS_PPROT_MASK 0x7
#define DSS_PPROT_PRIV_SECURE_DATA 0x1
/* Used for units of AP-807 (e.g. SDIO, etc.) */
#define MVEBU_AXI_ATTR_BASE (MVEBU_REGS_BASE + 0x6F4580)
#define MVEBU_AXI_ATTR_REG(index) (MVEBU_AXI_ATTR_BASE + \
0x4 * (index))
enum axi_attr {
AXI_SDIO_ATTR = 0,
AXI_DFX_ATTR,
AXI_MAX_ATTR,
};
static void ap_sec_masters_access_en(uint32_t enable)
{
uint32_t reg;
/* Open/close incoming access for all masters.
 * The access is disabled in trusted boot mode.
 * Can only be done in EL3.
 */
reg = mmio_read_32(SEC_MOCHI_IN_ACC_REG);
if (enable)
mmio_write_32(SEC_MOCHI_IN_ACC_REG, reg |
SEC_IN_ACCESS_ENA_ALL_MASTERS);
else
mmio_write_32(SEC_MOCHI_IN_ACC_REG,
reg & ~SEC_IN_ACCESS_ENA_ALL_MASTERS);
}
static void setup_smmu(void)
{
uint32_t reg;
/* Set the SMMU page size to 64 KB */
reg = mmio_read_32(SMMU_sACR);
reg |= SMMU_sACR_PG_64K;
mmio_write_32(SMMU_sACR, reg);
}
static void init_aurora2(void)
{
uint32_t reg;
/* Enable GSPMU control by CPU */
reg = mmio_read_32(CCU_GSPMU_CR);
reg |= GSPMU_CPU_CONTROL;
mmio_write_32(CCU_GSPMU_CR, reg);
#if LLC_ENABLE
/* Enable LLC for AP807 in exclusive mode */
llc_enable(0, 1);
/* Set point of coherency to DDR.
* This is required by units which have
* SW cache coherency
*/
reg = mmio_read_32(CCU_HTC_CR);
reg |= (0x1 << CCU_SET_POC_OFFSET);
mmio_write_32(CCU_HTC_CR, reg);
#endif /* LLC_ENABLE */
}
/* The MCIx indirect access registers are based by default at
 * 0xf4000000/0xf6000000. To avoid conflicts with internal registers of
 * units connected via MCIx, which can be based at the same address
 * (i.e. CP1 base is also 0xf4000000), the following routine remaps the
 * MCIx indirect bases to another address domain.
 */
static void mci_remap_indirect_access_base(void)
{
uint32_t mci;
for (mci = 0; mci < MCI_MAX_UNIT_ID; mci++)
mmio_write_32(MCIX4_REG_START_ADDRESS_REG(mci),
MVEBU_MCI_REG_BASE_REMAP(mci) >>
MCI_REMAP_OFF_SHIFT);
}
static void ap807_axi_attr_init(void)
{
uint32_t index, data;
/* Initialize AXI attributes for AP807 */
/* Go over the AXI attributes and set Ax-Cache and Ax-Domain */
for (index = 0; index < AXI_MAX_ATTR; index++) {
switch (index) {
/* DFX works in non-coherent mode only -
 * there's no option to configure the Ax-Cache and Ax-Domain
 */
case AXI_DFX_ATTR:
continue;
default:
/* Set Ax-Cache as cacheable, no allocate, modifiable,
* bufferable.
* The values are different because Read & Write
* definition is different in Ax-Cache
*/
data = mmio_read_32(MVEBU_AXI_ATTR_REG(index));
data &= ~MVEBU_AXI_ATTR_ARCACHE_MASK;
data |= (CACHE_ATTR_WRITE_ALLOC |
CACHE_ATTR_CACHEABLE |
CACHE_ATTR_BUFFERABLE) <<
MVEBU_AXI_ATTR_ARCACHE_OFFSET;
data &= ~MVEBU_AXI_ATTR_AWCACHE_MASK;
data |= (CACHE_ATTR_READ_ALLOC |
CACHE_ATTR_CACHEABLE |
CACHE_ATTR_BUFFERABLE) <<
MVEBU_AXI_ATTR_AWCACHE_OFFSET;
/* Set Ax-Domain as Outer domain */
data &= ~MVEBU_AXI_ATTR_ARDOMAIN_MASK;
data |= DOMAIN_OUTER_SHAREABLE <<
MVEBU_AXI_ATTR_ARDOMAIN_OFFSET;
data &= ~MVEBU_AXI_ATTR_AWDOMAIN_MASK;
data |= DOMAIN_OUTER_SHAREABLE <<
MVEBU_AXI_ATTR_AWDOMAIN_OFFSET;
mmio_write_32(MVEBU_AXI_ATTR_REG(index), data);
}
}
}
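The Ax-Cache/Ax-Domain updates above all follow the same read-modify-write shape: clear the field with its mask, then OR in the new value at the field offset. A generic stand-alone sketch of that idiom (mask and offset values in the test are illustrative, not the real MVEBU_AXI_ATTR_* definitions):

```c
#include <assert.h>
#include <stdint.h>

/* Replace the field selected by mask/offset in reg with value,
 * leaving all other bits untouched. */
static uint32_t reg_field_set(uint32_t reg, uint32_t mask,
			      unsigned int offset, uint32_t value)
{
	reg &= ~mask;
	reg |= (value << offset) & mask;
	return reg;
}
```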
static void misc_soc_configurations(void)
{
uint32_t reg;
/* Enable 48-bit VA */
mmio_setbits_32(DSS_CR0, DVM_48BIT_VA_ENABLE);
/* Un-mask Watchdog reset from influencing the SYSRST_OUTn.
* Otherwise, upon WD timeout, the WD reset signal won't trigger reset
*/
reg = mmio_read_32(MVEBU_SYSRST_OUT_CONFIG_REG);
reg &= ~(WD_MASK_SYS_RST_OUT);
mmio_write_32(MVEBU_SYSRST_OUT_CONFIG_REG, reg);
}
void ap_init(void)
{
/* Setup Aurora2. */
init_aurora2();
/* configure MCI mapping */
mci_remap_indirect_access_base();
/* configure IO_WIN windows */
init_io_win(MVEBU_AP0);
/* configure CCU windows */
init_ccu(MVEBU_AP0);
/* configure the SMMU */
setup_smmu();
/* Open AP incoming access for all masters */
ap_sec_masters_access_en(1);
/* configure axi for AP */
ap807_axi_attr_init();
/* misc configuration of the SoC */
misc_soc_configurations();
}
static void ap807_dram_phy_access_config(void)
{
uint32_t reg_val;
/* Update DSS port access permission to DSS_PHY */
reg_val = mmio_read_32(DSS_SCR_REG);
reg_val &= ~(DSS_PPROT_MASK << DSS_PPROT_OFFS);
reg_val |= ((DSS_PPROT_PRIV_SECURE_DATA & DSS_PPROT_MASK) <<
DSS_PPROT_OFFS);
mmio_write_32(DSS_SCR_REG, reg_val);
}
void ap_ble_init(void)
{
/* Enable DSS port */
ap807_dram_phy_access_config();
}
int ap_get_count(void)
{
return 1;
}


@ -0,0 +1,251 @@
/*
* Copyright (C) 2018 Marvell International Ltd.
*
* SPDX-License-Identifier: BSD-3-Clause
* https://spdx.org/licenses
*/
/* AP806 Marvell SoC driver */
#include <ap_setup.h>
#include <ccu.h>
#include <cache_llc.h>
#include <debug.h>
#include <io_win.h>
#include <mci.h>
#include <mmio.h>
#include <mvebu_def.h>
#define SMMU_sACR (MVEBU_SMMU_BASE + 0x10)
#define SMMU_sACR_PG_64K (1 << 16)
#define CCU_GSPMU_CR (MVEBU_CCU_BASE(MVEBU_AP0) + \
0x3F0)
#define GSPMU_CPU_CONTROL (0x1 << 0)
#define CCU_HTC_CR (MVEBU_CCU_BASE(MVEBU_AP0) + \
0x200)
#define CCU_SET_POC_OFFSET 5
#define CCU_RGF(win) (MVEBU_CCU_BASE(MVEBU_AP0) + \
0x90 + 4 * (win))
#define DSS_CR0 (MVEBU_RFU_BASE + 0x100)
#define DVM_48BIT_VA_ENABLE (1 << 21)
/* Secure MoChi incoming access */
#define SEC_MOCHI_IN_ACC_REG (MVEBU_RFU_BASE + 0x4738)
#define SEC_MOCHI_IN_ACC_IHB0_EN (1)
#define SEC_MOCHI_IN_ACC_IHB1_EN (1 << 3)
#define SEC_MOCHI_IN_ACC_IHB2_EN (1 << 6)
#define SEC_MOCHI_IN_ACC_PIDI_EN (1 << 9)
#define SEC_IN_ACCESS_ENA_ALL_MASTERS (SEC_MOCHI_IN_ACC_IHB0_EN | \
SEC_MOCHI_IN_ACC_IHB1_EN | \
SEC_MOCHI_IN_ACC_IHB2_EN | \
SEC_MOCHI_IN_ACC_PIDI_EN)
/* SYSRST_OUTn Config definitions */
#define MVEBU_SYSRST_OUT_CONFIG_REG (MVEBU_MISC_SOC_BASE + 0x4)
#define WD_MASK_SYS_RST_OUT (1 << 2)
/* Generic Timer System Controller */
#define MVEBU_MSS_GTCR_REG (MVEBU_REGS_BASE + 0x581000)
#define MVEBU_MSS_GTCR_ENABLE_BIT 0x1
/*
* AXI Configuration.
*/
/* Used for units of AP-806 (e.g. SDIO, etc.) */
#define MVEBU_AXI_ATTR_BASE (MVEBU_REGS_BASE + 0x6F4580)
#define MVEBU_AXI_ATTR_REG(index) (MVEBU_AXI_ATTR_BASE + \
0x4 * (index))
enum axi_attr {
AXI_SDIO_ATTR = 0,
AXI_DFX_ATTR,
AXI_MAX_ATTR,
};
static void apn_sec_masters_access_en(uint32_t enable)
{
uint32_t reg;
/* Open/close incoming access for all masters.
 * The access is disabled in trusted boot mode.
 * Can only be done in EL3.
 */
reg = mmio_read_32(SEC_MOCHI_IN_ACC_REG);
if (enable)
mmio_write_32(SEC_MOCHI_IN_ACC_REG, reg |
SEC_IN_ACCESS_ENA_ALL_MASTERS);
else
mmio_write_32(SEC_MOCHI_IN_ACC_REG, reg &
~SEC_IN_ACCESS_ENA_ALL_MASTERS);
}
static void setup_smmu(void)
{
uint32_t reg;
/* Set the SMMU page size to 64 KB */
reg = mmio_read_32(SMMU_sACR);
reg |= SMMU_sACR_PG_64K;
mmio_write_32(SMMU_sACR, reg);
}
static void apn806_errata_wa_init(void)
{
/*
* ERRATA ID: RES-3033912 - Internal Address Space Init state causes
* a hang upon accesses to [0xf070_0000, 0xf07f_ffff]
* Workaround: Boot Firmware (ATF) should configure CCU_RGF_WIN(4) to
* split [0x6e_0000, 0xff_ffff] to values [0x6e_0000, 0x6f_ffff] and
* [0x80_0000, 0xff_ffff] that cause accesses to the
* segment of [0xf070_0000, 0xf07f_ffff] to act as RAZWI.
*/
mmio_write_32(CCU_RGF(4), 0x37f9b809);
mmio_write_32(CCU_RGF(5), 0x7ffa0009);
}
static void init_aurora2(void)
{
uint32_t reg;
/* Enable GSPMU control by CPU */
reg = mmio_read_32(CCU_GSPMU_CR);
reg |= GSPMU_CPU_CONTROL;
mmio_write_32(CCU_GSPMU_CR, reg);
#if LLC_ENABLE
/* Enable LLC for AP806 in exclusive mode */
llc_enable(0, 1);
/* Set point of coherency to DDR.
* This is required by units which have
* SW cache coherency
*/
reg = mmio_read_32(CCU_HTC_CR);
reg |= (0x1 << CCU_SET_POC_OFFSET);
mmio_write_32(CCU_HTC_CR, reg);
#endif /* LLC_ENABLE */
apn806_errata_wa_init();
}
/* The MCIx indirect access registers are based by default at
 * 0xf4000000/0xf6000000. To avoid conflicts with internal registers of
 * units connected via MCIx, which can be based at the same address
 * (i.e. CP1 base is also 0xf4000000), the following routine remaps the
 * MCIx indirect bases to another address domain.
 */
static void mci_remap_indirect_access_base(void)
{
uint32_t mci;
for (mci = 0; mci < MCI_MAX_UNIT_ID; mci++)
mmio_write_32(MCIX4_REG_START_ADDRESS_REG(mci),
MVEBU_MCI_REG_BASE_REMAP(mci) >>
MCI_REMAP_OFF_SHIFT);
}
static void apn806_axi_attr_init(void)
{
uint32_t index, data;
/* Initialize AXI attributes for APN806 */
/* Go over the AXI attributes and set Ax-Cache and Ax-Domain */
for (index = 0; index < AXI_MAX_ATTR; index++) {
switch (index) {
/* DFX works in non-coherent mode only -
 * there's no option to configure the Ax-Cache and Ax-Domain
 */
case AXI_DFX_ATTR:
continue;
default:
/* Set Ax-Cache as cacheable, no allocate, modifiable,
* bufferable
* The values are different because Read & Write
* definition is different in Ax-Cache
*/
data = mmio_read_32(MVEBU_AXI_ATTR_REG(index));
data &= ~MVEBU_AXI_ATTR_ARCACHE_MASK;
data |= (CACHE_ATTR_WRITE_ALLOC |
CACHE_ATTR_CACHEABLE |
CACHE_ATTR_BUFFERABLE) <<
MVEBU_AXI_ATTR_ARCACHE_OFFSET;
data &= ~MVEBU_AXI_ATTR_AWCACHE_MASK;
data |= (CACHE_ATTR_READ_ALLOC |
CACHE_ATTR_CACHEABLE |
CACHE_ATTR_BUFFERABLE) <<
MVEBU_AXI_ATTR_AWCACHE_OFFSET;
/* Set Ax-Domain as Outer domain */
data &= ~MVEBU_AXI_ATTR_ARDOMAIN_MASK;
data |= DOMAIN_OUTER_SHAREABLE <<
MVEBU_AXI_ATTR_ARDOMAIN_OFFSET;
data &= ~MVEBU_AXI_ATTR_AWDOMAIN_MASK;
data |= DOMAIN_OUTER_SHAREABLE <<
MVEBU_AXI_ATTR_AWDOMAIN_OFFSET;
mmio_write_32(MVEBU_AXI_ATTR_REG(index), data);
}
}
}
static void dss_setup(void)
{
/* Enable 48-bit VA */
mmio_setbits_32(DSS_CR0, DVM_48BIT_VA_ENABLE);
}
static void misc_soc_configurations(void)
{
uint32_t reg;
/* Un-mask Watchdog reset from influencing the SYSRST_OUTn.
* Otherwise, upon WD timeout, the WD reset signal won't trigger reset
*/
reg = mmio_read_32(MVEBU_SYSRST_OUT_CONFIG_REG);
reg &= ~(WD_MASK_SYS_RST_OUT);
mmio_write_32(MVEBU_SYSRST_OUT_CONFIG_REG, reg);
}
void ap_init(void)
{
/* Setup Aurora2. */
init_aurora2();
/* configure MCI mapping */
mci_remap_indirect_access_base();
/* configure IO_WIN windows */
init_io_win(MVEBU_AP0);
/* configure CCU windows */
init_ccu(MVEBU_AP0);
/* configure DSS */
dss_setup();
/* configure the SMMU */
setup_smmu();
/* Open APN incoming access for all masters */
apn_sec_masters_access_en(1);
/* configure axi for APN*/
apn806_axi_attr_init();
/* misc configuration of the SoC */
misc_soc_configurations();
}
void ap_ble_init(void)
{
}
int ap_get_count(void)
{
return 1;
}


@ -0,0 +1,429 @@
/*
* Copyright (C) 2018 Marvell International Ltd.
*
* SPDX-License-Identifier: BSD-3-Clause
* https://spdx.org/licenses
*/
/* CP110 Marvell SoC driver */
#include <amb_adec.h>
#include <cp110_setup.h>
#include <debug.h>
#include <delay_timer.h>
#include <iob.h>
#include <plat_marvell.h>
/*
* AXI Configuration.
*/
/* Used for units of CP-110 (e.g. USB device, USB host, etc.) */
#define MVEBU_AXI_ATTR_OFFSET (0x441300)
#define MVEBU_AXI_ATTR_REG(index) (MVEBU_AXI_ATTR_OFFSET + \
0x4 * (index))
/* AXI Protection bits */
#define MVEBU_AXI_PROT_OFFSET (0x441200)
/* AXI Protection regs */
#define MVEBU_AXI_PROT_REG(index) (((index) <= 4) ? \
(MVEBU_AXI_PROT_OFFSET + \
0x4 * (index)) : \
(MVEBU_AXI_PROT_OFFSET + 0x18))
#define MVEBU_AXI_PROT_REGS_NUM (6)
#define MVEBU_SOC_CFGS_OFFSET (0x441900)
#define MVEBU_SOC_CFG_REG(index) (MVEBU_SOC_CFGS_OFFSET + \
0x4 * index)
#define MVEBU_SOC_CFG_REG_NUM (0)
#define MVEBU_SOC_CFG_GLOG_SECURE_EN_MASK (0xE)
/* SATA3 MBUS to AXI regs */
#define MVEBU_BRIDGE_WIN_DIS_REG (MVEBU_SOC_CFGS_OFFSET + 0x10)
#define MVEBU_BRIDGE_WIN_DIS_OFF (0x0)
/* SATA3 MBUS to AXI regs */
#define MVEBU_SATA_M2A_AXI_PORT_CTRL_REG (0x54ff04)
/* AXI to MBUS bridge registers */
#define MVEBU_AMB_IP_OFFSET (0x13ff00)
#define MVEBU_AMB_IP_BRIDGE_WIN_REG(win) (MVEBU_AMB_IP_OFFSET + \
((win) * 0x8))
#define MVEBU_AMB_IP_BRIDGE_WIN_EN_OFFSET 0
#define MVEBU_AMB_IP_BRIDGE_WIN_EN_MASK \
(0x1 << MVEBU_AMB_IP_BRIDGE_WIN_EN_OFFSET)
#define MVEBU_AMB_IP_BRIDGE_WIN_SIZE_OFFSET 16
#define MVEBU_AMB_IP_BRIDGE_WIN_SIZE_MASK \
(0xffff << MVEBU_AMB_IP_BRIDGE_WIN_SIZE_OFFSET)
#define MVEBU_SAMPLE_AT_RESET_REG (0x440600)
#define SAR_PCIE1_CLK_CFG_OFFSET 31
#define SAR_PCIE1_CLK_CFG_MASK (0x1 << SAR_PCIE1_CLK_CFG_OFFSET)
#define SAR_PCIE0_CLK_CFG_OFFSET 30
#define SAR_PCIE0_CLK_CFG_MASK (0x1 << SAR_PCIE0_CLK_CFG_OFFSET)
#define SAR_I2C_INIT_EN_OFFSET 24
#define SAR_I2C_INIT_EN_MASK (1 << SAR_I2C_INIT_EN_OFFSET)
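Sample-at-reset fields like those above are read out by masking and shifting the SAR register; a small self-contained sketch using the offset/mask values defined here (the raw SAR value in the test is made up for illustration):

```c
#include <assert.h>
#include <stdint.h>

#define SAR_PCIE0_CLK_CFG_OFFSET	30
#define SAR_PCIE0_CLK_CFG_MASK		(0x1u << SAR_PCIE0_CLK_CFG_OFFSET)
#define SAR_I2C_INIT_EN_OFFSET		24
#define SAR_I2C_INIT_EN_MASK		(0x1u << SAR_I2C_INIT_EN_OFFSET)

/* Extract a single sample-at-reset field from a raw SAR value. */
static uint32_t sar_field(uint32_t sar, uint32_t mask, unsigned int offset)
{
	return (sar & mask) >> offset;
}
```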
/*******************************************************************************
* PCIE clock buffer control
******************************************************************************/
#define MVEBU_PCIE_REF_CLK_BUF_CTRL (0x4404F0)
#define PCIE1_REFCLK_BUFF_SOURCE 0x800
#define PCIE0_REFCLK_BUFF_SOURCE 0x400
/*******************************************************************************
* MSS Device Push Set Register
******************************************************************************/
#define MVEBU_CP_MSS_DPSHSR_REG (0x280040)
#define MSS_DPSHSR_REG_PCIE_CLK_SEL 0x8
/*******************************************************************************
* RTC Configuration
******************************************************************************/
#define MVEBU_RTC_BASE (0x284000)
#define MVEBU_RTC_STATUS_REG (MVEBU_RTC_BASE + 0x0)
#define MVEBU_RTC_STATUS_ALARM1_MASK 0x1
#define MVEBU_RTC_STATUS_ALARM2_MASK 0x2
#define MVEBU_RTC_IRQ_1_CONFIG_REG (MVEBU_RTC_BASE + 0x4)
#define MVEBU_RTC_IRQ_2_CONFIG_REG (MVEBU_RTC_BASE + 0x8)
#define MVEBU_RTC_TIME_REG (MVEBU_RTC_BASE + 0xC)
#define MVEBU_RTC_ALARM_1_REG (MVEBU_RTC_BASE + 0x10)
#define MVEBU_RTC_ALARM_2_REG (MVEBU_RTC_BASE + 0x14)
#define MVEBU_RTC_CCR_REG (MVEBU_RTC_BASE + 0x18)
#define MVEBU_RTC_NOMINAL_TIMING 0x2000
#define MVEBU_RTC_NOMINAL_TIMING_MASK 0x7FFF
#define MVEBU_RTC_TEST_CONFIG_REG (MVEBU_RTC_BASE + 0x1C)
#define MVEBU_RTC_BRIDGE_TIMING_CTRL0_REG (MVEBU_RTC_BASE + 0x80)
#define MVEBU_RTC_WRCLK_PERIOD_MASK 0xFFFF
#define MVEBU_RTC_WRCLK_PERIOD_DEFAULT 0x3FF
#define MVEBU_RTC_WRCLK_SETUP_OFFS 16
#define MVEBU_RTC_WRCLK_SETUP_MASK 0xFFFF0000
#define MVEBU_RTC_WRCLK_SETUP_DEFAULT 0x29
#define MVEBU_RTC_BRIDGE_TIMING_CTRL1_REG (MVEBU_RTC_BASE + 0x84)
#define MVEBU_RTC_READ_OUTPUT_DELAY_MASK 0xFFFF
#define MVEBU_RTC_READ_OUTPUT_DELAY_DEFAULT 0x1F
enum axi_attr {
AXI_ADUNIT_ATTR = 0,
AXI_COMUNIT_ATTR,
AXI_EIP197_ATTR,
AXI_USB3D_ATTR,
AXI_USB3H0_ATTR,
AXI_USB3H1_ATTR,
AXI_SATA0_ATTR,
AXI_SATA1_ATTR,
AXI_DAP_ATTR,
AXI_DFX_ATTR,
AXI_DBG_TRC_ATTR = 12,
AXI_SDIO_ATTR,
AXI_MSS_ATTR,
AXI_MAX_ATTR,
};
/* Most stream IDs are configured centrally in the CP-110 RFU
* but some are configured inside the unit registers
*/
#define RFU_STREAM_ID_BASE (0x450000)
#define USB3H_0_STREAM_ID_REG (RFU_STREAM_ID_BASE + 0xC)
#define USB3H_1_STREAM_ID_REG (RFU_STREAM_ID_BASE + 0x10)
#define SATA_0_STREAM_ID_REG (RFU_STREAM_ID_BASE + 0x14)
#define SATA_1_STREAM_ID_REG (RFU_STREAM_ID_BASE + 0x18)
#define CP_DMA_0_STREAM_ID_REG (0x6B0010)
#define CP_DMA_1_STREAM_ID_REG (0x6D0010)
/* We allocate IDs 128-255 for PCIe */
#define MAX_STREAM_ID (0x80)
uintptr_t stream_id_reg[] = {
USB3H_0_STREAM_ID_REG,
USB3H_1_STREAM_ID_REG,
CP_DMA_0_STREAM_ID_REG,
CP_DMA_1_STREAM_ID_REG,
SATA_0_STREAM_ID_REG,
SATA_1_STREAM_ID_REG,
0
};
static void cp110_errata_wa_init(uintptr_t base)
{
uint32_t data;
/* ERRATA GL-4076863:
* Reset value for global_secure_enable inputs must be changed
* from '1' to '0'.
* When asserted, only "secured" transactions can enter IHB
* configuration space.
* However, blocking AXI transactions is performed by IOB.
* Performing it also at IHB/HB complicates programming model.
*
* Enable non-secure access in SOC configuration register
*/
data = mmio_read_32(base + MVEBU_SOC_CFG_REG(MVEBU_SOC_CFG_REG_NUM));
data &= ~MVEBU_SOC_CFG_GLOG_SECURE_EN_MASK;
mmio_write_32(base + MVEBU_SOC_CFG_REG(MVEBU_SOC_CFG_REG_NUM), data);
}
static void cp110_pcie_clk_cfg(uintptr_t base)
{
uint32_t pcie0_clk, pcie1_clk, reg;
/*
* Determine the pcie0/1 clock direction (input/output) from the
* sample at reset.
*/
reg = mmio_read_32(base + MVEBU_SAMPLE_AT_RESET_REG);
pcie0_clk = (reg & SAR_PCIE0_CLK_CFG_MASK) >> SAR_PCIE0_CLK_CFG_OFFSET;
pcie1_clk = (reg & SAR_PCIE1_CLK_CFG_MASK) >> SAR_PCIE1_CLK_CFG_OFFSET;
/* CP110 revision A2 */
if (cp110_rev_id_get(base) == MVEBU_CP110_REF_ID_A2) {
/*
* PCIe Reference Clock Buffer Control register must be
* set according to the clock direction (input/output)
*/
reg = mmio_read_32(base + MVEBU_PCIE_REF_CLK_BUF_CTRL);
reg &= ~(PCIE0_REFCLK_BUFF_SOURCE | PCIE1_REFCLK_BUFF_SOURCE);
if (!pcie0_clk)
reg |= PCIE0_REFCLK_BUFF_SOURCE;
if (!pcie1_clk)
reg |= PCIE1_REFCLK_BUFF_SOURCE;
mmio_write_32(base + MVEBU_PCIE_REF_CLK_BUF_CTRL, reg);
}
/* CP110 revision A1 */
if (cp110_rev_id_get(base) == MVEBU_CP110_REF_ID_A1) {
if (!pcie0_clk || !pcie1_clk) {
/*
 * If one of the PCIe clocks is set to input,
 * the mss_push[131] field must be set; otherwise,
 * the PCIe clock might not work.
*/
reg = mmio_read_32(base + MVEBU_CP_MSS_DPSHSR_REG);
reg |= MSS_DPSHSR_REG_PCIE_CLK_SEL;
mmio_write_32(base + MVEBU_CP_MSS_DPSHSR_REG, reg);
}
}
}
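The sample-at-reset decode above can be exercised on its own. A minimal host-side sketch of the bit-field extraction, reusing the mask/offset macros defined earlier (the raw register values fed to it are invented examples):

```c
#include <stdint.h>

#define SAR_PCIE1_CLK_CFG_OFFSET	31
#define SAR_PCIE1_CLK_CFG_MASK		(0x1u << SAR_PCIE1_CLK_CFG_OFFSET)
#define SAR_PCIE0_CLK_CFG_OFFSET	30
#define SAR_PCIE0_CLK_CFG_MASK		(0x1u << SAR_PCIE0_CLK_CFG_OFFSET)

/* Extract the PCIe0/1 clock direction bits from a raw SAR value */
static inline uint32_t sar_pcie0_clk(uint32_t sar)
{
	return (sar & SAR_PCIE0_CLK_CFG_MASK) >> SAR_PCIE0_CLK_CFG_OFFSET;
}

static inline uint32_t sar_pcie1_clk(uint32_t sar)
{
	return (sar & SAR_PCIE1_CLK_CFG_MASK) >> SAR_PCIE1_CLK_CFG_OFFSET;
}
```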
/* Set a unique stream id for all DMA capable devices */
static void cp110_stream_id_init(uintptr_t base, uint32_t stream_id)
{
int i = 0;
while (stream_id_reg[i]) {
if (i > MAX_STREAM_ID_PER_CP) {
NOTICE("Only first %d (maximum) Stream IDs allocated\n",
MAX_STREAM_ID_PER_CP);
return;
}
if ((stream_id_reg[i] == CP_DMA_0_STREAM_ID_REG) ||
(stream_id_reg[i] == CP_DMA_1_STREAM_ID_REG))
mmio_write_32(base + stream_id_reg[i],
stream_id << 16 | stream_id);
else
mmio_write_32(base + stream_id_reg[i], stream_id);
/* SATA port 0/1 are in the same SATA unit, and they should use
* the same STREAM ID number
*/
if (stream_id_reg[i] != SATA_0_STREAM_ID_REG)
stream_id++;
i++;
}
}
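For the CP DMA engines, cp110_stream_id_init() programs the same ID into both 16-bit halves of the stream ID register, while other units take the plain ID. The packing can be sketched as:

```c
#include <stdint.h>

/* Mirror of the DMA stream ID packing in cp110_stream_id_init():
 * the same ID is written into the low and high halves of the register.
 */
static inline uint32_t dma_stream_id_pack(uint32_t stream_id)
{
	return (stream_id << 16) | stream_id;
}
```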
static void cp110_axi_attr_init(uintptr_t base)
{
uint32_t index, data;
/* Initialize AXI attributes for Armada-7K/8K SoC */
/* Go over the AXI attributes and set Ax-Cache and Ax-Domain */
for (index = 0; index < AXI_MAX_ATTR; index++) {
switch (index) {
		/* The DFX and MSS units are non-coherent only -
		 * there is no option to configure the Ax-Cache and Ax-Domain
*/
case AXI_DFX_ATTR:
case AXI_MSS_ATTR:
continue;
default:
/* Set Ax-Cache as cacheable, no allocate, modifiable,
* bufferable
* The values are different because Read & Write
* definition is different in Ax-Cache
*/
data = mmio_read_32(base + MVEBU_AXI_ATTR_REG(index));
data &= ~MVEBU_AXI_ATTR_ARCACHE_MASK;
data |= (CACHE_ATTR_WRITE_ALLOC |
CACHE_ATTR_CACHEABLE |
CACHE_ATTR_BUFFERABLE) <<
MVEBU_AXI_ATTR_ARCACHE_OFFSET;
data &= ~MVEBU_AXI_ATTR_AWCACHE_MASK;
data |= (CACHE_ATTR_READ_ALLOC |
CACHE_ATTR_CACHEABLE |
CACHE_ATTR_BUFFERABLE) <<
MVEBU_AXI_ATTR_AWCACHE_OFFSET;
/* Set Ax-Domain as Outer domain */
data &= ~MVEBU_AXI_ATTR_ARDOMAIN_MASK;
data |= DOMAIN_OUTER_SHAREABLE <<
MVEBU_AXI_ATTR_ARDOMAIN_OFFSET;
data &= ~MVEBU_AXI_ATTR_AWDOMAIN_MASK;
data |= DOMAIN_OUTER_SHAREABLE <<
MVEBU_AXI_ATTR_AWDOMAIN_OFFSET;
mmio_write_32(base + MVEBU_AXI_ATTR_REG(index), data);
}
}
/* SATA IOCC supported, cache attributes
* for SATA MBUS to AXI configuration.
*/
data = mmio_read_32(base + MVEBU_SATA_M2A_AXI_PORT_CTRL_REG);
data &= ~MVEBU_SATA_M2A_AXI_AWCACHE_MASK;
data |= (CACHE_ATTR_WRITE_ALLOC |
CACHE_ATTR_CACHEABLE |
CACHE_ATTR_BUFFERABLE) <<
MVEBU_SATA_M2A_AXI_AWCACHE_OFFSET;
data &= ~MVEBU_SATA_M2A_AXI_ARCACHE_MASK;
data |= (CACHE_ATTR_READ_ALLOC |
CACHE_ATTR_CACHEABLE |
CACHE_ATTR_BUFFERABLE) <<
MVEBU_SATA_M2A_AXI_ARCACHE_OFFSET;
mmio_write_32(base + MVEBU_SATA_M2A_AXI_PORT_CTRL_REG, data);
/* Set all IO's AXI attribute to non-secure access. */
for (index = 0; index < MVEBU_AXI_PROT_REGS_NUM; index++)
mmio_write_32(base + MVEBU_AXI_PROT_REG(index),
DOMAIN_SYSTEM_SHAREABLE);
}
static void amb_bridge_init(uintptr_t base)
{
uint32_t reg;
/* Open AMB bridge Window to Access COMPHY/MDIO registers */
reg = mmio_read_32(base + MVEBU_AMB_IP_BRIDGE_WIN_REG(0));
reg &= ~(MVEBU_AMB_IP_BRIDGE_WIN_SIZE_MASK |
MVEBU_AMB_IP_BRIDGE_WIN_EN_MASK);
reg |= (0x7ff << MVEBU_AMB_IP_BRIDGE_WIN_SIZE_OFFSET) |
(0x1 << MVEBU_AMB_IP_BRIDGE_WIN_EN_OFFSET);
mmio_write_32(base + MVEBU_AMB_IP_BRIDGE_WIN_REG(0), reg);
}
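The window register value composed by amb_bridge_init() - size field 0x7ff with the enable bit set - can be reproduced in a small sketch using the field offsets defined above:

```c
#include <stdint.h>

#define WIN_EN_OFFSET	0	/* MVEBU_AMB_IP_BRIDGE_WIN_EN_OFFSET */
#define WIN_SIZE_OFFSET	16	/* MVEBU_AMB_IP_BRIDGE_WIN_SIZE_OFFSET */

/* Compose the AMB bridge window register value: size field plus enable bit */
static inline uint32_t amb_win_value(uint32_t size_field)
{
	return (size_field << WIN_SIZE_OFFSET) | (1u << WIN_EN_OFFSET);
}
```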
static void cp110_rtc_init(uintptr_t base)
{
/* Update MBus timing parameters before accessing RTC registers */
mmio_clrsetbits_32(base + MVEBU_RTC_BRIDGE_TIMING_CTRL0_REG,
MVEBU_RTC_WRCLK_PERIOD_MASK,
MVEBU_RTC_WRCLK_PERIOD_DEFAULT);
mmio_clrsetbits_32(base + MVEBU_RTC_BRIDGE_TIMING_CTRL0_REG,
MVEBU_RTC_WRCLK_SETUP_MASK,
MVEBU_RTC_WRCLK_SETUP_DEFAULT <<
MVEBU_RTC_WRCLK_SETUP_OFFS);
mmio_clrsetbits_32(base + MVEBU_RTC_BRIDGE_TIMING_CTRL1_REG,
MVEBU_RTC_READ_OUTPUT_DELAY_MASK,
MVEBU_RTC_READ_OUTPUT_DELAY_DEFAULT);
/*
* Issue reset to the RTC if Clock Correction register
* contents did not sustain the reboot/power-on.
*/
if ((mmio_read_32(base + MVEBU_RTC_CCR_REG) &
MVEBU_RTC_NOMINAL_TIMING_MASK) != MVEBU_RTC_NOMINAL_TIMING) {
/* Reset Test register */
mmio_write_32(base + MVEBU_RTC_TEST_CONFIG_REG, 0);
mdelay(500);
/* Reset Time register */
mmio_write_32(base + MVEBU_RTC_TIME_REG, 0);
udelay(62);
/* Reset Status register */
mmio_write_32(base + MVEBU_RTC_STATUS_REG,
(MVEBU_RTC_STATUS_ALARM1_MASK |
MVEBU_RTC_STATUS_ALARM2_MASK));
udelay(62);
/* Turn off Int1 and Int2 sources & clear the Alarm count */
mmio_write_32(base + MVEBU_RTC_IRQ_1_CONFIG_REG, 0);
mmio_write_32(base + MVEBU_RTC_IRQ_2_CONFIG_REG, 0);
mmio_write_32(base + MVEBU_RTC_ALARM_1_REG, 0);
mmio_write_32(base + MVEBU_RTC_ALARM_2_REG, 0);
/* Setup nominal register access timing */
mmio_write_32(base + MVEBU_RTC_CCR_REG,
MVEBU_RTC_NOMINAL_TIMING);
/* Reset Time register */
mmio_write_32(base + MVEBU_RTC_TIME_REG, 0);
udelay(10);
/* Reset Status register */
mmio_write_32(base + MVEBU_RTC_STATUS_REG,
(MVEBU_RTC_STATUS_ALARM1_MASK |
MVEBU_RTC_STATUS_ALARM2_MASK));
udelay(50);
}
}
static void cp110_amb_adec_init(uintptr_t base)
{
/* enable AXI-MBUS by clearing "Bridge Windows Disable" */
mmio_clrbits_32(base + MVEBU_BRIDGE_WIN_DIS_REG,
(1 << MVEBU_BRIDGE_WIN_DIS_OFF));
/* configure AXI-MBUS windows for CP */
init_amb_adec(base);
}
void cp110_init(uintptr_t cp110_base, uint32_t stream_id)
{
INFO("%s: Initialize CPx - base = %lx\n", __func__, cp110_base);
/* configure IOB windows for CP0*/
init_iob(cp110_base);
/* configure AXI-MBUS windows for CP0*/
cp110_amb_adec_init(cp110_base);
/* configure axi for CP0*/
cp110_axi_attr_init(cp110_base);
/* Execute SW WA for erratas */
cp110_errata_wa_init(cp110_base);
/* Configure PCIe clock according to clock direction */
cp110_pcie_clk_cfg(cp110_base);
/* configure stream id for CP0 */
cp110_stream_id_init(cp110_base, stream_id);
/* Open AMB bridge for comphy for CP0 & CP1*/
amb_bridge_init(cp110_base);
/* Reset RTC if needed */
cp110_rtc_init(cp110_base);
}
/* Do the minimal setup required to configure the CP in BLE */
void cp110_ble_init(uintptr_t cp110_base)
{
#if PCI_EP_SUPPORT
INFO("%s: Initialize CPx - base = %lx\n", __func__, cp110_base);
amb_bridge_init(cp110_base);
/* Configure PCIe clock */
cp110_pcie_clk_cfg(cp110_base);
/* Configure PCIe endpoint */
ble_plat_pcie_ep_setup();
#endif
}
drivers/marvell/thermal.c
@@ -0,0 +1,54 @@
/*
* Copyright (C) 2018 Marvell International Ltd.
*
* SPDX-License-Identifier: BSD-3-Clause
* https://spdx.org/licenses
*/
/* Driver for thermal unit located in Marvell ARMADA 8K and compatible SoCs */
#include <debug.h>
#include <thermal.h>
int marvell_thermal_init(struct tsen_config *tsen_cfg)
{
if (tsen_cfg->tsen_ready == 1) {
INFO("thermal sensor is already initialized\n");
return 0;
}
if (tsen_cfg->ptr_tsen_probe == NULL) {
ERROR("initial thermal sensor configuration is missing\n");
return -1;
}
if (tsen_cfg->ptr_tsen_probe(tsen_cfg)) {
ERROR("thermal sensor initialization failed\n");
return -1;
}
VERBOSE("thermal sensor was initialized\n");
return 0;
}
int marvell_thermal_read(struct tsen_config *tsen_cfg, int *temp)
{
if (temp == NULL) {
ERROR("NULL pointer for temperature read\n");
return -1;
}
if (tsen_cfg->ptr_tsen_read == NULL ||
tsen_cfg->tsen_ready == 0) {
ERROR("thermal sensor was not initialized\n");
return -1;
}
if (tsen_cfg->ptr_tsen_read(tsen_cfg, temp)) {
ERROR("temperature read failed\n");
return -1;
}
return 0;
}
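The thermal driver above only dispatches through the tsen_config callbacks. A host-side sketch of that contract, with a hypothetical stub sensor standing in for the real probe/read hooks and the two entry points reproduced with the logging stripped out:

```c
#include <stddef.h>

/* Reduced copy of struct tsen_config for a stand-alone build */
struct tsen_config {
	int tsen_ready;
	int (*ptr_tsen_probe)(struct tsen_config *cfg);
	int (*ptr_tsen_read)(struct tsen_config *cfg, int *temp);
};

/* Hypothetical stub sensor: probe marks it ready, read returns a fake value */
static int stub_probe(struct tsen_config *cfg)
{
	cfg->tsen_ready = 1;
	return 0;
}

static int stub_read(struct tsen_config *cfg, int *temp)
{
	(void)cfg;
	*temp = 42;	/* fixed fake temperature */
	return 0;
}

/* Same checks as marvell_thermal_init(), logging removed */
static int thermal_init(struct tsen_config *cfg)
{
	if (cfg->tsen_ready == 1)
		return 0;
	if (cfg->ptr_tsen_probe == NULL || cfg->ptr_tsen_probe(cfg))
		return -1;
	return 0;
}

/* Same checks as marvell_thermal_read(), logging removed */
static int thermal_read(struct tsen_config *cfg, int *temp)
{
	if (temp == NULL || cfg->ptr_tsen_read == NULL || !cfg->tsen_ready)
		return -1;
	return cfg->ptr_tsen_read(cfg, temp);
}

static struct tsen_config stub_cfg = {
	.ptr_tsen_probe = stub_probe,
	.ptr_tsen_read = stub_read,
};
```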
@@ -0,0 +1,38 @@
/*
* Copyright (C) 2018 Marvell International Ltd.
*
* SPDX-License-Identifier: BSD-3-Clause
* https://spdx.org/licenses
*/
/* This driver provides I2C support for Marvell A8K and compatible SoCs */
#ifndef _A8K_I2C_H_
#define _A8K_I2C_H_
#include <stdint.h>
/*
* Initialization, must be called once on start up, may be called
* repeatedly to change the speed and slave addresses.
*/
void i2c_init(void *i2c_base);
/*
* Read/Write interface:
* chip: I2C chip address, range 0..127
* addr: Memory (register) address within the chip
* alen: Number of bytes to use for addr (typically 1, 2 for larger
* memories, 0 for register type devices with only one
* register)
* buffer: Where to read/write the data
* len: How many bytes to read/write
*
* Returns: 0 on success, not 0 on failure
*/
int i2c_read(uint8_t chip,
unsigned int addr, int alen, uint8_t *buffer, int len);
int i2c_write(uint8_t chip,
unsigned int addr, int alen, uint8_t *buffer, int len);
#endif
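The alen parameter above sets how many register-address bytes are sent before the payload. As an illustration only (the most-significant-byte-first ordering is an assumption, not taken from this header), the address serialization typically looks like:

```c
#include <stdint.h>

/* Hypothetical helper: split a register address into 'alen' bytes,
 * most-significant byte first. MSB-first ordering is assumed here
 * for illustration; the real driver defines the wire format.
 */
static int addr_to_bytes(unsigned int addr, int alen, uint8_t *out)
{
	int i;

	for (i = 0; i < alen; i++)
		out[i] = (uint8_t)(addr >> (8 * (alen - 1 - i)));
	return alen;
}
```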
@@ -0,0 +1,21 @@
/*
* Copyright (C) 2018 Marvell International Ltd.
*
* SPDX-License-Identifier: BSD-3-Clause
* https://spdx.org/licenses
*/
/* Address map types for Marvell address translation unit drivers */
#ifndef _ADDR_MAP_H_
#define _ADDR_MAP_H_
#include <stdint.h>
struct addr_map_win {
uint64_t base_addr;
uint64_t win_size;
uint32_t target_id;
};
#endif /* _ADDR_MAP_H_ */
@@ -0,0 +1,36 @@
/*
* Copyright (C) 2018 Marvell International Ltd.
*
* SPDX-License-Identifier: BSD-3-Clause
* https://spdx.org/licenses
*/
/* AXI to M-Bridge decoding unit driver for Marvell Armada 8K and 8K+ SoCs */
#ifndef _AMB_ADEC_H_
#define _AMB_ADEC_H_
#include <stdint.h>
enum amb_attribute_ids {
AMB_SPI0_CS0_ID = 0x1E,
AMB_SPI0_CS1_ID = 0x5E,
AMB_SPI0_CS2_ID = 0x9E,
AMB_SPI0_CS3_ID = 0xDE,
AMB_SPI1_CS0_ID = 0x1A,
AMB_SPI1_CS1_ID = 0x5A,
AMB_SPI1_CS2_ID = 0x9A,
AMB_SPI1_CS3_ID = 0xDA,
AMB_DEV_CS0_ID = 0x3E,
AMB_DEV_CS1_ID = 0x3D,
AMB_DEV_CS2_ID = 0x3B,
AMB_DEV_CS3_ID = 0x37,
AMB_BOOT_CS_ID = 0x2f,
AMB_BOOT_ROM_ID = 0x1D,
};
#define AMB_MAX_WIN_ID 7
int init_amb_adec(uintptr_t base);
#endif /* _AMB_ADEC_H_ */
@@ -0,0 +1,46 @@
/*
* Copyright (C) 2017 Marvell International Ltd.
*
* SPDX-License-Identifier: BSD-3-Clause
* https://spdx.org/licenses
*/
#ifndef _ARO_H_
#define _ARO_H_
enum hws_freq {
CPU_FREQ_2000,
CPU_FREQ_1800,
CPU_FREQ_1600,
CPU_FREQ_1400,
CPU_FREQ_1300,
CPU_FREQ_1200,
CPU_FREQ_1000,
CPU_FREQ_600,
CPU_FREQ_800,
DDR_FREQ_LAST,
DDR_FREQ_SAR
};
enum cpu_clock_freq_mode {
CPU_2000_DDR_1200_RCLK_1200 = 0x0,
CPU_2000_DDR_1050_RCLK_1050 = 0x1,
CPU_1600_DDR_800_RCLK_800 = 0x4,
CPU_1800_DDR_1200_RCLK_1200 = 0x6,
CPU_1800_DDR_1050_RCLK_1050 = 0x7,
CPU_1600_DDR_900_RCLK_900 = 0x0B,
CPU_1600_DDR_1050_RCLK_1050 = 0x0D,
CPU_1600_DDR_900_RCLK_900_2 = 0x0E,
CPU_1000_DDR_650_RCLK_650 = 0x13,
CPU_1300_DDR_800_RCLK_800 = 0x14,
CPU_1300_DDR_650_RCLK_650 = 0x17,
CPU_1200_DDR_800_RCLK_800 = 0x19,
CPU_1400_DDR_800_RCLK_800 = 0x1a,
CPU_600_DDR_800_RCLK_800 = 0x1B,
CPU_800_DDR_800_RCLK_800 = 0x1C,
CPU_1000_DDR_800_RCLK_800 = 0x1D,
CPU_DDR_RCLK_INVALID
};
int init_aro(void);
#endif /* _ARO_H_ */
@@ -0,0 +1,42 @@
/*
* Copyright (C) 2018 Marvell International Ltd.
*
* SPDX-License-Identifier: BSD-3-Clause
* https://spdx.org/licenses
*/
/* LLC is the Last-Level Cache (L3C) driver
 * for the Marvell AP806, AP807 and AP810 SoCs
*/
#ifndef _CACHE_LLC_H_
#define _CACHE_LLC_H_
#define LLC_CTRL(ap) (MVEBU_LLC_BASE(ap) + 0x100)
#define LLC_SYNC(ap) (MVEBU_LLC_BASE(ap) + 0x700)
#define L2X0_INV_WAY(ap) (MVEBU_LLC_BASE(ap) + 0x77C)
#define L2X0_CLEAN_WAY(ap) (MVEBU_LLC_BASE(ap) + 0x7BC)
#define L2X0_CLEAN_INV_WAY(ap) (MVEBU_LLC_BASE(ap) + 0x7FC)
#define LLC_TC0_LOCK(ap) (MVEBU_LLC_BASE(ap) + 0x920)
#define MASTER_LLC_CTRL LLC_CTRL(MVEBU_AP0)
#define MASTER_L2X0_INV_WAY L2X0_INV_WAY(MVEBU_AP0)
#define MASTER_LLC_TC0_LOCK LLC_TC0_LOCK(MVEBU_AP0)
#define LLC_CTRL_EN 1
#define LLC_EXCLUSIVE_EN 0x100
#define LLC_WAY_MASK 0xFFFFFFFF
#ifndef __ASSEMBLY__
void llc_cache_sync(int ap_index);
void llc_flush_all(int ap_index);
void llc_clean_all(int ap_index);
void llc_inv_all(int ap_index);
void llc_disable(int ap_index);
void llc_enable(int ap_index, int excl_mode);
int llc_is_exclusive(int ap_index);
void llc_runtime_enable(int ap_index);
#endif
#endif /* _CACHE_LLC_H_ */
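Using the control bits defined above, the value llc_enable() would program can be composed like this (whether the driver builds the register exactly this way is an assumption; the bit values come from the macros above):

```c
#include <stdint.h>

#define LLC_CTRL_EN		1u
#define LLC_EXCLUSIVE_EN	0x100u

/* Compose the LLC control register value: enable bit, plus the
 * exclusive-mode bit when requested.
 */
static inline uint32_t llc_ctrl_value(int excl_mode)
{
	return LLC_CTRL_EN | (excl_mode ? LLC_EXCLUSIVE_EN : 0u);
}
```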
@@ -0,0 +1,51 @@
/*
* Copyright (C) 2018 Marvell International Ltd.
*
* SPDX-License-Identifier: BSD-3-Clause
* https://spdx.org/licenses
*/
/* CCU unit device driver for Marvell AP806, AP807 and AP810 SoCs */
#ifndef _CCU_H_
#define _CCU_H_
#ifndef __ASSEMBLY__
#include <addr_map.h>
#endif
/* CCU registers definitions */
#define CCU_WIN_CR_OFFSET(ap, win) (MVEBU_CCU_BASE(ap) + 0x0 + \
(0x10 * win))
#define CCU_TARGET_ID_OFFSET (8)
#define CCU_TARGET_ID_MASK (0x7F)
#define CCU_WIN_SCR_OFFSET(ap, win) (MVEBU_CCU_BASE(ap) + 0x4 + \
(0x10 * win))
#define CCU_WIN_ENA_WRITE_SECURE (0x1)
#define CCU_WIN_ENA_READ_SECURE (0x2)
#define CCU_WIN_ALR_OFFSET(ap, win) (MVEBU_CCU_BASE(ap) + 0x8 + \
(0x10 * win))
#define CCU_WIN_AHR_OFFSET(ap, win) (MVEBU_CCU_BASE(ap) + 0xC + \
(0x10 * win))
#define CCU_WIN_GCR_OFFSET(ap) (MVEBU_CCU_BASE(ap) + 0xD0)
#define CCU_GCR_TARGET_OFFSET (8)
#define CCU_GCR_TARGET_MASK (0xFF)
#define CCU_SRAM_WIN_CR CCU_WIN_CR_OFFSET(MVEBU_AP0, 1)
#ifndef __ASSEMBLY__
int init_ccu(int);
void ccu_win_check(struct addr_map_win *win);
void ccu_enable_win(int ap_index, struct addr_map_win *win, uint32_t win_id);
void ccu_temp_win_insert(int ap_index, struct addr_map_win *win, int size);
void ccu_temp_win_remove(int ap_index, struct addr_map_win *win, int size);
void ccu_dram_win_config(int ap_index, struct addr_map_win *win);
void ccu_dram_target_set(int ap_index, uint32_t target);
void ccu_save_win_all(int ap_id);
void ccu_restore_win_all(int ap_id);
#endif
#endif /* _CCU_H_ */
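Each CCU window owns a 0x10-byte register stride: CR at +0x0, SCR at +0x4, ALR at +0x8, AHR at +0xC. A sketch of the offset arithmetic, with a made-up base standing in for MVEBU_CCU_BASE(ap):

```c
#include <stdint.h>

#define CCU_BASE	0x3f000000u	/* invented stand-in for MVEBU_CCU_BASE(ap) */

/* Window control and address-low register addresses, 0x10 bytes per window */
static inline uint32_t ccu_win_cr(uint32_t win)
{
	return CCU_BASE + 0x0 + 0x10 * win;
}

static inline uint32_t ccu_win_alr(uint32_t win)
{
	return CCU_BASE + 0x8 + 0x10 * win;
}
```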
@@ -0,0 +1,19 @@
/*
* Copyright (C) 2018 Marvell International Ltd.
*
* SPDX-License-Identifier: BSD-3-Clause
* https://spdx.org/licenses
*/
/* GWIN unit device driver for Marvell AP810 SoC */
#ifndef _GWIN_H_
#define _GWIN_H_
#include <addr_map.h>
int init_gwin(int ap_index);
void gwin_temp_win_insert(int ap_index, struct addr_map_win *win, int size);
void gwin_temp_win_remove(int ap_index, struct addr_map_win *win, int size);
#endif /* _GWIN_H_ */
@@ -0,0 +1,19 @@
/*
* Copyright (C) 2018 Marvell International Ltd.
*
* SPDX-License-Identifier: BSD-3-Clause
* https://spdx.org/licenses
*/
#ifndef _I2C_H_
#define _I2C_H_
#include <stdint.h>
void i2c_init(void);
int i2c_read(uint8_t chip,
unsigned int addr, int alen, uint8_t *buffer, int len);
int i2c_write(uint8_t chip,
unsigned int addr, int alen, uint8_t *buffer, int len);
#endif
@@ -0,0 +1,21 @@
/*
* Copyright (C) 2018 Marvell International Ltd.
*
* SPDX-License-Identifier: BSD-3-Clause
* https://spdx.org/licenses
*/
/* IO Window unit device driver for Marvell AP806, AP807 and AP810 SoCs */
#ifndef _IO_WIN_H_
#define _IO_WIN_H_
#include <addr_map.h>
int init_io_win(int ap_index);
void iow_temp_win_insert(int ap_index, struct addr_map_win *win, int size);
void iow_temp_win_remove(int ap_index, struct addr_map_win *win, int size);
void iow_save_win_all(int ap_id);
void iow_restore_win_all(int ap_id);
#endif /* _IO_WIN_H_ */
@@ -0,0 +1,31 @@
/*
* Copyright (C) 2018 Marvell International Ltd.
*
* SPDX-License-Identifier: BSD-3-Clause
* https://spdx.org/licenses
*/
/* IOB unit device driver for Marvell CP110 and CP115 SoCs */
#ifndef _IOB_H_
#define _IOB_H_
#include <addr_map.h>
enum target_ids_iob {
INTERNAL_TID = 0x0,
MCI0_TID = 0x1,
PEX1_TID = 0x2,
PEX2_TID = 0x3,
PEX0_TID = 0x4,
NAND_TID = 0x5,
RUNIT_TID = 0x6,
MCI1_TID = 0x7,
IOB_MAX_TID
};
int init_iob(uintptr_t base);
void iob_cfg_space_update(int ap_idx, int cp_idx,
uintptr_t base, uintptr_t new_base);
#endif /* _IOB_H_ */
@@ -0,0 +1,18 @@
/*
* Copyright (C) 2018 Marvell International Ltd.
*
* SPDX-License-Identifier: BSD-3-Clause
* https://spdx.org/licenses
*/
/* MCI bus driver for Marvell ARMADA 8K and 8K+ SoCs */
#ifndef _MCI_H_
#define _MCI_H_
int mci_initialize(int mci_index);
void mci_turn_link_down(void);
void mci_turn_link_on(void);
int mci_get_link_status(void);
#endif /* _MCI_H_ */
@@ -0,0 +1,17 @@
/*
* Copyright (C) 2018 Marvell International Ltd.
*
* SPDX-License-Identifier: BSD-3-Clause
* https://spdx.org/licenses
*/
/* AP8xx Marvell SoC driver */
#ifndef __AP_SETUP_H__
#define __AP_SETUP_H__
void ap_init(void);
void ap_ble_init(void);
int ap_get_count(void);
#endif /* __AP_SETUP_H__ */
@@ -0,0 +1,53 @@
/*
* Copyright (C) 2018 Marvell International Ltd.
*
* SPDX-License-Identifier: BSD-3-Clause
* https://spdx.org/licenses
*/
/* CP110 Marvell SoC driver */
#ifndef __CP110_SETUP_H__
#define __CP110_SETUP_H__
#include <mmio.h>
#include <mvebu_def.h>
#define MVEBU_DEVICE_ID_REG (MVEBU_CP_DFX_OFFSET + 0x40)
#define MVEBU_DEVICE_ID_OFFSET (0)
#define MVEBU_DEVICE_ID_MASK (0xffff << MVEBU_DEVICE_ID_OFFSET)
#define MVEBU_DEVICE_REV_OFFSET (16)
#define MVEBU_DEVICE_REV_MASK (0xf << MVEBU_DEVICE_REV_OFFSET)
#define MVEBU_70X0_DEV_ID (0x7040)
#define MVEBU_70X0_CP115_DEV_ID (0x7045)
#define MVEBU_80X0_DEV_ID (0x8040)
#define MVEBU_80X0_CP115_DEV_ID (0x8045)
#define MVEBU_CP110_SA_DEV_ID (0x110)
#define MVEBU_CP110_REF_ID_A1 1
#define MVEBU_CP110_REF_ID_A2 2
#define MAX_STREAM_ID_PER_CP (0x10)
#define STREAM_ID_BASE (0x40)
static inline uint32_t cp110_device_id_get(uintptr_t base)
{
/* Returns:
* - MVEBU_70X0_DEV_ID for A70X0 family
* - MVEBU_80X0_DEV_ID for A80X0 family
 * - MVEBU_CP110_SA_DEV_ID for a CP connected stand-alone
*/
return (mmio_read_32(base + MVEBU_DEVICE_ID_REG) >>
MVEBU_DEVICE_ID_OFFSET) &
MVEBU_DEVICE_ID_MASK;
}
static inline uint32_t cp110_rev_id_get(uintptr_t base)
{
return (mmio_read_32(base + MVEBU_DEVICE_ID_REG) &
MVEBU_DEVICE_REV_MASK) >>
MVEBU_DEVICE_REV_OFFSET;
}
void cp110_init(uintptr_t cp110_base, uint32_t stream_id);
void cp110_ble_init(uintptr_t cp110_base);
#endif /* __CP110_SETUP_H__ */
@@ -0,0 +1,31 @@
/*
* Copyright (C) 2018 Marvell International Ltd.
*
* SPDX-License-Identifier: BSD-3-Clause
* https://spdx.org/licenses
*/
/* Driver for thermal unit located in Marvell ARMADA 8K and compatible SoCs */
#ifndef _THERMAL_H
#define _THERMAL_H
struct tsen_config {
/* thermal temperature parameters */
int tsen_offset;
int tsen_gain;
int tsen_divisor;
/* thermal data */
int tsen_ready;
void *regs_base;
/* thermal functionality */
int (*ptr_tsen_probe)(struct tsen_config *cfg);
int (*ptr_tsen_read)(struct tsen_config *cfg, int *temp);
};
/* Thermal driver APIs */
int marvell_thermal_init(struct tsen_config *tsen_cfg);
int marvell_thermal_read(struct tsen_config *tsen_cfg, int *temp);
struct tsen_config *marvell_thermal_config_get(void);
#endif /* _THERMAL_H */
@@ -37,6 +37,13 @@
#define CORTEX_A72_CPUACTLR_EL1_DCC_AS_DCCI (ULL(1) << 44)
#define CORTEX_A72_CPUACTLR_EL1_DIS_INSTR_PREFETCH (ULL(1) << 32)
/*******************************************************************************
* L2 Auxiliary Control register specific definitions.
******************************************************************************/
#define CORTEX_A72_L2ACTLR_EL1 S3_1_C15_C0_0
#define CORTEX_A72_L2ACTLR_ENABLE_UNIQUE_CLEAN (ULL(1) << 14)
/*******************************************************************************
* L2 Control register specific definitions.
******************************************************************************/
@@ -0,0 +1,128 @@
/*
* Copyright (C) 2018 Marvell International Ltd.
*
* SPDX-License-Identifier: BSD-3-Clause
* https://spdx.org/licenses
*/
#ifndef __A8K_COMMON_H__
#define __A8K_COMMON_H__
#include <amb_adec.h>
#include <io_win.h>
#include <iob.h>
#include <ccu.h>
/*
 * This struct supports the skip-image request
 * detection_method: the method used to detect the request "signal".
 * info:
 *	GPIO:
 *		button_state: HIGH (pressed button), LOW (unpressed button).
 *		num: the button's MPP number.
 *	i2c:
 *		i2c_addr: the address of the chosen I2C chip.
 *		i2c_reg: the chosen I2C register.
 *	test:
 *		the die in which the button is located (AP or CP);
 *		in case of CP: cp_index = 0 for CP0, cp_index = 1 for CP1.
 */
struct skip_image {
enum {
GPIO,
I2C,
USER_DEFINED
} detection_method;
struct {
struct {
int num;
enum {
HIGH,
LOW
} button_state;
} gpio;
struct {
int i2c_addr;
int i2c_reg;
} i2c;
struct {
enum {
CP,
AP
} cp_ap;
int cp_index;
} test;
} info;
};
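A filled-in example of the structure above, using hypothetical board values (skip-image detection via a GPIO button on MPP 33 that reads HIGH when pressed); the structure definition is repeated so the sketch compiles stand-alone:

```c
/* Reduced copy of struct skip_image for a stand-alone build */
struct skip_image {
	enum { GPIO, I2C, USER_DEFINED } detection_method;
	struct {
		struct {
			int num;
			enum { HIGH, LOW } button_state;
		} gpio;
		struct {
			int i2c_addr;
			int i2c_reg;
		} i2c;
		struct {
			enum { CP, AP } cp_ap;
			int cp_index;
		} test;
	} info;
};

/* Hypothetical board configuration: GPIO button on MPP 33, active-high */
static struct skip_image skip_cfg = {
	.detection_method = GPIO,
	.info.gpio = {
		.num = 33,
		.button_state = HIGH,
	},
};
```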
/*
 * This struct supports the SoC power-off method
 * type: the method used to power off the SoC
 * cfg:
 *	PMIC_GPIO:
 *	pin_count: number of GPIO pins used for toggling the signal that
 *		   notifies the external PMIC
 *	info:	   holds the GPIO information; CP GPIOs should be used and
 *		   all GPIOs should be within the same GPIO config register
 *	step_count: number of steps in the GPIO toggling sequence
 *	seq:	   GPIO toggling values in sequence, each bit represents a
 *		   GPIO; for example, bit0 represents the first GPIO used
 *		   for toggling. The last step is used to trigger the
 *		   power-off signal
 *	delay_ms:  transition interval, in ms, for each GPIO setting to
 *		   take effect
 */
/* Max GPIO number used to notify PMIC to power off the SoC */
#define PMIC_GPIO_MAX_NUMBER 8
/* Max GPIO toggling steps in sequence to power off the SoC */
#define PMIC_GPIO_MAX_TOGGLE_STEP 8
enum gpio_output_state {
GPIO_LOW = 0,
GPIO_HIGH
};
typedef struct gpio_info {
int cp_index;
int gpio_index;
} gpio_info_t;
struct power_off_method {
enum {
PMIC_GPIO,
} type;
struct {
struct {
int pin_count;
struct gpio_info info[PMIC_GPIO_MAX_NUMBER];
int step_count;
uint32_t seq[PMIC_GPIO_MAX_TOGGLE_STEP];
int delay_ms;
} gpio;
} cfg;
};
int marvell_gpio_config(void);
uint32_t marvell_get_io_win_gcr_target(int ap_idx);
uint32_t marvell_get_ccu_gcr_target(int ap_idx);
/*
* The functions below are defined as Weak and may be overridden
* in specific Marvell standard platform
*/
int marvell_get_amb_memory_map(struct addr_map_win **win,
uint32_t *size, uintptr_t base);
int marvell_get_io_win_memory_map(int ap_idx, struct addr_map_win **win,
uint32_t *size);
int marvell_get_iob_memory_map(struct addr_map_win **win,
uint32_t *size, uintptr_t base);
int marvell_get_ccu_memory_map(int ap_idx, struct addr_map_win **win,
uint32_t *size);
#endif /* __A8K_COMMON_H__ */
@@ -0,0 +1,79 @@
/*
* Copyright (C) 2018 Marvell International Ltd.
*
* SPDX-License-Identifier: BSD-3-Clause
* https://spdx.org/licenses
*/
#ifndef __BOARD_MARVELL_DEF_H__
#define __BOARD_MARVELL_DEF_H__
/*
* Required platform porting definitions common to all ARM
* development platforms
*/
/* Size of cacheable stacks */
#if DEBUG_XLAT_TABLE
# define PLATFORM_STACK_SIZE 0x800
#elif IMAGE_BL1
#if TRUSTED_BOARD_BOOT
# define PLATFORM_STACK_SIZE 0x1000
#else
# define PLATFORM_STACK_SIZE 0x440
#endif
#elif IMAGE_BL2
# if TRUSTED_BOARD_BOOT
# define PLATFORM_STACK_SIZE 0x1000
# else
# define PLATFORM_STACK_SIZE 0x400
# endif
#elif IMAGE_BL31
# define PLATFORM_STACK_SIZE 0x400
#elif IMAGE_BL32
# define PLATFORM_STACK_SIZE 0x440
#endif
/*
* PLAT_MARVELL_MMAP_ENTRIES depends on the number of entries in the
* plat_arm_mmap array defined for each BL stage.
*/
#if IMAGE_BLE
# define PLAT_MARVELL_MMAP_ENTRIES 3
#endif
#if IMAGE_BL1
# if TRUSTED_BOARD_BOOT
# define PLAT_MARVELL_MMAP_ENTRIES 7
# else
# define PLAT_MARVELL_MMAP_ENTRIES 6
# endif /* TRUSTED_BOARD_BOOT */
#endif
#if IMAGE_BL2
# define PLAT_MARVELL_MMAP_ENTRIES 8
#endif
#if IMAGE_BL31
#define PLAT_MARVELL_MMAP_ENTRIES 5
#endif
/*
* Platform specific page table and MMU setup constants
*/
#if IMAGE_BL1
#define MAX_XLAT_TABLES 4
#elif IMAGE_BLE
# define MAX_XLAT_TABLES 4
#elif IMAGE_BL2
# define MAX_XLAT_TABLES 4
#elif IMAGE_BL31
# define MAX_XLAT_TABLES 4
#elif IMAGE_BL32
# define MAX_XLAT_TABLES 4
#endif
#define MAX_IO_DEVICES 3
#define MAX_IO_HANDLES 4
#define PLAT_MARVELL_TRUSTED_SRAM_SIZE 0x80000 /* 512 KB */
#endif /* __BOARD_MARVELL_DEF_H__ */
@@ -0,0 +1,180 @@
/*
* Copyright (C) 2018 Marvell International Ltd.
*
* SPDX-License-Identifier: BSD-3-Clause
* https://spdx.org/licenses
*/
#ifndef __MARVELL_DEF_H__
#define __MARVELL_DEF_H__
#include <arch.h>
#include <common_def.h>
#include <platform_def.h>
#include <tbbr_img_def.h>
#include <xlat_tables.h>
/******************************************************************************
* Definitions common to all MARVELL standard platforms
*****************************************************************************/
/* Special value used to verify platform parameters from BL2 to BL31 */
#define MARVELL_BL31_PLAT_PARAM_VAL 0x0f1e2d3c4b5a6978ULL
#define MARVELL_CACHE_WRITEBACK_SHIFT 6
/*
* Macros mapping the MPIDR Affinity levels to MARVELL Platform Power levels.
* The power levels have a 1:1 mapping with the MPIDR affinity levels.
*/
#define MARVELL_PWR_LVL0 MPIDR_AFFLVL0
#define MARVELL_PWR_LVL1 MPIDR_AFFLVL1
#define MARVELL_PWR_LVL2 MPIDR_AFFLVL2
/*
* Macros for local power states in Marvell platforms encoded by
* State-ID field within the power-state parameter.
*/
/* Local power state for power domains in Run state. */
#define MARVELL_LOCAL_STATE_RUN 0
/* Local power state for retention. Valid only for CPU power domains */
#define MARVELL_LOCAL_STATE_RET 1
/*
* Local power state for OFF/power-down. Valid for CPU
* and cluster power domains
*/
#define MARVELL_LOCAL_STATE_OFF 2
/* The first 4KB of Trusted SRAM are used as shared memory */
#define MARVELL_TRUSTED_SRAM_BASE PLAT_MARVELL_ATF_BASE
#define MARVELL_SHARED_RAM_BASE MARVELL_TRUSTED_SRAM_BASE
#define MARVELL_SHARED_RAM_SIZE 0x00001000 /* 4 KB */
/* The remaining Trusted SRAM is used to load the BL images */
#define MARVELL_BL_RAM_BASE (MARVELL_SHARED_RAM_BASE + \
MARVELL_SHARED_RAM_SIZE)
#define MARVELL_BL_RAM_SIZE (PLAT_MARVELL_TRUSTED_SRAM_SIZE - \
MARVELL_SHARED_RAM_SIZE)
/* Non-shared DRAM */
#define MARVELL_DRAM_BASE ULL(0x0)
#define MARVELL_DRAM_SIZE ULL(0x80000000)
#define MARVELL_DRAM_END (MARVELL_DRAM_BASE + \
MARVELL_DRAM_SIZE - 1)
#define MARVELL_IRQ_SEC_PHY_TIMER 29
#define MARVELL_IRQ_SEC_SGI_0 8
#define MARVELL_IRQ_SEC_SGI_1 9
#define MARVELL_IRQ_SEC_SGI_2 10
#define MARVELL_IRQ_SEC_SGI_3 11
#define MARVELL_IRQ_SEC_SGI_4 12
#define MARVELL_IRQ_SEC_SGI_5 13
#define MARVELL_IRQ_SEC_SGI_6 14
#define MARVELL_IRQ_SEC_SGI_7 15
#define MARVELL_MAP_SHARED_RAM MAP_REGION_FLAT( \
MARVELL_SHARED_RAM_BASE,\
MARVELL_SHARED_RAM_SIZE,\
MT_MEMORY | MT_RW | MT_SECURE)
#define MARVELL_MAP_DRAM MAP_REGION_FLAT( \
MARVELL_DRAM_BASE, \
MARVELL_DRAM_SIZE, \
MT_MEMORY | MT_RW | MT_NS)
/*
* The number of regions like RO(code), coherent and data required by
* different BL stages which need to be mapped in the MMU.
*/
#if USE_COHERENT_MEM
#define MARVELL_BL_REGIONS 3
#else
#define MARVELL_BL_REGIONS 2
#endif
#define MAX_MMAP_REGIONS (PLAT_MARVELL_MMAP_ENTRIES + \
MARVELL_BL_REGIONS)
#define MARVELL_CONSOLE_BAUDRATE 115200
/******************************************************************************
* Required platform porting definitions common to all MARVELL std. platforms
*****************************************************************************/
#define PLAT_PHY_ADDR_SPACE_SIZE (1ULL << 32)
#define PLAT_VIRT_ADDR_SPACE_SIZE (1ULL << 32)
/*
* This macro defines the deepest retention state possible. A higher state
* id will represent an invalid or a power down state.
*/
#define PLAT_MAX_RET_STATE MARVELL_LOCAL_STATE_RET
/*
* This macro defines the deepest power down states possible. Any state ID
* higher than this is invalid.
*/
#define PLAT_MAX_OFF_STATE MARVELL_LOCAL_STATE_OFF
#define PLATFORM_CORE_COUNT PLAT_MARVELL_CORE_COUNT
#define PLAT_NUM_PWR_DOMAINS (PLAT_MARVELL_CLUSTER_COUNT + \
PLATFORM_CORE_COUNT)
/*
* Some data must be aligned on the biggest cache line size in the platform.
* This is known only to the platform as it might have a combination of
* integrated and external caches.
*/
#define CACHE_WRITEBACK_GRANULE (1 << MARVELL_CACHE_WRITEBACK_SHIFT)
/*******************************************************************************
* BL1 specific defines.
* BL1 RW data is relocated from ROM to RAM at runtime so we need 2 sets of
* addresses.
******************************************************************************/
#define BL1_RO_BASE PLAT_MARVELL_TRUSTED_ROM_BASE
#define BL1_RO_LIMIT (PLAT_MARVELL_TRUSTED_ROM_BASE \
+ PLAT_MARVELL_TRUSTED_ROM_SIZE)
/*
* Put BL1 RW at the top of the Trusted SRAM.
*/
#define BL1_RW_BASE (MARVELL_BL_RAM_BASE + \
MARVELL_BL_RAM_SIZE - \
PLAT_MARVELL_MAX_BL1_RW_SIZE)
#define BL1_RW_LIMIT (MARVELL_BL_RAM_BASE + MARVELL_BL_RAM_SIZE)
/*******************************************************************************
* BLE specific defines.
******************************************************************************/
#define BLE_BASE PLAT_MARVELL_SRAM_BASE
#define BLE_LIMIT PLAT_MARVELL_SRAM_END
/*******************************************************************************
* BL2 specific defines.
******************************************************************************/
/*
* Put BL2 just below BL31.
*/
#define BL2_BASE (BL31_BASE - PLAT_MARVELL_MAX_BL2_SIZE)
#define BL2_LIMIT BL31_BASE
/*******************************************************************************
* BL31 specific defines.
******************************************************************************/
/*
* Put BL31 at the top of the Trusted SRAM.
*/
#define BL31_BASE (MARVELL_BL_RAM_BASE + \
MARVELL_BL_RAM_SIZE - \
PLAT_MARVEL_MAX_BL31_SIZE)
#define BL31_PROGBITS_LIMIT BL1_RW_BASE
#define BL31_LIMIT (MARVELL_BL_RAM_BASE + \
MARVELL_BL_RAM_SIZE)
#endif /* __MARVELL_DEF_H__ */

@@ -0,0 +1,109 @@
/*
* Copyright (C) 2018 Marvell International Ltd.
*
* SPDX-License-Identifier: BSD-3-Clause
* https://spdx.org/licenses
*/
#ifndef __PLAT_MARVELL_H__
#define __PLAT_MARVELL_H__
#include <cassert.h>
#include <cpu_data.h>
#include <stdint.h>
#include <utils.h>
#include <xlat_tables.h>
/*
* Extern declarations common to Marvell standard platforms
*/
extern const mmap_region_t plat_marvell_mmap[];
#define MARVELL_CASSERT_MMAP \
CASSERT((ARRAY_SIZE(plat_marvell_mmap) + MARVELL_BL_REGIONS) \
<= MAX_MMAP_REGIONS, \
assert_max_mmap_regions)
/*
* Utility functions common to Marvell standard platforms
*/
void marvell_setup_page_tables(uintptr_t total_base,
size_t total_size,
uintptr_t code_start,
uintptr_t code_limit,
uintptr_t rodata_start,
uintptr_t rodata_limit
#if USE_COHERENT_MEM
, uintptr_t coh_start,
uintptr_t coh_limit
#endif
);
/* IO storage utility functions */
void marvell_io_setup(void);
/* Systimer utility function */
void marvell_configure_sys_timer(void);
/* Topology utility function */
int marvell_check_mpidr(u_register_t mpidr);
/* BLE utility functions */
int ble_plat_setup(int *skip);
void plat_marvell_dram_update_topology(void);
void ble_plat_pcie_ep_setup(void);
struct pci_hw_cfg *plat_get_pcie_hw_data(void);
/* BL1 utility functions */
void marvell_bl1_early_platform_setup(void);
void marvell_bl1_platform_setup(void);
void marvell_bl1_plat_arch_setup(void);
/* BL2 utility functions */
void marvell_bl2_early_platform_setup(meminfo_t *mem_layout);
void marvell_bl2_platform_setup(void);
void marvell_bl2_plat_arch_setup(void);
uint32_t marvell_get_spsr_for_bl32_entry(void);
uint32_t marvell_get_spsr_for_bl33_entry(void);
/* BL31 utility functions */
void marvell_bl31_early_platform_setup(bl31_params_t *from_bl2,
void *plat_params_from_bl2);
void marvell_bl31_platform_setup(void);
void marvell_bl31_plat_runtime_setup(void);
void marvell_bl31_plat_arch_setup(void);
/* Power management config to power off the SoC */
void *plat_marvell_get_pm_cfg(void);
/* Check if MSS AP CM3 firmware contains PM support */
_Bool is_pm_fw_running(void);
/* Bootrom image recovery utility functions */
void *plat_marvell_get_skip_image_data(void);
/* FIP TOC validity check */
int marvell_io_is_toc_valid(void);
/*
* PSCI functionality
*/
void marvell_psci_arch_init(int ap_idx);
void plat_marvell_system_reset(void);
/*
* Optional functions required in Marvell standard platforms
*/
void plat_marvell_io_setup(void);
int plat_marvell_get_alt_image_source(
unsigned int image_id,
uintptr_t *dev_handle,
uintptr_t *image_spec);
unsigned int plat_marvell_calc_core_pos(u_register_t mpidr);
const mmap_region_t *plat_marvell_get_mmap(void);
void marvell_ble_prepare_exit(void);
void marvell_exit_bootrom(uintptr_t base);
int plat_marvell_early_cpu_powerdown(void);
#endif /* __PLAT_MARVELL_H__ */

@@ -0,0 +1,100 @@
/*
* Copyright (C) 2018 Marvell International Ltd.
*
* SPDX-License-Identifier: BSD-3-Clause
* https://spdx.org/licenses
*/
#ifndef __PLAT_PM_TRACE_H
#define __PLAT_PM_TRACE_H
/*
* PM Trace is for debug purposes only!
* It should not be enabled at system run time
*/
#undef PM_TRACE_ENABLE
/* trace entry time */
struct pm_trace_entry {
/* trace entry time stamp */
unsigned int timestamp;
/* trace info
* [16-31] - API Trace Id
* [00-15] - API Step Id
*/
unsigned int trace_info;
};
struct pm_trace_ctrl {
/* trace pointer - points to next free entry in trace cyclic queue */
unsigned int trace_pointer;
/* trace count - number of entries in the queue, clear upon read */
unsigned int trace_count;
};
/* trace size definition */
#define AP_MSS_ATF_CORE_INFO_SIZE (256)
#define AP_MSS_ATF_CORE_ENTRY_SIZE (8)
#define AP_MSS_ATF_TRACE_SIZE_MASK (0xFF)
/* trace address definition */
#define AP_MSS_TIMER_BASE (MVEBU_REGS_BASE_MASK + 0x580110)
#define AP_MSS_ATF_CORE_0_CTRL_BASE (MVEBU_REGS_BASE_MASK + 0x520140)
#define AP_MSS_ATF_CORE_1_CTRL_BASE (MVEBU_REGS_BASE_MASK + 0x520150)
#define AP_MSS_ATF_CORE_2_CTRL_BASE (MVEBU_REGS_BASE_MASK + 0x520160)
#define AP_MSS_ATF_CORE_3_CTRL_BASE (MVEBU_REGS_BASE_MASK + 0x520170)
#define AP_MSS_ATF_CORE_CTRL_BASE (AP_MSS_ATF_CORE_0_CTRL_BASE)
#define AP_MSS_ATF_CORE_0_INFO_BASE (MVEBU_REGS_BASE_MASK + 0x5201C0)
#define AP_MSS_ATF_CORE_0_INFO_TRACE (MVEBU_REGS_BASE_MASK + 0x5201C4)
#define AP_MSS_ATF_CORE_1_INFO_BASE (MVEBU_REGS_BASE_MASK + 0x5209C0)
#define AP_MSS_ATF_CORE_1_INFO_TRACE (MVEBU_REGS_BASE_MASK + 0x5209C4)
#define AP_MSS_ATF_CORE_2_INFO_BASE (MVEBU_REGS_BASE_MASK + 0x5211C0)
#define AP_MSS_ATF_CORE_2_INFO_TRACE (MVEBU_REGS_BASE_MASK + 0x5211C4)
#define AP_MSS_ATF_CORE_3_INFO_BASE (MVEBU_REGS_BASE_MASK + 0x5219C0)
#define AP_MSS_ATF_CORE_3_INFO_TRACE (MVEBU_REGS_BASE_MASK + 0x5219C4)
#define AP_MSS_ATF_CORE_INFO_BASE (AP_MSS_ATF_CORE_0_INFO_BASE)
/* trace info definition */
#define TRACE_PWR_DOMAIN_OFF (0x10000)
#define TRACE_PWR_DOMAIN_SUSPEND (0x20000)
#define TRACE_PWR_DOMAIN_SUSPEND_FINISH (0x30000)
#define TRACE_PWR_DOMAIN_ON (0x40000)
#define TRACE_PWR_DOMAIN_ON_FINISH (0x50000)
#define TRACE_PWR_DOMAIN_ON_MASK (0xFF)
#ifdef PM_TRACE_ENABLE
/* trace API definition */
void pm_core_0_trace(unsigned int trace);
void pm_core_1_trace(unsigned int trace);
void pm_core_2_trace(unsigned int trace);
void pm_core_3_trace(unsigned int trace);
typedef void (*core_trace_func)(unsigned int);
extern core_trace_func funcTbl[PLATFORM_CORE_COUNT];
#define PM_TRACE(trace) funcTbl[plat_my_core_pos()](trace)
#else
#define PM_TRACE(trace)
#endif
/*******************************************************************************
* pm_trace_add
*
* DESCRIPTION: Add PM trace
******************************************************************************
*/
void pm_trace_add(unsigned int trace, unsigned int core);
#endif /* __PLAT_PM_TRACE_H */

@@ -0,0 +1,39 @@
/*
* Copyright (C) 2018 Marvell International Ltd.
*
* SPDX-License-Identifier: BSD-3-Clause
* https://spdx.org/licenses
*/
#ifndef __CCI_MACROS_S__
#define __CCI_MACROS_S__
#include <cci.h>
#include <platform_def.h>
.section .rodata.cci_reg_name, "aS"
cci_iface_regs:
.asciz "cci_snoop_ctrl_cluster0", "cci_snoop_ctrl_cluster1" , ""
/* ------------------------------------------------
* The below required platform porting macro prints
* out relevant interconnect registers whenever an
* unhandled exception is taken in BL31.
* Clobbers: x0 - x9, sp
* ------------------------------------------------
*/
.macro print_cci_regs
adr x6, cci_iface_regs
/* Store in x7 the base address of the first interface */
mov_imm x7, (PLAT_MARVELL_CCI_BASE + SLAVE_IFACE_OFFSET( \
PLAT_MARVELL_CCI_CLUSTER0_SL_IFACE_IX))
ldr w8, [x7, #SNOOP_CTRL_REG]
/* Store in x7 the base address of the second interface */
mov_imm x7, (PLAT_MARVELL_CCI_BASE + SLAVE_IFACE_OFFSET( \
PLAT_MARVELL_CCI_CLUSTER1_SL_IFACE_IX))
ldr w9, [x7, #SNOOP_CTRL_REG]
/* Store to the crash buf and print to console */
bl str_in_crash_buf_print
.endm
#endif /* __CCI_MACROS_S__ */

@@ -0,0 +1,134 @@
/*
* Copyright (C) 2018 Marvell International Ltd.
*
* SPDX-License-Identifier: BSD-3-Clause
* https://spdx.org/licenses
*/
#ifndef __MARVELL_MACROS_S__
#define __MARVELL_MACROS_S__
#include <cci.h>
#include <gic_common.h>
#include <gicv2.h>
#include <gicv3.h>
#include <platform_def.h>
/*
* These macros are required by ATF
*/
.section .rodata.gic_reg_name, "aS"
/* Applicable only to GICv2 and GICv3 with SRE disabled (legacy mode) */
gicc_regs:
.asciz "gicc_hppir", "gicc_ahppir", "gicc_ctlr", ""
#ifdef USE_CCI
/* Applicable only to GICv3 with SRE enabled */
icc_regs:
.asciz "icc_hppir0_el1", "icc_hppir1_el1", "icc_ctlr_el3", ""
#endif
/* Registers common to both GICv2 and GICv3 */
gicd_pend_reg:
.asciz "gicd_ispendr regs (Offsets 0x200 - 0x278)\n" \
" Offset:\t\t\tvalue\n"
newline:
.asciz "\n"
spacer:
.asciz ":\t\t0x"
/* ---------------------------------------------
* The below utility macro prints out relevant GIC
* registers whenever an unhandled exception is
* taken in BL31 on ARM standard platforms.
* Expects: GICD base in x16, GICC base in x17
* Clobbers: x0 - x10, sp
* ---------------------------------------------
*/
.macro arm_print_gic_regs
/* Check for GICv3 system register access */
mrs x7, id_aa64pfr0_el1
ubfx x7, x7, #ID_AA64PFR0_GIC_SHIFT, #ID_AA64PFR0_GIC_WIDTH
cmp x7, #1
b.ne print_gicv2
/* Check for SRE enable */
mrs x8, ICC_SRE_EL3
tst x8, #ICC_SRE_SRE_BIT
b.eq print_gicv2
#ifdef USE_CCI
/* Load the icc reg list to x6 */
adr x6, icc_regs
/* Load the icc regs to gp regs used by str_in_crash_buf_print */
mrs x8, ICC_HPPIR0_EL1
mrs x9, ICC_HPPIR1_EL1
mrs x10, ICC_CTLR_EL3
/* Store to the crash buf and print to console */
bl str_in_crash_buf_print
#endif
b print_gic_common
print_gicv2:
/* Load the gicc reg list to x6 */
adr x6, gicc_regs
/* Load the gicc regs to gp regs used by str_in_crash_buf_print */
ldr w8, [x17, #GICC_HPPIR]
ldr w9, [x17, #GICC_AHPPIR]
ldr w10, [x17, #GICC_CTLR]
/* Store to the crash buf and print to console */
bl str_in_crash_buf_print
print_gic_common:
/* Print the GICD_ISPENDR regs */
add x7, x16, #GICD_ISPENDR
adr x4, gicd_pend_reg
bl asm_print_str
gicd_ispendr_loop:
sub x4, x7, x16
cmp x4, #0x280
b.eq exit_print_gic_regs
bl asm_print_hex
adr x4, spacer
bl asm_print_str
ldr x4, [x7], #8
bl asm_print_hex
adr x4, newline
bl asm_print_str
b gicd_ispendr_loop
exit_print_gic_regs:
.endm
.section .rodata.cci_reg_name, "aS"
cci_iface_regs:
.asciz "cci_snoop_ctrl_cluster0", "cci_snoop_ctrl_cluster1" , ""
/* ------------------------------------------------
* The below required platform porting macro prints
* out relevant interconnect registers whenever an
* unhandled exception is taken in BL31.
* Clobbers: x0 - x9, sp
* ------------------------------------------------
*/
.macro print_cci_regs
#ifdef USE_CCI
adr x6, cci_iface_regs
/* Store in x7 the base address of the first interface */
mov_imm x7, (PLAT_MARVELL_CCI_BASE + SLAVE_IFACE_OFFSET( \
PLAT_MARVELL_CCI_CLUSTER0_SL_IFACE_IX))
ldr w8, [x7, #SNOOP_CTRL_REG]
/* Store in x7 the base address of the second interface */
mov_imm x7, (PLAT_MARVELL_CCI_BASE + SLAVE_IFACE_OFFSET( \
PLAT_MARVELL_CCI_CLUSTER1_SL_IFACE_IX))
ldr w9, [x7, #SNOOP_CTRL_REG]
/* Store to the crash buf and print to console */
bl str_in_crash_buf_print
#endif
.endm
#endif /* __MARVELL_MACROS_S__ */

@@ -0,0 +1,34 @@
/*
* Copyright (C) 2018 Marvell International Ltd.
*
* SPDX-License-Identifier: BSD-3-Clause
* https://spdx.org/licenses
*/
#ifndef __MARVELL_PLAT_PRIV_H__
#define __MARVELL_PLAT_PRIV_H__
#include <utils.h>
/*****************************************************************************
* Function and variable prototypes
*****************************************************************************
*/
void plat_delay_timer_init(void);
uint64_t mvebu_get_dram_size(uint64_t ap_base_addr);
/*
* GIC operation, mandatory functions required in Marvell standard platforms
*/
void plat_marvell_gic_driver_init(void);
void plat_marvell_gic_init(void);
void plat_marvell_gic_cpuif_enable(void);
void plat_marvell_gic_cpuif_disable(void);
void plat_marvell_gic_pcpu_init(void);
void plat_marvell_gic_irq_save(void);
void plat_marvell_gic_irq_restore(void);
void plat_marvell_gic_irq_pcpu_save(void);
void plat_marvell_gic_irq_pcpu_restore(void);
#endif /* __MARVELL_PLAT_PRIV_H__ */

@@ -0,0 +1,26 @@
/*
* Copyright (C) 2018 Marvell International Ltd.
*
* SPDX-License-Identifier: BSD-3-Clause
* https://spdx.org/licenses
*/
#ifndef _MARVELL_PM_H_
#define _MARVELL_PM_H_
#define MVEBU_MAILBOX_MAGIC_NUM PLAT_MARVELL_MAILBOX_MAGIC_NUM
#define MVEBU_MAILBOX_SUSPEND_STATE 0xb007de7c
/* Mailbox entry indexes */
/* Magic number for validity check */
#define MBOX_IDX_MAGIC 0
/* Recovery from suspend entry point */
#define MBOX_IDX_SEC_ADDR 1
/* Suspend state magic number */
#define MBOX_IDX_SUSPEND_MAGIC 2
/* Recovery jump address for ROM bypass */
#define MBOX_IDX_ROM_EXIT_ADDR 3
/* BLE execution start counter value */
#define MBOX_IDX_START_CNT 4
#endif /* _MARVELL_PM_H_ */

@@ -0,0 +1,38 @@
/*
* Copyright (C) 2018 Marvell International Ltd.
*
* SPDX-License-Identifier: BSD-3-Clause
* https://spdx.org/licenses
*/
#ifndef _MVEBU_H_
#define _MVEBU_H_
/* Use these macros only when printf is allowed */
#define debug_enter() VERBOSE("----> Enter %s\n", __func__)
#define debug_exit() VERBOSE("<---- Exit %s\n", __func__)
/* Macro for testing alignment. Positive if number is NOT aligned */
#define IS_NOT_ALIGN(number, align) ((number) & ((align) - 1))
/* Macro for alignment up. For example, ALIGN_UP(0x0330, 0x20) = 0x0340 */
#define ALIGN_UP(number, align) (((number) & ((align) - 1)) ? \
(((number) + (align)) & ~((align)-1)) : (number))
/* Macro for testing whether a number is a power of 2. Positive if so */
#define IS_POWER_OF_2(number) ((number) != 0 && \
(((number) & ((number) - 1)) == 0))
/*
* Macro for rounding up to the next power of 2.
* It counts the leading zeros (the clz assembly opcode) of (number - 1)
* to locate the most significant set bit, then shifts 1 left by that
* bit position to produce the next power of 2.
* Note: this macro is valid for 32-bit numbers only
*/
#define ROUND_UP_TO_POW_OF_2(number) (1 << \
(32 - __builtin_clz((number) - 1)))
#define _1MB_ (1024ULL*1024ULL)
#define _1GB_ (_1MB_*1024ULL)
#endif /* _MVEBU_H_ */

@@ -66,6 +66,14 @@ MediaTek platform ports
:G: `mtk09422`_
:F: plat/mediatek/
Marvell platform ports and SoC drivers
--------------------------------------
:M: Konstantin Porotchkin <kostap@marvell.com>
:G: `kostapr`_
:F: docs/plat/marvell/
:F: plat/marvell/
:F: drivers/marvell/
NVidia platform ports
---------------------
:M: Varun Wadekar <vwadekar@nvidia.com>
@@ -165,6 +173,7 @@ Xilinx platform port
.. _glneo: https://github.com/glneo
.. _hzhuang1: https://github.com/hzhuang1
.. _jenswi-linaro: https://github.com/jenswi-linaro
.. _kostapr: https://github.com/kostapr
.. _masahir0y: https://github.com/masahir0y
.. _mtk09422: https://github.com/mtk09422
.. _qoriq-open-source: https://github.com/qoriq-open-source

@@ -290,6 +290,7 @@ define MAKE_BL
$(eval DUMP := $(call IMG_DUMP,$(1)))
$(eval BIN := $(call IMG_BIN,$(1)))
$(eval BL_LINKERFILE := $(BL$(call uppercase,$(1))_LINKERFILE))
$(eval BL_LIBS := $(BL$(call uppercase,$(1))_LIBS))
# We use sort only to get a list of unique object directory names.
# ordering is not relevant but sort removes duplicates.
$(eval TEMP_OBJ_DIRS := $(sort $(dir ${OBJS} ${LINKERFILE})))
@@ -312,7 +313,7 @@ bl${1}_dirs: | ${OBJ_DIRS}
$(eval $(call MAKE_OBJS,$(BUILD_DIR),$(SOURCES),$(1)))
$(eval $(call MAKE_LD,$(LINKERFILE),$(BL_LINKERFILE),$(1)))
$(ELF): $(OBJS) $(LINKERFILE) | bl$(1)_dirs
$(ELF): $(OBJS) $(LINKERFILE) | bl$(1)_dirs $(BL_LIBS)
@echo " LD $$@"
ifdef MAKE_BUILD_STRINGS
$(call MAKE_BUILD_STRINGS, $(BUILD_DIR)/build_message.o)
@@ -322,7 +323,7 @@ else
$$(CC) $$(TF_CFLAGS) $$(CFLAGS) -xc -c - -o $(BUILD_DIR)/build_message.o
endif
$$(Q)$$(LD) -o $$@ $$(TF_LDFLAGS) $$(LDFLAGS) -Map=$(MAPFILE) \
--script $(LINKERFILE) $(BUILD_DIR)/build_message.o $(OBJS) $(LDLIBS)
--script $(LINKERFILE) $(BUILD_DIR)/build_message.o $(OBJS) $(LDLIBS) $(BL_LIBS)
$(DUMP): $(ELF)
@echo " OD $$@"

@@ -0,0 +1,89 @@
/*
* Copyright (C) 2018 Marvell International Ltd.
*
* SPDX-License-Identifier: BSD-3-Clause
* https://spdx.org/licenses
*/
#include <arch_helpers.h>
#include <debug.h>
#include <mv_ddr_if.h>
#include <plat_marvell.h>
/*
* This function may modify the default DRAM parameters
* based on information received from SPD or bootloader
* configuration located on non volatile storage
*/
void plat_marvell_dram_update_topology(void)
{
}
/*
* This struct provides the DRAM training code with
* the appropriate board DRAM configuration
*/
static struct mv_ddr_topology_map board_topology_map = {
/* FIXME: MISL board 2CS 4Gb x8 devices of micron - 2133P */
DEBUG_LEVEL_ERROR,
0x1, /* active interfaces */
/* cs_mask, mirror, dqs_swap, ck_swap X subphys */
{ { { {0x3, 0x2, 0, 0},
{0x3, 0x2, 0, 0},
{0x3, 0x2, 0, 0},
{0x3, 0x2, 0, 0},
{0x3, 0x2, 0, 0},
{0x3, 0x2, 0, 0},
{0x3, 0x2, 0, 0},
{0x3, 0x2, 0, 0},
{0x3, 0x2, 0, 0} },
SPEED_BIN_DDR_2133P, /* speed_bin */
MV_DDR_DEV_WIDTH_8BIT, /* sdram device width */
MV_DDR_DIE_CAP_4GBIT, /* die capacity */
MV_DDR_FREQ_SAR, /* frequency */
0, 0, /* cas_l, cas_wl */
MV_DDR_TEMP_LOW} }, /* temperature */
MV_DDR_32BIT_ECC_PUP8_BUS_MASK, /* subphys mask */
MV_DDR_CFG_DEFAULT, /* ddr configuration data source */
{ {0} }, /* raw spd data */
{0}, /* timing parameters */
{ /* electrical configuration */
{ /* memory electrical configuration */
MV_DDR_RTT_NOM_PARK_RZQ_DISABLE, /* rtt_nom */
{
MV_DDR_RTT_NOM_PARK_RZQ_DIV4, /* rtt_park 1cs */
MV_DDR_RTT_NOM_PARK_RZQ_DIV1 /* rtt_park 2cs */
},
{
MV_DDR_RTT_WR_DYN_ODT_OFF, /* rtt_wr 1cs */
MV_DDR_RTT_WR_RZQ_DIV2 /* rtt_wr 2cs */
},
MV_DDR_DIC_RZQ_DIV7 /* dic */
},
{ /* phy electrical configuration */
MV_DDR_OHM_30, /* data_drv_p */
MV_DDR_OHM_30, /* data_drv_n */
MV_DDR_OHM_30, /* ctrl_drv_p */
MV_DDR_OHM_30, /* ctrl_drv_n */
{
MV_DDR_OHM_60, /* odt_p 1cs */
MV_DDR_OHM_120 /* odt_p 2cs */
},
{
MV_DDR_OHM_60, /* odt_n 1cs */
MV_DDR_OHM_120 /* odt_n 2cs */
},
},
{ /* mac electrical configuration */
MV_DDR_ODT_CFG_NORMAL, /* odtcfg_pattern */
MV_DDR_ODT_CFG_ALWAYS_ON, /* odtcfg_write */
MV_DDR_ODT_CFG_NORMAL, /* odtcfg_read */
},
}
};
struct mv_ddr_topology_map *mv_ddr_topology_map_get(void)
{
/* Return the board topology as defined in the board code */
return &board_topology_map;
}

@@ -0,0 +1,141 @@
/*
* Copyright (C) 2018 Marvell International Ltd.
*
* SPDX-License-Identifier: BSD-3-Clause
* https://spdx.org/licenses
*/
#include <a8k_common.h>
/*
* If the bootrom is currently at the BLE stage, there is no need to
* include the memory map structures at this point
*/
#include <mvebu_def.h>
#ifndef IMAGE_BLE
/*****************************************************************************
* AMB Configuration
*****************************************************************************
*/
struct addr_map_win amb_memory_map[] = {
/* CP0 SPI1 CS0 Direct Mode access */
{0xf900, 0x1000000, AMB_SPI1_CS0_ID},
};
int marvell_get_amb_memory_map(struct addr_map_win **win,
uint32_t *size, uintptr_t base)
{
*win = amb_memory_map;
if (*win == NULL)
*size = 0;
else
*size = ARRAY_SIZE(amb_memory_map);
return 0;
}
#endif
/*****************************************************************************
* IO_WIN Configuration
*****************************************************************************
*/
struct addr_map_win io_win_memory_map[] = {
#ifndef IMAGE_BLE
/* MCI 0 indirect window */
{MVEBU_MCI_REG_BASE_REMAP(0), 0x100000, MCI_0_TID},
/* MCI 1 indirect window */
{MVEBU_MCI_REG_BASE_REMAP(1), 0x100000, MCI_1_TID},
#endif
};
uint32_t marvell_get_io_win_gcr_target(int ap_index)
{
return PIDI_TID;
}
int marvell_get_io_win_memory_map(int ap_index, struct addr_map_win **win,
uint32_t *size)
{
*win = io_win_memory_map;
if (*win == NULL)
*size = 0;
else
*size = ARRAY_SIZE(io_win_memory_map);
return 0;
}
#ifndef IMAGE_BLE
/*****************************************************************************
* IOB Configuration
*****************************************************************************
*/
struct addr_map_win iob_memory_map[] = {
/* PEX1_X1 window */
{0x00000000f7000000, 0x1000000, PEX1_TID},
/* PEX2_X1 window */
{0x00000000f8000000, 0x1000000, PEX2_TID},
/* PEX0_X4 window */
{0x00000000f6000000, 0x1000000, PEX0_TID},
/* SPI1_CS0 (RUNIT) window */
{0x00000000f9000000, 0x1000000, RUNIT_TID},
};
int marvell_get_iob_memory_map(struct addr_map_win **win, uint32_t *size,
uintptr_t base)
{
*win = iob_memory_map;
*size = ARRAY_SIZE(iob_memory_map);
return 0;
}
#endif
/*****************************************************************************
* CCU Configuration
*****************************************************************************
*/
struct addr_map_win ccu_memory_map[] = { /* IO window */
#ifdef IMAGE_BLE
{0x00000000f2000000, 0x4000000, IO_0_TID}, /* IO window */
#else
{0x00000000f2000000, 0xe000000, IO_0_TID},
#endif
};
uint32_t marvell_get_ccu_gcr_target(int ap)
{
return DRAM_0_TID;
}
int marvell_get_ccu_memory_map(int ap_index, struct addr_map_win **win,
uint32_t *size)
{
*win = ccu_memory_map;
*size = ARRAY_SIZE(ccu_memory_map);
return 0;
}
#ifdef IMAGE_BLE
/*****************************************************************************
* SKIP IMAGE Configuration
*****************************************************************************
*/
#if PLAT_RECOVERY_IMAGE_ENABLE
struct skip_image skip_im = {
.detection_method = GPIO,
.info.gpio.num = 33,
.info.gpio.button_state = HIGH,
.info.test.cp_ap = CP,
.info.test.cp_index = 0,
};
void *plat_marvell_get_skip_image_data(void)
{
/* Return the skip_image configurations */
return &skip_im;
}
#endif
#endif

@@ -0,0 +1,15 @@
/*
* Copyright (C) 2018 Marvell International Ltd.
*
* SPDX-License-Identifier: BSD-3-Clause
* https://spdx.org/licenses
*/
#ifndef __MVEBU_DEF_H__
#define __MVEBU_DEF_H__
#include <a8k_plat_def.h>
#define CP_COUNT 1 /* A70x0 has single CP0 */
#endif /* __MVEBU_DEF_H__ */

@@ -0,0 +1,16 @@
#
# Copyright (C) 2018 Marvell International Ltd.
#
# SPDX-License-Identifier: BSD-3-Clause
# https://spdx.org/licenses
#
PCI_EP_SUPPORT := 0
DOIMAGE_SEC := tools/doimage/secure/sec_img_7K.cfg
MARVELL_MOCHI_DRV := drivers/marvell/mochi/apn806_setup.c
include plat/marvell/a8k/common/a8k_common.mk
include plat/marvell/common/marvell_common.mk

@@ -0,0 +1,89 @@
/*
* Copyright (C) 2018 Marvell International Ltd.
*
* SPDX-License-Identifier: BSD-3-Clause
* https://spdx.org/licenses
*/
#include <arch_helpers.h>
#include <debug.h>
#include <mv_ddr_if.h>
#include <plat_marvell.h>
/*
* This function may modify the default DRAM parameters
* based on information received from SPD or bootloader
* configuration located on non volatile storage
*/
void plat_marvell_dram_update_topology(void)
{
}
/*
* This struct provides the DRAM training code with
* the appropriate board DRAM configuration
*/
static struct mv_ddr_topology_map board_topology_map = {
/* FIXME: MISL board 2CS 8Gb x8 devices of micron - 2133P */
DEBUG_LEVEL_ERROR,
0x1, /* active interfaces */
/* cs_mask, mirror, dqs_swap, ck_swap X subphys */
{ { { {0x3, 0x2, 0, 0},
{0x3, 0x2, 0, 0},
{0x3, 0x2, 0, 0},
{0x3, 0x2, 0, 0},
{0x3, 0x2, 0, 0},
{0x3, 0x2, 0, 0},
{0x3, 0x2, 0, 0},
{0x3, 0x2, 0, 0},
{0x3, 0x2, 0, 0} },
SPEED_BIN_DDR_2400T, /* speed_bin */
MV_DDR_DEV_WIDTH_8BIT, /* sdram device width */
MV_DDR_DIE_CAP_8GBIT, /* die capacity */
MV_DDR_FREQ_SAR, /* frequency */
0, 0, /* cas_l, cas_wl */
MV_DDR_TEMP_LOW} }, /* temperature */
MV_DDR_32BIT_ECC_PUP8_BUS_MASK, /* subphys mask */
MV_DDR_CFG_DEFAULT, /* ddr configuration data source */
{ {0} }, /* raw spd data */
{0}, /* timing parameters */
{ /* electrical configuration */
{ /* memory electrical configuration */
MV_DDR_RTT_NOM_PARK_RZQ_DISABLE, /* rtt_nom */
{
MV_DDR_RTT_NOM_PARK_RZQ_DIV4, /* rtt_park 1cs */
MV_DDR_RTT_NOM_PARK_RZQ_DIV1 /* rtt_park 2cs */
},
{
MV_DDR_RTT_WR_DYN_ODT_OFF, /* rtt_wr 1cs */
MV_DDR_RTT_WR_RZQ_DIV2 /* rtt_wr 2cs */
},
MV_DDR_DIC_RZQ_DIV7 /* dic */
},
{ /* phy electrical configuration */
MV_DDR_OHM_30, /* data_drv_p */
MV_DDR_OHM_30, /* data_drv_n */
MV_DDR_OHM_30, /* ctrl_drv_p */
MV_DDR_OHM_30, /* ctrl_drv_n */
{
MV_DDR_OHM_60, /* odt_p 1cs */
MV_DDR_OHM_120 /* odt_p 2cs */
},
{
MV_DDR_OHM_60, /* odt_n 1cs */
MV_DDR_OHM_120 /* odt_n 2cs */
},
},
{ /* mac electrical configuration */
MV_DDR_ODT_CFG_NORMAL, /* odtcfg_pattern */
MV_DDR_ODT_CFG_ALWAYS_ON, /* odtcfg_write */
MV_DDR_ODT_CFG_NORMAL, /* odtcfg_read */
},
}
};
struct mv_ddr_topology_map *mv_ddr_topology_map_get(void)
{
/* Return the board topology as defined in the board code */
return &board_topology_map;
}

@@ -0,0 +1,142 @@
/*
* Copyright (C) 2018 Marvell International Ltd.
*
* SPDX-License-Identifier: BSD-3-Clause
* https://spdx.org/licenses
*/
#include <a8k_common.h>
/*
* If the bootrom is currently at the BLE stage, there is no need to
* include the memory map structures at this point
*/
#include <mvebu_def.h>
#ifndef IMAGE_BLE
/*****************************************************************************
* AMB Configuration
*****************************************************************************
*/
struct addr_map_win *amb_memory_map;
int marvell_get_amb_memory_map(struct addr_map_win **win, uint32_t *size,
uintptr_t base)
{
*win = amb_memory_map;
if (*win == NULL)
*size = 0;
else
*size = ARRAY_SIZE(amb_memory_map);
return 0;
}
#endif
/*****************************************************************************
* IO WIN Configuration
*****************************************************************************
*/
struct addr_map_win io_win_memory_map[] = {
#ifndef IMAGE_BLE
/* MCI 0 indirect window */
{MVEBU_MCI_REG_BASE_REMAP(0), 0x100000, MCI_0_TID},
/* MCI 1 indirect window */
{MVEBU_MCI_REG_BASE_REMAP(1), 0x100000, MCI_1_TID},
#endif
};
uint32_t marvell_get_io_win_gcr_target(int ap_index)
{
return PIDI_TID;
}
int marvell_get_io_win_memory_map(int ap_index, struct addr_map_win **win,
uint32_t *size)
{
*win = io_win_memory_map;
if (*win == NULL)
*size = 0;
else
*size = ARRAY_SIZE(io_win_memory_map);
return 0;
}
#ifndef IMAGE_BLE
/*****************************************************************************
* IOB Configuration
*****************************************************************************
*/
struct addr_map_win iob_memory_map[] = {
/* PEX0_X4 window */
{0x00000000f6000000, 0x6000000, PEX0_TID},
{0x00000000c0000000, 0x30000000, PEX0_TID},
{0x0000000800000000, 0x200000000, PEX0_TID},
};
int marvell_get_iob_memory_map(struct addr_map_win **win, uint32_t *size,
uintptr_t base)
{
*win = iob_memory_map;
*size = ARRAY_SIZE(iob_memory_map);
return 0;
}
#endif
/*****************************************************************************
* CCU Configuration
*****************************************************************************
*/
struct addr_map_win ccu_memory_map[] = {
#ifdef IMAGE_BLE
{0x00000000f2000000, 0x4000000, IO_0_TID}, /* IO window */
#else
{0x00000000f2000000, 0xe000000, IO_0_TID},
{0x00000000c0000000, 0x30000000, IO_0_TID}, /* IO window */
{0x0000000800000000, 0x200000000, IO_0_TID}, /* IO window */
#endif
};
uint32_t marvell_get_ccu_gcr_target(int ap)
{
return DRAM_0_TID;
}
int marvell_get_ccu_memory_map(int ap_index, struct addr_map_win **win,
uint32_t *size)
{
*win = ccu_memory_map;
*size = ARRAY_SIZE(ccu_memory_map);
return 0;
}
#ifdef IMAGE_BLE
struct pci_hw_cfg *plat_get_pcie_hw_data(void)
{
return NULL;
}
/*****************************************************************************
* SKIP IMAGE Configuration
*****************************************************************************
*/
#if PLAT_RECOVERY_IMAGE_ENABLE
struct skip_image skip_im = {
.detection_method = GPIO,
.info.gpio.num = 33,
.info.gpio.button_state = HIGH,
.info.test.cp_ap = CP,
.info.test.cp_index = 0,
};
void *plat_marvell_get_skip_image_data(void)
{
/* Return the skip_image configurations */
return &skip_im;
}
#endif
#endif

@@ -0,0 +1,31 @@
/*
* Copyright (C) 2018 Marvell International Ltd.
*
* SPDX-License-Identifier: BSD-3-Clause
* https://spdx.org/licenses
*/
#ifndef __MVEBU_DEF_H__
#define __MVEBU_DEF_H__
#include <a8k_plat_def.h>
#define CP_COUNT 1 /* A70x0 has single CP0 */
/***********************************************************************
* Required platform porting definitions common to all
* Management Compute SubSystems (MSS)
***********************************************************************
*/
/*
* Load address of SCP_BL2
* SCP_BL2 is loaded to the same place as BL31.
* Once SCP_BL2 is transferred to the SCP,
* it is discarded and BL31 is loaded over the top.
*/
#ifdef SCP_IMAGE
#define SCP_BL2_BASE BL31_BASE
#endif
#endif /* __MVEBU_DEF_H__ */

@@ -0,0 +1,16 @@
#
# Copyright (C) 2018 Marvell International Ltd.
#
# SPDX-License-Identifier: BSD-3-Clause
# https://spdx.org/licenses
#
PCI_EP_SUPPORT := 0
DOIMAGE_SEC := tools/doimage/secure/sec_img_7K.cfg
MARVELL_MOCHI_DRV := drivers/marvell/mochi/apn806_setup.c
include plat/marvell/a8k/common/a8k_common.mk
include plat/marvell/common/marvell_common.mk

@@ -0,0 +1,141 @@
/*
* Copyright (C) 2018 Marvell International Ltd.
*
* SPDX-License-Identifier: BSD-3-Clause
* https://spdx.org/licenses
*/
#include <arch_helpers.h>
#include <a8k_i2c.h>
#include <debug.h>
#include <mmio.h>
#include <mv_ddr_if.h>
#include <mvebu_def.h>
#include <plat_marvell.h>
#define MVEBU_AP_MPP_CTRL0_7_REG MVEBU_AP_MPP_REGS(0)
#define MVEBU_AP_MPP_CTRL4_OFFS 16
#define MVEBU_AP_MPP_CTRL5_OFFS 20
#define MVEBU_AP_MPP_CTRL4_I2C0_SDA_ENA 0x3
#define MVEBU_AP_MPP_CTRL5_I2C0_SCK_ENA 0x3
#define MVEBU_CP_MPP_CTRL37_OFFS 20
#define MVEBU_CP_MPP_CTRL38_OFFS 24
#define MVEBU_CP_MPP_CTRL37_I2C0_SCK_ENA 0x2
#define MVEBU_CP_MPP_CTRL38_I2C0_SDA_ENA 0x2
#define MVEBU_MPP_CTRL_MASK 0xf
/*
* This struct provides the DRAM training code with
* the appropriate board DRAM configuration
*/
static struct mv_ddr_topology_map board_topology_map = {
/* MISL board with 1CS 8Gb x4 devices of Micron 2400T */
DEBUG_LEVEL_ERROR,
0x1, /* active interfaces */
/* cs_mask, mirror, dqs_swap, ck_swap X subphys */
{ { { {0x1, 0x0, 0, 0}, /* FIXME: change the cs mask for all 64 bit */
{0x1, 0x0, 0, 0},
{0x1, 0x0, 0, 0},
{0x1, 0x0, 0, 0},
{0x1, 0x0, 0, 0},
{0x1, 0x0, 0, 0},
{0x1, 0x0, 0, 0},
{0x1, 0x0, 0, 0},
{0x1, 0x0, 0, 0} },
/* TODO: double check if the speed bin is 2400T */
SPEED_BIN_DDR_2400T, /* speed_bin */
MV_DDR_DEV_WIDTH_8BIT, /* sdram device width */
MV_DDR_DIE_CAP_8GBIT, /* die capacity */
MV_DDR_FREQ_SAR, /* frequency */
0, 0, /* cas_l, cas_wl */
MV_DDR_TEMP_LOW} }, /* temperature */
MV_DDR_64BIT_ECC_PUP8_BUS_MASK, /* subphys mask */
MV_DDR_CFG_SPD, /* ddr configuration data source */
{ {0} }, /* raw spd data */
{0}, /* timing parameters */
{ /* electrical configuration */
{ /* memory electrical configuration */
MV_DDR_RTT_NOM_PARK_RZQ_DISABLE, /* rtt_nom */
{
MV_DDR_RTT_NOM_PARK_RZQ_DIV4, /* rtt_park 1cs */
MV_DDR_RTT_NOM_PARK_RZQ_DIV1 /* rtt_park 2cs */
},
{
MV_DDR_RTT_WR_DYN_ODT_OFF, /* rtt_wr 1cs */
MV_DDR_RTT_WR_RZQ_DIV2 /* rtt_wr 2cs */
},
MV_DDR_DIC_RZQ_DIV7 /* dic */
},
{ /* phy electrical configuration */
MV_DDR_OHM_30, /* data_drv_p */
MV_DDR_OHM_30, /* data_drv_n */
MV_DDR_OHM_30, /* ctrl_drv_p */
MV_DDR_OHM_30, /* ctrl_drv_n */
{
MV_DDR_OHM_60, /* odt_p 1cs */
MV_DDR_OHM_120 /* odt_p 2cs */
},
{
MV_DDR_OHM_60, /* odt_n 1cs */
MV_DDR_OHM_120 /* odt_n 2cs */
},
},
{ /* mac electrical configuration */
MV_DDR_ODT_CFG_NORMAL, /* odtcfg_pattern */
MV_DDR_ODT_CFG_ALWAYS_ON, /* odtcfg_write */
MV_DDR_ODT_CFG_NORMAL, /* odtcfg_read */
},
}
};
struct mv_ddr_topology_map *mv_ddr_topology_map_get(void)
{
/* Return the board topology as defined in the board code */
return &board_topology_map;
}
static void mpp_config(void)
{
uintptr_t reg;
uint32_t val;
reg = MVEBU_CP_MPP_REGS(0, 4);
/* configure CP0 MPP 37 and 38 to i2c */
val = mmio_read_32(reg);
val &= ~((MVEBU_MPP_CTRL_MASK << MVEBU_CP_MPP_CTRL37_OFFS) |
(MVEBU_MPP_CTRL_MASK << MVEBU_CP_MPP_CTRL38_OFFS));
val |= (MVEBU_CP_MPP_CTRL37_I2C0_SCK_ENA <<
MVEBU_CP_MPP_CTRL37_OFFS) |
(MVEBU_CP_MPP_CTRL38_I2C0_SDA_ENA <<
MVEBU_CP_MPP_CTRL38_OFFS);
mmio_write_32(reg, val);
}
/*
* This function may modify the default DRAM parameters
* based on information received from SPD or bootloader
* configuration located on non volatile storage
*/
void plat_marvell_dram_update_topology(void)
{
struct mv_ddr_topology_map *tm = mv_ddr_topology_map_get();
INFO("Gathering DRAM information\n");
if (tm->cfg_src == MV_DDR_CFG_SPD) {
/* configure MPPs to enable i2c */
mpp_config();
/* initialize i2c */
i2c_init((void *)MVEBU_CP0_I2C_BASE);
/* select SPD memory page 0 to access DRAM configuration */
i2c_write(I2C_SPD_P0_ADDR, 0x0, 1, tm->spd_data.all_bytes, 1);
/* read data from spd */
i2c_read(I2C_SPD_ADDR, 0x0, 1, tm->spd_data.all_bytes,
sizeof(tm->spd_data.all_bytes));
}
}

@ -0,0 +1,191 @@
/*
* Copyright (C) 2018 Marvell International Ltd.
*
* SPDX-License-Identifier: BSD-3-Clause
* https://spdx.org/licenses
*/
#include <a8k_common.h>
/*
* If bootrom is currently at BLE there's no need to include the memory
* maps structure at this point
*/
#include <mvebu_def.h>
#ifndef IMAGE_BLE
/*****************************************************************************
* AMB Configuration
*****************************************************************************
*/
struct addr_map_win amb_memory_map[] = {
/* CP1 SPI1 CS0 Direct Mode access */
{0xf900, 0x1000000, AMB_SPI1_CS0_ID},
};
int marvell_get_amb_memory_map(struct addr_map_win **win, uint32_t *size,
uintptr_t base)
{
*win = amb_memory_map;
if (*win == NULL)
*size = 0;
else
*size = ARRAY_SIZE(amb_memory_map);
return 0;
}
#endif
/*****************************************************************************
* IO WIN Configuration
*****************************************************************************
*/
struct addr_map_win io_win_memory_map[] = {
/* CP1 (MCI0) internal regs */
{0x00000000f4000000, 0x2000000, MCI_0_TID},
#ifndef IMAGE_BLE
/* PCIe0 and SPI1_CS0 (RUNIT) on CP1*/
{0x00000000f9000000, 0x2000000, MCI_0_TID},
/* PCIe1 on CP1*/
{0x00000000fb000000, 0x1000000, MCI_0_TID},
/* PCIe2 on CP1*/
{0x00000000fc000000, 0x1000000, MCI_0_TID},
/* MCI 0 indirect window */
{MVEBU_MCI_REG_BASE_REMAP(0), 0x100000, MCI_0_TID},
/* MCI 1 indirect window */
{MVEBU_MCI_REG_BASE_REMAP(1), 0x100000, MCI_1_TID},
#endif
};
uint32_t marvell_get_io_win_gcr_target(int ap_index)
{
return PIDI_TID;
}
int marvell_get_io_win_memory_map(int ap_index, struct addr_map_win **win,
uint32_t *size)
{
*win = io_win_memory_map;
if (*win == NULL)
*size = 0;
else
*size = ARRAY_SIZE(io_win_memory_map);
return 0;
}
#ifndef IMAGE_BLE
/*****************************************************************************
* IOB Configuration
*****************************************************************************
*/
struct addr_map_win iob_memory_map_cp0[] = {
/* CP0 */
/* PEX1_X1 window */
{0x00000000f7000000, 0x1000000, PEX1_TID},
/* PEX2_X1 window */
{0x00000000f8000000, 0x1000000, PEX2_TID},
/* PEX0_X4 window */
{0x00000000f6000000, 0x1000000, PEX0_TID}
};
struct addr_map_win iob_memory_map_cp1[] = {
/* CP1 */
/* SPI1_CS0 (RUNIT) window */
{0x00000000f9000000, 0x1000000, RUNIT_TID},
/* PEX1_X1 window */
{0x00000000fb000000, 0x1000000, PEX1_TID},
/* PEX2_X1 window */
{0x00000000fc000000, 0x1000000, PEX2_TID},
/* PEX0_X4 window */
{0x00000000fa000000, 0x1000000, PEX0_TID}
};
int marvell_get_iob_memory_map(struct addr_map_win **win, uint32_t *size,
uintptr_t base)
{
switch (base) {
case MVEBU_CP_REGS_BASE(0):
*win = iob_memory_map_cp0;
*size = ARRAY_SIZE(iob_memory_map_cp0);
return 0;
case MVEBU_CP_REGS_BASE(1):
*win = iob_memory_map_cp1;
*size = ARRAY_SIZE(iob_memory_map_cp1);
return 0;
default:
*size = 0;
*win = 0;
return 1;
}
}
#endif
/*****************************************************************************
* CCU Configuration
*****************************************************************************
*/
struct addr_map_win ccu_memory_map[] = {
#ifdef IMAGE_BLE
{0x00000000f2000000, 0x4000000, IO_0_TID}, /* IO window */
#else
{0x00000000f2000000, 0xe000000, IO_0_TID}, /* IO window */
#endif
};
uint32_t marvell_get_ccu_gcr_target(int ap)
{
return DRAM_0_TID;
}
int marvell_get_ccu_memory_map(int ap, struct addr_map_win **win,
uint32_t *size)
{
*win = ccu_memory_map;
*size = ARRAY_SIZE(ccu_memory_map);
return 0;
}
#ifndef IMAGE_BLE
/*****************************************************************************
* SoC PM configuration
*****************************************************************************
*/
/* CP GPIO should be used and the GPIOs should be within same GPIO register */
struct power_off_method pm_cfg = {
.type = PMIC_GPIO,
.cfg.gpio.pin_count = 1,
.cfg.gpio.info = {{0, 35} },
.cfg.gpio.step_count = 7,
.cfg.gpio.seq = {1, 0, 1, 0, 1, 0, 1},
.cfg.gpio.delay_ms = 10,
};
void *plat_marvell_get_pm_cfg(void)
{
/* Return the PM configurations */
return &pm_cfg;
}
/* In reference to #ifndef IMAGE_BLE, this part is used for BLE only. */
#else
/*****************************************************************************
* SKIP IMAGE Configuration
*****************************************************************************
*/
#if PLAT_RECOVERY_IMAGE_ENABLE
struct skip_image skip_im = {
.detection_method = GPIO,
.info.gpio.num = 33,
.info.gpio.button_state = HIGH,
.info.test.cp_ap = CP,
.info.test.cp_index = 0,
};
void *plat_marvell_get_skip_image_data(void)
{
/* Return the skip_image configurations */
return &skip_im;
}
#endif
#endif
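The `pm_cfg` structure above describes a PMIC power-off handshake: `step_count` GPIO levels from `seq[]`, each held for `delay_ms`. A minimal sketch of how such a sequence would be replayed (the `gpio_set_fn`/`delay_fn` callbacks stand in for the platform GPIO and delay APIs, which are assumptions here):

```c
#include <assert.h>

#define MAX_STEPS 8

struct gpio_seq_cfg {
	int step_count;
	int seq[MAX_STEPS];
	int delay_ms;
};

/* Callback types standing in for the real platform GPIO/delay primitives */
typedef void (*gpio_set_fn)(int value);
typedef void (*delay_fn)(int ms);

/* Drive each level in seq[] on the PMIC GPIO, holding it for delay_ms */
static void gpio_run_power_off_seq(const struct gpio_seq_cfg *cfg,
				   gpio_set_fn set, delay_fn wait)
{
	for (int i = 0; i < cfg->step_count; i++) {
		set(cfg->seq[i]);	/* drive the GPIO level */
		wait(cfg->delay_ms);	/* hold for the configured delay */
	}
}

/* Test doubles: record the driven levels instead of touching hardware */
static int log_vals[MAX_STEPS];
static int log_n;
static void test_set(int v) { log_vals[log_n++] = v; }
static void test_wait(int ms) { (void)ms; }
```

With the configuration above (seq {1,0,1,0,1,0,1}, 10 ms per step) the pin toggles seven times, ending high.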

@@ -0,0 +1,17 @@
/*
* Copyright (C) 2018 Marvell International Ltd.
*
* SPDX-License-Identifier: BSD-3-Clause
* https://spdx.org/licenses
*/
#ifndef __MVEBU_DEF_H__
#define __MVEBU_DEF_H__
#include <a8k_plat_def.h>
#define CP_COUNT 2 /* A80x0 has both CP0 & CP1 */
#define I2C_SPD_ADDR 0x53 /* Access SPD data */
#define I2C_SPD_P0_ADDR 0x36 /* Select SPD data page 0 */
#endif /* __MVEBU_DEF_H__ */

@@ -0,0 +1,16 @@
#
# Copyright (C) 2018 Marvell International Ltd.
#
# SPDX-License-Identifier: BSD-3-Clause
# https://spdx.org/licenses
#
PCI_EP_SUPPORT := 0
DOIMAGE_SEC := tools/doimage/secure/sec_img_8K.cfg
MARVELL_MOCHI_DRV := drivers/marvell/mochi/apn806_setup.c
include plat/marvell/a8k/common/a8k_common.mk
include plat/marvell/common/marvell_common.mk

@@ -0,0 +1,129 @@
/*
* Copyright (C) 2018 Marvell International Ltd.
*
* SPDX-License-Identifier: BSD-3-Clause
* https://spdx.org/licenses
*/
#include <arch_helpers.h>
#include <a8k_i2c.h>
#include <debug.h>
#include <mmio.h>
#include <mv_ddr_if.h>
#include <mvebu_def.h>
#include <plat_marvell.h>
#define MVEBU_CP_MPP_CTRL37_OFFS 20
#define MVEBU_CP_MPP_CTRL38_OFFS 24
#define MVEBU_CP_MPP_CTRL37_I2C0_SCK_ENA 0x2
#define MVEBU_CP_MPP_CTRL38_I2C0_SDA_ENA 0x2
#define MVEBU_MPP_CTRL_MASK 0xf
/*
* This struct provides the DRAM training code with
* the appropriate board DRAM configuration
*/
static struct mv_ddr_topology_map board_topology_map = {
/* Board with 1CS 8Gb x4 devices of Micron 2400T */
DEBUG_LEVEL_ERROR,
0x1, /* active interfaces */
/* cs_mask, mirror, dqs_swap, ck_swap X subphys */
{ { { {0x1, 0x0, 0, 0}, /* FIXME: change the cs mask for all 64 bit */
{0x1, 0x0, 0, 0},
{0x1, 0x0, 0, 0},
{0x1, 0x0, 0, 0},
{0x1, 0x0, 0, 0},
{0x1, 0x0, 0, 0},
{0x1, 0x0, 0, 0},
{0x1, 0x0, 0, 0},
{0x1, 0x0, 0, 0} },
/* TODO: double check if the speed bin is 2400T */
SPEED_BIN_DDR_2400T, /* speed_bin */
MV_DDR_DEV_WIDTH_8BIT, /* sdram device width */
MV_DDR_DIE_CAP_8GBIT, /* die capacity */
MV_DDR_FREQ_SAR, /* frequency */
0, 0, /* cas_l, cas_wl */
MV_DDR_TEMP_LOW} }, /* temperature */
MV_DDR_64BIT_BUS_MASK, /* subphys mask */
MV_DDR_CFG_SPD, /* ddr configuration data source */
{ {0} }, /* raw spd data */
{0}, /* timing parameters */
{ /* electrical configuration */
{ /* memory electrical configuration */
MV_DDR_RTT_NOM_PARK_RZQ_DISABLE, /* rtt_nom */
{
MV_DDR_RTT_NOM_PARK_RZQ_DIV4, /* rtt_park 1cs */
MV_DDR_RTT_NOM_PARK_RZQ_DIV1 /* rtt_park 2cs */
},
{
MV_DDR_RTT_WR_DYN_ODT_OFF, /* rtt_wr 1cs */
MV_DDR_RTT_WR_RZQ_DIV2 /* rtt_wr 2cs */
},
MV_DDR_DIC_RZQ_DIV7 /* dic */
},
{ /* phy electrical configuration */
MV_DDR_OHM_30, /* data_drv_p */
MV_DDR_OHM_30, /* data_drv_n */
MV_DDR_OHM_30, /* ctrl_drv_p */
MV_DDR_OHM_30, /* ctrl_drv_n */
{
MV_DDR_OHM_60, /* odt_p 1cs */
MV_DDR_OHM_120 /* odt_p 2cs */
},
{
MV_DDR_OHM_60, /* odt_n 1cs */
MV_DDR_OHM_120 /* odt_n 2cs */
},
},
{ /* mac electrical configuration */
MV_DDR_ODT_CFG_NORMAL, /* odtcfg_pattern */
MV_DDR_ODT_CFG_ALWAYS_ON, /* odtcfg_write */
MV_DDR_ODT_CFG_NORMAL, /* odtcfg_read */
},
}
};
struct mv_ddr_topology_map *mv_ddr_topology_map_get(void)
{
/* Return the board topology as defined in the board code */
return &board_topology_map;
}
static void mpp_config(void)
{
uint32_t val;
uintptr_t reg = MVEBU_CP_MPP_REGS(0, 4);
/* configure CP0 MPP 37 and 38 to i2c */
val = mmio_read_32(reg);
val &= ~((MVEBU_MPP_CTRL_MASK << MVEBU_CP_MPP_CTRL37_OFFS) |
(MVEBU_MPP_CTRL_MASK << MVEBU_CP_MPP_CTRL38_OFFS));
val |= (MVEBU_CP_MPP_CTRL37_I2C0_SCK_ENA << MVEBU_CP_MPP_CTRL37_OFFS) |
(MVEBU_CP_MPP_CTRL38_I2C0_SDA_ENA << MVEBU_CP_MPP_CTRL38_OFFS);
mmio_write_32(reg, val);
}
/*
* This function may modify the default DRAM parameters
* based on information received from SPD or bootloader
* configuration located on non volatile storage
*/
void plat_marvell_dram_update_topology(void)
{
struct mv_ddr_topology_map *tm = mv_ddr_topology_map_get();
INFO("Gathering DRAM information\n");
if (tm->cfg_src == MV_DDR_CFG_SPD) {
/* configure MPPs to enable i2c */
mpp_config();
/* initialize the i2c */
i2c_init((void *)MVEBU_CP0_I2C_BASE);
/* select SPD memory page 0 to access DRAM configuration */
i2c_write(I2C_SPD_P0_ADDR, 0x0, 1, tm->spd_data.all_bytes, 1);
/* read data from spd */
i2c_read(I2C_SPD_ADDR, 0x0, 1, tm->spd_data.all_bytes,
sizeof(tm->spd_data.all_bytes));
}
}

@@ -0,0 +1,196 @@
/*
* Copyright (C) 2018 Marvell International Ltd.
*
* SPDX-License-Identifier: BSD-3-Clause
* https://spdx.org/licenses
*/
#include <a8k_common.h>
#include <delay_timer.h>
#include <mmio.h>
/*
* If bootrom is currently at BLE there's no need to include the memory
* maps structure at this point
*/
#include <mvebu_def.h>
#ifndef IMAGE_BLE
/*****************************************************************************
* GPIO Configuration
*****************************************************************************
*/
#define MPP_CONTROL_REGISTER 0xf2440018
#define MPP_CONTROL_MPP_SEL_52_MASK 0xf0000
#define GPIO_DATA_OUT1_REGISTER 0xf2440140
#define GPIO_DATA_OUT_EN_CTRL1_REGISTER 0xf2440144
#define GPIO52_MASK 0x100000
/* Reset PCIe via GPIO number 52 */
int marvell_gpio_config(void)
{
uint32_t reg;
reg = mmio_read_32(MPP_CONTROL_REGISTER);
reg |= MPP_CONTROL_MPP_SEL_52_MASK;
mmio_write_32(MPP_CONTROL_REGISTER, reg);
reg = mmio_read_32(GPIO_DATA_OUT1_REGISTER);
reg |= GPIO52_MASK;
mmio_write_32(GPIO_DATA_OUT1_REGISTER, reg);
reg = mmio_read_32(GPIO_DATA_OUT_EN_CTRL1_REGISTER);
reg &= ~GPIO52_MASK;
mmio_write_32(GPIO_DATA_OUT_EN_CTRL1_REGISTER, reg);
udelay(100);
return 0;
}
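`marvell_gpio_config()` is three read-modify-write operations: select the GPIO function on MPP 52, set the output level high, then clear the output-enable bit (active low on this SoC) to start driving the pin. A minimal sketch of that pattern, with the register modeled as a plain variable instead of `mmio_read_32()`/`mmio_write_32()`:

```c
#include <assert.h>
#include <stdint.h>

#define GPIO52_MASK 0x100000

/* Read-modify-write set: on hardware this would wrap mmio_read/write_32() */
static void reg_set(volatile uint32_t *reg, uint32_t mask)
{
	*reg |= mask;
}

/* Read-modify-write clear of the given mask bits */
static void reg_clear(volatile uint32_t *reg, uint32_t mask)
{
	*reg &= ~mask;
}
```

Applied to GPIO 52: set `GPIO52_MASK` in the data-out register, clear it in the output-enable control register, and the pin drives high.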
/*****************************************************************************
* AMB Configuration
*****************************************************************************
*/
struct addr_map_win amb_memory_map[] = {
/* CP1 SPI1 CS0 Direct Mode access */
{0xf900, 0x1000000, AMB_SPI1_CS0_ID},
};
int marvell_get_amb_memory_map(struct addr_map_win **win, uint32_t *size,
uintptr_t base)
{
*win = amb_memory_map;
if (*win == NULL)
*size = 0;
else
*size = ARRAY_SIZE(amb_memory_map);
return 0;
}
#endif
/*****************************************************************************
* IO WIN Configuration
*****************************************************************************
*/
struct addr_map_win io_win_memory_map[] = {
/* CP1 (MCI0) internal regs */
{0x00000000f4000000, 0x2000000, MCI_0_TID},
#ifndef IMAGE_BLE
/* PCIe0 and SPI1_CS0 (RUNIT) on CP1*/
{0x00000000f9000000, 0x2000000, MCI_0_TID},
/* PCIe1 on CP1*/
{0x00000000fb000000, 0x1000000, MCI_0_TID},
/* PCIe2 on CP1*/
{0x00000000fc000000, 0x1000000, MCI_0_TID},
/* MCI 0 indirect window */
{MVEBU_MCI_REG_BASE_REMAP(0), 0x100000, MCI_0_TID},
/* MCI 1 indirect window */
{MVEBU_MCI_REG_BASE_REMAP(1), 0x100000, MCI_1_TID},
#endif
};
uint32_t marvell_get_io_win_gcr_target(int ap_index)
{
return PIDI_TID;
}
int marvell_get_io_win_memory_map(int ap_index, struct addr_map_win **win,
uint32_t *size)
{
*win = io_win_memory_map;
if (*win == NULL)
*size = 0;
else
*size = ARRAY_SIZE(io_win_memory_map);
return 0;
}
#ifndef IMAGE_BLE
/*****************************************************************************
* IOB Configuration
*****************************************************************************
*/
struct addr_map_win iob_memory_map_cp0[] = {
/* CP0 */
/* PEX1_X1 window */
{0x00000000f7000000, 0x1000000, PEX1_TID},
/* PEX2_X1 window */
{0x00000000f8000000, 0x1000000, PEX2_TID},
/* PEX0_X4 window */
{0x00000000f6000000, 0x1000000, PEX0_TID},
{0x00000000c0000000, 0x30000000, PEX0_TID},
{0x0000000800000000, 0x100000000, PEX0_TID},
};
struct addr_map_win iob_memory_map_cp1[] = {
/* CP1 */
/* SPI1_CS0 (RUNIT) window */
{0x00000000f9000000, 0x1000000, RUNIT_TID},
/* PEX1_X1 window */
{0x00000000fb000000, 0x1000000, PEX1_TID},
/* PEX2_X1 window */
{0x00000000fc000000, 0x1000000, PEX2_TID},
/* PEX0_X4 window */
{0x00000000fa000000, 0x1000000, PEX0_TID}
};
int marvell_get_iob_memory_map(struct addr_map_win **win, uint32_t *size,
uintptr_t base)
{
switch (base) {
case MVEBU_CP_REGS_BASE(0):
*win = iob_memory_map_cp0;
*size = ARRAY_SIZE(iob_memory_map_cp0);
return 0;
case MVEBU_CP_REGS_BASE(1):
*win = iob_memory_map_cp1;
*size = ARRAY_SIZE(iob_memory_map_cp1);
return 0;
default:
*size = 0;
*win = 0;
return 1;
}
}
#endif
/*****************************************************************************
* CCU Configuration
*****************************************************************************
*/
struct addr_map_win ccu_memory_map[] = {
#ifdef IMAGE_BLE
{0x00000000f2000000, 0x4000000, IO_0_TID}, /* IO window */
#else
{0x00000000f2000000, 0xe000000, IO_0_TID}, /* IO window */
{0x00000000c0000000, 0x30000000, IO_0_TID}, /* IO window */
{0x0000000800000000, 0x100000000, IO_0_TID}, /* IO window */
#endif
};
uint32_t marvell_get_ccu_gcr_target(int ap)
{
return DRAM_0_TID;
}
int marvell_get_ccu_memory_map(int ap_index, struct addr_map_win **win,
uint32_t *size)
{
*win = ccu_memory_map;
*size = ARRAY_SIZE(ccu_memory_map);
return 0;
}
/* In reference to #ifndef IMAGE_BLE, this part is used for BLE only. */
/*****************************************************************************
* SKIP IMAGE Configuration
*****************************************************************************
*/
void *plat_marvell_get_skip_image_data(void)
{
/* No recovery button on A8k-MCBIN board */
return NULL;
}

@@ -0,0 +1,17 @@
/*
* Copyright (C) 2018 Marvell International Ltd.
*
* SPDX-License-Identifier: BSD-3-Clause
* https://spdx.org/licenses
*/
#ifndef __MVEBU_DEF_H__
#define __MVEBU_DEF_H__
#include <a8k_plat_def.h>
#define CP_COUNT 2 /* A80x0 has both CP0 & CP1 */
#define I2C_SPD_ADDR 0x53 /* Access SPD data */
#define I2C_SPD_P0_ADDR 0x36 /* Select SPD data page 0 */
#endif /* __MVEBU_DEF_H__ */

@@ -0,0 +1,16 @@
#
# Copyright (C) 2018 Marvell International Ltd.
#
# SPDX-License-Identifier: BSD-3-Clause
# https://spdx.org/licenses
#
PCI_EP_SUPPORT := 0
DOIMAGE_SEC := tools/doimage/secure/sec_img_8K.cfg
MARVELL_MOCHI_DRV := drivers/marvell/mochi/apn806_setup.c
include plat/marvell/a8k/common/a8k_common.mk
include plat/marvell/common/marvell_common.mk

@@ -0,0 +1,122 @@
#
# Copyright (C) 2016 - 2018 Marvell International Ltd.
#
# SPDX-License-Identifier: BSD-3-Clause
# https://spdx.org/licenses
include tools/doimage/doimage.mk
PLAT_FAMILY := a8k
PLAT_FAMILY_BASE := plat/marvell/$(PLAT_FAMILY)
PLAT_INCLUDE_BASE := include/plat/marvell/$(PLAT_FAMILY)
PLAT_COMMON_BASE := $(PLAT_FAMILY_BASE)/common
MARVELL_DRV_BASE := drivers/marvell
MARVELL_COMMON_BASE := plat/marvell/common
ERRATA_A72_859971 := 1
# Enable MSS support for a8k family
MSS_SUPPORT := 1
# Disable EL3 cache for power management
BL31_CACHE_DISABLE := 1
$(eval $(call add_define,BL31_CACHE_DISABLE))
$(eval $(call add_define,PCI_EP_SUPPORT))
$(eval $(call assert_boolean,PCI_EP_SUPPORT))
DOIMAGEPATH ?= tools/doimage
DOIMAGETOOL ?= ${DOIMAGEPATH}/doimage
ROM_BIN_EXT ?= $(BUILD_PLAT)/ble.bin
DOIMAGE_FLAGS += -b $(ROM_BIN_EXT) $(NAND_DOIMAGE_FLAGS) $(DOIMAGE_SEC_FLAGS)
# This define specifies DDR type for BLE
$(eval $(call add_define,CONFIG_DDR4))
MARVELL_GIC_SOURCES := drivers/arm/gic/common/gic_common.c \
drivers/arm/gic/v2/gicv2_main.c \
drivers/arm/gic/v2/gicv2_helpers.c \
plat/common/plat_gicv2.c
ATF_INCLUDES := -Iinclude/common/tbbr
PLAT_INCLUDES := -I$(PLAT_FAMILY_BASE)/$(PLAT) \
-I$(PLAT_COMMON_BASE)/include \
-I$(PLAT_INCLUDE_BASE)/common \
-Iinclude/drivers/marvell \
-Iinclude/drivers/marvell/mochi \
$(ATF_INCLUDES)
PLAT_BL_COMMON_SOURCES := $(PLAT_COMMON_BASE)/aarch64/a8k_common.c \
drivers/console/aarch64/console.S \
drivers/ti/uart/aarch64/16550_console.S
BLE_PORTING_SOURCES := $(PLAT_FAMILY_BASE)/$(PLAT)/board/dram_port.c \
$(PLAT_FAMILY_BASE)/$(PLAT)/board/marvell_plat_config.c
MARVELL_MOCHI_DRV += $(MARVELL_DRV_BASE)/mochi/cp110_setup.c
BLE_SOURCES := $(PLAT_COMMON_BASE)/plat_ble_setup.c \
$(MARVELL_MOCHI_DRV) \
$(MARVELL_DRV_BASE)/i2c/a8k_i2c.c \
$(PLAT_COMMON_BASE)/plat_pm.c \
$(MARVELL_DRV_BASE)/thermal.c \
$(PLAT_COMMON_BASE)/plat_thermal.c \
$(BLE_PORTING_SOURCES) \
$(MARVELL_DRV_BASE)/ccu.c \
$(MARVELL_DRV_BASE)/io_win.c
BL1_SOURCES += $(PLAT_COMMON_BASE)/aarch64/plat_helpers.S \
lib/cpus/aarch64/cortex_a72.S
MARVELL_DRV := $(MARVELL_DRV_BASE)/io_win.c \
$(MARVELL_DRV_BASE)/iob.c \
$(MARVELL_DRV_BASE)/mci.c \
$(MARVELL_DRV_BASE)/amb_adec.c \
$(MARVELL_DRV_BASE)/ccu.c \
$(MARVELL_DRV_BASE)/cache_llc.c \
$(MARVELL_DRV_BASE)/comphy/phy-comphy-cp110.c
BL31_PORTING_SOURCES := $(PLAT_FAMILY_BASE)/$(PLAT)/board/marvell_plat_config.c
BL31_SOURCES += lib/cpus/aarch64/cortex_a72.S \
$(PLAT_COMMON_BASE)/aarch64/plat_helpers.S \
$(PLAT_COMMON_BASE)/aarch64/plat_arch_config.c \
$(PLAT_COMMON_BASE)/plat_pm.c \
$(PLAT_COMMON_BASE)/plat_bl31_setup.c \
$(MARVELL_COMMON_BASE)/marvell_gicv2.c \
$(MARVELL_COMMON_BASE)/mrvl_sip_svc.c \
$(MARVELL_COMMON_BASE)/marvell_ddr_info.c \
$(BL31_PORTING_SOURCES) \
$(MARVELL_DRV) \
$(MARVELL_MOCHI_DRV) \
$(MARVELL_GIC_SOURCES)
# Add trace functionality for PM
BL31_SOURCES += $(PLAT_COMMON_BASE)/plat_pm_trace.c
# Disable the PSCI platform compatibility layer (allows porting
# from Old Platform APIs to the new APIs).
# It is not needed since the Marvell platform already uses the new platform APIs.
ENABLE_PLAT_COMPAT := 0
# Force builds with BL2 image on a80x0 platforms
ifndef SCP_BL2
$(error "Error: SCP_BL2 image is mandatory for a8k family")
endif
# MSS (SCP) build
include $(PLAT_COMMON_BASE)/mss/mss_a8k.mk
# BLE (ROM context execution code, AKA binary extension)
BLE_PATH ?= ble
include ${BLE_PATH}/ble.mk
$(eval $(call MAKE_BL,e))
mrvl_flash: ${BUILD_PLAT}/${FIP_NAME} ${DOIMAGETOOL} ${BUILD_PLAT}/ble.bin
$(shell truncate -s %128K ${BUILD_PLAT}/bl1.bin)
$(shell cat ${BUILD_PLAT}/bl1.bin ${BUILD_PLAT}/${FIP_NAME} > ${BUILD_PLAT}/${BOOT_IMAGE})
${DOIMAGETOOL} ${DOIMAGE_FLAGS} ${BUILD_PLAT}/${BOOT_IMAGE} ${BUILD_PLAT}/${FLASH_IMAGE}
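The `mrvl_flash` rule above pads `bl1.bin` up to the next 128 KiB multiple (`truncate -s %128K`) and concatenates the FIP after it, so the FIP always starts on a 128 KiB boundary. A small C sketch of that layout calculation (the function names are illustrative, not part of the build system):

```c
#include <assert.h>
#include <stddef.h>

#define BL1_ALIGN (128u * 1024u)	/* truncate -s %128K */

/* Round sz up to the next multiple of align (align must be a power of two) */
static size_t round_up(size_t sz, size_t align)
{
	return (sz + align - 1) & ~(align - 1);
}

/* Offset of the FIP inside the boot image: BL1 padded to a 128KiB multiple */
static size_t boot_image_fip_offset(size_t bl1_size)
{
	return round_up(bl1_size, BL1_ALIGN);
}
```

For example, a 0x9000-byte BL1 is padded to 0x20000, which is where the FIP_TOC then lands.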

@@ -0,0 +1,64 @@
/*
* Copyright (C) 2018 Marvell International Ltd.
*
* SPDX-License-Identifier: BSD-3-Clause
* https://spdx.org/licenses
*/
#include <plat_marvell.h>
/* MMU entry for internal (register) space access */
#define MAP_DEVICE0 MAP_REGION_FLAT(DEVICE0_BASE, \
DEVICE0_SIZE, \
MT_DEVICE | MT_RW | MT_SECURE)
/*
* Table of regions for various BL stages to map using the MMU.
*/
#if IMAGE_BL1
const mmap_region_t plat_marvell_mmap[] = {
MARVELL_MAP_SHARED_RAM,
MAP_DEVICE0,
{0}
};
#endif
#if IMAGE_BL2
const mmap_region_t plat_marvell_mmap[] = {
MARVELL_MAP_SHARED_RAM,
MAP_DEVICE0,
MARVELL_MAP_DRAM,
{0}
};
#endif
#if IMAGE_BL2U
const mmap_region_t plat_marvell_mmap[] = {
MAP_DEVICE0,
{0}
};
#endif
#if IMAGE_BLE
const mmap_region_t plat_marvell_mmap[] = {
MAP_DEVICE0,
{0}
};
#endif
#if IMAGE_BL31
const mmap_region_t plat_marvell_mmap[] = {
MARVELL_MAP_SHARED_RAM,
MAP_DEVICE0,
MARVELL_MAP_DRAM,
{0}
};
#endif
#if IMAGE_BL32
const mmap_region_t plat_marvell_mmap[] = {
MAP_DEVICE0,
{0}
};
#endif
MARVELL_CASSERT_MMAP;

@@ -0,0 +1,46 @@
/*
* Copyright (C) 2018 Marvell International Ltd.
*
* SPDX-License-Identifier: BSD-3-Clause
* https://spdx.org/licenses
*/
#include <platform.h>
#include <arch_helpers.h>
#include <mmio.h>
#include <debug.h>
#include <cache_llc.h>
#define CCU_HTC_ASET (MVEBU_CCU_BASE(MVEBU_AP0) + 0x264)
#define MVEBU_IO_AFFINITY (0xF00)
static void plat_enable_affinity(void)
{
int cluster_id;
int affinity;
/* set CPU Affinity */
cluster_id = plat_my_core_pos() / PLAT_MARVELL_CLUSTER_CORE_COUNT;
affinity = (MVEBU_IO_AFFINITY | (1 << cluster_id));
mmio_write_32(CCU_HTC_ASET, affinity);
/* set barrier */
isb();
}
void marvell_psci_arch_init(int die_index)
{
#if LLC_ENABLE
/* check if LLC is in exclusive mode
* as L2 is configured to UniqueClean eviction
* (in a8k reset handler)
*/
if (llc_is_exclusive(0) == 0)
ERROR("LLC should be configured to exclusive mode\n");
#endif
/* Enable Affinity */
plat_enable_affinity();
}
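`plat_enable_affinity()` derives the cluster from the core position and ORs that cluster's bit into the fixed IO affinity value before writing `CCU_HTC_ASET`. The computation can be modeled directly:

```c
#include <assert.h>
#include <stdint.h>

#define MVEBU_IO_AFFINITY		0xF00
#define PLAT_MARVELL_CLUSTER_CORE_COUNT	2

/* Value written to CCU_HTC_ASET: IO affinity bits plus this core's cluster bit */
static uint32_t ccu_affinity_value(unsigned int core_pos)
{
	unsigned int cluster_id = core_pos / PLAT_MARVELL_CLUSTER_CORE_COUNT;

	return MVEBU_IO_AFFINITY | (1u << cluster_id);
}
```

On an A80x0 (2 clusters x 2 cores), cores 0-1 produce 0xF01 and cores 2-3 produce 0xF02.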

@@ -0,0 +1,112 @@
/*
* Copyright (C) 2018 Marvell International Ltd.
*
* SPDX-License-Identifier: BSD-3-Clause
* https://spdx.org/licenses
*/
#include <asm_macros.S>
#include <platform_def.h>
#include <marvell_pm.h>
.globl plat_secondary_cold_boot_setup
.globl plat_get_my_entrypoint
.globl plat_is_my_cpu_primary
.globl plat_reset_handler
/* -----------------------------------------------------
* void plat_secondary_cold_boot_setup (void);
*
* This function performs any platform specific actions
* needed for a secondary cpu after a cold reset. Right
* now this is a stub function.
* -----------------------------------------------------
*/
func plat_secondary_cold_boot_setup
mov x0, #0
ret
endfunc plat_secondary_cold_boot_setup
/* ---------------------------------------------------------------------
* unsigned long plat_get_my_entrypoint (void);
*
* Main job of this routine is to distinguish
* between a cold and warm boot
* For a cold boot, return 0.
* For a warm boot, read the mailbox and return the address it contains.
*
* ---------------------------------------------------------------------
*/
func plat_get_my_entrypoint
/* Read first word and compare it with magic num */
mov_imm x0, PLAT_MARVELL_MAILBOX_BASE
ldr x1, [x0]
mov_imm x2, MVEBU_MAILBOX_MAGIC_NUM
cmp x1, x2
beq warm_boot /* Magic number matched - warm boot */
mov x0, #0 /* Otherwise cold boot - return 0 */
ret
warm_boot:
mov_imm x1, MBOX_IDX_SEC_ADDR /* Get the jump address */
subs x1, x1, #1
mov x2, #(MBOX_IDX_SEC_ADDR * 8)
lsl x3, x2, x1
add x0, x0, x3
ldr x0, [x0]
ret
endfunc plat_get_my_entrypoint
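The routine above distinguishes warm from cold boot by comparing the first mailbox word against a magic number and, on a match, returning the entrypoint stored at the secondary-address index. A C model of that logic (the magic value and the word index are parameters/placeholders here, since the real `MVEBU_MAILBOX_MAGIC_NUM` and `MBOX_IDX_SEC_ADDR` values come from `marvell_pm.h`):

```c
#include <assert.h>
#include <stdint.h>

#define MBOX_IDX_MAGIC		0	/* word 0 holds the magic number */
#define MBOX_IDX_SEC_ADDR	1	/* illustrative index of the entrypoint word */

/* C model of plat_get_my_entrypoint:
 * return 0 on cold boot, the stored jump address on warm boot */
static uint64_t get_entrypoint(const uint64_t *mailbox, uint64_t magic)
{
	if (mailbox[MBOX_IDX_MAGIC] != magic)
		return 0;	/* cold boot */
	return mailbox[MBOX_IDX_SEC_ADDR];
}
```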
/* -----------------------------------------------------
* unsigned int plat_is_my_cpu_primary (void);
*
* Find out whether the current cpu is the primary
* cpu.
* -----------------------------------------------------
*/
func plat_is_my_cpu_primary
mrs x0, mpidr_el1
and x0, x0, #(MPIDR_CLUSTER_MASK | MPIDR_CPU_MASK)
cmp x0, #MVEBU_PRIMARY_CPU
cset w0, eq
ret
endfunc plat_is_my_cpu_primary
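`plat_is_my_cpu_primary` masks MPIDR_EL1 down to its cluster and cpu affinity fields and compares against `MVEBU_PRIMARY_CPU` (0), so only cluster 0 / core 0 reports primary. The same check in C, using the standard TF-A mask values:

```c
#include <assert.h>
#include <stdint.h>

#define MPIDR_CPU_MASK		0xFF	/* Aff0 */
#define MPIDR_CLUSTER_MASK	0xFF00	/* Aff1 */
#define MVEBU_PRIMARY_CPU	0x0

/* C model of plat_is_my_cpu_primary: primary iff cluster and cpu fields are 0 */
static int is_primary(uint64_t mpidr)
{
	return (mpidr & (MPIDR_CLUSTER_MASK | MPIDR_CPU_MASK)) ==
		MVEBU_PRIMARY_CPU;
}
```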
/* -----------------------------------------------------
* void plat_reset_handler (void);
*
* Platform specific configuration right after the cpu
* is out of reset.
*
* The plat_reset_handler can clobber x0 - x18, x30.
* -----------------------------------------------------
*/
func plat_reset_handler
/*
* Note: the configurations below should be done before MMU,
* I-Cache and L2 are enabled.
* The reset handler is executed right after reset
* and before Caches are enabled.
*/
/* Enable L1/L2 ECC and Parity */
mrs x5, s3_1_c11_c0_2 /* L2 Ctrl */
orr x5, x5, #(1 << 21) /* Enable L1/L2 cache ECC & Parity */
msr s3_1_c11_c0_2, x5 /* L2 Ctrl */
#if LLC_ENABLE
/*
* Enable L2 UniqueClean evictions
* Note: this configuration assumes that LLC is configured
* in exclusive mode.
* Later on in the code this assumption will be validated
*/
mrs x5, s3_1_c15_c0_0 /* L2 Ctrl */
orr x5, x5, #(1 << 14) /* Enable UniqueClean evictions with data */
msr s3_1_c15_c0_0, x5 /* L2 Ctrl */
#endif
/* Instruction Barrier to allow msr command completion */
isb
ret
endfunc plat_reset_handler

@@ -0,0 +1,190 @@
/*
* Copyright (C) 2018 Marvell International Ltd.
*
* SPDX-License-Identifier: BSD-3-Clause
* https://spdx.org/licenses
*/
#ifndef __A8K_PLAT_DEF_H__
#define __A8K_PLAT_DEF_H__
#include <marvell_def.h>
#define MVEBU_PRIMARY_CPU 0x0
#define MVEBU_AP0 0x0
/* APN806 revision ID */
#define MVEBU_CSS_GWD_CTRL_IIDR2_REG (MVEBU_REGS_BASE + 0x610FCC)
#define GWD_IIDR2_REV_ID_OFFSET 12
#define GWD_IIDR2_REV_ID_MASK 0xF
#define GWD_IIDR2_CHIP_ID_OFFSET 20
#define GWD_IIDR2_CHIP_ID_MASK (0xFFF << GWD_IIDR2_CHIP_ID_OFFSET)
#define CHIP_ID_AP806 0x806
#define CHIP_ID_AP807 0x807
#define COUNTER_FREQUENCY 25000000
#define MVEBU_REGS_BASE 0xF0000000
#define MVEBU_REGS_BASE_MASK 0xF0000000
#define MVEBU_REGS_BASE_AP(ap) MVEBU_REGS_BASE
#define MVEBU_CP_REGS_BASE(cp_index) (0xF2000000 + (cp_index) * 0x2000000)
#define MVEBU_RFU_BASE (MVEBU_REGS_BASE + 0x6F0000)
#define MVEBU_IO_WIN_BASE(ap_index) (MVEBU_RFU_BASE)
#define MVEBU_IO_WIN_GCR_OFFSET (0x70)
#define MVEBU_IO_WIN_MAX_WINS (7)
/* Misc SoC configurations Base */
#define MVEBU_MISC_SOC_BASE (MVEBU_REGS_BASE + 0x6F4300)
#define MVEBU_CCU_BASE(ap_index) (MVEBU_REGS_BASE + 0x4000)
#define MVEBU_CCU_MAX_WINS (8)
#define MVEBU_LLC_BASE(ap_index) (MVEBU_REGS_BASE + 0x8000)
#define MVEBU_DRAM_MAC_BASE (MVEBU_REGS_BASE + 0x20000)
#define MVEBU_DRAM_PHY_BASE (MVEBU_REGS_BASE + 0x20000)
#define MVEBU_SMMU_BASE (MVEBU_REGS_BASE + 0x100000)
#define MVEBU_CP_MPP_REGS(cp_index, n) (MVEBU_CP_REGS_BASE(cp_index) + \
0x440000 + ((n) << 2))
#define MVEBU_PM_MPP_REGS(cp_index, n) (MVEBU_CP_REGS_BASE(cp_index) + \
0x440000 + ((n / 8) << 2))
#define MVEBU_CP_GPIO_DATA_OUT(cp_index, n) \
(MVEBU_CP_REGS_BASE(cp_index) + \
0x440100 + ((n > 32) ? 0x40 : 0x00))
#define MVEBU_CP_GPIO_DATA_OUT_EN(cp_index, n) \
(MVEBU_CP_REGS_BASE(cp_index) + \
0x440104 + ((n > 32) ? 0x40 : 0x00))
#define MVEBU_CP_GPIO_DATA_IN(cp_index, n) (MVEBU_CP_REGS_BASE(cp_index) + \
0x440110 + ((n > 32) ? 0x40 : 0x00))
#define MVEBU_AP_MPP_REGS(n) (MVEBU_RFU_BASE + 0x4000 + ((n) << 2))
#define MVEBU_AP_GPIO_REGS (MVEBU_RFU_BASE + 0x5040)
#define MVEBU_AP_GPIO_DATA_IN (MVEBU_AP_GPIO_REGS + 0x10)
#define MVEBU_AP_I2C_BASE (MVEBU_REGS_BASE + 0x511000)
#define MVEBU_CP0_I2C_BASE (MVEBU_CP_REGS_BASE(0) + 0x701000)
#define MVEBU_AP_EXT_TSEN_BASE (MVEBU_RFU_BASE + 0x8084)
#define MVEBU_AP_MC_TRUSTZONE_REG_LOW(ap, win) (MVEBU_REGS_BASE_AP(ap) + \
0x20080 + ((win) * 0x8))
#define MVEBU_AP_MC_TRUSTZONE_REG_HIGH(ap, win) (MVEBU_REGS_BASE_AP(ap) + \
0x20084 + ((win) * 0x8))
/* MCI indirect access definitions */
#define MCI_MAX_UNIT_ID 2
/* SoC RFU / IHBx4 Control */
#define MCIX4_REG_START_ADDRESS_REG(unit_id) (MVEBU_RFU_BASE + \
0x4218 + (unit_id * 0x20))
#define MCI_REMAP_OFF_SHIFT 8
#define MVEBU_MCI_REG_BASE_REMAP(index) (0xFD000000 + \
((index) * 0x1000000))
#define MVEBU_PCIE_X4_MAC_BASE(x) (MVEBU_CP_REGS_BASE(x) + 0x600000)
#define MVEBU_COMPHY_BASE(x) (MVEBU_CP_REGS_BASE(x) + 0x441000)
#define MVEBU_HPIPE_BASE(x) (MVEBU_CP_REGS_BASE(x) + 0x120000)
#define MVEBU_CP_DFX_OFFSET (0x400200)
/*****************************************************************************
* MVEBU memory map related constants
*****************************************************************************
*/
/* Aggregate of all devices in the first GB */
#define DEVICE0_BASE MVEBU_REGS_BASE
#define DEVICE0_SIZE 0x10000000
/*****************************************************************************
* GIC-400 & interrupt handling related constants
*****************************************************************************
*/
/* Base MVEBU compatible GIC memory map */
#define MVEBU_GICD_BASE 0x210000
#define MVEBU_GICC_BASE 0x220000
/*****************************************************************************
* AXI Configuration
*****************************************************************************
*/
#define MVEBU_AXI_ATTR_ARCACHE_OFFSET 4
#define MVEBU_AXI_ATTR_ARCACHE_MASK (0xF << \
MVEBU_AXI_ATTR_ARCACHE_OFFSET)
#define MVEBU_AXI_ATTR_ARDOMAIN_OFFSET 12
#define MVEBU_AXI_ATTR_ARDOMAIN_MASK (0x3 << \
MVEBU_AXI_ATTR_ARDOMAIN_OFFSET)
#define MVEBU_AXI_ATTR_AWCACHE_OFFSET 20
#define MVEBU_AXI_ATTR_AWCACHE_MASK (0xF << \
MVEBU_AXI_ATTR_AWCACHE_OFFSET)
#define MVEBU_AXI_ATTR_AWDOMAIN_OFFSET 28
#define MVEBU_AXI_ATTR_AWDOMAIN_MASK (0x3 << \
MVEBU_AXI_ATTR_AWDOMAIN_OFFSET)
/* SATA MBUS to AXI configuration */
#define MVEBU_SATA_M2A_AXI_ARCACHE_OFFSET 1
#define MVEBU_SATA_M2A_AXI_ARCACHE_MASK (0xF << \
MVEBU_SATA_M2A_AXI_ARCACHE_OFFSET)
#define MVEBU_SATA_M2A_AXI_AWCACHE_OFFSET 5
#define MVEBU_SATA_M2A_AXI_AWCACHE_MASK (0xF << \
MVEBU_SATA_M2A_AXI_AWCACHE_OFFSET)
/* ARM cache attributes */
#define CACHE_ATTR_BUFFERABLE 0x1
#define CACHE_ATTR_CACHEABLE 0x2
#define CACHE_ATTR_READ_ALLOC 0x4
#define CACHE_ATTR_WRITE_ALLOC 0x8
/* Domain */
#define DOMAIN_NON_SHAREABLE 0x0
#define DOMAIN_INNER_SHAREABLE 0x1
#define DOMAIN_OUTER_SHAREABLE 0x2
#define DOMAIN_SYSTEM_SHAREABLE 0x3
/************************************************************************
* Required platform porting definitions common to all
* Management Compute SubSystems (MSS)
************************************************************************
*/
/*
* Load address of SCP_BL2
* SCP_BL2 is loaded to the same place as BL31.
* Once SCP_BL2 is transferred to the SCP,
* it is discarded and BL31 is loaded over the top.
*/
#ifdef SCP_IMAGE
#define SCP_BL2_BASE BL31_BASE
#endif
#ifndef __ASSEMBLER__
enum ap806_sar_target_dev {
SAR_PIDI_MCIX2 = 0x0,
SAR_MCIX4 = 0x1,
SAR_SPI = 0x2,
SAR_SD = 0x3,
SAR_PIDI_MCIX2_BD = 0x4, /* BootRom disabled */
SAR_MCIX4_DB = 0x5, /* BootRom disabled */
SAR_SPI_DB = 0x6, /* BootRom disabled */
SAR_EMMC = 0x7
};
enum io_win_target_ids {
MCI_0_TID = 0x0,
MCI_1_TID = 0x1,
MCI_2_TID = 0x2,
PIDI_TID = 0x3,
SPI_TID = 0x4,
STM_TID = 0x5,
BOOTROM_TID = 0x6,
IO_WIN_MAX_TID
};
enum ccu_target_ids {
IO_0_TID = 0x00,
DRAM_0_TID = 0x03,
IO_1_TID = 0x0F,
CFG_REG_TID = 0x10,
RAR_TID = 0x20,
SRAM_TID = 0x40,
DRAM_1_TID = 0xC0,
CCU_MAX_TID,
INVALID_TID = 0xFF
};
#endif /* __ASSEMBLER__ */
#endif /* __A8K_PLAT_DEF_H__ */
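The `GWD_IIDR2_*` definitions above carve the revision (bits [15:12]) and chip ID (bits [31:20]) fields out of `MVEBU_CSS_GWD_CTRL_IIDR2_REG`. A sketch of how those fields would be decoded to distinguish AP806 from AP807 (helper names are illustrative):

```c
#include <assert.h>
#include <stdint.h>

#define GWD_IIDR2_REV_ID_OFFSET		12
#define GWD_IIDR2_REV_ID_MASK		0xF
#define GWD_IIDR2_CHIP_ID_OFFSET	20
#define CHIP_ID_AP806			0x806
#define CHIP_ID_AP807			0x807

/* Extract the 12-bit chip ID field from the IIDR2 register value */
static unsigned int iidr2_chip_id(uint32_t reg)
{
	return (reg >> GWD_IIDR2_CHIP_ID_OFFSET) & 0xFFF;
}

/* Extract the 4-bit revision ID field */
static unsigned int iidr2_rev_id(uint32_t reg)
{
	return (reg >> GWD_IIDR2_REV_ID_OFFSET) & GWD_IIDR2_REV_ID_MASK;
}
```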

@@ -0,0 +1,9 @@
/*
* Copyright (C) 2018 Marvell International Ltd.
*
* SPDX-License-Identifier: BSD-3-Clause
* https://spdx.org/licenses
*/
#define DRAM_MAX_IFACE 1
#define DRAM_CH0_MMAP_LOW_OFFSET 0x20200

@@ -0,0 +1,20 @@
/*
* Copyright (C) 2018 Marvell International Ltd.
*
* SPDX-License-Identifier: BSD-3-Clause
* https://spdx.org/licenses
*/
#ifndef __PLAT_MACROS_S__
#define __PLAT_MACROS_S__
#include <marvell_macros.S>
/*
* Required platform porting macros
* (Provided by included headers)
*/
.macro plat_crash_print_regs
.endm
#endif /* __PLAT_MACROS_S__ */

@@ -0,0 +1,202 @@
/*
* Copyright (C) 2018 Marvell International Ltd.
*
* SPDX-License-Identifier: BSD-3-Clause
* https://spdx.org/licenses
*/
#ifndef __PLATFORM_DEF_H__
#define __PLATFORM_DEF_H__
#include <board_marvell_def.h>
#include <gic_common.h>
#include <interrupt_props.h>
#include <mvebu_def.h>
#ifndef __ASSEMBLY__
#include <stdio.h>
#endif /* __ASSEMBLY__ */
/*
* Most platform porting definitions provided by included headers
*/
/*
* DRAM Memory layout:
* +-----------------------+
* : :
* : Linux :
* 0x04X00000-->+-----------------------+
* | BL3-3(u-boot) |>>}>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
* |-----------------------| } |
* | BL3-[0,1, 2] | }---------------------------------> |
* |-----------------------| } || |
* | BL2 | }->FIP (loaded by || |
* |-----------------------| } BootROM to DRAM) || |
* | FIP_TOC | } || |
* 0x04120000-->|-----------------------| || |
* | BL1 (RO) | || |
* 0x04100000-->+-----------------------+ || |
* : : || |
* : Trusted SRAM section : \/ |
* 0x04040000-->+-----------------------+ Replaced by BL2 +----------------+ |
* | BL1 (RW) | <<<<<<<<<<<<<<<< | BL3-1 NOBITS | |
* 0x04037000-->|-----------------------| <<<<<<<<<<<<<<<< |----------------| |
* | | <<<<<<<<<<<<<<<< | BL3-1 PROGBITS | |
* 0x04023000-->|-----------------------| +----------------+ |
* | BL2 | |
* |-----------------------| |
* | | |
* 0x04001000-->|-----------------------| |
* | Shared | |
* 0x04000000-->+-----------------------+ |
* : : |
* : Linux : |
* : : |
* |-----------------------| |
* | | U-Boot(BL3-3) Loaded by BL2 |
* | U-Boot | <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
* 0x00000000-->+-----------------------+
*
* Trusted SRAM section 0x4000000..0x4200000:
* ----------------------------------------
* SRAM_BASE = 0x4001000
* BL2_BASE = 0x4006000
* BL2_LIMIT = BL31_BASE
* BL31_BASE = 0x4023000 = (64MB + 256KB - 0x1D000)
* BL31_PROGBITS_LIMIT = BL1_RW_BASE
* BL1_RW_BASE = 0x4037000 = (64MB + 256KB - 0x9000)
* BL1_RW_LIMIT = BL31_LIMIT = 0x4040000
*
*
* PLAT_MARVELL_FIP_BASE = 0x4120000
*/
/*
* Since BL33 is loaded by BL2 (and validated by BL31) to DRAM offset 0,
* it is allowed to load/copy images to 'NULL' pointers
*/
#if defined(IMAGE_BL2) || defined(IMAGE_BL31)
#define PLAT_ALLOW_ZERO_ADDR_COPY
#endif
#define PLAT_MARVELL_SRAM_BASE 0xFFE1C048
#define PLAT_MARVELL_SRAM_END 0xFFE78000
#define PLAT_MARVELL_ATF_BASE 0x4000000
#define PLAT_MARVELL_ATF_LOAD_ADDR (PLAT_MARVELL_ATF_BASE + \
0x100000)
#define PLAT_MARVELL_FIP_BASE (PLAT_MARVELL_ATF_LOAD_ADDR + \
0x20000)
#define PLAT_MARVELL_FIP_MAX_SIZE 0x4000000
#define PLAT_MARVELL_NORTHB_COUNT 1
#define PLAT_MARVELL_CLUSTER_COUNT 2
#define PLAT_MARVELL_CLUSTER_CORE_COUNT 2
#define PLAT_MARVELL_CORE_COUNT (PLAT_MARVELL_CLUSTER_COUNT * \
PLAT_MARVELL_CLUSTER_CORE_COUNT)
/* DRAM[2MB..66MB] is used as Trusted ROM */
#define PLAT_MARVELL_TRUSTED_ROM_BASE PLAT_MARVELL_ATF_LOAD_ADDR
/* 64 MB. TODO: reduce this to the minimum needed according to the FIP image size */
#define PLAT_MARVELL_TRUSTED_ROM_SIZE 0x04000000
/* Reserve 16M for SCP (Secure Payload) Trusted DRAM */
#define PLAT_MARVELL_TRUSTED_DRAM_BASE 0x04400000
#define PLAT_MARVELL_TRUSTED_DRAM_SIZE 0x01000000 /* 16 MB */
/*
* PLAT_ARM_MAX_BL1_RW_SIZE is calculated using the current BL1 RW debug size
* plus a little space for growth.
*/
#define PLAT_MARVELL_MAX_BL1_RW_SIZE 0xA000
/*
* PLAT_ARM_MAX_BL2_SIZE is calculated using the current BL2 debug size plus a
* little space for growth.
*/
#define PLAT_MARVELL_MAX_BL2_SIZE 0xF000
/*
* PLAT_ARM_MAX_BL31_SIZE is calculated using the current BL31 debug size plus a
* little space for growth.
*/
#define PLAT_MARVEL_MAX_BL31_SIZE 0x5D000
#define PLAT_MARVELL_CPU_ENTRY_ADDR BL1_RO_BASE
/* GIC related definitions */
#define PLAT_MARVELL_GICD_BASE (MVEBU_REGS_BASE + MVEBU_GICD_BASE)
#define PLAT_MARVELL_GICC_BASE (MVEBU_REGS_BASE + MVEBU_GICC_BASE)
#define PLAT_MARVELL_G0_IRQ_PROPS(grp) \
INTR_PROP_DESC(MARVELL_IRQ_SEC_SGI_0, GIC_HIGHEST_SEC_PRIORITY, grp, \
GIC_INTR_CFG_LEVEL), \
INTR_PROP_DESC(MARVELL_IRQ_SEC_SGI_6, GIC_HIGHEST_SEC_PRIORITY, grp, \
GIC_INTR_CFG_LEVEL)
#define PLAT_MARVELL_G1S_IRQ_PROPS(grp) \
INTR_PROP_DESC(MARVELL_IRQ_SEC_PHY_TIMER, GIC_HIGHEST_SEC_PRIORITY, \
grp, GIC_INTR_CFG_LEVEL), \
INTR_PROP_DESC(MARVELL_IRQ_SEC_SGI_1, GIC_HIGHEST_SEC_PRIORITY, grp, \
GIC_INTR_CFG_LEVEL), \
INTR_PROP_DESC(MARVELL_IRQ_SEC_SGI_2, GIC_HIGHEST_SEC_PRIORITY, grp, \
GIC_INTR_CFG_LEVEL), \
INTR_PROP_DESC(MARVELL_IRQ_SEC_SGI_3, GIC_HIGHEST_SEC_PRIORITY, grp, \
GIC_INTR_CFG_LEVEL), \
INTR_PROP_DESC(MARVELL_IRQ_SEC_SGI_4, GIC_HIGHEST_SEC_PRIORITY, grp, \
GIC_INTR_CFG_LEVEL), \
INTR_PROP_DESC(MARVELL_IRQ_SEC_SGI_5, GIC_HIGHEST_SEC_PRIORITY, grp, \
GIC_INTR_CFG_LEVEL), \
INTR_PROP_DESC(MARVELL_IRQ_SEC_SGI_7, GIC_HIGHEST_SEC_PRIORITY, grp, \
GIC_INTR_CFG_LEVEL)
#define PLAT_MARVELL_SHARED_RAM_CACHED 1
/*
* Load address of BL3-3 for this platform port
*/
#define PLAT_MARVELL_NS_IMAGE_OFFSET 0x0
/* System Reference Clock*/
#define PLAT_REF_CLK_IN_HZ COUNTER_FREQUENCY
/*
* PL011 related constants
*/
#define PLAT_MARVELL_BOOT_UART_BASE (MVEBU_REGS_BASE + 0x512000)
#define PLAT_MARVELL_BOOT_UART_CLK_IN_HZ 200000000
#define PLAT_MARVELL_CRASH_UART_BASE PLAT_MARVELL_BOOT_UART_BASE
#define PLAT_MARVELL_CRASH_UART_CLK_IN_HZ PLAT_MARVELL_BOOT_UART_CLK_IN_HZ
#define PLAT_MARVELL_BL31_RUN_UART_BASE PLAT_MARVELL_BOOT_UART_BASE
#define PLAT_MARVELL_BL31_RUN_UART_CLK_IN_HZ PLAT_MARVELL_BOOT_UART_CLK_IN_HZ
/* Recovery image enable */
#define PLAT_RECOVERY_IMAGE_ENABLE 0
/* Required platform porting definitions */
#define PLAT_MAX_PWR_LVL MPIDR_AFFLVL1
/* System timer related constants */
#define PLAT_MARVELL_NSTIMER_FRAME_ID 1
/* Mailbox base address (note the lower memory space
* is reserved for BLE data)
*/
#define PLAT_MARVELL_MAILBOX_BASE (MARVELL_TRUSTED_SRAM_BASE \
+ 0x400)
#define PLAT_MARVELL_MAILBOX_SIZE 0x100
#define PLAT_MARVELL_MAILBOX_MAGIC_NUM 0x6D72766C /* mrvl */
/* Securities */
#define IRQ_SEC_OS_TICK_INT MARVELL_IRQ_SEC_PHY_TIMER
#define TRUSTED_DRAM_BASE PLAT_MARVELL_TRUSTED_DRAM_BASE
#define TRUSTED_DRAM_SIZE PLAT_MARVELL_TRUSTED_DRAM_SIZE
#define BL32_BASE TRUSTED_DRAM_BASE
#endif /* __PLATFORM_DEF_H__ */


@@ -0,0 +1,20 @@
#
# Copyright (C) 2018 Marvell International Ltd.
#
# SPDX-License-Identifier: BSD-3-Clause
# https://spdx.org/licenses
#
PLAT_MARVELL := plat/marvell
A8K_MSS_SOURCE := $(PLAT_MARVELL)/a8k/common/mss
BL2_SOURCES += $(A8K_MSS_SOURCE)/mss_bl2_setup.c
BL31_SOURCES += $(A8K_MSS_SOURCE)/mss_pm_ipc.c
PLAT_INCLUDES += -I$(A8K_MSS_SOURCE)
ifneq (${SCP_BL2},)
# This define is used to indicate that the SCP image is present
$(eval $(call add_define,SCP_IMAGE))
endif


@@ -0,0 +1,144 @@
/*
* Copyright (C) 2018 Marvell International Ltd.
*
* SPDX-License-Identifier: BSD-3-Clause
* https://spdx.org/licenses
*/
#include <a8k_common.h>
#include <bl_common.h>
#include <ccu.h>
#include <cp110_setup.h>
#include <debug.h>
#include <marvell_plat_priv.h> /* timer functionality */
#include <mmio.h>
#include <platform_def.h>
#include "mss_scp_bootloader.h"
/* IO windows configuration */
#define IOW_GCR_OFFSET (0x70)
/* MSS windows configuration */
#define MSS_AEBR(base) (base + 0x160)
#define MSS_AIBR(base) (base + 0x164)
#define MSS_AEBR_MASK 0xFFF
#define MSS_AIBR_MASK 0xFFF
#define MSS_EXTERNAL_SPACE 0x50000000
#define MSS_EXTERNAL_ACCESS_BIT 28
#define MSS_EXTERNAL_ADDR_MASK 0xfffffff
#define MSS_INTERNAL_ACCESS_BIT 28
struct addr_map_win ccu_mem_map[] = {
{MVEBU_CP_REGS_BASE(0), 0x4000000, IO_0_TID}
};
/* Since the scp_bl2 image can contain firmware for the cp0 and cp1
 * coprocessors, access to both cp0 and cp1 needs to be provided. More
 * precisely it is required to:
 * - get the device id information, which is stored in CP0 registers
 *   (to distinguish between the cp0+cp1 and standalone cp0 cases)
 * - get access to the cp, which is needed for loading firmware to the
 *   cp0/cp1 coprocessors
 * This function configures the ccu windows accordingly.
 *
 * Note: there is no need to restore the previous ccu configuration, since
 * in the next phase (BL31) init_ccu will be called (via apn806_init/
 * bl31_plat_arch_setup) and the ccu configuration will be overwritten.
 */
static int bl2_plat_mmap_init(void)
{
int cfg_num, win_id, cfg_idx;
cfg_num = ARRAY_SIZE(ccu_mem_map);
/* CCU window-0 should not be counted - it's already used */
if (cfg_num > (MVEBU_CCU_MAX_WINS - 1)) {
ERROR("BL2: %s: trying to open too many windows\n", __func__);
return -1;
}
/* Enable required CCU windows
* Do not touch CCU window 0,
 * it is used for internal register access
*/
for (cfg_idx = 0, win_id = 1; cfg_idx < cfg_num; cfg_idx++, win_id++) {
/* Enable required CCU windows */
ccu_win_check(&ccu_mem_map[cfg_idx]);
ccu_enable_win(MVEBU_AP0, &ccu_mem_map[cfg_idx], win_id);
}
/* Set the default target id to PIDI */
mmio_write_32(MVEBU_IO_WIN_BASE(MVEBU_AP0) + IOW_GCR_OFFSET, PIDI_TID);
return 0;
}
/*****************************************************************************
* Transfer SCP_BL2 from Trusted RAM using the SCP Download protocol.
* Return 0 on success, -1 otherwise.
*****************************************************************************
*/
int bl2_plat_handle_scp_bl2(image_info_t *scp_bl2_image_info)
{
int ret;
INFO("BL2: Initiating SCP_BL2 transfer to SCP\n");
printf("BL2: Initiating SCP_BL2 transfer to SCP\n");
/* initialize time (for delay functionality) */
plat_delay_timer_init();
ret = bl2_plat_mmap_init();
if (ret != 0)
return ret;
ret = scp_bootloader_transfer((void *)scp_bl2_image_info->image_base,
scp_bl2_image_info->image_size);
if (ret == 0)
INFO("BL2: SCP_BL2 transferred to SCP\n");
else
ERROR("BL2: SCP_BL2 transfer failure\n");
return ret;
}
uintptr_t bl2_plat_get_cp_mss_regs(int ap_idx, int cp_idx)
{
return MVEBU_CP_REGS_BASE(cp_idx) + 0x280000;
}
uintptr_t bl2_plat_get_ap_mss_regs(int ap_idx)
{
return MVEBU_REGS_BASE + 0x580000;
}
uint32_t bl2_plat_get_cp_count(int ap_idx)
{
uint32_t revision = cp110_device_id_get(MVEBU_CP_REGS_BASE(0));
/* A8040: two CPs.
* A7040: one CP.
*/
if (revision == MVEBU_80X0_DEV_ID ||
revision == MVEBU_80X0_CP115_DEV_ID)
return 2;
else
return 1;
}
uint32_t bl2_plat_get_ap_count(void)
{
/* A8040 and A7040 have only one AP */
return 1;
}
void bl2_plat_configure_mss_windows(uintptr_t mss_regs)
{
/* set AXI External and Internal Address Bus extension */
mmio_write_32(MSS_AEBR(mss_regs),
((0x0 >> MSS_EXTERNAL_ACCESS_BIT) & MSS_AEBR_MASK));
mmio_write_32(MSS_AIBR(mss_regs),
((mss_regs >> MSS_INTERNAL_ACCESS_BIT) & MSS_AIBR_MASK));
}


@@ -0,0 +1,84 @@
/*
* Copyright (C) 2018 Marvell International Ltd.
*
* SPDX-License-Identifier: BSD-3-Clause
* https://spdx.org/licenses
*/
#include <debug.h>
#include <mmio.h>
#include <psci.h>
#include <string.h>
#include <mss_pm_ipc.h>
/*
* SISR is 32 bit interrupt register representing 32 interrupts
*
* +======+=============+=============+
* + Bits + 31 + 30 - 00 +
* +======+=============+=============+
* + Desc + MSS Msg Int + Reserved +
* +======+=============+=============+
*/
#define MSS_SISR (MVEBU_REGS_BASE + 0x5800D0)
#define MSS_SISTR (MVEBU_REGS_BASE + 0x5800D8)
#define MSS_MSG_INT_MASK (0x80000000)
#define MSS_TIMER_BASE (MVEBU_REGS_BASE_MASK + 0x580110)
#define MSS_TRIGGER_TIMEOUT (1000)
/*****************************************************************************
* mss_pm_ipc_msg_send
*
* DESCRIPTION: create and transmit IPC message
*****************************************************************************
*/
int mss_pm_ipc_msg_send(unsigned int channel_id, unsigned int msg_id,
const psci_power_state_t *target_state)
{
/* Transmit IPC message */
#ifndef DISABLE_CLUSTER_LEVEL
mv_pm_ipc_msg_tx(channel_id, msg_id,
(unsigned int)target_state->pwr_domain_state[
MPIDR_AFFLVL1]);
#else
mv_pm_ipc_msg_tx(channel_id, msg_id, 0);
#endif
return 0;
}
/*****************************************************************************
* mss_pm_ipc_msg_trigger
*
* DESCRIPTION: Trigger IPC message interrupt to MSS
*****************************************************************************
*/
int mss_pm_ipc_msg_trigger(void)
{
unsigned int timeout;
unsigned int t_end;
unsigned int t_start = mmio_read_32(MSS_TIMER_BASE);
mmio_write_32(MSS_SISR, MSS_MSG_INT_MASK);
do {
/* wait while SCP process incoming interrupt */
if (mmio_read_32(MSS_SISTR) != MSS_MSG_INT_MASK)
break;
/* check timeout */
t_end = mmio_read_32(MSS_TIMER_BASE);
timeout = ((t_start > t_end) ?
(t_start - t_end) : (t_end - t_start));
if (timeout > MSS_TRIGGER_TIMEOUT) {
ERROR("PM MSG Trigger Timeout\n");
break;
}
} while (1);
return 0;
}


@@ -0,0 +1,35 @@
/*
* Copyright (C) 2018 Marvell International Ltd.
*
* SPDX-License-Identifier: BSD-3-Clause
* https://spdx.org/licenses
*/
#ifndef __MSS_PM_IPC_H
#define __MSS_PM_IPC_H
#include <mss_ipc_drv.h>
/* Currently MSS does not support Cluster level Power Down */
#define DISABLE_CLUSTER_LEVEL
/*****************************************************************************
* mss_pm_ipc_msg_send
*
* DESCRIPTION: create and transmit IPC message
*****************************************************************************
*/
int mss_pm_ipc_msg_send(unsigned int channel_id, unsigned int msg_id,
const psci_power_state_t *target_state);
/*****************************************************************************
* mss_pm_ipc_msg_trigger
*
* DESCRIPTION: Trigger IPC message interrupt to MSS
*****************************************************************************
*/
int mss_pm_ipc_msg_trigger(void);
#endif /* __MSS_PM_IPC_H */


@@ -0,0 +1,18 @@
/*
* Copyright (C) 2018 Marvell International Ltd.
*
* SPDX-License-Identifier: BSD-3-Clause
* https://spdx.org/licenses
*/
#include <mmio.h>
#include <plat_marvell.h>
void marvell_bl1_setup_mpps(void)
{
/* Enable UART MPPs.
 * In a normal system, this is done by the BootROM.
 */
mmio_write_32(MVEBU_AP_MPP_REGS(1), 0x3000);
mmio_write_32(MVEBU_AP_MPP_REGS(2), 0x3000);
}


@@ -0,0 +1,119 @@
/*
* Copyright (C) 2018 Marvell International Ltd.
*
* SPDX-License-Identifier: BSD-3-Clause
* https://spdx.org/licenses
*/
#include <a8k_common.h>
#include <ap_setup.h>
#include <cp110_setup.h>
#include <debug.h>
#include <marvell_plat_priv.h>
#include <marvell_pm.h>
#include <mmio.h>
#include <mci.h>
#include <plat_marvell.h>
#include <mss_ipc_drv.h>
#include <mss_mem.h>
/* In Armada-8k family AP806/AP807, CP0 connected to PIDI
* and CP1 connected to IHB via MCI #0
*/
#define MVEBU_MCI0 0
static _Bool pm_fw_running;
/* Set a weak stub for platforms that don't need to configure GPIO */
#pragma weak marvell_gpio_config
int marvell_gpio_config(void)
{
return 0;
}
static void marvell_bl31_mpp_init(int cp)
{
uint32_t reg;
/* need to do for CP#0 only */
if (cp)
return;
/*
* Enable CP0 I2C MPPs (MPP: 37-38)
 * U-Boot relies on proper MPP settings for I2C EEPROM usage
* (only for CP0)
*/
reg = mmio_read_32(MVEBU_CP_MPP_REGS(0, 4));
mmio_write_32(MVEBU_CP_MPP_REGS(0, 4), reg | 0x2200000);
}
void marvell_bl31_mss_init(void)
{
struct mss_pm_ctrl_block *mss_pm_crtl =
(struct mss_pm_ctrl_block *)MSS_SRAM_PM_CONTROL_BASE;
/* Check that the image was loaded successfully */
if (mss_pm_crtl->handshake != HOST_ACKNOWLEDGMENT) {
NOTICE("MSS PM is not supported in this build\n");
return;
}
/* If we got here it means that the PM firmware is running */
pm_fw_running = 1;
INFO("MSS IPC init\n");
if (mss_pm_crtl->ipc_state == IPC_INITIALIZED)
mv_pm_ipc_init(mss_pm_crtl->ipc_base_address | MVEBU_REGS_BASE);
}
_Bool is_pm_fw_running(void)
{
return pm_fw_running;
}
/* This function overrides the same function in marvell_bl31_setup.c */
void bl31_plat_arch_setup(void)
{
int cp;
uintptr_t *mailbox = (void *)PLAT_MARVELL_MAILBOX_BASE;
/* initialize the timer for mdelay/udelay functionality */
plat_delay_timer_init();
/* configure apn806 */
ap_init();
/* In marvell_bl31_plat_arch_setup, the el3 mmu is configured.
 * The el3 mmu configuration MUST be done after apn806_init; otherwise
 * it will cause a hang in init_io_win
 * (after setting the IO windows GCR values).
 */
if (mailbox[MBOX_IDX_MAGIC] != MVEBU_MAILBOX_MAGIC_NUM ||
mailbox[MBOX_IDX_SUSPEND_MAGIC] != MVEBU_MAILBOX_SUSPEND_STATE)
marvell_bl31_plat_arch_setup();
for (cp = 0; cp < CP_COUNT; cp++) {
/* initialize the MCI link before accessing CP1 */
if (cp == 1)
mci_initialize(MVEBU_MCI0);
/* configure the cp110 */
cp110_init(MVEBU_CP_REGS_BASE(cp),
STREAM_ID_BASE + (cp * MAX_STREAM_ID_PER_CP));
/* Should be called only after setting IOB windows */
marvell_bl31_mpp_init(cp);
}
/* initialize IPC between MSS and ATF */
if (mailbox[MBOX_IDX_MAGIC] != MVEBU_MAILBOX_MAGIC_NUM ||
mailbox[MBOX_IDX_SUSPEND_MAGIC] != MVEBU_MAILBOX_SUSPEND_STATE)
marvell_bl31_mss_init();
/* Configure GPIO */
marvell_gpio_config();
}


@@ -0,0 +1,570 @@
/*
* Copyright (C) 2018 Marvell International Ltd.
*
* SPDX-License-Identifier: BSD-3-Clause
* https://spdx.org/licenses
*/
#include <a8k_common.h>
#include <ap_setup.h>
#include <aro.h>
#include <ccu.h>
#include <cp110_setup.h>
#include <debug.h>
#include <io_win.h>
#include <mv_ddr_if.h>
#include <mvebu_def.h>
#include <plat_marvell.h>
/* Register for skip image use */
#define SCRATCH_PAD_REG2 0xF06F00A8
#define SCRATCH_PAD_SKIP_VAL 0x01
#define NUM_OF_GPIO_PER_REG 32
#define MMAP_SAVE_AND_CONFIG 0
#define MMAP_RESTORE_SAVED 1
/* SAR clock settings */
#define MVEBU_AP_GEN_MGMT_BASE (MVEBU_RFU_BASE + 0x8000)
#define MVEBU_AP_SAR_REG_BASE(r) (MVEBU_AP_GEN_MGMT_BASE + 0x200 +\
((r) << 2))
#define SAR_CLOCK_FREQ_MODE_OFFSET (0)
#define SAR_CLOCK_FREQ_MODE_MASK (0x1f << SAR_CLOCK_FREQ_MODE_OFFSET)
#define SAR_PIDI_LOW_SPEED_OFFSET (20)
#define SAR_PIDI_LOW_SPEED_MASK (1 << SAR_PIDI_LOW_SPEED_OFFSET)
#define SAR_PIDI_LOW_SPEED_SHIFT (15)
#define SAR_PIDI_LOW_SPEED_SET (1 << SAR_PIDI_LOW_SPEED_SHIFT)
#define FREQ_MODE_AP_SAR_REG_NUM (0)
#define SAR_CLOCK_FREQ_MODE(v) (((v) & SAR_CLOCK_FREQ_MODE_MASK) >> \
SAR_CLOCK_FREQ_MODE_OFFSET)
#define AVS_EN_CTRL_REG (MVEBU_AP_GEN_MGMT_BASE + 0x130)
#define AVS_ENABLE_OFFSET (0)
#define AVS_SOFT_RESET_OFFSET (2)
#define AVS_LOW_VDD_LIMIT_OFFSET (4)
#define AVS_HIGH_VDD_LIMIT_OFFSET (12)
#define AVS_TARGET_DELTA_OFFSET (21)
#define AVS_VDD_LOW_LIMIT_MASK (0xFF << AVS_LOW_VDD_LIMIT_OFFSET)
#define AVS_VDD_HIGH_LIMIT_MASK (0xFF << AVS_HIGH_VDD_LIMIT_OFFSET)
/* VDD limit is 0.9V for A70x0 @ CPU frequency < 1600MHz */
#define AVS_A7K_LOW_CLK_VALUE ((0x80 << AVS_TARGET_DELTA_OFFSET) | \
(0x1A << AVS_HIGH_VDD_LIMIT_OFFSET) | \
(0x1A << AVS_LOW_VDD_LIMIT_OFFSET) | \
(0x1 << AVS_SOFT_RESET_OFFSET) | \
(0x1 << AVS_ENABLE_OFFSET))
/* VDD limit is 1.0V for all A80x0 devices */
#define AVS_A8K_CLK_VALUE ((0x80 << AVS_TARGET_DELTA_OFFSET) | \
(0x24 << AVS_HIGH_VDD_LIMIT_OFFSET) | \
(0x24 << AVS_LOW_VDD_LIMIT_OFFSET) | \
(0x1 << AVS_SOFT_RESET_OFFSET) | \
(0x1 << AVS_ENABLE_OFFSET))
#define AVS_A3900_CLK_VALUE ((0x80 << 24) | \
(0x2c2 << 13) | \
(0x2c2 << 3) | \
(0x1 << AVS_SOFT_RESET_OFFSET) | \
(0x1 << AVS_ENABLE_OFFSET))
#define MVEBU_AP_EFUSE_SRV_CTRL_REG (MVEBU_AP_GEN_MGMT_BASE + 0x8)
#define EFUSE_SRV_CTRL_LD_SELECT_OFFS 6
#define EFUSE_SRV_CTRL_LD_SEL_USER_MASK (1 << EFUSE_SRV_CTRL_LD_SELECT_OFFS)
/* Notify bootloader on DRAM setup */
#define AP807_CPU_ARO_0_CTRL_0 (MVEBU_RFU_BASE + 0x82A8)
#define AP807_CPU_ARO_1_CTRL_0 (MVEBU_RFU_BASE + 0x8D00)
/* 0 - ARO clock is enabled, 1 - ARO clock is disabled */
#define AP807_CPU_ARO_CLK_EN_OFFSET 0
#define AP807_CPU_ARO_CLK_EN_MASK (0x1 << AP807_CPU_ARO_CLK_EN_OFFSET)
/* 0 - ARO is the clock source, 1 - PLL is the clock source */
#define AP807_CPU_ARO_SEL_PLL_OFFSET 5
#define AP807_CPU_ARO_SEL_PLL_MASK (0x1 << AP807_CPU_ARO_SEL_PLL_OFFSET)
/*
* - AVS work points in the LD0 eFuse:
* SVC1 work point: LD0[88:81]
* SVC2 work point: LD0[96:89]
* SVC3 work point: LD0[104:97]
* SVC4 work point: LD0[112:105]
* - Identification information in the LD-0 eFuse:
* DRO: LD0[74:65] - Not used by the SW
* Revision: LD0[78:75] - Not used by the SW
* Bin: LD0[80:79] - Not used by the SW
* SW Revision: LD0[115:113]
* Cluster 1 PWR: LD0[193] - if set to 1, power down CPU Cluster-1
* resulting in 2 CPUs active only (7020)
*/
#define MVEBU_AP_LD_EFUSE_BASE (MVEBU_AP_GEN_MGMT_BASE + 0xF00)
/* Bits [94:63] - 32 data bits total */
#define MVEBU_AP_LD0_94_63_EFUSE_OFFS (MVEBU_AP_LD_EFUSE_BASE + 0x8)
/* Bits [125:95] - 31 data bits total, 32nd bit is parity for bits [125:63] */
#define MVEBU_AP_LD0_125_95_EFUSE_OFFS (MVEBU_AP_LD_EFUSE_BASE + 0xC)
/* Bits [220:189] - 32 data bits total */
#define MVEBU_AP_LD0_220_189_EFUSE_OFFS (MVEBU_AP_LD_EFUSE_BASE + 0x18)
/* Offsets for the above 2 fields combined into single 64-bit value [125:63] */
#define EFUSE_AP_LD0_DRO_OFFS 2 /* LD0[74:65] */
#define EFUSE_AP_LD0_DRO_MASK 0x3FF
#define EFUSE_AP_LD0_REVID_OFFS 12 /* LD0[78:75] */
#define EFUSE_AP_LD0_REVID_MASK 0xF
#define EFUSE_AP_LD0_BIN_OFFS 16 /* LD0[80:79] */
#define EFUSE_AP_LD0_BIN_MASK 0x3
#define EFUSE_AP_LD0_SWREV_OFFS 50 /* LD0[115:113] */
#define EFUSE_AP_LD0_SWREV_MASK 0x7
#define EFUSE_AP_LD0_SVC1_OFFS 18 /* LD0[88:81] */
#define EFUSE_AP_LD0_SVC2_OFFS 26 /* LD0[96:89] */
#define EFUSE_AP_LD0_SVC3_OFFS 34 /* LD0[104:97] */
#define EFUSE_AP_LD0_SVC4_OFFS 42 /* LD0[112:105] */
#define EFUSE_AP_LD0_WP_MASK 0xFF
#define EFUSE_AP_LD0_CLUSTER_DOWN_OFFS 4
/* Return the AP revision of the chip */
static unsigned int ble_get_ap_type(void)
{
unsigned int chip_rev_id;
chip_rev_id = mmio_read_32(MVEBU_CSS_GWD_CTRL_IIDR2_REG);
chip_rev_id = ((chip_rev_id & GWD_IIDR2_CHIP_ID_MASK) >>
GWD_IIDR2_CHIP_ID_OFFSET);
return chip_rev_id;
}
/******************************************************************************
 * The routine saves the CCU and IO windows configuration during DRAM
 * setup and restores it afterwards, before exiting the BLE stage.
 * Such window configuration is required since not all default settings coming
 * from the HW and the BootROM allow access to peripherals connected to
 * all available CPn components.
 * For instance, when the boot device is located on CP0, the IO window to CP1
 * is not opened automatically by the HW, and if the DRAM SPD is located on
 * the CP1 i2c channel, it cannot be read at the BLE stage.
 * Therefore the DRAM init procedure has to provide access to all available
 * CPn peripherals during the BLE stage by setting the CCU IO window to all
 * CPn addresses and by enabling the IO windows accordingly.
 * Additionally this function configures the CCU GCR to DRAM, which allows
 * usage of more than the 4GB of DRAM covered by the default CCU DRAM window.
*
* IN:
* MMAP_SAVE_AND_CONFIG - save the existing configuration and update it
* MMAP_RESTORE_SAVED - restore saved configuration
* OUT:
* NONE
****************************************************************************
*/
static void ble_plat_mmap_config(int restore)
{
if (restore == MMAP_RESTORE_SAVED) {
/* Restore all orig. settings that were modified by BLE stage */
ccu_restore_win_all(MVEBU_AP0);
/* Restore CCU */
iow_restore_win_all(MVEBU_AP0);
return;
}
/* Store original values */
ccu_save_win_all(MVEBU_AP0);
/* Save CCU */
iow_save_win_all(MVEBU_AP0);
init_ccu(MVEBU_AP0);
/* The configuration saved, now all the changes can be done */
init_io_win(MVEBU_AP0);
}
/****************************************************************************
* Setup Adaptive Voltage Switching - this is required for some platforms
****************************************************************************
*/
static void ble_plat_avs_config(void)
{
uint32_t reg_val, device_id;
/* Check which SoC is running and act accordingly */
if (ble_get_ap_type() == CHIP_ID_AP807) {
VERBOSE("AVS: Setting AP807 AVS CTRL to 0x%x\n",
AVS_A3900_CLK_VALUE);
mmio_write_32(AVS_EN_CTRL_REG, AVS_A3900_CLK_VALUE);
return;
}
/* Check which SoC is running and act accordingly */
device_id = cp110_device_id_get(MVEBU_CP_REGS_BASE(0));
switch (device_id) {
case MVEBU_80X0_DEV_ID:
case MVEBU_80X0_CP115_DEV_ID:
/* Set the new AVS value - fix the default one on A80x0 */
mmio_write_32(AVS_EN_CTRL_REG, AVS_A8K_CLK_VALUE);
break;
case MVEBU_70X0_DEV_ID:
case MVEBU_70X0_CP115_DEV_ID:
/* Only fix AVS for CPU clocks lower than 1600MHz on A70x0 */
reg_val = mmio_read_32(MVEBU_AP_SAR_REG_BASE(
FREQ_MODE_AP_SAR_REG_NUM));
reg_val &= SAR_CLOCK_FREQ_MODE_MASK;
reg_val >>= SAR_CLOCK_FREQ_MODE_OFFSET;
if ((reg_val > CPU_1600_DDR_900_RCLK_900_2) &&
(reg_val < CPU_DDR_RCLK_INVALID))
mmio_write_32(AVS_EN_CTRL_REG, AVS_A7K_LOW_CLK_VALUE);
break;
default:
ERROR("Unsupported Device ID 0x%x\n", device_id);
}
}
/****************************************************************************
* SVC flow - v0.10
* The feature is intended to configure AVS value according to eFuse values
* that are burned individually for each SoC during the test process.
* Primary AVS value is stored in HD efuse and processed on power on
* by the HW engine
* Secondary AVS value is located in LD efuse and contains 4 work points for
* various CPU frequencies.
* The Secondary AVS value is only taken into account if the SW Revision stored
 * in the efuse is greater than 0 and the CPU is running at a certain speed.
****************************************************************************
*/
static void ble_plat_svc_config(void)
{
uint32_t reg_val, avs_workpoint, freq_pidi_mode;
uint64_t efuse;
uint32_t device_id, single_cluster;
uint8_t svc[4], perr[4], i, sw_ver;
/* Due to a bug in the A3900 device_id, skip the SVC config.
 * TODO: add the SVC config once it is decided for A3900
 */
if (ble_get_ap_type() == CHIP_ID_AP807) {
NOTICE("SVC: SVC is not supported on AP807\n");
ble_plat_avs_config();
return;
}
/* Set access to LD0 */
reg_val = mmio_read_32(MVEBU_AP_EFUSE_SRV_CTRL_REG);
reg_val &= ~EFUSE_SRV_CTRL_LD_SEL_USER_MASK;
mmio_write_32(MVEBU_AP_EFUSE_SRV_CTRL_REG, reg_val);
/* Obtain the value of LD0[125:63] */
efuse = mmio_read_32(MVEBU_AP_LD0_125_95_EFUSE_OFFS);
efuse <<= 32;
efuse |= mmio_read_32(MVEBU_AP_LD0_94_63_EFUSE_OFFS);
/* SW Revision:
* Starting from SW revision 1 the SVC flow is supported.
* SW version 0 (efuse not programmed) should follow the
* regular AVS update flow.
*/
sw_ver = (efuse >> EFUSE_AP_LD0_SWREV_OFFS) & EFUSE_AP_LD0_SWREV_MASK;
if (sw_ver < 1) {
NOTICE("SVC: SW Revision 0x%x. SVC is not supported\n", sw_ver);
ble_plat_avs_config();
return;
}
/* Frequency mode from SAR */
freq_pidi_mode = SAR_CLOCK_FREQ_MODE(
mmio_read_32(
MVEBU_AP_SAR_REG_BASE(
FREQ_MODE_AP_SAR_REG_NUM)));
/* Decode all SVC work points */
svc[0] = (efuse >> EFUSE_AP_LD0_SVC1_OFFS) & EFUSE_AP_LD0_WP_MASK;
svc[1] = (efuse >> EFUSE_AP_LD0_SVC2_OFFS) & EFUSE_AP_LD0_WP_MASK;
svc[2] = (efuse >> EFUSE_AP_LD0_SVC3_OFFS) & EFUSE_AP_LD0_WP_MASK;
svc[3] = (efuse >> EFUSE_AP_LD0_SVC4_OFFS) & EFUSE_AP_LD0_WP_MASK;
INFO("SVC: Efuse WP: [0]=0x%x, [1]=0x%x, [2]=0x%x, [3]=0x%x\n",
svc[0], svc[1], svc[2], svc[3]);
/* Validate parity of SVC workpoint values */
for (i = 0; i < 4; i++) {
uint8_t parity, bit;
perr[i] = 0;
for (bit = 1, parity = svc[i] & 1; bit < 7; bit++)
parity ^= (svc[i] >> bit) & 1;
/* Starting from SW version 2, the parity check is mandatory */
if ((sw_ver > 1) && (parity != ((svc[i] >> 7) & 1)))
perr[i] = 1; /* register the error */
}
single_cluster = mmio_read_32(MVEBU_AP_LD0_220_189_EFUSE_OFFS);
single_cluster = (single_cluster >> EFUSE_AP_LD0_CLUSTER_DOWN_OFFS) & 1;
device_id = cp110_device_id_get(MVEBU_CP_REGS_BASE(0));
if (device_id == MVEBU_80X0_DEV_ID ||
device_id == MVEBU_80X0_CP115_DEV_ID) {
/* A8040/A8020 */
NOTICE("SVC: DEV ID: %s, FREQ Mode: 0x%x\n",
single_cluster == 0 ? "8040" : "8020", freq_pidi_mode);
switch (freq_pidi_mode) {
case CPU_1800_DDR_1200_RCLK_1200:
case CPU_1800_DDR_1050_RCLK_1050:
if (perr[1])
goto perror;
avs_workpoint = svc[1];
break;
case CPU_1600_DDR_1050_RCLK_1050:
case CPU_1600_DDR_900_RCLK_900_2:
if (perr[2])
goto perror;
avs_workpoint = svc[2];
break;
case CPU_1300_DDR_800_RCLK_800:
case CPU_1300_DDR_650_RCLK_650:
if (perr[3])
goto perror;
avs_workpoint = svc[3];
break;
case CPU_2000_DDR_1200_RCLK_1200:
case CPU_2000_DDR_1050_RCLK_1050:
default:
if (perr[0])
goto perror;
avs_workpoint = svc[0];
break;
}
} else if (device_id == MVEBU_70X0_DEV_ID ||
device_id == MVEBU_70X0_CP115_DEV_ID) {
/* A7040/A7020/A6040 */
NOTICE("SVC: DEV ID: %s, FREQ Mode: 0x%x\n",
single_cluster == 0 ? "7040" : "7020", freq_pidi_mode);
switch (freq_pidi_mode) {
case CPU_1400_DDR_800_RCLK_800:
if (single_cluster) {/* 7020 */
if (perr[1])
goto perror;
avs_workpoint = svc[1];
} else {
if (perr[0])
goto perror;
avs_workpoint = svc[0];
}
break;
case CPU_1200_DDR_800_RCLK_800:
if (single_cluster) {/* 7020 */
if (perr[2])
goto perror;
avs_workpoint = svc[2];
} else {
if (perr[1])
goto perror;
avs_workpoint = svc[1];
}
break;
case CPU_800_DDR_800_RCLK_800:
case CPU_1000_DDR_800_RCLK_800:
if (single_cluster) {/* 7020 */
if (perr[3])
goto perror;
avs_workpoint = svc[3];
} else {
if (perr[2])
goto perror;
avs_workpoint = svc[2];
}
break;
case CPU_600_DDR_800_RCLK_800:
if (perr[3])
goto perror;
avs_workpoint = svc[3]; /* Same for 6040 and 7020 */
break;
case CPU_1600_DDR_800_RCLK_800: /* 7020 only */
default:
if (single_cluster) {/* 7020 */
if (perr[0])
goto perror;
avs_workpoint = svc[0];
} else
avs_workpoint = 0;
break;
}
} else {
ERROR("SVC: Unsupported Device ID 0x%x\n", device_id);
return;
}
/* Set AVS control if needed */
if (avs_workpoint == 0) {
ERROR("SVC: AVS work point not changed\n");
return;
}
/* Remove parity bit */
avs_workpoint &= 0x7F;
reg_val = mmio_read_32(AVS_EN_CTRL_REG);
NOTICE("SVC: AVS work point changed from 0x%x to 0x%x\n",
(reg_val & AVS_VDD_LOW_LIMIT_MASK) >> AVS_LOW_VDD_LIMIT_OFFSET,
avs_workpoint);
reg_val &= ~(AVS_VDD_LOW_LIMIT_MASK | AVS_VDD_HIGH_LIMIT_MASK);
reg_val |= 0x1 << AVS_ENABLE_OFFSET;
reg_val |= avs_workpoint << AVS_HIGH_VDD_LIMIT_OFFSET;
reg_val |= avs_workpoint << AVS_LOW_VDD_LIMIT_OFFSET;
mmio_write_32(AVS_EN_CTRL_REG, reg_val);
return;
perror:
ERROR("Failed SVC WP[%d] parity check!\n", i);
ERROR("Ignoring the WP values\n");
}
#if PLAT_RECOVERY_IMAGE_ENABLE
static int ble_skip_image_i2c(struct skip_image *skip_im)
{
ERROR("skipping image using i2c is not supported\n");
/* not supported */
return 0;
}
static int ble_skip_image_other(struct skip_image *skip_im)
{
ERROR("implementation missing for skip image request\n");
/* not supported, make your own implementation */
return 0;
}
static int ble_skip_image_gpio(struct skip_image *skip_im)
{
unsigned int val;
unsigned int mpp_address = 0;
unsigned int offset = 0;
switch (skip_im->info.test.cp_ap) {
case(CP):
mpp_address = MVEBU_CP_GPIO_DATA_IN(skip_im->info.test.cp_index,
skip_im->info.gpio.num);
if (skip_im->info.gpio.num > NUM_OF_GPIO_PER_REG)
offset = skip_im->info.gpio.num - NUM_OF_GPIO_PER_REG;
else
offset = skip_im->info.gpio.num;
break;
case(AP):
mpp_address = MVEBU_AP_GPIO_DATA_IN;
offset = skip_im->info.gpio.num;
break;
}
val = mmio_read_32(mpp_address);
val &= (1 << offset);
if ((!val && skip_im->info.gpio.button_state == HIGH) ||
(val && skip_im->info.gpio.button_state == LOW)) {
mmio_write_32(SCRATCH_PAD_REG2, SCRATCH_PAD_SKIP_VAL);
return 1;
}
return 0;
}
/*
* This function checks if there's a skip image request:
* return values:
 * 1: (true) a skip image request has been made.
 * 0: (false) no skip image request has been made.
*/
static int ble_skip_current_image(void)
{
struct skip_image *skip_im;
/* fetch the skip image info */
skip_im = (struct skip_image *)plat_marvell_get_skip_image_data();
if (skip_im == NULL)
return 0;
/* check if a skip image request has already been made */
if (mmio_read_32(SCRATCH_PAD_REG2) == SCRATCH_PAD_SKIP_VAL)
return 0;
switch (skip_im->detection_method) {
case GPIO:
return ble_skip_image_gpio(skip_im);
case I2C:
return ble_skip_image_i2c(skip_im);
case USER_DEFINED:
return ble_skip_image_other(skip_im);
}
return 0;
}
#endif
/* Switch from ARO to PLL in AP807 */
static void aro_to_pll(void)
{
unsigned int reg;
/* switch from ARO to PLL */
reg = mmio_read_32(AP807_CPU_ARO_0_CTRL_0);
reg |= AP807_CPU_ARO_SEL_PLL_MASK;
mmio_write_32(AP807_CPU_ARO_0_CTRL_0, reg);
reg = mmio_read_32(AP807_CPU_ARO_1_CTRL_0);
reg |= AP807_CPU_ARO_SEL_PLL_MASK;
mmio_write_32(AP807_CPU_ARO_1_CTRL_0, reg);
mdelay(1000);
/* disable ARO clk driver */
reg = mmio_read_32(AP807_CPU_ARO_0_CTRL_0);
reg |= (AP807_CPU_ARO_CLK_EN_MASK);
mmio_write_32(AP807_CPU_ARO_0_CTRL_0, reg);
reg = mmio_read_32(AP807_CPU_ARO_1_CTRL_0);
reg |= (AP807_CPU_ARO_CLK_EN_MASK);
mmio_write_32(AP807_CPU_ARO_1_CTRL_0, reg);
}
int ble_plat_setup(int *skip)
{
int ret;
/* Power down unused CPUs */
plat_marvell_early_cpu_powerdown();
/*
* Save the current CCU configuration and make required changes:
* - Allow access to DRAM larger than 4GB
* - Open memory access to all CPn peripherals
*/
ble_plat_mmap_config(MMAP_SAVE_AND_CONFIG);
#if PLAT_RECOVERY_IMAGE_ENABLE
	/* Check if there is a skip request to the BootROM recovery image */
if (ble_skip_current_image()) {
/* close memory access to all CPn peripherals. */
ble_plat_mmap_config(MMAP_RESTORE_SAVED);
*skip = 1;
return 0;
}
#endif
/* Do required CP-110 setups for BLE stage */
cp110_ble_init(MVEBU_CP_REGS_BASE(0));
/* Setup AVS */
ble_plat_svc_config();
/* work with PLL clock driver in AP807 */
if (ble_get_ap_type() == CHIP_ID_AP807)
aro_to_pll();
/* Do required AP setups for BLE stage */
ap_ble_init();
/* Update DRAM topology (scan DIMM SPDs) */
plat_marvell_dram_update_topology();
/* Kick it in */
ret = dram_init();
/* Restore the original CCU configuration before exit from BLE */
ble_plat_mmap_config(MMAP_RESTORE_SAVED);
return ret;
}


@@ -0,0 +1,829 @@
/*
* Copyright (C) 2018 Marvell International Ltd.
*
* SPDX-License-Identifier: BSD-3-Clause
* https://spdx.org/licenses
*/
#include <a8k_common.h>
#include <assert.h>
#include <bakery_lock.h>
#include <debug.h>
#include <delay_timer.h>
#include <cache_llc.h>
#include <console.h>
#include <gicv2.h>
#include <marvell_pm.h>
#include <mmio.h>
#include <mss_pm_ipc.h>
#include <plat_marvell.h>
#include <platform.h>
#include <plat_pm_trace.h>
#define MVEBU_PRIVATE_UID_REG 0x30
#define MVEBU_RFU_GLOBL_SW_RST 0x84
#define MVEBU_CCU_RVBAR(cpu) (MVEBU_REGS_BASE + 0x640 + (cpu * 4))
#define MVEBU_CCU_CPU_UN_RESET(cpu) (MVEBU_REGS_BASE + 0x650 + (cpu * 4))
#define MPIDR_CPU_GET(mpidr) ((mpidr) & MPIDR_CPU_MASK)
#define MPIDR_CLUSTER_GET(mpidr) MPIDR_AFFLVL1_VAL((mpidr))
#define MVEBU_GPIO_MASK(index) (1 << (index % 32))
#define MVEBU_MPP_MASK(index) (0xF << (4 * (index % 8)))
#define MVEBU_GPIO_VALUE(index, value) (value << (index % 32))
#define MVEBU_USER_CMD_0_REG (MVEBU_DRAM_MAC_BASE + 0x20)
#define MVEBU_USER_CMD_CH0_OFFSET 28
#define MVEBU_USER_CMD_CH0_MASK (1 << MVEBU_USER_CMD_CH0_OFFSET)
#define MVEBU_USER_CMD_CH0_EN (1 << MVEBU_USER_CMD_CH0_OFFSET)
#define MVEBU_USER_CMD_CS_OFFSET 24
#define MVEBU_USER_CMD_CS_MASK (0xF << MVEBU_USER_CMD_CS_OFFSET)
#define MVEBU_USER_CMD_CS_ALL (0xF << MVEBU_USER_CMD_CS_OFFSET)
#define MVEBU_USER_CMD_SR_OFFSET 6
#define MVEBU_USER_CMD_SR_MASK (0x3 << MVEBU_USER_CMD_SR_OFFSET)
#define MVEBU_USER_CMD_SR_ENTER (0x1 << MVEBU_USER_CMD_SR_OFFSET)
#define MVEBU_MC_PWR_CTRL_REG (MVEBU_DRAM_MAC_BASE + 0x54)
#define MVEBU_MC_AC_ON_DLY_OFFSET 8
#define MVEBU_MC_AC_ON_DLY_MASK (0xF << MVEBU_MC_AC_ON_DLY_OFFSET)
#define MVEBU_MC_AC_ON_DLY_DEF_VAR (8 << MVEBU_MC_AC_ON_DLY_OFFSET)
#define MVEBU_MC_AC_OFF_DLY_OFFSET 4
#define MVEBU_MC_AC_OFF_DLY_MASK (0xF << MVEBU_MC_AC_OFF_DLY_OFFSET)
#define MVEBU_MC_AC_OFF_DLY_DEF_VAR (0xC << MVEBU_MC_AC_OFF_DLY_OFFSET)
#define MVEBU_MC_PHY_AUTO_OFF_OFFSET 0
#define MVEBU_MC_PHY_AUTO_OFF_MASK (1 << MVEBU_MC_PHY_AUTO_OFF_OFFSET)
#define MVEBU_MC_PHY_AUTO_OFF_EN (1 << MVEBU_MC_PHY_AUTO_OFF_OFFSET)
/* This lock synchronizes multiple AP cores' execution with the MSS */
DEFINE_BAKERY_LOCK(pm_sys_lock);
/* Weak definitions may be overridden in specific board */
#pragma weak plat_marvell_get_pm_cfg
/* AP806 CPU power down /power up definitions */
enum CPU_ID {
CPU0,
CPU1,
CPU2,
CPU3
};
#define REG_WR_VALIDATE_TIMEOUT (2000)
#define FEATURE_DISABLE_STATUS_REG \
(MVEBU_REGS_BASE + 0x6F8230)
#define FEATURE_DISABLE_STATUS_CPU_CLUSTER_OFFSET 4
#define FEATURE_DISABLE_STATUS_CPU_CLUSTER_MASK \
(0x1 << FEATURE_DISABLE_STATUS_CPU_CLUSTER_OFFSET)
#ifdef MVEBU_SOC_AP807
#define PWRC_CPUN_CR_PWR_DN_RQ_OFFSET 1
#define PWRC_CPUN_CR_LDO_BYPASS_RDY_OFFSET 0
#else
#define PWRC_CPUN_CR_PWR_DN_RQ_OFFSET 0
#define PWRC_CPUN_CR_LDO_BYPASS_RDY_OFFSET 31
#endif
#define PWRC_CPUN_CR_REG(cpu_id) \
(MVEBU_REGS_BASE + 0x680000 + (cpu_id * 0x10))
#define PWRC_CPUN_CR_PWR_DN_RQ_MASK \
(0x1 << PWRC_CPUN_CR_PWR_DN_RQ_OFFSET)
#define PWRC_CPUN_CR_ISO_ENABLE_OFFSET 16
#define PWRC_CPUN_CR_ISO_ENABLE_MASK \
(0x1 << PWRC_CPUN_CR_ISO_ENABLE_OFFSET)
#define PWRC_CPUN_CR_LDO_BYPASS_RDY_MASK \
(0x1 << PWRC_CPUN_CR_LDO_BYPASS_RDY_OFFSET)
#define CCU_B_PRCRN_REG(cpu_id) \
(MVEBU_REGS_BASE + 0x1A50 + \
((cpu_id / 2) * (0x400)) + ((cpu_id % 2) * 4))
#define CCU_B_PRCRN_CPUPORESET_STATIC_OFFSET 0
#define CCU_B_PRCRN_CPUPORESET_STATIC_MASK \
(0x1 << CCU_B_PRCRN_CPUPORESET_STATIC_OFFSET)
/* power switch fingers */
#define AP807_PWRC_LDO_CR0_REG \
(MVEBU_REGS_BASE + 0x680000 + 0x100)
#define AP807_PWRC_LDO_CR0_OFFSET 16
#define AP807_PWRC_LDO_CR0_MASK \
(0xff << AP807_PWRC_LDO_CR0_OFFSET)
#define AP807_PWRC_LDO_CR0_VAL 0xfd
/*
* Power down CPU:
* Used to reduce power consumption and avoid unnecessary SoC temperature rise.
*/
static int plat_marvell_cpu_powerdown(int cpu_id)
{
uint32_t reg_val;
int exit_loop = REG_WR_VALIDATE_TIMEOUT;
INFO("Powering down CPU%d\n", cpu_id);
/* 1. Isolation enable */
reg_val = mmio_read_32(PWRC_CPUN_CR_REG(cpu_id));
reg_val |= 0x1 << PWRC_CPUN_CR_ISO_ENABLE_OFFSET;
mmio_write_32(PWRC_CPUN_CR_REG(cpu_id), reg_val);
/* 2. Read and check Isolation enabled - verify bit set to 1 */
do {
reg_val = mmio_read_32(PWRC_CPUN_CR_REG(cpu_id));
exit_loop--;
} while (!(reg_val & (0x1 << PWRC_CPUN_CR_ISO_ENABLE_OFFSET)) &&
exit_loop > 0);
/* 3. Switch off CPU power */
reg_val = mmio_read_32(PWRC_CPUN_CR_REG(cpu_id));
reg_val &= ~PWRC_CPUN_CR_PWR_DN_RQ_MASK;
mmio_write_32(PWRC_CPUN_CR_REG(cpu_id), reg_val);
/* 4. Read and check Switch Off - verify bit set to 0 */
exit_loop = REG_WR_VALIDATE_TIMEOUT;
do {
reg_val = mmio_read_32(PWRC_CPUN_CR_REG(cpu_id));
exit_loop--;
} while (reg_val & PWRC_CPUN_CR_PWR_DN_RQ_MASK && exit_loop > 0);
if (exit_loop <= 0)
goto cpu_poweroff_error;
/* 5. De-Assert power ready */
reg_val = mmio_read_32(PWRC_CPUN_CR_REG(cpu_id));
reg_val &= ~PWRC_CPUN_CR_LDO_BYPASS_RDY_MASK;
mmio_write_32(PWRC_CPUN_CR_REG(cpu_id), reg_val);
/* 6. Assert CPU POR reset */
reg_val = mmio_read_32(CCU_B_PRCRN_REG(cpu_id));
reg_val &= ~CCU_B_PRCRN_CPUPORESET_STATIC_MASK;
mmio_write_32(CCU_B_PRCRN_REG(cpu_id), reg_val);
/* 7. Poll until the CPU POR reset assertion takes effect */
exit_loop = REG_WR_VALIDATE_TIMEOUT;
do {
reg_val = mmio_read_32(CCU_B_PRCRN_REG(cpu_id));
exit_loop--;
} while (reg_val & CCU_B_PRCRN_CPUPORESET_STATIC_MASK && exit_loop > 0);
if (exit_loop <= 0)
goto cpu_poweroff_error;
INFO("Successfully powered down CPU%d\n", cpu_id);
return 0;
cpu_poweroff_error:
ERROR("ERROR: Can't power down CPU%d\n", cpu_id);
return -1;
}
/*
* Power down CPUs 1-3 at early boot stage,
* to reduce power consumption and SoC temperature.
* This is triggered by BLE prior to DDR initialization.
*
* Note:
* All CPUs will be powered up by plat_marvell_cpu_powerup on Linux boot stage,
* which is triggered by PSCI ops (pwr_domain_on).
*/
int plat_marvell_early_cpu_powerdown(void)
{
uint32_t cpu_cluster_status =
mmio_read_32(FEATURE_DISABLE_STATUS_REG) &
FEATURE_DISABLE_STATUS_CPU_CLUSTER_MASK;
/* if cpu_cluster_status bit is set,
* that means we have only a single cluster
*/
int cluster_count = cpu_cluster_status ? 1 : 2;
INFO("Powering off unused CPUs\n");
/* CPU1 is in AP806 cluster-0, which always exists, so power it down */
if (plat_marvell_cpu_powerdown(CPU1) == -1)
return -1;
/*
* CPU2-3 are in AP806 2nd cluster (cluster-1),
* which doesn't exist in dual-core systems,
* so check whether this is a dual-core (single cluster)
* or quad-core (2 clusters) system
*/
if (cluster_count == 2) {
/* CPU2-3 are part of 2nd cluster */
if (plat_marvell_cpu_powerdown(CPU2) == -1)
return -1;
if (plat_marvell_cpu_powerdown(CPU3) == -1)
return -1;
}
return 0;
}
/*
* Power up CPU - part of Linux boot stage
*/
static int plat_marvell_cpu_powerup(u_register_t mpidr)
{
uint32_t reg_val;
int cpu_id = MPIDR_CPU_GET(mpidr),
cluster = MPIDR_CLUSTER_GET(mpidr);
int exit_loop = REG_WR_VALIDATE_TIMEOUT;
/* calculate absolute CPU ID */
cpu_id = cluster * PLAT_MARVELL_CLUSTER_CORE_COUNT + cpu_id;
INFO("Powering on CPU%d\n", cpu_id);
#ifdef MVEBU_SOC_AP807
/* Activate 2 power switch fingers */
reg_val = mmio_read_32(AP807_PWRC_LDO_CR0_REG);
reg_val &= ~(AP807_PWRC_LDO_CR0_MASK);
reg_val |= (AP807_PWRC_LDO_CR0_VAL << AP807_PWRC_LDO_CR0_OFFSET);
mmio_write_32(AP807_PWRC_LDO_CR0_REG, reg_val);
udelay(100);
#endif
/* 1. Switch CPU power ON */
reg_val = mmio_read_32(PWRC_CPUN_CR_REG(cpu_id));
reg_val |= 0x1 << PWRC_CPUN_CR_PWR_DN_RQ_OFFSET;
mmio_write_32(PWRC_CPUN_CR_REG(cpu_id), reg_val);
/* 2. Wait for CPU on, up to 100 uSec: */
udelay(100);
/* 3. Assert power ready */
reg_val = mmio_read_32(PWRC_CPUN_CR_REG(cpu_id));
reg_val |= 0x1 << PWRC_CPUN_CR_LDO_BYPASS_RDY_OFFSET;
mmio_write_32(PWRC_CPUN_CR_REG(cpu_id), reg_val);
/* 4. Read & Validate power ready
* used in order to generate 16 Host CPU cycles
*/
do {
reg_val = mmio_read_32(PWRC_CPUN_CR_REG(cpu_id));
exit_loop--;
} while (!(reg_val & (0x1 << PWRC_CPUN_CR_LDO_BYPASS_RDY_OFFSET)) &&
exit_loop > 0);
if (exit_loop <= 0)
goto cpu_poweron_error;
/* 5. Isolation disable */
reg_val = mmio_read_32(PWRC_CPUN_CR_REG(cpu_id));
reg_val &= ~PWRC_CPUN_CR_ISO_ENABLE_MASK;
mmio_write_32(PWRC_CPUN_CR_REG(cpu_id), reg_val);
/* 6. Read and check Isolation disabled - verify bit cleared to 0 */
exit_loop = REG_WR_VALIDATE_TIMEOUT;
do {
reg_val = mmio_read_32(PWRC_CPUN_CR_REG(cpu_id));
exit_loop--;
} while ((reg_val & (0x1 << PWRC_CPUN_CR_ISO_ENABLE_OFFSET)) &&
exit_loop > 0);
/* 7. De Assert CPU POR reset & Core reset */
reg_val = mmio_read_32(CCU_B_PRCRN_REG(cpu_id));
reg_val |= 0x1 << CCU_B_PRCRN_CPUPORESET_STATIC_OFFSET;
mmio_write_32(CCU_B_PRCRN_REG(cpu_id), reg_val);
/* 8. Read & Validate CPU POR reset */
exit_loop = REG_WR_VALIDATE_TIMEOUT;
do {
reg_val = mmio_read_32(CCU_B_PRCRN_REG(cpu_id));
exit_loop--;
} while (!(reg_val & (0x1 << CCU_B_PRCRN_CPUPORESET_STATIC_OFFSET)) &&
exit_loop > 0);
if (exit_loop <= 0)
goto cpu_poweron_error;
INFO("Successfully powered on CPU%d\n", cpu_id);
return 0;
cpu_poweron_error:
ERROR("ERROR: Can't power up CPU%d\n", cpu_id);
return -1;
}
static int plat_marvell_cpu_on(u_register_t mpidr)
{
int cpu_id;
int cluster;
/* Set barrier */
dsbsy();
/* Get cpu number - use CPU ID */
cpu_id = MPIDR_CPU_GET(mpidr);
/* Get cluster number - use affinity level 1 */
cluster = MPIDR_CLUSTER_GET(mpidr);
/* Set CPU private UID */
mmio_write_32(MVEBU_REGS_BASE + MVEBU_PRIVATE_UID_REG, cluster + 0x4);
/* Set the cpu start address to BL1 entry point (align to 0x10000) */
mmio_write_32(MVEBU_CCU_RVBAR(cpu_id),
PLAT_MARVELL_CPU_ENTRY_ADDR >> 16);
/* Get the cpu out of reset */
mmio_write_32(MVEBU_CCU_CPU_UN_RESET(cpu_id), 0x10001);
return 0;
}
/*****************************************************************************
* A8K handler called to check the validity of the power state
* parameter.
*****************************************************************************
*/
static int a8k_validate_power_state(unsigned int power_state,
psci_power_state_t *req_state)
{
int pstate = psci_get_pstate_type(power_state);
int pwr_lvl = psci_get_pstate_pwrlvl(power_state);
int i;
if (pwr_lvl > PLAT_MAX_PWR_LVL)
return PSCI_E_INVALID_PARAMS;
/* Sanity check the requested state */
if (pstate == PSTATE_TYPE_STANDBY) {
/*
* It's possible to enter standby only on power level 0
* Ignore any other power level.
*/
if (pwr_lvl != MARVELL_PWR_LVL0)
return PSCI_E_INVALID_PARAMS;
req_state->pwr_domain_state[MARVELL_PWR_LVL0] =
MARVELL_LOCAL_STATE_RET;
} else {
for (i = MARVELL_PWR_LVL0; i <= pwr_lvl; i++)
req_state->pwr_domain_state[i] =
MARVELL_LOCAL_STATE_OFF;
}
/*
* We expect the 'state id' to be zero.
*/
if (psci_get_pstate_id(power_state))
return PSCI_E_INVALID_PARAMS;
return PSCI_E_SUCCESS;
}
/*****************************************************************************
* A8K handler called when a CPU is about to enter standby.
*****************************************************************************
*/
static void a8k_cpu_standby(plat_local_state_t cpu_state)
{
ERROR("%s: needs to be implemented\n", __func__);
panic();
}
/*****************************************************************************
* A8K handler called when a power domain is about to be turned on. The
* mpidr determines the CPU to be turned on.
*****************************************************************************
*/
static int a8k_pwr_domain_on(u_register_t mpidr)
{
/* Power up CPU (CPUs 1-3 are powered off at start of BLE) */
plat_marvell_cpu_powerup(mpidr);
if (is_pm_fw_running()) {
unsigned int target =
((mpidr & 0xFF) + (((mpidr >> 8) & 0xFF) * 2));
/*
* pm system synchronization - used to synchronize
* multiple core access to MSS
*/
bakery_lock_get(&pm_sys_lock);
/* send CPU ON IPC Message to MSS */
mss_pm_ipc_msg_send(target, PM_IPC_MSG_CPU_ON, 0);
/* trigger IPC message to MSS */
mss_pm_ipc_msg_trigger();
/* pm system synchronization */
bakery_lock_release(&pm_sys_lock);
/* trace message */
PM_TRACE(TRACE_PWR_DOMAIN_ON | target);
} else {
/* proprietary CPU ON execution flow */
plat_marvell_cpu_on(mpidr);
}
return 0;
}
/*****************************************************************************
* A8K handler called to validate the entry point.
*****************************************************************************
*/
static int a8k_validate_ns_entrypoint(uintptr_t entrypoint)
{
return PSCI_E_SUCCESS;
}
/*****************************************************************************
* A8K handler called when a power domain is about to be turned off. The
* target_state encodes the power state that each level should transition to.
*****************************************************************************
*/
static void a8k_pwr_domain_off(const psci_power_state_t *target_state)
{
if (is_pm_fw_running()) {
unsigned int idx = plat_my_core_pos();
/* Prevent interrupts from spuriously waking up this cpu */
gicv2_cpuif_disable();
/* pm system synchronization - used to synchronize multiple
* core access to MSS
*/
bakery_lock_get(&pm_sys_lock);
/* send CPU OFF IPC Message to MSS */
mss_pm_ipc_msg_send(idx, PM_IPC_MSG_CPU_OFF, target_state);
/* trigger IPC message to MSS */
mss_pm_ipc_msg_trigger();
/* pm system synchronization */
bakery_lock_release(&pm_sys_lock);
/* trace message */
PM_TRACE(TRACE_PWR_DOMAIN_OFF);
} else {
INFO("%s: is not supported without SCP\n", __func__);
}
}
/* Get PM config to power off the SoC */
void *plat_marvell_get_pm_cfg(void)
{
return NULL;
}
/*
* This function should be called on restore from
* "suspend to RAM" state when the execution flow
* has to bypass BootROM image to RAM copy and speed up
* the system recovery
*
*/
static void plat_marvell_exit_bootrom(void)
{
marvell_exit_bootrom(PLAT_MARVELL_TRUSTED_ROM_BASE);
}
/*
* Prepare for the power off of the system via GPIO
*/
static void plat_marvell_power_off_gpio(struct power_off_method *pm_cfg,
register_t *gpio_addr,
register_t *gpio_data)
{
unsigned int gpio;
unsigned int idx;
unsigned int shift;
unsigned int reg;
unsigned int addr;
gpio_info_t *info;
unsigned int tog_bits;
assert((pm_cfg->cfg.gpio.pin_count < PMIC_GPIO_MAX_NUMBER) &&
(pm_cfg->cfg.gpio.step_count < PMIC_GPIO_MAX_TOGGLE_STEP));
/* Prepare GPIOs for PMIC */
for (gpio = 0; gpio < pm_cfg->cfg.gpio.pin_count; gpio++) {
info = &pm_cfg->cfg.gpio.info[gpio];
/* Set PMIC GPIO to output mode */
reg = mmio_read_32(MVEBU_CP_GPIO_DATA_OUT_EN(
info->cp_index, info->gpio_index));
mmio_write_32(MVEBU_CP_GPIO_DATA_OUT_EN(
info->cp_index, info->gpio_index),
reg & ~MVEBU_GPIO_MASK(info->gpio_index));
/* Set the appropriate MPP to GPIO mode */
reg = mmio_read_32(MVEBU_PM_MPP_REGS(info->cp_index,
info->gpio_index));
mmio_write_32(MVEBU_PM_MPP_REGS(info->cp_index,
info->gpio_index),
reg & ~MVEBU_MPP_MASK(info->gpio_index));
}
/* Wait for MPP & GPIO pre-configurations done */
mdelay(pm_cfg->cfg.gpio.delay_ms);
/* Toggle the GPIO values, and leave final step to be triggered
* after DDR self-refresh is enabled
*/
for (idx = 0; idx < pm_cfg->cfg.gpio.step_count; idx++) {
tog_bits = pm_cfg->cfg.gpio.seq[idx];
/* The GPIOs must be within same GPIO register,
* thus could get the original value by first GPIO
*/
info = &pm_cfg->cfg.gpio.info[0];
reg = mmio_read_32(MVEBU_CP_GPIO_DATA_OUT(
info->cp_index, info->gpio_index));
addr = MVEBU_CP_GPIO_DATA_OUT(info->cp_index, info->gpio_index);
for (gpio = 0; gpio < pm_cfg->cfg.gpio.pin_count; gpio++) {
shift = pm_cfg->cfg.gpio.info[gpio].gpio_index % 32;
if (GPIO_LOW == (tog_bits & (1 << gpio)))
reg &= ~(1 << shift);
else
reg |= (1 << shift);
}
/* Set the GPIO register, for last step just store
* register address and values to system registers
*/
if (idx < pm_cfg->cfg.gpio.step_count - 1) {
mmio_write_32(MVEBU_CP_GPIO_DATA_OUT(
info->cp_index, info->gpio_index), reg);
mdelay(pm_cfg->cfg.gpio.delay_ms);
} else {
/* Save GPIO register and address values for
* finishing the power down operation later
*/
*gpio_addr = addr;
*gpio_data = reg;
}
}
}
/*
* Prepare for the power off of the system
*/
static void plat_marvell_power_off_prepare(struct power_off_method *pm_cfg,
register_t *addr, register_t *data)
{
switch (pm_cfg->type) {
case PMIC_GPIO:
plat_marvell_power_off_gpio(pm_cfg, addr, data);
break;
default:
break;
}
}
/*****************************************************************************
* A8K handler called when a power domain is about to be suspended. The
* target_state encodes the power state that each level should transition to.
*****************************************************************************
*/
static void a8k_pwr_domain_suspend(const psci_power_state_t *target_state)
{
if (is_pm_fw_running()) {
unsigned int idx;
/* Prevent interrupts from spuriously waking up this cpu */
gicv2_cpuif_disable();
idx = plat_my_core_pos();
/* pm system synchronization - used to synchronize multiple
* core access to MSS
*/
bakery_lock_get(&pm_sys_lock);
/* send CPU Suspend IPC Message to MSS */
mss_pm_ipc_msg_send(idx, PM_IPC_MSG_CPU_SUSPEND, target_state);
/* trigger IPC message to MSS */
mss_pm_ipc_msg_trigger();
/* pm system synchronization */
bakery_lock_release(&pm_sys_lock);
/* trace message */
PM_TRACE(TRACE_PWR_DOMAIN_SUSPEND);
} else {
uintptr_t *mailbox = (void *)PLAT_MARVELL_MAILBOX_BASE;
INFO("Suspending to RAM\n");
/* Prevent interrupts from spuriously waking up this cpu */
gicv2_cpuif_disable();
mailbox[MBOX_IDX_SUSPEND_MAGIC] = MVEBU_MAILBOX_SUSPEND_STATE;
mailbox[MBOX_IDX_ROM_EXIT_ADDR] = (uintptr_t)&plat_marvell_exit_bootrom;
#if PLAT_MARVELL_SHARED_RAM_CACHED
flush_dcache_range(PLAT_MARVELL_MAILBOX_BASE +
MBOX_IDX_SUSPEND_MAGIC * sizeof(uintptr_t),
2 * sizeof(uintptr_t));
#endif
/* Flush and disable LLC before going off-power */
llc_disable(0);
isb();
/*
* Do not halt here!
* The function must return for allowing the caller function
* psci_power_up_finish() to do the proper context saving and
* to release the CPU lock.
*/
}
}
/*****************************************************************************
* A8K handler called when a power domain has just been powered on after
* being turned off earlier. The target_state encodes the low power state that
* each level has woken up from.
*****************************************************************************
*/
static void a8k_pwr_domain_on_finish(const psci_power_state_t *target_state)
{
/* arch specific configuration */
marvell_psci_arch_init(0);
/* Interrupt initialization */
gicv2_pcpu_distif_init();
gicv2_cpuif_enable();
if (is_pm_fw_running()) {
/* trace message */
PM_TRACE(TRACE_PWR_DOMAIN_ON_FINISH);
}
}
/*****************************************************************************
* A8K handler called when a power domain has just been powered on after
* having been suspended earlier. The target_state encodes the low power state
* that each level has woken up from.
* TODO: At the moment we reuse the on finisher and reinitialize the secure
* context. Need to implement a separate suspend finisher.
*****************************************************************************
*/
static void a8k_pwr_domain_suspend_finish(
const psci_power_state_t *target_state)
{
if (is_pm_fw_running()) {
/* arch specific configuration */
marvell_psci_arch_init(0);
/* Interrupt initialization */
gicv2_cpuif_enable();
/* trace message */
PM_TRACE(TRACE_PWR_DOMAIN_SUSPEND_FINISH);
} else {
uintptr_t *mailbox = (void *)PLAT_MARVELL_MAILBOX_BASE;
/* Only the primary CPU requires platform init */
if (!plat_my_core_pos()) {
/* Initialize the console to provide
* early debug support
*/
console_init(PLAT_MARVELL_BOOT_UART_BASE,
PLAT_MARVELL_BOOT_UART_CLK_IN_HZ,
MARVELL_CONSOLE_BAUDRATE);
bl31_plat_arch_setup();
marvell_bl31_platform_setup();
/*
* Remove suspend to RAM marker from the mailbox
* for treating a regular reset as a cold boot
*/
mailbox[MBOX_IDX_SUSPEND_MAGIC] = 0;
mailbox[MBOX_IDX_ROM_EXIT_ADDR] = 0;
#if PLAT_MARVELL_SHARED_RAM_CACHED
flush_dcache_range(PLAT_MARVELL_MAILBOX_BASE +
MBOX_IDX_SUSPEND_MAGIC * sizeof(uintptr_t),
2 * sizeof(uintptr_t));
#endif
}
}
}
/*****************************************************************************
* This handler is called by the PSCI implementation during the `SYSTEM_SUSPEND`
* call to get the `power_state` parameter. This allows the platform to encode
* the appropriate State-ID field within the `power_state` parameter which can
* be utilized in `pwr_domain_suspend()` to suspend to system affinity level.
*****************************************************************************
*/
static void a8k_get_sys_suspend_power_state(psci_power_state_t *req_state)
{
/* lower affinities use PLAT_MAX_OFF_STATE */
for (int i = MPIDR_AFFLVL0; i <= PLAT_MAX_PWR_LVL; i++)
req_state->pwr_domain_state[i] = PLAT_MAX_OFF_STATE;
}
static void
__dead2 a8k_pwr_domain_pwr_down_wfi(const psci_power_state_t *target_state)
{
struct power_off_method *pm_cfg;
unsigned int srcmd;
unsigned int sdram_reg;
register_t gpio_data = 0, gpio_addr = 0;
if (is_pm_fw_running()) {
psci_power_down_wfi();
panic();
}
pm_cfg = (struct power_off_method *)plat_marvell_get_pm_cfg();
/* Prepare for power off */
plat_marvell_power_off_prepare(pm_cfg, &gpio_addr, &gpio_data);
/* First step to enable DDR self-refresh
* to keep the data during suspend
*/
mmio_write_32(MVEBU_MC_PWR_CTRL_REG, 0x8C1);
/* Save DDR self-refresh second step register
* and value to be issued later
*/
sdram_reg = MVEBU_USER_CMD_0_REG;
srcmd = mmio_read_32(sdram_reg);
srcmd &= ~(MVEBU_USER_CMD_CH0_MASK | MVEBU_USER_CMD_CS_MASK |
MVEBU_USER_CMD_SR_MASK);
srcmd |= (MVEBU_USER_CMD_CH0_EN | MVEBU_USER_CMD_CS_ALL |
MVEBU_USER_CMD_SR_ENTER);
/*
* The wait for DRAM is done using register accesses only.
* At this stage any access to DRAM (procedure call) will
* release it from the self-refresh mode
*/
__asm__ volatile (
/* Align to a cache line */
" .balign 64\n\t"
/* Enter self refresh */
" str %[srcmd], [%[sdram_reg]]\n\t"
/*
* Wait 100 cycles for DDR to enter self refresh, by
* doing 50 times two instructions.
*/
" mov x1, #50\n\t"
"1: subs x1, x1, #1\n\t"
" bne 1b\n\t"
/* Issue the command to trigger the SoC power off */
" str %[gpio_data], [%[gpio_addr]]\n\t"
/* Trap the processor */
" b .\n\t"
: : [srcmd] "r" (srcmd), [sdram_reg] "r" (sdram_reg),
[gpio_addr] "r" (gpio_addr), [gpio_data] "r" (gpio_data)
: "x1");
panic();
}
/*****************************************************************************
* A8K handlers to shutdown/reboot the system
*****************************************************************************
*/
static void __dead2 a8k_system_off(void)
{
ERROR("%s: needs to be implemented\n", __func__);
panic();
}
void plat_marvell_system_reset(void)
{
mmio_write_32(MVEBU_RFU_BASE + MVEBU_RFU_GLOBL_SW_RST, 0x0);
}
static void __dead2 a8k_system_reset(void)
{
plat_marvell_system_reset();
/* we shouldn't get to this point */
panic();
}
/*****************************************************************************
* Export the platform handlers via plat_arm_psci_pm_ops. The ARM Standard
* platform layer will take care of registering the handlers with PSCI.
*****************************************************************************
*/
const plat_psci_ops_t plat_arm_psci_pm_ops = {
.cpu_standby = a8k_cpu_standby,
.pwr_domain_on = a8k_pwr_domain_on,
.pwr_domain_off = a8k_pwr_domain_off,
.pwr_domain_suspend = a8k_pwr_domain_suspend,
.pwr_domain_on_finish = a8k_pwr_domain_on_finish,
.get_sys_suspend_power_state = a8k_get_sys_suspend_power_state,
.pwr_domain_suspend_finish = a8k_pwr_domain_suspend_finish,
.pwr_domain_pwr_down_wfi = a8k_pwr_domain_pwr_down_wfi,
.system_off = a8k_system_off,
.system_reset = a8k_system_reset,
.validate_power_state = a8k_validate_power_state,
.validate_ns_entrypoint = a8k_validate_ns_entrypoint
};


@@ -0,0 +1,91 @@
/*
* Copyright (C) 2018 Marvell International Ltd.
*
* SPDX-License-Identifier: BSD-3-Clause
* https://spdx.org/licenses
*/
#include <mmio.h>
#include <mss_mem.h>
#include <platform.h>
#include <plat_pm_trace.h>
#ifdef PM_TRACE_ENABLE
/* core trace APIs */
core_trace_func funcTbl[PLATFORM_CORE_COUNT] = {
pm_core_0_trace,
pm_core_1_trace,
pm_core_2_trace,
pm_core_3_trace};
/*****************************************************************************
* pm_core_0_trace
* pm_core_1_trace
* pm_core_2_trace
* pm_core_3_trace
*
* These functions write trace info into the per-core cyclic trace queue
* in MSS SRAM memory space
*****************************************************************************
*/
void pm_core_0_trace(unsigned int trace)
{
unsigned int current_position_core_0 =
mmio_read_32(AP_MSS_ATF_CORE_0_CTRL_BASE);
mmio_write_32((AP_MSS_ATF_CORE_0_INFO_BASE +
(current_position_core_0 * AP_MSS_ATF_CORE_ENTRY_SIZE)),
mmio_read_32(AP_MSS_TIMER_BASE));
mmio_write_32((AP_MSS_ATF_CORE_0_INFO_TRACE +
(current_position_core_0 * AP_MSS_ATF_CORE_ENTRY_SIZE)),
trace);
mmio_write_32(AP_MSS_ATF_CORE_0_CTRL_BASE,
((current_position_core_0 + 1) &
AP_MSS_ATF_TRACE_SIZE_MASK));
}
void pm_core_1_trace(unsigned int trace)
{
unsigned int current_position_core_1 =
mmio_read_32(AP_MSS_ATF_CORE_1_CTRL_BASE);
mmio_write_32((AP_MSS_ATF_CORE_1_INFO_BASE +
(current_position_core_1 * AP_MSS_ATF_CORE_ENTRY_SIZE)),
mmio_read_32(AP_MSS_TIMER_BASE));
mmio_write_32((AP_MSS_ATF_CORE_1_INFO_TRACE +
(current_position_core_1 * AP_MSS_ATF_CORE_ENTRY_SIZE)),
trace);
mmio_write_32(AP_MSS_ATF_CORE_1_CTRL_BASE,
((current_position_core_1 + 1) &
AP_MSS_ATF_TRACE_SIZE_MASK));
}
void pm_core_2_trace(unsigned int trace)
{
unsigned int current_position_core_2 =
mmio_read_32(AP_MSS_ATF_CORE_2_CTRL_BASE);
mmio_write_32((AP_MSS_ATF_CORE_2_INFO_BASE +
(current_position_core_2 * AP_MSS_ATF_CORE_ENTRY_SIZE)),
mmio_read_32(AP_MSS_TIMER_BASE));
mmio_write_32((AP_MSS_ATF_CORE_2_INFO_TRACE +
(current_position_core_2 * AP_MSS_ATF_CORE_ENTRY_SIZE)),
trace);
mmio_write_32(AP_MSS_ATF_CORE_2_CTRL_BASE,
((current_position_core_2 + 1) &
AP_MSS_ATF_TRACE_SIZE_MASK));
}
void pm_core_3_trace(unsigned int trace)
{
unsigned int current_position_core_3 =
mmio_read_32(AP_MSS_ATF_CORE_3_CTRL_BASE);
mmio_write_32((AP_MSS_ATF_CORE_3_INFO_BASE +
(current_position_core_3 * AP_MSS_ATF_CORE_ENTRY_SIZE)),
mmio_read_32(AP_MSS_TIMER_BASE));
mmio_write_32((AP_MSS_ATF_CORE_3_INFO_TRACE +
(current_position_core_3 * AP_MSS_ATF_CORE_ENTRY_SIZE)),
trace);
mmio_write_32(AP_MSS_ATF_CORE_3_CTRL_BASE,
((current_position_core_3 + 1) &
AP_MSS_ATF_TRACE_SIZE_MASK));
}
#endif /* PM_TRACE_ENABLE */


@@ -0,0 +1,129 @@
/*
* Copyright (C) 2018 Marvell International Ltd.
*
* SPDX-License-Identifier: BSD-3-Clause
* https://spdx.org/licenses
*/
#include <debug.h>
#include <delay_timer.h>
#include <mmio.h>
#include <mvebu_def.h>
#include <thermal.h>
#define THERMAL_TIMEOUT 1200
#define THERMAL_SEN_CTRL_LSB_STRT_OFFSET 0
#define THERMAL_SEN_CTRL_LSB_STRT_MASK \
(0x1 << THERMAL_SEN_CTRL_LSB_STRT_OFFSET)
#define THERMAL_SEN_CTRL_LSB_RST_OFFSET 1
#define THERMAL_SEN_CTRL_LSB_RST_MASK \
(0x1 << THERMAL_SEN_CTRL_LSB_RST_OFFSET)
#define THERMAL_SEN_CTRL_LSB_EN_OFFSET 2
#define THERMAL_SEN_CTRL_LSB_EN_MASK \
(0x1 << THERMAL_SEN_CTRL_LSB_EN_OFFSET)
#define THERMAL_SEN_CTRL_STATS_VALID_OFFSET 16
#define THERMAL_SEN_CTRL_STATS_VALID_MASK \
(0x1 << THERMAL_SEN_CTRL_STATS_VALID_OFFSET)
#define THERMAL_SEN_CTRL_STATS_TEMP_OUT_OFFSET 0
#define THERMAL_SEN_CTRL_STATS_TEMP_OUT_MASK \
(0x3FF << THERMAL_SEN_CTRL_STATS_TEMP_OUT_OFFSET)
#define THERMAL_SEN_OUTPUT_MSB 512
#define THERMAL_SEN_OUTPUT_COMP 1024
struct tsen_regs {
uint32_t ext_tsen_ctrl_lsb;
uint32_t ext_tsen_ctrl_msb;
uint32_t ext_tsen_status;
};
static int ext_tsen_probe(struct tsen_config *tsen_cfg)
{
uint32_t reg, timeout = 0;
struct tsen_regs *base;
if (tsen_cfg == NULL || tsen_cfg->regs_base == NULL) {
ERROR("initial thermal sensor configuration is missing\n");
return -1;
}
base = (struct tsen_regs *)tsen_cfg->regs_base;
INFO("initializing thermal sensor\n");
/* initialize thermal sensor hardware reset once */
reg = mmio_read_32((uintptr_t)&base->ext_tsen_ctrl_lsb);
reg &= ~THERMAL_SEN_CTRL_LSB_RST_MASK; /* de-assert TSEN_RESET */
reg |= THERMAL_SEN_CTRL_LSB_EN_MASK; /* set TSEN_EN to 1 */
reg |= THERMAL_SEN_CTRL_LSB_STRT_MASK; /* set TSEN_START to 1 */
mmio_write_32((uintptr_t)&base->ext_tsen_ctrl_lsb, reg);
reg = mmio_read_32((uintptr_t)&base->ext_tsen_status);
while ((reg & THERMAL_SEN_CTRL_STATS_VALID_MASK) == 0 &&
timeout < THERMAL_TIMEOUT) {
udelay(100);
reg = mmio_read_32((uintptr_t)&base->ext_tsen_status);
timeout++;
}
if ((reg & THERMAL_SEN_CTRL_STATS_VALID_MASK) == 0) {
ERROR("thermal sensor is not ready\n");
return -1;
}
tsen_cfg->tsen_ready = 1;
VERBOSE("thermal sensor was initialized\n");
return 0;
}
static int ext_tsen_read(struct tsen_config *tsen_cfg, int *temp)
{
uint32_t reg;
struct tsen_regs *base;
if (tsen_cfg == NULL || !tsen_cfg->tsen_ready) {
ERROR("thermal sensor was not initialized\n");
return -1;
}
base = (struct tsen_regs *)tsen_cfg->regs_base;
reg = mmio_read_32((uintptr_t)&base->ext_tsen_status);
reg = ((reg & THERMAL_SEN_CTRL_STATS_TEMP_OUT_MASK) >>
THERMAL_SEN_CTRL_STATS_TEMP_OUT_OFFSET);
/*
* TSEN output format is signed as a 2s complement number
* ranging from -512 to +511. When the MSB is set, the
* two's complement value must be computed
*/
if (reg >= THERMAL_SEN_OUTPUT_MSB)
reg -= THERMAL_SEN_OUTPUT_COMP;
if (tsen_cfg->tsen_divisor == 0) {
ERROR("thermal sensor divisor cannot be zero\n");
return -1;
}
*temp = ((tsen_cfg->tsen_gain * ((int)reg)) +
tsen_cfg->tsen_offset) / tsen_cfg->tsen_divisor;
return 0;
}
static struct tsen_config tsen_cfg = {
.tsen_offset = 153400,
.tsen_gain = 425,
.tsen_divisor = 1000,
.tsen_ready = 0,
.regs_base = (void *)MVEBU_AP_EXT_TSEN_BASE,
.ptr_tsen_probe = ext_tsen_probe,
.ptr_tsen_read = ext_tsen_read
};
struct tsen_config *marvell_thermal_config_get(void)
{
return &tsen_cfg;
}


@@ -0,0 +1,135 @@
/*
* Copyright (C) 2018 Marvell International Ltd.
*
* SPDX-License-Identifier: BSD-3-Clause
* https://spdx.org/licenses
*/
#include <arch.h>
#include <arch_helpers.h>
#include <assert.h>
#include <debug.h>
#include <mmio.h>
#include <plat_marvell.h>
#include <platform_def.h>
#include <xlat_tables.h>
/* Weak definitions may be overridden in specific ARM standard platform */
#pragma weak plat_get_ns_image_entrypoint
#pragma weak plat_marvell_get_mmap
/*
* Set up the page tables for the generic and platform-specific memory regions.
* The extents of the generic memory regions are specified by the function
* arguments and consist of:
* - Trusted SRAM seen by the BL image;
* - Code section;
* - Read-only data section;
* - Coherent memory region, if applicable.
*/
void marvell_setup_page_tables(uintptr_t total_base,
size_t total_size,
uintptr_t code_start,
uintptr_t code_limit,
uintptr_t rodata_start,
uintptr_t rodata_limit
#if USE_COHERENT_MEM
,
uintptr_t coh_start,
uintptr_t coh_limit
#endif
)
{
/*
* Map the Trusted SRAM with appropriate memory attributes.
* Subsequent mappings will adjust the attributes for specific regions.
*/
VERBOSE("Trusted SRAM seen by this BL image: %p - %p\n",
(void *) total_base, (void *) (total_base + total_size));
mmap_add_region(total_base, total_base,
total_size,
MT_MEMORY | MT_RW | MT_SECURE);
/* Re-map the code section */
VERBOSE("Code region: %p - %p\n",
(void *) code_start, (void *) code_limit);
mmap_add_region(code_start, code_start,
code_limit - code_start,
MT_CODE | MT_SECURE);
/* Re-map the read-only data section */
VERBOSE("Read-only data region: %p - %p\n",
(void *) rodata_start, (void *) rodata_limit);
mmap_add_region(rodata_start, rodata_start,
rodata_limit - rodata_start,
MT_RO_DATA | MT_SECURE);
#if USE_COHERENT_MEM
/* Re-map the coherent memory region */
VERBOSE("Coherent region: %p - %p\n",
(void *) coh_start, (void *) coh_limit);
mmap_add_region(coh_start, coh_start,
coh_limit - coh_start,
MT_DEVICE | MT_RW | MT_SECURE);
#endif
/* Now (re-)map the platform-specific memory regions */
mmap_add(plat_marvell_get_mmap());
/* Create the page tables to reflect the above mappings */
init_xlat_tables();
}
unsigned long plat_get_ns_image_entrypoint(void)
{
return PLAT_MARVELL_NS_IMAGE_OFFSET;
}
/*****************************************************************************
* Gets SPSR for BL32 entry
*****************************************************************************
*/
uint32_t marvell_get_spsr_for_bl32_entry(void)
{
/*
* The Secure Payload Dispatcher service is responsible for
* setting the SPSR prior to entry into the BL32 image.
*/
return 0;
}
/*****************************************************************************
* Gets SPSR for BL33 entry
*****************************************************************************
*/
uint32_t marvell_get_spsr_for_bl33_entry(void)
{
unsigned long el_status;
unsigned int mode;
uint32_t spsr;
/* Figure out what mode we enter the non-secure world in */
el_status = read_id_aa64pfr0_el1() >> ID_AA64PFR0_EL2_SHIFT;
el_status &= ID_AA64PFR0_ELX_MASK;
mode = (el_status) ? MODE_EL2 : MODE_EL1;
/*
* TODO: Consider the possibility of specifying the SPSR in
* the FIP ToC and allowing the platform to have a say as
* well.
*/
spsr = SPSR_64(mode, MODE_SP_ELX, DISABLE_ALL_EXCEPTIONS);
return spsr;
}
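The SPSR computed above packs the target EL, the stack-pointer selection, and the DAIF mask into one word. A minimal C restatement of the `SPSR_64()` packing (field positions mirror TF-A's `arch.h`; this is an illustrative sketch, not the project header):

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative SPSR_64() field packing (sketch only):
 * bit 0      - SP selection (0 = SP_EL0 "t" variant, 1 = SP_ELx "h" variant)
 * bits [3:2] - target exception level
 * bits [9:6] - DAIF mask (0xf = all exceptions disabled)
 */
static uint32_t spsr_64(uint32_t el, uint32_t sp, uint32_t daif)
{
	return (sp & 0x1u) | ((el & 0x3u) << 2) | ((daif & 0xfu) << 6);
}
```

With `el = 2` (EL2), `sp = 1` (SP_ELx) and all exceptions masked (`0xf`), this yields `0x3c9` (EL2h); the EL1 path yields `0x3c5` (EL1h).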
/*****************************************************************************
* Returns ARM platform specific memory map regions.
*****************************************************************************
*/
const mmap_region_t *plat_marvell_get_mmap(void)
{
return plat_marvell_mmap;
}


@ -0,0 +1,223 @@
/*
* Copyright (C) 2018 Marvell International Ltd.
*
* SPDX-License-Identifier: BSD-3-Clause
* https://spdx.org/licenses
*/
#include <asm_macros.S>
#include <cortex_a72.h>
#include <marvell_def.h>
#include <platform_def.h>
#ifndef PLAT_a3700
#include <ccu.h>
#include <cache_llc.h>
#endif
.weak plat_marvell_calc_core_pos
.weak plat_my_core_pos
.globl plat_crash_console_init
.globl plat_crash_console_putc
.globl platform_mem_init
.globl disable_mmu_dcache
.globl invalidate_tlb_all
.globl platform_unmap_sram
.globl disable_sram
.globl disable_icache
.globl invalidate_icache_all
.globl marvell_exit_bootrom
.globl ca72_l2_enable_unique_clean
/* -----------------------------------------------------
* unsigned int plat_my_core_pos(void)
* This function uses the plat_marvell_calc_core_pos()
* definition to get the index of the calling CPU.
* -----------------------------------------------------
*/
func plat_my_core_pos
mrs x0, mpidr_el1
b plat_marvell_calc_core_pos
endfunc plat_my_core_pos
/* -----------------------------------------------------
* unsigned int plat_marvell_calc_core_pos(uint64_t mpidr)
* Helper function to calculate the core position.
* With this function: CorePos = (ClusterId * 2) +
* CoreId
* -----------------------------------------------------
*/
func plat_marvell_calc_core_pos
and x1, x0, #MPIDR_CPU_MASK
and x0, x0, #MPIDR_CLUSTER_MASK
add x0, x1, x0, LSR #7
ret
endfunc plat_marvell_calc_core_pos
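The formula in the comment above can be cross-checked with a small C model of the same computation (mask values mirror TF-A's MPIDR affinity layout: CPU ID in bits [7:0], cluster ID in bits [15:8]; an illustrative sketch, not project code):

```c
#include <assert.h>
#include <stdint.h>

#define MPIDR_CPU_MASK		0xffULL
#define MPIDR_CLUSTER_MASK	0xff00ULL

/* CorePos = (ClusterId * 2) + CoreId: shifting the cluster field right by 7
 * (instead of 8) multiplies the cluster ID by two, matching the assembly. */
static unsigned int calc_core_pos(uint64_t mpidr)
{
	return (unsigned int)((mpidr & MPIDR_CPU_MASK) +
			      ((mpidr & MPIDR_CLUSTER_MASK) >> 7));
}
```

For example, core 1 of cluster 1 (`mpidr = 0x101`) maps to position 3.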
/* ---------------------------------------------
* int plat_crash_console_init(void)
* Function to initialize the crash console
* without a C Runtime to print crash report.
* Clobber list : x0, x1, x2
* ---------------------------------------------
*/
func plat_crash_console_init
mov_imm x0, PLAT_MARVELL_CRASH_UART_BASE
mov_imm x1, PLAT_MARVELL_CRASH_UART_CLK_IN_HZ
mov_imm x2, MARVELL_CONSOLE_BAUDRATE
b console_core_init
endfunc plat_crash_console_init
/* ---------------------------------------------
* int plat_crash_console_putc(int c)
* Function to print a character on the crash
* console without a C Runtime.
* Clobber list : x1, x2
* ---------------------------------------------
*/
func plat_crash_console_putc
mov_imm x1, PLAT_MARVELL_CRASH_UART_BASE
b console_core_putc
endfunc plat_crash_console_putc
/* ---------------------------------------------------------------------
 * We don't need to carry out any memory initialization on Marvell
* platforms. The Secure RAM is accessible straight away.
* ---------------------------------------------------------------------
*/
func platform_mem_init
ret
endfunc platform_mem_init
/* -----------------------------------------------------
 * Disable dcache and MMU
* -----------------------------------------------------
*/
func disable_mmu_dcache
mrs x0, sctlr_el3
bic x0, x0, 0x1 /* M bit - MMU */
bic x0, x0, 0x4 /* C bit - Dcache L1 & L2 */
msr sctlr_el3, x0
isb
b mmu_off
mmu_off:
ret
endfunc disable_mmu_dcache
/* -----------------------------------------------------
 * Invalidate all TLB entries
* -----------------------------------------------------
*/
func invalidate_tlb_all
tlbi alle3
dsb sy
isb
ret
endfunc invalidate_tlb_all
/* -----------------------------------------------------
 * Disable the instruction cache
* -----------------------------------------------------
*/
func disable_icache
mrs x0, sctlr_el3
bic x0, x0, 0x1000 /* I bit - Icache L1 & L2 */
msr sctlr_el3, x0
isb
ret
endfunc disable_icache
/* -----------------------------------------------------
 * Invalidate all of the instruction caches
* -----------------------------------------------------
*/
func invalidate_icache_all
ic ialluis
isb sy
ret
endfunc invalidate_icache_all
/* -----------------------------------------------------
* Clear the SRAM enabling bit to unmap SRAM
* -----------------------------------------------------
*/
func platform_unmap_sram
ldr x0, =CCU_SRAM_WIN_CR
str wzr, [x0]
ret
endfunc platform_unmap_sram
/* -----------------------------------------------------
* Disable the SRAM
* -----------------------------------------------------
*/
func disable_sram
	/* Disable the line lockings. They must be disabled explicitly
	 * or the OS will have problems using the cache */
ldr x1, =MASTER_LLC_TC0_LOCK
str wzr, [x1]
/* Invalidate all ways */
ldr w1, =LLC_WAY_MASK
ldr x0, =MASTER_L2X0_INV_WAY
str w1, [x0]
/* Finally disable LLC */
ldr x0, =MASTER_LLC_CTRL
str wzr, [x0]
ret
endfunc disable_sram
/* -----------------------------------------------------
 * Operations performed when exiting the bootROM:
* Disable the MMU
* Disable and invalidate the dcache
* Unmap and disable the SRAM
* Disable and invalidate the icache
* -----------------------------------------------------
*/
func marvell_exit_bootrom
/* Save the system restore address */
mov x28, x0
/* Close the caches and MMU */
bl disable_mmu_dcache
/*
* There is nothing important in the caches now,
* so invalidate them instead of cleaning.
*/
adr x0, __RW_START__
adr x1, __RW_END__
sub x1, x1, x0
bl inv_dcache_range
bl invalidate_tlb_all
/*
	 * Remove the SRAM memory mapping;
	 * the DDR mapping remains so the boot image can execute
*/
bl platform_unmap_sram
/* Disable the SRAM */
bl disable_sram
/* Disable and invalidate icache */
bl disable_icache
bl invalidate_icache_all
mov x0, x28
br x0
endfunc marvell_exit_bootrom
/*
* Enable L2 UniqueClean evictions with data
*/
func ca72_l2_enable_unique_clean
mrs x0, CORTEX_A72_L2ACTLR_EL1
orr x0, x0, #CORTEX_A72_L2ACTLR_ENABLE_UNIQUE_CLEAN
msr CORTEX_A72_L2ACTLR_EL1, x0
ret
endfunc ca72_l2_enable_unique_clean


@ -0,0 +1,117 @@
/*
* Copyright (C) 2018 Marvell International Ltd.
*
* SPDX-License-Identifier: BSD-3-Clause
* https://spdx.org/licenses
*/
#include <bl1.h>
#include <bl1/bl1_private.h>
#include <bl_common.h>
#include <console.h>
#include <debug.h>
#include <platform.h>
#include <platform_def.h>
#include <plat_marvell.h>
#include <sp805.h>
/* Weak definitions may be overridden in specific Marvell standard platform */
#pragma weak bl1_early_platform_setup
#pragma weak bl1_plat_arch_setup
#pragma weak bl1_platform_setup
#pragma weak bl1_plat_sec_mem_layout
/* Data structure which holds the extents of the RAM for BL1*/
static meminfo_t bl1_ram_layout;
meminfo_t *bl1_plat_sec_mem_layout(void)
{
return &bl1_ram_layout;
}
/*
* BL1 specific platform actions shared between Marvell standard platforms.
*/
void marvell_bl1_early_platform_setup(void)
{
const size_t bl1_size = BL1_RAM_LIMIT - BL1_RAM_BASE;
/* Initialize the console to provide early debug support */
console_init(PLAT_MARVELL_BOOT_UART_BASE,
PLAT_MARVELL_BOOT_UART_CLK_IN_HZ,
MARVELL_CONSOLE_BAUDRATE);
/* Allow BL1 to see the whole Trusted RAM */
bl1_ram_layout.total_base = MARVELL_BL_RAM_BASE;
bl1_ram_layout.total_size = MARVELL_BL_RAM_SIZE;
/* Calculate how much RAM BL1 is using and how much remains free */
bl1_ram_layout.free_base = MARVELL_BL_RAM_BASE;
bl1_ram_layout.free_size = MARVELL_BL_RAM_SIZE;
reserve_mem(&bl1_ram_layout.free_base,
&bl1_ram_layout.free_size,
BL1_RAM_BASE,
bl1_size);
}
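The `reserve_mem()` call above carves the BL1 image out of the free region. A simplified model of that bookkeeping, assuming the reserved block lies inside the free region and the larger remainder is kept (a sketch of the idea only, not TF-A's actual helper):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Simplified reservation: remove [res_base, res_base + res_size) from the
 * free region and keep the larger remaining piece (sketch only). */
static void reserve_region(uintptr_t *free_base, size_t *free_size,
			   uintptr_t res_base, size_t res_size)
{
	size_t below = res_base - *free_base;
	size_t above = (*free_base + *free_size) - (res_base + res_size);

	if (above > below) {
		*free_base = res_base + res_size;
		*free_size = above;
	} else {
		*free_size = below;
	}
}
```

For example, reserving the top 4 KB of a 32 KB free region leaves the 28 KB below it as the new free region.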
void bl1_early_platform_setup(void)
{
marvell_bl1_early_platform_setup();
}
/*
* Perform the very early platform specific architecture setup shared between
* MARVELL standard platforms. This only does basic initialization. Later
* architectural setup (bl1_arch_setup()) does not do anything platform
* specific.
*/
void marvell_bl1_plat_arch_setup(void)
{
marvell_setup_page_tables(bl1_ram_layout.total_base,
bl1_ram_layout.total_size,
BL1_RO_BASE,
BL1_RO_LIMIT,
BL1_RO_DATA_BASE,
BL1_RO_DATA_END
#if USE_COHERENT_MEM
, BL_COHERENT_RAM_BASE,
BL_COHERENT_RAM_END
#endif
);
enable_mmu_el3(0);
}
void bl1_plat_arch_setup(void)
{
marvell_bl1_plat_arch_setup();
}
/*
* Perform the platform specific architecture setup shared between
* MARVELL standard platforms.
*/
void marvell_bl1_platform_setup(void)
{
/* Initialise the IO layer and register platform IO devices */
plat_marvell_io_setup();
}
void bl1_platform_setup(void)
{
marvell_bl1_platform_setup();
}
void bl1_plat_prepare_exit(entry_point_info_t *ep_info)
{
#ifdef EL3_PAYLOAD_BASE
/*
	 * Program the EL3 payload's entry point address into the CPUs' mailbox
* in order to release secondary CPUs from their holding pen and make
* them jump there.
*/
marvell_program_trusted_mailbox(ep_info->pc);
dsbsy();
sev();
#endif
}


@ -0,0 +1,275 @@
/*
* Copyright (C) 2018 Marvell International Ltd.
*
* SPDX-License-Identifier: BSD-3-Clause
* https://spdx.org/licenses
*/
#include <arch_helpers.h>
#include <bl_common.h>
#include <console.h>
#include <marvell_def.h>
#include <platform_def.h>
#include <plat_marvell.h>
#include <string.h>
/* Data structure which holds the extents of the trusted SRAM for BL2 */
static meminfo_t bl2_tzram_layout __aligned(CACHE_WRITEBACK_GRANULE);
/*****************************************************************************
* This structure represents the superset of information that is passed to
* BL31, e.g. while passing control to it from BL2, bl31_params
* and other platform specific parameters
*****************************************************************************
*/
typedef struct bl2_to_bl31_params_mem {
bl31_params_t bl31_params;
image_info_t bl31_image_info;
image_info_t bl32_image_info;
image_info_t bl33_image_info;
entry_point_info_t bl33_ep_info;
entry_point_info_t bl32_ep_info;
entry_point_info_t bl31_ep_info;
} bl2_to_bl31_params_mem_t;
static bl2_to_bl31_params_mem_t bl31_params_mem;
/* Weak definitions may be overridden in specific MARVELL standard platform */
#pragma weak bl2_early_platform_setup
#pragma weak bl2_platform_setup
#pragma weak bl2_plat_arch_setup
#pragma weak bl2_plat_sec_mem_layout
#pragma weak bl2_plat_get_bl31_params
#pragma weak bl2_plat_get_bl31_ep_info
#pragma weak bl2_plat_flush_bl31_params
#pragma weak bl2_plat_set_bl31_ep_info
#pragma weak bl2_plat_get_scp_bl2_meminfo
#pragma weak bl2_plat_get_bl32_meminfo
#pragma weak bl2_plat_set_bl32_ep_info
#pragma weak bl2_plat_get_bl33_meminfo
#pragma weak bl2_plat_set_bl33_ep_info
meminfo_t *bl2_plat_sec_mem_layout(void)
{
return &bl2_tzram_layout;
}
/*****************************************************************************
* This function assigns a pointer to the memory that the platform has kept
* aside to pass platform specific and trusted firmware related information
 * to BL31. This memory is reserved in the static
 * bl2_to_bl31_params_mem_t structure, which is a superset of all the
 * structures whose information is passed to BL31.
* NOTE: This function should be called only once and should be done
* before generating params to BL31
*****************************************************************************
*/
bl31_params_t *bl2_plat_get_bl31_params(void)
{
bl31_params_t *bl2_to_bl31_params;
/*
	 * Initialise the memory for all the arguments that need to
* be passed to BL31
*/
memset(&bl31_params_mem, 0, sizeof(bl2_to_bl31_params_mem_t));
/* Assign memory for TF related information */
bl2_to_bl31_params = &bl31_params_mem.bl31_params;
SET_PARAM_HEAD(bl2_to_bl31_params, PARAM_BL31, VERSION_1, 0);
/* Fill BL31 related information */
bl2_to_bl31_params->bl31_image_info = &bl31_params_mem.bl31_image_info;
SET_PARAM_HEAD(bl2_to_bl31_params->bl31_image_info, PARAM_IMAGE_BINARY,
VERSION_1, 0);
/* Fill BL32 related information if it exists */
#if BL32_BASE
bl2_to_bl31_params->bl32_ep_info = &bl31_params_mem.bl32_ep_info;
SET_PARAM_HEAD(bl2_to_bl31_params->bl32_ep_info, PARAM_EP,
VERSION_1, 0);
bl2_to_bl31_params->bl32_image_info = &bl31_params_mem.bl32_image_info;
SET_PARAM_HEAD(bl2_to_bl31_params->bl32_image_info, PARAM_IMAGE_BINARY,
VERSION_1, 0);
#endif
/* Fill BL33 related information */
bl2_to_bl31_params->bl33_ep_info = &bl31_params_mem.bl33_ep_info;
SET_PARAM_HEAD(bl2_to_bl31_params->bl33_ep_info,
PARAM_EP, VERSION_1, 0);
/* BL33 expects to receive the primary CPU MPID (through x0) */
bl2_to_bl31_params->bl33_ep_info->args.arg0 = 0xffff & read_mpidr();
bl2_to_bl31_params->bl33_image_info = &bl31_params_mem.bl33_image_info;
SET_PARAM_HEAD(bl2_to_bl31_params->bl33_image_info, PARAM_IMAGE_BINARY,
VERSION_1, 0);
return bl2_to_bl31_params;
}
/* Flush the TF params and the TF plat params */
void bl2_plat_flush_bl31_params(void)
{
flush_dcache_range((unsigned long)&bl31_params_mem,
sizeof(bl2_to_bl31_params_mem_t));
}
/*****************************************************************************
 * This function returns a pointer to the shared memory that the platform
 * has kept aside to pass the entry point information of BL31 to BL2
*****************************************************************************
*/
struct entry_point_info *bl2_plat_get_bl31_ep_info(void)
{
#if DEBUG
bl31_params_mem.bl31_ep_info.args.arg1 = MARVELL_BL31_PLAT_PARAM_VAL;
#endif
return &bl31_params_mem.bl31_ep_info;
}
/*****************************************************************************
* BL1 has passed the extents of the trusted SRAM that should be visible to BL2
* in x0. This memory layout is sitting at the base of the free trusted SRAM.
 * Copy it to a safe location before it is reclaimed by later BL2 functionality.
*****************************************************************************
*/
void marvell_bl2_early_platform_setup(meminfo_t *mem_layout)
{
/* Initialize the console to provide early debug support */
console_init(PLAT_MARVELL_BOOT_UART_BASE,
PLAT_MARVELL_BOOT_UART_CLK_IN_HZ,
MARVELL_CONSOLE_BAUDRATE);
/* Setup the BL2 memory layout */
bl2_tzram_layout = *mem_layout;
/* Initialise the IO layer and register platform IO devices */
plat_marvell_io_setup();
}
void bl2_early_platform_setup(meminfo_t *mem_layout)
{
marvell_bl2_early_platform_setup(mem_layout);
}
void bl2_platform_setup(void)
{
/* Nothing to do */
}
/*****************************************************************************
* Perform the very early platform specific architectural setup here. At the
 * moment this only initializes the MMU in a quick and dirty way.
*****************************************************************************
*/
void marvell_bl2_plat_arch_setup(void)
{
marvell_setup_page_tables(bl2_tzram_layout.total_base,
bl2_tzram_layout.total_size,
BL_CODE_BASE,
BL_CODE_END,
BL_RO_DATA_BASE,
BL_RO_DATA_END
#if USE_COHERENT_MEM
, BL_COHERENT_RAM_BASE,
BL_COHERENT_RAM_END
#endif
);
enable_mmu_el1(0);
}
void bl2_plat_arch_setup(void)
{
marvell_bl2_plat_arch_setup();
}
/*****************************************************************************
* Populate the extents of memory available for loading SCP_BL2 (if used),
* i.e. anywhere in trusted RAM as long as it doesn't overwrite BL2.
*****************************************************************************
*/
void bl2_plat_get_scp_bl2_meminfo(meminfo_t *scp_bl2_meminfo)
{
*scp_bl2_meminfo = bl2_tzram_layout;
}
/*****************************************************************************
* Before calling this function BL31 is loaded in memory and its entrypoint
* is set by load_image. This is a placeholder for the platform to change
* the entrypoint of BL31 and set SPSR and security state.
* On MARVELL std. platforms we only set the security state of the entrypoint
*****************************************************************************
*/
void bl2_plat_set_bl31_ep_info(image_info_t *bl31_image_info,
entry_point_info_t *bl31_ep_info)
{
SET_SECURITY_STATE(bl31_ep_info->h.attr, SECURE);
bl31_ep_info->spsr = SPSR_64(MODE_EL3, MODE_SP_ELX,
DISABLE_ALL_EXCEPTIONS);
}
/*****************************************************************************
* Populate the extents of memory available for loading BL32
*****************************************************************************
*/
#ifdef BL32_BASE
void bl2_plat_get_bl32_meminfo(meminfo_t *bl32_meminfo)
{
/*
* Populate the extents of memory available for loading BL32.
*/
bl32_meminfo->total_base = BL32_BASE;
bl32_meminfo->free_base = BL32_BASE;
bl32_meminfo->total_size =
(TRUSTED_DRAM_BASE + TRUSTED_DRAM_SIZE) - BL32_BASE;
bl32_meminfo->free_size =
(TRUSTED_DRAM_BASE + TRUSTED_DRAM_SIZE) - BL32_BASE;
}
#endif
/*****************************************************************************
* Before calling this function BL32 is loaded in memory and its entrypoint
* is set by load_image. This is a placeholder for the platform to change
* the entrypoint of BL32 and set SPSR and security state.
* On MARVELL std. platforms we only set the security state of the entrypoint
*****************************************************************************
*/
void bl2_plat_set_bl32_ep_info(image_info_t *bl32_image_info,
entry_point_info_t *bl32_ep_info)
{
SET_SECURITY_STATE(bl32_ep_info->h.attr, SECURE);
bl32_ep_info->spsr = marvell_get_spsr_for_bl32_entry();
}
/*****************************************************************************
* Before calling this function BL33 is loaded in memory and its entrypoint
* is set by load_image. This is a placeholder for the platform to change
* the entrypoint of BL33 and set SPSR and security state.
* On MARVELL std. platforms we only set the security state of the entrypoint
*****************************************************************************
*/
void bl2_plat_set_bl33_ep_info(image_info_t *image,
entry_point_info_t *bl33_ep_info)
{
SET_SECURITY_STATE(bl33_ep_info->h.attr, NON_SECURE);
bl33_ep_info->spsr = marvell_get_spsr_for_bl33_entry();
}
/*****************************************************************************
* Populate the extents of memory available for loading BL33
*****************************************************************************
*/
void bl2_plat_get_bl33_meminfo(meminfo_t *bl33_meminfo)
{
bl33_meminfo->total_base = MARVELL_DRAM_BASE;
bl33_meminfo->total_size = MARVELL_DRAM_SIZE;
bl33_meminfo->free_base = MARVELL_DRAM_BASE;
bl33_meminfo->free_size = MARVELL_DRAM_SIZE;
}


@ -0,0 +1,232 @@
/*
* Copyright (C) 2018 Marvell International Ltd.
*
* SPDX-License-Identifier: BSD-3-Clause
* https://spdx.org/licenses
*/
#include <arch.h>
#include <assert.h>
#include <console.h>
#include <debug.h>
#include <marvell_def.h>
#include <marvell_plat_priv.h>
#include <plat_marvell.h>
#include <platform.h>
#ifdef USE_CCI
#include <cci.h>
#endif
/*
 * The constant below identifies the limit of the BL31 image, while the code
 * and RO data extents come from linker symbols. These addresses are used by
 * the MMU setup code and
* therefore they must be page-aligned. It is the responsibility of the linker
* script to ensure that __RO_START__, __RO_END__ & __BL31_END__ linker symbols
* refer to page-aligned addresses.
*/
#define BL31_END (unsigned long)(&__BL31_END__)
/*
* Placeholder variables for copying the arguments that have been passed to
* BL31 from BL2.
*/
static entry_point_info_t bl32_image_ep_info;
static entry_point_info_t bl33_image_ep_info;
/* Weak definitions may be overridden in specific Marvell standard platform */
#pragma weak bl31_early_platform_setup
#pragma weak bl31_platform_setup
#pragma weak bl31_plat_arch_setup
#pragma weak bl31_plat_get_next_image_ep_info
#pragma weak plat_get_syscnt_freq2
/*****************************************************************************
* Return a pointer to the 'entry_point_info' structure of the next image for
* the security state specified. BL33 corresponds to the non-secure image type
* while BL32 corresponds to the secure image type. A NULL pointer is returned
* if the image does not exist.
*****************************************************************************
*/
entry_point_info_t *bl31_plat_get_next_image_ep_info(uint32_t type)
{
entry_point_info_t *next_image_info;
assert(sec_state_is_valid(type));
next_image_info = (type == NON_SECURE)
? &bl33_image_ep_info : &bl32_image_ep_info;
return next_image_info;
}
/*****************************************************************************
 * Perform any BL31 early platform setup common to Marvell standard platforms.
* Here is an opportunity to copy parameters passed by the calling EL (S-EL1
* in BL2 & S-EL3 in BL1) before they are lost (potentially). This needs to be
* done before the MMU is initialized so that the memory layout can be used
* while creating page tables. BL2 has flushed this information to memory, so
* we are guaranteed to pick up good data.
*****************************************************************************
*/
void marvell_bl31_early_platform_setup(bl31_params_t *from_bl2,
void *plat_params_from_bl2)
{
/* Initialize the console to provide early debug support */
console_init(PLAT_MARVELL_BOOT_UART_BASE,
PLAT_MARVELL_BOOT_UART_CLK_IN_HZ,
MARVELL_CONSOLE_BAUDRATE);
#if RESET_TO_BL31
/* There are no parameters from BL2 if BL31 is a reset vector */
assert(from_bl2 == NULL);
assert(plat_params_from_bl2 == NULL);
#ifdef BL32_BASE
/* Populate entry point information for BL32 */
SET_PARAM_HEAD(&bl32_image_ep_info,
PARAM_EP,
VERSION_1,
0);
SET_SECURITY_STATE(bl32_image_ep_info.h.attr, SECURE);
bl32_image_ep_info.pc = BL32_BASE;
bl32_image_ep_info.spsr = marvell_get_spsr_for_bl32_entry();
#endif /* BL32_BASE */
/* Populate entry point information for BL33 */
SET_PARAM_HEAD(&bl33_image_ep_info,
PARAM_EP,
VERSION_1,
0);
/*
* Tell BL31 where the non-trusted software image
* is located and the entry state information
*/
bl33_image_ep_info.pc = plat_get_ns_image_entrypoint();
bl33_image_ep_info.spsr = marvell_get_spsr_for_bl33_entry();
SET_SECURITY_STATE(bl33_image_ep_info.h.attr, NON_SECURE);
#else
/*
	 * Check that the params passed from BL2 are not NULL.
*/
assert(from_bl2 != NULL);
assert(from_bl2->h.type == PARAM_BL31);
assert(from_bl2->h.version >= VERSION_1);
/*
* In debug builds, we pass a special value in 'plat_params_from_bl2'
* to verify platform parameters from BL2 to BL31.
* In release builds, it's not used.
*/
assert(((unsigned long long)plat_params_from_bl2) ==
MARVELL_BL31_PLAT_PARAM_VAL);
/*
* Copy BL32 (if populated by BL2) and BL33 entry point information.
* They are stored in Secure RAM, in BL2's address space.
*/
if (from_bl2->bl32_ep_info)
bl32_image_ep_info = *from_bl2->bl32_ep_info;
bl33_image_ep_info = *from_bl2->bl33_ep_info;
#endif
}
void bl31_early_platform_setup(bl31_params_t *from_bl2,
void *plat_params_from_bl2)
{
marvell_bl31_early_platform_setup(from_bl2, plat_params_from_bl2);
#ifdef USE_CCI
/*
* Initialize CCI for this cluster during cold boot.
* No need for locks as no other CPU is active.
*/
plat_marvell_interconnect_init();
/*
* Enable CCI coherency for the primary CPU's cluster.
* Platform specific PSCI code will enable coherency for other
* clusters.
*/
plat_marvell_interconnect_enter_coherency();
#endif
}
/*****************************************************************************
 * Perform any BL31 platform setup common to Marvell standard platforms
*****************************************************************************
*/
void marvell_bl31_platform_setup(void)
{
/* Initialize the GIC driver, cpu and distributor interfaces */
plat_marvell_gic_driver_init();
plat_marvell_gic_init();
	/* For the Armada-8k-plus family, the SoC includes more than
	 * a single AP die, but the default die that boots is AP #0.
	 * For other families there is only one die (#0).
	 * Initialize the PSCI arch from die 0.
*/
marvell_psci_arch_init(0);
}
/*****************************************************************************
 * Perform any BL31 platform runtime setup prior to BL31 exit, common to
 * Marvell standard platforms
*****************************************************************************
*/
void marvell_bl31_plat_runtime_setup(void)
{
/* Initialize the runtime console */
console_init(PLAT_MARVELL_BL31_RUN_UART_BASE,
PLAT_MARVELL_BL31_RUN_UART_CLK_IN_HZ,
MARVELL_CONSOLE_BAUDRATE);
}
void bl31_platform_setup(void)
{
marvell_bl31_platform_setup();
}
void bl31_plat_runtime_setup(void)
{
marvell_bl31_plat_runtime_setup();
}
/*****************************************************************************
* Perform the very early platform specific architectural setup shared between
 * Marvell standard platforms. This only does basic initialization. Later
* architectural setup (bl31_arch_setup()) does not do anything platform
* specific.
*****************************************************************************
*/
void marvell_bl31_plat_arch_setup(void)
{
marvell_setup_page_tables(BL31_BASE,
BL31_END - BL31_BASE,
BL_CODE_BASE,
BL_CODE_END,
BL_RO_DATA_BASE,
BL_RO_DATA_END
#if USE_COHERENT_MEM
, BL_COHERENT_RAM_BASE,
BL_COHERENT_RAM_END
#endif
);
#if BL31_CACHE_DISABLE
enable_mmu_el3(DISABLE_DCACHE);
INFO("Cache is disabled in BL3\n");
#else
enable_mmu_el3(0);
#endif
}
void bl31_plat_arch_setup(void)
{
marvell_bl31_plat_arch_setup();
}
unsigned int plat_get_syscnt_freq2(void)
{
return PLAT_REF_CLK_IN_HZ;
}


@ -0,0 +1,51 @@
/*
* Copyright (C) 2018 Marvell International Ltd.
*
* SPDX-License-Identifier: BSD-3-Clause
* https://spdx.org/licenses
*/
#include <cci.h>
#include <plat_marvell.h>
static const int cci_map[] = {
PLAT_MARVELL_CCI_CLUSTER0_SL_IFACE_IX,
PLAT_MARVELL_CCI_CLUSTER1_SL_IFACE_IX
};
/****************************************************************************
* The following functions are defined as weak to allow a platform to override
 * the way the ARM CCI driver is initialised and used.
****************************************************************************
*/
#pragma weak plat_marvell_interconnect_init
#pragma weak plat_marvell_interconnect_enter_coherency
#pragma weak plat_marvell_interconnect_exit_coherency
/****************************************************************************
* Helper function to initialize ARM CCI driver.
****************************************************************************
*/
void plat_marvell_interconnect_init(void)
{
cci_init(PLAT_MARVELL_CCI_BASE, cci_map, ARRAY_SIZE(cci_map));
}
/****************************************************************************
* Helper function to place current master into coherency
****************************************************************************
*/
void plat_marvell_interconnect_enter_coherency(void)
{
cci_enable_snoop_dvm_reqs(MPIDR_AFFLVL1_VAL(read_mpidr_el1()));
}
/****************************************************************************
* Helper function to remove current master from coherency
****************************************************************************
*/
void plat_marvell_interconnect_exit_coherency(void)
{
cci_disable_snoop_dvm_reqs(MPIDR_AFFLVL1_VAL(read_mpidr_el1()));
}


@ -0,0 +1,67 @@
# Copyright (C) 2018 Marvell International Ltd.
#
# SPDX-License-Identifier: BSD-3-Clause
# https://spdx.org/licenses
MARVELL_PLAT_BASE := plat/marvell
MARVELL_PLAT_INCLUDE_BASE := include/plat/marvell
include $(MARVELL_PLAT_BASE)/version.mk
include $(MARVELL_PLAT_BASE)/marvell.mk
VERSION_STRING +=(Marvell-${SUBVERSION})
SEPARATE_CODE_AND_RODATA := 1
# flag to switch from PLL to ARO
ARO_ENABLE := 0
$(eval $(call add_define,ARO_ENABLE))
# Enable/Disable LLC
LLC_ENABLE := 1
$(eval $(call add_define,LLC_ENABLE))
PLAT_INCLUDES += -I. -Iinclude/common/tbbr \
-I$(MARVELL_PLAT_INCLUDE_BASE)/common \
-I$(MARVELL_PLAT_INCLUDE_BASE)/common/aarch64
PLAT_BL_COMMON_SOURCES += lib/xlat_tables/xlat_tables_common.c \
lib/xlat_tables/aarch64/xlat_tables.c \
$(MARVELL_PLAT_BASE)/common/aarch64/marvell_common.c \
$(MARVELL_PLAT_BASE)/common/aarch64/marvell_helpers.S
BL1_SOURCES += drivers/delay_timer/delay_timer.c \
drivers/io/io_fip.c \
drivers/io/io_memmap.c \
drivers/io/io_storage.c \
$(MARVELL_PLAT_BASE)/common/marvell_bl1_setup.c \
$(MARVELL_PLAT_BASE)/common/marvell_io_storage.c \
$(MARVELL_PLAT_BASE)/common/plat_delay_timer.c
ifdef EL3_PAYLOAD_BASE
# Need the marvell_program_trusted_mailbox() function to release secondary CPUs
# from their holding pen
endif
BL2_SOURCES += drivers/io/io_fip.c \
drivers/io/io_memmap.c \
drivers/io/io_storage.c \
$(MARVELL_PLAT_BASE)/common/marvell_bl2_setup.c \
$(MARVELL_PLAT_BASE)/common/marvell_io_storage.c
BL31_SOURCES += $(MARVELL_PLAT_BASE)/common/marvell_bl31_setup.c \
$(MARVELL_PLAT_BASE)/common/marvell_pm.c \
$(MARVELL_PLAT_BASE)/common/marvell_topology.c \
plat/common/plat_psci_common.c \
$(MARVELL_PLAT_BASE)/common/plat_delay_timer.c \
drivers/delay_timer/delay_timer.c
# PSCI functionality
$(eval $(call add_define,CONFIG_ARM64))
# MSS (SCP) build
ifeq (${MSS_SUPPORT}, 1)
include $(MARVELL_PLAT_BASE)/common/mss/mss_common.mk
endif
fip: mrvl_flash


@ -0,0 +1,110 @@
/*
* Copyright (C) 2018 Marvell International Ltd.
*
* SPDX-License-Identifier: BSD-3-Clause
* https://spdx.org/licenses
*/
#include <debug.h>
#include <platform_def.h>
#include <ddr_info.h>
#include <mmio.h>
#define DRAM_CH0_MMAP_LOW_REG(iface, cs, base) \
(base + DRAM_CH0_MMAP_LOW_OFFSET + (iface) * 0x10000 + (cs) * 0x8)
#define DRAM_CH0_MMAP_HIGH_REG(iface, cs, base) \
(DRAM_CH0_MMAP_LOW_REG(iface, cs, base) + 4)
#define DRAM_CS_VALID_ENABLED_MASK 0x1
#define DRAM_AREA_LENGTH_OFFS 16
#define DRAM_AREA_LENGTH_MASK (0x1f << DRAM_AREA_LENGTH_OFFS)
#define DRAM_START_ADDRESS_L_OFFS 23
#define DRAM_START_ADDRESS_L_MASK \
(0x1ff << DRAM_START_ADDRESS_L_OFFS)
#define DRAM_START_ADDR_HTOL_OFFS 32
#define DRAM_MAX_CS_NUM 2
#define DRAM_CS_ENABLED(iface, cs, base) \
(mmio_read_32(DRAM_CH0_MMAP_LOW_REG(iface, cs, base)) & \
DRAM_CS_VALID_ENABLED_MASK)
#define GET_DRAM_REGION_SIZE_CODE(iface, cs, base) \
(mmio_read_32(DRAM_CH0_MMAP_LOW_REG(iface, cs, base)) & \
DRAM_AREA_LENGTH_MASK) >> DRAM_AREA_LENGTH_OFFS
/* The mapping between the DDR "area length" code and the real DDR size is
 * non-linear, as shown below:
* 0 => 384 MB
* 1 => 768 MB
* 2 => 1536 MB
* 3 => 3 GB
* 4 => 6 GB
*
* 7 => 8 MB
* 8 => 16 MB
* 9 => 32 MB
* 10 => 64 MB
* 11 => 128 MB
* 12 => 256 MB
* 13 => 512 MB
* 14 => 1 GB
* 15 => 2 GB
* 16 => 4 GB
* 17 => 8 GB
* 18 => 16 GB
* 19 => 32 GB
* 20 => 64 GB
* 21 => 128 GB
* 22 => 256 GB
* 23 => 512 GB
* 24 => 1 TB
* 25 => 2 TB
* 26 => 4 TB
*
 * To calculate the real size, two different formulas are needed:
 * -- GET_DRAM_REGION_SIZE_ODD for values 0-4 (DRAM_REGION_SIZE_ODD)
 * -- GET_DRAM_REGION_SIZE_EVEN for values 7-26 (DRAM_REGION_SIZE_EVEN)
 * Together these formulas cover the whole mapping between the "area length"
 * value and the real size (see the mapping above).
*/
#define DRAM_REGION_SIZE_EVEN(C) (((C) >= 7) && ((C) <= 26))
#define GET_DRAM_REGION_SIZE_EVEN(C) ((uint64_t)1 << ((C) + 16))
#define DRAM_REGION_SIZE_ODD(C) ((C) <= 4)
#define GET_DRAM_REGION_SIZE_ODD(C) ((uint64_t)0x18000000 << (C))
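The two formulas can be sanity-checked against the table above with a small self-contained decoder (the macros are repeated verbatim so the snippet stands alone):

```c
#include <assert.h>
#include <stdint.h>

#define DRAM_REGION_SIZE_EVEN(C)	(((C) >= 7) && ((C) <= 26))
#define GET_DRAM_REGION_SIZE_EVEN(C)	((uint64_t)1 << ((C) + 16))
#define DRAM_REGION_SIZE_ODD(C)		((C) <= 4)
#define GET_DRAM_REGION_SIZE_ODD(C)	((uint64_t)0x18000000 << (C))

/* Decode one "area length" code into bytes; 0 marks an invalid code. */
static uint64_t region_code_to_size(uint8_t code)
{
	if (DRAM_REGION_SIZE_EVEN(code))
		return GET_DRAM_REGION_SIZE_EVEN(code);
	if (DRAM_REGION_SIZE_ODD(code))
		return GET_DRAM_REGION_SIZE_ODD(code);
	return 0;
}
```

Code 12 decodes to 256 MB (`1 << 28`), code 0 to 384 MB (`0x18000000`), code 3 to 3 GB, and the unused codes 5 and 6 decode to 0.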
uint64_t mvebu_get_dram_size(uint64_t ap_base_addr)
{
uint64_t mem_size = 0;
uint8_t region_code;
uint8_t cs, iface;
for (iface = 0; iface < DRAM_MAX_IFACE; iface++) {
for (cs = 0; cs < DRAM_MAX_CS_NUM; cs++) {
/* Exit loop on first disabled DRAM CS */
if (!DRAM_CS_ENABLED(iface, cs, ap_base_addr))
break;
/* Decode area length for current CS
* from register value
*/
region_code =
GET_DRAM_REGION_SIZE_CODE(iface, cs,
ap_base_addr);
if (DRAM_REGION_SIZE_EVEN(region_code)) {
mem_size +=
GET_DRAM_REGION_SIZE_EVEN(region_code);
} else if (DRAM_REGION_SIZE_ODD(region_code)) {
mem_size +=
GET_DRAM_REGION_SIZE_ODD(region_code);
} else {
WARN("%s: Invalid mem region (0x%x) CS#%d\n",
__func__, region_code, cs);
return 0;
}
}
}
return mem_size;
}


@@ -0,0 +1,59 @@
/*
* Copyright (C) 2018 Marvell International Ltd.
*
* SPDX-License-Identifier: BSD-3-Clause
* https://spdx.org/licenses
*/
#include <gicv2.h>
#include <plat_marvell.h>
#include <platform.h>
#include <platform_def.h>
/*
* The following functions are defined as weak to allow a platform to override
* the way the GICv2 driver is initialised and used.
*/
#pragma weak plat_marvell_gic_driver_init
#pragma weak plat_marvell_gic_init
/*
* On a GICv2 system, the Group 1 secure interrupts are treated as Group 0
* interrupts.
*/
static const interrupt_prop_t marvell_interrupt_props[] = {
PLAT_MARVELL_G1S_IRQ_PROPS(GICV2_INTR_GROUP0),
PLAT_MARVELL_G0_IRQ_PROPS(GICV2_INTR_GROUP0)
};
static unsigned int target_mask_array[PLATFORM_CORE_COUNT];
/*
 * Ideally the `marvell_gic_data` structure definition should be `const`, but
 * it is kept modifiable so that it can be overwritten with different GICD and
 * GICC base addresses when running on FVP with the VE memory map.
 */
static gicv2_driver_data_t marvell_gic_data = {
.gicd_base = PLAT_MARVELL_GICD_BASE,
.gicc_base = PLAT_MARVELL_GICC_BASE,
.interrupt_props = marvell_interrupt_props,
.interrupt_props_num = ARRAY_SIZE(marvell_interrupt_props),
.target_masks = target_mask_array,
.target_masks_num = ARRAY_SIZE(target_mask_array),
};
/*
 * Common helper to initialise the GICv2-only driver.
 */
void plat_marvell_gic_driver_init(void)
{
gicv2_driver_init(&marvell_gic_data);
}
void plat_marvell_gic_init(void)
{
gicv2_distif_init();
gicv2_pcpu_distif_init();
gicv2_set_pe_target_mask(plat_my_core_pos());
gicv2_cpuif_enable();
}


@@ -0,0 +1,206 @@
/*
* Copyright (C) 2018 Marvell International Ltd.
*
* SPDX-License-Identifier: BSD-3-Clause
* https://spdx.org/licenses
*/
#include <assert.h>
#include <bl_common.h> /* For ARRAY_SIZE */
#include <debug.h>
#include <errno.h>	/* For -ENOENT */
#include <firmware_image_package.h>
#include <io_driver.h>
#include <io_fip.h>
#include <io_memmap.h>
#include <io_storage.h>
#include <platform_def.h>
#include <string.h>
/* IO devices */
static const io_dev_connector_t *fip_dev_con;
static uintptr_t fip_dev_handle;
static const io_dev_connector_t *memmap_dev_con;
static uintptr_t memmap_dev_handle;
static const io_block_spec_t fip_block_spec = {
.offset = PLAT_MARVELL_FIP_BASE,
.length = PLAT_MARVELL_FIP_MAX_SIZE
};
static const io_uuid_spec_t bl2_uuid_spec = {
.uuid = UUID_TRUSTED_BOOT_FIRMWARE_BL2,
};
static const io_uuid_spec_t scp_bl2_uuid_spec = {
.uuid = UUID_SCP_FIRMWARE_SCP_BL2,
};
static const io_uuid_spec_t bl31_uuid_spec = {
.uuid = UUID_EL3_RUNTIME_FIRMWARE_BL31,
};
static const io_uuid_spec_t bl32_uuid_spec = {
.uuid = UUID_SECURE_PAYLOAD_BL32,
};
static const io_uuid_spec_t bl33_uuid_spec = {
.uuid = UUID_NON_TRUSTED_FIRMWARE_BL33,
};
static int open_fip(const uintptr_t spec);
static int open_memmap(const uintptr_t spec);
struct plat_io_policy {
uintptr_t *dev_handle;
uintptr_t image_spec;
int (*check)(const uintptr_t spec);
};
/* By default, Marvell platforms load images from the FIP */
static const struct plat_io_policy policies[] = {
[FIP_IMAGE_ID] = {
&memmap_dev_handle,
(uintptr_t)&fip_block_spec,
open_memmap
},
[BL2_IMAGE_ID] = {
&fip_dev_handle,
(uintptr_t)&bl2_uuid_spec,
open_fip
},
[SCP_BL2_IMAGE_ID] = {
&fip_dev_handle,
(uintptr_t)&scp_bl2_uuid_spec,
open_fip
},
[BL31_IMAGE_ID] = {
&fip_dev_handle,
(uintptr_t)&bl31_uuid_spec,
open_fip
},
[BL32_IMAGE_ID] = {
&fip_dev_handle,
(uintptr_t)&bl32_uuid_spec,
open_fip
},
[BL33_IMAGE_ID] = {
&fip_dev_handle,
(uintptr_t)&bl33_uuid_spec,
open_fip
},
};
/* Weak definitions may be overridden in specific ARM standard platform */
#pragma weak plat_marvell_io_setup
#pragma weak plat_marvell_get_alt_image_source
static int open_fip(const uintptr_t spec)
{
int result;
uintptr_t local_image_handle;
/* See if a Firmware Image Package is available */
result = io_dev_init(fip_dev_handle, (uintptr_t)FIP_IMAGE_ID);
if (result == 0) {
result = io_open(fip_dev_handle, spec, &local_image_handle);
if (result == 0) {
VERBOSE("Using FIP\n");
io_close(local_image_handle);
}
}
return result;
}
static int open_memmap(const uintptr_t spec)
{
int result;
uintptr_t local_image_handle;
result = io_dev_init(memmap_dev_handle, (uintptr_t)NULL);
if (result == 0) {
result = io_open(memmap_dev_handle, spec, &local_image_handle);
if (result == 0) {
VERBOSE("Using Memmap\n");
io_close(local_image_handle);
}
}
return result;
}
void marvell_io_setup(void)
{
int io_result;
io_result = register_io_dev_fip(&fip_dev_con);
assert(io_result == 0);
io_result = register_io_dev_memmap(&memmap_dev_con);
assert(io_result == 0);
/* Open connections to devices and cache the handles */
io_result = io_dev_open(fip_dev_con, (uintptr_t)NULL,
&fip_dev_handle);
assert(io_result == 0);
io_result = io_dev_open(memmap_dev_con, (uintptr_t)NULL,
&memmap_dev_handle);
assert(io_result == 0);
/* Ignore improbable errors in release builds */
(void)io_result;
}
void plat_marvell_io_setup(void)
{
marvell_io_setup();
}
int plat_marvell_get_alt_image_source(
unsigned int image_id __attribute__((unused)),
uintptr_t *dev_handle __attribute__((unused)),
uintptr_t *image_spec __attribute__((unused)))
{
/* By default do not try an alternative */
return -ENOENT;
}
/*
 * Return an IO device handle and specification which can be used to access
 * an image. Use this to enforce the platform load policy.
 */
int plat_get_image_source(unsigned int image_id, uintptr_t *dev_handle,
uintptr_t *image_spec)
{
int result;
const struct plat_io_policy *policy;
assert(image_id < ARRAY_SIZE(policies));
policy = &policies[image_id];
result = policy->check(policy->image_spec);
if (result == 0) {
*image_spec = policy->image_spec;
*dev_handle = *(policy->dev_handle);
} else {
VERBOSE("Trying alternative IO\n");
result = plat_marvell_get_alt_image_source(image_id, dev_handle,
image_spec);
}
return result;
}
/*
 * Check whether a Firmware Image Package is available
 * by verifying that its TOC is valid.
 */
int marvell_io_is_toc_valid(void)
{
int result;
result = io_dev_init(fip_dev_handle, (uintptr_t)FIP_IMAGE_ID);
return result == 0;
}

Some files were not shown because too many files have changed in this diff.