Outside of changing versions here, the other visible change is that we
tell grub that riscv64 does not have "large model" support. Without this
change the resulting mkimage is non-functional. This is known upstream
already.
Link: https://savannah.gnu.org/bugs/?65909
Signed-off-by: Tom Rini <trini@konsulko.com>
Currently, this platform is failing in CI due to seemingly platform
specific reasons. For now, remove it from CI until the maintainers have
a chance to look into it.
Signed-off-by: Tom Rini <trini@konsulko.com>
Now that U-Boot can boot this quickly, using KVM, add a test that the
installer starts up correctly.
Use the qemu-x86_64 board in the SJG lab.
Signed-off-by: Simon Glass <sjg@chromium.org>
When adding the symlink for the conf file so qemu_arm64_lwip uses
qemu_arm64 configuration information, the symlink for the boardenv file
was missed in Gitlab (but not Azure). Add that in now.
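The fix is one more symlink in the job script, something like the
following sketch (exact file names in the tree may differ):

    script:
      # boardenv files follow the u_boot_boardenv_<board>.py convention
      - ln -s u_boot_boardenv_qemu_arm64.py u_boot_boardenv_qemu_arm64_lwip.py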
Fixes: fd10d156db ("CI: add qemu_arm64_lwip to the test matrix")
Reviewed-by: Jerome Forissier <jerome.forissier@linaro.org>
Signed-off-by: Tom Rini <trini@konsulko.com>
Tom Rini <trini@konsulko.com> says:
A challenge we've run into is making it easier for more people to use
the various Python tools that we include in the tree. Part of the
problem is that our requirements.txt files (aside from the doc one we
share with the kernel) were created using "pip freeze". While this
might have been a best (or at least OK) practice at the time, it no
longer is, and it is why our files have so many entries in them. What
this series does is create multiple files, one per project/tool, and
then has CI install them as needed. There are a few places where this
means that we update the requirements as well, but we keep a few big
things where they are currently, because updating them introduces
problems of their own and dealing with that would be best done in a
follow-up series. I've put this through GitLab and Azure to make sure
everything is still going fine on both platforms.
Link: https://lore.kernel.org/r/20250205000743.949790-1-trini@konsulko.com
We can invoke pip once to install the various requirements.txt files
that we need rather than invoking the tool multiple times.
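In a job script this collapses to a single pip invocation with
repeated -r options, e.g. (a sketch; file names illustrative):

    script:
      # one pip run installs every requirements file at once
      - pip install -r tools/binman/requirements.txt
          -r tools/buildman/requirements.txt -r test/py/requirements.txt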
Signed-off-by: Tom Rini <trini@konsulko.com>
We should install all of our requirements.txt files after starting the
virtualenv rather than ad-hoc throughout each test.
Signed-off-by: Tom Rini <trini@konsulko.com>
Before we invoke pip, we should always have first created and
activated our virtualenv. This was done most of the time, but not
always.
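A minimal sketch of the pattern each job should follow (paths
illustrative):

    script:
      # create and activate the virtualenv before any pip invocation
      - virtualenv -p python3 /tmp/venv
      - . /tmp/venv/bin/activate
      - pip install -r test/py/requirements.txt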
Signed-off-by: Tom Rini <trini@konsulko.com>
The rules part of the template makes sure that this doesn't run until
specifically requested. Drop the check in the script itself, so it is
possible to trigger a run manually without re-pushing the tree.
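A sketch of the shape (not the exact template): the rules alone gate
the job, so no in-script check is needed, and the manual entry allows
a run to be started from the web UI:

    rules:
      # run automatically only when explicitly requested
      - if: '$SJG_LAB == "1"'
        when: always
      # otherwise leave the job available for a manual trigger
      - when: manual
        allow_failure: true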
Signed-off-by: Simon Glass <sjg@chromium.org>
Whereas Azure makes the JUnit results file available for download,
Gitlab doesn't include it as a downloadable artifact by default and
only makes it available via its own JUnit parser. Fix this by listing
it as an artifact to save as well.
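Concretely, the results file can be listed under both "reports" and
"paths" (file name illustrative):

    artifacts:
      when: always
      paths:
        # make the raw file downloadable
        - results.xml
      reports:
        # keep feeding Gitlab's own JUnit parser
        junit: results.xml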
Signed-off-by: Tom Rini <trini@konsulko.com>
Now that we can run sandbox on arm64 hosts, have these jobs run on both
the fast arm64 and amd64 hosts to catch any issues.
Signed-off-by: Tom Rini <trini@konsulko.com>
Execution time varies widely among the existing tests. Provide a way
to produce a summary of the time taken by each test, along with a
histogram.
This is enabled with the --timing flag.
Enable it for sandbox in CI.
Example:
Duration : Number of tests
======== : ========================================
    <1ms : 1
    <8ms : 1
   <20ms : # 20
   <30ms : ######## 127
   <50ms : ######################################## 582
   <75ms : ####### 102
  <100ms : ## 39
  <200ms : ##### 86
  <300ms : # 29
  <500ms : ## 42
  <750ms : # 16
   <1.0s : # 15
   <2.0s : # 23
   <3.0s : 13
   <5.0s : 9
   <7.5s : 1
  <10.0s : 6
  <20.0s : 12
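For reference, a sandbox CI job would pass the flag on its normal
test.py invocation, e.g. (a sketch):

    script:
      # --timing prints the per-test summary and histogram at the end
      - ./test/py/test.py --bd sandbox --build --timing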
Signed-off-by: Simon Glass <sjg@chromium.org>
Reviewed-by: Tom Rini <trini@konsulko.com>
I have one of these boards loaded with Ubuntu 24.10 (64-bit). Add an
entry for it so that it can be used for testing.
Signed-off-by: Simon Glass <sjg@chromium.org>
Acked-by: Heinrich Schuchardt <xypron.glpk@gmx.de>
Upon further consideration, we should have both DEFAULT_FAST_ARM64_TAG
and DEFAULT_ARM64_TAG values available. This will allow us to later run
a matrix of some jobs, such as sandbox, on any arm64 host and still keep
the world build to only fast arm64 hosts.
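Such a matrix could look something like this (a sketch; tag values
illustrative):

    sandbox test.py:
      parallel:
        matrix:
          # run the same job once per runner tag
          - RUNNER_TAG: ["arm64", "amd64"]
      tags:
        - ${RUNNER_TAG}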
Signed-off-by: Tom Rini <trini@konsulko.com>
The CI uses the following command to launch xilinx_versal_virt_defconfig:
    qemu-system-aarch64 -M xlnx-versal-virt \
        -display none -m 4G -serial mon:stdio \
        -device loader,file=u-boot,cpu-num=0
Running 'usb start' or invoking eth_bootdev_hunt leads to a crash when
the function dwc3_core_init() tries to access a register at offset
0xc704 (DWC3_DCTL) relative to the register start address 0xfe20c100.
Disable CONFIG_USB_DWC3 in the CI until the driver problem is fixed.
Signed-off-by: Heinrich Schuchardt <heinrich.schuchardt@canonical.com>
In commit 399f739be6 ("CI: allow jobs to be run in merge requests") we
added "rules:when: always" to many stages of the pipeline so that
merge requests would trigger a run. However, based on current Gitlab
documentation, merge requests should still trigger us without this.
Furthermore, as things are written today we always run all stages of
the CI rather than failing out early on problems, which is not always
useful. Remove these stanzas, as merge requests should still be fine
triggering a run.
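The stanza being dropped from each stage is simply:

    rules:
      - when: always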
Link: https://docs.gitlab.com/ee/ci/yaml/#rules
Signed-off-by: Tom Rini <trini@konsulko.com>
Our Gitlab pipeline is currently broken up into several stages. This
was done with the thought process of "we should test tools, and if
they're good test emulated targets, and if they're good test real
hardware, and if they're good test the world". However, in terms of
that first stage it only really matters that binman et al. are still
functional. And for a few years now Gitlab has had a "needs" keyword
that lets you refine pipeline dependencies. Use this to perform the
minor optimization of having test.py require only the tool-testing
job. This will become more useful later when we add long-running
testsuites that we do not want to block later jobs.
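With "needs", the test.py jobs declare a dependency on just that one
job, e.g. (a sketch; job name illustrative):

    sandbox test.py:
      needs:
        # start as soon as the tool tests pass, skipping stage ordering
        - "Run binman, buildman, dtoc, Kconfig and patman testsuites"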
Signed-off-by: Tom Rini <trini@konsulko.com>
In the test.py stage of the build we mark the pytest results as
artifacts to save, so that they can be used for reports. This however
leads to all of those artifacts being downloaded (and then not used)
in later stages. Optimize this out by giving later jobs an empty
"dependencies" list, the keyword that controls which artifacts a job
fetches.
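That is, the later jobs gain an empty list:

    # fetch no artifacts from earlier stages
    dependencies: []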
Signed-off-by: Tom Rini <trini@konsulko.com>
Reviewed-by: Simon Glass <sjg@chromium.org>
When validating our current pipeline, a warning is produced about the
lack of a default workflow. For how we use it, we can add a simple
default of "always".
Signed-off-by: Tom Rini <trini@konsulko.com>
Reviewed-by: Simon Glass <sjg@chromium.org>
First, introduce DEFAULT_ALL_TAG, DEFAULT_ARM64_TAG, DEFAULT_AMD64_TAG
and DEFAULT_FAST_AMD64_TAG, and remove the previous DEFAULT_TAG
(anyone making use of that will need to adjust their jobs). This
allows us to say that some jobs can run on amd64 or arm64 hosts under
the "all" tag, while some jobs must run on amd64 (the Xtensa jobs due
to binary-only toolchains, and sandbox for now). Then we rework the
world build stage to only run on our very fast amd64 hosts or our
arm64 hosts (which are also very fast). This should result in a
similar overall build time, but a much more consistent one, as the two
big world jobs can no longer land on our slower build nodes.
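A sketch of how this fits together (tag and job names illustrative):

    variables:
      DEFAULT_ALL_TAG: "all"
      DEFAULT_ARM64_TAG: "arm64"
      DEFAULT_AMD64_TAG: "amd64"
      DEFAULT_FAST_AMD64_TAG: "fast amd64"

    world build:
      # restrict this job to the fast amd64 runners only
      tags:
        - ${DEFAULT_FAST_AMD64_TAG}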
Signed-off-by: Tom Rini <trini@konsulko.com>
The xtensa architecture is interesting in that the platforms we
support are only valid with the binary-only toolchains, as the DC233C
instruction set requires those toolchains (and not the FSF instruction
set). Only install the binary toolchain on amd64 hosts, and only run
the tests on them as well.
Signed-off-by: Tom Rini <trini@konsulko.com>
Reviewed-by: Simon Glass <sjg@chromium.org>
I have an original rpi installed now, loaded with OS Lite (32-bit).
Add an entry for it so that it can be used for testing.
Signed-off-by: Simon Glass <sjg@chromium.org>
Reviewed-by: Tom Rini <trini@konsulko.com>
Acked-by: Ilias Apalodimas <ilias.apalodimas@linaro.org>
Tom Rini <trini@konsulko.com> says:
Hey all,
This is picking up Simon's v5 of the above-named series and making a few
more changes so that the follow-up series I have leads to arm64 being
supported for almost all jobs. To quote Simon's cover letter:
All gitlab runners are currently amd64 machines. This series attempts to
create a docker image which can also support arm64 so that sandbox tests
can be run on it.
The TARGET_... environment variables for grub could perhaps be adjusted,
using the new variables, but I have not done that for now.
Adding to what Simon said, we now build grub for all architectures.
The reason to install it was to be able to use the binaries in QEMU,
but host packages won't provide amd64 binaries on arm64 hosts, so we
can't use that shortcut anymore.
Link: https://lore.kernel.org/r/20241127172247.1488685-1-trini@konsulko.com
Add a list of possible platforms that can be used by gitlab runners.
Signed-off-by: Simon Glass <sjg@chromium.org>
Reviewed-by: Tom Rini <trini@konsulko.com>
Reviewed-by: Ilias Apalodimas <ilias.apalodimas@linaro.org>
For consistency now, and future ease of testing with non-amd64 hosts,
build grub for all architectures rather than relying on host binaries
for i386/x86_64.
Reviewed-by: Simon Glass <sjg@chromium.org>
Signed-off-by: Tom Rini <trini@konsulko.com>
Simon Glass <sjg@chromium.org> says:
Labgrid provides access to a hardware lab in an automated way. It is
possible to boot U-Boot on boards in the lab without physically touching
them. It relies on relays, USB UARTs and SD muxes, among other things.
By way of background, about 4 years ago I wrote a thing called Labman[1]
which allowed my lab of about 30 devices to be operated remotely, using
tbot for the console and build integration. While it worked OK and I
used it for many bisects, I didn't take it any further.
It turns out that there was already an existing program, called Labgrid,
which I did not know about at the time (thank you Tom for telling me). It is
more rounded than Labman and has a number of advantages:
- does not need udev rules, mostly
- has several existing users who rely on it
- supports multiple machines exporting their devices
It lacks a 'lab check' feature and a few other things, but these can be
remedied.
On and off over the past several weeks I have been experimenting with
Labgrid. I have managed to create an initial U-Boot integration (this
series) by adding various features to Labgrid[2] and the U-Boot test
hooks[3].
I hope that this might inspire others to set up boards and run tests
automatically, rather than relying on infrequent, manual testing. Perhaps
it may even be possible to have a number of labs available.
Included in the integration are a number of simple scripts which make it
easy to connect to boards and run tests:
ub-int <target>
    Build and boot on a target, starting an interactive session
ub-cli <target>
    Build and boot on a target, ensure U-Boot starts and provide an
    interactive session from there
ub-smoke <target>
    Smoke test U-Boot to check that it boots to a prompt on a target
ub-bisect <target>
    Bisect a git tree to locate a failure on a particular target
ub-pyt <target> <testspec>
    Run U-Boot pytests on a target
Some of these help to provide the same tbot[4] workflow that I have
relied on for several years, albeit in much simpler form.
The goal here is to create some sort of script which can collect
patches from the mailing list, apply them and test them on a selection
of boards. I suspect that script already exists, so please let me know
what you suggest.
I hope you find this interesting and take a look!
[1] https://github.com/sjg20/u-boot/tree/lab6a
[2] https://github.com/labgrid-project/labgrid/pull/1411
[3] https://github.com/sjg20/uboot-test-hooks/tree/labgrid
[4] https://tbot.tools/index.html
Link: https://lore.kernel.org/r/20241112141326.643128-1-sjg@chromium.org
[trini: Move the sjg-lab job to prior to world build, to fix pipeline
status]
Signed-off-by: Tom Rini <trini@konsulko.com>
Add a way to run tests on a real hardware lab. This is in the very early
experimental stages. There are only 23 boards, and 3 of those (bob,
ff3399, samus) are broken! A fourth fails due to problems with the TPM
tests.
To try this, assuming you have gitlab access, set SJG_LAB=1, e.g.:
git push -o ci.variable="SJG_LAB=1" dm HEAD:try
This relies on the two previous series targeted at -next, as well as
the bugfix series for -master.
Signed-off-by: Simon Glass <sjg@chromium.org>
Reviewed-by: Andrejs Cainikovs <andrejs.cainikovs@toradex.com>
Patrick Rudolph <patrick.rudolph@9elements.com> says:
Based on the existing work done by Simon Glass this series adds
support for booting aarch64 devices using ACPI only.
As a first target, QEMU SBSA support is added, which relies on ACPI
alone to boot an OS. As a secondary target, the Raspberry Pi 4 was
used, which is broadly available and allows easy testing of the
proposed solution.
The series is split into ACPI cleanups and code movements, then adds
Arm-specific ACPI tables, and finally the SoC and mainboard changes
needed to boot Linux on the QEMU SBSA and RPi4. Currently only the
mandatory ACPI tables are supported, which is enough to boot into
Linux without errors.
The QEMU SBSA support is feature complete and provides the same
functionality as the EDK2 implementation.
The changes were tested on real hardware as well as on QEMU v9.0:
    qemu-system-aarch64 -machine sbsa-ref -nographic -cpu cortex-a57 \
        -pflash secure-world.rom \
        -pflash unsecure-world.rom

    qemu-system-aarch64 -machine raspi4b -kernel u-boot.bin -cpu cortex-a72 \
        -smp 4 -m 2G -drive file=raspbian.img,format=raw,index=0 \
        -dtb bcm2711-rpi-4-b.dtb -nographic
Tested against FWTS V24.03.00.
Known issues:
- The QEMU rpi4 support is currently limited as it doesn't emulate
  PCI, USB or ethernet devices!
- The SMP bringup doesn't work on RPi4, but works in QEMU (possibly
  cache related).
- PCI on RPi4 isn't working on real hardware since the pcie_brcmstb
  Linux kernel module doesn't support ACPI yet.
Link: https://lore.kernel.org/r/20241023132116.970117-1-patrick.rudolph@9elements.com
Add QEMU's SBSA ref board to Azure pipelines and Gitlab CI to run tests on it.
TEST: Run on Azure pipelines and confirmed that tests succeed.
Signed-off-by: Patrick Rudolph <patrick.rudolph@9elements.com>
Reviewed-by: Tom Rini <trini@konsulko.com>
Reviewed-by: Simon Glass <sjg@chromium.org>
Build and run qemu_arm64_lwip_defconfig in CI. This tests the lightweight
IP (lwIP) implementation of the dhcp, tftpboot and ping commands.
Signed-off-by: Jerome Forissier <jerome.forissier@linaro.org>
When we have platforms being emulated by QEMU we cannot rely on the
"sleep" command running for the expected wall-clock amount of time. Even
with our current allowance for deviation from expected time, it will
still fail from time to time. Exclude the sleep test here.
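The exclusion is a standard pytest keyword filter on the test.py
invocation, e.g. (a sketch; the board variable is illustrative):

    script:
      # -k "not sleep" deselects the wall-clock-sensitive sleep test
      - ./test/py/test.py --bd ${TEST_PY_BD} -k "not sleep"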
Signed-off-by: Tom Rini <trini@konsulko.com>
Since MbedTLS is an external repo with its own coding style,
exclude it from Azure and Gitlab CI CONFIG checks.
Signed-off-by: Raymond Mao <raymond.mao@linaro.org>
Reviewed-by: Tom Rini <trini@konsulko.com>
Reviewed-by: Ilias Apalodimas <ilias.apalodimas@linaro.org>
Jiaxun Yang <jiaxun.yang@flygoat.com> says:
Hi all,
This series enables the qemu-xtensa board.
For the dc232b CPU it needs to be built with the toolchain from [1].
This is a side product of my investigation of architectures where
physical address != virtual address in U-Boot. Now we can get this
covered by CI and regular tests.
VirtIO devices are not working as expected due to U-Boot's assumption
that VA == PA everywhere; I'm going to get this fixed later.
My Xtensa knowledge is pretty limited, so Xtensa people please feel
free to point out if I got anything wrong.
Thanks
[1]: https://github.com/foss-xtensa/toolchain/releases/download/2020.07/x86_64-2020.07-xtensa-dc232b-elf.tar.gz
Both GitLab and Azure (and other CI systems) have native support for
displaying JUnitXML test report results. The pytest framework that we
use can generate these reports. Change our CI tests so that they will
generate these reports and then have the respective CI platform pick
them up. We write to different locations because of where each CI is
(and isn't) able to easily pass things along.
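Assuming test.py forwards extra options through to pytest, generating
the report needs only the standard pytest flag, e.g. (a sketch; the
output path is illustrative):

    script:
      # --junitxml is pytest's built-in JUnitXML report option
      - ./test/py/test.py --bd sandbox --junitxml=/tmp/results.xml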
Reviewed-by: Simon Glass <sjg@chromium.org>
Signed-off-by: Tom Rini <trini@konsulko.com>
Since devicetree-rebasing is an external repo with its own coding style,
exclude it from Azure and Gitlab CI CONFIG checks.
Reviewed-by: Tom Rini <trini@konsulko.com>
Reviewed-by: Ilias Apalodimas <ilias.apalodimas@linaro.org>
Reviewed-by: Simon Glass <sjg@chromium.org>
Signed-off-by: Sumit Garg <sumit.garg@linaro.org>
Instead of downloading coreboot binaries from a Google drive location,
use the ones we have built ourselves.
Signed-off-by: Tom Rini <trini@konsulko.com>
The primary motivation for having a sandbox build without LTO in CI is
to ensure that we don't let that option break. We now have the ability
to run tests with specific options enabled or disabled, so drop the
parts of CI that build and test that configuration specifically and
add a build test instead. We still test that "NO_LTO=1" (rather than
editing the config file) works, via the ftrace tests.
Reviewed-by: Simon Glass <sjg@chromium.org>
Signed-off-by: Tom Rini <trini@konsulko.com>
Use the most recent upstream release of OpenSBI for CI testing.
Signed-off-by: Heinrich Schuchardt <heinrich.schuchardt@canonical.com>
Reviewed-by: Leo Yu-Chi Liang <ycliang@andestech.com>