Xen ARM with Virtualization Extensions/FastModels
The first 'hardware' which was supported by the Xen ARM with Virtualization Extensions port was the ARM FastModel emulator.
Fixed Virtual Platforms
The primary models in use today by the Xen developers are the Fixed Virtual Platforms (FVP) models which are available from ARM, e.g. RTSM_VE_Cortex-A15x2 and RTSM_VE_AEMv8Ax2.
In addition, for ARMv8, ARM makes a Foundation Model freely available (look for "ARMv8-A Foundation Platform" on the ARM website).
If you do not have access to the FVPs or Foundation Model (e.g. you are interested in ARMv7) then you may be able to download an evaluation version of the FastModels and build an equivalent model yourself using sgcanvas; see "Building a model with sgcanvas" below.
Known issues
- Xen boot can be slow on the models because it scrubs the memory. If it is too slow you can add no-bootscrub to the Xen command line.
Device Tree
The device tree for the ARMv8 foundation model is upstream in Linux.
Pawel Moll maintains a set of device tree files which describe the fast model platforms. See arm-dts.git.
This tree contains no build system, so the device tree compiler (dtc) should be invoked by hand:
$ git clone git://linux-arm.org/arm-dts.git arm-dts.git
$ cd arm-dts.git/fast_models
$ dtc -I dts -O dtb -o rtsm_ve-cortex_a15x2.dtb rtsm_ve-cortex_a15x2.dts
$ dtc -I dts -O dtb -o rtsm_ve-aemv8a.dtb rtsm_ve-aemv8a.dts
You should use the dts file which describes as many CPUs as the model you intend to use (e.g. the x1, x2 or x4 suffix). In the case of the AEM DTS you should edit it to contain the appropriate number of cpu nodes.
Firmware & boot-wrapper
It is common to run the models without real firmware. In this case a boot-wrapper is required in order to provide a suitable boot time environment for Xen (e.g. booting in NS-HYP mode, providing the boot modules, etc.). Boot-wrappers are available for both arm32 and arm64; however, their functionality differs significantly.
arm32
The arm32 boot-wrapper is the more functional version and can make use of semihosting to load the hypervisor, kernel and DTB from the host filesystem at runtime. A version of the boot-wrapper with support for Xen is available in the xen-arm32 branch of http://xenbits.xen.org/gitweb/?p=people/ianc/boot-wrapper.git;a=summary.
Build the bootwrapper with e.g.
$ git clone -b xen-arm32 git://xenbits.xen.org/people/ianc/boot-wrapper.git boot-wrapper.git
$ cd boot-wrapper.git
$ make CROSS_COMPILE=arm-linux-gnueabihf- semi
This will produce a linux-system-semi.axf binary. This should be passed to the model as the application to run and a cluster.cpu0.semihosting-cmd_line option should be passed (with -C) containing the set of modules and their command lines. e.g.
RTSM_VE_Cortex-A15x2 -C cluster.cpu0.semihosting-cmd_line=" \
    --kernel xen.git/xen/xen \
    --module linux.git/arch/arm/boot/zImage <DOMAIN 0 COMMAND LINE> \
    --dtb rtsm_ve-cortex_a15x2.dtb \
    -- <XEN COMMAND LINE>" \
    <MODEL OPTIONS> boot-wrapper.git/linux-system-semi.axf
Command line options are:
- --kernel <path-to-kernel>: Provides the "kernel", Xen in this case.
- --module <path-to-module> <optional-command-line>: Supplies a boot module. In this case the first module supplied is treated as the domain 0 kernel (in zImage format). The kernel command line should be specified here too.
- --dtb <path-to-dtb>: Supplies the Device Tree Blob.
The final -- token delimits the end of the options after which the kernel (Xen in this case) command line should be supplied.
Note that the entirety of the cluster.cpu0.semihosting-cmd_line options should be quoted from the shell.
arm64
The arm64 version of boot-wrapper is not as fully featured as the arm32 version and does not support semihosting. The required binaries and command lines are built directly into the boot-wrapper which must be rebuilt whenever any component changes.
The upstream boot-wrapper-aarch64 has Xen support. It can be built with:
$ git clone git://git.kernel.org/pub/scm/linux/kernel/git/mark/boot-wrapper-aarch64.git
$ cd boot-wrapper-aarch64
$ autoreconf -i
$ ./configure --host=aarch64-linux-gnu \
      --with-kernel-dir=$KERNEL \
      --with-dtb=$KERNEL/arch/arm64/boot/dts/arm/foundation-v8.dtb \
      --with-cmdline="console=hvc0 earlycon=pl011,0x1c090000 root=/dev/vda rw" \
      --enable-psci \
      --with-xen-cmdline="dtuart=serial0 console=dtuart no-bootscrub dom0_mem=512M" \
      --with-xen=$XEN \
      --with-cpu-ids=0,1,2,3
$ make
Where $KERNEL points to the Linux kernel directory, and $XEN points to the Xen binary. You need to have the cross-compile toolchain installed on your $PATH.
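For example (a minimal sketch; the paths below are placeholders for wherever you have checked out and built Linux and Xen):
$ export KERNEL=/path/to/linux.git                       # Linux kernel tree (contains arch/arm64/boot/Image)
$ export XEN=/path/to/xen.git/xen/xen                    # the Xen binary built from xen.git
$ export PATH=/path/to/aarch64-toolchain/bin:$PATH       # aarch64-linux-gnu- cross toolchain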
The resulting xen-system.axf binary should be passed to the model as the application to run. e.g.
$ ./Foundation_Platform --image=/path/to/xen-system.axf --block-device=<rootfs> --cores=4
If any of Xen, the FDT or the kernel Image change then only the final make step needs to be repeated.
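For example (assuming the ./configure invocation above has already been run with the same paths), regenerating xen-system.axf after rebuilding Xen or the dom0 kernel is just:
$ cd boot-wrapper-aarch64
$ make    # picks up the rebuilt Xen binary, kernel Image and DTB and rebuilds xen-system.axf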
Firmware
TODO: Real firmware on models?
Foundation Model
The ARMv8 Foundation Model is a free-as-in-beer AArch64 emulation platform. Its use is very similar to the arm64 FastModel instructions above, using the relevant boot-wrapper; however, the invocation of the model is slightly different:
./Foundation_v8pkg/models/Linux64_GCC-4.1/Foundation_v8 \
    --image boot-wrapper-aarch64/xen-system.axf \
    --block-device rootfs.img
The block device is exposed by the emulated hardware via virtio, therefore the root device will be /dev/vda. Make sure to have at least the following options enabled in your kernel config:
CONFIG_VIRTIO=y
CONFIG_VIRTIO_MMIO=y
CONFIG_VIRTIO_BLK=y
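If you are building the dom0 kernel yourself, one way to enable them (a sketch, assuming an existing .config in your Linux tree) is the kernel's scripts/config helper:
$ cd /path/to/linux.git
$ ./scripts/config --enable CONFIG_VIRTIO \
                   --enable CONFIG_VIRTIO_MMIO \
                   --enable CONFIG_VIRTIO_BLK
$ make olddefconfig    # resolve any dependencies the new options pull in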
FVP AEMv8 Model
The FVP AEMv8 Model is a licensed AArch64 emulation platform. It has additional features compared to the ARMv8 Foundation Model.
model_shell <FVP_AEMv8_install_directory>/models/Linux64_GCC-4.1/RTSM_VE_AEMv8A.so \
    -C motherboard.mmc.p_mmc_file=<aarch64_rootfs_image> \
    boot-wrapper-aarch64/xen-system.axf
Notes:
1. The DTS for the FVP AEMv8 model is already available in the mainline Linux kernel.
2. When trying Xen on an older FVP AEMv8 model you might need to disable the virtio block device in the FVP AEMv8 model DTS.
Model Options
Building a model with sgcanvas
Download FastModels & Evaluation License
You can download FastModels and an evaluation license from the ARM Info Center. In order to do so you will need to register. Once you have registered you can navigate via the Support drop-down menu to Resources, Evaluation Products and finally Fast Models. At this point you will be asked to provide a phone number as well as a host MAC address for licensing purposes; enter the MAC address of the machine you intend to run the emulator on. Next you should select the Processor model (select Cortex-A15) and host platform.
At this point your download should begin and you will be shown your license file, which you should also download and save.
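The host MAC address requested during registration can be read from the machine you intend to run the emulator on, e.g. (the interface name eth0 is an assumption):
$ ip link show eth0 | grep link/ether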
Installation
These instructions are based on the FE000-KT-00002-r7p0-40rel0.tgz version of FastModels. (Note, this is an older version of FastModels)
Unpack the tarball and run the setup.bin contained within. Follow the wizard to install.
Note: if your system is a 64 bit Debian Squeeze you need to install the package ia32-libs to be able to run setup.bin.
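For example, on a Debian system:
$ sudo apt-get install ia32-libs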
Building a model
We use the example models which ship with FastModels. These are equivalent to the FVP.
If you have access to an AEM license then you can/should use FastModels/FastModelsPortfolio_7.0/examples/RTSM_VE/Build_AEMv7A/RTSM_VE_AEMv7A.sgproj. If you have a Cortex-A15 evaluation license then FastModels/FastModelsPortfolio_7.0/examples/RTSM_VE/Build_Cortex-A15x1/RTSM_VE_Cortex-A15x1.sgproj is the one to use.
To start run:
sgcanvas <SGPROJ>
Using the relevant .sgproj file. sgcanvas will start and load the example model.
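For example, with the Cortex-A15 example project mentioned above:
$ sgcanvas FastModels/FastModelsPortfolio_7.0/examples/RTSM_VE/Build_Cortex-A15x1/RTSM_VE_Cortex-A15x1.sgproj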
At this point you can select your target environment from the Project, Active Configuration menu. Select the environment which best matches your host.
Next click the Build button and then hit Yes to save your changes.
At this point sgcanvas will compile your model, which will take a few minutes. The output will be e.g.
FastModels/FastModelsPortfolio_7.0/examples/RTSM_VE/Build_Cortex-A15x1/Linux64-Release-GCC-4.1/cadi_system_Linux64-Release-GCC-4.1.so
Where FastModels/FastModelsPortfolio_7.0/examples/RTSM_VE/Build_Cortex-A15x1 corresponds to the example project which you built and Linux64-Release-GCC-4.1 corresponds to the Active Configuration which you selected.
Note: if your system is a 64 bit Debian Squeeze you need to install the package xutils-dev to be able to compile your model.
Running a model
A model is run using the model_shell tool, or optionally modeldebugger. To run the model pass the path to the cadi_system_Linux64-Release-GCC-4.1.so as the first argument and the kernel to run (e.g. the boot-wrapper) as the second:
model_shell FastModels/FastModelsPortfolio_7.0/examples/RTSM_VE/Build_Cortex-A15x1/Linux64-Release-GCC-4.1/cadi_system_Linux64-Release-GCC-4.1.so boot-wrapper.git/linux-system-semi.axf
Documentation
Extensive documentation on the use of FastModels is installed as part of the installation process. Look in FastModels/FastModelsTools_7.0/doc and FastModels/FastModelsPortfolio_7.0/Docs.