Xen ARM with Virtualization Extensions/FastModels

The first 'hardware' which was supported by the Xen ARM with Virtualization Extensions port was the ARM FastModel emulator.

= Fixed Virtual Platforms =

The primary models in use today by the Xen developers are the Fixed Virtual Platform (FVP) models, which are available from ARM, e.g. RTSM_VE_Cortex-A15x2 and RTSM_VE_AEMv8Ax2.

If you do not have access to the FVPs then you may be able to download an evaluation version of the FastModels and build an equivalent model yourself using sgcanvas; see "Building a model with sgcanvas" below.

= Device Tree =

Pawel Moll maintains a set of device tree files which describe the fast model platforms. See arm-dts.git.

This tree contains no build system, so the device tree compiler (dtc) should be invoked by hand:

 $ git clone git://linux-arm.org/arm-dts.git arm-dts.git
 $ cd arm-dts.git/fast_models
 $ dtc -I dts -O dtb -o rtsm_ve-cortex_a15x2.dtb rtsm_ve-cortex_a15x2.dts
 $ dtc -I dts -O dtb -o rtsm_ve-aemv8a.dtb rtsm_ve-aemv8a.dts

You should use the dts file which describes the same number of CPUs as the model you intend to run (i.e. the x1, x2 or x4 suffix). In the case of the AEM DTS you should edit it to contain the appropriate number of cpu nodes, as sketched below.
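
For example, a two-CPU AEM configuration would have two cpu nodes along the following lines. This is only a sketch: when adding nodes you should duplicate the cpu node already present in the dts file and only change the unit address and reg value, keeping the existing compatible string and any other properties.

 cpus {
         #address-cells = <1>;
         #size-cells = <0>;
         cpu@0 {
                 device_type = "cpu";
                 compatible = "arm,armv8";
                 reg = <0>;
         };
         cpu@1 {
                 device_type = "cpu";
                 compatible = "arm,armv8";
                 reg = <1>;
         };
 };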

= Firmware & boot-wrapper =

It is common to run the models without real firmware. In this case a boot-wrapper is required in order to provide a suitable boot time environment for Xen (e.g. booting in NS-HYP mode, providing the boot modules etc). Boot-wrappers are available for both arm32 and arm64; however, their functionality differs significantly.

== arm32 ==

The arm32 boot-wrapper is the more functional version and can make use of semihosting to load the hypervisor, kernel and DTB from the host filesystem at runtime. A version of the boot-wrapper with support for Xen is available in the xen-arm32 branch of http://xenbits.xen.org/gitweb/?p=people/ianc/boot-wrapper.git;a=summary.

Build the bootwrapper with e.g.

 $ git clone -b xen-arm32 git://xenbits.xen.org/people/ianc/boot-wrapper.git boot-wrapper.git
 $ cd boot-wrapper.git
 $ make CROSS_COMPILE=arm-linux-gnueabihf- semi

This will produce a linux-system-semi.axf binary. This should be passed to the model as the application to run, and a cluster.cpu0.semihosting-cmd_line option should be passed (with -C) containing the set of modules and their command lines, e.g.:

 RTSM_VE_Cortex-A15x2 -C cluster.cpu0.semihosting-cmd_line="
     --kernel xen.git/xen/xen-arm32 \
     --module linux.git/arch/arm/boot/zImage <DOMAIN 0 COMMAND LINE>
     --dtb rtsm_ve-cortex_a15x2.dtb -- <XEN COMMAND LINE>" \
     <MODEL OPTIONS> boot-wrapper.git/linux-system-semi.axf

Command line options are:

; --kernel <path-to-kernel>
: Provides the "kernel", Xen in this case.
; --module <path-to-module> <optional-command-line>
: Supplies a boot module. In this case the first module supplied is treated as the domain 0 kernel (in zImage format). The kernel command line should be specified here too.
; --dtb <path-to-dtb>
: Supplies the Device Tree Blob.

The final -- token delimits the end of the options after which the kernel (Xen in this case) command line should be supplied.

Note that the entire cluster.cpu0.semihosting-cmd_line value should be quoted to protect it from the shell.
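
For illustration, a fully expanded invocation might look like the following sketch. The domain 0 command line (console=hvc0 root=/dev/nfs nfsroot=... ip=dhcp) and the Xen command line (console=dtuart dtuart=serial0) are placeholders only, and the NFS server address and path are purely illustrative; substitute whatever suits your root filesystem and console setup, along with any <MODEL OPTIONS> you require:

 RTSM_VE_Cortex-A15x2 -C cluster.cpu0.semihosting-cmd_line="
     --kernel xen.git/xen/xen-arm32 \
     --module linux.git/arch/arm/boot/zImage console=hvc0 root=/dev/nfs nfsroot=10.0.0.1:/export/dom0 ip=dhcp
     --dtb rtsm_ve-cortex_a15x2.dtb -- console=dtuart dtuart=serial0" \
     boot-wrapper.git/linux-system-semi.axf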

== arm64 ==

The arm64 version of boot-wrapper is not as fully featured as the arm32 version and does not support semihosting. The required binaries and command lines are built directly into the boot-wrapper which must be rebuilt whenever any component changes.

This boot-wrapper also does not support editing the FDT to provide a suitable chosen node referencing the boot modules etc. Therefore it is necessary to edit the dts file (e.g. arm-dts.git/fast_models/rtsm_ve-aemv8a.dts) to add a chosen node within the root / node.

 chosen {
 	#address-cells = <1>;
 	#size-cells = <1>;
 
 	xen,xen-bootargs = "<XEN COMMAND LINE>";
 	module@1 {
 		 compatible = "xen,linux-zimage", "xen,multiboot-module";
 		 reg = <0x80080000 0x800000>;
 		 bootargs = "<DOMAIN 0 COMMAND LINE>";
 	};
 };

The kernel address (0x80080000) is PHYS_OFFSET + KERNEL_OFFSET, as defined in the boot-wrapper Makefile:

 PHYS_OFFSET     := 0x80000000
 KERNEL_OFFSET   := 0x80000

The kernel size (0x800000) simply has to be larger than the kernel Image.

A version of the boot-wrapper with support for Xen is available in the xen-arm64 branch of http://xenbits.xen.org/gitweb/?p=people/ianc/boot-wrapper-aarch64.git;a=summary. Assuming you have already built Xen, Linux and a suitable FDT, the boot-wrapper can be built with:

 $ git clone -b xen-arm64 git://xenbits.xen.org/people/ianc/boot-wrapper-aarch64.git boot-wrapper-aarch64.git
 $ cd boot-wrapper-aarch64.git
 $ ln -s ../xen.git/xen/xen-arm32 Xen
 $ ln -s ../arm-dts.git/fast_models/rtsm_ve-aemv8a.dtb fdt.dtb
 $ ln -s ../linux.git/arch/arm64/boot/Image Image
 $ make CROSS_COMPILE=aarch64-linux-gnu- xen-system.axf

The resulting xen-system.axf binary should be passed to the model as the application to run. e.g.

 $ RTSM_VE_AEMv8Ax2 <MODEL OPTIONS> boot-wrapper-aarch64.git/xen-system.axf

If any of Xen, the FDT or the kernel Image change then only the final make step needs to be repeated.
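
For example, after rebuilding the domain 0 kernel Image the symlinks above will already point at the new binary, so it is enough to re-run the final step from within boot-wrapper-aarch64.git:

 $ make CROSS_COMPILE=aarch64-linux-gnu- xen-system.axf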

== Firmware ==

TODO: Real firmware on models?

= Model Options =

= Building a model with sgcanvas =

== Download FastModels & Evaluation License ==

You can download FastModels and an evaluation license from the ARM Info Center. In order to do so you will need to register. Once you have registered you can navigate via the Support drop-down menu to Resources, then Evaluation Products and finally Fast Models. At this point you will be asked to provide a phone number as well as a host MAC address for licensing purposes; you should enter the MAC address of the machine you intend to run the emulator on. Next you should select the Processor model (select Cortex-A15) and host platform.

At this point your download should begin and you will be shown your license file, which you should download and save.

== Installation ==

These instructions are based on the FE000-KT-00002-r7p0-40rel0.tgz version of FastModels. (Note, this is an older version of FastModels)

Unpack the tarball and run the setup.bin contained within, then follow the wizard to install.
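
A minimal sketch of these steps, assuming the archive extracts into a directory named after the tarball (the actual directory name may differ):

 $ tar xzf FE000-KT-00002-r7p0-40rel0.tgz
 $ cd FE000-KT-00002-r7p0-40rel0    # adjust to whatever directory was actually created
 $ ./setup.bin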

Note: if your system is a 64 bit Debian Squeeze you need to install the package ia32-libs to be able to run setup.bin.

== Building a model ==

We use the example models which ship with FastModels. These are equivalent to the FVP.

If you have access to an AEM license then you can/should use FastModels/FastModelsPortfolio_7.0/examples/RTSM_VE/Build_AEMv7A/RTSM_VE_AEMv7A.sgproj. If you have a Cortex-A15 evaluation license then FastModels/FastModelsPortfolio_7.0/examples/RTSM_VE/Build_Cortex-A15x1/RTSM_VE_Cortex-A15x1.sgproj is the one to use.

To start, run:

   sgcanvas <SGPROJ>

where <SGPROJ> is the relevant .sgproj file. sgcanvas will start and load the example model.
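
For example, using the Cortex-A15 evaluation project mentioned above:

   sgcanvas FastModels/FastModelsPortfolio_7.0/examples/RTSM_VE/Build_Cortex-A15x1/RTSM_VE_Cortex-A15x1.sgproj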

At this point you can select your target environment from the Project, Active Configuration menu. Select the environment which best matches your host.

Next click the Build button and then hit Yes to save your changes.

At this point sgcanvas will compile your model, which will take a few minutes. The output will be e.g.

   FastModels/FastModelsPortfolio_7.0/examples/RTSM_VE/Build_Cortex-A15x1/Linux64-Release-GCC-4.1/cadi_system_Linux64-Release-GCC-4.1.so

where FastModels/FastModelsPortfolio_7.0/examples/RTSM_VE/Build_Cortex-A15x1 corresponds to the example project which you built and Linux64-Release-GCC-4.1 corresponds to the Active Configuration which you selected.

Note: if your system is a 64 bit Debian Squeeze you need to install the package xutils-dev to be able to compile your model.

== Running a model ==

A model is run using the model_shell tool, or optionally modeldebugger. To run the model, pass the path to the cadi_system_Linux64-Release-GCC-4.1.so as the first argument and the application to run (e.g. the boot-wrapper) as the second:

   model_shell FastModels/FastModelsPortfolio_7.0/examples/RTSM_VE/Build_Cortex-A15x1/Linux64-Release-GCC-4.1/cadi_system_Linux64-Release-GCC-4.1.so boot-wrapper.git/linux-system-semi.axf
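
When running the arm32 boot-wrapper this way, the cluster.cpu0.semihosting-cmd_line parameter described in the arm32 section above can be passed to model_shell with the -C option, just as with the FVP binaries. A sketch (the exact parameter path may vary with the model you built; the placeholder command lines are unchanged from above):

   model_shell FastModels/FastModelsPortfolio_7.0/examples/RTSM_VE/Build_Cortex-A15x1/Linux64-Release-GCC-4.1/cadi_system_Linux64-Release-GCC-4.1.so \
       boot-wrapper.git/linux-system-semi.axf \
       -C cluster.cpu0.semihosting-cmd_line="
       --kernel xen.git/xen/xen-arm32 \
       --module linux.git/arch/arm/boot/zImage <DOMAIN 0 COMMAND LINE>
       --dtb rtsm_ve-cortex_a15x2.dtb -- <XEN COMMAND LINE>"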

= Documentation =

There is extensive documentation regarding the use of FastModels installed as part of the installation process. Look in FastModels/FastModelsTools_7.0/doc and FastModels/FastModelsPortfolio_7.0/Docs.