Xen ARM with Virtualization Extensions/Salvator-XS


(NOT COMPLETE! This page is still being written.)

General information

This wiki page describes how to run Xen on the Renesas Salvator-XS board with the R-Car H3 ES3.0 SoC.

General information on working with the Salvator-XS board is located at:

https://elinux.org/R-Car/Boards/Salvator-XS

Please note that although this page only covers the Salvator-XS + H3 ES3.0 configuration, the R-Car M3 ES3.0 (M3-W+) SoC is also supported. It is possible to run Xen on other board/SoC configurations by updating the platform-specific parts (device tree, etc.); the hypervisor part remains the same. For example, the following configurations will also work:

https://elinux.org/R-Car/Boards/M3SK
https://elinux.org/R-Car/Boards/H3SK

Since Renesas provides its reference software in the form of Yocto build instructions, additional steps are given here to build and run a system with Xen and Dom0. This page relies on the Renesas Yocto BSP v5.1.0.

BSP build

  • Follow the build instructions in "Manual steps" mode up to and including build step #4.
  • Clone the meta-renesas layer from xen-troops and cherry-pick the last 4 patches from the v5.1.0_xen branch; this is the minimum set of patches needed to run a system with Xen and Dom0 (a quick sanity check is shown after this list):
cd ${WORK}/meta-renesas
git remote add troops git@github.com:xen-troops/meta-renesas.git
git fetch troops
git cherry-pick f806ddaf40c465378c483004359404a9f141dc98^..405b9b98e5831c16e6b968a2cddce0a5fdac1856
  • Review the changes to r8a77951-salvator-xs-xen.dts added by one of the cherry-picked patches in case you want to change the system configuration (e.g. NFS server IP, Dom0 root options, IPMMU settings, Xen command line, etc.)
  • Skip build steps #5 and #6, then continue from build step #7 to the end
  • In build step #8, change the path to the local.conf file:
cp $WORK/meta-renesas/meta-rcar-gen3/docs/sample/conf/salvator-x/poky-gcc/bsp/*.conf ./conf/
cd $WORK/build
cp conf/local-wayland.conf conf/local.conf
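
As an optional sanity check after the cherry-pick, the last four commits on the current branch can be listed and compared against the tip of the xen-troops v5.1.0_xen branch (the subjects should match, the hashes will differ):

cd ${WORK}/meta-renesas
# the four cherry-picked subjects should match the tip of troops/v5.1.0_xen
git log --oneline -4
git log --oneline -4 troops/v5.1.0_xen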

Xen build

Get Xen sources

No additional patches are needed here to bring up the test system; mainline Xen works out of the box. Clone the sources from upstream and check out commit 73c932d0ea43ddf904db9429811788480c4cb816 ("tools/libxc: use uint32_t for pirq in xc_domain_irq_permission"). A more recent version will likely work as well, but the proposed commit is known to work:

git clone git://xenbits.xenproject.org/xen.git
cd xen
git checkout 73c932d0ea43ddf904db9429811788480c4cb816 -b v4.16_xen

Configure Xen

export CROSS_COMPILE=<path_to_gcc>
XEN_TARGET_ARCH=arm64 ./configure
cd xen
make menuconfig XEN_TARGET_ARCH=arm64
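
Note that CROSS_COMPILE is a toolchain prefix rather than a path to a gcc binary. Purely as an illustration (the prefix below is a hypothetical example, substitute whichever aarch64 cross toolchain you actually use):

# hypothetical example: a distro-provided aarch64 cross toolchain already on PATH
export CROSS_COMPILE=aarch64-linux-gnu-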

Enable IPMMU-VMSA support. Please note that DMA devices in Dom0 will work even with the IPMMU disabled, as Dom0 is a direct-mapped domain on Arm for now, so the IPMMU here serves a protection purpose. However, if you are going to launch other domains (which are not direct mapped) and assign DMA devices to them, the IPMMU must be enabled to perform the address translations required for these devices to work:

Device Drivers  --->
 [*] Renesas IPMMU-VMSA found in R-Car Gen3 SoCs
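
A quick way to confirm that the option was recorded is to grep the generated .config in the xen/ subdirectory after saving the menuconfig changes (the exact CONFIG_* symbol name is deliberately not spelled out here; the IPMMU substring is enough to find it):

cd xen
# expect a CONFIG_*IPMMU*=y line if the driver was enabled
grep IPMMU .config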

Enable earlyprintk support (if needed):

Debugging Options  --->
 [*] Early printk (Early printk via SCIF UART)  --->
  (X) Early printk via SCIF UART
  (0xe6e88000) Early printk, physical base address of debug UART
  Early printk UART SCIF interface version (default SCIF UART interface)  --->
   (X) default SCIF UART interface

Build Xen

cd ..
make xen XEN_TARGET_ARCH=arm64
mkimage -A arm64 -C none -T kernel -a 0x78080000 -e 0x78080000 -n "XEN" -d xen/xen xen-uImage
make -C tools/flask/policy
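
Optionally, the header of the generated uImage can be inspected to confirm the image type and the load/entry addresses before copying it out:

# list the uImage header: it should report a kernel image
# with load and entry point address 0x78080000
mkimage -l xen-uImage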

Copy the resulting images to the TFTP directory (prepared beforehand) for later use:

sudo cp xen-uImage <path_to_tftp_dir>
sudo cp tools/flask/policy/xenpolicy* <path_to_tftp_dir>/xenpolicy