Xen on ARM Real Time Analysis

Contributions

Michal Orzel (Arm)

Luca Fancellu (Arm)

Bertrand Marquis (Arm)

Introduction

This document aims to evaluate the real-time capabilities of the Xen hypervisor by assessing and analyzing the interrupt forwarding path inside Xen. The next goal is to compute a WCET (Worst Case Execution Time) for the most common use case: handling an interrupt in a real-time guest.

The investigation is carried out on the example of a virtual timer interrupt generated from within a Zephyr guest.

Configuration

The analysis is based on Xen 4.16.0 (tag RELEASE-4.16.0).

Compiler

gcc version 9.4.0 (target: aarch64-linux-gnu)

Target

FVP_Base_RevC-2xAEMvA

Fast Models [11.14.21 (Mar 16 2021)]

Xen Configuration process

$ ./configure \
    --prefix=/usr --exec_prefix=/usr --bindir=/usr/bin --sbindir=/usr/sbin \
    --libexecdir=/usr/lib --datadir=/usr/share --sysconfdir=/etc \
    --sharedstatedir=/com --localstatedir=/var --libdir=/usr/lib \
    --includedir=/usr/include --oldincludedir=/usr/include \
    --infodir=/usr/share/info --mandir=/usr/share/man --disable-stubdom \
    --disable-ioemu-stubdom --disable-pv-grub --disable-xenstore-stubdom \
    --disable-rombios --disable-ocamltools --disable-qemu-traditional \
    --disable-pvshim --disable-static --disable-seabios \
    --disable-sdl --disable-systemd --disable-xsmpolicy

Xen Build process

$ make dist-xen && make dist-tools

Kconfig options that must be disabled

  • CONFIG_NEW_VGIC

If this option is set, the new vGIC is used and enter_hypervisor_from_guest additionally calls the functions vtimer_update_irqs and vcpu_update_evtchn_irq. This results in more instructions being executed. CONFIG_NEW_VGIC is also not a security-supported option.

  • CONFIG_IOREQ_SERVER

If this option is set, the check_for_vcpu_work function additionally calls vcpu_ioreq_handle_completion, which results in more instructions and additional loops.

  • CONFIG_DEBUG_LOCK_PROFILE

If this option is set, it is possible to see how often locks are taken and blocked, using the serial console or the xenlockprof tool. This results in more instructions being executed.

  • CONFIG_DEBUG_LOCKS

This option enables debugging features of lock handling. Additional checks are performed when acquiring and releasing locks, which results in more instructions being executed.

  • CONFIG_HAS_ITS

This option enables support for the GICv3 ITS MSI controller. The ITS should be avoided as it can generate calls to the RCU system in Xen. This option is also currently unsupported.

  • CONFIG_XSM

XSM code is based on RCU, which could cause cross-core IPIs to be generated.
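For reference, a Xen .config fragment with all of the options above disabled would look roughly as follows (Kconfig records a disabled option as a comment line; the exact set of lines present depends on the rest of the configuration):

# CONFIG_NEW_VGIC is not set
# CONFIG_IOREQ_SERVER is not set
# CONFIG_DEBUG_LOCK_PROFILE is not set
# CONFIG_DEBUG_LOCKS is not set
# CONFIG_HAS_ITS is not set
# CONFIG_XSM is not set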

Additional recommendations for the real time guests

  • do not specify vpl011="sbsa_uart" in the guest config file
  • do not access the virtual distributor/re-distributor
  • other guests should not generate SGIs targeting the real-time guest
  • a real-time guest should use at most 16 interrupts

Runtime Configuration

The target runs with 2 CPUs.

dom0 runs a single vCPU assigned to pCPU 0.

The guest (Zephyr) runs a single vCPU assigned to pCPU 1.

Domain          Scheduler   pCPU no.   vCPU no.
dom0            credit2     0          0
guest (Zephyr)  null        1          0
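This split can be expressed with xl cpupools. The sketch below is illustrative only (file names, kernel path and memory size are hypothetical, not taken from the analysis setup): a cpupool running the null scheduler on pCPU 1, and a guest configuration placing the single Zephyr vCPU in that pool.

# Hypothetical cpupool configuration (rt-pool.cfg), created with
# "xl cpupool-create rt-pool.cfg" after freeing pCPU 1 from the default
# pool (e.g. "xl cpupool-cpu-remove Pool-0 1").
name  = "rt-pool"
sched = "null"
cpus  = ["1"]

# Hypothetical guest configuration for the Zephyr guest: a single vCPU in
# the pool above; note the absence of a vpl011 line, per the
# recommendations above.
name   = "zephyr-rt"
kernel = "/path/to/zephyr.bin"
memory = 64
vcpus  = 1
pool   = "rt-pool"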

Interrupt Forwarding Path Analysis

About

In this analysis, the number of executed assembly instructions is used as the benchmark/performance metric.

The tracing process comprised 1000 iterations. In each iteration the guest set up its virtual timer to fire an interrupt immediately. For each iteration, the number of assembly instructions executed in EL2 to handle the vtimer interrupt was measured, starting from the context switch from EL1 (guest) to EL2 (Xen) and ending with the switch back to EL1.

With the configuration described in the previous chapter, each iteration resulted in the same number of assembly instructions: 1090. The generated trace file was divided into blocks, which are analyzed in the next chapter.
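For illustration, arming the Armv8 virtual timer to fire immediately comes down to two system register writes performed by the guest at EL1. This is a minimal sketch, not the actual Zephyr test code; the register names and bit meanings are architectural:

/* Minimal sketch (not the actual Zephyr test code): arm the EL1 virtual
 * timer so that it fires immediately. CNTV_TVAL_EL0 = 0 makes the timer
 * condition true at once; CNTV_CTL_EL0 bit 0 (ENABLE) = 1 with bit 1
 * (IMASK) = 0 lets the resulting PPI (INTID 27) be delivered. */
static inline void vtimer_fire_now(void)
{
    unsigned long zero = 0, enable = 1;

    __asm__ volatile("msr cntv_tval_el0, %0" : : "r"(zero));
    __asm__ volatile("msr cntv_ctl_el0, %0" : : "r"(enable));
    __asm__ volatile("isb" : : : "memory");
}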

A graph showing the complete interrupt execution flow:

Interrupt Execution flow

Blocks Analysis

Saving state on entry to hypervisor

Loops   Code   No. of ASM ins.   Spinlocks
No      ASM    35 (fixed)        No
guest_irq -> guest_vector;
guest_vector -> entry;
entry -> entry_guest;

Consuming pending SErrors generated by the guest if any

Loops   Code   No. of ASM ins.   Spinlocks
No      ASM    10 (fixed)        No
  • Executed by default unless the "serrors=panic" command line option is set. In a real-time system it is recommended to use "serrors=panic" so that a guest SError will cause the whole system to crash.
entry -> check_pending_guest_serror;

Actions to be done after entering Xen and before unmasking interrupts

Loops   Code   No. of ASM ins.   Spinlocks
No      C      7 (fixed)         No
entry -> enter_hypervisor_from_guest_preirq;
enter_hypervisor_from_guest_preirq -> needs_ssbd_flip -> check_workaround_ssbd;

Actions to be done after entering Xen and before handling any request

Loops   Code   No. of ASM ins.   Spinlocks
Yes     C      304 (not fixed)   Yes
  • Loops: iterating through the List Registers (LRs). Max. no. of iterations: the architecture limits the number of List Registers to 16.
  • Instructions: the number of instructions depends mainly on the number of LRs to process. In our case there is only one LR to be cleared, which results in a block containing 304 instructions. The worst case is clearing all 16 LRs. GICH_VTR.ListRegs defines the number of implemented List Registers (see the sketch after the call graph below).
entry -> enter_hypervisor_from_guest;
enter_hypervisor_from_guest -> vgic_sync_from_lrs;
vgic_sync_from_lrs -> gic_get_nr_lrs;
vgic_sync_from_lrs -> gicv3_hcr_status;
/* spinlock save : arch.vgic.lock */
vgic_sync_from_lrs -> _spin_lock_irqsave;
/* Iterating through each LR - start */
vgic_sync_from_lrs -> find_next_bit;
vgic_sync_from_lrs -> gic_update_one_lr;
gic_update_one_lr -> gicv3_read_lr;
gicv3_read_lr -> gicv3_ich_read_lr;
gic_update_one_lr -> irq_to_pending;
gic_update_one_lr -> int_test_and_clear_bit;
gic_update_one_lr -> int_clear_bit;
gic_update_one_lr -> test_bit;
/* Remove from inflight list */
gic_update_one_lr -> list_del_init;
/* Iterating through each LR - end */
/* spinlock restore : arch.vgic.lock */
vgic_sync_from_lrs -> _spin_unlock_irqrestore;
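The loop bound referenced above can be computed directly from the GIC virtualisation control registers. A small standalone sketch (assumed helper, not Xen's GICv3 driver code) of how the number of implemented List Registers is derived from the ListRegs field:

#include <stdio.h>

/* Sketch (not Xen code): ICH_VTR_EL2.ListRegs (bits [4:0]; GICH_VTR.ListRegs
 * on GICv2) encodes the number of implemented List Registers minus one, so
 * the LR-clearing loop runs at most 16 times. */
static unsigned int nr_list_registers(unsigned long ich_vtr)
{
    return (unsigned int)(ich_vtr & 0x1f) + 1;
}

int main(void)
{
    /* Example register value reporting 4 LRs (ListRegs field = 3). */
    printf("LR loop bound: %u (architectural maximum: 16)\n",
           nr_list_registers(0x3));
    return 0;
}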

GIC IRQ handler entry

Loops   Code   No. of ASM ins.   Spinlocks
Yes     C      144 (not fixed)   Yes
  • Loops: iterating through pending IRQs. Max. no. of iterations: the number of IRQs in GICv3 is limited to 1024; however, the maximum number of interrupt sources is device specific and is usually discovered either from the DT or via bus-specific mechanisms.
entry -> do_trap_irq;
/* Accept an interrupt from the GIC and dispatch its handler */
do_trap_irq -> gic_interrupt;
gic_interrupt -> gicv3_read_irq;
/* do_IRQ is called in case of PPI or SPI interrupt */
gic_interrupt -> do_IRQ;
do_IRQ -> irq_to_desc;
/* spinlock lock: desc->lock */
do_IRQ -> _spin_lock;
do_IRQ -> gicv3_irq_ack;
do_IRQ -> test_bit;
/* Setting IRQ status to _IRQ_INPROGRESS */
do_IRQ -> int_set_bit;
/* spinlock unlock: desc->lock */
do_IRQ -> _spin_unlock_irq;
/* Call vtimer IRQ handler */
do_IRQ -> vtimer_interrupt;

Virtual Timer IRQ handler

Loops   Code   No. of ASM ins.   Spinlocks
No      C      360 (fixed)       Yes
/* Inject interrupt into the guest */
vtimer_interrupt -> vgic_inject_irq;
/* spinlock save: arch.vgic.lock */
vgic_inject_irq -> _spin_lock_irqsave;
/* Set IRQ to pending */
vgic_inject_irq -> irq_to_pending;
/* Check if vcpu is offline */
vgic_inject_irq -> test_bit;
/* Set IRQ status to GIC_IRQ_GUEST_QUEUED */
vgic_inject_irq -> int_set_bit;
vgic_inject_irq -> vgic_get_virq_priority;
vgic_get_virq_priority -> vgic_rank_irq -> vgic_get_rank;
/* Raise IRQ */
vgic_inject_irq -> gic_raise_guest_irq;
gic_raise_guest_irq -> gic_get_nr_lrs;
gic_raise_guest_irq -> irq_to_pending;
/* Find unused LR to insert IRQ into */
gic_raise_guest_irq -> gic_find_unused_lr;
gic_find_unused_lr -> gic_get_nr_lrs;
gic_find_unused_lr -> test_bit;
gic_find_unused_lr -> find_next_zero_bit;
gic_raise_guest_irq -> int_set_bit;
gic_raise_guest_irq -> gic_set_lr;
gic_set_lr -> int_clear_bit;
gic_set_lr -> gicv3_update_lr;
gicv3_update_lr -> gicv3_ich_write_lr;
gic_set_lr -> int_set_bit;
gic_set_lr -> int_clear_bit;
/* Add IRQ to inflight list */
vgic_inject_irq -> list_add_tail;
/* spinlock restore: arch.vgic.lock */
vgic_inject_irq -> _spin_unlock_irqrestore;
/* Wake up vcpu if it is blocked - not needed in our case */
vgic_inject_irq -> vcpu_kick;
vcpu_kick -> vcpu_unblock;
vcpu_unblock -> int_test_and_clear_bit;
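To make the List Register manipulation done by gic_set_lr/gicv3_ich_write_lr more concrete, the following standalone sketch (a simplification, not Xen's implementation) shows how a GICv3 ICH_LR<n>_EL2 value for a pending virtual interrupt is composed:

#include <stdint.h>
#include <stdio.h>

/* Sketch (not Xen's gicv3_ich_write_lr): composing an ICH_LR<n>_EL2 value
 * that injects a pending Group 1 virtual interrupt. Field layout per the
 * GICv3 architecture: State[63:62] (0b01 = pending), Group[60],
 * Priority[55:48], vINTID[31:0]. */
static uint64_t make_lr_pending(uint32_t vintid, uint8_t priority)
{
    uint64_t lr = 0;

    lr |= (uint64_t)1 << 62;        /* State = pending */
    lr |= (uint64_t)1 << 60;        /* Group 1 */
    lr |= (uint64_t)priority << 48; /* virtual priority */
    lr |= vintid;                   /* virtual INTID */
    return lr;
}

int main(void)
{
    /* Example: the virtual timer PPI (INTID 27) at an arbitrary priority. */
    printf("ICH_LR value: 0x%016llx\n",
           (unsigned long long)make_lr_pending(27, 0xa0));
    return 0;
}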

GIC IRQ handler exit

Loops   Code   No. of ASM ins.   Spinlocks
No      C      97 (fixed)        Yes
/* spinlock lock: desc->lock */
do_IRQ -> _spin_lock_irq;
/* Clear bit _IRQ_INPROGRESS of IRQ status */
do_IRQ -> int_clear_bit;
do_IRQ -> gicv3_host_irq_end;
/* Lower the priority and deactivate the IRQ */
gicv3_host_irq_end -> gicv3_eoi_irq;
gicv3_host_irq_end -> gicv3_dir_irq;
/* spinlock unlock: desc->lock */
do_IRQ -> _spin_unlock;
/* Check if there are any pending interrupts */
gic_interrupt -> gicv3_read_irq;

Actions to be done before entering the guest

Loops   Code   No. of ASM ins.   Spinlocks
Yes     C      100 (not fixed)   Yes
  • Loops: iterating through pending work for the vCPU before it can be safely resumed. Max. no. of iterations: only 1 iteration is possible, provided CONFIG_IOREQ_SERVER is disabled.
  • Loops: iterating through pending softIRQs. Max. no. of iterations: depends on the number of pending softIRQs, which in turn depends on many factors (mostly RCU usage). As long as the real-time core is not doing any hypercalls, none should be generated.

guest_vector -> exit;
exit -> leave_hypervisor_to_guest;
/* Process pending work for the vCPU */
leave_hypervisor_to_guest -> check_for_vcpu_work;
/* Call softIRQs if pending */
leave_hypervisor_to_guest -> check_for_pcpu_work;
/* Synchronise vGIC state with the GIC hardware */
leave_hypervisor_to_guest -> vgic_sync_to_lrs;
vgic_sync_to_lrs -> gic_restore_pending_irqs;
gic_restore_pending_irqs -> gic_get_nr_lrs;
/* spinlock lock: arch.vgic.lock */
gic_restore_pending_irqs -> _spin_lock;
gic_restore_pending_irqs -> list_empty;
/* spinlock unlock: arch.vgic.lock */
gic_restore_pending_irqs -> _spin_unlock;
leave_hypervisor_to_guest -> needs_ssbd_flip;
leave_hypervisor_to_guest -> check_workaround_ssbd;

Restoring state on exit from hypervisor

Loops   Code   No. of ASM ins.   Spinlocks
No      ASM    33 (fixed)        No
exit -> exit_guest;
exit -> return_from_trap;

Spinlock Analysis

The goal of this chapter is to analyze the spinlocks inside the interrupt forwarding path that was the subject of the previous chapter.

Interrupt Forwarding Path Analysis showed that there are 2 spinlocks in the path:

  1. arch.vgic.lock from struct vgic_cpu

This spinlock is initialized per vCPU.

  2. desc->lock from struct irq_desc

This spinlock is initialized per IRQ (so 1024 spinlocks).

SPINLOCK: arch.vgic.lock

The following functions lock/unlock the arch.vgic.lock spinlock.

arch/arm/gic-vgic.c

  • vgic_sync_from_lrs
  • gic_restore_pending_irqs
  • vgic_vcpu_pending_irq

arch/arm/vgic-v3-its.c

  • its_handle_clear
  • its_handle_inv
  • its_handle_invall
  • its_discard_event
  • its_handle_movi

The above can be discarded provided that CONFIG_HAS_ITS is not set.

arch/arm/vgic-v3.c

  • __vgic_v3_rdistr_rd_mmio_read
  • __vgic_v3_rdistr_rd_mmio_write

The above can be triggered only by a guest trying to access the virtual redistributor. This is pointless when running on a single vCPU that is statically assigned to a pCPU, so a guest should not do it.

arch/arm/vgic.c

  • domain_vgic_init
  • vcpu_vgic_init
  • vgic_migrate_irq
  • vgic_disable_irqs
  • vgic_enable_irqs
  • vgic_clear_pending_irqs

vgic_migrate_irq is executed if the target vCPU has changed, which cannot occur when a guest is running on a single vCPU that is statically assigned to a pCPU.

vgic_disable_irqs and vgic_enable_irqs are executed only if the guest is trying to access the virtual distributor. This is pointless when running on a single vCPU that is statically assigned to a pCPU, so a guest should not do it.

  • vgic_inject_irq. Possible callers:
    • virt_timer_expired (not possible when using the NULL scheduler)
    • phys_timer_expired (not possible when using the NULL scheduler)
    • vtimer_update_irq (only if CONFIG_NEW_VGIC is set)
    • vpl011_update_interrupt_status (only if vpl011="sbsa_uart" is used in the guest config)
    • vgic_set_irqs_pending (if the guest is accessing the virtual distributor)
    • vgic_to_sgi (if the guest is trying to generate an SGI; a guest should not do this)
    • vtimer_interrupt
    • do_IRQ
    • vgic_vcpu_inject_lpi (only if CONFIG_HAS_ITS is set)
    • dm_op (when calling the dm_op hypercall with XEN_DMOP_set_irq_level)
    • vcpu_update_evtchn_irq (only if CONFIG_NEW_VGIC is set)

Correlation between structures: vgic_cpu and pending_irq

struct vgic_cpu

  • inflight_irqs list: used to keep track of the IRQs that the vGIC injected into the guest. It is ordered by IRQ priority.
  • lr_pending list: used to queue IRQs (struct pending_irq) that the vGIC tried to inject into the guest but for which no LR was available.

The above lists are per vCPU and initialized in vcpu_vgic_init.

struct pending_irq

  • inflight list: used to append an instance of struct pending_irq to the inflight_irqs list of struct vgic_cpu.
  • lr_queue list: used to append an instance of struct pending_irq to the lr_pending list of struct vgic_cpu.

The above lists are per IRQ and initialized in vgic_init_pending_irq.
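A reduced C sketch of the relationship between the two structures and their lists (only the fields discussed in this chapter; the field names follow the Xen sources, but these are not the complete definitions):

#include <stdint.h>

/* Minimal doubly-linked list head, standing in for Xen's struct list_head. */
struct list_head {
    struct list_head *next, *prev;
};

typedef struct { int lock; } spinlock_t; /* placeholder for Xen's spinlock_t */

/* Reduced sketch of struct vgic_cpu (per vCPU): the two lists and the
 * arch.vgic.lock spinlock analyzed in this chapter. */
struct vgic_cpu {
    spinlock_t lock;                 /* arch.vgic.lock */
    struct list_head inflight_irqs;  /* IRQs injected into the guest, ordered by priority */
    struct list_head lr_pending;     /* IRQs waiting for a free List Register */
};

/* Reduced sketch of struct pending_irq (per IRQ): the list nodes used to
 * link the IRQ into the per-vCPU lists above. */
struct pending_irq {
    unsigned int irq;
    uint8_t priority;
    struct list_head inflight;       /* node in vgic_cpu.inflight_irqs */
    struct list_head lr_queue;       /* node in vgic_cpu.lr_pending */
};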

SPINLOCK: desc->lock

The following functions lock/unlock the desc->lock spinlock.

arch/arm/irq.c

  • do_IRQ
  • release_irq
  • setup_irq
  • route_irq_to_guest
  • release_guest_irq
  • irq_set_spi_type
  • irq_local_set_type

arch/arm/vgic.c

  • vgic_disable_irqs
  • vgic_enable_irqs
  • vgic_set_irqs_pending

common/irq.c

  • init_one_irq_desc

To achieve the best WCET, a configuration in which a guest runs on a single vCPU statically assigned to a pCPU using the NULL scheduler shall be used. This way the additional locks related to the following are avoided:

  • migrating vCPUs (thanks to the single vCPU/pCPU assignment)
  • performing timer softirqs (thanks to the NULL scheduler)

To limit the worst-case scenario, a real-time guest should use at most 16 interrupts.

Correlation between IRQ and VGIC spinlocks

Tracing and analysis done in the chapter Interrupt Forwarding Path Analysis showed that:

  • The spinlock desc->lock from irq_desc is only taken/released from do_IRQ. Sections protected by this spinlock are marked blue in the Interrupt Execution flow graph.
  • The spinlock arch.vgic.lock from vgic_cpu is only taken/released from vgic_sync_from_lrs (section 1), vgic_inject_irq (section 2) and gic_restore_pending_irqs (section 3). Sections protected by this spinlock are marked red in the Interrupt Execution flow graph.
  • These two spinlocks are never held at the same time: when one spinlock is taken, the other has already been released.

IPI Analysis

About

This analysis identifies all possible cases where Xen can generate an IPI to another core. The goal is to define what could influence the response time of a real-time guest running in the configuration described in the chapter Configuration.

The analysis starts from the actual function generating an IPI and walks the reverse call tree to find all conditions under which an IPI can be generated.

On Arm there is a single function used to generate IPIs: gic_hw_ops->send_SGI.

The function is implemented by the interrupt controller driver and uses the hardware to generate an interrupt to one or several cores of the system.

A graph showing all the places from which an IPI can be triggered:

Callers of gic_hw_ops->send_SGI:

The function is called by:

  • send_SGI_mask
  • send_SGI_self
  • send_SGI_allbutself

  • send_SGI_mask is called by:
    • send_SGI_one
    • smp_send_event_check_mask
    • smp_send_call_function_mask
  • send_SGI_self is never called
  • send_SGI_allbutself is called by:
    • cpu_up_send_sgi

Level 2 callers:

  • cpu_up_send_sgi is called by:
    • exynos5_cpu_up: This function is only called during boot by the exynos5 platform to signal to the main CPU that a secondary core is up and alive. Conclusion: Xen Init.
    • plat->cpu_up: This function is only called during boot by platforms to signal to the main CPU that a secondary core is up and alive. Conclusion: Xen Init.
  • smp_send_event_check_mask is called by:
    • vcpu_kick: This function wakes up/signals another vCPU of a guest and only actually fires an interrupt if the destination vCPU is different from the current vCPU.
      • vgic_queue_irq_unlock: Used only in the new vGIC. Conclusion: do not use the new vGIC.
      • vgic_kick_vcpus: Used only in the new vGIC. Conclusion: do not use the new vGIC.
      • vgic_inject_irq:
        • vtimer: raises an interrupt on the core itself. Conclusion: a real-time process could be impacted by the timer on its core, but this is the same behaviour as on normal hardware.
        • vpl011: generates a serial interrupt to the guest. Conclusion: do not use vpl011.
        • XEN_DMOP_set_irq_level: hypercall to set an interrupt level. Conclusion: the control domain should not use this to generate interrupts to the real-time guest.
        • vcpu_mark_events_pending: part of Xen event channels. Conclusion: do not use event channels between the real-time guest and other guests on the system.
        • vcpu_update_evtchn_irq: same as previous.
        • do_IRQ: this can be triggered when a hardware interrupt is to be forwarded to the guest. This can in fact be a desired scenario for a real-time guest that wants to handle a hardware interrupt.
  • cpumask_raise_softirq is called by:
    • credit2 __runq_tickle: This function raises an interrupt if the next task to schedule on a core changes. Conclusion: do not use credit2 for real-time guests; this should not happen if the real-time core is not using credit2.
    • nullsched null_unit_wake: Wakes up a CPU when a task is scheduled on it. Conclusion: single-core guest alone on its CPU.
    • sched/core schedule: Deschedules the current domain and reschedules a new one. TODO: generic code, can this happen when using the null scheduler?
    • rcupdate rcu_barrier: This function is called by:
      • cpu_hotplug_begin: This function is called by:
        • cpu_down: Function to stop a core. Conclusion: Xen halt/reboot.
        • cpu_up: Function called when a core is started. Conclusion: Xen Init.
    • rcupdate force_quiescent_state: This function is called when the callback queue on a cpu is above 1000 (parameter defined in the module). This function is called by:
      • call_rcu: This function is called by:
        • free_percpu_area: This function is called when a CPU is DEAD. Conclusion: Xen halt/reboot.
        • rcu_node_free is called by:
          • radix_tree_node_free is called by:
            • radix_tree_shrink is called by:
              • radix_tree_delete: Deletes a node from the radix tree.
            • radix_tree_delete is called by:
              • p2m_mem_access_radix_set: Conclusion: It happens only if mem access is active. Don’t use mem access for any domain.
                • __p2m_set_entry: Insert (or remove) an entry in the p2m.
                  • p2m_set_entry
                    • p2m_set_mem_access: Set access type for a region of pfns
                      • p2m_mem_access_check
                        • do_trap_stage2_abort_guest
                          • do_trap_guest_sync (Instruction/data abort from lower EL, permission fault)
                            • guest_sync (Synchronous exception from 64bit guest)
                            • guest_sync_compat (Synchronous exception from 32bit guest)
                      • mem_access_memop
                        • do_memory_op (XENMEM_access_op)
                          • compat_memory_op (only x86)
                          • do_trap_hypercall
                            • do_trap_guest_sync
                              • guest_sync (Hypercall from 64bit guest)
                              • guest_sync_compat (Hypercall from 32bit guest)
                    • p2m_insert_mapping
                      • map_regions_p2mt
                        • map_range_to_domain (Called at init)
                          • handle_device
                            • handle_node
                              • prepare_dtb_hwdom
                                • construct_dom0
                                  • create_dom0*
                          • map_device_children
                            • handle_device*
                          • pci_host_bridge_mappings
                            • construct_dom0*
                        • handle_passthrough_prop
                          • handle_prop_pfdt
                            • scan_pfdt_node
                              • domain_handle_dtb_bootmodule
                                • prepare_dtb_domU
                                  • construct_domU
                                    • create_domUs*
                        • try_map_mmio
                          • do_trap_stage2_abort_guest. Conclusion: this branch is taken only on a permission-fault access by the hardware domain when Xen boots using ACPI and the hardware domain must be given access to the region.
                        • acpi_map_other_tables (only ACPI at init)
                          • prepare_acpi*
                        • prepare_acpi (only ACPI at init)
                          • construct_dom0*
                      • map_mmio_regions
                        • vgic_v2_domain_init (Xen init)
                          • domain_vgic_init
                            • arch_domain_create
                              • domain_create*
                        • gicv2_map_hwdown_extra_mappings (Dom0 construction)
                          • gic_map_hwdom_extra_mappings
                            • construct_dom0*
                        • vgic_v2_map_resources (domain creation)
                          • domain_vgic_init*
                        • do_domctl (XEN_DOMCTL_memory_mapping)
                          • do_trap_hypercall
                            • do_trap_guest_sync
                              • guest_sync (Hypercall from 64bit guest)
                              • guest_sync_compat (Hypercall from 32bit guest)
                        • map_range
                          • rangeset_consume_ranges
                            • vpci_process_pending
                              • check_for_vcpu_work
                                • leave_hypervisor_to_guest
                            • apply_map (Called only on Xen boot [system_state < SYS_STATE_active])
                              • modify_bars
                                • cmd_write
                                  • vpci_write_helper
                                    • vpci_write
                                      • vpci_ecam_write
                                        • vpci_mmcfg_write
                                          • handle_write
                                            • try_handle_mmio
                                              • do_trap_stage2_abort_guest
                                                • do_trap_guest_sync (data abort from lower EL)
                                                  • guest_sync (Synchronous exception from 64bit guest)
                                                  • guest_sync_compat (Synchronous exception from 32bit guest)
                                • rom_write
                                  • vpci_write_helper
                                    • vpci_write
                                      • vpci_ecam_write
                                        • vpci_mmcfg_write
                                          • handle_write
                                            • try_handle_mmio
                                              • do_trap_stage2_abort_guest
                                                • do_trap_guest_sync (data abort from lower EL)
                                                  • guest_sync (Synchronous exception from 64bit guest)
                                                  • guest_sync_compat (Synchronous exception from 32bit guest)
                                • init_bars
                                  • vpci_add_handlers
                                    • pci_add_device
                                      • pci_physdev_op
                                        • do_physdev_op
                                          • do_trap_hypercall
                                            • do_trap_guest_sync
                                              • guest_sync (Hypercall from 64bit guest)
                                              • guest_sync_compat (Hypercall from 32bit guest)
                                      • pci_add_device
                                    • setup_one_hwdom_device
                                      • _setup_hwdom_pci_devices
                                        • setup_hwdom_pci_devices (called only on x86)
                              • init_bars
                                • vpci_add_handlers
                                  • pci_add_device
                                    • pci_physdev_op
                                      • do_physdev_op (PHYSDEVOP_pci_device_add)
                                        • do_trap_hypercall
                                          • do_trap_guest_sync
                                            • guest_sync (Hypercall from 64bit guest)
                                            • guest_sync_compat (Hypercall from 32bit guest)
                                    • pci_add_device*
                                  • setup_one_hwdom_device
                                    • _setup_hwdom_pci_devices
                                      • setup_hwdom_pci_devices (called only on x86)
                      • map_dev_mmio_region
                        • xenmem_add_to_physmap_one
                          • xenmem_add_to_physmap
                            • do_memory_op* (XENMEM_add_to_physmap)
                          • xenmem_add_to_physmap_batch
                            • do_memory_op* (XENMEM_add_to_physmap_batch)
                      • guest_physmap_add_entry
                        • xenmem_add_to_physmap_one*
                        • create_grant_host_mapping
                          • map_grant_ref
                            • gnttab_map_grant_ref
                              • do_grant_table_op (GNTTABOP_map_grant_ref)
                                • compat_grant_table_op (only x86)
                                • do_trap_hypercall
                                  • do_trap_guest_sync
                                    • guest_sync (Hypercall from 64bit guest)
                                    • guest_sync_compat (Hypercall from 32bit guest)
                        • set_foreign_p2m_entry
                          • acquire_resource
                            • do_memory_op* (XENMEM_acquire_resource)
                        • arm_iommu_map_page
                          • iommu_map
                            • iommu_legacy_map
                              • map_grant_ref*
                              • unmap_common
                                • unmap_grant_ref
                                  • gnttab_unmap_grant_ref
                                    • do_grant_table_op* (GNTTABOP_unmap_grant_ref)
                                • unmap_and_replace
                                  • gnttab_unmap_and_replace
                                    • do_grant_table_op* (GNTTABOP_unmap_and_replace)
                    • p2m_remove_mapping
                      • unmap_regions_p2mt Conclusion: Not called anywhere
                      • unmap_mmio_regions
                        • do_domctl* (XEN_DOMCTL_memory_mapping)
                        • map_range*
                      • guest_physmap_remove_page
                        • replace_grant_host_mapping
                          • map_grant_ref*
                          • unmap_common*
                        • gnttab_transfer
                          • do_grant_table_op* (GNTTABOP_transfer)
                        • guest_remove_page
                          • decrease_reservation
                            • do_memory_op* (XENMEM_decrease_reservation)
                        • memory_exchange
                          • do_memory_op* (XENMEM_exchange) Conclusion: Do not use memory_op hypercall on any guest using XENMEM_exchange as argument.
                        • do_memory_op* (XENMEM_remove_from_physmap) Conclusion: Do not use memory_op hypercall on any guest using XENMEM_remove_from_physmap as argument.
                        • gnttab_unpopulate_status_frames
                          • gnttab_set_version
                            • do_grant_table_op (GNTTABOP_set_version)
                        • gnttab_map_frame
                          • xenmem_add_to_physmap_one*
                  • relinquish_p2m_mapping
                    • domain_relinquish_resources
                      • domain_kill
                        • do_domctl* (XEN_DOMCTL_destroydomain) Conclusion: Domain destroy
              • its_discard_event Conclusion: ITS can’t be used by any guest as it generates RCU subsystem calls that may lead to cross core IPI handling.
                • its_unmap_device
                  • its_handle_mapd
                    • vgic_its_handle_cmds
                      • vgic_v3_its_mmio_write
                        • handle_write
                          • try_handle_mmio (Deliberate action to write on a certain part of memory)
                            • do_trap_stage2_abort_guest
                              • do_trap_guest_sync (data abort from lower EL)
                                • guest_sync (Synchronous exception from 64bit guest)
                                • guest_sync_compat (Synchronous exception from 32bit guest)
                • its_handle_discard
                  • vgic_its_handle_cmds*
              • map_grant_ref* Conclusion: Do not use grant table hypercall with GNTTABOP_map_grant_ref operation on any guest.
              • unmap_common* Conclusion: Do not use grant table hypercall with GNTTABOP_unmap_grant_ref operation on any guest.
              • gnttab_release_mappings Conclusion: called only on domain destroy.
                • domain_teardown
                  • domain_create* (on error path)
                  • domain_kill*
            • radix_tree_node_destroy
              • radix_tree_destroy
                • p2m_teardown Conclusion: Only on domain destroy.
                  • arch_domain_destroy
                    • domain_create* (on error path)
                    • complete_domain_destroy
                • vgic_v3_domain_free Conclusion: Only on domain destroy.
                  • domain_vgic_free
                    • arch_domain_destroy*
                • domain_create Conclusion: Only on domain creation.
                  • create_domUs
                    • start_xen (init of Xen)
                  • create_dom0
                    • start_xen (init of Xen)
                  • setup_system_domains
                    • start_xen (init of Xen)
                  • do_domctl* (XEN_DOMCTL_createdomain)
                  • scheduler_init
                    • init_idle_domain
                      • start_xen (init of Xen)
                • complete_domain_destroy (callback of call_rcu, issued by domain_destroy)
                • gnttab_release_mappings*
        • domain_destroy: This is called whenever a domain is destroyed to wake up the other cores of the domain to stop them. Conclusion: Domain create/destroy.
        • free_pirq_struct is called by:
          • domain_create: Called if domain creation failed. Conclusion: Domain create/destroy.
          • complete_domain_destroy: Called when a domain is destroyed. Conclusion: Domain create/destroy.
          • pirq_get_info: only on error case
            • evtchn_bind_pirq: Called during EVTCHNOP_bind_pirq Conclusion: Do not create event channels
        • cpu_schedule_down is called by:
          • sched_rm_cpu:
            • cpu_schedule_callback: in CPU_DEAD case only to remove a cpu from the scheduler. Conclusion: Xen halt/reboot.
            • cpupool_cpu_remove_forced: only called during resume if a CPU that was available before does not come up again. Conclusion: Xen halt/reboot.
          • cpu_schedule_callback: in the CPU_UP_CANCELLED case, which only occurs if something goes wrong during CPU bring-up. Conclusion: Xen Init.
        • schedule_cpu_add: Add a cpu not assigned to any pool to a pool. Conclusion: Cpupool with fixed hardware core.
        • rcu_barrier_action is called by:
          • rcu_process_callbacks
            • __do_softirq*
        • avc_node_delete (xsm): do not use XSM
        • avc_node_replace (xsm): do not use XSM
    • cpu_raise_softirq_batch_finish: This is never called on arm.
  • smp_send_call_function_mask is called by:
    • on_selected_cpus is called by:
      • scrub_heap_pages: This function is scrubbing the memory to remove any old data from it during init. Conclusion: Xen Init.
      • cpu_down: Stops a xen cpu. Conclusion: Xen halt/reboot.
      • smp_call_function is called by:
        • setup_virt_paging: Setup Xen mappings in the mmu during boot. Conclusion: Xen Init.
        • machine_halt: Stops the computer. Conclusion: Xen halt/reboot.
        • machine_restart: Restarts the computer. Conclusion: Xen halt/reboot.
        • read_clocks: Gets clock information from the other cores to generate some debug information on the debug console. Conclusion: Debug console usage.
        • gdb_smp_pause: Debugger pausing all cores. Conclusion: Debugging support.
  • smp_send_state_dump: Request through the console to dump cpu state. Conclusion: Debug console usage.

RCU callback stack

  • Any callback added to the list by a call to call_rcu()
    • rcu_do_batch
      • __rcu_process_callbacks
        • rcu_process_callbacks
          • __do_softirq
            • process_pending_softirqs
              • do_idle
                • idle_loop
                  • continue_new_vcpu (PC address for a new vcpu)
                  • startup_cpu_idle_loop
                    • start_secondary (Xen init)
                    • init_done (Xen init)
              • __cpu_up
                • cpu_up
                  • start_xen
                  • core_parking_helper (only on x86)
                  • enable_nonboot_cpus (only on x86)
              • __cpu_die
                • cpu_down
                  • core_parking_helper (only on x86)
                  • enable_nonboot_cpus (only on x86)
                  • disable_nonboot_cpus (only on x86)
              • dump_domains (only for debug purpose)
                • do_tasklet
                  • idle_loop*
              • run_all_nonirq_keyhandlers (only for debug purpose)
                • do_tasklet*
              • livepatch_printall (only for debug purpose)
                • do_tasklet*
              • scrub_heap_pages
                • heap_init_late
                  • start_xen
              • rcu_barrier_action (rcu barriers are used only on cpu up/down)
                • rcu_process_callbacks
                  • __do_softirq*
              • rcu_barrier
                • cpu_hotplug_begin (only when cpu system config changes)
                  • cpu_down*
                  • cpu_up*
              • warning_print
                • console_endboot
                  • start_xen
              • _setup_hwdom_pci_devices
                • setup_hwdom_pci_devices (called only on x86)
              • apply_map* (Called only on Xen boot [system_state < SYS_STATE_active])
            • do_softirq
              • idle_loop

Conclusions

Cases out of scope

Some parts of Xen are transient (init, halt, reboot) or simply do not make sense for real-time usage; they can lead to IPI generation but are out of scope of this analysis:

  • Xen Init: several cases can lead to the generation of an IPI to another core:
    • Secondary core startup signaling to the main core that it has started and is alive.
    • Scrubbing of guest memory, which signals other cores to make sure the cache is synced on all cores.
    • Setup of Xen virtual memory (configure and start the MMU).
  • Xen halt/reboot: when the complete platform is to be rebooted or stopped, Xen signals the secondary cores to also stop. Conclusion: stopping or rebooting the system is out of scope.
  • Debug console usage: some of the commands available in the debug console request information from other cores; to make sure the information is stable, an IPI is sent and some operations are done directly on the secondary cores before reporting to the user. Conclusion: the debug console should be disabled.
  • Debugging support (e.g. gdb): debugging requires synchronizing cores (for example to stop all of them from executing code at the same time). Conclusion: GDB support should be disabled.
  • Domain create/destroy: whenever a domain is created, cores are assigned and scheduled, which might require signaling the other cores it will run on. When a domain is destroyed, the other cores of the guest need to be woken up to stop properly. Conclusion: the guest init/destroy phase is out of scope.
  • Domain configuration: whenever a domain configuration is altered, in particular the cpupool it is assigned to, an IPI might be generated to inform the affected cores. Conclusion: guest configuration should not be modified at runtime.
  • Event channel creation: event channels should not be used between the real-time domain and other domains.
  • Hardware interrupts: hardware interrupt handlers must be analysed to derive a worst-case execution time and to check whether it is feasible for the real-time system. This depends on the device being used and is therefore out of scope of this analysis.

Restrictions in configuration

Some of the conditions under which an IPI can be generated can be prevented by putting restrictions on how the system is configured and on the attributes/options of the guest that must be real-time:

  • Single core guest: several conditions could lead to Xen generating an interrupt to wake up another vCPU of a guest (injection of an interrupt, guest starting the virtual interrupt controller), and the guest itself might want to send an IPI to one of its other cores. Conclusion: only one core should be assigned to real-time guests.
  • Cpupool with fixed hardware core: in the generic case, Xen can move a guest vCPU between hardware cores depending on the load of the system. As this might alter the guest timings and have other bad side effects (cache or TLB flushes required), a real-time guest should be configured with a cpupool and cores dedicated to it. Conclusion: real-time guests must have a dedicated cpupool with dedicated cores.
  • Avoid using the ITS: use of the ITS can generate calls to the RCU system in Xen, manipulating a radix tree used to store the pending IRQs and subsequently adding callbacks that are executed in the idle loop or on a context switch. The RCU system can raise cross-core IPIs when there are too many callbacks queued. Therefore do not use the ITS in any guest of the system.
  • Do not use PV drivers on any guest: PV drivers require the use of grant tables. Hypercalls with the GNTTABOP_map_grant_ref and GNTTABOP_unmap_grant_ref operations trigger modifications to a radix tree and subsequently use RCU, which could lead to cross-core IPIs.
  • Do not use mem access to monitor any guest on the system: use of mem access leads to modifications to a radix tree that subsequently trigger RCU, which could cause cross-core IPIs to be raised.
  • XSM must be disabled: compile Xen with CONFIG_XSM not set, or start Xen with "flask=disabled" to make Xen use the dummy XSM module. XSM code is based on RCU to protect access to and modification of some internal lists; RCU could cause cross-core IPIs to be raised.