= Virtual Network Interfaces =


== Paravirtualised Network Devices ==
{{Needs_Review|This page has been marked as out-of-date and needs review and its content needs to be updated for Xen 4.x.}}


A Xen guest typically has access to one or more [[Paravirtualization_(PV)|paravirtualised (PV)]] network interfaces. These PV interfaces enable fast and efficient network communications for domains without the overhead of emulating a real network device. Drivers for PV network devices are available by default in most PV aware guest OS kernels. In addition PV network drivers are available for various guest operating systems when running as a [[XenOverview#HVM|fully virtualised (HVM)]] guest, e.g. via [[PV on HVM]] drivers for Linux or the [[Xen Windows GplPv|GPL PV drivers for Windows]].


A paravirtualised network device consists of a pair of network devices. The first of these (the ''frontend'') will reside in the guest domain while the second (the ''backend'') will reside in the backend domain (typically [[Dom0]]). A similar pair of devices is created for each virtual network interface.


The frontend devices appear much like any other physical Ethernet NIC in the guest domain. Typically under Linux it is bound to the ''xen-netfront'' driver and creates a device ''ethN''. Under NetBSD and FreeBSD the frontend devices are named ''xennetN'' and ''xnN'' respectively.


The backend device is typically named such that it contains both the guest domain ID and the index of the device. Under Linux such devices are by default named ''vifDOMID.DEVID'' while under NetBSD ''xvifDOMID.DEVID'' is used.


In both cases the device naming is subject to the usual guest or backend domain facilities for renaming network devices. For the remainder of this document the default Linux naming, that is ''ethN'' for frontend and ''vifDOMID.DEVID'' for backend devices, will be used.
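You can inspect this naming from both ends. A minimal sketch, assuming a Linux backend domain with the [[XL]] toolstack and iproute2 installed (the domain name <code>guest1</code> is only an example):
<pre>
# In dom0: list a guest's virtual interfaces and the backend devices seen by the kernel
xl network-list guest1      # shows each vif's index, MAC address and bridge
ip link show                # backend devices appear as vifDOMID.DEVID

# In the guest: the frontend appears as an ordinary NIC bound to xen-netfront
ip link show eth0
</pre>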


The front and backend devices are linked by a virtual communication channel; guest networking is achieved by arranging for traffic to pass from the backend device onto the wider network, e.g. using bridging, routing or Network Address Translation (NAT).


<!-- To edit this image please send patches against xen-unstable.hg/docs/figs -->
https://xenbits.xen.org/docs/unstable-staging/figs/network-basic.png


== Emulated Network Devices ==


As well as PV network interfaces, [[XenOverview#HVM|fully virtualised (HVM)]] guests can also be configured with one or more emulated network devices. These devices emulate a real piece of hardware and are useful when a guest OS does not have PV drivers available, or when those drivers are not yet in use (e.g. during guest installation).


An emulated network device is usually paired with a PV device with the same MAC address and configuration. This allows the guest to smoothly transition from the emulated device to the PV device when a driver becomes available.
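For example, a hypothetical HVM guest configuration line choosing the emulated NIC model (the MAC address, bridge name and <code>e1000</code> model here are purely illustrative; see the XL network configuration documentation for the keywords supported by your toolstack):
 vif = [ 'mac=00:16:3e:00:11:22,bridge=xenbr0,model=e1000' ]
As described above, a PV device with the same MAC address is normally offered to the guest alongside the emulated one.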


The emulated network device is provided by the device model, running either as a process in domain 0 or as a [[Device Model Stub Domains|Stub Domain]].


When the DM runs as a process in domain 0 then the device is surfaced in the backend domain as a ''tap'' type network device. Historically these were named either ''tapID'' (for an arbitrary ID) or ''tapDOMID.DEVID''. More recently they have been named ''vifDOMID.DEVID-emu'' to highlight the relationship between the paired PV and emulated devices.


If the DM runs in a stub domain then the device surfaces in domain 0 as a PV network device attached to the ''stub domain''. The stub domain will take care of forwarding between the device emulator and this PV device.


For the remainder of this document PV and Emulated devices are mostly interchangeable and we will use the PV naming in the examples.


= MAC addresses =


Virtualised network interfaces in domains are given Ethernet [https://en.wikipedia.org/wiki/MAC_address MAC addresses]. By default most Xen toolstacks will select a random address. Depending on the toolstack, this will either be static for the entire lifetime of the guest (e.g. [[Libvirt]], [[XAPI]] or xend managed domains) or will change each time the guest is started (e.g. [[XL]] or xend unmanaged domains).


In the latter case, if a fixed MAC address is required (e.g. for use with DHCP), then this can be configured using the <code>mac=</code> option to the <code>vif</code> configuration directive (e.g. <code>vif = ['mac=aa:00:00:00:00:11']</code>). See [https://xenbits.xen.org/docs/unstable/man/xl-network-configuration.5.html XL Network Configuration] for more details of the syntax.


When choosing MAC addresses there are in general three strategies which can be used. In decreasing order of preference these are:


* Assign an address from the range associated with an [https://en.wikipedia.org/wiki/Organizationally_Unique_Identifier Organizationally Unique Identifier] (OUI) which you control. If you do not know what this means then you likely do not control an OUI and this option does not apply to you.
* Generate a random sequence of 6 bytes, set the locally administered bit (bit 2 of the first byte) and clear the multicast bit (bit 1 of the first byte). In other words the first byte should have the bit pattern xxxxxx10 (where x is a randomly generated bit) and the remaining 5 bytes are randomly generated. See [https://en.wikipedia.org/wiki/MAC_address wikipedia] for more details on the structure of a MAC address.
* Assign a random address from within the space 00:16:3e:xx:xx:xx. 00:16:3e is an [https://en.wikipedia.org/wiki/Organizationally_Unique_Identifier OUI] assigned to the Xen project and which has been made available for Xen users for the purposes of assigning local addresses within that space.


A MAC address must be unique among all network devices (both physical and virtual) on the same local network segment (e.g. on the LAN containing the Xen host). For this reason if you do not have your own OUI to use it is in general recommended to generate a random locally administered address (the second option above) rather than using the Xen OUI (the third option) since it gives 46 bits of randomness rather than 24 which significantly reduces the chances of a clash.
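A minimal bash sketch of the second strategy (a randomly generated, locally administered, unicast address); the use of <code>$RANDOM</code> is just an illustration:
<pre>
# First byte: bit pattern xxxxxx10 -- locally administered bit set, multicast bit clear
printf '%02x' $(( (RANDOM & 0xfc) | 0x02 ))
# Remaining five bytes are fully random
for i in 1 2 3 4 5; do printf ':%02x' $(( RANDOM & 0xff )); done
echo
</pre>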


= Bridging =


The default (and most common) Xen configuration uses bridging within the backend domain (typically domain 0) to allow all domains to appear on the network as individual hosts.


In this configuration a software bridge is created in the backend domain. The backend virtual network devices (''vifDOMID.DEVID'') are added to this bridge along with an (optional) physical Ethernet device to provide connectivity off the host. By omitting the physical Ethernet device an isolated network containing only guest domains can be created.


There are two common naming schemes when using bridged networking. In one scheme the physical device ''eth0'' is renamed to ''peth0'' and a bridge named ''eth0'' is created. In the other the physical device remains ''eth0'' while the bridge is named ''xenbr0'' (or ''br0'' etc). We shall use the ''eth0''+''xenbr0'' naming scheme here.


Of course you are free to use whatever names you like, including descriptive names (e.g. "dmz", "internal", "external" etc).


<!-- To edit this image please send patches against xen-unstable.hg/docs/figs -->
http://xenbits.xen.org/docs/unstable-staging/figs/network-bridge.png


== Setting up bridged networking ==


The recommended method for configuring bridged networking is to use your distro supplied network configuration tools as described in [[Host Configuration/Networking]].
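If you just want to experiment by hand, a minimal sketch of the bridge described above, using bridge-utils and iproute2 (<code>xenbr0</code> and <code>eth0</code> are the example names used in this document):
<pre>
brctl addbr xenbr0      # create the software bridge
brctl addif xenbr0 eth0 # add the physical NIC as the uplink (omit this for an isolated network)
ip link set xenbr0 up
# Note: any IP address dom0 had configured on eth0 normally needs to be moved to
# xenbr0 for dom0 itself to keep network connectivity.
</pre>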


Prior to Xen 4.1 when xend started up it would run the <code>network-bridge</code> script which would reconfigure any existing physical network configuration into a bridged network configuration i.e. it would create a bridge, move the IP address from the physical device to the bridge, add the physical device to the bridge etc. However this was fragile and prone to breaking and therefore is no longer recommended.


After Xen 4.1 xend will only do this if no bridges currently exist, so as to avoid overwriting any locally configured network configuration.


The [[XL]] toolstack will never modify the network configuration and expects that the administrator will have configured the host networking appropriately. Check out this [[Network_Configuration_Examples_%28Xen_4.1%2B%29|XL example]].


== Attaching virtual devices to the appropriate bridge ==


When a domU starts up the <code>vif-bridge</code> script is run which:
# "real" ethernet interface <code><nowiki>eth0</nowiki></code> is brought down
# the IP and MAC addresses of <code><nowiki>eth0</nowiki></code> are copied to virtual network interface <code><nowiki>veth0</nowiki></code>
# real interface <code><nowiki>eth0</nowiki></code> is renamed <code><nowiki>peth0</nowiki></code>
# virtual interface <code><nowiki>veth0</nowiki></code> is renamed <code><nowiki>eth0</nowiki></code>
# <code><nowiki>peth0</nowiki></code> and <code><nowiki>vif0.0</nowiki></code> are attached to bridge <code><nowiki>xenbr0. </nowiki></code>Please notice that in xen 3.3, the default bridge name is the same than the interface it is attached to. Eg: bridge name eth0, eth1 or ethX.VlanID
# the bridge, <code><nowiki>peth0</nowiki></code>, <code><nowiki>eth0</nowiki></code> and <code><nowiki>vif0.0</nowiki></code> are brought up
It is good to have the physical interface and the dom0 interface separated; thus you can e.g. setup a firewall on dom0 that does not affect the traffic to the domUs (just for protecting dom0 alone).


# attaches ''vifDOMID.DEVID'' to the appropriate bridge
# brings ''vifDOMID.DEVID'' up.


With [[XL]] and xend the bridge to use for each VIF can be configured using the ''bridge'' configuration key. e.g.
 vif=[ 'bridge=mybridge' ]

or

 vif=[ 'mac=00:16:3e:01:01:01,bridge=mybridge' ]

or to create multiple interfaces attached to different bridges:

 vif=[ 'mac=00:16:3e:70:01:01,bridge=br0', 'mac=00:16:3e:70:02:01,bridge=br1' ]
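Interfaces can also be added to or removed from a running guest. A sketch using the [[XL]] toolstack (the domain name <code>guest1</code> and the device ID are illustrative):
<pre>
xl network-attach guest1 mac=00:16:3e:70:03:01,bridge=br1   # hot-plug an extra vif
xl network-list guest1                                      # note the new device's Idx
xl network-detach guest1 1                                  # detach by device ID (or MAC)
</pre>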


== Bridging Loops ==


It is common practice to disable the [https://en.wikipedia.org/wiki/Spanning_Tree_Protocol Spanning Tree Protocol] on Xen bridges. However if guests are able to themselves bridge two or more interfaces together then you run the risk of creating bridging loops. See [[Xen Bridge Loop]] for more discussion of this issue.
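For reference, a sketch of turning STP off on a bridge named <code>xenbr0</code> (bridge-utils shown first, then the equivalent with a reasonably recent iproute2):
<pre>
brctl stp xenbr0 off
# or, with iproute2:
ip link set dev xenbr0 type bridge stp_state 0
</pre>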


== Links ==
Some relevant topics from the mailing list:


{{Warning|Many of the links presented here are rather old and may refer to configurations which are no longer best practice, such as the use of the <code>network-*</code> scripts to configure networking.}}
* eth0 IP in dom0 [http://lists.xensource.com/archives/html/xen-devel/2005-01/msg00425.html 2005/01/14]
* Bridging vs. Routing [http://lists.xensource.com/archives/html/xen-devel/2005-01/msg00368.html 2005/01/13]
* An attempt to explain Xen networking [http://lists.xensource.com/archives/html/xen-users/2006-02/msg00030.html 2006-02-01]
* [http://lists.xensource.com/archives/html/xen-users/2006-02/msg00602.html Firewall in domU with bridging]
* [http://www1.shorewall.net/XenMyWay.html Xen and the Art of Consolidation] (with bridging)
* [http://lists.xensource.com/archives/html/xen-users/2006-03/msg00109.html Another way for making multiple Xen bridges]
* [http://lists.xensource.com/archives/html/xen-users/2007-05/msg00064.html Advanced bridging (2007/05)]
* [https://searchservervirtualization.techtarget.com/tip/Creating-additional-Xen-virtual-network-bridges Creating additional Xen virtual network bridges]


= Open vSwitch =


The Xen 4.3 release introduced initial integration of [https://www.openvswitch.org/ Open vSwitch] based networking. Conceptually this is similar to a bridged configuration, but rather than placing each vif on a Linux bridge, an Open vSwitch switch is used. Open vSwitch supports more advanced [https://en.wikipedia.org/wiki/Software-defined_networking Software-defined Networking] (SDN) features such as [https://www.opennetworking.org/technical-communities/areas/specification/open-datapath/ OpenFlow].


== Setting up Open vSwitch networking ==


Set up openvswitch according to the [[Network_Configuration_Examples_(Xen_4.1+)|Host Networking Configuration Examples]].
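A minimal sketch of creating such a switch by hand, assuming the Open vSwitch tools are installed (<code>ovsbr0</code> and <code>eth0</code> are the example names used below):
<pre>
ovs-vsctl add-br ovsbr0         # create the Open vSwitch bridge
ovs-vsctl add-port ovsbr0 eth0  # add the physical NIC as the uplink port
ovs-vsctl show                  # verify the configuration
</pre>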


If you want openvswitch to be the default, add the following line to your <code>xl.conf</code> file:
<pre>
vif.default.script="vif-openvswitch"
</pre>


If you have given the openvswitch bridge a name other than <code>xenbr0</code>, you will need to update that default as well:
<pre>
vif.default.bridge="ovsbr0"
</pre>


Alternately, you can specify the new script (and bridge, if necessary) in each config file by adding <code>script=vif-openvswitch</code> (and possibly <code>bridge=ovsbr0</code>) to the vifspec of individual vifs in config files. See xl-network-configuration.markdown for more information.
 vif = [ 'script=vif-openvswitch,bridge=ovsbr0' ]


== Attaching virtual devices to the appropriate switch ==


Xen 4.3 ships with a <code>vif-openvswitch</code> hotplug script which behaves similarly to the <code>vif-bridge</code> script, except that it attaches the VIF to an openvswitch switch (named via the VIF's <code>bridge</code> parameter).


In addition to naming the bridge, the openvswitch hotplug script supports an extended syntax for the bridge option which allows for VLAN tagging and trunking. That syntax is:
 BRIDGE_NAME[.VLAN][:TRUNK:TRUNK]


To add a vif to VLAN 102 on bridge xenbr0:
 vif = [ 'mac=00:16:3e:01:01:01,bridge=xenbr0.102' ]


To add a vif to bridge xenbr1, trunked and receiving traffic for VLANs 101 and 202:
 vif = [ 'mac=00:16:3e:01:01:01,bridge=xenbr1:101:202' ]


= Routing =


In a routed network configuration a point-to-point link is created between the backend domain (typically domain 0) and each domU virtual network interface. Traffic is then routed between these point-to-point links and the outside world using the backend domain's network routing functionality.


For a general discussion of network routing see the [https://en.wikipedia.org/wiki/Routing wikipedia page] on the subject.


Because routes are created dynamically as domains are created it is usually necessary for each guest network interface to have a known static IP address.


== Setting up routing on the host ==


The recommended method for configuring networking is to use your distro supplied network configuration tools as described in [[Host Configuration/Networking]].
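Whichever tool you use, routed guest traffic will only flow if IP forwarding is enabled in the backend domain, for example (assuming a Linux dom0):
<pre>
# Enable IPv4 forwarding (make this persistent via /etc/sysctl.conf or similar)
sysctl -w net.ipv4.ip_forward=1
</pre>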


Prior to Xen 4.1 when xend started up it would run the <code>network-route</code> script which performed the necessary configuration. However this mechanism was fragile and prone to breaking and therefore is no longer recommended.


The [[XL]] toolstack will never modify the network configuration and expects that the administrator will have configured the host networking appropriately. Check out this [[Network_Configuration_Examples_%28Xen_4.1%2B%29|XL example]].


== Associating routes with virtual devices ==


When domU starts up, the <code>vif-route</code> script is run for each virtual device ''vifDOMID.DEVID''. This script sets up routing for that device by:


* Adding an IP address to the device. This address is largely arbitrary but required in order that the interface can be involved in routing. By default domain 0's IP address is used.
* Bringing up the device.
* Adding a host static route for the interface's IP address, as specified in the domU config file, routing traffic to the ''vifDOMID.DEVID'' interface (see the sketch below).
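Roughly, the effect for a single device is equivalent to the following sketch (the device name <code>vif1.0</code>, the dom0 address <code>192.168.1.1</code> and the guest address <code>192.168.1.12</code> are purely illustrative):
<pre>
ip addr add 192.168.1.1 dev vif1.0     # largely arbitrary address; dom0's own IP by default
ip link set vif1.0 up
ip route add 192.168.1.12 dev vif1.0   # host route to the guest's static IP
</pre>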


The IP address associated with a virtual network interface should be specified in the domain configuration file using the ''ip'' configuration key.
 vif=[ 'ip=192.168.1.12' ]

or

 vif=[ 'mac=00:16:3e:01:01:01,ip=192.168.1.12' ]

or for multiple devices:

 vif=[ 'mac=00:16:3e:70:01:01,ip=192.168.13.15', 'mac=00:16:3e:70:02:01,ip=192.168.75.11' ]


More information on <code>vif-route</code> can be found [[vif-route|here]].


= Network Address Translation =


[https://en.wikipedia.org/wiki/Network_address_translation Network Address Translation] or NAT is a form of routing which gives each guest VIF its own IP address on a [https://en.wikipedia.org/wiki/Private_network private/internal network], often using [https://tools.ietf.org/html/rfc1918 RFC1918] addresses, and performs address translation at the router/firewall (e.g. domain 0) to connect the entire private network to the rest of the network via a single public IP address.


NAT is sometimes also called "IP masquerading".


== Setting up NAT on the host ==


Setting up NAT is similar to configuring Routing as described above with the most obvious difference being that one should enable NAT in the backend domain.
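As a rough sketch, on a Linux dom0 this amounts to something like the following (the private network <code>192.168.42.0/24</code> and the outbound interface <code>eth0</code> are assumptions):
<pre>
sysctl -w net.ipv4.ip_forward=1
iptables -t nat -A POSTROUTING -s 192.168.42.0/24 -o eth0 -j MASQUERADE
</pre>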


The recommended method for configuring networking is to use your distro supplied network configuration tools as described in [[Host Configuration/Networking]].


Prior to Xen 4.1 when xend started up it would run the <code>network-nat</code> script which performed the necessary configuration. However this mechanism was fragile and prone to breaking and therefore is no longer recommended.


The [[XL]] toolstack will never modify the network configuration and expects that the administrator will have configured the host networking appropriately. Check out this [[Network_Configuration_Examples_%28Xen_4.1%2B%29|XL example]].


== Virtual Device Configuration ==


In a NAT'd configuration virtual devices are given IP addresses on a private network, typically an [https://tools.ietf.org/html/rfc1918 RFC1918] internal network. Guests may either be configured statically with addresses in the chosen network space or you can choose to run a [https://en.wikipedia.org/wiki/Dynamic_Host_Configuration_Protocol DHCP server] within that network (perhaps on the host itself) to provide addresses to guests.


When domU starts up, the <code>vif-nat</code> script is run for each virtual device ''vifDOMID.DEVID''. If the ISC DHCP server is installed then this script will attempt to dynamically reconfigure the DHCP service to serve up entries for the ''mac'' and ''ip'' address configuration keys in the guest configuration file. This is specific to the ISC DHCP server's configuration file syntax, so if you are using a different DHCP server or simply want to manage the DHCP server yourself then you should disable the <code>vif-nat</code> script (which seems like a good idea, since automatic editing of the DHCP configuration is bound to be fragile).
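If you do manage the DHCP server yourself, a hypothetical ISC dhcpd host entry matching a guest's <code>mac=</code> key and intended address might look like this (names and addresses are illustrative):
<pre>
host guest1 {
    hardware ethernet 00:16:3e:00:00:13;
    fixed-address 192.168.42.13;
}
</pre>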


= VLANs & Bonding =


Multiple tagged VLANs can be supported by configuring 802.1Q VLAN support into the backend domain (typically domain 0).


Once configured according to [[Host Configuration/Networking]] then the VLAN devices can be treated like any other device and used for either routing or bridging.
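For example, a sketch of creating a tagged VLAN interface and a bridge on top of it by hand (iproute2 and bridge-utils assumed; VLAN ID 100 and the interface names are illustrative):
<pre>
ip link add link eth0 name eth0.100 type vlan id 100   # 802.1Q VLAN 100 on top of eth0
ip link set eth0.100 up
brctl addbr xenbr100
brctl addif xenbr100 eth0.100   # guests attached to xenbr100 see only VLAN 100 traffic
ip link set xenbr100 up
</pre>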


Likewise bonding (or even VLANs over bonding etc) can also be created by following distribution specific documentation and treating the resulting device as normal.


= Advanced configurations =


By combining the above with the networking capabilities of the host OS it is possible to create more complex configurations to suit various different requirements.


; Virtual network using a ''brouter''.
:This configuration uses a bridge, with no physical device, which is shared by the guests. The bridge is given an IP address in domain 0, which is then routed (or even NATed) to the external network (hence ''bridged router''). See "[https://tr.opensuse.org/Xen3_and_a_Virtual_Network Xen3 and a Virtual Network]" for a more complete description of this type of configuration.


= ASCII Art Examples of Xen Networking Topologies =


The following diagrams attempt to show some common networking topologies used with Xen. See [[Network Configuration Examples (Xen 4.1+)]] for examples of how to achieve these configurations using distribution provided tools.


== Standard Bridged Networking Architecture ==
<pre>
LAN0 LAN1
| |
+-----+-----------------------------------------------------+-----+
| | | |
| +---+-------------------------+ +-------------------------+---+ |
| | | | | | | |
| | peth0 | | peth1 | |
| | | | | |
| | xenbr0 vif1.0 vif2.0 | | vif1.1 vif2.1 xenbr1 | |
| | | \ | | / | | |
| +---^------------+---------\--+ +--/---------+------------^---+ |
| | | \ / | | |
| | +------+-------------X-------------+------+ | |
| | | | / \ | | | |
| | | +----+---------/--+ +--\---------+----+ | | |
| | | | | / | | \ | | | | |
| | | | eth0 eth1 | | eth0 eth1 | | | |
| | | | | | | | | | | | | |
| +-+-+ | | +-+-+ +-+-+ | | +-+-+ +-+-+ | | +-+-+ |
| | | | | | | | | | | | | | | | | | | |
| www ssh | | www ssh ftp pop | | www ssh ftp pop | | ftp pop |
| | | | | | | |
| Domain0 | | Domain1 | | Domain2 | | Domain0 |
+-----------+ +-----------------+ +-----------------+ +-----------+
</pre>

== Alternative Xen Networking Architecture ==

<pre>
LAN0 LAN1
| |
+-----+-----------------------------------------------------+-----+
| | | |
| +---+-------------------------+ +-------------------------+---+ |
| | | | | | | |
| | eth0 | | eth1 | |
| | | | | |
| | xenbr0 vif1.0 vif2.0 | | vif1.1 vif2.1 xenbr1 | |
| | | \ | | / | | |
| +---^------------+---------\--+ +--/---------+------------^---+ |
| | | \ / | | |
| | +------+-------------X-------------+------+ | |
| | | | / \ | | | |
| | | +----+---------/--+ +--\---------+----+ | | |
| | | | | / | | \ | | | | |
| | | | eth0 eth1 | | eth0 eth1 | | | |
| | | | | | | | | | | | | |
| +-+-+ | | +-+-+ +-+-+ | | +-+-+ +-+-+ | | +-+-+ |
| | | | | | | | | | | | | | | | | | | |
| www ssh | | www ssh ftp pop | | www ssh ftp pop | | ftp pop |
| | | | | | | |
| Domain0 | | Domain1 | | Domain2 | | Domain0 |
+-----------+ +-----------------+ +-----------------+ +-----------+
</pre>


Notes:


* Based on Xen 3.3, with the bridge names changed to xenbr0, xenbr1, ...
* xenbrX has an active address, which is used by dom0 to communicate with outside.


== Xen Networking with VLANs ==
<pre>
LAN0 LAN1
| |
+-----+-----------------------------------------------------+-----+
| | | |
| +---+-------------------------+ +-------------------------+---+ |
| | | | | | | |
| | eth0.100 | | eth1.200 | |
| | | | | |
| | xenbr0 vif1.0 vif2.0 | | vif1.1 vif2.1 xenbr1 | |
| | | \ | | / | | |
| +---^------------+---------\--+ +--/---------+------------^---+ |
| | | \ / | | |
| | +------+-------------X-------------+------+ | |
| | | | / \ | | | |
| | | +----+---------/--+ +--\---------+----+ | | |
| | | | | / | | \ | | | | |
| | | | eth0 eth1 | | eth0 eth1 | | | |
| | | | | | | | | | | | | |
| +-+-+ | | +-+-+ +-+-+ | | +-+-+ +-+-+ | | +-+-+ |
| | | | | | | | | | | | | | | | | | | |
| www ssh | | www ssh ftp pop | | www ssh ftp pop | | ftp pop |
| | | | | | | |
| Domain0 | | Domain1 | | Domain2 | | Domain0 |
+-----------+ +-----------------+ +-----------------+ +-----------+
</pre>


Notes:

* There are two things that may need to be configured:
** If your ethernet card does not natively support VLAN tags, you will have to set the maximum MTU to 1496 to make room for the tag. With command:
<pre>
# ifconfig eth0 mtu 1496
</pre>

* With the [[DomUs]] bridged to VLAN interfaces, some optimizations need to be disabled or tcp and udp connections will fail. This is done by disabling transmit checksum offloading:
<pre>
# ethtool -K eth0 tx off
</pre>



== Xen Networking with bonding ==


<pre>
PRT0 PRT1 PRT2 PRT3
| | | |
| | bond0 | | bond1 | |
| | | | | |
| | xenbr0 vif1.0 vif2.0 | | vif1.1 vif2.1 xenbr1 | |
| | | \ | | / | | |
| +---^------------+---------\--+ +--/---------+------------^---+ |
| | | \ / | | |
| | +------+-------------X-------------+------+ | |
| | | | / \ | | | |
| | | +----+---------/--+ +--\---------+----+ | | |
| | | | | / | | \ | | | | |
| | | | eth0 eth1 | | eth0 eth1 | | | |
| | | | | | | | | | | | | |
| +-+-+ | | +-+-+ +-+-+ | | +-+-+ +-+-+ | | +-+-+ |
| | | | | | | | | | | | | | | | | | | |
| www ssh | | www ssh ftp pop | | www ssh ftp pop | | ftp pop |
| | | | | | | |
| Domain0 | | Domain1 | | Domain2 | | Domain0 |
+-----------+ +-----------------+ +-----------------+ +-----------+
</pre>



== Xen Networking with vlan on bonding ==


<pre>
PRT0 PRT1 PRT2 PRT3
| | | |
| | bond0.100 | | bond1.200 | |
| | | | | |
| | xenbr0 vif1.0 vif2.0 | | vif1.1 vif2.1 xenbr1 | |
| | | \ | | / | | |
| +---^------------+---------\--+ +--/---------+------------^---+ |
| | | \ / | | |
| | +------+-------------X-------------+------+ | |
| | | | / \ | | | |
| | | +----+---------/--+ +--\---------+----+ | | |
| | | | | / | | \ | | | | |
| | | | eth0 eth1 | | eth0 eth1 | | | |
| | | | | | | | | | | | | |
| +-+-+ | | +-+-+ +-+-+ | | +-+-+ +-+-+ | | +-+-+ |
| | | | | | | | | | | | | | | | | | | |
| www ssh | | www ssh ftp pop | | www ssh ftp pop | | ftp pop |
| | | | | | | |
| Domain0 | | Domain1 | | Domain2 | | Domain0 |
+-----------+ +-----------------+ +-----------------+ +-----------+
</pre>




Notes:

* In the VMs eth0 maps to bond0.100 and eth1 maps to bond1.200
* Protocols suggest a service VLAN (100) and a mgmt VLAN (200)


[[Category:Xen]]
[[Category:Overview]]
[[Category:HowTo]]
[[Category:Users]]
[[Category:Beginners]]
[[Category:Networking]]

Latest revision as of 23:47, 6 June 2018

Virtual Network Interfaces

Paravirtualised Network Devices

A Xen guest typically has access to one or more paravirtualised (PV) network interfaces. These PV interfaces enable fast and efficient network communications for domains without the overhead of emulating a real network device. Drivers for PV network devices are available by default in most PV aware guest OS kernels. In addition PV network drivers are available for various guest operating systems when running as a fully virtualised (HVM) guest, e.g. via PV on HVM drivers for Linux or the GPL PV drivers for Windows.

A paravirtualised network device consists of a pair of network devices. The first of these (the frontend) will reside in the guest domain while the second (the backend) will reside in the backend domain (typically Dom0). A similar pair of devices is created for each virtual network interface

The frontend devices appear much like any other physical Ethernet NIC in the guest domain. Typically under Linux it is bound to the xen-netfront driver and creates a device ethN. Under NetBSD and FreeBSD the frontend devices are named xennetN and xnN respectively.

The backend device is typically named such that it contains both the guest domain ID and the index of the device. Under Linux such devices are by default named vifDOMID.DEVID while under NetBSD xvifDOMID.DEVID is used.

In both cases the device naming is subject to the usual guest or backend domain facilities for renaming network devices. For the remainder of this document the default Linux naming, that is ethN for frontend and vifDOMID.DEVID for backend devices, will be used.

The front and backend devices are linked by a virtual communication channel, guest networking is achieved by arranging for traffic to pass from the backend device onto the wider network, e.g. using bridging, routing or Network Address Translation (NAT).

network-basic.png

Emulated Network Devices

As well as PV network interface fully virtualised (HVM) guests can also be configured with one or more emulated network devices. These devices emulate a real piece of hardware and are useful when a guest OS does not have PV drivers available or when they are not yet available (i.e. during guest installation).

An emulated network device is usually paired with a PV device with the same MAC address and configuration. This allows the guest to smoothly transition from the emulated device to the PV device when a driver becomes available.

The emulated network device is provided by the device model, running either as a process in domain 0 or as a Stub Domain.

When the DM runs as a process in domain 0 then the device is surfaced in the backend domain as a tap type network device. Historically these were named either tapID (for an arbitrary ID) or tapDOMID.DEVID. More recently they have been named vifDOMID.DOMID-emu to highlight the relationship between the paired PV and emulated devices.

If the DM runs in a stub domain then the device surfaces in domain 0 as a PV network device attached to the stub domain. The stub domain will take care of forwarding between the device emulator and this PV device.

For the remainder of this document PV and Emulated devices are mostly interchangeable and we will use the PV naming in the examples.

MAC addresses

Virtualised network interfaces in domains are given Ethernet MAC addresses. By default most Xen toolstacks will select a random address, depending on the toolstack this will either be static for the entire life time of the guest (e.g. Libvirt, XAPI or xend managed domains) or will change each time the guest is started (e.g. XL or xend unmanaged domains).

In the latter case if a fixed MAC address is required e.g. for using with DHCP then this can be be configured using the mac= option to the vif configuration directive (e.g. vif = ['mac=aa:00:00:00:00:11']). See XL Network Configuration for more details of the syntax.

When choosing MAC addresses there are in general three strategies which can be used. In decreasing order of preference these are:

  • Assign an address from the range associated with an Organizationally Unique Identifier (OUI) which you control. If you do not know what this means then you likely do not control an OUI and this option does not apply to you.
  • Generate a random sequence of 6 bytes, set the locally administered bit (bit 2 of the first byte) and clear the multicast bit (bit 1 of the first byte). In other words the first byte should have the bit pattern xxxxxx10 (where x is a randomly generated bit) and the remaining 5 bytes are randomly generated. See wikipedia for more details the structure of a MAC address.
  • Assign a random address from within the space 00:16:3e:xx:xx:xx. 00:16:3e is an OUI assigned to the Xen project and which has been made available for Xen users for the purposes of assigning local addresses within that space.

A MAC address must be unique among all network devices (both physical and virtual) on the same local network segment (e.g. on the LAN containing the Xen host). For this reason if you do not have your own OUI to use it is in general recommended to generate a random locally administered address (the second option above) rather than using the Xen OUI (the third option) since it gives 46 bits of randomness rather than 24 which significantly reduces the chances of a clash.

Bridging

The default (and most common) Xen configuration uses bridging within the backend domain (typically domain 0) to allow all domains to appear on the network as individual hosts.

In this configuration a software bridge is created in the backend domain. The backend virtual network devices (vifDOMID.DEVID)) are added to this bridge along with an (optional) physical Ethernet device to provide connectivity off the host. By omitting the physical Ethernet device an isolated network containing only guest domains can be created.

There are two common naming schemes when using bridged networking. In one scheme the physical device eth0 is renamed to peth0 and a bridge named eth0 is created. In the other the physical device remains eth0 while the bridge is named xenbr0 (or br0 etc). We shall use the eth0+xenbr0 naming scheme here.

Of course you are free to use whatever names you like, including descriptive names (e.g. "dmz", "internal", "external" etc).

network-bridge.png

Setting up bridged networking

The recommended method for configuring bridged networking is to use your distro supplied network configuration tools as described in Host Configuration/Networking.

Prior to Xen 4.1 when xend started up it would run the network-bridge script which would reconfigure any existing physical network configuration into a bridged network configuration i.e. it would create a bridge, move the IP address from the physical device to the bridge, add the physical device to the bridge etc. However this was fragile and prone to breaking and therefore is no longer recommended.

After Xen 4.1 xend will only do this if no bridges currently exist, so as to avoid overwriting any locally configured network configuration.

The XL toolstack will never modify the network configuration and expects that the administrator will have configured the host networking appropriately. Check out this XL example.

Attaching virtual devices to the appropriate bridge

When a domU starts up the vif-bridge script is run which:

  1. attaches vifDOMID.DEVID to the appropriate bridge
  2. brings vifDOMID.DEVID up.

With XL and xend the bridge to use for each VIF can be configured using the bridge configuration key. e.g.

   vif=[ 'bridge=mybridge' ]

or

   vif=[ 'mac=00:16:3e:01:01:01,bridge=mybridge' ]

or to create multiple interfaces attached to different bridges:

   vif=[ 'mac=00:16:3e:70:01:01,bridge=br0', 'mac=00:16:3e:70:02:01,bridge=br1' ]

Bridging Loops

It is common practice to disable the Spanning Tree Protocol on Xen bridges. However if guests are able to themselves bridge two or more interfaces together then you run the risk of creating bridging loops. See Xen Bridge Loop for more discussion of this issue.

Links

Some relevant topics from the mailing list:

Icon Ambox.png Many of the links presented here are rather old and may refer to configurations which are no longer best practice, such as the use of the network-* scripts to configure networking.

Open vSwitch

The Xen 4.3 release will feature initial integration of Open vSwitch based networking. Conceptually this is similar to a bridged configuration but rather than placing each vif on a Linux bridge instead an Open vSwitch switch is used. Open vSwitch supports more advance Software-defined Networking (SDN) features such as OpenFlow.

Setting up Open vSwitch networking

Set up openvswitch according to the Host Networking Configuration Examples.

If you want openvswitch to be the default, add the following line to your xl.conf file:

vif.default.script="vif-openvswitch"

If you have given the openvswitch bridge a name other than xenbr0, you will need to update that default as well:

vif.default.bridge="ovsbr0"

Alternately, you can specify the new script (and bridge, if necessary) in each config file by adding script=vif-openvswitch (and possibly bridge=ovsbr0) to the vifspec of individual vifs in config files. See xl-network-configuration.markdown for more information.

vif = [ 'script=vif-openvswitch,bridge=ovsbr0' ]

Attaching virtual devices to the appropriate switch

Xen 4.3 ships with a vif-openvswitch hotplug script which behaves similarly to the vif-bridge script, except that it attaches the VIF to an openvswitch switch (named via the VIF's bridge parameter).

In addition to naming the bridge the openvswitch hotplug script supports an extended syntax for the bridge optio which allows for VLAN tagging and trunking. That syntax is:

BRIDGE_NAME[.VLAN][:TRUNK:TRUNK]

To add a vif to VLAN 102 on bridge xenbr0:

vif = [ 'mac=00:16:3e:01:01:01,bridge=xenbr0.102' ]

To add a vif to bridge xenbr1 trunked and receiving traffic for VLAN 101 and 202:

vif = [ 'mac=00:16:3e:01:01:01,bridge=xenbr1:101:202' ]

Routing

In a routed network configuration a point-to-point link is created between the backend domain (typically domain 0) and each domU virtual network interface. Traffic is then routed between these point-to-point links and the outside world using the backend domain's network routing functionality.

For a general discussion of network routing see the wikipedia page on the subject.

Because routes are created dynamically as domains are created it is usually necessary for each guest network interface to have a known static IP address.

Setting up routing on the host

The recommended method for configuring networking is to use your distro supplied network configuration tools as described in Host Configuration/Networking.

Prior to Xen 4.1 when xend started up it would run the network-route script which perform the necessary configuration. However this mechanism was fragile and prone to breaking and therefore is no longer recommended.

The XL toolstack will never modify the network configuration and expects that the administrator will have configured the host networking appropriately. Check out this XL example.
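
As an illustration only (the exact settings depend on your distribution and addressing scheme), the forwarding and proxy-ARP behaviour that a routed configuration typically relies on can be enabled on a Linux host with:

 # sysctl -w net.ipv4.ip_forward=1
 # sysctl -w net.ipv4.conf.all.proxy_arp=1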

Associating routes with virtual devices

When domU starts up, the vif-route script is run for each virtual device vifDOMID.DEVID. This script sets up routing for that device (roughly as sketched below) by:

  • Adding an IP address to the device. This address is largely arbitrary, but is required so that the interface can participate in routing. By default domain 0's IP address is used.
  • Bringing up the device.
  • Adding a static host route for the interface's IP address, as specified in the domU configuration file, directing traffic to the vifDOMID.DEVID interface.
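
Roughly speaking, the effect is equivalent to commands along the following lines (a simplified sketch which assumes domain 0's address is 192.168.1.1, the guest's configured address is 192.168.1.12 and the backend device is vif1.0; the real script handles more cases):

 # ip addr add 192.168.1.1 dev vif1.0
 # ip link set vif1.0 up
 # ip route add 192.168.1.12 dev vif1.0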

The IP address associated with a virtual network interface should be specified in the domain configuration file using the ip configuration key.

   vif=[ 'ip=192.168.1.12' ]

or

   vif=[ 'mac=00:16:3e:01:01:01,ip=192.168.1.12' ]

or for multiple devices:

   vif=[ 'mac=00:16:3e:70:01:01,ip=192.168.13.15', 'mac=00:16:3e:70:02:01,ip=192.168.75.11' ]

More information on vif-route can be found here.

Network Address Translation

Network Address Translation or NAT is a form of routing which gives each guest VIF its own IP address on a private/internal network, often using RFC1918 addresses, and performs address translation at the router/firewall (e.g. domain 0) to connect the entire private network to the rest of the network via a single public IP address.

NAT is sometimes also called "IP masquerading".

Setting up NAT on the host

Setting up NAT is similar to configuring routing as described above, the most obvious difference being that NAT must also be enabled in the backend domain.

The recommended method for configuring networking is to use your distro supplied network configuration tools as described in Host Configuration/Networking.

Prior to Xen 4.1, when xend started up it would run the network-nat script, which performed the necessary configuration. However, this mechanism was fragile and prone to breaking, and is therefore no longer recommended.

The XL toolstack will never modify the network configuration and expects that the administrator will have configured the host networking appropriately. Check out this XL example.
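
As an illustration only (interface names and addresses are assumptions), enabling masquerading on a Linux domain 0 whose external interface is eth0, for guests using addresses in 10.0.0.0/24, might look like:

 # sysctl -w net.ipv4.ip_forward=1
 # iptables -t nat -A POSTROUTING -s 10.0.0.0/24 -o eth0 -j MASQUERADE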

Virtual Device Configuration

In a NAT'd configuration virtual devices are given IP addresses on a private network, typically an RFC1918 internal network. Guests may either be configured statically with addresses in the chosen network space, or you can choose to run a DHCP server within that network (perhaps on the host itself) to provide addresses to guests.

When domU starts up, the vif-nat script is run for each virtual device vifDOMID.DEVID. If the ISC DHCP server is installed then this script will attempt to dynamically reconfigure the DHCP service to serve up entries for the mac and ip address configuration keys in the guest configuration file. This is specific to the ISC DHCP server's configuration file syntax, so if you are using a different DHCP server, or simply want to manage the DHCP server yourself, then you should disable the vif-nat script (which seems like a good idea anyway, since automatic editing of the DHCP configuration is bound to be fragile).
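
If you do manage the DHCP server yourself, a static reservation matching a guest's mac and ip configuration keys might look like the following in an ISC dhcpd.conf (the host name and addresses here are purely illustrative):

 host guest1 {
     hardware ethernet 00:16:3e:70:01:01;
     fixed-address 10.0.0.15;
 }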

VLANs & Bonding

Multiple tagged VLANs can be supported by configuring 802.1Q VLAN support into the backend domain (typically domain 0).

Once configured according to Host Configuration/Networking, the VLAN devices can be treated like any other device and used for either routing or bridging.

Likewise, bonding (or even VLANs over bonding, etc.) can also be set up by following distribution specific documentation and treating the resulting device as normal.
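
As a rough, non-persistent sketch using iproute2 and bridge-utils (the interface names and VLAN id are assumptions; prefer your distribution's own configuration mechanism for anything permanent), a tagged VLAN device could be created and bridged like this:

 # ip link add link eth0 name eth0.100 type vlan id 100
 # ip link set eth0.100 up
 # brctl addbr xenbr0
 # brctl addif xenbr0 eth0.100
 # ip link set xenbr0 up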

Advanced configurations

By combining the above with the networking capabilities of the host OS it is possible to create more complex configurations to suit various different requirements.

Virtual network using a brouter.
This configuration uses a bridge, with no physical device attached, which is shared by the guests. The bridge has an IP address in domain 0, from which traffic is then routed (or even NATed) to the external network (hence "bridged router"). See "Xen3 and a Virtual Network" for a more complete description of this type of configuration.
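
A minimal sketch of such a brouter setup (the bridge name and addresses are assumptions) might be:

 # brctl addbr xenbr-internal
 # ip addr add 10.1.1.1/24 dev xenbr-internal
 # ip link set xenbr-internal up
 # sysctl -w net.ipv4.ip_forward=1

Guest vifs would then be attached to this bridge as usual, e.g. with vif=[ 'bridge=xenbr-internal,ip=10.1.1.12' ], and their traffic routed (or NATed) onwards by domain 0.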

ASCII Art Examples of Xen Networking Topologies

The following diagrams attempt to show some common networking topologies used with Xen. See Network Configuration Examples (Xen 4.1+) for examples of how to achieve these configurations using distribution provided tools.

Standard Bridged Networking Architecture

      LAN0                                                  LAN1
       |                                                     |
 +-----+-----------------------------------------------------+-----+
 |     |                                                     |     |
 | +---+-------------------------+ +-------------------------+---+ |
 | |   |                         | |                         |   | |
 | | eth0                        | |                        eth1 | |
 | |                             | |                             | |
 | | xenbr0       vif1.0  vif2.0 | |  vif1.1  vif2.1      xenbr1 | |
 | |                |       \    | |    /       |                | |
 | +---^------------+---------\--+ +--/---------+------------^---+ |
 |     |            |           \   /           |            |     |
 |     |     +------+-------------X-------------+------+     |     |
 |     |     |      |           /   \           |      |     |     |
 |     |     | +----+---------/--+ +--\---------+----+ |     |     |
 |     |     | |    |       /    | |    \       |    | |     |     |
 |     |     | |  eth0    eth1   | |   eth0   eth1   | |     |     |
 |     |     | |    |       |    | |    |       |    | |     |     |
 |   +-+-+   | |  +-+-+   +-+-+  | |  +-+-+   +-+-+  | |   +-+-+   |
 |   |   |   | |  |   |   |   |  | |  |   |   |   |  | |   |   |   |
 |  www ssh  | | www ssh ftp pop | | www ssh ftp pop | |  ftp pop  |
 |           | |                 | |                 | |           |
 |  Domain0  | |     Domain1     | |     Domain2     | |  Domain0  |
 +-----------+ +-----------------+ +-----------------+ +-----------+

Notes:

  • xenbrX has an active address, which is used by dom0 to communicate with the outside world.

Xen Networking with VLANs

      LAN0                                                  LAN1
       |                                                     |
 +-----+-----------------------------------------------------+-----+
 |     |                                                     |     |
 |   eth0                                                  eth1    |
 |     |                                                     |     |
 | +---+-------------------------+ +-------------------------+---+ |
 | |   |                         | |                         |   | |
 | | eth0.100                    | |                    eth1.200 | |
 | |                             | |                             | |
 | | xenbr0       vif1.0  vif2.0 | |  vif1.1  vif2.1      xenbr1 | |
 | |                |       \    | |    /       |                | |
 | +---^------------+---------\--+ +--/---------+------------^---+ |
 |     |            |           \   /           |            |     |
 |     |     +------+-------------X-------------+------+     |     |
 |     |     |      |           /   \           |      |     |     |
 |     |     | +----+---------/--+ +--\---------+----+ |     |     |
 |     |     | |    |       /    | |    \       |    | |     |     |
 |     |     | |  eth0    eth1   | |   eth0   eth1   | |     |     |
 |     |     | |    |       |    | |    |       |    | |     |     |
 |   +-+-+   | |  +-+-+   +-+-+  | |  +-+-+   +-+-+  | |   +-+-+   |
 |   |   |   | |  |   |   |   |  | |  |   |   |   |  | |   |   |   |
 |  www ssh  | | www ssh ftp pop | | www ssh ftp pop | |  ftp pop  |
 |           | |                 | |                 | |           |
 |  Domain0  | |     Domain1     | |     Domain2     | |  Domain0  |
 +-----------+ +-----------------+ +-----------------+ +-----------+

Notes:

  • With this configuration, DomUs are completely unaware that they are using a VLAN; all the work is done within the bridges in Dom0.
  • Dom0 is aware of the traffic within the VLAN because it has an active address on the xenbrX interfaces. To prevent this, do not give xenbrX an active address, but instead configure an extra interface for management.
  • There are two things that may need to be configured:
    • If your ethernet card does not natively support VLAN tags, you will have to set the maximum MTU to 1496 to make room for the tag, using the command:
 # ifconfig eth0 mtu 1496
    • With the DomUs bridged to VLAN interfaces, some optimizations need to be disabled or TCP and UDP connections will fail. This is done by disabling transmit checksum offloading:
 # ethtool -K eth0 tx off

Xen Networking with bonding

              PRT0 PRT1                       PRT2 PRT3
                |   |                           |   |
 +--------------+---+---------------------------+---+--------------+
 |              |   |                           |   |              |
 |            eth0 eth1                       eth2 eth3            |
 |              |   |                           |   |              |
 |              +-+-+                           +-+-+              |
 |                |                               |                |
 | +--------------+--------------+ +--------------+--------------+ |
 | |              |              | |              |              | |
 | |            bond0            | |            bond1            | |
 | |                             | |                             | |
 | | xenbr0       vif1.0  vif2.0 | |  vif1.1  vif2.1      xenbr1 | |
 | |                |       \    | |    /       |                | |
 | +---^------------+---------\--+ +--/---------+------------^---+ |
 |     |            |           \   /           |            |     |
 |     |     +------+-------------X-------------+------+     |     |
 |     |     |      |           /   \           |      |     |     |
 |     |     | +----+---------/--+ +--\---------+----+ |     |     |
 |     |     | |    |       /    | |    \       |    | |     |     |
 |     |     | |  eth0    eth1   | |   eth0   eth1   | |     |     |
 |     |     | |    |       |    | |    |       |    | |     |     |
 |   +-+-+   | |  +-+-+   +-+-+  | |  +-+-+   +-+-+  | |   +-+-+   |
 |   |   |   | |  |   |   |   |  | |  |   |   |   |  | |   |   |   |
 |  www ssh  | | www ssh ftp pop | | www ssh ftp pop | |  ftp pop  |
 |           | |                 | |                 | |           |
 |  Domain0  | |     Domain1     | |     Domain2     | |  Domain0  |
 +-----------+ +-----------------+ +-----------------+ +-----------+

Xen Networking with vlan on bonding

              PRT0 PRT1                       PRT2 PRT3
                |   |                           |   |
 +--------------+---+---------------------------+---+--------------+
 |              |   |                           |   |              |
 |            eth0 eth1                       eth2 eth3            |
 |              |   |                           |   |              |
 |              +-+-+                           +-+-+              |
 |                |                               |                |
 |              bond0                           bond1              |
 |                |                               |                |
 | +--------------+--------------+ +--------------+--------------+ |
 | |              |              | |              |              | |
 | |          bond0.100          | |          bond1.200          | |
 | |                             | |                             | |
 | | xenbr0       vif1.0  vif2.0 | |  vif1.1  vif2.1      xenbr1 | |
 | |                |       \    | |    /       |                | |
 | +---^------------+---------\--+ +--/---------+------------^---+ |
 |     |            |           \   /           |            |     |
 |     |     +------+-------------X-------------+------+     |     |
 |     |     |      |           /   \           |      |     |     |
 |     |     | +----+---------/--+ +--\---------+----+ |     |     |
 |     |     | |    |       /    | |    \       |    | |     |     |
 |     |     | |  eth0    eth1   | |   eth0   eth1   | |     |     |
 |     |     | |    |       |    | |    |       |    | |     |     |
 |   +-+-+   | |  +-+-+   +-+-+  | |  +-+-+   +-+-+  | |   +-+-+   |
 |   |   |   | |  |   |   |   |  | |  |   |   |   |  | |   |   |   |
 |  www ftp  | | www ftp ssh dns | | www ftp ssh dns | |  ssh dns  |
 |           | |                 | |                 | |           |
 |  Domain0  | |     Domain1     | |     Domain2     | |  Domain0  |
 +-----------+ +-----------------+ +-----------------+ +-----------+


Notes:

  • The connections at the top are switch ports - probably on 2 switches with an ISL
  • bond0 has eth0 and eth1 ; bond1 has eth2 and eth3
  • In the VMs eth0 maps to bond0.100 and eth1 maps to bond1.200
  • The protocols shown suggest a service VLAN (100) and a management VLAN (200)
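
For reference, a rough and non-persistent iproute2 sketch of the bond0 / bond0.100 part of this topology (assuming kernel bonding support; distribution network scripts are the recommended way to make such a setup permanent) could be:

 # ip link add bond0 type bond mode 802.3ad
 # ip link set eth0 down
 # ip link set eth0 master bond0
 # ip link set eth1 down
 # ip link set eth1 master bond0
 # ip link set bond0 up
 # ip link add link bond0 name bond0.100 type vlan id 100
 # ip link set bond0.100 up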