Xen FAQ Networking

Networking Issues

What is veth, vif or xenbr0?

You should read XenNetworking http://wiki.xensource.com/xenwiki/XenNetworking

Why can't I ssh into or ping a newly created domain?

In the default configuration we rely on the Linux bridge-utils in domain 0 to set up virtual networking. After you've created a new domain (e.g., domain 1) you should be able to run ifconfig in domain 0 and see an interface with a name like vif1.0; you should also be able to check that bridging is working by typing brctl show xen-br0. Finally, you can check the IP configuration in the new domain by logging into it via the console (xm console) and running standard tools such as ifconfig and route.
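
A minimal sketch of those checks, run from dom0 and assuming the new domain has ID 1 and the bridge is named xen-br0 (adjust both to your setup):

ifconfig vif1.0        # the backend interface for domain 1 should exist in dom0
brctl show xen-br0     # vif1.0 should be listed under the bridge
xm console 1           # log in and run ifconfig / route inside the guest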

Why does my new domain receive no network traffic until after it initiates an outgoing connection?

This is an issue that occurs under the following circumstances:

  • You do not specify the domain's MAC address, causing a random MAC address to be selected at domain creation time
  • The upstream router has a local ARP cache

When a domain is destroyed, the host system's ARP cache is purged of addresses for the defunct virtual NIC. If the domain is recreated it is likely to be allocated a different random MAC address. This is no problem for the host machine, but the external switch/router still knows the "old" MAC address. The switch/router requires an outbound packet to the outside world to implicitly eradicate the old ARP-cache entry. This is not really Xen's fault at all, just a property of the implementation of ARP resolution.

Fixes: Either manually declare the MAC address in your VM config file, or upgrade to the 2.0-testing tree where the problem is fixed. This fix will also find its way into the forthcoming 2.0.4 maintenance release.
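
For example, a sketch of pinning the MAC in the domain's config file; the address shown is a placeholder (the 00:16:3e prefix is the range reserved for Xen virtual NICs) and the bridge name is an assumption:

vif = [ 'mac=00:16:3e:00:00:01, bridge=xen-br0' ]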

How do I fix MTU issues resulting in "Received packet needs 8 bytes more headroom" in dmesg or /var/log/messages output?

See: http://lists.xensource.com/archives/html/xen-devel/2005-12/msg00226.html

You can work around the bug by reducing the MTU of eth0 in dom0, e.g. "ifconfig eth0 mtu 1400". Put this in your networking scripts (e.g. /etc/sysconfig/network-scripts/ifcfg-eth0 for RH/RPM systems, or /etc/network/interfaces for Debian/Deb systems). See http://www.redhat.com/docs/manuals/enterprise/RHEL-4-Manual/ref-guide/s1-networkscripts-interfaces.html for RH syntax, or 'man interfaces' for Debian.
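
A sketch of making the workaround persistent, carrying over the 1400-byte MTU from above (the address and netmask in the Debian stanza are placeholders; adjust to your actual configuration):

# Debian: /etc/network/interfaces
auto eth0
iface eth0 inet static
    address 192.0.2.10
    netmask 255.255.255.0
    mtu 1400

# RH/RPM: add this line to /etc/sysconfig/network-scripts/ifcfg-eth0
MTU=1400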

This bug is understood and a fix has been applied to the testing tree.

It only affects dom0 kernels built with the -xen config rather than -xen0.

Basically, the -xen kernel config turns on so much stuff that the area reserved for the max possible header length is too big. This causes a netfront slow-path to be exercised that copies the SKB. Unfortunately, this path hadn't been exercised before, and guess what, it was subtly broken for checksum-offloaded packets.

TCP and UDP checksum errors, ping but nothing else, ipsec tunnels don't form, DNAT translation doesn't work

Try running the following command in each domU:


ethtool -K eth0 tx off

This command disables TX checksumming.

To check for checksum problems, you can use tcpdump:


tcpdump -vv -n -i eth0

Read http://lists.xensource.com/archives/html/xen-users/2006-04/msg00032.html

This may or may not work.

Patch for network-bridge

Xen and Shorewall

There is a document about configuring Shorewall in Dom0 at http://www.shorewall.net/Xen.html

http://www1.shorewall.net/XenMyWay.html can also be useful.

Too many vethX and vif0.X

By default, 8 vethX and vif0.X interfaces are created. These interfaces are "cheap" but clutter the list of interfaces. If you want to allocate only the necessary number, pass the netloop.nloopbacks=NUMBER parameter on the dom0 kernel command line.
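
For example, a sketch of a GRUB (legacy) entry passing the parameter to the dom0 kernel; the file names and root device are placeholders, and a value of 0 creates no loopback interfaces at all:

title Xen
    root (hd0,0)
    kernel /boot/xen.gz
    module /boot/vmlinuz-2.6-xen ro root=/dev/sda1 netloop.nloopbacks=0
    module /boot/initrd-2.6-xen.img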

I can't use more than 3 network interfaces in domU

This is a limitation in Xen 3. Xen 3.1 supports 8 network interfaces.

Bridging

Which mechanism does Xen bridging use to handle packets coming from the various VMs and forward them to their destination?

None. Xen itself does not handle bridging; the dom0 OS does that. On a Linux dom0: http://www.linuxfoundation.org/en/Net:Bridge On an OpenSolaris dom0: http://opensolaris.org/os/project/crossbow/

IP Determination

I want to know the IP address of a running VM in Xen. Is there any way to find this without logging in to that VM?

Find the domU's MAC address first. This can be easy (if your domU config specifies a static MAC).

The easy way: to get the domU's MAC address you can look at the domU's config file (if you specified it there), or you can try running this:

xm network-list domU_name


If you get a line like this:


Idx BE  MAC Addr.          handle state evt-ch tx-/rx-ring-ref  BE-path
0   0   00:16:3E:F7:D6:E7  0      4     6      16238/16237      /local/domain/0/backend/vif/163/0


Then domU's MAC is 00:16:3E:F7:D6:E7

The hard way is to find out the MAC from the bridge. Since your bridge is called eth0, you can try:

  • xm list, and note the domain ID (the number)

  • brctl showstp eth0 should show which interface is identified as which "port". For example, if your domU has ID 163, look for the line that has "vif163.0" or "tap163.0". If the line looks like this


vif163.0 (11)

then that vif is identified as port 11 on the bridge.

  • brctl showmacs eth0, and look for the port number found above. If you get this line

11  00:16:3e:f7:d6:e7   no    0.96

then on port 11 (where your domU interface is) there is the MAC address 00:16:3e:f7:d6:e7.

Now that you have the domU's MAC, try snooping the bridge for that MAC. For example:


# tcpdump -n -i eth0 ether src 00:16:3e:f7:d6:e7
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 96 bytes
15:54:56.419482 IP 10.0.0.10 > 10.0.0.1: ICMP echo reply, id 5443, seq 1, length 64
15:54:57.422349 IP 10.0.0.10 > 10.0.0.1: ICMP echo reply, id 5443, seq 2, length 64


Then you know that domU has IP address 10.0.0.10.

NAT

I managed to configure NAT on dom0 but it does not work properly. Outgoing traffic from the domU is seen with the original domU IP address instead of the dom0 IP address, and the return traffic can't get back to the domU.

I figured out MASQUERADING was not set.

The following rule needs to be set:


iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
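
Note that NAT also needs IP forwarding enabled in dom0; a quick check and (non-persistent) fix:

cat /proc/sys/net/ipv4/ip_forward        # should print 1
echo 1 > /proc/sys/net/ipv4/ip_forward   # enable it for the running system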


SSL/VPN

Why can't I use openvpn with a xen guest? I can't load the tun module

From OpenVPN's perspective, the requirements for running OpenVPN in a Xen domU are the same as for OpenVPN on native Linux. If you can't load the tun module, then you need a kernel that supports it.

The easiest way is to use a distro that supports it. For example, I'm using RHEL/CentOS 5.3 domUs, loaded with pygrub, and they can run OpenVPN just fine.

Another alternative is to compile your own kernel with tun/tap support.
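
A quick way to verify from inside the domU whether the running kernel has tun support (the /boot/config-* path assumes your distro installs the kernel config there):

modprobe tun && ls -l /dev/net/tun           # module loads and the device node exists
grep CONFIG_TUN /boot/config-$(uname -r)     # "=y" or "=m" means tun support was built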

I want to ask how to mount one storage device in 2 guests. When I create the VM handles everything is fine: I can create one VDI and VBDs for every guest. When I start the first machine everything is OK, but when I try to start the second one it says:

ERROR: 2 INTERNAL_ERROR Device 2051 (vbd) could not be connected.
Device /dev/mapper/test_vg-test64_454 is mounted in a guest domain, and so cannot be mounted now.


Any ideas how I can share one device between two or more virtual machines? I don't want a network solution like NFS, iSCSI, etc., but instead want to use OCFS2. How can I set the mode to w! ?

First of all, you DO realize that sharing a block device without some kind of cluster file system could lead to data corruption? If you want to share it anyway, you can try changing the mode to "r" (for read only) or "w!" (to force a read-write multiple mount).

In your domU.cfg:

 
'phy:/dev/data/bla-disk,sda1,w' => 'phy:/dev/data/bla-disk,sda1,r'
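
And if you do put a cluster filesystem such as OCFS2 on the device and want shared read-write access instead, a sketch of the "w!" form using the same example names:

'phy:/dev/data/bla-disk,sda1,w!'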


I'm looking for a way to monitor the network activity of processes in a guest OS. I want to get a list of guest OS processes that open TCP connections to other machines (like the "lsof" command does).

If you're thinking about doing this from dom0, that's not possible. You need something that runs on the domU for that, possibly by using snmpd and extending it to run "netstat -anp --tcp". Other hosts (including dom0) can then collect the information using SNMP.
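
For example, a sketch of such an snmpd extension on the domU using net-snmp's "extend" directive (the name "tcpconns" and the netstat path are just illustrative):

# /etc/snmp/snmpd.conf on the domU
extend tcpconns /bin/netstat -anp --tcp

The command's output should then appear under NET-SNMP-EXTEND-MIB and can be collected remotely with snmpwalk.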

Also, have a look at Versiera, it provides what you are looking for including, user IDs, inbound/outbound communications, IPv4, IPV6, etc. There are many more capabilities. Versiera is not open-source, but the Internet self-manage service is free.

I am attempting to gather stats on usage of the "metal", by which I mean the physical host's hardware. I would like to know the CPU, IO, and network stats for the hardware.

All domU I/O passes through dom0; there you can measure all you want.

  • For disk I/O: if you use phy: devices, you can use iostat to see the usage of each device. If you use file-based backends, it would be easier to check the userspace daemons; maybe iotop would help. In any case, if aggregate usage is all you need, just measure the disk usage seen in dom0 (see the sketch below).
  • For network: measuring at peth0 gives you the aggregate usage. If you need stats for each domU, check the respective vif (or tap) devices.
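
A minimal sketch of both measurements from dom0; the device and interface names (/dev/sdb, peth0, vif163.0) are assumptions to adjust for your setup:

iostat -x 5 /dev/sdb               # per-device disk stats for a phy:-backed domU disk
cat /proc/net/dev | grep peth0     # aggregate traffic on the physical NIC behind the bridge
cat /proc/net/dev | grep vif163.0  # per-domU traffic on its backend interface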

I have some trouble finding the best solution to my networking requirements. I want to have the following things:

  • dom0: 2 physical network devices
      • 1 eth with a public IP (static)
      • 1 eth with a private IP (static)
  • domU: 1 eth with a private IP
  • I also want an OpenVPN solution to let people outside the private network have access to it.
  • A DHCP server is required so that domUs get their IP from it.

I have Debian Lenny and Xen 3.2 installed and working. Currently OpenVPN and DHCP are on the dom0. All is fine *except* that the domUs don't have access to the internet (this is my main problem). My current config uses network-bridge netdev=eth1 (eth1 has a static private IP).

It is perhaps better to have the DHCP and OpenVPN servers in a domU; feel free to suggest what you think is the best choice (and the config that goes with it) :)

When designing such setups with bridged networking, I often find it easier to think of dom0 as a switch or router, and of a domU like any other physical server on your network.

In your setup you're making dom0 act as a router/firewall. Your problem is probably that you haven't set up IP forwarding and NAT on dom0 to allow the domUs internet access.
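
A minimal sketch of both steps on dom0, assuming eth0 is the public interface (the interface name is taken from the setup described above; make it persistent via your distro's sysctl and firewall configuration):

echo 1 > /proc/sys/net/ipv4/ip_forward                 # let dom0 route between the private bridge and eth0
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE   # NAT the domUs' private addresses out of eth0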

Note that (if you want) you could also have dom0 act like a switch. In that scenario you'd need another domU, with two network interfaces connected to both dom0's eth0 and eth1 bridges, acting as the router/firewall.
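
A sketch of the vif line for such a router domU, assuming network-bridge has been run for both NICs and the resulting bridges kept the names eth0 and eth1 (adjust to your actual bridge names):

vif = [ 'bridge=eth0', 'bridge=eth1' ]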

I have been trying to get an HVM domU running and able to connect to a VLAN. I am starting to get the feeling that at least the hardware emulations I have tried do not support VLANs. Also, all the things I have found online would have me creating the VLANs inside dom0 and pushing them to the domUs as regular interfaces

You could always have a trunk port in your dom0 and create bridges for each VLAN for Xen. You can even script it so it runs at boot time. If you have the VLAN trunk set up, you can create bridges as follows.

For this example, my trunk interface is eth0 and the VLAN I am adding is 2.


# vconfig add eth0 2
# brctl addbr xenbr2
# brctl addif xenbr2 eth0.2
# ifconfig eth0.2 up
# ifconfig xenbr2 up

Now all you would add in the domU configuration file is:

vif=['bridge=xenbr2']

And you would be on VLAN 2. Otherwise I'm pretty sure you would have to pass through the network card to get VLAN access. You can also script this and give it a space-separated list of VLANs to loop through; a sketch of that follows, though I will leave the details up to you.
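
A minimal sketch of such a script; the trunk interface name and the VLAN list are assumptions to edit for your environment:

#!/bin/sh
# create a VLAN sub-interface and a Xen bridge for each VLAN on the trunk
TRUNK=eth0           # assumed trunk interface
VLANS="2 3 10"       # assumed space-separated VLAN list

for VLAN in $VLANS; do
    vconfig add $TRUNK $VLAN
    brctl addbr xenbr$VLAN
    brctl addif xenbr$VLAN $TRUNK.$VLAN
    ifconfig $TRUNK.$VLAN up
    ifconfig xenbr$VLAN up
done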