[arch-general] tap device
Hi,

I am setting up a network for a container. I have a bridge br0 with an eth adapter "enp7s0" and a tap device "tap0".

*******************************
/etc/netctl/bridge
Description="Bridge connection"
Interface=br0
Connection=bridge
BindsToInterfaces=(enp7s0 tap0)
IP=static
Address='192.168.1.87/24'
Gateway='192.168.1.254'
DNS='192.168.1.254'

/etc/netctl/ethernet
Description='ethernet connection'
Interface=enp7s0
Connection=ethernet
IP=no
IP6=no

/etc/netctl/tuntap
Description='tuntap connection'
Interface=tap0
Connection=tuntap
Mode='tap'
User='nobody'
Group='nobody'
******************************

gabx@hortensia ➤➤ ~ % netctl list
* ethernet
* tuntap
* bridge

gabx@hortensia ➤➤ ~ % ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: enp7s0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master br0 state UP group default qlen 1000
    link/ether 14:da:e9:b5:7a:88 brd ff:ff:ff:ff:ff:ff
3: tap0: <NO-CARRIER,BROADCAST,MULTICAST,PROMISC,UP> mtu 1500 qdisc pfifo_fast master br0 state DOWN group default qlen 500
    link/ether 6a:1d:c3:4b:91:4d brd ff:ff:ff:ff:ff:ff
4: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether 14:da:e9:b5:7a:88 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.87/24 brd 192.168.1.255 scope global br0
       valid_lft forever preferred_lft forever
    inet6 fe80::16da:e9ff:feb5:7a88/64 scope link
       valid_lft forever preferred_lft forever

gabx@hortensia ➤➤ ~ % lsmod | grep tun
58:tun 19783 1
*************************************

I do not understand why the tap0 profile is listed as DOWN. (# ip link set dev tap0 up does nothing more.) I have a custom kernel; do I need to add anything? Is there anything wrong in my netctl profiles? No DHCP is enabled. TY for help.
3: tap0: <NO-CARRIER,BROADCAST,MULTICAST,PROMISC,UP> mtu 1500 qdisc
...
I do not understand why the tap0 profile is listed as DOWN. (# ip link set dev tap0 up does nothing more)
It's not down; it's UP, only NO-CARRIER. But for a tap device you also need a user-space program to handle it. What do you want to accomplish? Do you perhaps need a veth device? -- damjan
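To make that concrete: a tap interface only gains a carrier while some process holds it open. One example of such a user-space program is a QEMU guest attached to tap0. The command below is purely an illustration (the disk image path and NIC model are placeholders, nothing from this thread):

# while this runs, a process is attached to tap0 and it leaves the NO-CARRIER state
qemu-system-x86_64 -m 1024 \
    -drive file=/path/to/guest.img,format=raw \
    -netdev tap,id=net0,ifname=tap0,script=no,downscript=no \
    -device virtio-net-pci,netdev=net0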
On 10.03.14 at 23:30, Damjan Georgievski wrote:
3: tap0: <NO-CARRIER,BROADCAST,MULTICAST,PROMISC,UP> mtu 1500 qdisc
...
I do not understand why the tap0 profile is listed as DOWN. (# ip link set dev tap0 up does nothing more)
it's not down. it's UP, only NO-CARRIER. but for a tap device you also need a user-space program to handle it. what do you want to accomplish? do you perhaps need a veth device?
-- damjan
Actually, there was a "state DOWN" in the full command output, which is what matters. -- jlk
what do you want to accomplish? do you perhaps need a veth device?
I want to create a network for a linux container managed by systemd-nspawn. As Jakub mentioned, the interface is DOWN, thus NO-CARRIER.
On Monday 10 Mar 2014 18:57:38 arnaud gaboury wrote:
Hi,
I am setting up a network for a container.
I have a bridge br0 with a eth adapter "enp7s0" and a tap device "tap0"
*******************************
/etc/netctl/bridge
Description="Bridge connection"
Interface=br0
Connection=bridge
BindsToInterfaces=(enp7s0 tap0)
IP=static
Address='192.168.1.87/24'
Gateway='192.168.1.254'
DNS='192.168.1.254'
/etc/netctl/ethernet
Description='ethernet connection'
Interface=enp7s0
Connection=ethernet
IP=no
IP6=no
/etc/netctl/tuntap
Description='tuntap connection'
Interface=tap0
Connection=tuntap
Mode='tap'
User='nobody'
Group='nobody'
Hi Arnaud, I don't think you need the /etc/netctl/ethernet profile at all. The enp7s0 interface is being absorbed into the bridge, and so should not be considered on its own any more. Otherwise, this looks OK. Are you seeing any connectivity problems? Paul
Hi Arnaud, I don't think you need the /etc/netctl/ethernet profile at all. The enp7s0 interface is being absorbed into the bridge, and so should not be considered on its own any more. Otherwise, this looks OK. Are you seeing any connectivity problems?
Thank you. BTW, I was thinking of using the tap0 interface for the container with systemd-networkd when I realized systemd.netdev does not support tap interfaces! So:

1- I do not use systemd-networkd and I configure the network inside the container using tap0 (I spent so much time understanding networkd that I will not favor this way).

2- I use networkd with a bridge br0 bound to "enp7s0" and "my_container_interface". I am still not sure what kind of interface I shall use for the container. I know "vb-container_name" works as the interface (this is a virtual bridge), when enabling 80-container-host0.network, but I need to modify systemd-nspawn@.service and append --network-bridge=br0 to the ExecStart line. This sounds like a dirty hack. In this case, I will have host0 on the container, and br0 on the host. Furthermore, the virtual bridge notion sounds weird to me: a bridge is already a virtual interface, so with the virtual bridge it sounds like we are at virtual layer 2!!
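For reference, the ExecStart change does not have to be made by editing the shipped unit file; a drop-in override does the same thing. A sketch, assuming a container named dahlia and a directory of your choosing (the full command line below is illustrative, not copied from the real systemd-nspawn@.service):

/etc/systemd/system/systemd-nspawn@.service.d/bridge.conf
[Service]
# reset the shipped command line, then supply our own with --network-bridge=
ExecStart=
ExecStart=/usr/bin/systemd-nspawn --quiet --keep-unit --boot --machine=%i --directory=/var/lib/container/%i --network-bridge=br0

Then: systemctl daemon-reload && systemctl restart systemd-nspawn@dahlia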
On Tuesday 11 Mar 2014 11:06:32 arnaud gaboury wrote:
Hi Arnaud, I don't think you need the /etc/netctl/ethernet profile at all. The enp7s0 interface is being absorbed into the bridge, and so should not be considered on its own any more. Otherwise, this looks OK. Are you seeing any connectivity problems?

Thank you. BTW, I was thinking of using the tap0 interface for the container with systemd-networkd when I realized systemd.netdev does not support tap interfaces! So:

1- I do not use systemd-networkd and I configure the network inside the container using tap0 (I spent so much time understanding networkd that I will not favor this way).

2- I use networkd with a bridge br0 bound to "enp7s0" and "my_container_interface". I am still not sure what kind of interface I shall use for the container. I know "vb-container_name" works as the interface (this is a virtual bridge), when enabling 80-container-host0.network, but I need to modify systemd-nspawn@.service and append --network-bridge=br0 to the ExecStart line. This sounds like a dirty hack. In this case, I will have host0 on the container, and br0 on the host. Furthermore, the virtual bridge notion sounds weird to me: a bridge is already a virtual interface, so with the virtual bridge it sounds like we are at virtual layer 2!!
systemd-networkd is still really new. If you're having difficulty with it, I recommend simply using netctl, which is a bit more mature.

As for the rest, I'm afraid I don't know. Before any manual configuration, did you see any network interfaces in the container? I'm losing track of what you've created and what systemd creates for you when you spawn the container. (I think you're using systemd-nspawn, right?)

Paul
systemd-networkd is still really new. If you're having difficulty with it, I recommend simply using netctl, which is a bit more mature.
I do for part of the setup on the host. I am trying to do zero network config on the container, thus the use of networkd. But I can use netctl inside the container, you are right.

As for the rest, I'm afraid I don't know. Before any manual configuration, did you see any network interfaces in the container? I'm losing track of what you've created and what systemd creates for you when you spawn the container. (I think you're using systemd-nspawn, right?)
yes
On Tuesday 11 Mar 2014 13:06:23 arnaud gaboury wrote:
systemd-networkd is still really new. If you're having difficulty with it, I recommend simply using netctl, which is a bit more mature.
I do for part of the setup on the host. I am trying to do zero network config on the container, thus the use of networkd. But I can use netctl inside the container, you are right.

As for the rest, I'm afraid I don't know. Before any manual configuration, did you see any network interfaces in the container? I'm losing track of what you've created and what systemd creates for you when you spawn the container. (I think you're using systemd-nspawn, right?)
yes
I'm not sure what you mean by zero network config. Do you mean you want to set up the tap interface entirely on the host, along with a static IP, and then the container simply attaches to that tap device when it boots? I guess that's possible. Can I ask what you're trying to achieve in terms of this network setup? Are you just after straight-forward internet connectivity, or are you planning to filter packets going in and out of the container? For the simple usecase, I think you can simply use the normal host interface from inside the container, though you can't modify it. Paul
I'm not sure what you mean by zero network config. Do you mean you want to set up the tap interface entirely on the host, along with a static IP, and then the container simply attaches to that tap device when it boots? I guess that's possible.
Yes. But I now have a static IP on the container, using the bridge, not tap.
Can I ask what you're trying to achieve in terms of this network setup? Are you just after straight-forward internet connectivity, or are you planning to filter packets going in and out of the container? For the simple usecase, I think you can simply use the normal host interface from inside the container, though you can't modify it.
The container is dedicated to be a test server for months before I set up a production server (not on my machine this time !). A lot of web services will be hosted on the container. The container is a way to test my settings for web apps and coding
On Tuesday 11 Mar 2014 15:45:19 arnaud gaboury wrote:
The container is dedicated to be a test server for months before I set up a production server (not on my machine this time !). A lot of web services will be hosted on the container. The container is a way to test my settings for web apps and coding
OK, so you really just need basic internet connectivity; you don't have any special filtering requirements. When you boot the container, can it see the enp7s0 interface? That is, is the enp7s0 interface visible both from the host and from the container? Paul
OK, so you really just need basic internet connectivity; you don't have any special filtering requirements. When you boot the container, can it see the enp7s0 interface? That is, is the enp7s0 interface visible both from the host and from the container?
No. In the container, I just see host0, which is expected.
On Tuesday 11 Mar 2014 18:03:20 arnaud gaboury wrote:
OK, so you really just need basic internet connectivity; you don't have any special filtering requirements. When you boot the container, can it see the enp7s0 interface? That is, is the enp7s0 interface visible both from the host and from the container?
No. In the container, I just see host0, which is expected.
So you're using --network-veth when you launch the container? As far as I can tell, you don't need a tap interface at all; that will be handled automatically by systemd.

I think all you need to do is create the bridge br0, binding the physical interface enp7s0 on its own (a bridge containing only the host's adaptor). Then, you launch the container with --network-bridge=br0. That will automatically add the container's interface to the bridge.

I'm not sure if the container will be aware of the bridge's IP address at this point. I'd want to check with the "ip a" command to see if it's listening on the same IP address on host0, and check to see if it has connectivity before assigning an IP to the host0 interface inside the container.

Paul
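A sketch of what that looks like in practice; the container name and directory (dahlia under /var/lib/container) are placeholders of my own, only the options themselves are real systemd-nspawn flags:

# on the host: start the container attached to the existing bridge br0
systemd-nspawn --boot --directory=/var/lib/container/dahlia --network-bridge=br0

# inside the container: see what host0 looks like and test connectivity
ip a show host0
ping -c 3 192.168.1.254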
On 12-03-2014 10:43, Paul Gideon Dann wrote:
On Tuesday 11 Mar 2014 18:03:20 arnaud gaboury wrote:
OK, so you really just need basic internet connectivity; you don't have any special filtering requirements. When you boot the container, can it see the enp7s0 interface? That is, is the enp7s0 interface visible both from the host and from the container?
No. In the container, I just see host0, which is expected.
So you're using --network-veth when you launch the container? As far as I can tell, you don't need a tap interface at all; that will be handled automatically by systemd.
I think all you need to do is create the bridge br0, binding the physical interface enp7s0 on its own (a bridge containing only the host's adaptor). Then, you launch the container with --network-bridge=br0. That will automatically add the container's interface to the bridge.
I'm not sure if the container will be aware of the bridge's IP address at this point. I'd want to check with the "ip a" command to see if it's listening on the same IP address on host0 and check to see if it has connectivity before assigning an IP to the host0 interface inside the container.
Paul
I have found that you will need to bring the virtual interface up (the one handled by systemd-nspawn). If you are running systemd-networkd on the host then you can do that easily with a network file. I've called mine vb-veth.network and it contains:

[Match]
Name=vb-*

Right now on the host side I have everything being handled only by systemd-{networkd,nspawn}, I don't add any physical interfaces to the bridge but I suppose that would also be possible to do with systemd-networkd.

-- Mauro Santos
I have found that you will need to bring the virtual interface up (the one handled by systemd-nspawn).
Right. I am left after I boot my machine (the host) with this:

4: vb-dahlia: <BROADCAST,MULTICAST> mtu 1500 qdisc noop master br0 state DOWN group default qlen 1000
    link/ether 62:a2:6b:f4:0f:87 brd ff:ff:ff:ff:ff:ff

I have to manually # ip link set dev vb-dahlia up to get the network working on the container:

2: host0: <BROADCAST,MULTICAST,ALLMULTI,NOTRAILERS,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 5a:51:a2:a2:b5:fb brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.94/24 brd 192.168.1.255 scope global host0
       valid_lft forever preferred_lft forever
    inet6 fe80::5851:a2ff:fea2:b5fb/64 scope link
       valid_lft forever preferred_lft forever

If you are running systemd-networkd on the host then you can do that easily with a network file. I've called mine vb-veth.network and it contains:

[Match]
Name=vb-*
I will try your hack asap
Right now on the host side I have everything being handled only by systemd-{networkd,nspawn}, I don't add any physical interfaces to the bridge but I suppose that would also be possible to do with systemd-networkd.

Ah? I have two netctl profiles, one for my physical eth (enp7s0) with no IP, one for the bridge (br0) with enp7s0 bound to it. So you mean you don't have any bridge profile managed by netctl?
If you are running systemd-networkd on
the host then you can do that easily with a network file. I've called mine vb-veth.network and it contains:
[Match]
Name=vb-*
Very good indeed. /etc/systemd/network/80-container-host0.network [Match] Name=vb-dahlia [Network] DHCP=no DNS=192.168.1.254 [Address] Address=192.168.1.94/24 [Route] Gateway=192.168.1.254 and now the virtual bridge is UP right after boot. Can you post your configuration for bridge ? TY for your help.
On Wed, Mar 12, 2014 at 3:00 PM, arnaud gaboury <arnaud.gaboury@gmail.com> wrote:
If you are running systemd-networkd on
the host then you can do that easily with a network file. I've called mine vb-veth.network and it contains:
[Match]
Name=vb-*

Very good indeed.

/etc/systemd/network/80-container-host0.network
[Match]
Name=vb-dahlia

[Network]
DHCP=no
DNS=192.168.1.254

[Address]
Address=192.168.1.94/24

[Route]
Gateway=192.168.1.254
and now the virtual bridge is UP right after boot.

..... but the network didn't work on the container!!! I just realized this a few moments ago, trying to upgrade the container. In fact, I shall keep my original 80-container-host0.network with:
[Match]
Virtualization=container
Host=host0
.....

Then add a new .network file with your "hack" to get the interface UP at boot. This sounds to me like a potential bug in fact, as your vb-veth.network has no reason to exist. But as I am far from understanding every part of networkd, I will refrain from filing a bug report. Maybe you could post on the systemd-devel mailing list about this interface being down at boot?
On 12-03-2014 18:40, arnaud gaboury wrote:
This sounds to me like a potential bug in fact, as your vb-veth.network has no reason to exist. But as I am far from understanding every part of networkd, I will refrain from filing a bug report.
Maybe you could post on the systemd-devel mailing list about this interface being down at boot?
If there is a problem then it should be with systemd-nspawn since it is the one responsible for creating the tap adapter and adding it to the bridge. I guess someone will have to ask about it, either in the mailing list or irc, I haven't done so before because systemd-{nspawn,networkd} have lots of new functionality and I'm not sure I understand them all. -- Mauro Santos
I guess someone will have to ask about it, either in the mailing list or irc, I haven't done so before because systemd-{nspawn,networkd} have lots of new functionality and I'm not sure I understand them all.
After I reported this issue (interfaces not being UP) on the systemd-devel mailing list, Tom Gundersen made a commit yesterday night to change this behavior. So if you run systemd-git, just upgrade and the interfaces will now be UP, and you will be able to get rid of your hack.
On Monday 17 Mar 2014 09:55:11 arnaud gaboury wrote:
I guess someone will have to ask about it, either in the mailing list or irc, I haven't done so before because systemd-{nspawn,networkd} have lots of new functionality and I'm not sure I understand them all.
After I reported this issue (interfaces not being UP) on the systemd-devel mailing list, Tom Gundersen made a commit yesterday night to change this behavior. So if you run systemd-git, just upgrade and the interfaces will now be UP, and you will be able to get rid of your hack.
I don't get this: it seems normal to me that the interface would be down until it's configured by the container, pretty much like on a normal machine. The only situation in which you can expect an interface to be up already is in a network-booting situation, in which the initramfs configures the interface. In a virtualised situation, this is like the host configuring the container's interface for it. Anyway, no big deal either way for me. Paul
On 17-03-2014 10:01, Paul Gideon Dann wrote:
I don't get this: it seems normal to me that the interface would be down until it's configured by the container, pretty much like on a normal machine. The only situation in which you can expect an interface to be up already is in a network-booting situation, in which the initramfs configures the interface. In a virtualised situation, this is like the host configuring the container's interface for it.
Anyway, no big deal either way for me.
Paul
I suspect we might have been talking about 2 different things all along. What I and Arnaud have been talking about is the tap interface on the host, not the interface inside the container, which of course should be properly configured by the OS inside the container. The OS inside the container has no way to bring the tap interface on the host up so there would be no network connectivity even though the interface on the container side was properly configured and brought up. -- Mauro Santos
On Monday 17 Mar 2014 12:00:10 Mauro Santos wrote:
I suspect we might have been talking about 2 different things all along. What I and Arnaud have been talking about is the tap interface on the host, not the interface inside the container, which of course should be properly configured by the OS inside the container.
The OS inside the container has no way to bring the tap interface on the host up so there would be no network connectivity even though the interface on the container side was properly configured and brought up.
That would indeed make sense, but when I asked about this:

On Wednesday 12 Mar 2014 17:32:27 arnaud gaboury wrote:
In that case, I'm curious to find out if you find that setting the host0 interface up in the container also brings the vb-dahlia interface up on the host?
On container :
gab@dahlia ➤➤ ~ % ip addr
2: host0: <BROADCAST,ALLMULTI,AUTOMEDIA,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 56:84:f7:39:43:c7 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.94/24 brd 192.168.1.255 scope global host0
       valid_lft forever preferred_lft forever
    inet6 fe80::5484:f7ff:fe39:43c7/64 scope link
       valid_lft forever preferred_lft forever

gab@dahlia ➤➤ ~ # ip link set dev host0 down

gab@dahlia ➤➤ ~ % ip addr
2: host0: <BROADCAST,ALLMULTI,AUTOMEDIA> mtu 1500 qdisc pfifo_fast state DOWN group default qlen 1000
    link/ether 56:84:f7:39:43:c7 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.94/24 brd 192.168.1.255 scope global host0
       valid_lft forever preferred_lft forever

Now looking on host:

gabx@hortensia ➤➤ ~ % ip addr
4: vb-dahlia: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc pfifo_fast master br0 state DOWN group default qlen 1000
    link/ether 8e:a4:c3:8c:cc:89 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.94/24 brd 192.168.1.255 scope global vb-dahlia
       valid_lft forever preferred_lft forever
    inet6 fe80::8ca4:c3ff:fe8c:cc89/64 scope link
       valid_lft forever preferred_lft forever
It was UP before I brought vb down. So you have your answer : yes.
...but maybe it doesn't work the other way round, bringing the interface up after boot? Anyway, this was just a matter of curiosity for me, really. I'm glad you guys got things working the way you wanted.

Paul
I don't get this: it seems normal to me that the interface would be down until it's configured by the container, pretty much like on a normal machine. The only situation in which you can expect an interface to be up already is in a network-booting situation, in which the initramfs configures the interface. In a virtualised situation, this is like the host configuring the container's interface for it.
Anyway, no big deal either way for me.
Paul
You are right. Then, following your previous comments:
The host has configuration that creates a bridge br0, containing only the physical interface enp7s0. The bridge should be given the IP address that you want the host to have.
When the container is started, using --network-bridge=br0, the host automatically creates the vb-dahlia interface and adds it to the br0 bridge. No additional configuration is necessary on the host.
The container should configure its network exactly as for a normal, non-virtualised system. It can use DHCP if necessary, in which case it will receive an IP on the same network as the host. Conceptually, they are connected to the same network via a hub/switch.
I think it would be best-practice to set the network configuration inside the container, if possible.
I decided, when writing the wiki, to set up the container network static IP (example given) INSIDE the container. This approach solves the interfaces-being-DOWN issue for any network profile and in fact sounds more like best practice. TY for your comments.
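A minimal sketch of one way to do that with systemd-networkd inside the container, reusing the addresses mentioned earlier in this thread; the file name is arbitrary, and whether the container runs networkd rather than netctl is an assumption on my part:

/etc/systemd/network/host0-static.network   (inside the container)
[Match]
Name=host0

[Network]
Address=192.168.1.94/24
Gateway=192.168.1.254
DNS=192.168.1.254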
On Thursday 20 Mar 2014 11:07:55 arnaud gaboury wrote:
I decided, when writing the wiki, to set up the container network static IP (example given) INSIDE the container. This approach solves the interfaces-being-DOWN issue for any network profile and in fact sounds more like best practice. TY for your comments.
Ah, excellent. That's good news :) And thank you for doing a wiki write-up. I'm sure that'll help lots of people in future. Paul
On 17-03-2014 08:55, arnaud gaboury wrote:
After I reported this issue (interfaces not being UP) on the systemd-devel mailing list, Tom Gundersen made a commit yesterday night to change this behavior. So if you run systemd-git, just upgrade and the interfaces will now be UP, and you will be able to get rid of your hack.
Great :) Thanks for bringing this issue to the attention of the devs. -- Mauro Santos
On Wednesday 12 Mar 2014 14:48:38 arnaud gaboury wrote:
Right. I am left after I boot my machine (the host) with this :
4: vb-dahlia: <BROADCAST,MULTICAST> mtu 1500 qdisc noop master br0 state DOWN group default qlen 1000
    link/ether 62:a2:6b:f4:0f:87 brd ff:ff:ff:ff:ff:ff
I have to manually # ip link set dev vb-dahlia up
to get the network working on the container :
2: host0: <BROADCAST,MULTICAST,ALLMULTI,NOTRAILERS,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 5a:51:a2:a2:b5:fb brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.94/24 brd 192.168.1.255 scope global host0
       valid_lft forever preferred_lft forever
    inet6 fe80::5851:a2ff:fea2:b5fb/64 scope link
       valid_lft forever preferred_lft forever
Does it work if you do "ip link set host0 up" in the container? I think that would be a better solution.
Ah? I have two netctl profiles, one for my physical eth (enp7s0) with no IP, one for the bridge (br0) with enp7s0 bound to it. So you mean you don't have any bridge profile managed by netctl?
As far as the host is concerned, I think you should consider the bridge as if it were the only interface. The enp7s0 interface is part of the bridge and should not be configured in addition to being joined to the bridge. So yeah, I would get rid of the physical enp7s0 configuration, and leave only the bridge configuration on the host. Paul
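Following that advice, the host would be left with just the bridge profile from the start of this thread, minus tap0; this is simply the original profile trimmed down, shown here as a sketch rather than a tested configuration:

/etc/netctl/bridge
Description="Bridge connection"
Interface=br0
Connection=bridge
BindsToInterfaces=(enp7s0)
IP=static
Address='192.168.1.87/24'
Gateway='192.168.1.254'
DNS='192.168.1.254'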
On 12-03-2014 13:48, arnaud gaboury wrote:
Right now on the host side I have everything being handled only by systemd-{networkd,nspawn}, I don't add any physical interfaces to the bridge but I suppose that would also be possible to do with systemd-networkd.

Ah? I have two netctl profiles, one for my physical eth (enp7s0) with no IP, one for the bridge (br0) with enp7s0 bound to it. So you mean you don't have any bridge profile managed by netctl?
No netctl here :)

I have systemd-networkd enabled on boot and 3 files in /etc/systemd/network:

cat brkvm.netdev
[NetDev]
Name=brkvm
Kind=bridge

cat brkvm.network
[Match]
Name=brkvm

[Network]
Description=Bridge for use with virtual machines and containers
Address=192.168.56.1/24

cat vb-veth.network
[Match]
Name=vb-*

This last one is sort of a hack to bring the network up as it shows up. I suppose systemd-nspawn should do it by itself; this might be a bug, unless there is a good reason not to bring the network up automatically.

Inside the container I do manual setup of the network address since I'm not actually booting it.

Mind you that you may have to do systemctl daemon-reload (not really sure if this one is needed) and restart systemd-networkd for any changes to take effect.

-- Mauro Santos
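For what it's worth, applying changes like these usually comes down to a couple of commands on the host; shown here as a sketch, and the daemon-reload step mirrors Mauro's caveat that it may not be strictly necessary for .network files:

systemctl daemon-reload
systemctl restart systemd-networkd
ip addr show brkvm    # the bridge should now exist and carry 192.168.56.1/24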
On Wednesday 12 Mar 2014 14:06:30 Mauro Santos wrote:
No netctl here :)
I have systemd-networkd enabled on boot and 3 files in /etc/systemd/network:

cat brkvm.netdev
[NetDev]
Name=brkvm
Kind=bridge

cat brkvm.network
[Match]
Name=brkvm

[Network]
Description=Bridge for use with virtual machines and containers
Address=192.168.56.1/24

cat vb-veth.network
[Match]
Name=vb-*

This last one is sort of a hack to bring the network up as it shows up. I suppose systemd-nspawn should do it by itself; this might be a bug, unless there is a good reason not to bring the network up automatically.

Inside the container I do manual setup of the network address since I'm not actually booting it.

Mind you that you may have to do systemctl daemon-reload (not really sure if this one is needed) and restart systemd-networkd for any changes to take effect.
Can I ask you both why you chose this route of creating a private network? As far as I can tell, by default systemd-nspawn will allow the container to use the host's interface. I would have thought that would be adequate for most usecases?

Paul
Can I ask you both why you chose this route of creating a private network? As far as I can tell, by default systemd-nspawn will allow the container to use the host's interface. I would have thought that would be adequate for most usecases?
Paul
My first tests with nspawn/networkd, with a very minimal configuration (just one eth netctl profile), left me with a working network on the container, but as you said, the container was using the host interface (enp7s0 in my case). Thus, same IP for both and no container network "isolation".
From SYSTEMD-NSPAWN(1)
--private-network
    Disconnect networking of the container from the host. This makes all network interfaces unavailable in the container, with the exception of the loopback device and those specified with --network-interface= and configured with --network-veth.

That is exactly what I wanted. In my case, as the container is aimed at hosting various web apps with a static IP, I wanted to isolate the container network from the host one.
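For illustration, the isolation described above can be had with a one-line invocation; the container directory below is a placeholder of my own, and as far as I recall --network-veth implies --private-network, so the container only sees lo and host0:

systemd-nspawn --boot --directory=/var/lib/container/dahlia --network-veth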
On Wednesday 12 Mar 2014 15:20:01 arnaud gaboury wrote:
Can I ask you both why you chose this route of creating a private network? As far as I can tell, by default systemd-nspawn will allow the container to use the host's interface. I would have thought that would be adequate for most usecases?
Paul
My first tests with nspawn/networkd, with a very minimal configuration (just one eth netctl profile), left me with a working network on the container, but as you said, the container was using the host interface (enp7s0 in my case). Thus, same IP for both and no container network "isolation".
From SYSTEMD-NSPAWN(1)
--private-network
    Disconnect networking of the container from the host. This makes all network interfaces unavailable in the container, with the exception of the loopback device and those specified with --network-interface= and configured with --network-veth.
That is exactly what I wanted. In my case, as the container is aimed at hosting various web apps with a static IP, I wanted to isolate the container network from the host one.
OK, so in fact you did have an extra requirement that you wanted to use a separate IP address in this container? Is that an important requirement? Also, as I stated earlier, I think you should be using --network-bridge, not --private-network. Paul
On 12-03-2014 14:11, Paul Gideon Dann wrote:
On Wednesday 12 Mar 2014 14:06:30 Mauro Santos wrote:
No netctl here :)
I have systemd-networkd enabled on boot and 3 files in /etc/systemd/network:

cat brkvm.netdev
[NetDev]
Name=brkvm
Kind=bridge

cat brkvm.network
[Match]
Name=brkvm

[Network]
Description=Bridge for use with virtual machines and containers
Address=192.168.56.1/24

cat vb-veth.network
[Match]
Name=vb-*
This last one is sort of a hack to bring the network up as it shows up, I suppose systemd-nspawn should do it by itself, this might be a bug, unless there is a good reason not to bring the network up automatically.
Inside the container I do manual setup of the network address since I'm not actually booting it.
Mind you that you may have to do systemctl daemon-reload (not really sure if this one is needed) and restart systemd-networkd for any changes to take effect.
Can I ask you both why you chose this route of creating a private network? As far as I can tell, by default systemd-nspawn will allow the container to use the host's interface. I would have thought that would be adequate for most usecases?
Paul
Because I have both a virtual machine and a container that need to talk to each other. Initially I had this setup specifically because of qemu: I wanted access to a few ports inside the virtual machine, and having to set up some kind of NAT would be a pain (and another variable in case things didn't work). After I saw that systemd-nspawn now has more network isolation features I just used the setup I had. It's possible this is overkill for what I want, but it was the solution I came up with at the time.

-- Mauro Santos
On Wednesday 12 Mar 2014 14:21:05 Mauro Santos wrote:
Can I ask you both why you chose this route of creating a private network? As far as I can tell, by default systemd-nspawn will allow the container to use the host's interface. I would have thought that would be adequate for most usecases?
Because I have both a virtual machine and container that need to talk to each other.
Yeah, that sounds like a sensible reason; thank you. I believe Arnaud's usecase is a single container with no particularly special connectivity requirements (as far as I can tell). I'm worried that he's making his setup a lot more complicated than it needs to be. Paul
Yeah, that sounds like a sensible reason; thank you. I believe Arnaud's usecase is a single container with no particularly special connectivity requirements (as far as I can tell). I'm worried that he's making his setup a lot more complicated than it needs to be.
See my previous post: I want to learn. Then, the container will one day be a production server. So my idea is to test everything now, then take a snapshot and build a prod server with much more complicated network services and settings.
On Wednesday 12 Mar 2014 16:01:00 arnaud gaboury wrote:
See my previous post: I want to learn. Then, the container will one day be a production server. So my idea is to test everything now, then take a snapshot and build a prod server with much more complicated network services and settings.
OK yeah, that's also a good reason :) In that case, I'm curious to find out if you find that setting the host0 interface up in the container also brings the vb-dahlia interface up on the host? I think it would be best-practice to set the network configuration inside the container, if possible. That way, you can move the container from host to host without needing to perform any additional configuration on each host other than starting the container. After all, the container's IP is related to the container, not to the host, so it makes sense for its configuration to live with the container. Paul
In that case, I'm curious to find out if you find that setting the host0 interface up in the container also brings the vb-dahlia interface up on the host?
On container:

gab@dahlia ➤➤ ~ % ip addr
2: host0: <BROADCAST,ALLMULTI,AUTOMEDIA,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 56:84:f7:39:43:c7 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.94/24 brd 192.168.1.255 scope global host0
       valid_lft forever preferred_lft forever
    inet6 fe80::5484:f7ff:fe39:43c7/64 scope link
       valid_lft forever preferred_lft forever

gab@dahlia ➤➤ ~ # ip link set dev host0 down

gab@dahlia ➤➤ ~ % ip addr
2: host0: <BROADCAST,ALLMULTI,AUTOMEDIA> mtu 1500 qdisc pfifo_fast state DOWN group default qlen 1000
    link/ether 56:84:f7:39:43:c7 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.94/24 brd 192.168.1.255 scope global host0
       valid_lft forever preferred_lft forever

Now looking on host:

gabx@hortensia ➤➤ ~ % ip addr
4: vb-dahlia: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc pfifo_fast master br0 state DOWN group default qlen 1000
    link/ether 8e:a4:c3:8c:cc:89 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.94/24 brd 192.168.1.255 scope global vb-dahlia
       valid_lft forever preferred_lft forever
    inet6 fe80::8ca4:c3ff:fe8c:cc89/64 scope link
       valid_lft forever preferred_lft forever

It was UP before I brought vb down. So you have your answer: yes.
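As a side note, the bridge membership and per-port state seen above can also be inspected in one go on the host; a sketch using iproute2's master filter, with output shape varying by version:

ip link show master br0    # lists enp7s0 and vb-dahlia with their UP/DOWN and carrier state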
On Wednesday 12 Mar 2014 17:32:27 arnaud gaboury wrote:
It was UP before I brought vb down. So you have your answer : yes.
OK, so in that case, I'd recommend not doing anything special on the host to bring the vb-dahlia interface up. It's behaving just like a normal interface would on a real system: the interface should be brought up and configured as normal in the container; the host doesn't need to do anything special.

So you should have this situation:

The host has configuration that creates a bridge br0, containing only the physical interface enp7s0. The bridge should be given the IP address that you want the host to have.

When the container is started, using --network-bridge=br0, the host automatically creates the vb-dahlia interface and adds it to the br0 bridge. No additional configuration is necessary on the host.

The container should configure its network exactly as for a normal, non-virtualised system. It can use DHCP if necessary, in which case it will receive an IP on the same network as the host. Conceptually, they are connected to the same network via a hub/switch.

Paul
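On the "it can use DHCP" point, a minimal sketch of what that could look like inside the container with netctl; the profile name is my own, and it assumes dhcpcd is available in the container:

/etc/netctl/host0-dhcp   (inside the container)
Description='host0 via DHCP'
Interface=host0
Connection=ethernet
IP=dhcp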
On Wed, Mar 12, 2014 at 6:02 PM, Paul Gideon Dann <pdgiddie@gmail.com> wrote:
On Wednesday 12 Mar 2014 17:32:27 arnaud gaboury wrote:
It was UP before I brought vb down. So you have your answer : yes.
OK, so in that case, I'd recommend not doing anything special on the host to bring the vb- dahlia interface up. It's behaving just like a normal interface would on a real system: the interface should be brought up and configured as normal in the container; the host doesn't need to do anything special.
So you should have this situation:
The host has configuration that creates a bridge br0, containing only the physical interface enp7s0. The bridge should be given the IP address that you want the host to have.
When the container is started, using --network-bridge=br0, the host automatically creates the vb-dahlia interface and adds it to the br0 bridge. No additional configuration is necessary on the host.
Exactly what happens.
The container should configure its network exactly as for a normal, non-virtualised system. It can use DHCP if necessary, in which case it will receive an IP on the same network as the host. Conceptually, they are connected to the same network via a hub/switch.
Not a bad idea to set up this part in container.
After I saw that systemd-nspawn now has more network isolation features I just used the setup I had.
It's possible this is overkill for what I want but it was the solution I came up with at the time.
Same here. I have been a long-time user of libvirt for VMs, and I decided to have a look at the container story. I first tried libvirt-lxc, but configuring cgroups was a pain. So I jumped on the nspawn wagon, as the guest OS setup seemed so obvious. I lost myself with the network story because of my lack of knowledge in this field. But now I have learned a lot about virtual networks and devices, and that's what I expect from Linux, and from Arch: to learn. The fastest route is running Ubuntu with VirtualBox, but that is not my approach.
No netctl here :)
I have systemd-networkd enabled on boot and 3 files in /etc/systemd/network:

cat brkvm.netdev
[NetDev]
Name=brkvm
Kind=bridge

cat brkvm.network
[Match]
Name=brkvm

[Network]
Description=Bridge for use with virtual machines and containers
Address=192.168.56.1/24

cat vb-veth.network
[Match]
Name=vb-*
This last one is sort of a hack to bring the network up as it shows up, I suppose systemd-nspawn should do it by itself, this might be a bug, unless there is a good reason not to bring the network up automatically.
Inside the container I do manual setup of the network address since I'm not actually booting it.
Mind you that you may have to do systemctl daemon-reload (not really sure if this one is needed) and restart systemd-networkd for any changes to take effect.
-- Mauro Santos
Thank you Mauro. Will try to get rid of the bridge netctl profile then.
participants (5)
- arnaud gaboury
- Damjan Georgievski
- Jakub Klinkovský
- Mauro Santos
- Paul Gideon Dann