With the demise of ESXi, I am looking for alternatives. Currently I have pfSense virtualized with four physical NICs and a bunch of virtual ones, and it works great. Does Proxmox do this with anything like the ease of ESXi? Any other ideas?
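From what I've read, Proxmox handles this with plain Linux bridges defined in /etc/network/interfaces, one bridge per physical NIC, which the pfSense VM then attaches to. A rough sketch of what I think that looks like (interface names and addresses are placeholders):

# Management/LAN bridge; the host gets an IP here
auto vmbr0
iface vmbr0 inet static
    address 192.168.1.2/24
    gateway 192.168.1.1
    bridge-ports eno1
    bridge-stp off
    bridge-fd 0

# WAN bridge, no IP on the host; the pfSense VM's WAN interface attaches here
auto vmbr1
iface vmbr1 inet manual
    bridge-ports eno2
    bridge-stp off
    bridge-fd 0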
I have another question, if you don't mind: I have a Debian/Incus + OPNsense setup now; I created bridges for my NICs with systemd-networkd and attached the bridges to the VM like you described. The host is configured with DHCP on the LAN bridge, and ideally (correct me if I'm wrong, please) I'd like the host not to touch the WAN bridge at all, other than creating it and hooking it up to the NIC.
Here's the problem: if I don't configure the bridge on the host with either DHCP or a static IP, the OPNsense VM also doesn't receive an IP on that interface. I have a br0.netdev to set up the bridge, a br0.network to connect the bridge to the NIC, and a wan.network to assign a static IP on br0; otherwise nothing works. (While I'm working on this, I have the WAN port connected to my old LAN, if it makes a difference.)
My question is: is my expectation wrong, or my setup? Am I mistaken that the host shouldn't be configured on the WAN interface? Can I solve this by passing the PCI device to the VM, and what's the best practice here?
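(For the PCI idea, I was thinking of something like the following — just a sketch, assuming Incus's physical-NIC and PCI passthrough device types, with a placeholder instance name, device name, and PCI address:)

# Hand the physical NIC itself to the VM (it disappears from the host):
incus config device add opnsense wan nic nictype=physical parent=enp1s0 name=eth1

# Or pass the whole PCI device through instead:
incus config device add opnsense wan pci address=0000:03:00.0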
Thank you for taking a look! 😊
I think there's something wrong with your setup. One of my machines has a br0 and a setup like yours. 10-enp5s0.network is the physical "WAN" interface:
root@host10:/etc/systemd/network# cat 10-enp5s0.network
[Match]
Name=enp5s0

[Network]
Bridge=br0
# -> note that we're just saying that enp5s0 belongs to the bridge, no IPs are assigned here

root@host10:/etc/systemd/network# cat 11-br0.netdev
[NetDev]
Name=br0
Kind=bridge

root@host10:/etc/systemd/network# cat 11-br0.network
[Match]
Name=br0

[Network]
DHCP=yes
# -> in my case I'm also requesting an IP for my host, but this isn't required; if I set it to "no" it will also work
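(If you want to double-check the bridge on your host, something like this should do it — assuming iproute2 and systemd-networkd; your interface names will differ:)

networkctl status br0   # shows carrier, addresses, and which .network file matched
bridge link show        # lists the ports actually enslaved to the bridge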
Now, I have a profile for “bridged” containers:
root@host10:/etc/systemd/network# lxc profile show bridged
config:
  (...)
description: Bridged Networking Profile
devices:
  eth0:
    name: eth0
    nictype: bridged
    parent: br0
    type: nic
(...)
And one of my VMs with this profile:
root@host10:/etc/systemd/network# lxc config show havm
architecture: x86_64
config:
  image.description: HAVM
  image.os: Debian
  (...)
profiles:
- bridged
(...)
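(A profile like that can be created with roughly these commands — a sketch; the instance name here is just the one from the output above, yours will differ:)

lxc profile create bridged
lxc profile device add bridged eth0 nic nictype=bridged parent=br0 name=eth0
lxc profile add havm bridged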
Inside the VM the network is configured like this:
root@havm:~# cat /etc/systemd/network/10-eth0.network
[Match]
Name=eth0

[Link]
RequiredForOnline=yes

[Network]
DHCP=ipv4
Can you check if your config is set up like this? If so, it should work.
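(And inside the guest, a quick way to confirm the lease — just standard tooling, nothing specific to this setup:)

networkctl status eth0   # should show the DHCP-assigned address and the matched .network file
ip -br addr show eth0    # compact view of the addresses on the interface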
My config was more or less identical to yours, and that removed some doubt and let me focus on the right part: without a network config for br0, the host wasn't bringing it up on boot. I thought it had something to do with the interface having an IP, but it turns out the following works as well:
user@edge:/etc/systemd/network$ cat wan0.network
[Match]
Name=br0

[Network]
DHCP=no
LinkLocalAddressing=ipv4

[Link]
RequiredForOnline=no
Thank you once again!
Oh, now I remember that there's ActivationPolicy= in [Link] that can be used to control what happens to the interface; at some point I even reported a bug on that feature and VLANs. I'm not so sure it's about the interface having an IP… I believe your current LinkLocalAddressing=ipv4 is forcing the interface up, since it has to assign a link-local IP. Maybe you can set LinkLocalAddressing=no and ActivationPolicy=always-up and see how it goes.
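(Untested, but I mean something like this, building on your wan0.network:)

[Match]
Name=br0

[Link]
RequiredForOnline=no
ActivationPolicy=always-up

[Network]
DHCP=no
LinkLocalAddressing=no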
You know your stuff, man! It's exactly as you say. 🙏
You’re welcome.