Enabling NFS mounts in Proxmox 5.2 LXC containers

Edit: This post has been made obsolete by recent releases, which allow enabling these flags via both the web UI and the pct CLI tool.

Normally, Proxmox doesn't allow mounting NFS shares directly in containers due to security concerns. In the past it was possible to modify the apparmor profiles directly to allow it, as seen here and here. However, as of this commit, that method is no longer an option, because the apparmor profiles are now generated dynamically when the container starts (you can view these profiles at /var/lib/lxc/${CID}/apparmor/lxc-${CID}_<-var-lib-lxc>).
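For reference, on newer releases where this is exposed through pct, enabling NFS mounts for a container boils down to something like the following sketch (`102` is a placeholder container ID; adjust to your setup):

```
# Sketch for recent Proxmox releases; 102 is a placeholder container ID.
# Allow the container to mount NFS shares (mount= also accepts a list, e.g. nfs;cifs).
pct set 102 --features mount=nfs

# Restart the container so the regenerated apparmor profile takes effect.
pct stop 102
pct start 102
```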

Enabling jumbo frames on Proxmox 4

While turning on jumbo frames on a basic interface is straightforward (`ip link set eth0 mtu 9000`), Proxmox's use of bridges to connect VMs makes things much more interesting. To start with, all interfaces connected to the bridge must have their MTUs raised first, otherwise the kernel rejects the change with an unhelpful error. Note that the interfaces connected to the bridge include those of running containers/VMs:

```
# ip a | grep mtu
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master vmbr0 state UP group default qlen 1000
3: vmbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
4: veth102i0@if7: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr0 state UP group default qlen 1000
# ip link set dev vmbr0 mtu 9000
RTNETLINK answers: Invalid argument
# ip link set dev eth0 mtu 9000
# ip link set dev veth102i0 mtu 9000
# ip link set dev vmbr0 mtu 9000
```

Note that setting the MTU on vmbr0 explicitly is technically unnecessary, since a bridge inherits the smallest MTU of its slaved devices.
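These `ip link` changes don't survive a reboot. To make them persistent, the MTU can also be set in /etc/network/interfaces; a minimal sketch, assuming a single NIC `eth0` slaved to `vmbr0` (interface names and addresses are placeholders, and container veth interfaces still take their MTU from the container/VM configuration):

```
# /etc/network/interfaces (sketch; names and addresses are placeholders)
auto eth0
iface eth0 inet manual
        mtu 9000

auto vmbr0
iface vmbr0 inet static
        address 192.168.1.2
        netmask 255.255.255.0
        gateway 192.168.1.1
        bridge_ports eth0
        bridge_stp off
        bridge_fd 0
        mtu 9000
```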