Integrating netbox with Authentik
Netbox has support for SSO integration out of the box; however, some extra work is required to make it work correctly with authentik.
Setting up Authentik
In authentik, create an OAuth2/OpenID Provider (under Resources/Providers) with these settings:
Name: Netbox
Signing Key: Select any available key
Take note of the client ID & secret for later usage.
Create an application with these settings:
Name: Netbox
Slug: netbox-slug
Provider: Netbox
Setting up Netbox
Building the image
This step is only required for docker.
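For the docker case, a build roughly like the following would bake the OpenID Connect dependencies into a custom image; the image tag, the venv path, and the social-auth-core extra are all assumptions about the netbox-docker layout, not necessarily what the post itself does.

cat > Dockerfile.sso <<'EOF'
FROM netboxcommunity/netbox:latest
# assumption: the upstream image keeps its virtualenv at /opt/netbox/venv
RUN /opt/netbox/venv/bin/pip install "social-auth-core[openidconnect]"
EOF
docker build -f Dockerfile.sso -t netbox-sso .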
‘Ghost’ changes in kubernetes' server-side apply
Starting in version 1.22, server-side apply has become officially “stable” in kubernetes, and in most cases is truly an upgrade from client-side apply. It respects fields that other server-side tools mark as controlled, and allows for applying larger and more complex objects.
However, there are a couple of “gotchas” to be aware of. Most are either big, loud errors, or called out explicitly in the documentation, but one thing I came across while working with server-side apply diffs (N.
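As a refresher on the feature itself, server-side apply (and its diff counterpart) is opted into with a flag; the manifest name and field manager below are placeholders, not anything from this post.

# the API server, rather than kubectl, merges the manifest and records field ownership
kubectl apply --server-side --field-manager=my-tool -f deployment.yaml
# preview what a server-side apply would change without persisting anything
kubectl diff --server-side -f deployment.yaml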
Tracking Geo IP information with Vector, Loki and Grafana
Normally my logging stack of choice is ELK, but recently I’ve been digging more into Loki, which is designed to be the logging equivalent of Prometheus. It’s been very interesting dealing with a low-cardinality log system after getting so used to ELK, but one feature I’ve definitely been missing is the ability to easily add GeoIP data to arbitrary logs. There’s an open issue for adding it, but in the meantime one comment suggested using vector.
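For context, vector ships a geoip transform that can enrich events from a MaxMind database before they ever reach Loki. The stanza below is a rough, untested sketch: the option names are from memory and the input/field names are placeholders, so treat it as an assumption rather than a working config.

cat >> /etc/vector/vector.toml <<'EOF'
[transforms.geoip]
type     = "geoip"
inputs   = ["parse_nginx"]                  # placeholder upstream transform
database = "/etc/vector/GeoLite2-City.mmdb" # MaxMind city database
source   = "remote_addr"                    # field holding the client IP
EOF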
What are those zero byte files on my glusterfs bricks?
While troubleshooting some issues on a distributed GlusterFS cluster, I came across a bunch of odd zero-byte files with empty permissions on the bricks.
132504 1 ---------T 2 1000 1000 0 Nov 28 02:09 /data/gluster/tank/brick-xxxxxxxx/brick/example/file.txt
I had known they were there previously, but since the bricks were composed of ZFS zpools, there was very little danger of running out of inodes or anything, so I had let them be.
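These zero-byte, mode-1000 (---------T) entries are the classic signature of GlusterFS DHT link files, and the pointer behind them is visible in the extended attributes on the brick. The path below just reuses the example above, and the xattr name is the one DHT uses for link files.

# dump the extended attributes of the suspicious file directly on the brick
getfattr -d -m . -e hex /data/gluster/tank/brick-xxxxxxxx/brick/example/file.txt
# a link file carries a trusted.glusterfs.dht.linkto xattr naming the
# subvolume where the real data actually lives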
Finding the log for deleted files in git
Recently I came across a blog post explaining how to get the history of a deleted file: https://dzone.com/articles/git-getting-history-deleted, which pointed to yet another blog with the simple solution https://feeding.cloud.geek.nz/posts/querying-deleted-content-in-git/:
git log -- deleted_file.txt
However, neither explained exactly why this works, and I got interested. -- is normally the convention that means “pass everything after this to the program as plain arguments, not options.” For example, if for some crazy reason you had a file named -h (which is perfectly legal), rm -h would just complain about an invalid option, while rm -- -h would work as expected (as would rm .
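A couple of related invocations that build on the same convention; the deleted file name carries over from above and the commit hash is a placeholder.

# full history of a path that no longer exists anywhere in the working tree
git log --all --full-history -- deleted_file.txt
# view the file's contents as of the commit just before it was deleted
git show <commit>^:deleted_file.txt
# the same end-of-options convention in other tools
rm -- -h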
Enabling NFS mounts in Proxmox 5.2 LXC containers
Edit: This post has been made obsolete by more recent Proxmox releases, which allow enabling these flags via both the web UI and the pct CLI tool.
Normally, Proxmox doesn’t allow mounting NFS shares directly in containers due to security concerns. In the past it was possible to modify apparmor profiles directly in order to allow it, as seen here and here. However, as of this commit, that method is no longer an option, due to the apparmor profiles now being generated dynamically when the containers are started (you can view these profiles at /var/lib/lxc/${CID}/apparmor/lxc-${CID}_<-var-lib-lxc>).
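On those newer releases, the per-container feature flag replaces the apparmor workarounds entirely; a sketch, using an arbitrary container ID:

# allow NFS (and CIFS) filesystems to be mounted inside container 102
pct set 102 --features "mount=nfs;cifs"
# confirm the flag landed in the container config
pct config 102 | grep features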
SSH public key shenanigans
A fun little fact I discovered about SSH: when you specify a private key to use, it checks ${key}.pub for hints about how to parse the private key, without warning. Under normal operation this is never a problem, but if you replace a private key in-place and don’t update the .pub file, authentication will fail:
$ ls -la ssh.key ssh.key.pub
$ ssh user@host echo ping
user@host's password: ^C
$ mv ssh.
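The fix, besides simply deleting the stale .pub, is to regenerate it from the new private key; the path is the example key from above.

# derive the public key from the private key and overwrite the stale hint file
ssh-keygen -y -f ssh.key > ssh.key.pub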
Enabling jumbo frames on Proxmox 4
While turning on jumbo frames on a basic interface is straightforward (ip link set eth0 mtu 9000), Proxmox’s use of bridges to connect VMs makes things much more interesting. To start with, all interfaces connected to the bridge must have their MTUs raised first, otherwise the bridge will give you an unhelpful error. Note that interfaces connected to the bridge include those of running containers/VMs.1
# ip a | grep mtu
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master vmbr0 state UP group default qlen 1000
3: vmbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
4: veth102i0@if7: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr0 state UP group default qlen 1000
# ip link set dev vmbr0 mtu 9000
RTNETLINK answers: Invalid argument
# ip link set dev eth0 mtu 9000
# ip link set dev veth102i0 mtu 9000
# ip link set dev vmbr0 mtu 9000
Note that setting the MTU on vmbr0 is technically unnecessary, since bridges inherit the smallest MTU of the slaved devices2.
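Once the MTUs are raised, it's worth confirming that full-size frames actually pass unfragmented end to end; the peer address below is a placeholder.

# 8972 bytes of ICMP payload + 28 bytes of IP/ICMP headers = 9000-byte packets,
# and -M do forbids fragmentation, so any undersized hop fails loudly
ping -M do -s 8972 192.0.2.1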
Upgrading from PHP 5.5 to 5.6 on FreeBSD
Recently PHP 5.5 got EOL’d, but PHP 5.6 will be supported for another two years. On Debian, this is just a matter of upgrading the php5 package, but FreeBSD splits it into separate packages: php55 and php56, not to mention that extensions are also split out this way. The fact that I’ve installed php via ports also complicates things.
Doing the deed
This assumes portmaster is installed.
Listing the installed php55 packages:
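A sketch of what that listing and the subsequent origin switch might look like with pkg and portmaster; the exact commands are assumptions rather than what the post runs.

# list everything currently installed from the php55 ports
pkg info | grep '^php55'
# rebuild each against the new origin, e.g. replacing lang/php55 with lang/php56
portmaster -o lang/php56 lang/php55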
Making Terraform work with PowerDNS 4
Edit: This post has been made obsolete by a pull request I opened in the terraform repository: https://github.com/hashicorp/terraform/pull/7819
I’ve really enjoyed using PowerDNS as my DNS server at home. Most people only think of BIND and dnsmasq when it comes to DNS, while ignoring this stable, scalable, secure database-backed offering that powers some really large deployments. But enough proselytizing! I’m in the middle of trying to migrate my infrastructure to be controlled via Terraform (mostly).