Geronimo Wenja

  • 0 Posts
  • 12 Comments
Joined 1 year ago
Cake day: June 5th, 2023



  • I had a reasonably good time with it. I had issues with btrfs, which is why I moved off it and went to Fedora IoT for pretty much the same benefits.

    For me, btrfs caused multiple drive corruptions because of unexpected power offs, and I didn’t feel like trying to fix that on the fly - it might have been drives that were incompatible with CoW because of firmware “optimisations” that break if a write isn’t completed prior to power off.

    In general, outside of that, it was pretty solid. I didn’t find much use for the orchestration/setup tooling they include, and I found their documentation unfortunately patchy. Fedora IoT has the advantage of basically being Silverblue, with rpm-ostree, so it’s easy to find people using it and discussing it.


  • Are you expecting sonarr to go after historical stuff? You have to manually request a search for anything added that isn’t being released in the future. Sonarr only automatically checks for new episodes, not old ones. Like others have said, season searches and interactive searches are useful for anything that’s not airing in the future.
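    If you’d rather script it than click through the UI, the same season search can be triggered through Sonarr’s command API. The payload below is from memory, so treat the field names and IDs as assumptions and check your instance’s API docs:

```shell
# Trigger a season search for one series via the v3 command API.
# The API key, seriesId and seasonNumber are hypothetical examples.
curl -s -X POST "http://localhost:8989/api/v3/command" \
  -H "X-Api-Key: #YOUR_API_KEY#" \
  -H "Content-Type: application/json" \
  -d '{"name": "SeasonSearch", "seriesId": 1, "seasonNumber": 1}'
```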




  • The existing feature is that only subscribers will see it in feeds, but it can still be searched for or viewed manually. It’s not a private community feature. I’m just planning to add front-end access for the feature that already exists, so that admins don’t have to do API calls to use it.

    I’ll see if there’s any existing discussions about private communities while I’m at it though, it might be something the main devs have an opinion on or plan for.
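    For reference, the admin-only call I’m talking about wrapping looks roughly like this against the Lemmy HTTP API. I’m going from memory on the exact route and field names, so verify them against the API docs for your instance’s version:

```shell
# Hide a community from the default feeds (admin JWT required).
# Route and field names here are my best recollection, not gospel.
curl -s -X PUT "https://example.instance/api/v3/community/hide" \
  -H "Content-Type: application/json" \
  -d '{"community_id": 123, "hidden": true, "reason": "example", "auth": "#ADMIN_JWT#"}'
```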





  • Yeah sure.

    I’m going to assume you’re starting from the point of having a second Linux user also set up to use rootless podman. That’s just following the same steps for setting up rootless podman as any other user, so there shouldn’t be too many problems there.
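    For completeness, a sketch of what that second-user setup usually looks like - the username vpnuser is just an example, and subuid/subgid handling varies a little by distro:

```shell
# Create the dedicated user (on my setup this gets UID 1001):
sudo useradd -m vpnuser

# Let its services keep running without an active login session:
sudo loginctl enable-linger vpnuser

# Rootless podman needs subordinate UID/GID ranges; most distros add
# them automatically on useradd - verify with:
grep vpnuser /etc/subuid /etc/subgid

# Then, as that user, confirm podman is running rootless:
sudo -iu vpnuser podman info --format '{{.Host.Security.Rootless}}'
```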

    If you have wireguard set up and running already - i.e. with Mullvad VPN or your own VPN to a VPS - you should be able to run ip link to see a wireguard network interface. Mine is called wg. I don’t use wg-quick, which means I don’t have all my traffic routing through it by default. Instead, I use a systemd unit to bring up the WG interface and set up routing.
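    If you’re not sure what your interface is called, these will list any wireguard links and show their peer state:

```shell
# List wireguard-type links only:
ip link show type wireguard

# Show handshake/peer status for a specific interface (mine is wg):
sudo wg show wg
```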

    I’ll assume the UID you want to forward is 1001, because that’s what I’m using, and enp3s0 as the default network link, because that’s what mine is - if yours is eth0, use that instead. Finally, I’ll assume 192.168.0.0/24 is your local subnet - it’s useful to avoid routing local traffic through wireguard.

    #YOUR_STATIC_EXTERNAL_IP# should be whatever you get by calling curl ifconfig.me if you have a static IP - again, useful to avoid routing local traffic through wireguard. If you don’t have a static IP you can drop this line.

    [Unit]
    Description=Create wireguard interface
    Wants=network-online.target
    After=network-online.target
    
    [Service]
    Type=oneshot
    RemainAfterExit=yes
    ExecStart=/usr/bin/bash -c " \
            /usr/sbin/ip link add dev wg type wireguard || true; \
            /usr/bin/wg setconf wg /etc/wireguard/wg.conf || true; \
            /usr/bin/resolvectl dns wg #PREFERRED_DNS#; \
            /usr/sbin/ip -4 address add #WG_IPV4_ADDRESS#/32 dev wg || true; \
            /usr/sbin/ip -6 address add #WG_IPV6_ADDRESS#/128 dev wg || true; \
            /usr/sbin/ip link set mtu 1420 up dev wg || true; \
            /usr/sbin/ip rule add uidrange 1001-1001 table 200 || true; \
            /usr/sbin/ip route add #VPN_ENDPOINT# via #ROUTER_IP# dev enp3s0 table 200 || true; \
            /usr/sbin/ip route add 192.168.0.0/24 via 192.168.0.1 dev enp3s0 table 200 || true; \
            /usr/sbin/ip route add #YOUR_STATIC_EXTERNAL_IP#/32 via #ROUTER_IP# dev enp3s0 table 200 || true; \
            /usr/sbin/ip route add default via #WG_IPV4_ADDRESS# dev wg table 200 || true; \
    "
    
    ExecStop=/usr/bin/bash -c " \
            /usr/sbin/ip rule del uidrange 1001-1001 table 200 || true; \
            /usr/sbin/ip route flush table 200 || true; \
            /usr/bin/wg set wg peer '#PEER_PUBLIC_KEY#' remove || true; \
            /usr/sbin/ip link del dev wg || true; \
    "
    
    [Install]
    WantedBy=multi-user.target
    

    There’s a bit to go through here, so I’ll take you through why it works. Most of it is just setting up WG to receive/send traffic. The bits that are relevant are:

            /usr/sbin/ip rule add uidrange 1001-1001 table 200 || true; \
            /usr/sbin/ip route add #VPN_ENDPOINT# via #ROUTER_IP# dev enp3s0 table 200 || true; \
            /usr/sbin/ip route add 192.168.0.0/24 via 192.168.0.1 dev enp3s0 table 200 || true; \
            /usr/sbin/ip route add #YOUR_STATIC_EXTERNAL_IP#/32 via #ROUTER_IP# dev enp3s0 table 200 || true; \
            /usr/sbin/ip route add default via #WG_IPV4_ADDRESS# dev wg table 200 || true; \
    

    ip rule add uidrange 1001-1001 table 200 adds a new rule where requests from UID 1001 go through table 200. A table is a subset of ip routing rules that are only relevant to certain traffic.

    ip route add #VPN_ENDPOINT# ... ensures that wireguard’s own encrypted traffic to the VPN endpoint still goes out over the physical interface, rather than being routed back into the tunnel. This is relevant for handshakes.

    ip route add 192.168.0.0/24 via 192.168.0.1 ... is just excluding local traffic, as is the ip route add #YOUR_STATIC_EXTERNAL_IP# ... line.

    Finally, we add ip route add default via #WG_IPV4_ADDRESS# ... which routes all traffic that didn’t match any of the above rules (local traffic, wireguard) to go to the wireguard interface. From there, WG handles all the rest, and passes returning traffic back.

    There’s going to be some individual tweaking here, but the long and short of it is, UID 1001 will have all their external traffic routed through WG. Any internal traffic between docker containers in a docker-compose should already be handled by podman pods and never reach the routing rules. Any traffic aimed at other services in the network - i.e. sonarr calling sabnzbd or transmission - will happen with a relevant local IP of the machine it’s hosted on, and so will also be skipped. Localhost is already handled by existing ip route rules, so you shouldn’t have to worry about that either.
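    To sanity-check that the routing is doing what you expect, you can inspect the rule and table, then compare what external IP each user sees - the VPN’d user should report the VPN endpoint’s address, everyone else your normal ISP address (#VPN_USER# here is whatever username owns UID 1001):

```shell
# Confirm the uidrange rule exists and see what's in table 200:
ip rule show
ip route show table 200

# External IP as seen by the VPN'd user - should be the VPN exit IP:
sudo -u '#VPN_USER#' curl -s https://ifconfig.me

# External IP as seen by any other user - should be your normal IP:
curl -s https://ifconfig.me
```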

    Hopefully that helps - sorry if it’s a bit confusing. I learned to set up my own IP routing to avoid wg-quick so that I could have greater control over the traffic flow, so this is quite a lot of my learning that I’m attempting to distill into one place.


  • One of the really nice side-effects of it running rootless is that you get all the benefits of it running as an actual Unix user.

    For instance, you can set up wireguard with ip routing rules to send all traffic from a given UID through the VPN.

    Using that, I set up one user as the single user for running all the stuff I want to have VPN’d for outgoing connections, like *arr services, with absolutely no extra work. I don’t need to configure a specific container, I don’t need to change a docker-compose etc.

    In rootful docker, I had to use a specific IP subnet to achieve the same, which was way more clunky.