Just an explorer in the threadiverse.

  • 4 Posts
  • 11 Comments
Joined 1 year ago
Cake day: June 4th, 2023



  • PriorProject@lemmy.world to Selfhosted@lemmy.world · WoL through Wireguard

    This is a very strong explanation of what’s going on. And as a follow-up, I believe that ZeroTier presents a single Ethernet broadcast domain, so WoL tricks are more likely to work naturally there than with Wireguard. I haven’t used ZeroTier, though I do use Wireguard via Tailscale/Headscale. I’ve never missed the Ethernet features of ZeroTier, and they CAN result in a very chatty WAN if you’re not careful. But I think ZT would make this straightforward.

    Though as other people note… the simplest/least-disruptive change is probably to expose some scripty thing on the rpi that can be triggered over a routed protocol, and then have the rpi emit the Ethernet broadcast packets onto the physical network.
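    A minimal sketch of such a scripty thing in Python (the MAC address is a placeholder, and it assumes the rpi sits on the same Ethernet segment as the machine you want to wake):

    ```python
    # Minimal WoL sender to run on the rpi; trigger it over SSH, a tiny HTTP
    # endpoint, cron, or whatever routed mechanism you like.
    import socket

    def send_wol(mac: str, broadcast: str = "255.255.255.255") -> None:
        # A magic packet is 6 bytes of 0xFF followed by the target MAC 16 times.
        clean = mac.replace(":", "").replace("-", "")
        payload = bytes.fromhex("FF" * 6 + clean * 16)
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
            s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
            s.sendto(payload, (broadcast, 9))  # UDP port 9 by convention

    if __name__ == "__main__":
        send_wol("aa:bb:cc:dd:ee:ff")  # placeholder MAC
    ```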


  • I dunno how to hotlink, but if you scroll to the active users graph at https://fedidb.org/software/lemmy you can see there’s been like a 25% dropoff in active users since the peak in July. Lemmy has still grown 50x since May, and it’s much MUCH more active than it was then. But we’ve definitely crested a peak, and not everyone who gave Lemmy a shot then is sticking around on a monthly basis.

    This isn’t necessarily bad. Lemmy is still young and has many rough edges; it wasn’t realistic to win every user who tried it in a head-to-head with reddit on ease-of-use. And Mastodon has had multiple growth waves interspersed with periods of declining usage, but with the spikes it has grown, or at least remained stable, overall. Early-stage commercial social media have big ups and downs in engagement and growth as well, and just like lemmy those ups and downs are often externally driven… when competitors mess up, when a big global news story hits, when a major sporting event happens… these can all be catalysts for one-time growth. It’s not a straight line.

    Time will tell what user level we stabilize at in the short-term and what events spur new growth, but it’s normal to have a big expansion be followed by some degree of contraction.


  • No no, sorry. I mean can I still have all my network traffic go through some VPN service (mine or a provider’s) while Tailscale is activated?

    Tailscale just partnered with Mullvad so this works out of the box for that setup: https://tailscale.com/blog/mullvad-integration/

    For others, it’s a “yes on paper” situation. It will probably not work out of the box, but it seems likely to be possible as an advanced configuration. At the extreme end of the possibilities, it would definitely be possible to set up a couple of docker containers as one-armed routers: one with your VPN and one with Tailscale as an exit node. Then each has its own networking stack, and you can set up your own routes and DNS, delegating only the necessary bits to each one. That’s a pretty advanced setup and you may not have the know-how for it, but it demonstrates what’s possible.
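    For the Mullvad integration specifically, my understanding from the announcement is that it’s driven through the normal exit-node flow. I’m going from memory of Tailscale’s CLI here, so treat the exact commands as an assumption and check their docs:

    ```
    tailscale exit-node list            # Mullvad locations show up once the add-on is enabled
    tailscale set --exit-node=<node>    # route all traffic through that exit node
    tailscale set --exit-node=          # clear it again
    ```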


    To a first approximation, Tailscale/Headscale don’t route any traffic.

    Ah, well damn. Is there a way to achieve this while using Tailscale as well, or is that even recommended?

    Is there a way to achieve what? Force tailscale to route all traffic through the DERP servers? I don’t know, and I don’t know why you’d want to. When my laptop is at home on the same network as my file-server, I certainly don’t want tailscale sending file-server traffic out to my Headscale server on the Internet just to download it back to my laptop on the same network it came from. I want NAT traversal to allow my laptop and file-server to negotiate the most efficient network path that works for them… whether that’s within my home lab when I’m there, across the internet when I’m traveling, or routing through the DERP server when no other option works.

    OpenVPN or vanilla Wireguard are commonly set up with simple hub-and-spoke routing topologies that send all VPN traffic through “the VPN server”, but this is generally a slower path than a direct connection. It might be imperceptibly slower over the Internet, but it will be MUCH slower than the local network unless you do some split-DNS shenanigans to special-case the local-network scenario. With Tailscale, it all more or less works the same wherever you are, which is a big benefit. The exception is if you have a true multigigabit network at home and the encryption overhead slows you down… but Wireguard is pretty fast, and not a problematic throughput limiter for the vast majority of cases.
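    For contrast, here’s what the hub-and-spoke pattern looks like in a vanilla Wireguard client config (a sketch with placeholder keys and endpoint). The AllowedIPs = 0.0.0.0/0 line is what sends ALL traffic through the hub, even traffic for a machine sitting next to you on the LAN:

    ```
    [Interface]
    # Spoke ("client") side; placeholder key and VPN-internal address.
    PrivateKey = <spoke-private-key>
    Address = 10.0.0.2/24

    [Peer]
    # The hub, i.e. "the VPN server".
    PublicKey = <hub-public-key>
    Endpoint = vpn.example.com:51820
    # Send everything through the hub -- the generally-slower path discussed above.
    AllowedIPs = 0.0.0.0/0
    PersistentKeepalive = 25
    ```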


  • Have a read through https://tailscale.com/blog/how-nat-traversal-works/

    You, and many commenters, are pretty confused about how Tailscale/Headscale work.

    1. To a first approximation, Tailscale/Headscale don’t route any traffic. They perform NAT traversal, and data flows directly between nodes on the tailnet without traversing Headscale/Tailscale directly.
    2. If NAT traversal fails badly enough, it’s POSSIBLE for bulk traffic to flow through the Headscale/Tailscale DERP nodes… but that’s an unusual scenario (there’s a way to check which path you’re getting; see the sketch below).
    3. You probably can’t run Headscale from your home network and have it perform the NAT traversal functions correctly. Of course, I can’t know that for sure because I don’t know anything about your ISP… but home ISPs preventing Headscale from doing its NAT traversal job are the norm… one would be pleasantly surprised to find a home network that can do it properly.
    4. Are you really expecting 10Gb/s speeds over your encrypted links? I don’t want to say it’s impossible, people do it… but you’d generally only expect to see this on fairly burly servers that are properly configured. Tailscale just in April bragged about hitting 10Gb/s with recent optimizations: https://tailscale.com/blog/more-throughput/ and on home hardware with a novice config I’d generally expect to see more like single gigabit.
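    If you want to check which path you’re actually getting, `tailscale ping` reports whether a peer was reached directly or via a DERP relay. A small sketch (the hostname `fileserver` is a placeholder, and I’m going from memory on the output format):

    ```python
    # Ask tailscale how it reaches a peer: direct path or DERP relay.
    import subprocess

    result = subprocess.run(
        ["tailscale", "ping", "-c", "1", "fileserver"],
        capture_output=True, text=True,
    )
    line = result.stdout.strip()
    # Relayed pongs mention DERP ("via DERP(nyc)"); direct ones show "via <ip>:<port>".
    print("relayed" if "DERP" in line else "direct", "->", line)
    ```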

  • I don’t know the answer to your question, though I suspect it’s that Jellyfin doesn’t support menus.

    What I’ve always done is rip each track to a video file. Jellyfin’s movie metadata DOES support extras: https://jellyfin.org/docs/general/server/media/movies/ and video formats like mkv support additional audio and subtitle tracks. Between multi-track video formats and extras support in Jellyfin’s native menus… it’s possible to rip the vast majority of DVD content into Jellyfin. But ISO is not the preferred format to do it.
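    Going from memory of the linked docs, the on-disk layout for a movie plus its DVD extras looks roughly like this (names are placeholders; check the docs for the exact set of recognized extras folders):

    ```
    Movies/
      Some Film (2005)/
        Some Film (2005).mkv    # main title, extra audio/subtitle tracks muxed into the mkv
        extras/
          Deleted Scene.mkv
          Making Of.mkv
    ```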

    The main thing you’d lose here would be interactive menu features or choose-your-own-adventure videos coded into the menus. Those DVD titles are pretty rare though.

    VLC might have DVD menu support for ISOs, fwiw. I have a vague recollection it might, but I’m not at all sure.


  • I don’t know what’s up in your case, but I would not jump to the conclusion that it’s impossible to use tailscale with any other VPN in any circumstance.

    Rather, tailscale and Mullvad will now work easily and out of the box. For other VPNs, you may need to understand the topology and routing of the virtual devices, and have the technical ability and system permissions to make deep networking changes.

    So I’d expect one can probably find a way for most things to coexist on a Linux server. On a non-rooted Android phone? I’m less confident.


  • I use k8s at work and have built a k8s cluster in my homelab… but I did not like it. I tore it down, I’m currently using podman, and I don’t think I would go back to k8s (though I would definitely use docker as an alternative to podman, and would probably even recommend it over podman for beginners, even though I’ve settled on podman for myself).

    1. K8s itself is quite resource-consuming, especially on RAM. My homelab is built on old/junk hardware from retired workstations, and I don’t want the kubelet itself sucking up half my RAM. Things like k3s help with this considerably, but that’s not precisely k8s either. If I’m going to start trimming off the parts of k8s I don’t need, I end up going all the way to single-node podman/docker… not the halfway point that is k3s.
    2. If you don’t use hostNetwork, the k8s model where traffic routes only within the cluster except for egress is pure overhead. It’s totally necessary when you have a thousand engineers slinging services around your cluster, but there’s no benefit to this level of rigor in service management in a homelab. Here again, the networking in podman/docker is more straightforward and maps better to the stuff I want to do in my homelab.
    3. Podman accepts a subset of k8s resource-yaml as a docker-compose-like config interface, which lets me use my familiarity with k8s configs in my podman setup (see the sketch below).

    Overall, the simplicity and lightweight resource consumption of podman/docker are what I value at home. The extra layers of abstraction and constraint that k8s employs are valuable at work, where we have a lot of machines and a lot of people that must coordinate effectively… but I don’t have those problems at home, and the overhead (compute overhead, conceptual overhead, and config overhead) of k8s’ solutions to them is annoying there.
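    To illustrate point 3, here’s a minimal pod spec that podman consumes directly (a sketch with placeholder names; run it with `podman kube play pod.yaml`):

    ```yaml
    # pod.yaml -- the same shape as a k8s Pod, no cluster required.
    apiVersion: v1
    kind: Pod
    metadata:
      name: homelab-web
    spec:
      containers:
        - name: web
          image: docker.io/library/nginx:alpine
          ports:
            - containerPort: 80
              hostPort: 8080   # published on the host, docker -p style
    ```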


  • What’s the network flow like? I’m posting this to the lemmy.ml /asklemmy community, but I’m composing it on the sh.itjust.works interface. I’m assuming sh.itjust.works hands this over to lemmy.ml. How does my browsing work? Is all of my traffic routed through sh.itjust.works?

    • You register your account on sh.itjust.works; that’s where all the info you care about resides. Your list of subscribed communities resides there. When you read a post, it gets fetched out of the db on sh.itjust.works (irrespective of where the home instance for that post’s community is… when you read it, it comes out of the database on your home instance), and when you comment on a post, that gets written to the db on your home instance. Your home instance is a standalone, fully functioning thing.
    • When you subscribe to a remote community like this one, you tell your home instance: “keep up to date with posts and comments for this community and let me know about them.” Your home instance asynchronously gets all those updates while you’re asleep or whatever, so it can show them to you out of its local database when you come back. If more users on sh.itjust.works subscribe to the same community… there’s no incremental overhead. Y’all’s instance is ALREADY subscribed to that sub, so other users on your instance can sub to it for free; it’s already in the instance’s database. (There’s a toy sketch of this flow below.)
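    A toy sketch of that flow in Python (this is NOT Lemmy’s actual code, just an illustration of why reads are served locally and why extra subscribers on the same instance are free):

    ```python
    # Toy model of federated subscription/replication; not Lemmy's real code.
    class Instance:
        def __init__(self, name: str):
            self.name = name
            self.db = {}         # community -> posts cached in the local database
            self.followers = {}  # community -> set of instances subscribed to it

        def subscribe(self, community: str, home: "Instance") -> None:
            # One subscription per *instance*; additional local users are free.
            home.followers.setdefault(community, set()).add(self)

        def publish(self, community: str, post: str) -> None:
            # Write to the community's home database...
            self.db.setdefault(community, []).append(post)
            # ...then replicate asynchronously to every subscribed instance.
            for follower in self.followers.get(community, set()):
                follower.db.setdefault(community, []).append(post)

    lemmy_ml = Instance("lemmy.ml")
    sh_itjust = Instance("sh.itjust.works")
    sh_itjust.subscribe("asklemmy", home=lemmy_ml)
    lemmy_ml.publish("asklemmy", "What's the network flow like?")
    print(sh_itjust.db["asklemmy"])  # your reads come from this local copy
    ```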

    Assuming there’s a mass influx of redditors, what does it look like as things fail?

    • If lemmy.ml (where this community is homed) falls over from being overloaded, or is just broken for whatever reason, your instance is unaffected. You can still read posts and make comments. This community, however… is affected. New posts and comments for this community might come through intermittently or not at all for you (and everyone in the lemmyverse) because the community’s home server isn’t working well enough to reliably deliver them over federated replication. You can still read older posts and comments that have already been synced to your home instance, but new ones might not arrive. You might also see weird stuff like being able to see new comments from other sh.itjust.works users on this community, since those get written to your db before getting federated back to the community’s home server. But mostly, updates from other instances stop or get unreliable.
    • If sh.itjust.works falls over for some reason… well… that sucks for you. You can’t log in or browse anything on it. You can still visit this sub at https://lemmy.ml/c/asklemmy/ as long as lemmy.ml is working and you’ll be able to see the posts and comments that other accounts make. But you’ll be an anonymous read-only browser, you won’t be able to post or comment until sh.itjust.works comes back online (or you make a new account elsewhere and lose all your comment history and subscription list).

    Are there easy mechanisms to allow me to grab my post history?

    There’s a github issue for this, but it’s not done yet: https://github.com/LemmyNet/lemmy/issues/506.

    I’m assuming most (all?) Lemmy servers are hosted in home labs?

    I don’t think that’s a good assumption. lemmy.ml is hosted on OVH, a cloud provider. My home instance, lemmy.world, is hosted by admins who run something like a 32-CPU mastodon instance. Most instances with over 100 users are running on some kind of probably-modest but “real” cloud instance. The admins are volunteers, but often smart technical folks paying for small but real compute infrastructure.

    The idea of Lemmy excites me, but the growth pain that could be coming scares me. Anybody using a CDN in front of their servers? That could be good, but with unconstrained growth, that could be costly, which is very bad.

    Anticipating growing pains isn’t wrong, it’s probably gonna happen. But the devs are gonna find and work on the biggest performance problems so that people can viably run bigger instances, and instance admins are gonna run bigger hardware and ask for donations or run patreons to cover the cost. In my opinion, the bigger worry is that Lemmy will fizzle… not that it will spectacularly explode. As long as people join and contribute and are interested, we’ll find a way to improve scalability and performance. The death knell would be if people get bored and leave, but compute capacity won’t be the problem in that scenario.