I dunno when it happened but I swear SBCs were the new best thing in the universe for a while and everyone was building cool little servers with their RockPis and OrangePis.

Now it’s all gone x86 and Proxmox with everyone shitting on Arm. What happened? What gives?

Is my small army of xPis pointless? What about my 2 Edge routers?

I’ve got about 6 xPis scattered round my flat - is there anything worth doing with them or should I just bin them?

All thoughts, feelings and information welcome. Thank you.

  • TCB13@lemmy.world · 10 months ago

    What happened is that people realized what I’ve been saying all along: the RPi and friends are a money grab once you add up all the required accessories, while a MiniPC will get you way more power, stable hardware, a case, a power supply and everything in between for the same price (if you go second hand). Here are a couple of examples of such posts: https://lemmy.world/comment/5357961 , https://lemmy.world/comment/4696545

    For example, for 100€ you can find an HP Mini with an 8th-gen i5, 16GB of RAM and a 256GB NVMe drive that obviously has a case, a LOT of I/O and PCIe (M.2), comes with a power adapter, and outperforms an RPi5 in every possible way. Note that the RPi5 with 8GB of RAM will cost you 80€ + case + power adapter + cable + bullshit adapter + SD card + whatever other money grab - the Pi just isn’t a good deal.

    Either way, Pis have their use cases; however, in my opinion the Pi is an overhyped product that sits awkwardly in the middle of the market:

    • They tried to make the Arduino easy by adding an operating system and high-level programming languages such as Python. It never made much sense: why would you want GPIOs directly on a “computer”? Not reasonable at all. Nowadays we’re seeing a rise of ESP32 devices that have 30-40 GPIOs and Wi-Fi for $2 each. Cheap, easy to develop and deploy, and eating away at the Pi’s market.
    • Another typical use case for a Pi is a low-power server, but while that’s great in theory, it lacks the CPU performance required for the container-based absurdities people want to run, and the I/O sucks (a quick way to measure that is sketched after this list). USB was never a good way to connect storage, let alone the shared USB/network bus we had in the past. The new PCIe is questionable (look at the NanoPi M4v2 from 2018) and requires… more adapters;
    • Price-wise it doesn’t make much sense either, because a second-hand x86 machine will be 10x faster at the same price point… and way more stable, with more room for expansion.
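
    If you want to sanity-check the I/O claim on your own hardware, a rough sequential test already shows the gap between a USB-attached disk on a Pi and an NVMe drive in a MiniPC. A minimal sketch, assuming the disk shows up as /dev/sda (a hypothetical device name, adjust for your setup):

      # raw sequential read straight from the device (non-destructive)
      sudo hdparm -t /dev/sda
      # sequential write through the filesystem, bypassing the page cache
      dd if=/dev/zero of=/tmp/testfile bs=1M count=1024 oflag=direct && rm /tmp/testfile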

    “Now it’s all gone x86 and Proxmox”

    Proxmox isn’t a new thing; in fact, it’s a pile of crap and questionable open-source that people still run because they haven’t discovered LXC/LXD yet. Read more here: https://lemmy.world/comment/6507871. FYI, you can run LXD on your Pis and get both containers and virtual machines with it, the same way Proxmox people do on x86.
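
    A minimal sketch of that setup, assuming Debian 12 with LXD from the distro repos (on other setups it ships as a snap, and the project continues as Incus, so package and image names may differ):

      sudo apt install lxd                      # or incus, depending on your distro/repos
      sudo lxd init                             # accept the defaults for a single machine
      lxc launch images:debian/12 ct1           # "ct1" is a hypothetical container name
      lxc launch images:debian/12 vm1 --vm      # a full virtual machine, needs KVM on the host
      lxc list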

    The irony of this comment is that people will shit on me about replacing Proxmox with LXD in the same way they used to when I said that Pis were a money grab and x86 MiniPCs were way better.

    • akrot@lemmy.world · 10 months ago

      The main issue with mini/used PCs is power efficiency. It’s just a waste of wattage, and the performance per watt is very bad, especially at idle.

      • TCB13@lemmy.world · 10 months ago

        I would agree up to a point. If you get a 10th-gen CPU it is power efficient, and there are a lot of gamers and whatnot selling those. There are also a lot of MiniPCs that come with low-power “T” CPUs that are very decent at idle.

        • akrot@lemmy.world · 10 months ago

          But idle would still run at well over 15W. There’s a very good compilation Google Sheet of the most efficient x86 CPUs, but once you start factoring in HDDs and SSDs, it’s only natural to land higher (20-30W at least). That’s at least double an RPi.

    • jkrtn@lemmy.ml · 10 months ago

      Do you think the used server market is worth the cost? It looks like I could have a giant chunk of DDR3 for not so much.

      • TCB13@lemmy.world · 10 months ago

        I don’t (especially DDR3-era stuff), because old server hardware is way more expensive, won’t give you any particular advantage and, compared to new stuff, will use a LOT of power.

        Instead, use regular desktop/laptop machines, as they’ll probably be more than enough for a homelab. You can get a good 9th/10th-gen Intel CPU and motherboard that is perfect for running servers (very high performance) but that people don’t want because it isn’t good for playing the latest games. Modern hardware = less power consumption, cheaper, more performance.

        If you go really low end, let’s say an i5-6500, it will probably cost around 80€ second hand with RAM. If you’re interested, you can use https://www.cpubenchmark.net/compare/ to compare the server hardware you can get against modern hardware.

        Most DDR3-era server hardware comes with RAID controllers/cards and other things that nobody uses anymore; people have moved on to software RAID, be it BTRFS or ZFS, and you will want to do the same. Servers make a lot of noise - impractical for a home - and a CPU from that era will draw around 150-200W, while you can get a recent i5 with more performance that runs at around 50W.

        Another thing to consider: if you’re trying to build a NAS, get a basic motherboard with 4 SATA ports and add a PCIe card with 5 more SATA ports - it will be much cheaper than whatever server hardware. Use BTRFS as your filesystem, with its built-in RAID if needed. Now you may be thinking something like “I want a faster CPU in order to have fast SMB” - just don’t: your gigabit network will saturate before an i5-6500 or any mechanical drive does, and when that happens you’ll be at something like 10-20% CPU usage. Don’t waste your money.
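
        To make the BTRFS part concrete, here’s a minimal sketch of what that software RAID looks like, assuming two data disks at /dev/sdb and /dev/sdc (hypothetical device names - this wipes them):

          # mirror data and metadata across both disks (RAID1)
          sudo mkfs.btrfs -d raid1 -m raid1 /dev/sdb /dev/sdc
          sudo mkdir -p /srv/nas
          sudo mount /dev/sdb /srv/nas        # mounting any member mounts the whole array
          # later: check usage and scrub for bit rot
          sudo btrfs filesystem usage /srv/nas
          sudo btrfs scrub start /srv/nas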

        • jkrtn@lemmy.ml · 10 months ago

          Thank you, I really appreciate your advice. I was just struggling to install Proxmox on a new machine, and you made me take a step back. The kernel is messed up - do I really want this? Why am I jumping through hoops when Debian installs with zero issues? I’ll be trying the container software you mentioned instead.

          • 1371113@lemmy.world · 10 months ago

            I’ve been doing what the person you replied to is suggesting for around 10 years now. It works very well for a home user because parts etc. are readily available. Most hypervisors will run on x86/amd64 hardware without issue. Check out something other than Proxmox; LXC is one suggestion. If you’re going to stick with Debian, look into Samba (with BIND) for ease of sharing and cross-platform integration.
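
            If all you need is basic file sharing (leaving the BIND/AD side out of it), a plain Samba share on Debian is only a few lines. A rough sketch, assuming a share at /srv/share and an existing Unix user called alice (both hypothetical):

              sudo apt install samba
              # append a simple share definition to the default config
              printf '[share]\n   path = /srv/share\n   read only = no\n   browseable = yes\n' | sudo tee -a /etc/samba/smb.conf
              sudo smbpasswd -a alice           # give the Unix user a Samba password
              sudo systemctl restart smbd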

            Another reason not to get an old server is power, noise and thermals. They’re designed to live in an air-conditioned room; anyone who has worked in server rooms for any length of time will tell you to wear ear protection.

    • chunkystyles@sopuli.xyz · 10 months ago

      “people will shit on me about replacing Proxmox with LXD”

      From reading your comments I understand why. It’s in your delivery. You’re abrasive and you don’t explain why. You’re also telling people not to use something they know, to use something they don’t know, and not explaining how that would be beneficial. As far as I can see, you’ve only explained how LXD, when set up correctly, can do what Proxmox does.

      You’re essentially telling people to use something that is at best a side grade for reasons, and being salty about it.

      • TCB13@lemmy.world · 10 months ago

        Ahaha I don’t explain why 😂😂

        I wrote dozens of posts replying to every single question people had about LXD/Incus. I posted screenshots, explained how it works and what it does, described useful features and pointed out multiple issues with Proxmox. I can show you what roads you can take and why, but you must do the work yourself.

        The same applies to the MiniPC vs Raspberry Pi discussion: my price, performance and feature breakdowns proved countless times that for a large number of use cases a MiniPC is better. Unsurprisingly this is the first of those breakdowns to get upvotes, and do you know why? Because a well-known YouTuber in this space recently came out with a video saying the exact same things I’ve been saying, and now it has become “acceptable” to criticize the Raspberry Pi money grab.

        “to use something they don’t know, and not explaining how that would be beneficial … you’ve only explained how LXD, when set up correctly, can do what Proxmox does”

        Even if that were true, what would the issue be? Isn’t it obvious that a truly open-source solution that is available in Debian’s repos from a fresh install is better than a half-proprietary solution that asks you to buy a license at every turn? Use your common sense.

        Besides, my comments aren’t a marketing campaign - there’s no “LXD will make you rich today and solve all your family drama” as soon as you complete our three-step formula:

        1. apt install lxd
        2. lxd init
        3. lxc launch images:debian/12 debian-container

        The advantages of using LXD/Incus are in the details, not in some flashy, shiny feature. It’s about running a clean Debian system with a kernel that isn’t twisted and mangled to the point of conflicting with everything and failing to run stuff like OpenVPN properly; it’s about the license, the tools, not depending on a company, not having to wait 3x as long before your cluster is online. It’s about having a decent API for once, and so many other things.
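
        On the API point: LXD/Incus expose a plain REST API over a local unix socket, so you can script against them with nothing but curl. A quick sketch, assuming a non-snap LXD install (the snap keeps the socket under /var/snap/lxd/common/lxd/ instead):

          # server and API information
          curl -s --unix-socket /var/lib/lxd/unix.socket lxd/1.0 | jq .
          # list instances (containers and VMs)
          curl -s --unix-socket /var/lib/lxd/unix.socket lxd/1.0/instances | jq .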

        Most people say they don’t want to end up in the same situation as with the CentOS/RedHat licensing change, but then they proceed to replace CentOS with Ubuntu and keep using Proxmox - all questionable open-source that is just as likely to fuck you over as RedHat did.

        So eventually there will be a video from some YouTuber stating that LXD/Incus is much better than Proxmox, and people will flock to it without questioning anything. :)