• 0 Posts
  • 22 Comments
Joined 1 year ago
Cake day: July 28th, 2023

  • This. Nothing is more difficult than understanding someone else’s code and architecture, and even if you manage that, you’re now stuck with the choices somebody else made, and nobody wants that (we want to make our own terrible choices!).

    More than a final app, the best thing to publish as FOSS is the libraries extracted from it, to help other developers build their own products faster. That’s something others may want to maintain when we abandon it. And on top of that, it still helps to publish the app built with those libraries, to serve as a practical example of how to use them, of course.



  • Anafroj@sh.itjust.works to Selfhosted@lemmy.world · Cost-cutting tips? · 1 year ago

    That’s the same thing. :) If you reduce computing load, you reduce the need for costly hardware and for energy, thus you reduce the amount of money needed to build and run your setup. There’s a saying in (software) engineering: “reducing energy consumption and increasing performance require the same optimizations”. Make your code faster (by itself, not by beefing up the hardware) and it consumes less energy. Make your application simpler, and it will run faster and consume less energy. It’s not an absolute truth (it sometimes happens that you make your code faster and it consumes more energy), but it’s true most of the time.


  • Anafroj@sh.itjust.works to Selfhosted@lemmy.world · Cost-cutting tips? · 1 year ago

    Basically, yes. You can configure most cron programs to mail task output to you (it’s usually done by setting the MAILTO variable in the crontab, provided sendmail is available on your system).
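
    A minimal sketch of the relevant crontab line (the address is hypothetical; details vary between cron implementations):

    # at the top of the crontab: where cron mails each task's output
    MAILTO=me@example.com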

    I use that to do things like:

    0 9 11 10 * echo 'lunch with John Doe at 12:20'
    

    It sends me a mail, and I can see the upcoming events with crontab -l. If it’s not a recurring event, I then delete the rule.
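
    The whole maintenance loop is just the standard crontab commands:

    crontab -l    # review the upcoming reminders
    crontab -e    # delete a one-shot rule once the event has passed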


  • Anafroj@sh.itjust.works to Selfhosted@lemmy.world · Cost-cutting tips? · 1 year ago

    My favorite cost-cutting tip is to avoid big webapps running on Docker, and instead make do with small UNIX utilities: cron instead of a calendar, text files instead of a note-taking app, rsync instead of a Dropbox-like file hosting app (a sketch below), a simple static webserver for file sharing, etc. This allows me to run my server on a simple Raspberry Pi, with less than 500 MB of RAM used on average, and minimal energy consumption. So, total cost of the setup:

    • Raspberry Pi: 77€ x 2 = 154€ (I bought two to have a backup if the first one fails)
    • MicroSD 64 GB: 13€ x 2 = 26€ (main and backup)
    • average energy consumption: 0.41€ per month (2 kWh)

    With that, I run all services I need on a single machine, and I have a backup plan for recovery of both hardware and software.
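
    For instance, the rsync-instead-of-Dropbox part mentioned above can be a one-liner (host and paths are made up):

    # mirror a local directory to the server over ssh
    rsync -avz --delete ~/Documents/ pi@myserver:/srv/files/documents/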

    Getting used to a UNIX shell and to the UNIX philosophy can take some time, but it’s very rewarding: it makes everything simpler (thus more efficient).


  • I’m using a Pi 4 8 GB as my server, with a Pi 4 2 GB as backup in case the first one dies. It’s a very classic server, running postfix/courier-imap for mail, lighttpd for web, bind9 for DNS, ergo for IRC, sqlite3 for databases. I also use fail2ban as an IDS and cron to run tons of various tasks. All of that is hosted on Gentoo Linux.

    The one thing I don’t want to use is Docker. I love Docker for development, or for deploying the main app at work, but it makes managing updates a nightmare when running multiple services on my server (most of your containers probably contain vulnerable software due to a lack of system updates), and it eats resources needlessly. Then again, this is only possible because I avoid the big webapps that usually need it.



  • “Git hosting” would be more appropriate. Unless by “frontend” you mean specifically a web frontend, but that would be weird, because forges also provide the web backend part.

    Sourceforge was the biggest FOSS host in the 2000s, before GitHub (mainly because there was not much centralization to begin with). That train is long gone. :) Sure, the name and website Sourceforge still exist. Myspace, Digg and Yahoo do too. They are basically web ghosts, only an echo of what they once were.


  • Actually, I do use git bare repos for CD too. :) The ROOT/hooks/post-update executable can be anything, which lets you go wild: on my laptop, a push to a bare repo triggers a deploy to all the machines that need it (on local or remote networks), by pushing through ssh to other bare repos hosted there, each of which builds and installs locally through its own post-update script; all of that from a single git push and scripts at the proper paths. I don’t think any forge could do it more conveniently.
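
    For the curious, the receiving end of that chain can be as small as this post-update hook (path and branch name are hypothetical):

    #!/bin/sh
    # hooks/post-update in the bare repo on the target machine:
    # check the pushed branch out into the app directory and rebuild
    GIT_WORK_TREE=/srv/myapp git checkout -f main
    cd /srv/myapp && make install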

    For me the main interest of forges is to publish my code and get it discovered (before GitHub, getting people to find the repos hosted on your blog’s server was a nightmare). Even for collaboration, I could make do with emails. That being said, most people aren’t on top of their inbox, where mails from family are mixed with work mails and commercial spam in one giant pile of unread items, so it’s a good thing for them that we have those issue trackers.


  • Anafroj@sh.itjust.works to Fediverse@lemmy.world · Fediverse or Decentralisation? · 1 year ago

    I’m sorry to say that, but you’ve got your definition wrong: “decentralized” means “which has no center anymore”, and ActivityPub is decentralized. The usual criticism of the Fediverse by peer-to-peer networks such as Secure Scuttlebutt or Dat is not that ActivityPub is not decentralized, but that it will eventually “recentralize”, as client/server models tend to do, when one instance captures all the traffic (like Gmail did with SMTP; we already see signs of that with mastodon.social, but it is still very far from being a center). I think that maybe you’ve been exposed to that argument and misunderstood it?

    What you really mean is that ActivityPub is not p2p. You can criticize the fact that there is a client/server model behind it, which means that users don’t really own their data and can lose it if the server goes down; that’s a valid criticism.

    To which I would answer that it’s a tradeoff. :) ActivityPub is built on top of HTTP, the well-known protocol on which the web is built. This makes it dirt simple to build an ActivityPub app. The difference in adoption rates between SSB, Dat or IPFS and ActivityPub has nothing to do with luck: it’s HTTP and JSON, so it’s just simpler (and easier) to build on top of ActivityPub. Not only that, but it’s a W3C standard, which means, for people like me who have been burnt by building apps on top of the Beaker Browser only to see it abandoned, that we can trust there won’t be any rug pull. That matters.
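
    To make the “it’s just HTTP and JSON” point concrete, fetching an actor document is a single plain HTTP request (instance and user are made up):

    # ask any ActivityPub server for an actor document; the answer is JSON
    curl -s -H 'Accept: application/activity+json' \
         https://example.social/users/alice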

    And of course, you can also… run your own server (look into self-hosting if you’re interested in that, there’s a vibrant community here on Lemmy about that). If you run your server, then you own your data and the other servers become your peers. The idea that only others (presumably big companies) can have servers is a very centralized way of thinking.



  • They do maintain the simplicity of the line-oriented protocol, so I’m fine with that. :)

    That’s the strongest point of IRC, IMO, and why it’s kept so simple: every instruction is a plain text line, period. That makes it incredibly simple to build on top of. You don’t need to introduce a dependency on a project that will probably be abandoned in a few years, at which point you’d have to rewrite your codebase to use another dependency that will itself only last a few years. You just open a TCP connection, read lines from the socket and write lines to it; each line is its own instruction, structured in well-known fields, and that’s it. It’s so simple!
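
    A rough illustration using nothing but bash (the server name is made up, and a real client would also answer PINGs):

    # open a TCP connection and speak the protocol as plain text lines
    exec 3<>/dev/tcp/irc.example.net/6667
    printf 'NICK demo\r\nUSER demo 0 * :demo bot\r\n' >&3
    head -n 10 <&3        # each line read is one protocol instruction
    printf 'QUIT :bye\r\n' >&3
    exec 3>&-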

    As long as IRCv3 sticks to that, they have my blessing. :)



  • I’m going to come across as the crazy person here, but so be it: cron.

    Cron can easily be configured to send mails (via the MAILTO variable when using standard cron), provided sendmail is available on the system. If a command called by cron outputs anything, cron sends a mail with that output, which is useful by itself to warn you when a task goes wrong, but it also allows you to do things like this:

    0 9 28 9 * echo birthday John
    

    It’s really easy to get used to the syntax: it goes from more precise to less precise, so it’s “minute, hour, day, month, day of week”. The last field can usually be ignored (I must have used it twice in my life). So here, “0 9 28 9”, you read it backward and it gives: September 28th, 9:00. Piece of cake once you get a bit of practice. And cron is everywhere, so there’s no need to install anything. Although, since I run it on my laptop, I use fcron, which has a nice feature to run ASAP the tasks that should have run while the computer was shut down. This way, I never miss an alert.

    I use it for recurring notes (like birthdays, paperwork, house cleaning tasks, holidays, etc.), but also as a reminder of specific dates when I expect a delivery, have a meeting, etc. For the most important messages, I make it use a script that shows a desktop notification (with notify-send) and has a voice read the message (with mimic), as sketched below. And of course, I also use it to actually launch programs. :)
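
    A sketch of what such a script can look like (the name and notification title are mine; mimic is Mycroft’s text-to-speech engine):

    #!/usr/bin/env bash
    # notify: show a desktop notification, then read the message out loud
    msg="$*"
    notify-send 'Reminder' "$msg"
    mimic -t "$msg"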


  • With such widespread usage, there would probably not be “the fediverse” anymore, but a galaxy of clusters of interconnected instances. Spam would be a serious problem, so instances would switch to whitelisting the instances they want to federate with instead of just occasionally “defederating” from them. That would not only happen because of spam, by the way, but also because of political/cultural/ideological divergences. Maybe even because of laws.

    There would be a boom of innovations, made possible by openly accessible data and the fact that we would finally have a standard to build upon to create third party applications (which, from a developer perspective, was the promise of Web 2.0 and its APIs, but never truly materialized). You would see alternative frontends for everything, and applications that let you get new insights or use your data in new and smart ways.

    The big businesses would still be around, by the way. They would open their own instances, publish lots of ads and add cool features found nowhere else so that most people join their instances, which would quickly become the go-to instances for everyone, dwarfing all other instances. We would spend a lot of time evangelizing so that people join smaller instances instead, but our folks would answer that it’s less convenient, that they would have fewer easy-to-use features and that their account is already at BigCo anyway. Plus, to fight spam, terrorism, child pornography, nazis or whatever the scarecrow is then, they would severely limit the ability of small instances to interoperate with them, adding arbitrary technical barriers that most implementers won’t manage to clear. But we won’t care that much, because we will have our own alternative networks with more content on them than ever.


  • Obligatory check: are you sure you really need a forge? (That’s the name we use for tools like GitHub/GitLab/Gitea/etc.) You can do a lot with git alone: you can host repositories on your server, clone them through ssh (or even http with git http-backend, although that requires a bit of setup), push, pull, create branches, create notes, etc. And the best of it: you can even have CI/CD scripts as git hooks, with a post-receive hook to run your tests and deploy your app, or a pre-receive hook to reject the changes if something is not right (a sketch below).
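
    A minimal sketch of the rejection part, as a pre-receive hook (it assumes the project has a make test target; branch deletions would need extra handling):

    #!/bin/sh
    # hooks/pre-receive in the bare repo: refuse the push when tests fail
    while read old new ref; do
      tmpdir=$(mktemp -d)
      # export the pushed commit into a scratch work tree
      git archive "$new" | tar -x -C "$tmpdir"
      if ! (cd "$tmpdir" && make test); then
        rm -rf "$tmpdir"
        echo 'tests failed, push rejected' >&2
        exit 1
      fi
      rm -rf "$tmpdir"
    done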

    The only thing you have to do is create the repos on your server with the --bare flag, as in git init --bare. This creates a repo that is basically only what you usually find in the .git directory, and avoids the errors you get when pushing to the currently checked-out branch. It also keeps the repo clean, without build artifacts (provided you run your build tasks elsewhere, obviously), so all your sources are really easy to back up.
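
    Concretely, the whole setup takes two commands (host and path are made up):

    # on the server: create the bare repository
    git init --bare /srv/git/myproject.git
    # on your machine: clone it through ssh
    git clone pi@myserver:/srv/git/myproject.git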

    And to discuss issues and changes, there is always email. :) There is also this, a code review tool that just popped up on HN.

    And it works with GitHub! :) Just add a git remote pointing to GitHub, and you can push to it or fetch from it (a sketch after the script below). You can even set up hooks to sync with it. I publish my FOSS projects on both GitHub and GitLab, and the only thing I do to propagate changes is push to the local bare repos I use for easy backups; each has a post-update hook which propagates the change everywhere it needs to be (to GitHub, GitLab, and various machines on my local network, which then have their own post-update hooks to deploy the app/lib). The final touch: having this ~/git/ directory containing all my bare repos (only a few hundred MB, so they fit perfectly in my backups) allowed me to write a git_grep_all script to search all my repos at once (who needs elasticsearch anyway :D ):

    #!/usr/bin/env bash
    # grep recursively through bare repos: print each repo's path,
    # then the matching lines from its HEAD tree
    for dir in $(find . -name HEAD -exec dirname '{}' \;); do
      pushd "$dir" > /dev/null
      # only print anything when there is at least one match
      if git grep --quiet "$*" HEAD; then
        pwd
        git grep "$*" HEAD
        echo
      fi
      popd > /dev/null
    done
    

    (note that it uses pushd and popd, which are bash builtins, other shells should use other ways to change directories)
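
    And the GitHub syncing mentioned above is just the usual remote dance (the URL is hypothetical):

    # make GitHub one more remote of the bare repo, then sync at will
    git remote add github git@github.com:youruser/myproject.git
    git push github main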

    The reason why you may still want a forge is if you have non-technical people who should be able to work on issues/epics/documentation/etc.