• 7 Posts
  • 24 Comments
Joined 1 year ago
Cake day: June 17th, 2023


  • I do not see that as phone usage. I’m doing an experiment to see how easy or difficult it is to revert the “I need to know the time, so I grab my phone” reflex back to “I need to know the time, so I look at my wrist”.

    I’m currently reading some books on how easy it is to manipulate people’s behaviour using ‘nudging’, to better understand the social engineering tricks used by hackers.

    One chapter in one of these books is about how social media uses tricks to manipulate our behaviour that resemble the tricks used by the gambling industry.

    One of the things I find intriguing is the size of smartphones today. If you look at it objectively, they are actually so large that most people would consider them annoying: you have to carry one in a bag, in a trouser pocket (but then you have to take it out when you want to sit down), or … you carry it in your hand. Have you noticed how many people have their smartphone in their hand when they walk around? But, of course, if you have it in your hand, it is very easy to quickly check your notifications, which reinforces the addiction.

    So, that’s the thing. People do not find it annoying.

    So … as an experiment, I am trying out how easy or difficult it is to break the habit.

    A small side note: when (or if) I manage to get my Garmin Vivosmart HR charged, it does report activity per week, and the number of steps and floors climbed on foot per day, even without a smartphone app. So that’s at least something :-)


  • One of the reasons I am looking for a new sports watch is that I am trying to reduce my smartphone use, and I noticed that I actually took out my smartphone just to check the time.

    I have an old Garmin Vivosmart HR, but I have a problem with the charging cable. Plus, I am not able to download the health stats with my Linux ‘daily driver’ laptop.

    Perhaps I should just get a cheap regular watch somewhere? 🤔


  • I don’t. I thought the emoji would have made that clear.

    I have been doing cybersecurity awareness training lately. We are starting to get over the first hurdle: making people recognise the signatures of a phishing message. But now we are starting on the second hurdle: making people understand that when they write a genuine post, they should avoid these phishing signatures themselves, in this case the “time pressure” argument.

    The problem is that the more genuine messages carry phishing signatures, the more difficult it becomes for people to distinguish a genuine post from phishing. There is also the risk that your genuine posts will get flagged as fake (although that is clearly not the case here :-) ).







  • Hi,

    Just to put things into perspective.

    Well, this example dates from some years ago, before LLMs and ChatGPT. But I agree that the principle is the same (and that was exactly my point).

    If you analyse this, the error the person made was that he assumed an Arduino to be like a PC … while it is not. An Arduino is a microcontroller. The difference is that a microcontroller has limited resources: pins, hardware interrupts, timers, … In addition, pins can be reconfigured for different functions (GPIO, UART, SPI, I2C, PWM, …). Also, a microcontroller of the Arduino class does not run an RTOS, so it is coded “bare metal”. And as there is no operating system that does resource management for you, you have to do it in the application.

    And that was the problem: although resource management is the responsibility of the application programmer, the Arduino environment has largely pushed that off to the libraries. The libraries configure the ports in the correct mode, set up timers and interrupts, configure I/O devices, … And in the end, this is where things went wrong. So, in essence, the programmer made assumptions based on the illusion created by the libraries: that writing an application on Arduino is just like using a library on a Unix box (which is not correct).
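
    A contrived sketch of what I mean (assuming an Arduino Uno / ATmega328P; the pin numbers are only for illustration): the Servo library silently claims Timer1, the timer that also generates the PWM on pins 9 and 10, so analogWrite() on those pins stops working without any error from either call.

    // Sketch for an Arduino Uno (ATmega328P). Attaching a servo makes the
    // Servo library take over Timer1, which also drives PWM on pins 9 and 10.
    #include <Servo.h>

    Servo myServo;
    const int PWM_PIN = 9;        // PWM on pin 9 comes from Timer1 on the Uno

    void setup() {
      pinMode(PWM_PIN, OUTPUT);
      analogWrite(PWM_PIN, 128);  // works: ~50% duty cycle on pin 9
      myServo.attach(6);          // Servo claims Timer1 for its own pulses ...
      analogWrite(PWM_PIN, 128);  // ... and PWM on pin 9 no longer works,
                                  // with no error or warning from either library
    }

    void loop() {}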

    That is why I have become careful about promoting tools that make things too easy, that are too good at hiding the complexity of things. Unless they are really dummy-proof after years and decades of use, you have to be very careful not to create assumptions that are simply not true.

    I am not saying LLMs are by definition bad. I am just careful about the assumptions they can create.


  • As a side note, this reminds me of a discussion I have every so often on “tools that make things too easy”.

    There is something I call “the Arduino effect”: people who write code for things based on example code they find left and right, and all kinds of libraries they mix together. It all works … for as long as it works. The problem is what happens when things do not work.

    I once helped out somebody who had an issue with a simple project. Him: “I don’t understand it. I have this sensor, and this library … and it works. Then I have this 433 MHz radio module with that library, and that also works. But when I use them together, it doesn’t work.” Me: “What have you tried?” Him: “Well, I looked at the libraries, they all seem fine. I reinstalled all the software. That wasn’t it either.” Me: “Could it be that these two modules use the same hardware interrupt or the same timer?” Him: “The what???”
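
    Something like this contrived Uno sketch gives a flavour of that kind of clash (no external libraries needed; the pin numbers are only for illustration): the built-in tone() function and PWM on pin 3 both need Timer2, so each works on its own but not together.

    // Contrived example for an Arduino Uno: tone() uses Timer2, the same timer
    // that generates PWM on pins 3 and 11, so the two interfere silently.
    const int BUZZER_PIN = 8;
    const int LED_PIN    = 3;     // PWM on pin 3 comes from Timer2 on the Uno

    void setup() {
      pinMode(LED_PIN, OUTPUT);
      analogWrite(LED_PIN, 64);   // dim LED: works on its own
      tone(BUZZER_PIN, 440);      // 440 Hz beep: also works on its own, but it
                                  // reconfigures Timer2 and disturbs the PWM on
                                  // pin 3 (no error, no warning)
    }

    void loop() {}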

    I see similar issues with other platforms. GNU Radio is another nice example: people mix blocks without knowing what exactly they do.

    As said, this is all very nice, as long as it works.

    I wonder whether code generated by LLMs will not result in the same kind of problems: people who do not have the background knowledge needed to troubleshoot issues once problems become more complex.

    (Just a thought / question … not an assumption.)


  • To be honest, I have no personal experience with LLMs (kind of boring, if you ask me). I do have two colleagues at work who have tried them. One, who has very basic coding skills (by his own admission), is very happy. The other, who has much more coding experience, says that his tests show they are only good at very basic problems. Once things become more complex, they fail very quickly.

    I just fear that, if LLMs can be used to produce sample code for any project, the result could be that open-source projects will spend even less time writing documentation (“the boring work”).



  • Wow! So many answers in such a short time. Thanks all! 👍 (I will not spam the channel by sending a thank-you to everyone, but this is really greatly appreciated.)

    Concerning ncurses: I did hear of it but never looked at it myself. What is not completely clear to me: I know you can use it for ‘low-level’ things, but does it also include ‘high-level’ concepts like windows, input fields and so on?

    The blog mentioned in one of the other posts only shows low-level things.
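
    (For reference, the ncurses man pages describe windows as part of the core library, with input forms living in the separate ‘form’ companion library; a minimal sketch of the window part, assuming a Linux box with the ncurses development headers and linking with -lncurses, would look something like this:)

    // Minimal ncurses window example (C-style; link with -lncurses).
    // Core ncurses provides WINDOW objects on top of the low-level calls;
    // input fields live in the companion 'form' library (-lform), not shown here.
    #include <ncurses.h>

    int main() {
        initscr();                 // enter curses mode
        cbreak();
        noecho();

        // A bordered sub-window: 10 rows, 40 columns, placed at row 2, column 4
        WINDOW *win = newwin(10, 40, 2, 4);
        box(win, 0, 0);
        mvwprintw(win, 1, 2, "Hello from a window");

        refresh();                 // refresh the standard screen first
        wrefresh(win);             // then the sub-window

        getch();                   // wait for a key press
        delwin(win);
        endwin();                  // leave curses mode
        return 0;
    }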








  • Well, the issue here is that even if your backup is physically in a different location (you can ask to host your S3 backup storage in a different datacenter than the VMs), if the servers on which the services (VMs or S3) are hosted are managed by the same technical entity, then a ransomware attack on that company can affect both services.

    So, get S3 storage for your backups from a completely different company?

    I just wonder to what degree this will impact the bandwidth usage of your VM if, say, you do a complete backup of your VM every day to a host that is considered “off-premises”.



  • First of all, thanks to all who replied! I didn’t think there would be that many people who self-host an SSO server, so I am happy to see these replies.

    As a side note, I have also been looking into making the setup more robust, i.e. adding redundancy. For a “lightly redundant” scenario (not fully automatic, but, say, where I have a second instance ready to run, so I just need to adapt the DNS record if needed), can I conclude from the “making a backup” question that I just need to run a second instance of Postgres and do streaming replication from the main instance to the backup instance?

    Or are there other caveats I haven’t thought about?



  • For me, the first goal is simply to understand the setup. I have now been able to create a setup with two frontend JVB instances and one backend. In the end, the architecture of a Jitsi server is quite nicely explained, and by delving a little into the startup scripts of the Docker-based Jitsi setup, you do get some idea of how things fit together.

    From a practical point of view, I think I’ll go for the basic setup (1 backend, 2 frontends) natively on two servers and, in case the backend server goes down, have a Dockerised backup setup ready to go if needed.

    Thanks!