• 546 Posts
  • 1.66K Comments
Joined 2 years ago
Cake day: June 15th, 2023

  • That’s why we should be conscientious about directing communities to an appropriate instance. If they want to post about political violence, there’s a place for that.

    If they are LGBT, particularly trans, then blahaj, for example, is especially well suited.

    Just because I happen to be on .world doesn’t mean I necessarily think it’s the best place for every community. I would have to be pretty stupid to come to the Fediverse and oppose federation.


  • That’s exactly right. Different instances will be more sensitive to different issues, but hopefully that means there’s a place for everything.

    There are instances where posts about guillotines will be removed, and there are instances where that’s permitted but pictures of Winnie the Pooh might be removed.

    If people want to talk about anti-authoritarian violence, there is lemmygrad, etc.

    I’m more concerned with the fact that a lot of reddit LGBT and particularly trans communities were banned. That could never happen on blahaj.






  • I live in a hot climate, so it’s really the expense of air conditioning.

    Small adjustments to the temperature based on whether or not we’re home, pre-cooling versus cooling during the heat of the day, etc. can make a big difference on the bill.

    I’ve seen some scenarios where people were able to save hundreds of dollars a year just by adjusting the timing of systems. The price of electricity can go up and down during the day.

    Maybe those cases are outliers and it’s actually not worthwhile, but it seems compelling. If I can put a system in place for under $100 that will be at least as good as what I have, and possibly a significant improvement, I’m interested in trying it.
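    The scheduling idea above can be sketched in a few lines. This is a hypothetical illustration, not any real thermostat’s API: the hour ranges, setpoints, and the assumption that 2pm–8pm is the expensive peak are all made up for the example.

```python
# Illustrative price-aware thermostat schedule: pre-cool on cheaper
# off-peak electricity so the AC can coast through the afternoon peak.
# Hours and temperatures are invented for the sketch.

PRECOOL_HOURS = range(11, 14)   # run harder just before the peak
PEAK_HOURS = range(14, 20)      # 2pm-8pm: assume pricier electricity

def setpoint(hour, home=True):
    """Return a target temperature (deg F) for the given hour of day."""
    if not home:
        return 82               # let the house drift while away
    if hour in PRECOOL_HOURS:
        return 72               # pre-cool while power is cheap
    if hour in PEAK_HOURS:
        return 78               # coast through peak pricing
    return 75                   # normal comfort setting

# One entry per hour of the day.
schedule = {h: setpoint(h) for h in range(24)}
```

    The savings come entirely from shifting when the compressor runs, not from how much total cooling you buy, which is why this only pays off under time-of-use pricing.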








  • Censorship and bias are two different issues.

    Censorship is a deliberate choice by whoever deploys the model. It comes from a realistic and demonstrated need to limit the misuse of the tool. Consider all the examples of people using early LLMs to generate plans for bombs, Nazi propaganda, revenge p*rn, etc. Of course, once you begin to draw that line, you have to debate where the line is, and that falls to the lawyers and publicity departments.

    Bias is trickier to deal with because it comes from the bias in the training data. I remember one example where a writer found that it was impossible to get the model to generate a black doctor treating a white patient. Imagine the racist chaos that ensued when they applied an LLM to criminal sentencing.

    I am curious about how bias might be deliberately introduced into a model. We have seen the brute-force method (eg “answer as though Donald Trump is the greatest American,” or whatever). However, if you could really control and fine-tune the values directly, then even an “open source” model could be steered. As far as I know, the values are completely dependent on the training data. But it should be theoretically possible to “nudge” those values if you could develop a way to tune it.
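    The “brute force” method described above is just prepending a steering instruction to every request. A minimal sketch, using the common chat-message list shape (the steering text is a placeholder, and no real model or endpoint is assumed):

```python
# Sketch of prompt-level steering: every user message gets wrapped with
# a fixed system instruction before it is sent to the model. This only
# steers at inference time; it does not touch the model's weights, which
# is why it is "brute force" compared to fine-tuning on curated data.

STEERING_PROMPT = "Answer as though Donald Trump is the greatest American."

def build_messages(user_text, steering=STEERING_PROMPT):
    """Return a chat-message list with the steering prompt prepended."""
    return [
        {"role": "system", "content": steering},
        {"role": "user", "content": user_text},
    ]
```

    Fine-tuning, by contrast, bakes the bias into the weights themselves, so it survives even when the user controls the prompt, which is the scenario the comment is speculating about for “open source” models.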