• 0 Posts
  • 46 Comments
Joined 1 year ago
Cake day: June 30th, 2023






  • The main problem with scientific publishing is that our threshold for statistical significance is far too lenient.

    If we allow the threshold to sit at a 1 percent chance that a study’s results were just random chance, it means that 1 percent of all publications at that level of certainty are going to mislead the public if the media reports on them. And with the volume of research published every day, that adds up to a LOT of misinformation (rough math below).

    It’s not even bad science, it’s bad reporting and widespread scientific illiteracy. But neither of those is going away.
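
    Rough numbers to make that concrete. This is only a back-of-the-envelope sketch in Python; the publication count is a made-up placeholder, and it assumes the worst case where every tested effect is actually nonexistent:

      # How many chance "findings" slip through at p < 0.01?
      # The numbers below are illustrative, not real publication statistics.
      alpha = 0.01                   # significance threshold: 1% chance of a fluke
      studies_per_year = 2_000_000   # hypothetical count of published studies

      # If every tested effect were actually null, alpha is the rate at which
      # pure noise still clears the significance bar.
      expected_flukes = alpha * studies_per_year
      print(f"Up to ~{expected_flukes:,.0f} chance 'findings' per year at p < {alpha}")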


  • Oh, it was nothing more than just showing off the technology, really. It wasn’t a committed bit.

    I cloned my voice then left a voicemail that said something like: “hey buddy it’s me. My car broke down and I’m at… Actually I don’t know where I’m at. I walked to the gas station and borrowed this guy’s phone. He said he’ll give me a ride into town if I can get him 50 bucks. Could you venmo it to him at @franks_diner? I’ll get you back as soon as I can find my phone. … By the way this is really me, definitely not a bot pretending to be me.”




  • I use YouTube for tutorials, education, and entertainment all the time. And YouTube Music is how I listen to all my music.

    I’ve been paying for YouTube Premium for my family since day one.

    Recently they took away my grandfathered-in pricing. It really costs me a ton of money.

    But I remember that I’m keeping ads off my screens, my parents’ screens, and my kids’ screens… And we all use YouTube Music all the time… So…

    Yeah, a lot of money, but honestly, probably the best subscription I have.

    I could never go back to ads.











  • You’re wandering into one of the great questions of our age: what is intelligence? I don’t have a great answer. All I know is that gpt-4 can REASON, and does so better than the average human.

    Is gpt-4 self-aware? Yes, to an extent. It knows what it is and can use that information in its reasoning. It knows it’s an LLM, but not which model.

    Can it make judgement calls? Yes. Better than the average human.

    Understand meaning? Absolutely. To a jaw-dropping extent.

    Accuracy and correctness… Depends on the type of question.

    What you need to understand is that gpt-4 isn’t a whole brain. Think of it as if we have managed to reproduce the language center of the brain. I believe this is the mechanism for higher reasoning in the human brain.

    But just as in humans with right-brain injuries, the language center is disconnected from reality at times.

    So, when you think of gpt-4 as the most important, most difficult-to-solve part of the brain, you start to understand that with some minimal supporting infrastructure, you now have something very similar to a complete brain.

    You can use vector databases to give it long-term memory, and any kind of data retrieval used to augment its prompts improves accuracy and reduces hallucinations almost entirely (there’s a rough sketch of this at the end of this comment).

    With my very mediocre programming skills, I managed to build a system that is curious, has a long-term memory, and can do a wide variety of tasks, enough to easily replace an entire customer service team, tech support team, sales team, and marketing team.

    That’s just ME, working with the gpt-4 that’s available to the public, with a bunch of guardrails on it. Today.

    Imagine a less-restricted system, with infrastructure built by an experienced enterprise coding team, and just one more generation of LLM improvement. That could wipe out half the white-collar workforce.

    If LLM improvement were only geometric, and not even exponential (as it clearly is), in 10 years these things will be smarter AND MORE CREATIVE than all humans.

    The truth is that we’re going to be there in 5 years.
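
    To make the long-term-memory point above concrete, here is a minimal sketch in Python of the retrieval-augmented prompting idea. It is not how any particular product does it: the embed() function is a crude placeholder for a real embedding model, and the in-memory list stands in for an actual vector database.

      import math

      # Toy retrieval-augmented prompting: store snippets as vectors, pull the
      # most similar ones back at question time, and prepend them to the prompt
      # so the model answers from stored facts instead of guessing.

      def embed(text: str, dims: int = 64) -> list[float]:
          # Placeholder embedding: a normalized bag-of-characters hash, only so the
          # example runs end to end. A real system would call an embedding model here.
          vec = [0.0] * dims
          for i, ch in enumerate(text.lower()):
              vec[(ord(ch) + i) % dims] += 1.0
          norm = math.sqrt(sum(x * x for x in vec)) or 1.0
          return [x / norm for x in vec]

      def cosine(a: list[float], b: list[float]) -> float:
          # Vectors are already unit length, so the dot product is the cosine similarity.
          return sum(x * y for x, y in zip(a, b))

      class ToyMemory:
          """In-memory stand-in for a vector database."""

          def __init__(self) -> None:
              self.items: list[tuple[list[float], str]] = []

          def add(self, text: str) -> None:
              self.items.append((embed(text), text))

          def search(self, query: str, k: int = 3) -> list[str]:
              q = embed(query)
              ranked = sorted(self.items, key=lambda item: cosine(q, item[0]), reverse=True)
              return [text for _, text in ranked[:k]]

      def build_prompt(memory: ToyMemory, question: str) -> str:
          # Retrieved snippets are prepended so the model answers from stored facts.
          context = "\n".join(f"- {snippet}" for snippet in memory.search(question))
          return (
              "Answer using only the notes below.\n"
              f"Notes:\n{context}\n\n"
              f"Question: {question}\nAnswer:"
          )

      if __name__ == "__main__":
          mem = ToyMemory()
          mem.add("Refunds are processed within 5 business days.")
          mem.add("Support hours are 9am-5pm Eastern, Monday through Friday.")
          mem.add("Order #1042 shipped on June 3rd via ground freight.")
          # The assembled prompt would then be sent to whatever LLM you use.
          print(build_prompt(mem, "When will my refund arrive?"))

    The point of the sketch is just the shape of the loop: everything the system learns goes into the store, and every prompt gets the most relevant pieces pulled back in, which is what keeps the model grounded instead of making things up.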