• 0 Posts
  • 5 Comments
Joined 1 year ago
Cake day: June 13th, 2023


  • Oh, I’ve just been toying around with Stable Diffusion and some general ML tidbits. I was just thinking from a practical point of view. From what I’ve read, it sounds like the files are smaller at the same quality, require the same or lower processor load (maybe), are tuned for parallel I/O, can be encoded and decoded faster (with less of a performance gap between those two operations), and support progressive loading. I’m kinda waiting for the catch, but haven’t seen any major downsides, aside from less optimal performance for very low resolution images.
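
    If you want to sanity-check the size claim yourself, here’s a minimal sketch using the libjxl reference encoder. It assumes `cjxl` is installed and on your PATH and that Pillow is available; `source.png` and the quality value of 90 are just placeholders for whatever test image and setting you care about, not a rigorous comparison.

    ```python
    # Re-encode one source image as JPEG XL and JPEG at a similar nominal
    # quality, then compare the resulting file sizes.
    import subprocess
    from pathlib import Path

    from PIL import Image

    src = Path("source.png")  # placeholder input image

    # JPEG XL via the libjxl reference encoder (cjxl's -q is roughly
    # comparable to JPEG quality).
    subprocess.run(["cjxl", str(src), "out.jxl", "-q", "90"], check=True)

    # Plain JPEG via Pillow at the same nominal quality.
    Image.open(src).convert("RGB").save("out.jpg", quality=90)

    for f in ("out.jxl", "out.jpg"):
        print(f, Path(f).stat().st_size, "bytes")
    ```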

    I don’t know how they ingest the image data, but I would assume they’d be constantly building sets, rather than keeping lots of subsets, if only for the space savings of de-duplication.
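
    The crudest version of that de-duplication is byte-level hashing: keep one copy per unique digest when merging sets. A minimal sketch, assuming a hypothetical `incoming/` directory of `.jxl` files; note this only catches byte-identical files, not visually identical re-encodes.

    ```python
    # Keep one copy per unique SHA-256 digest while merging image sets.
    import hashlib
    from pathlib import Path

    seen: dict[str, Path] = {}
    for path in sorted(Path("incoming").rglob("*.jxl")):
        digest = hashlib.sha256(path.read_bytes()).hexdigest()
        if digest in seen:
            path.unlink()        # exact duplicate of a file we already kept
        else:
            seen[digest] = path  # first occurrence of this exact file
    print(f"{len(seen)} unique images kept")
    ```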

    (I kinda ramble below, but you’ll get the idea.)

    Mixing and matching the speed/efficiency and storage improvements could add up to a whole bunch of gains. I/O is always an annoyance in any large-set analysis. With JPEG XL, there’s less storage needed (duh), more images fit in RAM at once, transfers to and from disk are faster, fewer cycles are wasted waiting on I/O in general, there’s room to store more intermediate datasets and more descriptive models, it’s easier to archive the raw photo sets (which might be a big deal with all the legal issues popping up), etc. You want to cram a lot of data into memory, since the GPU will be performing lots of operations in parallel. Crossing the I/O bus must be one of the larger time sinks, and CPU load becomes a concern just from moving data around.
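
    To put rough numbers on the “more images in RAM” point, here’s a back-of-envelope sketch. Both average file sizes are made-up round numbers for illustration (assuming a hypothetical ~30% size reduction from re-encoding), not benchmarks.

    ```python
    # How many images fit in a fixed RAM budget at two average file sizes?
    # The averages below are hypothetical round numbers, not measurements.
    RAM_BUDGET_GB = 64
    AVG_JPEG_MB = 4.0  # hypothetical average JPEG size
    AVG_JXL_MB = 2.8   # hypothetical ~30% smaller JPEG XL re-encode

    for name, size_mb in [("JPEG", AVG_JPEG_MB), ("JPEG XL", AVG_JXL_MB)]:
        count = int(RAM_BUDGET_GB * 1024 / size_mb)
        print(f"{name}: ~{count:,} images in {RAM_BUDGET_GB} GB")
    ```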

    I also wonder if the support for progressive loading might be useful for more efficient, low resolution variants of high resolution models. Just store one set of high res images and load them in progressive steps to make smaller data sets. Like, say you have a bunch of 8k images, but you only want to make a website banner based on the model from those 8k res images. I wonder if it’s possible to use the the progressive loading support to halt reading in the images at 1k. Lower resolution = less model data = smaller datasets to store or transfer. Basically skipping the downsampling.
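
    A minimal sketch of that “stop early” idea, using the djxl reference decoder’s downsampled-decode option. This assumes djxl from libjxl is installed and that your build exposes `--downsampling` (worth confirming with `djxl --help`); `big_image.jxl` is a placeholder.

    ```python
    # Decode a JPEG XL file at 1/8 scale instead of full resolution,
    # which is close in spirit to halting a progressive read early.
    import subprocess

    # NOTE: the --downsampling flag is an assumption about your djxl
    # build; check `djxl --help` before relying on it.
    subprocess.run(
        ["djxl", "big_image.jxl", "small.png", "--downsampling=8"],
        check=True,
    )
    ```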

    Any time I see a big feature jump, like better file size, I assume the trade off in another feature negates at least half the benefit. It’s pretty rare, from what I’ve seen, to have improvements on all fronts.



  • It might be worth noting that the platform is stable, but still growing. Expect little quirks; we’re dealing with a big influx of new users.

    For example, I joined during the first big wave of signups and the servers were having trouble keeping up with the sudden spike in activity (10 to 1000x+ new posts/users/instances). I would sometimes see federated content, sometimes not. After 12 hours and a massive effort by the devs, everything became MUCH more stable.

    There will be bugs, but they are actively being squashed at breakneck speed.

    For example, one that I encounter regularly: if I leave a thread open in a background tab too long (on Firefox), it eventually stops syncing with the server. When I finally get to that tab, the data is stale, and attempting to interact (click arrows, reply, subscribe, etc.) sends me to an error page. The fix? Refresh any page that’s been open for more than 30 minutes. It’s a minor bug that will eventually be fixed, so give it time.


    I also wanted to throw some advice out there, in case it’s useful…

    If you’re ever confused, there are plenty of support communities/magazines. First, check whether others have posted about the same problem. If they haven’t, feel free to ask. The NoStupidQuestions community hosts a ton of simple Fediverse-related questions posted by users, and it has some of the highest engagement on the platform. I know a reluctance to post may have been ground into you by Reddit, but (a) this isn’t Reddit, and (b) we’re all new here.

    There is a slight learning curve, so noodle around a bit to get a feel for this new Reddit-esque multiverse. Read a few FAQs, skim the support communities, follow a few rabbit holes.

    Here’s what I suspect is a semi-normal new user experience (because it was mine :) ):

    • To start, you’ll want to register an account, so you do. You’ll click a few stories, try to comment, and find you’re not logged in and can’t log in. You’ll notice you’re not on the original server. Do you have to register a million accounts? That makes no sense! The answer is no.

    • Next, you’ll want to understand why. That post you clicked took you to another instance (think of them as servers). So, how do you post a comment on another instance? Ah, from your home instance. So, did it matter where I registered? Yes and no, but mostly no.

      • Keep going down the account rabbit hole and you’ll read about the pros and cons of running your own instance, how federation/defederation works, and other instance-related topics.
      • Or, hop back out and proceed to comment on the post you read. Wait. My comment has no votes. The path forks again.
        • Is there something like Reddit’s karma system? Down the voting/rep rabbit hole!
        • Is it considered bad form to vote for my own comment/post? (There’s no consensus right now, so don’t worry.) Down the Lemmy/Kbin etiquette rabbit hole!

    You’ll eventually circle back to the forks in the path you didn’t take. Follow whichever path suits you best and expand from there.


  • I’m generally a Windows user, but on the verge of doing a trial run of Fedora Silverblue (just need to find the time). It sounds like a great solution to my… complicated… history with Linux.

    I’ve installed Linux dozens of times going back to the ’90s (LinuxPPC, anyone? Yellow Dog?), and I keep returning to Windows because I tweak everything until it breaks. Then I have no idea how I got to that point and no time to troubleshoot. Being able to easily get back to a stable system that isn’t a fresh install sounds great.