Possibly one of my favourites to date. Absolutely love it.
While I agree about the conflict of interest, I would largely say the same thing even without one. However, I see intelligence as a modular and many-dimensional concept. If it scales as anticipated, it will still need to be organized into different forms of informational or computational flow for anything resembling an actively intelligent system.
On that note, the recent developments in active inference, like RxInfer, are astonishing given the current level of attention being paid. Seeing how LLMs are being treated, I'm almost glad it's not being absorbed into the hype-and-hate cycle.
Upvote for using censorship. I've seen worse things left uncensored in AI channels, and it's put me off my coffee before. I wonder if more detailed censored-content indicators would prevent some of the downvotes, but you have my thanks.
I’m talking about the general strides in cognitive computing and predictive processing.
https://youtu.be/A1Ghrd7NBtk?si=iaPVuRjtnVEA2mqw
Machine learning is still impressive; we can just frame the limitations better now.
For the note on scale and ecosystems, review recent work by Karl Friston or Michael Levin.
Perhaps instead we could just restructure our epistemically confabulated reality in a way that doesn't inevitably lead to unnecessary conflict between diverging models that haven't grown the priors needed to peacefully allow comprehension and the ability to exist simultaneously.
*breath*
We are finally coming to comprehend how our brains work, and how intelligent systems generally work at any scale, in any ecosystem. Subconsciously enacted social systems included.
We're seeing developments that make me extremely optimistic, even if everything else is currently on fire. We just need a few more years without self-focused turds blowing up the world.
AI or no AI, the solution needs to be social restructuring. People underestimate how much society can actively change, because the current system is a self-sustaining set of bubbles that have naturally grown resilient to perturbations.
The few people who actually care to solve the world’s problems are figuring out how our current systems inevitably fail, and how to avoid these outcomes.
However, the best bet for restructuring would be a distributed intelligent agent system. I could get into recent papers on confirmation bias and the confabulatory nature of thought at the personal, group, and societal levels.
Turns out we are too good at going with the flow, even when the structure we are standing on is built over highly entrenched vestigial confabulations that no longer help.
Words, concepts, and meanings change heavily depending on the model interpreting them. The more divergent, the more difficulty in bridging this communication gap.
A distributed intelligent system could not only enable a complete social restructuring with both autonomy and altruism guaranteed, but could provide an overarching connection between the different models at every scale, capable of properly interpreting the different views and conveying them more accurately than we could ever manage through model projection and the empathy barrier.
Yeah I’m going to have to leave this sub if this shit keeps ending up in my feed while I’m eating or in public.
I definitely agree that copyright is a good half century overdue for an update. Disney and its contemporaries should never have been allowed the dominance and extension of copyright that enables what feels like ownership of most global artistic output. They don't need AI; they have the money and interns to create whatever boardroom-adjusted art they need to continue their dominance.
Honestly, I think the faster AI happens, the more likely it is that we find a way out of the social and economic hierarchical structure that feels one step from anarcho-capitalistic aristocracy.
I just hope we can find the change without riots.
And you violate copyright when you think about copyrighted things alone at night.
I violate copyright when I draw Mario and don't sell it to anybody.
Or these are dumb stretches of what copyright is and how it should be applied.
The reasoning in this article is dumb and all over the place.
Seems like Gary Marcus being Gary Marcus.
I've already seen OpenAI calling out some of the bullshit specifically noted in this. That doesn't matter though; the damage is done, and people WANT to believe AI is terrible in every way.
Everyone is just dead set on climbing aboard the Gary Marcus unreasonable AI hate train no matter what.
God, I want some large projects by independent teams. It's impossible to do anything without a sponsor, but this might be what we need for smaller groups to create wonderful, complex works of art, instead of the cookie-cutter boardroom content machines that currently flood almost all available commercial artistic spaces.
Can't wait to see how the tech develops. It'd be fun to do VR recreations of my dreams through AI dictation.
Modelling, rigging, animation and the like are all coming. Imagine walking around a world being crafted and changed as you describe each element to be exactly what you are looking for.
I think it would capture more artistic intent than archaic tools, which put an unnecessary, artificial interface and challenge between you and your vision.
Especially if you’ve damaged your digits, or otherwise lack digital dexterity.
But change scares people. Especially ones who have put in the effort to conform to the current economic system, like corporate art creators.
That’s already the system outside of creating what rich people want. An entire team of artists creating boardroom directed art is much less art to me than a single creative using AI to bring their personal vision to life.
Hopefully individual artists can do more with these tools, and we can all hope for a world where artists can be supported to have the ability and freedom to create apart from the whims of the wealthy.
Starving artist is a term for a reason. Technology has never been the real problem.
Wouldn't put it past Bezos to be responsible for the megastructure.
People’s perspective is killing their sense of awe.
While our economic system is great at ensuring our experience of life doesn't improve, technology has gotten kind of crazy and awesome.
They could release an AGI next year, and unless it affected people's work-life balance, people would just immediately get used to it and think it's boring.
Will generative AI still kill our sense of awe when video game characters can naturally and accurately respond how you would expect?
I would never get bored of it. The majority of people would find it a boring novelty after a couple days because we are good at getting used to things and people don’t want to recognize the fact. We will have full fantastical worlds to explore and people will still find reason to be salty because it’s made with the help of evil computers.
I'm personally eager for a life where my recreational experiences aren't defined by companies like Disney. Smaller artists with these powerful tools will be able to create wonderful, unique experiences without the ball and chain of media oligarchs.
We have more control than we think of our sense of awe.
Maybe it’s time for a new perspective on art and industry.
Hey, shill here. I also shill for other artistic tools like cameras and CGI. Got a lot of hate back when CGI and digital painting were still controversial. Don't know if such "art" will ever truly be accepted by the art police; I guess AI art tools will join them.
Personally I think independent artists can accomplish much more with tools like these than they could just pretending to be a Disney art director with all the pretend Disney interns not actually helping their vision come to life.
I like when art isn’t monopolized by the ones with all the money. I also like when we allow open models that aren’t proprietary adobe subscriptions.
Also this thread is hilarious. OpenAI are literally asking to be regulated by more democratic external bodies. They’ve been making every effort one could expect on this front, but I guess that doesn’t matter?
It’s like when Altman went to the senate and said “regulate larger and more capable models like we will have, but don’t stifle and limit open source and smaller startups”
And everyone started bashing OpenAI for encouraging regulation of open source.
If I'm a brain-dead tech bro, at least I have decades of familiarity with art, copyright woes, and AI/ML. Back in school I was just called a nerd, but I guess that framing doesn't really work these days, so I need to be compared to frat bro venture capitalists every time I have an opinion that's not negative toward new technologies.
If I know my lore, that fish is going to get laid.
Tools like Segment Anything can now segment specific things in an image so you don't have to do detailed inpainting masks by hand.
The problem of analogy applies to more than one task; your point is moot.
For it to be intelligent enough to count as a "superintelligence," it would require systems for weighting vague, liminal concept spaces; rather, several systems that would prevent that style of issue.
Otherwise, it just couldn't function well enough to be as dangerous as you fear.
For a system to be advanced enough to be that dangerous, it would need the complex analogical thought that would prevent this type of misunderstanding. In other words, such a dumb superintelligence is unlikely.
However, human society has already enabled a paperclip maximizer in the form of profit-maximizing corporate environments.
Hey! Artist here. I love drawing. My hands go numb within minutes and they shake more every year. I appreciate having a tool and medium that allows great artistic control despite these facts.
Now, if you're really butthurt about the training data, you can use Adobe's proprietary model. I for one think it's good that peasants have an openly available tool that isn't owned by Adobe, even if it was trained less proprietarily.
This anger about it reminds me of DeviantArt artists getting mad at each other for "copying my style."
And the fact that copyright used to be about the general good and the promotion of creative works.
This world needs new artistic priorities. Pen and paper aren't losing their place, but new tech will let independent artists create entire movies, games, and holodeck-style experiences without the looming overhead of whatever art oligarch holds the funding.
I see intelligence as filling areas of concept space within an eco-niche in a way that proves functional for action within that space. I think we are discovering that "nature" has little commitment, and is just optimizing preparedness for expected levels of entropy within the functional eco-niche.
Most people haven't even started paying attention to distributed systems building shared enactive models, but they are already capable of things that should be considered groundbreaking, given the time and finances behind their development.
That being said, localized narrow generative models are just building large individual models of predictive processing that don't, by default, actively update information.
People who attack AI for just being prediction machines really need to look into predictive processing, or learn how much we organics just guess and confabulate on top of vestigial social priors.
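To make the predictive-processing point concrete, here's a minimal toy sketch (my own hypothetical example, not from any particular paper): a "brain" holds a prior belief, compares it against incoming observations, and nudges the belief by a fraction of the prediction error. Constantly minimizing prediction error like this is the core loop the framework describes; we organics are "prediction machines" too.

```python
def update_belief(belief, observation, learning_rate=0.1):
    """Nudge a belief toward an observation in proportion to the prediction error."""
    prediction_error = observation - belief
    return belief + learning_rate * prediction_error

# Start with a prior of 0.0 and repeatedly observe a signal of 1.0:
# the belief converges toward the observation, one error-weighted step at a time.
belief = 0.0
for obs in [1.0, 1.0, 1.0, 1.0]:
    belief = update_belief(belief, obs)
```

The `learning_rate` plays the role of precision weighting: how much you trust new evidence versus your prior. Real predictive-processing models stack many of these loops hierarchically, but the guess-then-correct structure is the same.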
But no, corpos are using it, so computer bad, human good, even though the main issue here is the humans who have unlimited power and are encouraged into bad actions by flawed social posturing systems and the conflation of wealth with competence.