Former team members have either resigned or been absorbed into other research groups.
Nothing to see here. Move along. OpenAI is the future. Move along.
Tethics
It was a blah-blah team to begin with.
Good. It was a useless department that wrote sci-fi and fascism for big tech. They were only a plague advocating against open source and freedom. Their “safety” was not in the interest of the people, but of the corporation. We’re also so incredibly far off from AGI that it’s just roleplay to pretend it’s relevant.
- Misalignment is a huge problem in any black-box system, not just in AGIs.
- What would it look like for us to be close to AGI? I have doubts that we’re close, but it seems at least plausible.
Any sort of intelligence. LLMs are not intelligent, and we haven’t created intelligence yet.
Should open source AI be condemned and possibly outright banned when (if) these big tech companies achieve their alignment? Or will it just be a way to hinder open source and ban it, since the big players can claim you can hack the safety out of an open model, and therefore ban the competition?
I can’t believe I’m aligned with Meta here, but Yann LeCun is on the right side of history releasing these models for free. Giving everyone the ability to be competitive with LLMs is a much better outcome than only someone like Sam Altman having the keys.
We’re also so incredibly far off from AGI that it’s just roleplay to pretend like it’s relevant.
Oh, you know that for certain, do you? Well, that’s reassuring; please share your evidence.
Are we sure AI isn’t already as intelligent as some humans? The bar isn’t really very high, is it?
I suggest you look into how machine learning and LLMs work. “Please share your evidence”? This isn’t an internet debate, lol. You can choose to be informed about LLMs or not; I’m not your teacher.
Yeah, but you’re the one who made the claim. I’ve heard plenty of counterclaims from people in the industry saying the opposite, so who am I going to believe: some random on the internet, or people who are actually in the industry?