

That’d be preem
Right, simply scaling won’t lead to AGI; there will need to be some algorithmic changes. But nobody in the world knows what those are yet. Is it a simple framework on top of LLMs, like the “Atom of Thoughts” paper? Or are transformers themselves a dead end? Or is multimodality the secret to AGI? I don’t think anyone really knows.
The hell, infrastructure strikes are how Ukraine is dealing the most damage to Russia. Was Ukraine even involved in this decision?
I have a solar cell charging station, but I suspect a lot of people don’t.
Curious what makes Realmz so replayable. BG3 has so many unique storylines and endings you’d be hard pressed to play them all. Not to mention character classes and subclasses.
Nitrogen gas should be painless when administered correctly, but the way they do it is fundamentally flawed.
That being said, execution is barbaric.
Fantastic. Does anyone know a collaborative Calc/Excel alternative?
Lab bacteria like this can almost never survive outside of very specific environmental parameters.
Because conservatives are against change on principle. It doesn’t matter what the change is; they just want things to stay the same as they were in their younger days.
Maybe fried food is a sex thing?
We are all Nicole on this blessed day.
Last post was almost exactly 2 years ago. The artist probably just got burnt out. It happens a lot.
The other guy is the main character of the artist’s comic. He’s just visiting the land of public domain.
How will they enforce this? And won’t the actual effect be to improve LLMs to the point where they can no longer be distinguished from human text? I suspect this will end up working like a slow-motion GAN, with the detectors effectively training the LLMs.
Yep, unlike the US, they aren’t shooting themselves in the foot. As they industrialize and invest in Africa, they are displacing the established world order.
I mean, I’m not really sure presidential pardons should even exist, but they do, and it’s a problem that Trump is trying to get around them to persecute the opposition party.
Agreed. If I were a little state, I’d be working on my nuclear program like mad right now.
I have been wondering if keeping his kid around is some sort of anti-assassination strategy.
So this is what I’m excited about in AI.
LLMs are statistical machines that simply output reasonable sequences of tokens. Useful! Not particularly smart, but it approximates language. I think it proves that a great majority of what humans do is learned sequences of behaviors.
But now we’re working on corralling that statistical language into workflows that improve the reasoning of the output. These are the first experiments into what makes thinking actually work. Is it iteratively refining a rough concept (like we’re seeing in this paper)? Or is it subdividing tasks into more easily solved problems (like the Atom of Thoughts paper)?
Once we find something that works, a real theory of intelligence seems much more likely to emerge. If that happens, I wouldn’t be surprised to see LLMs die out in favor of something far simpler and more efficient.
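To make the two workflow families above concrete, here’s a minimal sketch of the “iteratively refine a rough concept” pattern. `call_llm` is a hypothetical stand-in for a real model API (not any specific library); it’s stubbed out here so the loop itself is runnable.

```python
# Sketch of an iterative-refinement workflow over an LLM.
# `call_llm` is a hypothetical, stubbed model call -- swap in a real
# API client to use this for actual generation.

def call_llm(prompt: str) -> str:
    # Stub: pretend each refinement pass tightens the previous draft.
    if prompt.startswith("Refine"):
        return prompt.split("\n")[-1] + " (refined)"
    return "rough draft"

def refine_answer(question: str, passes: int = 3) -> str:
    """Ask for a rough answer, then feed it back to the model
    repeatedly, asking for an improved version each time."""
    draft = call_llm(f"Answer roughly: {question}")
    for _ in range(passes):
        draft = call_llm(f"Refine this answer to: {question}\n{draft}")
    return draft

print(refine_answer("What causes tides?"))
```

The task-subdivision approach (as in Atom of Thoughts) would instead split the question into smaller sub-questions, answer each, and recombine; the loop structure is the main difference between the two families.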
God bless the artistic human mind that sees a bunch of rocks and wire and thinks “I bet I can make this look like booba”
More seriously, it’s a really wonderful work