The big claim is that R1 was trained on far less computing power than OpenAI’s models at a fraction of the cost.
And people believe this … why? I mean, shouldn’t the default assumption about anything anyone in AI says be that it’s a lie?
Should be, but it isn’t.
A) Putting on my conspiracy theory hat… OpenAI has been bleeding for most of a year now, with execs hitting the door running and taking staff with them. It’s not at all implausible that somebody lower on the totem pole could have been convinced to leak some reinforcement training weights to help DeepSeek along.
B) Putting on my best LessWronger hat (random brown stains, full of holes)… I estimate no less than a 25% chance that by the end of this week, Sammy-boy will be demanding an Oval Office meeting, banging the table and screaming about “theft!” and “hacking!!”
wrong thread :(
This but it’s American vs Chinese slop
This shows the US is falling behind China, so you gotta give OpenAI more money!
Fear of a “bullshit gap”, I guess.
Oh, and: simply perfect choice of header image on that article.
Kind of reminds me of the government funding Star Wars programs that never produced anything but were credited with spending the Soviet Union into its grave because it couldn’t keep up. But I don’t think it’s going to work the same this time…
i understand that the thing that worked was mostly space-based surveillance
Altman: Mr. President, we must not allow a bullshit gap!
Musk: I have a plan… Mein Führer, I can walk!