What the heck is this source. Excerpts:
recall the recent condensing of regulation and timescales in the biomedical industry, ‘Operation Warp Speed’
in chess or Go I expect that in the overwhelming majority of positions (even excluding the opening book & endgame databases), the best move is known and the engines aren’t going to change the choice no matter how long you run them
(That second one is him quoting some researcher, but this transparently absurd statement simply slides past unnoticed, and indeed is cited as support for something similar he’s saying, which also seems dubious to me.)
This month, the US was fairly quiet. In fact, only a few US-based AI labs announced new models (generally, I don’t count Llama finetunes, nor OpenAI’s minor model updates).
I barely pay attention to this stuff, and even I noticed the CodeLlama 70B release, which I would describe as significant. Simply stacking up the number of papers and saying one side is making more progress because they’re releasing more things (while specifically saying that he “doesn’t count” the two most prolific US sources of commonly-used models) is very weird. If you’re going to write an article about comparative output, you can look at benchmarks, try the models yourself, or see what results they claim in their papers.
There is a whole conversation to be had about AI research in China, and I’m 100% open to the idea that I and the rest of the West are missing something important, but it would have been nice to see this citation-less statement:
many of them significantly outperforming ChatGPT
backed up by something more than:
I’m not sure what China’s prolific output means
He also compares things to GPT-3.5 (in his mind, not by testing). Personally I dislike using 3.5 as a baseline, because consumers have had access to something clearly better for quite some time. GPT-4 is the model to beat, and the model most US researchers compare their work against when they publish.
Etc etc. In short:
BOOOOOOOOOOOOOOO
BOOOOOOOOOOOOOOOO
Ty