misk@sopuli.xyz to Technology@lemmy.world · English · 1 day ago
Apple study exposes deep cracks in LLMs’ “reasoning” capabilities (arstechnica.com)
94 comments · cross-posted to: apple_enthusiast@lemmy.world
Halcyon@discuss.tchncs.de · 10 hours ago
They are large LANGUAGE models. It’s no surprise that they can’t solve the mathematical problems in the study; they are trained for text production. We already knew they were no good at counting things.
Flocklesscrow@lemm.ee · 9 hours ago
“You see this fish? Well, it SUCKS at climbing trees.”