Tried my duck river crossing thing a few times recently; it usually solves it now, albeit with a bias toward making unnecessary trips about half the time.

Of course, anything new fails:

There’s 2 people and 1 boat on the left side of the river, and 3 boats on the right side of the river. Each boat can accommodate up to 6 people. How do they get all the boats to the left side of the river?

Did they seriously change something just to deal with my duck puzzle? How odd.

It’s Google, so it’s not out of the question that they do some analysis on the share links and referring pages, or even use their search engine to find discussions of a problem they’ve been asked. I need to test that theory and simultaneously feed some garbage to their plagiarism machine…

Sample of the new botshit:

L->R: 2P take B_L. L{}, R{2P, 4B}.
R->L: P1 takes B_R1. L{P1, B_R1}, R{P2, 3B}.
R->L: P2 takes B_R2. L{2P, B_R1, B_R2}, R{2B}.
L->R: P1 takes B_R1 back. L{P2, B_R2}, R{P1, 3B}.
R->L: P1 takes B_R3. L{P1, P2, B_R2, B_R3}, R{2B}.
L->R: P2 takes B_R2 back. L{P1, B_R3}, R{P2, 3B}.

And again and again, like a buggy attempt at brute forcing the problem.
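For reference, the puzzle has a short optimal solution. A quick BFS sketch (my own code, nothing to do with Gemini’s output; it assumes each boat needs at least one rower, holds up to 6 people, and any number of boats can cross together in one trip) finds it in 6 crossings:

```python
from collections import deque

PEOPLE, BOATS, CAPACITY = 2, 4, 6  # 2 people, 4 boats in total, 6 seats per boat

def moves(state):
    """Yield successor states. In one trip a group crosses together:
    k boats, each with at least one rower, at most CAPACITY people per boat."""
    p_left, b_left = state
    for direction in (+1, -1):  # +1: left -> right, -1: right -> left
        p_here = p_left if direction == +1 else PEOPLE - p_left
        b_here = b_left if direction == +1 else BOATS - b_left
        for k in range(1, b_here + 1):        # boats taken on this trip
            for m in range(k, p_here + 1):    # people crossing (>= 1 per boat)
                if m <= CAPACITY * k:
                    yield (p_left - direction * m, b_left - direction * k)

def solve(start=(2, 1)):
    """Breadth-first search for the fewest crossings that put all boats left."""
    prev = {start: None}
    queue = deque([start])
    while queue:
        state = queue.popleft()
        if state[1] == BOATS:  # all boats on the left bank
            path = [state]
            while prev[path[-1]] is not None:
                path.append(prev[path[-1]])
            return path[::-1]
        for nxt in moves(state):
            if nxt not in prev:
                prev[nxt] = state
                queue.append(nxt)
```

The state is just (people on the left, boats on the left), and any 6-crossing solution has to follow the obvious pattern: both people row over in one boat, then each rows a separate boat back, three times.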

  • YourNetworkIsHaunted@awful.systems · 8 points · 2 days ago

    The fact that it appears to be trying to create a symbolic representation of the problem is interesting, since that’s the closest I’ve ever seen this come to actually trying to model something rather than just spewing raw text. But the model itself looks nonsensical, especially for such a simple problem.

    Did you use any of that kind of notation in the prompt? Or did some poor squadron of task workers write out a few thousand examples of this notation for river crossing problems in an attempt to give it an internal structure?

    • diz@awful.systems (OP) · 7 points · edited · 2 days ago

      Did you use any of that kind of notation in the prompt? Or did some poor squadron of task workers write out a few thousand examples of this notation for river crossing problems in an attempt to give it an internal structure?

      I didn’t use any notation in the prompt, but gemini 2.5 pro seems to always represent the state of the problem after every step in some way. When asked why it does that, it says doing so is “very important”, so it may be that there’s some huge invisible prompt that says it’s very important to do this.

      It also mentioned N cannibals and M missionaries.

      My theory is that they wrote a bunch of little scripts that generate puzzles and solutions in that format. Since river crossing is one of the most popular puzzle types, it would be on the list (and N cannibals, M missionaries is easy to generate variants of), although their main focus would have been the puzzles in the benchmarks they are trying to cheat.
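      To illustrate how little it would take, here’s a sketch of what such a generator might look like (pure guesswork on my part about what they’d actually run): BFS-solve an (M missionaries, N cannibals, boat capacity) variant and emit the solution with the full state after every move, much like the notation above.

```python
from collections import deque
from itertools import product

def solve_mc(n_m, n_c, capacity):
    """BFS over (missionaries left, cannibals left, boat on left) states.
    Classic bank rule: cannibals may never outnumber missionaries on a bank."""
    def safe(m, c):
        return m == 0 or m >= c
    def ok(m, c):
        return (0 <= m <= n_m and 0 <= c <= n_c
                and safe(m, c) and safe(n_m - m, n_c - c))
    start = (n_m, n_c, True)
    prev = {start: None}
    queue = deque([start])
    while queue:
        state = queue.popleft()
        m, c, boat_left = state
        if (m, c) == (0, 0):  # everyone across
            path = [state]
            while prev[path[-1]] is not None:
                path.append(prev[path[-1]])
            return path[::-1]
        sign = -1 if boat_left else +1
        for dm, dc in product(range(capacity + 1), repeat=2):
            if not 1 <= dm + dc <= capacity:
                continue
            nxt = (m + sign * dm, c + sign * dc, not boat_left)
            if ok(nxt[0], nxt[1]) and nxt not in prev:
                prev[nxt] = state
                queue.append(nxt)
    return None  # this (n_m, n_c, capacity) variant has no solution

def render(path, n_m, n_c):
    """Emit the solution in a state-per-move notation like the log above."""
    out = []
    for (m1, c1, b1), (m2, c2, _) in zip(path, path[1:]):
        arrow = "L->R" if b1 else "R->L"
        out.append(f"{arrow}: {abs(m2-m1)}M {abs(c2-c1)}C cross. "
                   f"L{{{m2}M,{c2}C}}, R{{{n_m-m2}M,{n_c-c2}C}}")
    return out
```

      render(solve_mc(3, 3, 2), 3, 3) gives the classic 11-crossing solution; sweep the parameters and you have thousands of solved variants in that format.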

      edit: here’s one of the logs:

      https://pastebin.com/GKy8BTYD

      Basically it keeps trying to brute force the problem. It gets the first 2 moves correct, but in a stopped-clock manner: if there are 2 people and 1 boat, they both take the boat; if there are 2 people and 2 or more boats, each of them takes a boat.

      It keeps doing the same shit until eventually its state tracking fails, or its reading of the state fails, and then it outputs the failed attempt as a solution. Sometimes it deems the puzzle impossible:

      https://pastebin.com/Li9quqqd

      All tests were done with gemini 2.5 pro. I can post links if you need them, but links don’t include the “thinking” log, and I also suspect that if >N people come through a link they just take a look at it. Nobody really shares botshit unless it’s funny or stupid. A lot of people independently asking about the same problem would often just mean there’s a new homework question, so they can’t use that as a signal so easily.

        • YourNetworkIsHaunted@awful.systems · 1 point · 1 day ago

        I’m not familiar with the cannibal/missionary framing of the puzzle, but reading through it, the increasingly simplified notation reads almost like a comp sci textbook trying to find or outline an algorithm, except for an incredibly simple problem. We also see it once again explicitly acknowledge and then implicitly discard part of the problem: in this case it opens by acknowledging that each boat can carry up to 6 people and that each boat needs at least one person, but somehow gets stuck on the pattern that trips must alternate left and right and that each trip can only consist of one boat. It’s still pattern matching rather than reasoning, even if the matching gets more sophisticated.
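        Those two assumed constraints can actually be checked mechanically. A small search sketch (my own, not anything from the logs): with one boat per trip the puzzle is still solvable, in 9 crossings, but if trips must also strictly alternate direction it becomes genuinely impossible — which would go some way toward explaining why it sometimes declares the puzzle unsolvable.

```python
from collections import deque

PEOPLE, BOATS = 2, 4  # 2 people, 4 boats in total; goal: all boats on the left

def min_trips(strict_alternation):
    """Fewest crossings when each trip moves exactly one boat carrying
    1..PEOPLE people (boat capacity 6 never binds with only 2 people).
    Optionally force trips to alternate direction, as the model assumes."""
    start = (2, 1, +1)  # (people left, boats left, next direction if alternating)
    seen = {start}
    queue = deque([(start, 0)])
    while queue:
        (p_left, b_left, nxt_dir), depth = queue.popleft()
        if b_left == BOATS:
            return depth
        for direction in (+1, -1):  # +1: L->R, -1: R->L
            if strict_alternation and direction != nxt_dir:
                continue
            p_here = p_left if direction == +1 else PEOPLE - p_left
            b_here = b_left if direction == +1 else BOATS - b_left
            if b_here < 1 or p_here < 1:
                continue
            for m in range(1, p_here + 1):  # people aboard the single boat
                nxt = (p_left - direction * m, b_left - direction, -direction)
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, depth + 1))
    return None  # search space exhausted: impossible under these rules
```

        Under strict alternation, every pair of trips nets zero boats moved, so the left bank can never get past the one boat it started with.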

    • froztbyte@awful.systems · 10 points · 2 days ago

      I would be 0% surprised to learn that the modelfarmers “iterated” to “hmm, people are doing a lot of logic tests, let’s handle those better”, and that that’s what got us here

      (I have no evidence for this, but to me it seems a completely obvious/evident way for them to try keep the party going)

      • scruiser@awful.systems · 10 points · 2 days ago

        I have two theories on how the modelfarmers (I like that slang, it seems more fitting than “devs” or “programmers”) approached this…

        1. Like you theorized, they noticed people doing lots of logic tests, including twists on standard logic tests (which the LLMs were failing hard on), so they paid temp workers to write a bunch of twists on standard logic tests. And here we are, with it able to solve a twist on the duck puzzle, but not really better in general.

        2. There has been a lot of talk of synthetically generated data sets (since they’ve already robbed the internet of all the text they could). Simple logic puzzles can actually be procedurally generated, including the notation diz noted. The modelfarmers have over-generalized the “bitter lesson” (or maybe they’re just lazy/uninspired/looking for a simple solution they can sell to the VCs and business majors) and think that more data, deeper networks, more parameters, and more training will solve anything. So you get a buggy attempt at logic notation, learned from synthetically generated logic notation. (Which still doesn’t quite work, lol.)

        I don’t think either of these approaches will actually let LLMs solve logic puzzles in general; approach 1 will just solve individual cases, and approach 2 will make the hallucinations more convincing. For all their talk of reaching AGI, the approaches the modelfarmers are taking suggest a mindset of just reaching the next benchmark (to win more VC money, and maybe market share?), not of creating anything genuinely reliable, much less “AGI”. (I’m actually on the far optimistic end of sneerclub in that I think something useful might be invented that outlasts the coming AI winter… but if the modelfarmers just keep scaling and throwing more data at the problem, I doubt they’ll manage even that much.)

        • froztbyte@awful.systems · 6 points · edited · 2 days ago

          (excuse possible incoherence it’s 01:20 and I’m entirely in filmbrain (I’ll revise/edit/answer questions in morning))

          re (1): while that is a possibility, keep in mind that all this shit also operates/exists in a metrics-as-targets obsessed space. they might not present end user with hit% but the number exists, and I have no reason to believe that isn’t being tracked. combine that with social effects (public humiliation of their Shiny New Model, monitoring usage in public, etc etc) - that’s where my thesis of directed prompt-improvement is grounded

          re (2): while they could do something like that (synthetic derivation, etc), I dunno if that’d be happening for this. this is outright a guess on my part, a reach based on character, based on what I’ve seen from some of the field, but just……I don’t think they’d try that hard. I think they might try some limited form of it, but only so much as can be backed up in relatively little time and thought. “only as far as you can stretch 3 sprints” sort of effort

          (the other big input in my guesstimation re (2) is an awareness of the fucked interplay of incentives and glorycoders and startup culture)

          • scruiser@awful.systems · 6 points · 2 days ago

            I don’t think they’d try that hard.

            Wow lol… 2) was my guess at an easy/lazy/fast solution, and you think they are too lazy for even that? (I think a “proper” solution would involve substantial modifications/extensions to the standard LLM architecture, and I’ve seen academic papers with potential approaches, but none of the modelfarmers are actually seriously trying anything along those lines.)

            • froztbyte@awful.systems · 3 points · edited · 2 days ago

              lol, yeah

              “perverse incentives rule everything around me” is a big thing (observable) in “startup”[0] world because everything[1] is about speed/iteration. for example: why bother spending a few weeks working out a way to generate better training data for a niche kind of puzzle test if you can just code in “personality” and make the autoplag casinobot go “hah, I saw a puzzle almost like this just last week, let’s see if the same solution works…”

              i.e. when faced with a choice of hard vs quick, cynically I’ll guess the latter in almost all cases. there are occasional exceptions, but none of the promptfondlers and modelfarmers are in that set imo

              [0] - look, we may wish to argue about what having billions in vc funding categorizes a business as. but apparently “immature shitderpery” is still squarely “startup”

              [1] - in the bayfucker playbook. I disagree.

              • diz@awful.systems (OP) · 4 points · 1 day ago

                I think they worked specifically on cheating the benchmarks, though, as well as on popular puzzles like pre-existing variants of the river crossing. It is a very large, very popular puzzle category; if the river crossing puzzle is not on the list, I don’t know what would be.

                Keep in mind that they are true believers, too - they think that if they cram enough little pieces of logical reasoning, taken from puzzles, into the AI, they will get a robot god that actually starts coming up with new shit.

                I very much doubt there’s some general improvement in reasoning performance that results in these older puzzle variants getting solved while new ones, which aren’t particularly more difficult, fail.