I thought about it for 15 minutes, but couldn’t think of any mathematical tricks. I thought of lots of minor tricks, like comparing the running value to the target and not applying any more operations once it’s already over (sketched after the code below), things that would cut 10%–20% here and there, but nothing that fundamentally changes the big-O running time.
For reference, here’s my solution for part 2 in Smalltalk. I just generated every possible permutation (with repetition) of the operators and tested each one. Part 1 is similar; mainly I just used bit magic to avoid generating permutations.
(Even if you haven’t used it, Smalltalk is fairly readable: everything reads left to right, except inside parens.)
day7p2: in
    | input |
    "parse each line into (target, term1, term2, ...) as integers"
    input := in lines collect: [ :l |
        (l splitOn: '\:|\s' asRegex) reject: #isEmpty thenCollect: #asInteger ].
    "keep every line that some operator sequence can satisfy, then sum the targets"
    ^ (input select: [ :line |
        (#(1 2 3) permutationsWithRepetitionsOfSize: line size - 2)
            anySatisfy: [ :ops | (self d7addmulcat: line ops: ops) = (line at: 1) ] ])
        sum: #first.
d7addmulcat: nums ops: ops
    | final |
    "nums is (target, term1, term2, ...); start from the first term"
    final := nums at: 2.
    "apply each operator left to right: 1 = multiply, 2 = add, 3 = concatenate"
    ops withIndexDo: [ :op :i |
        op = 1 ifTrue: [ final := final * (nums at: i + 2) ].
        op = 2 ifTrue: [ final := final + (nums at: i + 2) ].
        op = 3 ifTrue: [ final := (final asString , (nums at: i + 2) asString) asInteger ]
    ].
    ^ final
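The early-escape trick I mentioned would just mean checking the running value against the target at each step. Roughly like this (untested sketch; the selector name is made up, and it assumes every term is at least 1 so the value can only grow):

d7addmulcatPruned: nums ops: ops
    | target final |
    target := nums at: 1.
    final := nums at: 2.
    ops withIndexDo: [ :op :i |
        "once we overshoot the target we can never get back down, so give up"
        final > target ifTrue: [ ^ final ].
        op = 1 ifTrue: [ final := final * (nums at: i + 2) ].
        op = 2 ifTrue: [ final := final + (nums at: i + 2) ].
        op = 3 ifTrue: [ final := (final asString , (nums at: i + 2) asString) asInteger ]
    ].
    ^ final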
How about the overall number of checks you did? String cat is heavy, but addition and multiplication are absurdly fast, probably faster than the branches needed for early escape.
Inverse concat isn’t too heavy if you implement it with logs and such. It’s certainly still heavier than integer add/mul (or sub/div in my case), but arithmetic is usually faster than memory allocation. However, predicting the performance hit from branching a priori is tricky on modern hardware, which has sophisticated branch prediction and speculative execution. And branching happens anyway between terms to select the right operation, though a misprediction there is likely less significant unless you are doing string manipulation.
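Concretely, by “inverse concat with logs” I mean something along these lines (written in Smalltalk to match the code above; the selector name is invented for illustration): count the last term’s digits with a log, check that the target ends in exactly those digits, and strip them with integer division.

d7unconcat: target last: last
    | shift |
    "shift = 10 raised to the number of decimal digits in last"
    shift := 10 raisedTo: (last floorLog: 10) + 1.
    "target can only have come from a concatenation if its low digits equal last"
    ^ (target \\ shift) = last
        ifTrue: [ target // shift ]
        ifFalse: [ nil ]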
“Overall number of checks” is a bit ambiguous. If it means the number of times I check against the target for early escape, plus the final check on success, the figure is 15% of the average 3^(n-1) / 2 checks required by brute force (n = number of terms in the equation, giving n-1 operators); that’s still almost a 7-fold decrease. If we instead look at the number of operator evaluations relative to the (n-1) * 3^(n-1) / 2 expected from an average brute-force search (3^(n-1) / 2 combinations, with n-1 operations per combination), the figure is only 7.0%. Either way, a significant amount of work is not being done.
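To put rough numbers on that (taking an equation with, say, 12 terms, i.e. 11 operators, purely as an illustration): brute force averages 3^11 / 2 ≈ 88,600 combinations and 11 * 3^11 / 2 ≈ 974,000 operator evaluations, so 15% and 7.0% of those come to roughly 13,000 target checks and 68,000 operator evaluations.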