Inverse concat isn’t too heavy if you implement it with logs and such. Certainly still heavier than integer add/mul (or sub/div in my case), but arithmetic is usually faster than memory allocation. However, predicting the performance hit from branching a priori is tricky on modern hardware, which implements sophisticated branch prediction and speculative execution. Furthermore, branching happens anyway between terms to select the right operation, though a misprediction there is likely less significant unless you are doing string manipulation.
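Concretely, the arithmetic route looks something like this in Clojure (a rough sketch with made-up names, assuming positive integer operands):

```clojure
;; Rough sketch: inverse concat with arithmetic only (assumes b >= 1).
(defn digits [x]
  ;; decimal digit count via log10; double precision is fine at puzzle scale
  (inc (long (Math/floor (Math/log10 x)))))

(defn un-concat [target b]
  ;; if target's decimal form ends with b, strip it and return the prefix;
  ;; otherwise nil, so the caller can prune this branch
  (let [shift (long (Math/pow 10 (digits b)))]
    (when (= (mod target shift) b)
      (quot target shift))))
```

E.g. (un-concat 12345 45) => 123, while (un-concat 12345 99) => nil.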
“Overall number of checks” is a bit ambiguous; if taken to mean the number of times I check against the target for early escape, plus the final check on success, the figure is 15% relative to the average 3^(n-1) / 2 checks required by brute force (n = number of terms in the equation, giving n-1 operators). That’s still almost a 7-fold decrease. If we instead look at the number of operator evaluations relative to the (n-1) * 3^(n-1) / 2 evaluations expected from an average brute-force search (3^(n-1) / 2 combinations, with n-1 operations per combination), the figure is only 7.0%. Either way, a significant amount of work is being avoided.
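To make that baseline concrete, here is a quick sketch of the brute-force expectation (assuming three operators and that, on average, half the combinations get tried before a hit):

```clojure
;; Expected brute-force work for an equation with n terms (n-1 operator slots).
(defn bf-checks [n] (/ (Math/pow 3 (dec n)) 2.0)) ; combinations tried on average
(defn bf-evals  [n] (* (dec n) (bf-checks n)))    ; operator evaluations on average

;; e.g. n = 4 terms: (bf-checks 4) => 13.5, (bf-evals 4) => 40.5
```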
That pruning is indeed the goal. As for benchmarking, I did not implement a brute-force solution; I might try it if I finish one of the next few days quickly (lol fat chance). I did bench math vs string cat but did not record numbers. IIRC it made no measurable difference in Clojure, where input parsing with my crappy parser-combinator lib dominated, but math was something like a factor of three faster than string cat in Chez Scheme.
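For reference, the math-vs-string comparison amounts to roughly the following (a reconstructed sketch, not the actual code I benched):

```clojure
;; string route: print both numbers, glue, parse back
(defn cat-str [a b]
  (parse-long (str a b)))

;; math route: shift a left by b's digit count, then add (assumes b >= 1)
(defn cat-math [a b]
  (+ (* a (long (Math/pow 10 (inc (long (Math/floor (Math/log10 b))))))) b))
```

The string version allocates a fresh string (and garbage) on every call, which is presumably where most of that factor of three goes.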