It’s funny: with the increase in the use of numerical models, so much math has been turned into computer code. Derivatives and integrals, too, end up defined by finite-difference formulas that serve as the basis for the notation. The point of the notation isn’t to explain; it’s to simplify writing and reading. I agree it can be a bit obtuse, but if you had to write out a for loop to solve a math equation every time it would take forever lol
Well this is where the computing perspective comes in.
Programming culture has generally learnt over time that the ability to read code is important, and that the speed/convenience of writing ought to be traded off, to some extent, for readability. Opinions will vary from programmer to programmer and across paradigms, languages, etc. But the idea is still there, even for a system whose purpose is to run on a computer and work.
In the case of mathematical notation, how much is maths read for the purposes of learning and understanding? Quite a lot, I’d say. So why not write it out as a for loop for a text/book/paper that is going to be read by many people, potentially many times?!
If mathematicians etc. need a quick shorthand, I think human history has shown that shorthands are easily invented when needed and that we ought not worry about such a thing … it will come when needed.
Actually, programs are much less readable than the corresponding math representation, even in a simple example like a for loop. Code is known to quickly add cognitive complexity, while mathematical language manages to keep complexity understandable.
Have you tried reading how a matrix-matrix multiplication is implemented with for loops? Compare it with the mathematical representation to see what I mean.
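To make the comparison concrete, here is a sketch (in Python, chosen just for illustration) of the naive triple-loop implementation. Mathematically the whole thing is one line, C_ij = sum over k of A_ik * B_kj; in code it becomes three nested loops and an accumulator:

```python
# Naive matrix-matrix multiplication, written out with explicit for loops.
# The math says it all in one line: C[i][j] = sum_k A[i][k] * B[k][j].

def matmul(A, B):
    n = len(A)        # rows of A
    m = len(B[0])     # columns of B
    inner = len(B)    # shared dimension
    C = [[0.0] * m for _ in range(n)]
    for i in range(n):
        for j in range(m):
            for k in range(inner):
                C[i][j] += A[i][k] * B[k][j]
    return C

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(matmul(A, B))  # [[19.0, 22.0], [43.0, 50.0]]
```

Even in this tiny case, the reader has to mentally track three loop indices and an in-place accumulation to recover what the sigma notation states directly.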
The success of Fortran, Mathematica, R, NumPy, pandas, and even functional programming comes from their being built to bring programming closer to the simplicity of math.
Well, I think there’s a danger here of conflating abstraction with mathematical notation. Code, whether Fortran, C, or NumPy, is capable of abstraction just as mathematics is. Abstraction can help bring complexity under control. But what happens when you need to understand that complexity because you haven’t learnt it yet?
Now, sure, writing a program that will actually work and perform well adds an extra cognitive load. But I’m talking more about procedural pseudocode written for the purpose of explaining to those who don’t already understand.
Math is the language developed exactly for that: to be an unambiguous, standard way to represent extremely complex, abstract concepts.
In the example above, both the summation and the for loop are simply
a_1 + a_2 + ... + a_n
Math is the language to explain; programming languages are for implementing it in a way that can be executed by computers. In a real-world scenario it’s more often
sum(x)
or
x.sum()
as a for loop is less readable (and often unoptimized).
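As a sketch of the point (Python here, since NumPy and pandas came up earlier in the thread), all three forms compute the same sum, but only one of them looks like the math:

```python
x = [1, 2, 3, 4, 5]

# For-loop version: explicit index bookkeeping and a mutable accumulator.
total = 0
for a in x:
    total += a

# Built-in version, which reads like the sigma notation.
assert total == sum(x) == 15

# In NumPy/pandas the method form x.sum() is the idiom, e.g.
# np.array(x).sum() -- left as a comment to keep this sketch dependency-free.
```

The loop forces the reader to simulate state changes in their head; `sum(x)` states the intent directly, which is the thread's whole argument in miniature.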
If someone doesn’t know math, they can do the same as those who don’t know programming: learn it.
The learning barrier for math is actually lower than for programming.
Using for loops instead of sigma notation would be almost universally awful for readability.