I have a feeling that our arguments are not that different (though not identical), just phrased in very different words:

> This works great because in math your abstractions don't change.
This is just a different formulation of my point about the "volatility" of many of the requirements that users/customers impose on software.
> In contrast in computer programming, the goal of abstraction is largely isolation. You want to be able to change something in the abstraction, and it not affect the system very much.
Here my opinion differs: isolation is at best one aspect of abstraction (I would even claim the two concepts are only tangentially related). In my view, better isolation is rather a (very useful) side effect of some abstractions that are commonly used in software development. On the other hand, I don't think it is hard to come up with abstractions for software development that are very useful but don't lead to better isolation.
The central purpose of abstraction in computer programs is to make it easier to reason about the code, and to avoid having to write "related" code multiple times. This is similar to mathematics: you want to prove one general theorem (e.g. about groups) instead of having to prove one theorem about S_n, one theorem about Z_n, etc.
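To make this concrete, here is a minimal C++ sketch of the "one general thing instead of many related things" point (my own illustration; the names are made up and nothing here comes from the discussion above): a single generic fold replaces a separate sum-of-ints and a concatenation-of-strings function, much like one theorem about groups replaces one theorem per concrete group.

```cpp
#include <iostream>
#include <string>
#include <vector>

// One generic fold instead of a separate sumInts(), concatStrings(), ...
// (purely illustrative; not code from the discussion above)
template <typename T, typename Op>
T fold(const std::vector<T>& xs, T identity, Op combine) {
    T acc = identity;
    for (const T& x : xs) acc = combine(acc, x);
    return acc;
}

int main() {
    std::vector<int> ints{1, 2, 3};
    std::vector<std::string> words{"foo", "bar", "baz"};

    // The same abstraction is reused for two "related" cases.
    std::cout << fold(ints, 0, [](int a, int b) { return a + b; }) << "\n";
    std::cout << fold(words, std::string{},
                      [](const std::string& a, const std::string& b) { return a + b; })
              << "\n";
}
```

Note that this buys reasoning and deduplication, but hardly any isolation: every caller still depends on the concrete element types and on fold's exact semantics. That is precisely the distinction I am drawing above.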
You actually touch on the aspect of reasoning about the code yourself:
> Ideally you should be able to understand what the abstraction does by only looking at the abstraction's code and not the whole system.
In this sense, using more abstractions is an optimization for the following goals:
- you want to make it easier to reason about the code abstractly
- you want to avoid having to duplicate code (i.e. save money, since fewer lines have to be written)
But this is not a panacea:
- If the abstraction turns out to be bad, you either have to re-engineer a lot, or you end up with a maintenance nightmare (my "volatility of customer requirements" argument). Indeed, I claim that the question "do we really use the best possible abstractions in our code for the problem we want to solve?" is nearly always neglected in software projects, because the answer is nearly always very inconvenient, requiring lots of re-engineering of the code.
- low-level optimizations become harder, so making the code really fast gets much more complicated
- since abstractions are more "abstract", you might (depending on the abstraction) need "smarter" programmers, who can be more expensive. As an example, consider some of the complicated metaprogramming libraries in Boost (C++): in the hands of really good programmers such abstractions can become "magic", but weaker programmers will likely be overwhelmed by them (see the sketch after this list).
- fighting about the "right" abstraction can become very political (for low-level code there is usually less of a fight, because there "whatever is more performant is typically right").
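To illustrate the point about Boost-style metaprogramming, here is a small sketch of the kind of type-level programming such libraries are built around (plain standard C++17 that I wrote for illustration, not actual Boost code): fluent practitioners read this as "filter a list of types"; many others find it opaque.

```cpp
#include <type_traits>

// A tiny type-level "filter" -- the flavor of metaprogramming that
// Boost.MPL / Boost.Hana offer in far more polished form.
template <typename... Ts>
struct type_list {};

// Concatenate two type_lists.
template <typename A, typename B>
struct concat;

template <typename... As, typename... Bs>
struct concat<type_list<As...>, type_list<Bs...>> {
    using type = type_list<As..., Bs...>;
};

// Keep only the types for which Pred<T>::value is true.
template <template <typename> class Pred, typename List>
struct filter;

template <template <typename> class Pred>
struct filter<Pred, type_list<>> {
    using type = type_list<>;
};

template <template <typename> class Pred, typename Head, typename... Tail>
struct filter<Pred, type_list<Head, Tail...>> {
    using rest = typename filter<Pred, type_list<Tail...>>::type;
    using type = std::conditional_t<Pred<Head>::value,
                                    typename concat<type_list<Head>, rest>::type,
                                    rest>;
};

// "Magic" if you are fluent in this style, opaque if you are not:
using Input    = type_list<int, double, char, float, long>;
using Integral = filter<std::is_integral, Input>::type;
static_assert(std::is_same_v<Integral, type_list<int, char, long>>,
              "filter keeps exactly the integral types");

int main() {}
```

Such code can eliminate a lot of boilerplate, but it also raises the bar for who can maintain it, which is exactly the cost I mean.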
---
Concerning
> To summarize, i think in math abstractions are the base of the complexity pyramid. All the complexity is built on top of them.
This is actually not a bad way to organize code (under the specific edge conditions I stated! When these conditions are violated, my judgment might change). :-)
---
> P.S. my controversial opinion is that this is the flaw in a lot of reasoning haskell fans use.
I am not a particular fan of Haskell, but I think the flaw in the Haskell fans' reasoning lies somewhere else: they emphasize very particular aspects of computer programming and, admittedly, often come up with clever solutions for those aspects.
The problem is that, in my opinion, there exist aspects of software development that are far more important but don't fit into the kinds of structures that Haskell fans appreciate. The difficulty, in my experience, is thus convincing Haskell fans that such aspects actually matter a lot, instead of being unimportant side aspects of software development.