Maybe the book just isn’t for you; that doesn’t mean it isn’t for anyone. I understand that deep learning is in vogue now. But when I was in graduate school, a professor asked me why I was using neural nets in a project, since they weren’t as good as SVMs. We studied Vapnik, VC dimensions, SVMs, and the like, and neural nets were totally out of fashion.
Imagine what would have happened if everybody had used and researched only the methods that worked best at the time. And deep learning could still benefit from a theory that explains why, when, and how it works so well; maybe someone studying these older foundations will be the one to develop it.
Also, I don’t think you’re right to assume that all models out there are deep learning models. Yes, they are very good in many cases (especially those with less structured data, like images or NLP). But in some cases gradient boosting or even GLMs are better suited to the task (because of the structure and size of the data, or because of computing constraints).
And in the end, people may just want to learn it because they find it interesting.
It’s a bit sad to do only things that are “useful”. That’s my 2 cents.