This type of comment is often made in machine learning (ML) related submissions.
The prerequisite list is long: calculus, linear algebra, statistics, probability, numerical methods (for optimization, linear algebra, maybe interpolation), etc. BUT, you don't really need to go through the entirety of each subject for ML. For example, in calculus you probably only need to focus on the aspects necessary for optimization, rather than integration techniques, convergence of sequences, etc. The trouble is that it's difficult to know which subtopics of each subject are worth spending time on unless you already know machine learning (or you have the luxury of someone with experience guiding you).
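To make the optimization point concrete, here is a minimal sketch of the kind of calculus that actually shows up in ML: gradient descent on a simple one-dimensional loss. The function, step size, and iteration count are illustrative choices of mine, not drawn from any particular course.

```python
def grad_descent(grad, x0, lr=0.1, steps=100):
    """Minimize a function by repeatedly stepping opposite its gradient."""
    x = x0
    for _ in range(steps):
        x = x - lr * grad(x)  # the core calculus: derivative gives descent direction
    return x

# Minimize f(x) = (x - 3)^2; its derivative is f'(x) = 2*(x - 3),
# so the minimizer should converge to x = 3.
x_min = grad_descent(lambda x: 2 * (x - 3), x0=0.0)
print(round(x_min, 4))  # → 3.0
```

Understanding why this loop converges (derivatives, the chain rule, a little about step sizes) covers much of the calculus a practitioner leans on day to day.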
The latter difficulty is compounded by the fact that there seem to be many more resources (at least among popular submissions on the web) for learning neural nets, or some specific framework for implementing them, than for learning the mathematical and statistical foundations of ML. This is fine -- neural nets are a popular and powerful model, and people like to work on something tangible to get acquainted with a topic.
I wonder if people might enjoy a well-written textbook covering the basic math for ML -- something like "All the Math You Missed (But Need to Know for Machine Learning)". I might enjoy working on such an ebook if there were desire for one, but my time is pretty limited (like most people's).
Once I had finished that, Cullen's "Matrices and Linear Transformations" was really helpful too. But I wouldn't do Cullen if you're still, as I was, floundering with why you're doing this in the first place. It's great once you have those concepts down.