The proposed aim is to teach just enough calculus and (finite dimensional) linear algebra for an understanding of quadratic form minimization and of principal components, presupposing only familiarity with R and basic algebra. The modesty of this objective reflects the doubly experimental nature of the course: it would be the first “auxiliary” DSS course, supporting the specialization but not actually in it, and it could also become an introduction to, or review for, a contemplated linear models MOOC.
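To indicate the target level, here is a minimal numerical sketch in R of the two goals named above; the matrix and data are made up purely for illustration.

```r
# Minimizing the quadratic form f(x) = x'Ax - 2 b'x for a symmetric
# positive definite A: the minimizer is the solution of A x = b.
A <- matrix(c(4, 1, 1, 3), nrow = 2)  # illustrative SPD matrix
b <- c(1, 2)
x_min <- solve(A, b)                  # minimizer of the quadratic form

# Principal components of a small illustrative data set.
set.seed(1)
X <- matrix(rnorm(40), nrow = 20, ncol = 2)
pc <- prcomp(X, center = TRUE)
# pc$rotation holds the principal directions (eigenvectors of cov(X));
# pc$sdev holds the corresponding standard deviations.
```

Both computations stay entirely at the level of vectors and matrices, which is the level the course would presuppose and reinforce.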

The contemplated linear models MOOC would presumably cover foundations of machine learning, including regularization, (infinite dimensional) Hilbert spaces, reproducing kernels, and representer theorems. The possibility of getting from the modest goals above to these advanced topics should be kept in mind. Perhaps finite dimensional examples, e.g., discrete Fourier transforms or other tensor product constructs, could serve as conceptual introductions. (NOTE: Fourier transforms require complex numbers, which are otherwise unnecessary for the immediate objectives. Walsh transforms or wavelets, if they can be motivated?)
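As a sketch of the Walsh alternative mentioned in the note: a Walsh–Hadamard matrix is a tensor (Kronecker) product construct with orthogonal columns, built entirely from ±1 entries, so it avoids the complex numbers a discrete Fourier matrix would require.

```r
# A 4x4 Walsh-Hadamard matrix as a Kronecker product of 2x2 blocks.
H2 <- matrix(c(1, 1, 1, -1), nrow = 2)  # basic 2x2 Hadamard block
H4 <- kronecker(H2, H2)                 # tensor product construction
# Columns are mutually orthogonal: crossprod(H4) equals 4 * diag(4),
# so H4 / 2 is an orthogonal change of basis, just as a DFT matrix
# would be, but over the reals.
```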

IMO, swirl’s strength is in examples and how to do things, not in abstractions and how to think about things. As such it complements exposition, as do traditional examples and chapter exercises, but swirl brings automation to the party. So, even though symbolic algebra and calculus are possible in R (e.g., through bindings to Python’s sympy package), it’s probably best to stick with numerical calculus, and with vectors and matrices, i.e., with representations rather than vector spaces as algebraic abstractions.
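The numerical flavor of calculus intended here can be conveyed in a few lines of base R; the function and step size below are arbitrary illustrations, not course content.

```r
# Numerical (rather than symbolic) calculus: a central-difference
# approximation to the derivative of f at x.
num_deriv <- function(f, x, h = 1e-6) (f(x + h) - f(x - h)) / (2 * h)

f <- function(x) x^2       # illustrative function with known derivative 2x
num_deriv(f, 3)            # approximately 6
```

An exercise of this shape needs no abstraction beyond function evaluation and arithmetic on vectors, which fits swirl’s example-driven format.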