There's something to be said for languages like Python and Clojure, where plain ordinary math might involve ordinary integers, arbitrary-precision integers, floats, or even rationals.
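For instance, in Python (a quick REPL sketch; Clojure's bigints and ratios behave similarly):

>>> 2**100                      # ints silently promote to arbitrary precision
1267650600228229401496703205376
>>> from fractions import Fraction
>>> Fraction(1, 10) + Fraction(2, 10) == Fraction(3, 10)   # exact rational arithmetic
True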
In grad school it was drilled into me to use floats instead of doubles wherever I could, which cuts the memory consumption of big arrays in half. (It was odd that Intel chips in the 1990s were about the same speed for floats and doubles while all the RISC competitors ran floats about twice as fast as doubles, something Intel caught up with in the 2000s.)
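To put a number on the memory point, a quick sketch with NumPy (any array library would show the same 2x):

>>> import numpy as np
>>> np.zeros(1_000_000, dtype=np.float64).nbytes   # 8 bytes per element
8000000
>>> np.zeros(1_000_000, dtype=np.float32).nbytes   # 4 bytes per element, half the memory
4000000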
Old books on numerical analysis, particularly Foreman Acton's
https://www.amazon.com/Real-Computing-Made-Engineering-Calcu...
teach the art of formulating calculations to minimize the effect of rounding errors, which reduces some of the need for deep precision. For that matter, modern neural networks use specialized formats like FP4 because these save memory and are effectively faster in SIMD.
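A minimal sketch of the kind of reformulation Acton teaches (my own toy example, not one from the book): subtracting two nearly equal numbers destroys the significant digits, and a little algebra recovers them.

>>> import math
>>> x = 1e9
>>> math.sqrt(x*x + 1) - x           # naive form: catastrophic cancellation
0.0
>>> 1 / (math.sqrt(x*x + 1) + x)     # algebraically identical, numerically sane
5e-10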
---
Personally, when it comes to general-purpose programming languages, I've watched a lot of people have experiences that lead them to think that "programming is not for them", and I think
>>> 0.1+0.2
0.30000000000000004
is one of them. Accountants, for instance, expect certain invariants to be true, and if they see some nonsense like
>>> 0.1+0.2==0.3
False
it is not unusual for them to refuse to work, leave the room, or hold a sit-down strike until you can present them numbers that respect the invariants. You have a lot of people who could be productive lay programmers and put their skills on wheels, and if you are using the trash floats that we usually use instead of DEC64, you are hitting them in the face with pepper spray as soon as they start.
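For what it's worth, Python's decimal module (not DEC64, but the same base-10 idea) does respect the invariant accountants expect:

>>> from decimal import Decimal
>>> Decimal('0.1') + Decimal('0.2') == Decimal('0.3')
True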