The "critical thinking" Rob Kitchin is talking about is analyzing algorithms' impact with a social lens. Because algorithms affect people's lives, we shouldn't be content with letting them be opaque black boxes.
It seems to overlap with the themes of Ed Finn's book "What Algorithms Want: Imagination in the Age of Computing".[1]
Both point out that algorithms are already intensely studied from a technical perspective, e.g. knowing that O(log n) is better than O(n^2), etc.
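To make that concrete, here's a quick back-of-the-envelope comparison (my own illustration, not from either book) of what those complexity classes mean at n = 1,000,000:

```python
import math

n = 1_000_000
print(f"O(log n): ~{math.ceil(math.log2(n)):,} steps")  # ~20
print(f"O(n^2):   ~{n ** 2:,} steps")                   # ~1,000,000,000,000
```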
Their idea is that algorithms are creating their own "culture" or "reality", and that this should be studied through the lens of the humanities or sociology instead of just mathematics.
E.g. a neural net or statistical model computes that Person A is a better credit risk than Person B. However, observers notice that the Person Bs are consistently black, and therefore claim that the algorithm is (re)creating racial inequality. The same critique applies to algorithms that provide sentencing guidelines for convicted felons, or that diagnose medical problems.
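A minimal toy sketch of the mechanism (entirely synthetic data and my own illustration, not drawn from any of these books): the model never sees race, but a correlated proxy feature lets it reproduce the bias baked into historical decisions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
race = rng.integers(0, 2, n)                 # protected attribute, 0 or 1
# Hypothetical proxy feature, ~90% correlated with race (e.g. a zip-code flag):
zip_flag = ((race == 1) ^ (rng.random(n) < 0.1)).astype(int)
income = rng.normal(50, 10, n)               # same income distribution for both groups
# Historical approval decisions were biased against group 1:
approved = (income + rng.normal(0, 5, n) - 8 * race) > 45

X = np.column_stack([income, zip_flag])      # note: race itself is excluded
model = LogisticRegression().fit(X, approved)

pred = model.predict(X)
for g in (0, 1):
    print(f"group {g}: predicted approval rate {pred[race == g].mean():.2f}")
# Despite never seeing race, the model approves group 0 far more often,
# because zip_flag stands in for it.
```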
Other writings with somewhat similar themes:
- Cathy O'Neil, "Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy"[2]
- Eli Pariser, "The Filter Bubble"[3]
There doesn't seem to be a universal term coined that generalizes the ideas in all four of those books, but nevertheless I'm sure more and more writers will notice they are talking about similar ideas.
Side observation about language usage... What I notice in all four books is that the authors use the word "algorithms" as a catch-all term for "machine learning". They're not really concerned with building-block algorithms such as quicksort or the discrete Fourier transform. What they're all really saying is that Facebook's machine learning is imposing X on us, or Google's machine learning is making us think Y. For some reason, the word "algorithm" has gained more currency than "machine learning" in these pop-science books.
[1] https://mitpress.mit.edu/books/what-algorithms-want
[2] https://www.amazon.com/Weapons-Math-Destruction-Increases-In...
[3] https://www.amazon.com/Filter-Bubble-Personalized-Changing-T...