Machine learning models, applied in the real world, can have unanticipated, harmful side effects. Recommended countermeasures include structured documentation of models ("Model Cards for Model Reporting") and of the training data used ("Datasheets for Datasets").
In this talk, I'd like to propose a similar, multi-dimensional approach to analyzing the "solution" as a whole - "solution" as in "tech solutionism", the common term for technical "fixes" that have unintended, harmful consequences.
The idea is that by asking WHAT a solution does, WHO provides it and WHY, as well as WHERE and HOW it will be used, we should be able to systematically assess whether we are, in fact, confronted with an instance of tech solutionism.
Sigrid works at RStudio, where she writes about open-source deep learning, machine learning, and scientific computing frameworks. Seeing how these technologies are increasingly becoming part of our everyday lives - often without us even knowing - she is deeply worried about the political, societal, and ethical impacts of AI use. While she's fully aware that talking and writing about what is going on won't make the dangers go away, she still thinks that, in a situation where everyone's thoughts and actions matter, raising awareness is a useful thing to do.