Londa Schiebinger: "Gender and Fairness in Machine Learning and Robotics"
Big data often contains unconscious human gender and other biases. Machine learning models trained on these datasets tend to amplify existing bias and project it into the future. Technology—that is, our devices, programs, and processes—shapes human attitudes, behaviors, and culture, and thus future data. In other words, past bias is perpetuated into the future, even when governments, universities, and companies themselves have implemented policies to foster equality. How can humans intervene in automated processes to enhance, and not harm, social equality? Who should make these decisions? What are practical techniques to guarantee social fairness? This talk will also touch on gender and robotics, as time allows.
Londa Schiebinger is John L. Hinds Professor of the History of Science and a former Clayman Institute Director.