How Algorithms Can Become Fairer | Algorithmic Bias and Fairness
Jordan Harrod
In the second part of this series on Algorithmic Bias and Fairness, we look at how we can make artificial intelligence and algorithms fairer. To learn more about the math and statistics behind bias, visit http://brilliant.org/jordan and sign up for free. The first 200 people to sign up will also receive a 20% discount on the annual Premium subscription.
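
As a rough illustration of the kind of math behind these fairness discussions (this sketch is not from the video, and the toy data and function names are made up for the example), here is how one might compare two common group fairness metrics for a binary classifier, in the spirit of Hardt et al. (2016) below:

# Illustrative sketch (not from the video): comparing simple group fairness
# metrics for a binary classifier's predictions across two groups.
import numpy as np

def group_rates(y_true, y_pred, group):
    """Return selection rate and true positive rate for one group's boolean mask."""
    y_true, y_pred = y_true[group], y_pred[group]
    selection_rate = y_pred.mean()
    tpr = y_pred[y_true == 1].mean() if (y_true == 1).any() else float("nan")
    return selection_rate, tpr

# Toy labels, predictions, and a binary group attribute (hypothetical data).
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0, 0, 0])
group_a = np.array([True, True, True, True, True,
                    False, False, False, False, False])

sel_a, tpr_a = group_rates(y_true, y_pred, group_a)
sel_b, tpr_b = group_rates(y_true, y_pred, ~group_a)

# Demographic parity compares selection rates; equal opportunity compares TPRs.
print(f"Demographic parity gap: {abs(sel_a - sel_b):.2f}")
print(f"Equal opportunity gap:  {abs(tpr_a - tpr_b):.2f}")

Demographic parity asks whether both groups are selected at the same rate, while equal opportunity asks whether true positives are caught at the same rate in both groups; the cited papers discuss why these (and other) definitions can conflict.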

Twitter – http://twitter.com/jordanbharrod
Instagram – http://www.instagram.com/jordanbharrod

Sources:
Bender, E. M., & Friedman, B. (2018). Data statements for natural language processing: Toward mitigating system bias and enabling better science. https://doi.org/10.1162/tacl_a_00041

Bolukbasi, T., Chang, K.-W., Zou, J., Saligrama, V., & Kalai, A. (2016). Man is to computer programmer as woman is to homemaker? Debiasing word embeddings. Retrieved from https://code.google.com/archive/p/word2vec/

Chouldechova, A. (2017). Fair prediction with disparate impact: A study of bias in recidivism prediction instruments. https://doi.org/10.1089/big.2016.0047

DeVries, T., Misra, I., Wang, C., & van der Maaten, L. (2019). Does object recognition work for everyone? Retrieved from http://arxiv.org/abs/1906.02659

Gebru, T., Morgenstern, J., Vecchione, B., Vaughan, J. W., Wallach, H., Daumé, H., & Crawford, K. (2018). Datasheets for datasets. Retrieved from http://arxiv.org/abs/1803.09010

Guo, C., Pleiss, G., Sun, Y., & Weinberger, K. Q. (2017). On calibration of modern neural networks.

Hardt, M., Price, E., & Srebro, N. (2016). Equality of opportunity in supervised learning.

Hoffmann, A.L. (2019). Where fairness fails: data, algorithms and the limits of anti-discrimination discourse. https://doi.org/10.1080/1369118X.2019.1573912

Hovy, D., & Spruit, S. L. (2016). The social impact of natural language processing.

Hu, L., & Chen, Y. (2020). Fair classification and social welfare. https://doi.org/10.1145/3351095.3372857

Jia, S., Meng, T., Zhao, J., & Chang, K.-W. (2020). Mitigating gender bias amplification in distribution by posterior regularization. Retrieved from http://arxiv.org/abs/2005.06251

Jo, E. S., & Gebru, T. (2020). Lessons from Archives: Strategies for Collecting Sociocultural Data in Machine Learning. https://doi.org/10.1145/3351095.3372829

Kasy, M., & Abebe, R. (n.d.). Fairness, Equality, and Power in Algorithmic Decision Making, 1–14.

Kleinberg, J., Mullainathan, S., & Raghavan, M. (2017). Inherent trade-offs in the fair determination of risk scores. https://doi.org/10.4230/LIPIcs.ITCS.2017.43

LeCun, Y., Bengio, Y., & Hinton, G. (2015). Deep learning. Nature, 521(7553), 436–444. https://doi.org/10.1038/nature14539

Mitchell, S., Potash, E., Barocas, S., D'Amour, A., & Lum, K. (2018). Prediction-Based Decisions and Fairness: A Catalog of Choices, Assumptions, and Definitions, 1–22. Retrieved from http://arxiv.org/abs/1811.07867

Olteanu, A., Castillo, C., Diaz, F., & Kıcıman, E. (2019). Social data: biases, methodological pitfalls, and ethical boundaries. https://doi.org/10.3389/fdata.2019.00013

Raji, I. D., & Buolamwini, J. (2019). Actionable auditing: Investigating the impact of publicly naming biased performance results of commercial AI products. https://doi.org/10.1145/3306618.3314244

Raji, I. D., Gebru, T., Mitchell, M., Buolamwini, J., Lee, J., & Denton, E. (2020). Saving Face: Investigating the Ethical Concerns of Facial Recognition Auditing. https://doi.org/10.1145/3375627.3375820

Shankar, S., Halpern, Y., Breck, E., Atwood, J., Wilson, J., & Sculley, D. (2017). No classification without representation: Assessing geodiversity issues in open datasets for the developing world. Retrieved from http://arxiv.org/abs/1711.08536

Stock, P., & Cisse, M. (2018). ConvNets and ImageNet beyond accuracy: Understanding mistakes and uncovering biases. 11210 LNCS, 504–519. https://doi.org/10.1007/978-3-030-01231-1_31

Suresh, H., & Guttag, J. V. (2019). A framework for understanding the unintended consequences of machine learning. Retrieved from http://arxiv.org/abs/1901.10002

Verma, S., & Rubin, J. (2018). Fairness definitions explained. Proceedings – International Conference on Software Engineering, 1–7. https://doi.org/10.1145/3194770.3194776

Wang, T., Zhao, J., Yatskar, M., Chang, K.-W., & Ordonez, V. (2019). Balanced datasets are not enough: Estimating and mitigating gender bias in deep image representations. https://doi.org/10.1109/ICCV.2019.00541

Wilson, B., Hoffman, J., & Morgenstern, J. (2019). Predictive inequity in object detection. Retrieved from http://arxiv.org/abs/1902.11097

Zhao, J., Wang, T., Yatskar, M., Ordonez, V., & Chang, K.-W. (2017). Men also like shopping: Reducing gender bias amplification using corpus-level constraints. https://doi.org/10.18653/v1/d17-1323

Some interesting Twitter threads that include these resources and more:
https://twitter.com/rajiinio/status/1275056558091747333
https://twitter.com/rctatman/status/1275183674007277569?s20
https://twitter.com/rajiinio/status/1275303539896651783?s20
https://twitter.com/maxkasy/status/1270024467268452354?s20

If you found this video helpful, please share it with your friends and family.