There, her lawyers asked if anyone could prove the link between high testosterone levels and higher athletic performance. Was there actually a connection or medical evidence to prove this? The evidence never showed up and the standard for testosterone levels was declared invalid.
Dutee won.
This is why Dutee's case is relevant to our conversation today: when a standard or classification is built on an idea of "normal," the decision-making system will fail those outside that idea of "normal." The extraordinary will fail. And if you think about it, nature is not neat. It's the outliers that move the species forward – something automated decision-making systems, including #algorithms, don't really understand.
In the latest episode of #LetsTalkAbout #BigData, we talk to Laura Reig, a PhD student at the Technical University of Denmark, about how AI makes mistakes in gender classification, and Chirag Agarwal, a research fellow at Harvard University, about what explainability in AI means. We also talk to Joy Lu, associate professor at Carnegie Mellon University, about what makes a good explanation of what an algorithm does. Is it accuracy? Is it understandability?
Listen to the full episode: https://www.newslaundry.com/2021/02/26/lets-talk-about-big-data-ep-5-algorithmic-transparency
—
To watch these and many more videos, click http://www.newslaundry.com/
Follow and interact with us on social media:
Facebook: https://facebook.com/newslaundry
Twitter: https://twitter.com/newslaundry
Instagram: https://instagram.com/newslaundry
If you find this video useful, please share it with your friends and family.