In this episode, Mary Gray, Senior Principal Researcher at Microsoft Research and co-author of Ghost Work: How to Stop Silicon Valley from Building a New Global Underclass, discusses the work of the often invisible contract employees who bring an essential human element to tech and how the COVID-19 pandemic is bringing their undeniable value to light.
In this episode you will learn:
How contract workers are essential to AI systems and search engines
Examples of ghost work in everyday technology
How the tech industry often devalues contract employees
What data labeling is
What a ghost worker’s daily schedule looks like
How the growing telehealth industry is a prime example of underappreciated yet essential contract work
The three elements that undermine job happiness
How businesses are benefiting from contract workers
The growing challenges of moving toward a more contract-driven business model
Why we should mind the gap rather than close the gap
How the pandemic is demonstrating the value of contract and ghost work
The limits of tech, and where human creativity and spontaneity become irreplaceable
Kaveh Azartash holds a PhD in Biomedical Engineering from the University of California, Irvine, with a focus on Vision Science. Kaveh's career has focused on innovating software applications in the neuroscience and, more recently, artificial intelligence domains. He co-founded KidSense.ai in 2015 after realizing that children are unable to communicate effectively with the technology around them through voice.
What is digital literacy, and why is it important for our kids to learn these skills? Diana Graber, founder of Cybercivics.com and Cyberwise.org, joins the show to discuss these topics and more. Diana is the author of "Raising Humans in a Digital World: Helping Kids Build a Healthy Relationship with Technology." In this episode of the Innovation for All Podcast, we take a look at Diana's book and the Cyber Civics course she developed for schools.
Caroline Criado Perez is a writer, journalist, and feminist campaigner. She has written two books: Do It Like A Woman and Invisible Women. In her most recent book, Invisible Women: Exposing Data Bias in a World Designed for Men, she describes how long-standing data bias continues to affect women today. In this episode of the Innovation For All Podcast, Sheana learns about the many ways data bias affects women, from trivial matters such as phone size to not-so-trivial ones such as seat belt safety.
AI and Machine Learning systems are quickly becoming an integral part of how we work with, understand, and socialize with each other. Although this new technology is extremely exciting and offers a new wave of technological advancement, it brings many ethical issues concerning discrimination, the undermining of human emotion, the breaking of social contracts, and more.
Sheana Ahlqvist talks to Josh Lovejoy, Principal Design Manager at Microsoft, specializing in Ethics and Society. Josh believes that human-centered design thinking can change the world for the better: by seeking to address the needs of people, especially those at the margins, in ways that respect, restore, and augment their capabilities, we can invent forms of technological innovation that would have otherwise been invisible.
IN THIS EPISODE YOU’LL HEAR:
Why do corporations want to know what people are thinking and feeling?
How trust relationships form between people and AI systems.
What is a design ethicist?
What kinds of things can impartial autonomous AI systems do better than humans?
How do autonomous AI systems take advantage of consumers?
What is predictive policing and how does it relate to AI ethics?
What are some examples of misapplications of Machine Learning systems?
What is a deepfake?
What is a mean opinion score and how does it apply to voice automation?
Josh’s opinion on how AI tools should be developed.
What happens when you give up personal data in exchange for a more personalized experience?
Who should have the authority to make consequential decisions about AI?
How will AI and Machine Learning systems shape our knowledge and create change for the future?
How do you create machine learning systems that are unbiased but still function effectively for the user?