What does human-centered AI even mean? A very meta conversation with Josh Lovejoy.

“When a system begins to remember us forever, and wherever we go… we will not be our true selves. We will be the self we know it’s okay to remember.”
— Josh Lovejoy, Principal Design Manager, Ethics and Society at Microsoft

AI and machine learning systems are quickly becoming integral to how we work with, understand, and socialize with each other. Although this new technology is exciting and promises a new wave of advancement, it also raises many ethical issues concerning discrimination, the undermining of human emotion, the breaking of social contracts, and more.

Sheana Ahlqvist talks to Josh Lovejoy, Principal Design Manager of Ethics and Society at Microsoft. Josh believes that human-centered design thinking can change the world for the better; that by seeking to address the needs of people, especially those at the margins, in ways that respect, restore, and augment their capabilities, we can invent forms of technological innovation that would otherwise have been invisible.

IN THIS EPISODE YOU’LL HEAR:

  • Why do corporations want to know what people are thinking and feeling?
  • Forming trust relationships using AI systems.
  • What is a design ethicist?
  • What kinds of things can impartial, autonomous AI systems do better than humans?
  • How do autonomous AI systems take advantage of consumers?
  • What is predictive policing and how does it relate to AI ethics?
  • What are some examples of misapplications of machine learning systems?
  • What is a deepfake?
  • What is a mean opinion score and how does it apply to voice automation?
  • Josh’s opinion on how AI tools should be developed.
  • What happens when you give up personal data in exchange for a more personalized experience?
  • Who should have the authority to make consequential decisions about AI?
  • How will AI and Machine Learning systems shape our knowledge and create change for the future?
  • How do you create machine learning systems that are unbiased but still function effectively for the user?

LINKS:

OTHERS MENTIONED:

  • YouTube
  • Spotify
  • AI
  • Machine-Learning Algorithms
  • Predictive Policing
  • Google
  • Reddit
  • Terminator
  • Deepfake
  • Eric Horvitz
  • Microsoft Research
  • Google Duplex
  • Brad Smith
  • WaveNet
  • DeepMind
  • Adobe
  • Mean Opinion Score
  • Moritz Hardt
  • Kate Crawford
  • Stanford
  • Star Trek
  • Facebook
  • Meredith Whittaker
  • AI Now
  • Nick Bostrom
  • Superintelligence
  • Joy Buolamwini
  • Google Clips

CONNECT WITH JOSH

How Meal Delivery Apps are Killing Your Favorite Restaurants featuring Chris Webb, CEO of ChowNow

Food delivery apps like UberEats are putting mom-and-pop restaurants out of business. In the final episode of Season 1 of the Innovation For All podcast, Chris Webb, CEO of ChowNow, shows the actual cost of meal delivery and how ChowNow is trying to mitigate those risks through an alternative business model.

You’ll learn:

  • How much food marketplaces charge the host restaurant, on top of the fees they charge the customer
  • How his experience at Lehman Brothers in 2008 shapes his current skepticism
  • Why ordering direct from the retailer should always be the consumer’s first option
  • Does the restaurant know who is buying their food when ordered through a delivery app?
  • Why are restaurants willing to use delivery apps even when they are unprofitable?
  • What does a model that puts the restaurant first look like?

Chris has always had an affinity for small and independently owned restaurants. His love of these small businesses and his own family’s small step into the food retail space revealed a passion at the intersection of food and technology.

ChowNow is the leading online ordering and marketing platform for local restaurants. Founded in 2011, ChowNow currently works with 11,000 restaurants nationwide – making it easy for customers to order directly from their websites, ChowNow-built branded mobile apps and third-party websites including Google, Yelp, and Instagram.

Prior to ChowNow, Chris was a founding investor in healthy, fast-casual chain Tender Greens. Chris’ involvement in Tender Greens fueled his mission to put smaller independent restaurants on a level playing field with the national chains when it came to technology solutions, tools, and apps.

Connect with Chris Webb & ChowNow:

Others Mentioned:

Stay Tuned for Season 2!

Innovation For All will be returning for Season 2 in May 2019. Subscribe on Apple Podcasts or your favorite podcasting platform to listen to great episodes in Season 1 and get alerted as soon as Season 2 begins.

What would ethical data practices look like? Featuring Amanda McGlothlin

“Tech should be built for good,” says Amanda McGlothlin, co-founder and Chief Design Officer at HQ Network, a Los Angeles-based start-up providing digital security products and services for individuals and businesses. As a leader in tech, Amanda believes that privacy is a fundamental human right. Hear her tactical, realistic approach to product design that truly protects the user’s privacy.


IN THIS EPISODE YOU’LL LEARN:

  • How VPNs secure your information and prevent unwanted information from reaching your devices.
  • How ads and third-party trackers are not only annoying, but cost us money and make our technology less valuable.
  • The future of an ad-free user experience.
  • The use of ad-blockers and whether they are as effective as we think.
  • The new privacy laws that protect consumers from data breaches.
  • How companies can exercise more responsibility around their data practices to both protect the user and create success for their business.
  • What product managers and coders can do to support these companies who are willing to change their data practices for good.
  • What dark patterns are and how they apply to data and tracking.
  • Why it’s possible to collect data in moderation and still experience the benefits of analytics.
  • HQ Network’s view of data collecting and their ethical approach to their data practices.
  • A recent Facebook scandal and how it relates to user research.
  • How consumers can protect their data and exercise safety while online.
  • Facebook, as an example of a company that uses less than perfect data practices.

LINKS:

OTHERS MENTIONED:

  • VPN
  • GDPR
  • Facebook
  • iTunes
  • Apple
  • Sally Hubbard
  • Google Analytics
  • Cookies
  • JavaScript
  • Stripe
  • App Store
  • Google
  • Enterprise Certificate
  • Instagram
  • WhatsApp
  • Troy Hunt
  • Katharine Hargreaves
  • ARKO
  • Stuart Turner

If you enjoy this episode, you might enjoy my conversation with Sally Hubbard: Google and Facebook are Monopolies: Does it matter?

What did you change your mind about in 2018? Answers on AI, data, work, and more.

In this special episode, our favorite experts on AI, tech monopolies, and more return to answer two key questions: What is something you’ve changed your mind about in 2018? And what is something you’d like to see become a larger part of the conversation in 2019?

You don’t want to miss this one. Want to hear more from these great guests? Check out their full episodes:

When bad data leads to social injustice, featuring David Robinson

Can AI really change the world? Or are its developing algorithms formalizing social injustice? When these highly-technical systems derive patterns from existing datasets, their models can perpetuate past mistakes.

In this episode of the Innovation For All Podcast, Sheana Ahlqvist discusses with David Robinson the threats of social bias and discrimination becoming embedded in Artificial Intelligence.

IN THIS EPISODE YOU’LL LEARN:

  • What is the role of technological advances in shaping society?
  • What is the difference between Machine Learning vs. Artificial Intelligence?
  • Social Justice Implications of Technology
  • What are the limitations of finding patterns in previous data?
  • How should government regulate new, highly technical systems?
  • The need for more resources and more thoughtfulness in regulating data
  • Examples of data-driven issues in the private sector.
  • Overcoming skepticism about regulatory agencies examining data models.
  • Authorities should remember that there are limits to what AI models can do.

David is the co-founder of Upturn and currently a Visiting Scientist at the AI Policy and Practice Initiative in Cornell’s College of Computing and Information Science. He explains how government regulatory agencies should examine new AI models and systems, especially as the technology continues to creep into our day-to-day lives. He also discusses the importance of “ground truthing,” emphasizing that we should look at a technology’s capabilities and limits before deciding whether decision makers should implement it.

LINKS

OTHERS MENTIONED

CONNECT WITH DAVID

If you enjoy this episode on AI and ethics, you might also enjoy WHEN ARE “FAIR” ALGORITHMS BETTER THAN ACCURATE ONES?