Kaveh Azartash – Voice Recognition for Kids

“We had a really interesting journey coming in from a kids’ language learning background. Understanding the real sound spectrum of different languages, we built a sophisticated sound map.” – Kaveh Azartash, Founder of KidSense.ai

Kaveh Azartash holds a PhD in Biomedical Engineering from the University of California, Irvine, with a focus on Vision Science. His career has focused on innovating software applications in neuroscience and, more recently, artificial intelligence. He co-founded KidSense.ai in 2015 after realizing that children were unable to communicate effectively with the technology around them through voice.

In this episode you will learn:

  • The story of how KidSense.ai was started
  • Kaveh’s professional and academic background
  • The key components of voice recognition software for kids
  • How AIs can recognize changes in kids’ speech patterns over time
  • How KidSense.ai’s model can be applied to other challenges in voice recognition, like speech impediments or non-native English speakers
  • How KidSense.ai maintains privacy and data security
  • The data collection process required to develop complex AI models that mature over time
  • Both the acoustic and language components behind voice recognition software
  • Why these new AI technologies are considered valuable
  • The future business goals of KidSense.ai


What does human-centered AI even mean? A very meta conversation with Josh Lovejoy.

“When a system begins to remember us forever, and wherever we go… we will not be our true selves. We will be the self we know it’s okay to remember.”
— Josh Lovejoy, Principal Design Manager, Ethics and Society at Microsoft

AI and Machine Learning systems are quickly becoming an integral part of how we work with, understand, and socialize with each other. Although this new technology is extremely exciting and offers a new wave of technological advancement, with it come many ethical issues concerning discrimination, the undermining of human emotion, the breaking of social contracts, and more.

Sheana Ahlqvist talks to Josh Lovejoy, Principal Design Manager at Microsoft, specializing in the Ethics and Society sector. Josh believes that human-centered design thinking can change the world for the better: that by seeking to address the needs of people, especially those at the margins, in ways that respect, restore, and augment their capabilities, we can invent forms of technological innovation that would have otherwise been invisible.

IN THIS EPISODE YOU’LL HEAR:

  • Why do corporations want to know what people are thinking and feeling?
  • Forming trust relationships using AI systems
  • What is a design ethicist?
  • What kinds of things can impartial, autonomous AI systems do better than humans?
  • How do autonomous AI systems take advantage of consumers?
  • What is predictive policing and how does it relate to AI ethics?
  • What are some examples of misapplications of Machine Learning systems?
  • What is a deepfake?
  • What is a mean opinion score and how does it apply to voice automation?
  • Josh’s opinion on how AI tools should be developed
  • What happens when you give up personal data in exchange for a more personalized experience?
  • Who should have the authority to make consequential decisions about AI?
  • How will AI and Machine Learning systems shape our knowledge and create change for the future?
  • How do you create machine learning systems that are unbiased but still function effectively for the user?

OTHERS MENTIONED:

  • Youtube
  • Spotify
  • AI
  • Machine-Learning Algorithms
  • Predictive Policing
  • Google
  • Reddit
  • Terminator
  • Deepfake
  • Eric Horvitz
  • Microsoft Research
  • Google Duplex
  • Brad Smith
  • Wavenet
  • Deep Mind
  • Adobe
  • Mean Opinion Score
  • Moritz Hart
  • Kate Crawford
  • Stanford
  • Star Trek
  • Facebook
  • Meredith Whittaker
  • AI Now
  • Nick Bostrom
  • Super Intelligence
  • Joy Buolamwini
  • Google Clips

What did you change your mind about in 2018? Answers on AI, data, work, and more.

In this special episode, our favorite experts on AI, tech monopolies, and more return to answer two key questions: What is something you’ve changed your mind about in 2018? And what is something you’d like to see become a larger part of the conversation in 2019?

You don’t want to miss this one. Want to hear more from these great guests? Check out their full episodes:

When bad data leads to social injustice, featuring David Robinson

Can AI really change the world? Or are its algorithms, as they develop, formalizing social injustice? When these highly technical systems derive patterns from existing datasets, their models can perpetuate past mistakes.

In this episode of the Innovation For All Podcast, Sheana Ahlqvist discusses with David Robinson the threats of social bias and discrimination becoming embedded in Artificial Intelligence.

IN THIS EPISODE YOU’LL LEARN:

  • What is the role of technological advances in shaping society?
  • What is the difference between Machine Learning and Artificial Intelligence?
  • Social justice implications of technology
  • What are the limitations of finding patterns in previous data?
  • How should government regulate new, highly technical systems?
  • The need for more resources and more thoughtfulness in regulating data
  • Examples of data-driven issues in the private sector
  • Removing skepticism of regulatory agencies in examining data models
  • Authorities should remember that there are limits to what AI models can do

David is the co-founder of Upturn and currently a Visiting Scientist at the AI Policy and Practice Initiative in Cornell’s College of Computing and Information Science. He explains how government regulatory agencies should examine new AI models and systems, especially as the technology continues to creep into our day-to-day lives, and discusses the importance of “ground truthing”: looking at a technology’s capabilities and limits before deciding whether decision makers should implement it.

If you enjoy this episode on AI and ethics, you might also enjoy WHEN ARE “FAIR” ALGORITHMS BETTER THAN ACCURATE ONES?

When Are “Fair” Algorithms Better Than Accurate Ones? with Osonde Osoba

Artificial Intelligence continues to penetrate our lives. As it does so, we should be wary of its ethical and social implications.

Osonde Osoba, an engineer at the RAND Corporation and a professor at the Pardee RAND Graduate School, joins Sheana Ahlqvist in today’s episode of the Innovation For All Podcast to talk about fairness in Artificial Intelligence and Machine Learning. AI has the ability to seriously impact our lives, which is why Osonde is pushing for systems that are accurate, unbiased, and flexible.

Discover which areas we should be wary of when handing decision-making over to AIs, why this isn’t just a technical issue but also a political one, and who we should put in charge of these systems. Also learn about the importance of accountability, ethics, privacy, and regulation in AI systems.

IN THIS EPISODE YOU’LL LEARN:

  • The difference between Machine Learning and Artificial Intelligence
  • Should AI systems intentionally be made to ‘align with our comfort’?
  • What roles do legislators, policy makers, and other regulators play?
  • Strategies to protect Data Privacy in AI and ML models
  • Regulatory rules between the developers and the users
  • If technology changes so rapidly, how can regulators keep up?
  • How can we build accountability into AI & ML?

LINKS

  • RAND Corporation
  • GDPR
  • Fairness, Accountability, and Transparency in Machine Learning (FAT/ML)
  • Fairness and Machine Learning by Solon Barocas, Moritz Hardt & Arvind Narayanan
