Explore an innovative approach to teaching language to deaf infants using a multi-agent system that combines a robot and a virtual human. Delve into the challenge of providing sufficient language exposure during critical developmental periods, especially for deaf infants born to hearing parents. Examine the design and implementation of an integrated system engineered to augment language exposure for 6- to 12-month-old infants. Discover how the team addressed the complexities of human-machine design for infants, given the limitations of screen-based media and robots for language learning. Learn about the system's ability to provide visual language and to facilitate socially contingent conversational exchange. Analyze case studies demonstrating successful engagement with the technology by both deaf and hearing infants. Gain insight into the interdisciplinary team's goals, system design, robot and virtual-human components, and evaluation process. Understand the design lessons learned and their implications for future research on accessible, inclusive education for infants with hearing impairments.
Teaching Language to Deaf Infants with a Robot and a Virtual Human