Project overview
Large Language Models (LLMs) have received significant attention from the AI community in recent years. However, research on multimodal LLMs, particularly for social child-robot interaction, remains in its infancy. This presents a compelling opportunity to explore new frontiers in human-robot interaction with AI applications that are both innovative and socially impactful.
This project aims to advance the field by developing a multimodal LLM tailored for social child-robot interactions. Beyond text-based models, our proposed framework will enable robots to generate verbal and non-verbal behaviors, creating enriched conversational experiences with children. By integrating advanced AI with human-centered design principles, this research seeks to contribute to more effective and considerate child-robot interactions.
Staff
Lead researchers