About the project
This project aims to explore how Large Language Models (LLMs) can be harnessed to continuously acquire new skills to solve novel tasks as opposed to mastering a predefined and fixed set of tasks. In particular, methods for incrementally learning skill representations jointly from textual descriptions and spatio-temporal information of action sequences will be developed and evaluated on learning visuomotor robotic tasks in a household environment.
The integration of LLMs into robot learning has recently demonstrated remarkable success. Grounding LLMs in a physical context has made it possible to solve long-horizon robotic tasks efficiently: an LLM is queried to generate a sequence of natural language commands, corresponding to pretrained skills, that accomplish a given task. LLMs have also enabled few-shot adaptation to novel tools by generating task-agnostic tool descriptions for language-conditioned learning of manipulation skills.
Long-horizon task learning and few-shot adaptation, supported by LLMs, are essential to lifelong robot learning. However, the current utilization of LLMs remains incompatible with the continual learning setting, wherein tasks are sequentially presented to the robot without a predefined task distribution. Moreover, the robot is expected to retain knowledge of previous tasks when learning new ones.
You will join the School of Electronics and Computer Science, ranked 1st in the UK for Electrical and Electronic Engineering (Guardian University Guide 2022), within the University of Southampton, which is ranked in the top 1% of universities worldwide.
You are invited to apply soon, as applications will be reviewed on a rolling basis until the position is filled. Funding decisions are made every one to two months, and once the position is funded it will close.