Title: Interactive Learning and Adaptation for Personalized Robot-Assisted Training
Speaker: Konstantinos Tsiakas

The Heracleia Human Centered Computing Laboratory
Computer Science and Engineering Department
The University of Texas at Arlington

Time: Wednesday, December 13, 2017, 2:00 pm
Place: ISEC TBD, 805 Columbus Ave., Boston, MA 02120

Abstract: Robot-Assisted Training systems have been used extensively for a variety of applications, including educational assistants, exercise coaches, and training task instructors. The main goal of such systems is to provide a personalized and tailored session that matches the user's abilities and needs. In this research, we focus on the adaptation and personalization aspects of Robot-Assisted Training systems, proposing an Interactive Learning and Adaptation Framework for Personalized Robot-Assisted Training. The framework extends the Reinforcement Learning framework by integrating Interactive Reinforcement Learning methods that facilitate the adaptation of the robot to each specific user. More specifically, we show how task engagement, measured through EEG signals, can be integrated into the personalization process. Moreover, we show how Human-in-the-Loop approaches can leverage human expertise through informative control interfaces, towards a safe and tailored interaction. We illustrate the framework with a Socially Assistive Robot that monitors and instructs a cognitive training task for working memory.
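
To give a rough feel for the kind of interactive learning loop the abstract describes, the Python sketch below shapes a tabular Q-learning reward with a per-step engagement estimate and lets a human supervisor override the robot's proposed action. It is a minimal illustration under assumed names (difficulty-adjustment actions, engagement_estimate, supervisor_override), not the speaker's actual framework or implementation.

    import random
    from collections import defaultdict

    # Minimal sketch (illustrative only): tabular Q-learning in which the reward
    # combines task performance with an engagement estimate (e.g., derived from
    # EEG), and a human supervisor may override the proposed action.

    ACTIONS = ["easier", "repeat", "harder"]   # task-difficulty adjustments (assumed)
    ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2      # learning rate, discount, exploration

    Q = defaultdict(float)                     # Q[(state, action)] -> value

    def choose_action(state):
        """Epsilon-greedy selection over difficulty adjustments."""
        if random.random() < EPSILON:
            return random.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: Q[(state, a)])

    def update(state, action, reward, next_state):
        """Standard one-step Q-learning update."""
        best_next = max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])

    def training_step(state, task_performance, engagement_estimate, supervisor_override=None):
        """One interaction step: the robot proposes an action, the supervisor may
        override it, and the reward mixes task score with engagement (both assumed in [0, 1])."""
        action = supervisor_override or choose_action(state)
        reward = task_performance + 0.5 * engagement_estimate   # reward shaping with engagement
        next_state = (state[0], action)        # toy state: (user profile, last action)
        update(state, action, reward, next_state)
        return action, next_state

In this toy version, engagement enters only as a reward-shaping term and supervisor input only as an action override; other design choices (engagement as part of the state, or supervisor feedback as reward) are equally plausible readings of the abstract.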

Bio: Konstantinos Tsiakas is a 5th-year Ph.D. Candidate at the University of Texas at Arlington under the supervision of Prof. Fillia Makedon. He is a Graduate Research Assistant at the HERACLEIA Human-Centered Computing Laboratory (Director: Prof. Fillia Makedon) and a Research Fellow at the Software and Knowledge Engineering Laboratory of the National Centre for Scientific Research – NCSR Demokritos (Director: Prof. Vangelis Karkaletsis). He received a Diploma in Engineering from the Electrical and Computer Engineering Department of the Technical University of Crete, Greece, where his Diploma Thesis research focused on Language Modeling using Gaussian Mixture Models.

His current research interests lie in the area of Interactive Reinforcement Learning for Robot-Assisted Training, with applications in Socially Assistive Robotics, Adaptive Rehabilitation, and Vocational Training. During his Ph.D., he has been investigating how Interactive Reinforcement Learning methods can be applied to the dynamic adaptation of robotic agents to different users. The goal of his research is to develop a computational framework for Interactive Learning and Adaptation that enables the robot to personalize its strategy to the needs and abilities of the current user by analyzing implicit user feedback captured through sensors (e.g., the MUSE EEG headband), while also integrating the expertise and guidance of a human supervisor into the learning process.

He has published in peer-reviewed conferences such as AAAI, IJCAI, IUI, ICSR, HCII, IVA, PETRA, and MMEDIA. He has served on the reviewing committees of ICSR, MDPI journals, ADAPTIVE, and PETRA, as well as on the organizing committee of PETRA. He has also participated in the IAB meeting of the NSF I/UCRC iPerform Center (http://iperform.uta.edu/), receiving an iPerform scholarship for Machine Learning to enhance human performance. His long-term research interests include computational cognitive modeling for robot-based personalized assessment and training.