News

Cyber-Physical Systems Seminar: Erik Fredericks

Title: Software Engineering for Cyber-Physical Systems
Speaker: Erik Fredericks, PhD

Assistant Professor
Department of Computer Science and Engineering
Oakland University

Time: 5 April 18, Thursday, 2:30pm
Place: ISEC 140, 805 Columbus Ave., Boston, MA 02120

Abstract: Cyber-physical systems play a key role in today’s society, from autonomous vehicles to smart homes. The intersection of computational and physical domains presents complex challenges that must be considered when designing and implementing such systems. This talk will explore recent and current research projects that address how uncertainty, a concern that results from poorly understood environmental conditions or misinterpreted requirements, can negatively impact a system. Specifically, we will discuss how software engineering techniques can be applied to minimize the impact of uncertainty on a system both at design time and run time. These techniques will be motivated in the context of a remote data mirroring network that must disseminate data across a nationwide network, an intelligent vacuum system that must satisfy safety and performance concerns, and a smart home for supporting Alzheimer’s patients.
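For readers curious what such techniques look like in practice, the following is a minimal, hypothetical Python sketch of the search-based software engineering idea that runs through Dr. Fredericks's work: encode candidate system configurations, score them with a requirements-derived fitness function, and evolve the population toward configurations that better tolerate uncertainty. The two-parameter encoding and the fitness function below are invented for illustration and are not taken from the talk.

import random

# Hypothetical example: tune two parameters of an adaptive system so
# that a requirements-derived fitness is maximized. The fitness
# function is an illustrative stand-in, not the speaker's model.
def fitness(cfg):
    gain, threshold = cfg
    # Reward configurations near a made-up safe operating point.
    return -((gain - 0.6) ** 2 + (threshold - 0.3) ** 2)

def mutate(cfg, sigma=0.05):
    return tuple(min(1.0, max(0.0, v + random.gauss(0, sigma))) for v in cfg)

def crossover(a, b):
    return tuple(random.choice(pair) for pair in zip(a, b))

population = [(random.random(), random.random()) for _ in range(20)]
for generation in range(50):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]                    # truncation selection
    children = [mutate(crossover(random.choice(parents),
                                 random.choice(parents)))
                for _ in range(10)]
    population = parents + children

print("best configuration found:", max(population, key=fitness))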

Bio: Dr. Erik Fredericks is an Assistant Professor in the Department of Computer Science and Engineering at Oakland University. He received his B.S. degree from Lake Superior State University, his M.S. degree from Oakland University, and his Ph.D. from Michigan State University. His research interests include minimizing the impact of uncertainty on software systems via search-based software engineering, particularly those that are self-adaptive, multi-agent, and cyber-physical in nature. In addition to regularly publishing at search-based venues, Dr. Fredericks has served on numerous program committees, including the Symposium for Search-Based Software Engineering (SSBSE), Symposium on Software Engineering for Adaptive and Self-Managing Systems (SEAMS), and the Workshop on Search-Based Software Testing (SBST). Dr. Fredericks is also a co-organizer of a workshop on Natural Language Processing for Software Engineering (NL4SE).

A Lighter Note: Dr. Fredericks took Padir’s undergraduate Digital Signal Processing class some years ago. 🙂

Robotics Seminar: Konstantinos Tsiakas

Title: Interactive Learning and Adaptation for Personalized Robot-Assisted Training
Speaker: Konstantinos Tsiakas

The Heracleia Human Centered Computing Laboratory
Computer Science and Engineering Department
The University of Texas at Arlington

Time: 12 December 17, Tuesday, 2pm
Place: ISEC 140, 805 Columbus Ave., Boston, MA 02120

Abstract: Robot Assisted Training systems have been extensively used for a variety of applications, including educational assistants, exercise coaches and training task instructors. The main goal of such systems is to provide a personalized and tailored session that matches user abilities and needs. In this research, we focus on the adaptation and personalization aspects of Robot Assisted Training systems, proposing an Interactive Learning and Adaptation Framework for Personalized Robot Assisted Training. This framework extends the Reinforcement Learning framework by integrating Interactive Reinforcement Learning methods to facilitate the adaptation of the robot to each specific user. More specifically, we show how task engagement, measured through EEG signals, can be integrated into the personalization process. Moreover, we show how Human-in-the-Loop approaches can leverage human expertise through informative control interfaces, towards a safe and tailored interaction. We illustrate this framework with a Socially Assistive Robot that monitors and instructs a cognitive training task for working memory.
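As a rough illustration of the interactive reinforcement learning idea in the abstract, here is a minimal, hypothetical Python sketch in which an engagement score (standing in for an EEG-derived signal) shapes the reward of a tabular Q-learning agent choosing a task difficulty. The user model, the engagement function, and the reward shaping are invented for illustration; they are not the framework presented in the talk.

import random

# States: user performance level (0-4); actions: task difficulty (0-2).
N_STATES, N_ACTIONS = 5, 3
Q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2

def engagement(state, action):
    # Stand-in for an EEG-derived engagement score in [0, 1]:
    # assumed highest when difficulty matches the user's skill.
    return max(0.0, 1.0 - abs(action - state / 2.0) / 2.0)

def step(state, action):
    # Toy user model: well-matched difficulty tends to improve skill.
    improved = engagement(state, action) > 0.5 and random.random() < 0.7
    next_state = min(N_STATES - 1, max(0, state + (1 if improved else -1)))
    task_reward = 1.0 if improved else -0.5
    # Interactive RL: shape the task reward with the engagement signal.
    return next_state, task_reward + engagement(state, action)

state = 0
for episode in range(2000):
    if random.random() < EPSILON:
        action = random.randrange(N_ACTIONS)
    else:
        action = max(range(N_ACTIONS), key=lambda a: Q[state][a])
    next_state, reward = step(state, action)
    Q[state][action] += ALPHA * (reward + GAMMA * max(Q[next_state])
                                 - Q[state][action])
    state = next_state

print("learned difficulty per skill level:",
      [max(range(N_ACTIONS), key=lambda a: Q[s][a]) for s in range(N_STATES)])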

Bio: Konstantinos Tsiakas is a 5th year Ph.D. Candidate at the University of Texas at Arlington under the supervision of Prof. Fillia Makedon. He is a Graduate Research Assistant at the HERACLEIA Human-Centered Computing Laboratory (Director: Prof. Fillia Makedon) and a Research Fellow in the Software and Knowledge Engineering Lab at the National Centre for Scientific Research – NCSR Demokritos (Director: Prof. Vangelis Karkaletsis). He received a Diploma of Engineering from the Electrical and Computer Engineering Department, Technical University of Crete, Greece. For his Diploma thesis, he conducted research on language modeling using Gaussian Mixture Models.

His current research interests lie in the area of Interactive Reinforcement Learning for Robot-Assisted Training, with applications in Socially Assistive Robotics, Adaptive Rehabilitation and Vocational Training. During his Ph.D., he has investigated how Interactive Reinforcement Learning methods can be applied to the dynamic adaptation of robotic agents to different users. The goal of his research is to develop a computational framework for Interactive Learning and Adaptation that enables the robot to personalize its strategy to the needs and abilities of the current user by analyzing implicit user feedback from sensors (e.g., the MUSE EEG headband), while also integrating the expertise and guidance of a human supervisor into the learning process.

He has published in peer-reviewed conferences such as AAAI, IJCAI, IUI, ICSR, HCII, IVA, PETRA and MMEDIA. He has also been a member of the reviewing committees of ICSR, MDPI journals, ADAPTIVE and PETRA, as well as an organizing committee member of PETRA. He has also participated in the IAB meeting of the NSF I/UCRC iPerform Center (http://iperform.uta.edu/), receiving an iPerform scholarship for machine learning to enhance human performance. His long-term research interests include computational cognitive modeling for robot-based personalized assessment and training.

Robotics Seminar: Nandan Banerjee

Title: vSLAM on the Roomba

Speaker: Nandan Banerjee
Robotics Software Engineer at iRobot Corp.

Time: 19 October 17, Thursday, 3:30pm
Place: ISEC 140, 805 Columbus Ave., Boston, MA 02120

Abstract: SLAM has been around since the 1990s, but early systems were far less capable and demanded substantial computational power. Powerful, inexpensive processors and advances in vision and mapping algorithms have since enabled robots to use vision-based SLAM. All that said, is it possible to put SLAM into household consumer robots to map an entire floor of a house or an apartment? The answer is yes. The latest generation of iRobot Roomba products uses vSLAM technology to map an entire floor. In this talk, I will discuss the various challenges that we had to deal with while putting SLAM on a low-cost consumer robot, give a brief overview of the underlying SLAM system, and present a detailed discussion of the computer vision aspect of the vSLAM system and the algorithmic challenges that we faced.
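As background for the computer-vision discussion, here is a minimal, hypothetical Python/OpenCV sketch of a typical monocular vSLAM front end: detect ORB features in two consecutive frames and match them, producing the correspondences from which pose estimation and mapping proceed. The file names are placeholders, and this is a generic front end, not iRobot's implementation.

import cv2

# Placeholder file names: any two overlapping grayscale frames will do.
frame1 = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)
frame2 = cv2.imread("frame2.png", cv2.IMREAD_GRAYSCALE)
assert frame1 is not None and frame2 is not None, "provide two frames"

# ORB is a common front-end choice on low-cost hardware: binary
# descriptors are cheap to compute and to match.
orb = cv2.ORB_create(nfeatures=500)
kp1, des1 = orb.detectAndCompute(frame1, None)
kp2, des2 = orb.detectAndCompute(frame2, None)

# Brute-force Hamming matching with Lowe's ratio test to discard
# ambiguous correspondences.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
matches = matcher.knnMatch(des1, des2, k=2)
good = [pair[0] for pair in matches
        if len(pair) == 2 and pair[0].distance < 0.75 * pair[1].distance]

print(f"{len(good)} putative correspondences")
# A full pipeline would now estimate the relative pose from these
# matches (e.g., essential-matrix RANSAC) and feed the map builder.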

Bio: Nandan Banerjee is a Robotics Software Engineer with iRobot Corporation in Bedford, MA. He works in the Technology Organization of iRobot, developing new technologies related to mapping and navigation of mobile consumer robots, and on manipulation research. Before iRobot, he was part of the WPI-CMU team in the DARPA Robotics Challenge, where he contributed to the vision and planning work that went into getting the Atlas robot to open and walk through a door. He has also worked as a Software Engineer at Samsung R&D Institute, India, on mobile phone platforms. He holds a Bachelor of Technology in Computer Science from the National Institute of Technology, Durgapur, India and a Master of Science in Robotics Engineering from the Worcester Polytechnic Institute in Worcester, MA. His research interests are in robotics (motion planning, visual servoing, mapping and navigation), AI, and computer vision.

Robotics Seminar: N. Andrew Browning

Title: High speed reactive obstacle avoidance and aperture traversal using a monocular camera

Speaker: Andrew Browning, PhD
Deputy Director of R&D at SSCI

Time: 10 October 17, Tuesday, 11:00am
Place: ISEC 140, 805 Columbus Ave., Boston, MA 02120

Abstract: Flight in cluttered indoor and outdoor environments requires effective detection of obstacles and rapid trajectory updates to ensure successful avoidance. We present a low-computation, monocular-camera-based solution that rapidly assesses collision risk in the environment through the computation of Expansion Rate, and fuses this with the range and bearing to a goal location (or object) in a Steering Field that steers around obstacles while flying towards the goal. The Steering Field provides instantaneous steering decisions based on the current collision risk in the environment, while Expansion Rate provides an automatically speed-scaled estimate of that risk. Results from recent flight tests will be shown, with flights at up to 20 m/s around man-made and natural obstacles and through 5x5 m apertures.
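To make the two ingredients concrete, here is a minimal, hypothetical Python sketch: an expansion-rate estimate of collision risk for each tracked obstacle, and a steering field that trades off obstacle repulsion against attraction toward the goal bearing. The field shape and all constants are illustrative choices, not SSCI's.

import math

def expansion_rate(ang_size, prev_ang_size, dt):
    # Expansion rate ~ (1/theta) * d(theta)/dt for angular size theta;
    # its inverse approximates time-to-contact, so larger means riskier.
    return (ang_size - prev_ang_size) / (dt * ang_size)

def steering_field(heading, goal_bearing, obstacles):
    # Attraction grows with squared bearing error to the goal; each
    # obstacle adds a Gaussian repulsion scaled by its expansion rate.
    field = (heading - goal_bearing) ** 2
    for bearing, rate in obstacles:
        field += max(rate, 0.0) * math.exp(-((heading - bearing) ** 2) / 0.1)
    return field

# Obstacles as (bearing in rad, expansion rate in 1/s) pairs.
obstacles = [(0.05, expansion_rate(0.22, 0.20, 0.05)),
             (-0.40, expansion_rate(0.10, 0.099, 0.05))]
candidates = [i * 0.02 - 0.6 for i in range(61)]  # candidate headings
best = min(candidates, key=lambda h: steering_field(h, 0.0, obstacles))
print(f"steer toward {best:+.2f} rad")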

Bio: Dr. N. Andrew Browning obtained his PhD in Computational Neuroscience from Boston University with a thesis on how primates and humans process visual information for reactive navigation; the resulting neural model was built into a multi-layer convolutional neural network (called ViSTARS) and demonstrated, in simulation, to generate human-like trajectories in cluttered reactive navigation tasks. Following his PhD, applied post-doctoral research, and a brief stint as a Research Assistant Professor at BU, Dr. Browning started a research group at Scientific Systems Company Inc. (SSCI) to develop Active Perception and Cognitive Learning (APCL) systems for autonomous robots. The APCL lab at SSCI has developed into a global leader in applied perception and autonomy solutions for small UAVs. Dr. Browning is now Deputy Director of Research and Development at SSCI with a broad remit across advanced controls, single-vehicle and collaborative autonomy, visual perception, acoustic perception, and vision-aided GNC.

Media:

DARPA project takes flight in Medfield

Robotics Seminar: Salah Bazzi

Title: Soft Nonholonomic Constraints: Theory and Applications to Optimal Control

Speaker: Salah Bazzi, PhD
Postdoctoral Researcher
Action Lab, Northeastern University

Time: 5 October 17, Thursday, 3:30pm
Place: ISEC 140

Abstract: Efficient motion is an important aspect of robotic locomotion. Even for the simplest wheeled robots, an exact description of the fastest paths that the robot can follow is known only in special cases. Moreover, these known optimal solutions are infeasible in practice since they are derived from kinematic models of the robots. Attempts to find optimal trajectories for dynamically-extended models have shown that the optimal control will exhibit chattering, which is also impractical due to the infinite number of control switches required.

In this talk, I will present a new approach for addressing the problem of chattering arising in the time-optimal trajectories of dynamically-extended models of wheeled robots, thereby bringing the theoretical optimal solutions closer to practical feasibility. The approach is based on relaxing the nonholonomic constraints in the robot model so that they incorporate skidding effects. Rather than resorting to existing skidding models, we develop a new method for relaxing nonholonomic constraints that keeps the skidding model amenable to analysis with tools and techniques from classical optimal control theory. We refer to these relaxed constraints as ‘soft nonholonomic constraints’. The proposed methodology is the first approach that eliminates chattering by addressing its root cause, namely the order of the singular segments of the optimal control solution.
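For intuition, a textbook example of the kind of constraint being relaxed (our illustration; the talk's exact formulation may differ): a unicycle-type robot with pose \((x, y, \theta)\) is classically assumed to have zero lateral velocity, and a soft version replaces that equality with a slip variable \(s(t)\) that the cost functional penalizes:

\[
\underbrace{\dot{x}\sin\theta - \dot{y}\cos\theta = 0}_{\text{hard (ideal rolling)}}
\quad\longrightarrow\quad
\underbrace{\dot{x}\sin\theta - \dot{y}\cos\theta = s(t)}_{\text{soft (skidding allowed)}},
\qquad
J = \int_0^{T}\!\bigl(1 + \lambda\, s(t)^{2}\bigr)\,dt,
\]

where \(\lambda\) weights how strongly skidding is penalized; as \(\lambda \to \infty\) the hard constraint is recovered.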

Bio: Salah Bazzi is a Postdoctoral Research Associate in the Action Lab at Northeastern University. He obtained his PhD in Mechanical Engineering from the American University of Beirut in May 2017. His research interests are robotic locomotion and manipulation, optimal control, nonlinear dynamics, nonholonomic mechanics, and human motor control and learning.

Robotics Seminar: Sertaç Karaman, MIT

Title: Control Synthesis and Visual Perception for Agile Autonomous Vehicles

Speaker: Sertac Karaman
Associate Professor of Aeronautics and Astronautics
Laboratory for Information and Decision Systems
Institute for Data, Systems and Society
Massachusetts Institute of Technology

Time: 27 April 17, Thursday, 3pm
Place: ISEC 136

Abstract:
Agile autonomous vehicles that can exploit the full envelope of their dynamics to navigate through complex environments at high speeds require fast, accurate perception and control algorithms. In the first part of the talk, we focus on control synthesis problems for agile vehicles. We present computationally-efficient algorithms for automated controller synthesis for systems with high-dimensional state spaces. In a nutshell, the new algorithms represent the value function in a compressed form enabled by a novel compression technique called the function train decomposition, and compute the controller using dynamic programming techniques while keeping the value function in this compressed format. We show that the new algorithms have run times that scale polynomially with the dimensionality of the state space and the rank of the value function. In computational experiments, the new algorithms provide up to ten orders of magnitude improvement when compared to standard dynamic programming algorithms, such as value iteration. In the second part of the talk, we focus on perception problems. We present new visual-inertial navigation algorithms that carefully select features to maximize localization performance. The resulting algorithms are based on submodular optimization techniques, which lead to efficient algorithms with performance guarantees.
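The compressed dynamic programming idea can be caricatured in a few lines: keep the value function in low-rank form and re-compress after every Bellman backup. The hypothetical Python sketch below uses a truncated SVD on a 2-D grid as a stand-in for the function train decomposition (which targets much higher-dimensional state spaces); the dynamics, costs, and wrap-around grid are invented for illustration.

import numpy as np

# Toy deterministic problem on an N x N grid: step up/down/left/right
# at unit cost toward an absorbing goal. V is re-compressed to low
# rank after each backup, mimicking compress-then-iterate. (np.roll
# wraps at the edges, which is fine for a caricature.)
N, RANK, GAMMA = 32, 4, 0.95
V = np.zeros((N, N))
goal = (N - 1, N - 1)

def compress(M, rank):
    # Truncated SVD: keep only the top `rank` singular components.
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U[:, :rank] * s[:rank]) @ Vt[:rank, :]

for _ in range(200):
    # Bellman backup: V(x) = min_a [ cost + gamma * V(next(x, a)) ].
    shifted = [np.roll(V, 1, 0), np.roll(V, -1, 0),
               np.roll(V, 1, 1), np.roll(V, -1, 1)]
    V_new = 1.0 + GAMMA * np.minimum.reduce(shifted)
    V_new[goal] = 0.0              # absorbing goal state
    V = compress(V_new, RANK)      # keep the iterate in low-rank form

print("approximate cost-to-go from (0, 0):", round(float(V[0, 0]), 2))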

Bio: Sertac Karaman is the Class of ’48 Career Development Chair Associate Professor of Aeronautics and Astronautics at the Massachusetts Institute of Technology. He obtained B.S. degrees in mechanical engineering and in computer engineering from the Istanbul Technical University, Turkey, in 2007; an S.M. degree in mechanical engineering from MIT in 2009; and a Ph.D. degree in electrical engineering and computer science, also from MIT, in 2012. His research interests lie in the broad areas of robotics and control theory. In particular, he studies the applications of probability theory, stochastic processes, stochastic geometry, formal methods, and optimization for the design and analysis of high-performance cyber-physical systems. The application areas of his research include driverless cars, unmanned aerial vehicles, distributed aerial surveillance systems, air traffic control, certification and verification of control systems software, and many others. He is the recipient of an IEEE Robotics and Automation Society Early Career Award in 2017, an Office of Naval Research Young Investigator Award in 2017, an Army Research Office Young Investigator Award in 2015, a National Science Foundation Faculty Career Development (CAREER) Award in 2014, the AIAA Wright Brothers Graduate Award in 2012, and an NVIDIA Fellowship in 2011.

Robotics Seminar: Philip Long, Northeastern University

Title: Modeling of closed chain systems and sensor based control of robot manipulators

Time: 20 April 17, Thursday, 3:30pm
Place: ISEC 136

Abstract:
In this talk, two subjects are studied: closed chain systems and sensor based control of robot manipulators. Closed chain systems are defined as two or more chains attached to a single movable platform. These systems have several advantages over serial manipulators, including increased stiffness, precision and load distribution. However, the additional closed-loop constraint means advanced modeling and control strategies are required. This talk focuses on three such systems: cooperative serial manipulators grasping a rigid object, cooperative serial manipulators grasping a deformable object, and cable-driven parallel robots. The second part of the talk focuses on sensor based robotic control. Sensor based control allows robot manipulators to interact with an unknown dynamic environment. Two applications are studied. In the first case, the separation of deformable bodies using multiple robots is investigated. Force/vision control schemes are proposed that allow the system to adapt to on-line deformations of the target object. In the second case, we present sensor based control strategies that allow a robot to adapt its behavior to the presence of a human operator in the workspace.
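As a concrete instance of the closed-loop constraint mentioned in the abstract (a standard textbook formulation, not necessarily the one used in the talk), consider two serial arms rigidly grasping a common object whose twist is \(\dot{x}_o\):

\[
J_1(q_1)\,\dot{q}_1 = G_1^{T}\,\dot{x}_o,
\qquad
J_2(q_2)\,\dot{q}_2 = G_2^{T}\,\dot{x}_o,
\]

where \(J_i\) is arm \(i\)'s Jacobian and \(G_i\) its grasp map. Eliminating \(\dot{x}_o\) ties \(\dot{q}_1\) to \(\dot{q}_2\); this coupling is exactly the extra constraint, absent in serial manipulators, that demands the advanced modeling and control strategies discussed in the talk.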

Bio:
Philip Long obtained a first class honours degree in mechanical engineering from the National University of Ireland Galway (NUIG) in 2007, followed by two years working as a mechanical engineering analyst at Xyratex Ltd, Havant, UK. From 2009-2011, he completed the EMARO program, a two-year European research masters in advanced robotics, at the University of Genova, Italy and Ecole Centrale de Nantes, France. He received his PhD, which focused on cooperative manipulators, from Ecole Centrale de Nantes in 2014. From 2014-2017 he worked as a robotics research engineer at the Jules Verne Institute of Technological Research (IRT JV). As technical lead of the collaborative robots division, he was responsible for the implementation of multi-modal control schemes for industrial partners. He is currently a postdoctoral researcher in the RIVeR Lab at Northeastern University, where his research focuses on modeling and control of humanoid robots.

Open House: Valkyrie Awaits

Professors Dagmar Sternad and Taskin Padir and their hard-working students invite you to the Action Lab Open House. Please join us to meet NASA Johnson Space Center’s humanoid robot Valkyrie (R5), and interact with the team members who are developing autonomous behaviors and human-robot collaboration techniques for future space missions and many other applications here on Earth.

When: Thursday, March 30, 2017, 3-5pm

Where: Action Lab, Richards Hall 425

What: A meet and greet with Valkyrie and her humans

Robotics Seminar: Stefanie Tellex, Brown University

Title: Learning Models of Language, Action and Perception for Human-Robot Collaboration
Speaker: Stefanie Tellex, Brown University

Time: 8 February 17, Wednesday, 11am
Place: Richards Hall 458

Abstract: Robots can act as a force multiplier for people, whether a robot assisting an astronaut with a repair on the International Space Station, a UAV taking flight over our cities, or an autonomous vehicle driving through our streets. To achieve complex tasks, it is essential for robots to move beyond merely interacting with people and toward collaboration, so that one person can easily and flexibly work with many autonomous robots. The aim of my research program is to create autonomous robots that collaborate with people to meet their needs by learning decision-theoretic models for communication, action, and perception. Communication for collaboration requires models of language that map between sentences and aspects of the external world. My work enables a robot to learn compositional models for word meanings that allow a robot to explicitly reason and communicate about its own uncertainty, increasing the speed and accuracy of human-robot communication. Action for collaboration requires models that match how people think and talk, because people communicate about all aspects of a robot’s behavior, from low-level motion preferences (e.g., “Please fly up a few feet”) to high-level requests (e.g., “Please inspect the building”). I am creating new methods for learning how to plan in very large, uncertain state-action spaces by using hierarchical abstraction. Perception for collaboration requires the robot to detect, localize, and manipulate the objects in its environment that are most important to its human collaborator. I am creating new methods for autonomously acquiring perceptual models in situ so the robot can perceive the objects most relevant to the human’s goals. My unified decision-theoretic framework supports data-driven training and robust, feedback-driven human-robot collaboration.

Bio: Stefanie Tellex is an Assistant Professor of Computer Science and Assistant Professor of Engineering at Brown University. Her group, the Humans To Robots Lab, creates robots that seamlessly collaborate with people to meet their needs using language, gesture, and probabilistic inference, aiming to empower every person with a collaborative robot. She completed her Ph.D. at the MIT Media Lab in 2010, where she developed models for the meanings of spatial prepositions and motion verbs. Her postdoctoral work at MIT CSAIL focused on creating robots that understand natural language. She has published at SIGIR, HRI, RSS, AAAI, IROS, ICAPS and ICMI, winning Best Student Paper at SIGIR and ICMI, Best Paper at RSS, and an award from the CCC Blue Sky Ideas Initiative. Her awards include being named one of IEEE Spectrum’s AI’s 10 to Watch in 2013, the Richard B. Salomon Faculty Research Award at Brown University, a DARPA Young Faculty Award in 2015, and a 2016 Sloan Research Fellowship. Her work has been featured in the press on National Public Radio, MIT Technology Review, Wired UK and the Smithsonian. She was named one of Wired UK’s Women Who Changed Science in 2015, and her work was listed as one of MIT Technology Review’s Ten Breakthrough Technologies in 2016.

Robotics Seminar: Professor Masayuki Inaba, University of Tokyo

We are pleased to host Professor Masayuki Inaba and his colleagues at Northeastern’s Robotics Collaborative. Professor Inaba leads the JSK Robotics Lab at the University of Tokyo, which is the home of numerous humanoid robots including Kengoro and JAXON. Here are the details of Professor Inaba’s seminar talk.

Title: Recent Robotics Activities and Background at JSK Lab, University of Tokyo
Speaker: Professor Masayuki Inaba

Time: 13 December 2016, 2pm
Place: Richards Hall 300

Abstract: The talk will introduce recent ongoing research and development activities at the JSK Robotics Lab, University of Tokyo (http://www.jsk.t.u-tokyo.ac.jp/research.html), including our DRC humanoid, JAXON, and our musculoskeletal humanoid, Kengoro. At JSK Lab, we are working on several research topics: system architecture and integration for task execution; continuous perception with attention control; variable robots for system abstraction; hardware platforms such as HRP2 and PR2 for general-purpose systems; a software platform with a Lisp-based programming environment; an asynchronous multi-component RTM-ROS environment with continuous integration tools; specialization for industry collaboration; and hardware challenges in robot vision, whole-body tactile and flesh sensors, the flexibility and redundancy that come with musculoskeletal complexity, high-power drives with heat transfer and control, and flying and hovering in space.

Bio: Masayuki Inaba is a professor in the Department of Creative Informatics and the Department of Mechano-Informatics of the Graduate School of Information Science and Technology, The University of Tokyo. He received a B.S. in Mechanical Engineering in 1981 and a Dr. of Engineering from The University of Tokyo in 1986. He was appointed a lecturer in 1986, an associate professor in 1989, and a professor in 2000 at The University of Tokyo. His research interests include key technologies of robotic systems and the research infrastructure needed to sustain continuous development in advanced robotics.

Professors Dennis Hong (UCLA), Masayuki Inaba (University of Tokyo) and Taskin Padir. Photo taken on 24 April 2014 at the JSK Robotics Lab, University of Tokyo.