Robotics Seminar: Sertaç Karaman, MIT

Control Synthesis and Visual Perception for Agile Autonomous Vehicles

Speaker: Sertac Karaman
Associate Professor of Aeronautics and Astronautics
Laboratory for Information and Decision Systems
Institute for Data, Systems and Society
Massachusetts Institute of Technology

Time: 27 April 2017, Thursday, 3pm
Place: ISEC 136

Agile autonomous vehicles that can exploit the full envelope of their dynamics to navigate through complex environments at high speeds require fast, accurate perception and control algorithms. In the first part of the talk, we focus on control synthesis problems for agile vehicles. We present computationally efficient algorithms for automated controller synthesis for systems with high-dimensional state spaces. In a nutshell, the new algorithms represent the value function in a compressed form enabled by a novel compression technique called the function train decomposition, and compute the controller using dynamic programming techniques while keeping the value function in this compressed format. We show that the new algorithms have run times that scale polynomially with the dimensionality of the state space and the rank of the value function. In computational experiments, the new algorithms provide up to ten orders of magnitude improvement when compared to standard dynamic programming algorithms, such as value iteration. In the second part of the talk, we focus on perception problems. We present new visual-inertial navigation algorithms that carefully select features to maximize localization performance. The resulting algorithms are based on submodular optimization techniques, which lead to efficient algorithms with performance guarantees.
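For readers unfamiliar with the value-iteration baseline named above, here is a minimal sketch on a made-up two-state MDP. The states, actions, rewards, and discount factor are hypothetical illustrations, not from the talk; the point is that each Bellman backup sweeps the full state table, which is what makes tabular dynamic programming scale exponentially with state-space dimension and motivates compressed representations like the function train.

```python
import numpy as np

# Toy MDP (hypothetical numbers): P[a][s, s'] is the probability of
# moving from state s to s' under action a; R[s, a] is the reward.
P = np.array([
    [[0.9, 0.1], [0.2, 0.8]],   # transitions under action 0
    [[0.5, 0.5], [0.1, 0.9]],   # transitions under action 1
])
R = np.array([[1.0, 0.0],
              [0.0, 2.0]])
gamma = 0.95  # discount factor

def value_iteration(P, R, gamma, tol=1e-8):
    """Standard tabular value iteration: iterate Bellman backups
    over the entire state table until convergence."""
    n_states = P.shape[1]
    V = np.zeros(n_states)
    while True:
        # Q[s, a] = R[s, a] + gamma * sum_t P[a, s, t] * V[t]
        Q = R + gamma * np.einsum("ast,t->sa", P, V)
        V_new = Q.max(axis=1)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q.argmax(axis=1)
        V = V_new

V, policy = value_iteration(P, R, gamma)
print(V, policy)
```

The returned value function satisfies the Bellman optimality condition to within the tolerance; the compressed algorithms described in the talk avoid ever materializing the full table `V`.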

Bio: Sertac Karaman is the Class of ’48 Career Development Chair Associate Professor of Aeronautics and Astronautics at the Massachusetts Institute of Technology. He received B.S. degrees in mechanical engineering and in computer engineering from Istanbul Technical University, Turkey, in 2007; an S.M. degree in mechanical engineering from MIT in 2009; and a Ph.D. degree in electrical engineering and computer science, also from MIT, in 2012. His research interests lie in the broad areas of robotics and control theory. In particular, he studies the applications of probability theory, stochastic processes, stochastic geometry, formal methods, and optimization for the design and analysis of high-performance cyber-physical systems. The application areas of his research include driverless cars, unmanned aerial vehicles, distributed aerial surveillance systems, air traffic control, certification and verification of control systems software, and many others. He is the recipient of an IEEE Robotics and Automation Society Early Career Award in 2017, an Office of Naval Research Young Investigator Award in 2017, an Army Research Office Young Investigator Award in 2015, a National Science Foundation Faculty Career Development (CAREER) Award in 2014, an AIAA Wright Brothers Graduate Award in 2012, and an NVIDIA Fellowship in 2011.

Robotics Seminar: Philip Long, Northeastern University

Modeling of Closed-Chain Systems and Sensor-Based Control of Robot Manipulators

Time: 20 April 2017, Thursday, 3:30pm
Place: ISEC 136

In this talk, two subjects are studied: closed-chain systems and sensor-based control of robot manipulators. Closed-chain systems are defined as two or more chains attached to a single movable platform. These systems have several advantages over serial manipulators, including increased stiffness, precision, and load distribution. However, the additional closed-loop constraint means advanced modeling and control strategies are required. This talk focuses on three such systems: cooperative serial manipulators grasping a rigid object, cooperative serial manipulators grasping a deformable object, and cable-driven parallel robots. The second part of the talk focuses on sensor-based robotic control. Sensor-based control allows robot manipulators to interact with an unknown dynamic environment. Two applications are studied. In the first case, the separation of deformable bodies using multiple robots is investigated. Force/vision control schemes are proposed that allow the system to adapt to online deformations of the target object. In the second case, we present sensor-based control strategies that allow a robot to adapt its behavior to the presence of a human operator in the workspace.
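As a generic illustration of sensor-based interaction control (not the speaker's specific force/vision scheme), a one-dimensional admittance controller can servo a measured contact force to a setpoint by adjusting the commanded position. All stiffness values, gains, and step counts below are made-up parameters for a toy spring-contact simulation:

```python
# 1-D admittance control sketch: the robot moves its commanded
# position until the simulated force sensor reads the desired force.
# The environment is modeled as a linear spring (assumed parameters).

K_ENV = 500.0    # environment stiffness [N/m] (hypothetical)
F_DES = 5.0      # desired contact force [N]
GAIN = 0.002     # admittance gain [m/N per control step] (hand-tuned)
N_STEPS = 200    # number of control steps

def simulate(x0=0.0):
    """Run the admittance loop and return the steady-state force."""
    x = x0  # commanded penetration into the surface [m]
    for _ in range(N_STEPS):
        f_meas = K_ENV * max(x, 0.0)   # simulated force sensor reading
        x += GAIN * (F_DES - f_meas)   # admittance update on position
    return K_ENV * max(x, 0.0)

final_force = simulate()
print(f"steady-state contact force: {final_force:.3f} N")
```

The loop converges because the update is a contraction whenever `GAIN * K_ENV` is below 2; real sensor-based controllers add filtering, safety limits, and full 6-DOF dynamics on top of this basic idea.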

Philip Long obtained a first-class honours degree in mechanical engineering from the National University of Ireland Galway (NUIG) in 2007, followed by two years working as a mechanical engineering analyst at Xyratex Ltd, Havant, UK. From 2009 to 2011, he completed the EMARO program, a two-year European research masters in advanced robotics, at the University of Genova, Italy, and Ecole Centrale de Nantes, France. He received his PhD, which focused on cooperative manipulators, from Ecole Centrale de Nantes in 2014. From 2014 to 2017 he worked as a robotics research engineer at the Jules Verne Institute of Technological Research (IRT JV). As technical lead of the collaborative robots division, he was responsible for the implementation of multi-modal control schemes for industrial partners. He is currently a postdoctoral researcher in the RIVeR Lab at Northeastern University, where his research focuses on modeling and control of humanoid robots.

Open House: Valkyrie Awaits

Professors Dagmar Sternad and Taskin Padir and their hard-working students invite you to the Action Lab Open House. Please join us to meet NASA Johnson Space Center’s humanoid robot Valkyrie (R5), and interact with the team members who are developing autonomous behaviors and human-robot collaboration techniques for future space missions and many other applications here on Earth.

When: Thursday, March 30, 2017, 3-5pm

Where: Action Lab, Richards Hall 425

What: A meet and greet with Valkyrie and her humans

Robotics Seminar: Stefanie Tellex, Brown University

Learning Models of Language, Action and Perception for Human-Robot Collaboration
Stefanie Tellex, Brown University
Time: 8 February 2017, Wednesday, 11am
Place: Richards Hall 458
Abstract: Robots can act as a force multiplier for people, whether a robot assisting an astronaut with a repair on the International Space Station, a UAV taking flight over our cities, or an autonomous vehicle driving through our streets. To achieve complex tasks, it is essential for robots to move beyond merely interacting with people and toward collaboration, so that one person can easily and flexibly work with many autonomous robots. The aim of my research program is to create autonomous robots that collaborate with people to meet their needs by learning decision-theoretic models for communication, action, and perception. Communication for collaboration requires models of language that map between sentences and aspects of the external world. My work enables a robot to learn compositional models for word meanings that allow a robot to explicitly reason and communicate about its own uncertainty, increasing the speed and accuracy of human-robot communication. Action for collaboration requires models that match how people think and talk, because people communicate about all aspects of a robot’s behavior, from low-level motion preferences (e.g., “Please fly up a few feet”) to high-level requests (e.g., “Please inspect the building”). I am creating new methods for learning how to plan in very large, uncertain state-action spaces by using hierarchical abstraction. Perception for collaboration requires the robot to detect, localize, and manipulate the objects in its environment that are most important to its human collaborator. I am creating new methods for autonomously acquiring perceptual models in situ so the robot can perceive the objects most relevant to the human’s goals. My unified decision-theoretic framework supports data-driven training and robust, feedback-driven human-robot collaboration.
Bio: Stefanie Tellex is an Assistant Professor of Computer Science and Assistant Professor of Engineering at Brown University. Her group, the Humans To Robots Lab, creates robots that seamlessly collaborate with people to meet their needs using language, gesture, and probabilistic inference, aiming to empower every person with a collaborative robot. She completed her Ph.D. at the MIT Media Lab in 2010, where she developed models for the meanings of spatial prepositions and motion verbs. Her postdoctoral work at MIT CSAIL focused on creating robots that understand natural language. She has published at SIGIR, HRI, RSS, AAAI, IROS, ICAPS, and ICMI, winning Best Student Paper at SIGIR and ICMI, Best Paper at RSS, and an award from the CCC Blue Sky Ideas Initiative. Her awards include being named one of IEEE Spectrum’s AI’s 10 to Watch in 2013, the Richard B. Salomon Faculty Research Award at Brown University, a DARPA Young Faculty Award in 2015, and a 2016 Sloan Research Fellowship. Her work has been featured in the press on National Public Radio, MIT Technology Review, Wired UK, and the Smithsonian. She was named one of Wired UK’s Women Who Changed Science in 2015 and listed as one of MIT Technology Review’s Ten Breakthrough Technologies in 2016.

Robotics Seminar: Professor Masayuki Inaba, University of Tokyo

We are pleased to host Professor Masayuki Inaba and his colleagues at Northeastern’s Robotics Collaborative. Professor Inaba leads the JSK Robotics Lab at the University of Tokyo, home to numerous humanoid robots including Kengoro and JAXON. Here are the details of Professor Inaba’s seminar talk.

Recent Robotics Activities and Background at JSK Lab, University of Tokyo
Professor Masayuki Inaba

Time: 13 December 2016, 2pm
Place: Richards Hall 300

The talk will introduce recent ongoing research and development activities at the JSK Robotics Lab, University of Tokyo, including our DRC humanoid, JAXON, and our musculoskeletal humanoid, Kengoro. At the JSK Lab, we are working on several research topics: system architecture and integration for task execution; continuous perception with attention control; variable robots for system abstraction; hardware platforms such as HRP2 and PR2 for general-purpose systems; a software platform with a Lisp-based programming environment; an asynchronous multi-component RTM-ROS environment with continuous integration tools; specialization for industry collaboration; and hardware challenges in robot vision, whole-body tactile and flesh sensors, flexibility and redundancy in complex musculoskeletal structures, high-power drives with heat transfer and control, flying and hovering in space, and more.

Masayuki Inaba is a professor in the Department of Creative Informatics and the Department of Mechano-Informatics of the Graduate School of Information Science and Technology at The University of Tokyo. He received a B.S. in Mechanical Engineering in 1981 and a Dr. of Engineering in 1986, both from The University of Tokyo. He was appointed a lecturer in 1986, an associate professor in 1989, and a professor in 2000 at The University of Tokyo. His research interests include key technologies of robotic systems and the research infrastructure needed to sustain continuous development in advanced robotics.

Professors Dennis Hong (UCLA), Masayuki Inaba (University of Tokyo) and Taskin Padir. Photo taken on 24 April 2014 at the JSK Robotics Lab, University of Tokyo.


We received very exciting news yesterday… NASA awarded our team a Valkyrie humanoid robot. We are looking forward to collaborating with NASA, our colleagues at MIT and Space Robotics Challenge competitors to advance the autonomy on Valkyrie. Here is a brief summary of our project acknowledging our entire team:

Accessible Testing on Humanoid-Robot-R5 and Evaluation of NASA Administered (ATHENA) Space Robotics Challenge

Taskin Padir, Associate Professor, Electrical and Computer Engineering, Northeastern University
Robert Platt, Assistant Professor, College of Computer and Information Science, Northeastern University
Holly Yanco, Professor, Computer Science Department, University of Massachusetts, Lowell

Our overarching goal in this basic and applied research and technology development effort is to advance humanoid robot autonomy for the success of future space missions. We will achieve this goal by (1) establishing a tight collaborative environment among our institutions (Northeastern University (NEU) and the University of Massachusetts Lowell (UML)) and NASA’s Johnson Space Center, (2) leveraging our collective DARPA Robotics Challenge (DRC) experience in humanoid robot control, mobility, manipulation, perception, and operator interfaces, (3) developing a systematic model-based task validation methodology for the Space Robotics Challenge (SRC) tasks, (4) implementing novel perception-based grasping and human-robot interaction techniques, (5) providing access to collaborative testing facilities for the SRC competition teams, and (6) making the developed software available to the humanoid robotics community. Successful completion of this project will not only advance the technological readiness of humanoid robots for practical applications but also nurture a community of competitors and collaborators to enhance the outcomes of the SRC to be administered by NASA in 2016. We propose to unify our team’s complementary expertise in robot control (Padir), grasping (Platt), and human-robot interaction (Yanco) to advance the autonomy of NASA’s R5. Since August 2012, Padir has been co-leading the WPI DRC Team, the only Track C team that participated in the DRC Finals with an Atlas humanoid robot. Platt participated in Team TRACLabs’ DRC entry to enhance the Atlas robot’s autonomous manipulation capabilities. Yanco led the DARPA-funded study of human-robot interaction (HRI) for the DRC, at both the Trials and the Finals. Our team is unique in terms of facilities and capabilities for hosting NASA’s R5 at the New England Robotics Validation and Experimentation (NERVE) Center at UMass Lowell in Lowell, Massachusetts, less than 30 miles from Boston.
At 10,000 square feet, the NERVE Center is the largest indoor robot test facility in New England.


New NSF Award: HANDs

Within the scope of this collaborative NSF award, we will develop prosthetic and wearable hands controlled via nested control that seamlessly blends neural control based on human brain activity and dynamic control based on sensors on robots. These Hand Augmentation using Nested Decision (HAND) systems will also provide rudimentary tactile feedback to the user. The HAND design framework will contribute to the assistive and augmentative robotics field.

RIVeR Lab moves to Northeastern University…

Since its founding in 2010, the Robotics and Intelligent Vehicles Research (RIVeR) Laboratory has been home to many researchers, from PhD students to high-school interns, working on a variety of projects from assistive robotics to robots for disaster response. As of September 1, 2015, the RIVeR Lab has moved to Northeastern University. We are now recruiting researchers at all levels. For more information, contact Professor Padir at t.padir@northeastern.edu



Recruiting MQP-VINE Students

WPI’s Robotics and Intelligent Vehicles Research (RIVeR) Laboratory is leading a new initiative called the Major Qualifying Project-Vertically Integrated Experience (MQP-VINE, in short). Within the scope of an MQP-VINE, students will have an opportunity to begin their MQP experience as early as their sophomore year by getting involved in our cross-disciplinary sponsored research projects on assistive robotics, autonomous exploration rovers, aerial vehicles, human-in-the-loop robot control, and cyber-physical systems. In its full implementation, we envision that students will gain theoretical knowledge and practical skills in robotics engineering by completing their MQP in functional multidisciplinary project teams of sophomores, juniors, and seniors working on well-scoped yet challenging projects.

In a nutshell, here’s how it works.
– If you are a rising sophomore now, you will complete your MQP over three years by registering 1/3 unit each year.
– If you are a rising junior, you will complete your MQP over two years by registering 1/3+2/3 units.
– It is expected that you will spread your units over at least two terms each year.
– It is expected that you will commit 9-11 hours (on average) per week for each 1/6 unit that you register.
– It is expected that you will “stay with problems” for the duration of the project regardless of the units you register.

At this time, we are seeking applications from rising sophomores and rising juniors majoring in RBE, ECE, CS, and ME programs. Double majors are welcome to apply. We expect to recruit up to three sophomores and three juniors this year.

Please send the following information if you are interested in joining the MQP-VINE in RIVeR Lab.

– A resume including your name, contact information, academic standing, project work, skills, and other relevant information.
– An unofficial copy of your transcript from Bannerweb.
– A short essay (200 words or less) on your research interests, goals, and preparation.

All relevant questions can be directed to Professor Padir.