Control Synthesis and Visual Perception for Agile Autonomous Vehicles
Speaker: Sertac Karaman
Associate Professor of Aeronautics and Astronautics
Laboratory for Information and Decision Systems
Institute for Data, Systems and Society
Massachusetts Institute of Technology
Time: Thursday, April 27, 2017, 3pm
Place: ISEC 136
Agile autonomous vehicles that can exploit the full envelope of their dynamics to navigate through complex environments at high speeds require fast, accurate perception and control algorithms. In the first part of the talk, we focus on control synthesis problems for agile vehicles. We present computationally efficient algorithms for automated controller synthesis for systems with high-dimensional state spaces. In a nutshell, the new algorithms represent the value function in a compressed form enabled by a novel compression technique called the function train decomposition, and compute the controller using dynamic programming techniques while keeping the value function in this compressed format. We show that the new algorithms have run times that scale polynomially with the dimensionality of the state space and the rank of the value function. In computational experiments, the new algorithms provide up to ten orders of magnitude improvement when compared to standard dynamic programming algorithms, such as value iteration. In the second part of the talk, we focus on perception problems. We present new visual-inertial navigation algorithms that carefully select features to maximize localization performance. The resulting algorithms are based on submodular optimization techniques, which lead to efficient algorithms with performance guarantees.
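To give a flavor of the submodular-selection idea from the second part of the abstract, here is a minimal sketch, not the talk's actual algorithm: the feature names, the toy coverage-style objective, and the sets below are invented for illustration. Greedily picking the feature with the largest marginal gain is the standard approach for maximizing a monotone submodular objective under a budget, and it carries a (1 - 1/e) approximation guarantee (Nemhauser et al.).

```python
# Hypothetical illustration of greedy submodular feature selection.
# The features and their "measurement sets" are invented for this sketch;
# coverage (size of the union) is a classic monotone submodular objective.
features = {
    "f1": {1, 2, 3},
    "f2": {3, 4},
    "f3": {4, 5, 6, 7},
    "f4": {1, 7},
}

def information(selected):
    """Monotone submodular objective: total coverage of the selected features."""
    covered = set()
    for f in selected:
        covered |= features[f]
    return len(covered)

def greedy_select(budget):
    """Repeatedly add the feature with the largest marginal gain.

    For a monotone submodular objective this greedy rule achieves at
    least (1 - 1/e) of the optimal value for the given budget.
    """
    selected = []
    for _ in range(budget):
        remaining = [f for f in features if f not in selected]
        if not remaining:
            break
        best = max(remaining,
                   key=lambda f: information(selected + [f]) - information(selected))
        selected.append(best)
    return selected

chosen = greedy_select(2)
print(chosen, information(chosen))  # picks "f3" first (gain 4), then "f1" (gain 3)
```

In a visual-inertial pipeline, `information` would instead score how much a candidate landmark reduces localization uncertainty, but the greedy structure and its guarantee are the same.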
Bio: Sertac Karaman is the Class of ’48 Career Development Chair Associate Professor of Aeronautics and Astronautics at the Massachusetts Institute of Technology. He obtained B.S. degrees in mechanical engineering and in computer engineering from the Istanbul Technical University, Turkey, in 2007; an S.M. degree in mechanical engineering from MIT in 2009; and a Ph.D. degree in electrical engineering and computer science, also from MIT, in 2012. His research interests lie in the broad areas of robotics and control theory. In particular, he studies the applications of probability theory, stochastic processes, stochastic geometry, formal methods, and optimization for the design and analysis of high-performance cyber-physical systems. The application areas of his research include driverless cars, unmanned aerial vehicles, distributed aerial surveillance systems, air traffic control, certification and verification of control systems software, and many others. He is the recipient of an IEEE Robotics and Automation Society Early Career Award in 2017, an Office of Naval Research Young Investigator Award in 2017, an Army Research Office Young Investigator Award in 2015, a National Science Foundation Faculty Career Development (CAREER) Award in 2014, the AIAA Wright Brothers Graduate Award in 2012, and an NVIDIA Fellowship in 2011.
Modeling of closed chain systems and sensor based control of robot manipulators
In this talk, two subjects are studied: closed chain systems and sensor-based control of robot manipulators. Closed chain systems are defined as two or more chains attached to a single movable platform. These systems have several advantages over serial manipulators, including increased stiffness, precision, and load distribution. However, the additional closed-loop constraint means advanced modeling and control strategies are required. This talk focuses on three such systems: cooperative serial manipulators grasping a rigid object, cooperative serial manipulators grasping a deformable object, and cable-driven parallel robots. The second part of the talk focuses on sensor-based robotic control. Sensor-based control allows robot manipulators to interact with an unknown, dynamic environment. Two applications are studied. In the first case, the separation of deformable bodies using multiple robots is investigated. Force/vision control schemes are proposed that allow the system to adapt to on-line deformations of the target object. In the second case, we present sensor-based control strategies that allow a robot to adapt its behavior to the presence of a human operator in the workspace.
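As an illustrative sketch of the closed-loop constraint mentioned above (the notation here is generic, not taken from the talk): when $m$ serial arms rigidly grasp a common object with pose $x$, every arm's forward kinematics must agree on that pose,

```latex
x = f_1(q_1) = f_2(q_2) = \dots = f_m(q_m),
\qquad
J_i(q_i)\,\dot{q}_i = J_j(q_j)\,\dot{q}_j \quad \text{for all } i, j,
```

where $q_i$ are the joint coordinates of arm $i$ and $J_i$ its Jacobian. The second (differentiated) form shows why the joint velocities of the arms are coupled, which is precisely what rules out controlling each manipulator independently and motivates the dedicated modeling and control strategies of the talk.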
Philip Long obtained a 1st class honours degree in mechanical engineering from the National University of Ireland Galway (NUIG) in 2007, followed by two years working as a mechanical engineering analyst at Xyratex Ltd Havant UK. From 2009-2011, he completed the EMARO program, a two year European research masters in advanced robotics in the University of Genova, Italy and Ecole Centrale de Nantes, France. He received his PhD, which focused on cooperative manipulators, from Ecole Centrale de Nantes in 2014. From 2014-2017 he worked as a robotics research engineer at the Jules Verne Institute of Technological Research (IRT JV). As technical lead of the collaborative robots division, he was responsible for the implementation of multi-modal control schemes for industrial partners. He is currently a postdoctoral researcher at the RIVeR lab at Northeastern University, where his research focuses on modeling and control of humanoid robots.
Professors Dagmar Sternad and Taskin Padir and their hard-working students invite you to the Action Lab Open House. Please join us to meet NASA Johnson Space Center’s humanoid robot Valkyrie (R5), and interact with the team members who are developing autonomous behaviors and human-robot collaboration techniques for future space missions and many other applications here on Earth.
When: Thursday, March 30, 2017, 3-5pm
Where: Action Lab, Richards Hall 425
What: A meet and greet with Valkyrie and her humans
We are pleased to host Professor Masayuki Inaba and his colleagues at Northeastern’s Robotics Collaborative. Professor Inaba leads the JSK Robotics Lab at the University of Tokyo, which is the home of numerous humanoid robots including Kengoro and JAXON. Here are the details of Professor Inaba’s seminar talk.
Recent Robotics Activities and Background at JSK Lab, University of Tokyo
Professor Masayuki Inaba
Time: 13 December 2016, 2pm
Place: Richards Hall 300
The talk will introduce recent ongoing research and development activities at the JSK Robotics Lab, University of Tokyo (http://www.jsk.t.u-tokyo.ac.jp/research.html), including our DRC humanoid, JAXON, and our musculoskeletal humanoid, Kengoro. We are working on several research topics: system architecture and integration for task execution; continuous perception with attention control; variable robots for system abstraction; hardware platforms like HRP2 and PR2 for general-purpose systems; a software platform with a Lisp-based programming environment; an asynchronous multi-component RTM-ROS environment with continuous-integration tools; specialization for industry collaboration; and hardware challenges in robot vision, whole-body tactile and flesh sensors, flexibility and redundancy with the complexity of musculoskeletal structures, high-power drives with heat transfer and control, flying and hovering in space, and more.
Masayuki Inaba is a professor in the Department of Creative Informatics and the Department of Mechano-Informatics of the Graduate School of Information Science and Technology at The University of Tokyo. He received his B.S. in mechanical engineering in 1981 and his Doctor of Engineering degree from The University of Tokyo in 1986. He was appointed a lecturer in 1986, an associate professor in 1989, and a professor in 2000 at The University of Tokyo. His research interests include key technologies of robotic systems and the research infrastructure needed to sustain continuous development in advanced robotics.
We received very exciting news yesterday… NASA awarded our team a Valkyrie humanoid robot. We are looking forward to collaborating with NASA, our colleagues at MIT and Space Robotics Challenge competitors to advance the autonomy on Valkyrie. Here is a brief summary of our project acknowledging our entire team:
Accessible Testing on Humanoid-Robot-R5 and Evaluation of NASA Administered (ATHENA) Space Robotics Challenge
Taskin Padir, Associate Professor, Electrical and Computer Engineering, Northeastern University
Robert Platt, Assistant Professor, College of Computer and Information Science, Northeastern University
Holly Yanco, Professor, Computer Science Department, University of Massachusetts, Lowell
Our overarching goal in this basic and applied research and technology development effort is to advance humanoid robot autonomy for the success of future space missions. We will achieve this goal by (1) establishing a tight collaborative environment among our institutions (Northeastern University (NEU) and the University of Massachusetts Lowell (UML)) and NASA’s Johnson Space Center, (2) leveraging our collective DARPA Robotics Challenge (DRC) experience in humanoid robot control, mobility, manipulation, perception, and operator interfaces, (3) developing a systematic model-based task validation methodology for the Space Robotics Challenge (SRC) tasks, (4) implementing novel perception-based grasping and human-robot interaction techniques, (5) providing access to collaborative testing facilities for the SRC competition teams, and (6) making the developed software available to the humanoid robotics community. Successful completion of this project will not only advance the technological readiness of humanoid robots for practical applications but also nurture a community of competitors and collaborators to enhance the outcomes of the SRC to be administered by NASA in 2016. We propose to unify our team’s complementary expertise in robot control (Padir), grasping (Platt), and human-robot interaction (Yanco) to advance the autonomy of NASA’s R5. Since August 2012, Padir has been co-leading the WPI DRC Team, the only Track C team that participated in the DRC Finals with an Atlas humanoid robot. Platt participated in Team TRACLabs’ DRC entry to enhance the Atlas robot’s autonomous manipulation capabilities. Yanco led the DARPA-funded study of human-robot interaction (HRI) for the DRC, at both the Trials and the Finals. Our team is unique in terms of facilities and capabilities for hosting NASA’s R5 at the New England Robotics Validation and Experimentation (NERVE) Center at UMass Lowell in Lowell, Massachusetts, less than 30 miles from Boston.
At 10,000 square feet, the NERVE Center is the largest indoor robot test facility in New England.
The RIVeR Lab participated in Northeastern’s College of Engineering Undergraduate Research Lab Fair. We thank all 68 undergrads who signed up for our mailing list as volunteers. Read more: http://www.northeastern.edu/news/2015/09/engineering-research-fair-highlights-robust-undergraduate-research-opportunities/
Within the scope of this collaborative NSF award, we will develop prosthetic and wearable hands controlled via nested control that seamlessly blends neural control based on human brain activity and dynamic control based on sensors on robots. These Hand Augmentation using Nested Decision (HAND) systems will also provide rudimentary tactile feedback to the user. The HAND design framework will contribute to the assistive and augmentative robotics field. – See more at: http://www.ece.neu.edu/news/neural-controlled-prosthetics
Since its founding in 2010, the Robotics and Intelligent Vehicles Research (RIVeR) Laboratory has been home to many researchers, from PhD students to high-school interns, who have worked on a variety of projects from assistive robotics to robots for disaster response. As of September 1, 2015, we announce that the RIVeR Lab has moved to Northeastern University. We are now recruiting researchers at all levels. For more information, contact Professor Padir at t.padir@northeastern.edu
WPI’s Robotics and Intelligent Vehicles Research (RIVeR) Laboratory (robot.neu.edu) is leading a new initiative called the Major Qualifying Project-Vertically Integrated Experience (MQP-VINE, for short). Within the scope of an MQP-VINE, students will have an opportunity to begin their MQP experience as early as their sophomore year by getting involved in our cross-disciplinary sponsored research projects on assistive robotics, autonomous exploration rovers, aerial vehicles, human-in-the-loop robot control, and cyber-physical systems. In its full implementation, we envision that students will gain theoretical knowledge and practical skills in robotics engineering by completing their MQP in functional multidisciplinary project teams of sophomores, juniors, and seniors on well-scoped yet challenging projects.
In a nutshell, here’s how it works.
– If you are a rising sophomore now, you will complete your MQP over three years by registering 1/3 units each year.
– If you are a rising junior, you will complete your MQP over two years by registering 1/3+2/3 units.
– It is expected that you will spread your units over at least two terms each year.
– It is expected that you will commit 9-11 hours (on average) per week for each 1/6 unit that you register.
– It is expected that you will “stay with problems” for the duration of the project regardless of the units you register.
At this time, we are seeking applications from rising sophomores and rising juniors majoring in RBE, ECE, CS, and ME programs. Double majors are welcome to apply. We expect to recruit up to three sophomores and three juniors this year.
Please send the following information to email@example.com if you are interested in joining the MQP-VINE in RIVeR Lab.
– A resume including your name, contact information, academic standing, project work, skills, and other relevant information.
– An unofficial copy of your transcript from Bannerweb.
– A short essay (200 words or less) on your research interests, goals, and preparation.
All relevant questions can be directed to Professor Padir, firstname.lastname@example.org.