Robots have evolved from performing only structured, pre-programmed actions to acting reactively based on sensing their environment, and robot learning has played a crucial role in enabling this capability. The classical paradigm is a pipeline of modules for perception, world modelling, planning, and control, each carefully engineered with handcrafted features and task-specific structures. A typical modern robot control system is an ensemble of such modules, often containing learning-based models, each designed to perform a dedicated task in service of a specific goal.

In the last decade, Deep Convolutional Neural Network (DCNN) architectures have achieved remarkable results across several robotic problems. However, the focus has been on designing individual networks for specific problems, including perception, localization, navigation, and manipulation, with several disjoint models then used in conjunction. This limits the overall learning ability of the robot: because most models are trained independently and in a supervised fashion, they cannot share cross-domain information or exploit training signals from auxiliary tasks.

Our vision is a unified multi-model deep learning framework that jointly learns multiple robot tasks across multiple domains, including perception, planning, and control. We propose a multi-model framework that incorporates soft parameter sharing, enabling the network to decide which layers from auxiliary tasks to share and which sub-models can benefit from representations learned by layers in other sub-models. We believe this will enable robots to learn tasks from limited data by leveraging transfer learning across sub-models, and will equip them to learn continuously from what they experience and perceive in the real world.
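To make the soft parameter sharing idea concrete, the sketch below shows one common instantiation, a cross-stitch-style unit in the spirit of Misra et al. (2016), coupling two task-specific sub-networks in PyTorch. This is a minimal illustration under assumed names and dimensions; the two-task setup, layer sizes, and module names are hypothetical and do not reflect the framework's actual architecture.

```python
import torch
import torch.nn as nn

class CrossStitchUnit(nn.Module):
    """Learned linear mixing of activations between two task networks.

    A minimal form of soft parameter sharing: each task keeps its own
    layers, and the network learns how much representation to borrow
    from the other task at this depth.
    """
    def __init__(self):
        super().__init__()
        # 2x2 mixing matrix, initialised near identity so each task
        # starts out relying mostly on its own features; the degree of
        # sharing is then learned from data via gradient descent.
        self.alpha = nn.Parameter(torch.tensor([[0.9, 0.1],
                                                [0.1, 0.9]]))

    def forward(self, x_a, x_b):
        out_a = self.alpha[0, 0] * x_a + self.alpha[0, 1] * x_b
        out_b = self.alpha[1, 0] * x_a + self.alpha[1, 1] * x_b
        return out_a, out_b

class SoftSharedTwoTaskNet(nn.Module):
    """Two task-specific encoders (e.g., a perception task and a control
    task) coupled by a cross-stitch unit after their first stage.
    All dimensions are illustrative assumptions."""
    def __init__(self, in_dim, hidden, out_a, out_b):
        super().__init__()
        self.enc_a = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.enc_b = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.stitch = CrossStitchUnit()
        self.head_a = nn.Linear(hidden, out_a)
        self.head_b = nn.Linear(hidden, out_b)

    def forward(self, x):
        h_a, h_b = self.enc_a(x), self.enc_b(x)
        h_a, h_b = self.stitch(h_a, h_b)
        return self.head_a(h_a), self.head_b(h_b)
```

Initialising the mixing matrix near the identity is a deliberate choice: it lets each sub-model behave as an independent network at the start of training, so any cross-task sharing that emerges is driven by the joint training signal rather than imposed by the architecture.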