Master thesis proposals - external
Hybrid Model-based Model-free Reinforcement Learning for Robotics Manipulation

Background
Recent advances in artificial intelligence have enabled machines to compete with humans even in the most difficult of domains; Google Deepmind's AlphaGo is a case in point. Similar reinforcement learning (RL) approaches have been tried in the robotics community on problems of skill learning. By skill we mean a sensorimotor (control) policy that can perform a single continuous-time task. Numerous successes in skill learning have been reported for a variety of manipulation tasks that are otherwise difficult to program; examples include batting, pancake flipping, pouring and pole balancing. One of the most challenging classes of manipulation tasks is the assembly of mating parts. Not surprisingly, the capability to learn assembly skills is highly sought after.
Problem description
RL methods can be divided into model-based and model-free. In model-based methods, the algorithm learns a dynamics model of the manipulation task and uses it to optimize the policy. In model-free RL (policy search), by contrast, the policy is optimized directly, without the intermediate step of model learning. The trade-off is between the number of trials (sample efficiency) and model bias: model-based methods are sample efficient, while model-free methods do not suffer from model bias. We propose a hybrid approach that combines the benefits of both. It employs a global black-box optimization method called Bayesian optimization (BO) to learn the policy in a fundamentally model-free way, but at the same time uses a learned model to guide the process. We will exploit the fact that BO does not require an analytical or differentiable cost function. Our application will be an assembly task in which an ABB YuMi robot inserts one part into another.
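To make the model-free backbone concrete, here is a minimal sketch of BO-based policy search in Python, assuming a hypothetical rollout_cost() that runs one episode with the given policy parameters and returns a scalar cost; the model-based guidance that the hybrid method would add is deliberately omitted.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def rollout_cost(theta):
    # Hypothetical stand-in for one episode on the robot or in simulation.
    return float(np.sum((theta - 0.3) ** 2))

def expected_improvement(gp, X_cand, y_best):
    # EI for minimization: improvement is a *decrease* in predicted cost.
    mu, sigma = gp.predict(X_cand, return_std=True)
    sigma = np.maximum(sigma, 1e-9)
    z = (y_best - mu) / sigma
    return (y_best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

dim, n_init, n_iter = 2, 5, 20
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(n_init, dim))      # initial random policies
y = np.array([rollout_cost(x) for x in X])

for _ in range(n_iter):
    gp = GaussianProcessRegressor(kernel=RBF(), normalize_y=True).fit(X, y)
    X_cand = rng.uniform(-1.0, 1.0, size=(2000, dim))  # random candidates
    x_next = X_cand[np.argmax(expected_improvement(gp, X_cand, y.min()))]
    X = np.vstack([X, x_next])                      # try the most promising one
    y = np.append(y, rollout_cost(x_next))

print("best cost:", y.min(), "best params:", X[np.argmin(y)])
```

This is only a sketch: a real implementation would tune the kernel, bound the policy parameters sensibly, and replace the toy quadratic cost with episode costs measured on the robot or in simulation.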
Purpose and aims
The objective of this thesis is to develop a skill learning method within the framework of RL. The robot should be able to demonstrate the learning process by continuously attempting the insertion, making incremental progress, and finally converging, i.e. completing the task successfully in a few consecutive trials.
The work will include the following tasks:
* Conduct a literature review on RL-based skill learning and BO.
* Formulate a strategy for using a learned dynamics model to guide the BO. The model learning algorithm can be assumed to be given.
* Set up either a MuJoCo or a Bullet simulation environment. Implement a simpler inverted-pendulum task first and then the main insertion task (a minimal pendulum sketch follows this list).
* Develop a parameterized policy (not necessarily a deep network) and implement the BO-based RL algorithm, incorporating the strategy from the second task above.
* Evaluate the method on a real robot and draw conclusions about the hybrid method.
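As a concrete starting point for the simulation task above, the following is a self-contained inverted-pendulum rollout in plain NumPy; the dynamics constants, the linear policy and the cost weights are illustrative assumptions, and in the actual project the environment would be MuJoCo or Bullet. The scalar returned by episode_cost is the kind of quantity the BO loop sketched earlier would minimize.

```python
import numpy as np

g, l, m, dt = 9.81, 1.0, 1.0, 0.02   # gravity, pole length, mass, step size

def step(state, torque):
    # Semi-implicit Euler step of pendulum dynamics (theta = 0 is upright).
    theta, theta_dot = state
    theta_ddot = (g / l) * np.sin(theta) + torque / (m * l ** 2)
    theta_dot = theta_dot + dt * theta_ddot
    theta = theta + dt * theta_dot
    return np.array([theta, theta_dot])

def episode_cost(policy_params, horizon=250):
    # Roll out a linear state-feedback policy u = -K @ state and accumulate
    # a quadratic cost; this is what a policy-search method would minimize.
    K = np.asarray(policy_params)
    state = np.array([0.1, 0.0])                 # start near the upright pose
    cost = 0.0
    for _ in range(horizon):
        u = float(np.clip(-K @ state, -5.0, 5.0))   # saturated motor torque
        state = step(state, u)
        cost += state[0] ** 2 + 0.1 * state[1] ** 2 + 0.001 * u ** 2
    return cost
```

For example, episode_cost([20.0, 5.0]) evaluates a gain that stabilizes the pendulum near upright, while episode_cost([0.0, 0.0]) (no control at all) gives a much higher cost.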
We are looking for a highly motivated student from a master's programme such as Systems, Control and Robotics or Machine Learning, or a student with a similar background. Knowledge of modeling and control of robotic manipulators is highly advantageous. Any prior exposure to Gaussian process regression, RL or BO will be valued. A medium to high level of competency in either Python or Matlab is necessary. Master's-level knowledge of linear algebra and probability theory is expected, and general competence in machine learning will be highly appreciated.
The master's student will gain competences within robotics, robot control, reinforcement learning, Bayesian optimization, Gaussian processes, etc. Note that the student will work at ABB Corporate Research in Västerås; compensation and accommodation will be provided by the company. This project is defined within the context of an ongoing PhD project, so the student can expect a high-quality research environment and support, including software and systems. Prospective PhD students will be given preference. It may also be possible to do this project at RPL, but that decision will be taken on a case-by-case basis.
Contact: Shahbaz Khader, +46725305968, shahbaz.khader@se.abb.com, ABB Corporate Research
Online Planning Based Reinforcement Learning for Robotics Manipulation

Background
Recent advances in artificial intelligence have enabled machines to compete with humans even in the most difficult of domains; Google Deepmind's AlphaGo is a case in point. Similar reinforcement learning (RL) approaches have been tried in the robotics community on problems of skill learning. By skill we mean a sensorimotor (control) policy that can perform a single continuous-time task. Numerous successes in skill learning have been reported for a variety of manipulation tasks that are otherwise difficult to program; examples include batting, pancake flipping, pouring and pole balancing. One of the most challenging classes of manipulation tasks is the assembly of mating parts. Not surprisingly, the capability to learn assembly skills is highly sought after.
Problem description
Most skill-learning RL methods are of the policy-search type, in which the optimal parameters of a parameterized policy are obtained through an optimization process. Computing a general policy that takes the best action in any possible state is a much harder problem than planning a sequence of actions from a single state. On the other hand, while a policy provides robustness to uncertainties, a plan cannot cope with deviations from it. Online planning, or model predictive control (MPC), combines the best of both worlds: instead of computing a policy offline, a plan is computed online at every execution step; only the first action is applied and the rest is discarded, and the process is repeated at the next time step. The drawback of online planning is the high computational cost of planning at every time step. When combined with dynamics model learning, the overall method becomes a reinforcement learning approach. Some of the challenges we aim to tackle in this thesis are trading off planning horizon against computational cost, planning under an uncertain dynamics model, and incorporating prior information about the task instead of relying entirely on learning the dynamics. Our application will be an assembly task in which an ABB YuMi robot inserts one part into another.
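To make the online-planning loop concrete, below is a minimal random-shooting MPC sketch. The linear model f and the quadratic cost are placeholders introduced purely for illustration, not part of the proposal; in the thesis the model would be learned from data and the planner would likely be more sophisticated.

```python
import numpy as np

rng = np.random.default_rng(0)

def f(state, action):
    # Stand-in for a (possibly learned) one-step dynamics model.
    A = np.array([[1.0, 0.1], [0.0, 1.0]])
    B = np.array([0.0, 0.1])
    return A @ state + B * action

def cost(state, action):
    return state @ state + 0.01 * action ** 2

def plan(state, horizon=15, n_samples=500):
    # Sample action sequences, roll them through the model, and return only
    # the first action of the cheapest sequence; the rest is discarded.
    seqs = rng.uniform(-1.0, 1.0, size=(n_samples, horizon))
    totals = np.zeros(n_samples)
    for i, seq in enumerate(seqs):
        s = state.copy()
        for a in seq:
            totals[i] += cost(s, a)
            s = f(s, a)
    return seqs[np.argmin(totals), 0]

state = np.array([1.0, 0.0])
for t in range(100):          # replan at every execution step
    a = plan(state)
    state = f(state, a)       # on the real system: apply a on the robot
```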
Purpose and aims
The objective of this thesis is to develop a skill learning method within the framework of RL. The robot should be able to demonstrate the learning process by continuously attempting the insertion, making incremental progress, and finally converging, i.e. completing the task successfully in a few consecutive trials.
The work will include the following tasks:
* Conduct a literature review on RL-based skill learning and MPC.
* Formulate a method for online planning that utilizes the uncertainties of the learned dynamics model (one possible ensemble-based approach is sketched after this list). The model learning algorithm can be assumed to be given.
* Develop a strategy for combining offline learning from simulation and online planning.
* Evaluate the method on simulated tasks as well as on a real robot.
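One possible way to utilize the uncertainties of the learned dynamics model, as called for in the second task above, is to plan against an ensemble of models and penalize their disagreement. The sketch below extends the random-shooting planner accordingly; the ensemble consists of artificially perturbed linear models purely for illustration, whereas in practice it would come from the model-learning step.

```python
import numpy as np

rng = np.random.default_rng(1)

def make_model(noise):
    # Fake ensemble member: a nominal linear model with perturbed dynamics.
    A = np.array([[1.0, 0.1], [0.0, 1.0]]) + noise
    B = np.array([0.0, 0.1])
    return lambda s, a: A @ s + B * a

ensemble = [make_model(rng.normal(0.0, 0.02, size=(2, 2))) for _ in range(5)]

def cost(s, a):
    return s @ s + 0.01 * a ** 2

def plan_uncertain(state, horizon=15, n_samples=300, beta=1.0):
    # Score each candidate sequence by its mean cost over the ensemble plus
    # a disagreement penalty, so highly uncertain plans are avoided.
    seqs = rng.uniform(-1.0, 1.0, size=(n_samples, horizon))
    scores = np.zeros(n_samples)
    for i, seq in enumerate(seqs):
        costs = []
        for model in ensemble:
            s, c = state.copy(), 0.0
            for a in seq:
                c += cost(s, a)
                s = model(s, a)
            costs.append(c)
        costs = np.array(costs)
        scores[i] = costs.mean() + beta * costs.std()
    return seqs[np.argmin(scores), 0]
```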
We are looking for a highly motivated student from a master's programme such as Systems, Control and Robotics or Machine Learning, or a student with a similar background. Knowledge of modeling and control of robotic manipulators is highly advantageous. Any prior exposure to optimal control, MPC or RL will be valued. A medium to high level of competency in either Python or Matlab is necessary. Master's-level knowledge of linear algebra and probability theory is expected, and general competence in machine learning will be highly appreciated.
The master's student will gain competences within robotics, robot control, reinforcement learning, optimization, optimal control, etc. Note that the student will work at ABB Corporate Research in Västerås; compensation and accommodation will be provided by the company. This project is defined within the context of an ongoing PhD project, so the student can expect a high-quality research environment and support, including software and systems. Prospective PhD students will be given preference. It may also be possible to do this project at RPL, but that decision will be taken on a case-by-case basis.
Contact: Shahbaz Khader, +46725305968, shahbaz.khader@se.abb.com, ABB Corporate Research
Using AI Methods to Optimize Performance of a Human-UAV Team Performing Humanitarian Disaster Response Delivery of Food and Medicine

Contact: Petter Ögren (petter@kth.se)
Company: Geistt (http://www.geistt.com)
In a disaster scenario with severely damaged infrastructure, such as flooding or earthquakes, the delivery of food and medicine is often very difficult. In particular, the so-called last-mile logistics, getting items from airports or distribution centers to the victims, is a challenge. However, it is believed that a fleet of small (< 1 m) UAVs can contribute in this area. In this thesis project, you will use Unity3D (a game development platform) to support the creation of a realistic simulation environment (an initial environment will be available) to evaluate your AI solutions. The focus of the thesis will be on designing and implementing AI capabilities for the UAV system; examples of needed behaviors include searching for victims, delivering packages to victims, and resource/task allocation across the UAV fleet of 3-30 vehicles (who searches where, who delivers what, trade-offs between risk and mission completion, etc.). The intended system is supposed to be operated by a human rescue worker. However, this person cannot control 30 UAVs at the same time, so an important part of the thesis is to determine an efficient division of work between the AI and the human operator. Different design choices will be evaluated using the game functionality of the Unity simulation engine.
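As one illustration of the resource/task-allocation behavior mentioned above, the sketch below assigns UAVs to delivery tasks with the Hungarian algorithm, trading off flight distance against a per-pair risk estimate. The positions, risk values and weighting are made-up examples, not part of the project definition, and a real system would recompute the assignment as the mission evolves.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(0)
uavs = rng.uniform(0.0, 10.0, size=(5, 2))     # UAV positions (km), made up
tasks = rng.uniform(0.0, 10.0, size=(5, 2))    # delivery locations (km)
risk = rng.uniform(0.0, 1.0, size=(5, 5))      # illustrative per-pair risk

# Cost of sending UAV i to task j: flight distance plus a weighted risk term.
dist = np.linalg.norm(uavs[:, None, :] - tasks[None, :, :], axis=-1)
cost = dist + 3.0 * risk

uav_idx, task_idx = linear_sum_assignment(cost)  # optimal one-to-one matching
for u, t in zip(uav_idx, task_idx):
    print(f"UAV {u} -> task {t} (cost {cost[u, t]:.2f})")
```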
Research questions
* What AI methods are efficient in a UAV disaster response system?
* What is a good division of work between the human and the system in a UAV disaster response system?