
Publications

The division's 50 most recent publications from KTH's publication portal DiVA are listed here.

A link to the complete publication list for RPL in DiVA can be found at the bottom of this list.

Publications by authors from RPL

[1]
C. Costen et al., "Multi-Robot Allocation of Assistance from a Shared Uncertain Operator," in AAMAS 2024 - Proceedings of the 23rd International Conference on Autonomous Agents and Multiagent Systems, 2024, pp. 400-408.
[2]
J. Read et al., "Children and Emerging Technologies: Ethical and Practical Research and Design," in CHI 2024 - Extended Abstracts of the 2024 CHI Conference on Human Factors in Computing Systems, 2024.
[3]
C. Li et al., "The Poses for Equine Research Dataset (PFERD)," Scientific Data, vol. 11, no. 1, 2024.
[4]
E. Englesson, "On Label Noise in Image Classification : An Aleatoric Uncertainty Perspective," Doctoral thesis, Stockholm : KTH Royal Institute of Technology, TRITA-EECS-AVL, 2024:45, 2024.
[5]
E. Englesson and H. Azizpour, "Robust Classification via Regression for Learning with Noisy Labels," in Proceedings ICLR 2024 - The Twelfth International Conference on Learning Representations, 2024.
[6]
Y. Xie, "Bathymetric Surveying Through Neural Inverse Sonar Modeling," Doctoral thesis, Stockholm : KTH Royal Institute of Technology, TRITA-EECS-AVL, 2024:38, 2024.
[7]
M. Kartasev and P. Ögren, "Improving the Performance of Learned Controllers in Behavior Trees Using Value Function Estimates at Switching Boundaries," IEEE Robotics and Automation Letters, vol. 9, no. 5, pp. 4647-4654, 2024.
[8]
M. Krale et al., "Robust Active Measuring under Model Uncertainty," in 38th AAAI Conference on Artificial Intelligence, AAAI 2024, Vancouver, Canada, Feb 20 2024 - Feb 27 2024, 2024, pp. 21276-21284.
[9]
S. Gillet et al., "Interaction-Shaping Robotics: Robots That Influence Interactions between Other Agents," ACM Transactions on Human-Robot Interaction, vol. 13, no. 1, 2024.
[10]
S. Holk, D. Marta and I. Leite, "PREDILECT: Preferences Delineated with Zero-Shot Language-based Reasoning in Reinforcement Learning," in HRI 2024 - Proceedings of the 2024 ACM/IEEE International Conference on Human-Robot Interaction, 2024, pp. 259-268.
[11]
N. Rahimzadagan et al., "Drone Fail Me Now: How Drone Failures Affect Trust and Risk-Taking Decisions," in HRI 2024 Companion - Companion of the 2024 ACM/IEEE International Conference on Human-Robot Interaction, 2024, pp. 862-866.
[12]
M. K. Wozniak et al., "Virtual, Augmented, and Mixed Reality for Human-Robot Interaction (VAM-HRI)," i HRI 2024 Companion - Companion of the 2024 ACM/IEEE International Conference on Human-Robot Interaction, 2024, s. 1361-1363.
[13]
M. K. Wozniak, "Enhancing Robot Perception with Real-World HRI," i HRI 2024 Companion - Companion of the 2024 ACM/IEEE International Conference on Human-Robot Interaction, 2024, s. 160-162.
[14]
E. Yadollahi et al., "Explainability for Human-Robot Collaboration," in HRI 2024 Companion - Companion of the 2024 ACM/IEEE International Conference on Human-Robot Interaction, 2024, pp. 1364-1366.
[16]
G. L. Marchetti, "On Symmetries and Metrics in Geometric Inference," Doctoral thesis : KTH Royal Institute of Technology, TRITA-EECS-AVL, 2024:26, 2024.
[17]
S. Zojaji et al., "Join Me Here if You Will : Investigating Embodiment and Politeness Behaviors When Joining Small Groups of Humans, Robots, and Virtual Characters," in CHI Conference on Human Factors in Computing Systems (CHI ’24), Oʻahu, Hawaii, USA, 11-16 May 2024, 2024.
[18]
W. Yin et al., "Scalable Motion Style Transfer with Constrained Diffusion Generation," i The 38th Annual AAAI Conference on Artificial Intelligence, February 20-27, 2024, Vancouver, Canada, 2024.
[19]
W. Yin, "Developing Data-Driven Models for Understanding Human Motion," Doktorsavhandling Stockholm : KTH Royal Institute of Technology, TRITA-EECS-AVL, 2024:9, 2024.
[20]
R. Yadav et al., "Unsupervised flood detection on SAR time series using variational autoencoder," International Journal of Applied Earth Observation and Geoinformation, vol. 126, 2024.
[21]
R. Gieselmann, "Synergies between Policy Learning and Sampling-based Planning," Doctoral thesis, Stockholm, Sweden : KTH Royal Institute of Technology, TRITA-EECS-AVL, 2024:6, 2024.
[22]
Y. Deng et al., "An experimental study on the effect of chemical additives in coolant on steam explosion," International Journal of Heat and Mass Transfer, vol. 218, 2024.
[23]
I. Torre et al., "Smiling in the Face and Voice of Avatars and Robots : Evidence for a ‘smiling McGurk Effect’," IEEE Transactions on Affective Computing, vol. 15, no. 2, pp. 393-404, 2024.
[24]
C. Pek et al., "SpaTiaL : monitoring and planning of robotic tasks using spatio-temporal logic specifications," Autonomous Robots, vol. 47, no. 8, pp. 1439-1462, 2023.
[25]
M. Gamba et al., "Deep Double Descent via Smooth Interpolation," Transactions on Machine Learning Research, vol. 4, 2023.
[26]
E. Englesson, A. Mehrpanah and H. Azizpour, "Logistic-Normal Likelihoods for Heteroscedastic Label Noise," Transactions on Machine Learning Research, vol. 8, 2023.
[28]
M. B. Colomer et al., "To Adapt or Not to Adapt? : Real-Time Adaptation for Semantic Segmentation," in 2023 IEEE/CVF International Conference on Computer Vision (ICCV 2023), 2023, pp. 16502-16513.
[29]
S. Gerard, Y. Zhao and J. Sullivan, "WildfireSpreadTS: A dataset of multi-modal time series for wildfire spread prediction," in Advances in Neural Information Processing Systems 36 - 37th Conference on Neural Information Processing Systems, NeurIPS 2023, 2023.
[30]
M. Moletta et al., "A Virtual Reality Framework for Human-Robot Collaboration in Cloth Folding," in 2023 IEEE-RAS 22nd International Conference on Humanoid Robots, 2023.
[31]
Z. Weng et al., "GoNet : An Approach-Constrained Generative Grasp Sampling Network," in 2023 IEEE-RAS 22nd International Conference on Humanoid Robots, 2023.
[32]
Y. Ma et al., "Emotion-Aware Voice Assistants: Design, Implementation, and Preliminary Insights," in Proceedings of the 11th International Symposium of Chinese CHI: Generative, Reflective, Envisioning, Chinese CHI 2023, 2023, pp. 527-532.
[33]
Q. Zhang et al., "A Dynamic Points Removal Benchmark in Point Cloud Maps," in 2023 IEEE 26th International Conference on Intelligent Transportation Systems, ITSC 2023, 2023, pp. 608-614.
[34]
T. Rastogi and M. Björkman, "Automated Construction of Time-Space Diagrams for Traffic Analysis Using Street-View Video Sequences," in 2023 IEEE 26th International Conference on Intelligent Transportation Systems, ITSC 2023, 2023, pp. 2282-2288.
[35]
Y. Yang et al., "RMP : A Random Mask Pretrain Framework for Motion Prediction," in 2023 IEEE 26th International Conference on Intelligent Transportation Systems, ITSC 2023, 2023, pp. 3717-3723.
[36]
M. K. Wozniak et al., "Toward a Robust Sensor Fusion Step for 3D Object Detection on Corrupted Data," IEEE Robotics and Automation Letters, vol. 8, no. 11, s. 7018-7025, 2023.
[37]
T. Kucherenko et al., "The GENEA Challenge 2023 : A large-scale evaluation of gesture generation models in monadic and dyadic settings," in Proceedings of the 25th International Conference on Multimodal Interaction, ICMI 2023, 2023, pp. 792-801.
[38]
X. Zhu et al., "Surface Defect Detection with Limited Training Data : A Case Study on Crown Wheel Surface Inspection," in 56th CIRP International Conference on Manufacturing Systems, CIRP CMS 2023, 2023, pp. 1333-1338.
[39]
H. Chen et al., "Resilient Synchronization of Networked Lagrangian Systems in Adversarial Environments," in 2023 62nd IEEE Conference on Decision and Control, CDC 2023, 2023, pp. 7539-7545.
[40]
J. Fu et al., "Component attention network for multimodal dance improvisation recognition," in Proceedings of the 25th International Conference on Multimodal Interaction, ICMI 2023, 2023, pp. 114-118.
[41]
S. Sabzevari et al., "PG-3DVTON : Pose-Guided 3D Virtual Try-on Network," in VISIGRAPP 2023 - Proceedings of the 18th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications, Volume 4, 2023, pp. 819-829.
[42]
S. Van Waveren et al., "Generating Scenarios from High-Level Specifications for Object Rearrangement Tasks," in 2023 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2023, 2023, pp. 11420-11427.
[43]
D. Marta et al., "VARIQuery: VAE Segment-Based Active Learning for Query Selection in Preference-Based Reinforcement Learning," in 2023 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2023, 2023, pp. 7878-7885.
[44]
M. Kartasev, J. Salér and P. Ögren, "Improving the Performance of Backward Chained Behavior Trees that use Reinforcement Learning," in 2023 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2023, 2023, pp. 1572-1579.
[45]
B. Orthmann et al., "Sounding Robots : Design and Evaluation of Auditory Displays for Unintentional Human-robot Interaction," ACM Transactions on Human-Robot Interaction, vol. 12, no. 4, 2023.
[46]
M. Romeo et al., "Putting Robots in Context : Challenging the Influence of Voice and Empathic Behaviour on Trust," in 2023 32nd IEEE International Conference on Robot and Human Interactive Communication, RO-MAN, 2023, pp. 2045-2050.
[47]
I. Torre et al., "Can a gender-ambiguous voice reduce gender stereotypes in human-robot interactions?," i 2023 32ND IEEE INTERNATIONAL CONFERENCE ON ROBOT AND HUMAN INTERACTIVE COMMUNICATION, RO-MAN, 2023, s. 106-112.
[48]
N. Rajabi et al., "Detecting the Intention of Object Handover in Human-Robot Collaborations : An EEG Study," in 2023 32nd IEEE International Conference on Robot and Human Interactive Communication, RO-MAN, 2023, pp. 549-555.
[49]
W. Yin et al., "Controllable Motion Synthesis and Reconstruction with Autoregressive Diffusion Models," i 2023 32ND IEEE INTERNATIONAL CONFERENCE ON ROBOT AND HUMAN INTERACTIVE COMMUNICATION, RO-MAN, 2023, s. 1102-1108.
[50]
B. J. Zhang et al., "Hearing it Out : Guiding Robot Sound Design through Design Thinking," in 2023 32nd IEEE International Conference on Robot and Human Interactive Communication, RO-MAN, 2023, pp. 2064-2071.
Complete list in KTH's publication portal