Publications
Listed below are the department's 50 most recent publications from KTH's publication portal DiVA.
A link to RPL's full publication list in DiVA can be found at the bottom of this list.
Publications by authors from RPL
[1]
A. Khoche et al.,
"DoGFlow: Self-Supervised LiDAR Scene Flow via Cross-Modal Doppler Guidance,"
IEEE Robotics and Automation Letters, vol. 11, no. 3, pp. 3836-3843, 2026.
[2]
R. Wang et al.,
"PALM: Enhanced Generalizability for Local Visuomotor Policies Via Perception Alignment,"
IEEE Robotics and Automation Letters, 2026.
[3]
M. Vahs et al.,
"Parameter-Robust MPPI for Safe Online Learning of Unknown Parameters,"
IEEE Robotics and Automation Letters, vol. 11, no. 4, pp. 3931-3938, 2026.
[4]
L. Gulnaar, S. A. Muthukumaraswamy and K. J. D'souza,
"MediCheck Express: A Smart Self-service Kiosk for Vital Monitoring,"
in ICT for Intelligent Systems - Proceedings of ICTIS 2025, 2026, pp. 563-573.
[5]
J. Styrud,
"Creating Behavior Trees for Autonomous Versatile Robots,"
Doctoral thesis, Stockholm, Sweden: KTH Royal Institute of Technology, TRITA-EECS-AVL, 2026:17, 2026.
[6]
S. Qamar et al.,
"UNet with self-adaptive Mamba-like attention and causal-resonance learning for medical image segmentation,"
Scientific Reports, vol. 16, no. 1, 2026.
[7]
S. An et al.,
"Dexterous Manipulation through Imitation Learning: A Survey,"
IEEE Transactions on Automation Science and Engineering, vol. 23, pp. 1760-1792, 2026.
[8]
A. Terán Espinoza et al.,
"STERN: Simultaneous Trajectory Estimation and Relative Navigation for Autonomous Underwater Proximity Operations,"
IEEE Journal of Oceanic Engineering, vol. 51, no. 1, pp. 293-316, 2026.
[9]
M. Tarle et al.,
"Reinforcement Learning for Optimizing FACTS Setpoints With Limited Set of Measurements,"
IEEE Open Access Journal of Power and Energy, vol. 13, pp. 51-63, 2026.
[10]
S. Jin, R. Wang and F. T. Pokorny,
"RealCraft: Attention Control as A Tool for Zero-Shot Consistent Video Editing,"
in Neural Information Processing - 32nd International Conference, ICONIP 2025, Proceedings, 2026, pp. 137-152.
[11]
P. Khanna et al.,
"Early detection of human handover intentions in human–robot collaboration: Comparing EEG, gaze, and hand motion,"
Robotics and Autonomous Systems, vol. 196, 2026.
[12]
A. Larsson Forsberg et al.,
"Temporal Intent-Aware Multi-agent Learning for Network Optimization,"
in Computer Safety, Reliability, and Security. SAFECOMP 2025 Workshops - CoC3CPS, DECSoS, SASSUR, SENSEI, SRToITS, and WAISE, 2025, Proceedings, 2026, pp. 29-40.
[13]
R. Johansson, P. Hammer and T. Lofthouse,
"Arbitrarily Applicable Same/Opposite Relational Responding with NARS,"
in Artificial General Intelligence - 18th International Conference, AGI 2025, Proceedings, 2026, pp. 314-324.
[14]
S. B. Betran et al.,
"FLAME: A Federated Learning Benchmark for Robotic Manipulation,"
in IROS 2025 - 2025 IEEE/RSJ International Conference on Intelligent Robots and Systems, Conference Proceedings, 2025, pp. 2494-2500.
[15]
L. Hadjiloizou et al.,
"Towards Safe Reinforcement Learning with Reduced Conservativeness: A Case Study on Drone Flight Control,"
in IROS 2025 - 2025 IEEE/RSJ International Conference on Intelligent Robots and Systems, Conference Proceedings, 2025, pp. 14870-14876.
[16]
A. Naoum et al.,
"Adapting Robot's Explanation for Failures Based on Observed Human Behavior in Human-Robot Collaboration,"
in IROS 2025 - 2025 IEEE/RSJ International Conference on Intelligent Robots and Systems, Conference Proceedings, 2025, pp. 15087-15094.
[17]
P. Koczy, M. C. Welle and D. Kragic Jensfelt,
"Learning Dexterous In-Hand Manipulation with Multifingered Hands via Visuomotor Diffusion,"
in IROS 2025 - 2025 IEEE/RSJ International Conference on Intelligent Robots and Systems, Conference Proceedings, 2025, pp. 121-127.
[18]
Y. Dong et al.,
"CageCoOpt: Enhancing Manipulation Robustness through Caging-Guided Morphology and Policy Co-Optimization,"
in IROS 2025 - 2025 IEEE/RSJ International Conference on Intelligent Robots and Systems, Conference Proceedings, 2025, pp. 21795-21802.
[19]
H. Ding et al.,
"Towards Safe Imitation Learning via Potential Field-Guided Flow Matching,"
in IROS 2025 - 2025 IEEE/RSJ International Conference on Intelligent Robots and Systems, Conference Proceedings, 2025, pp. 11693-11700.
[20]
G. Mårtensson et al.,
"Video Analysis of Infant Spontaneous Movements to Predict Neurodevelopmental Deficiency,"
Acta Paediatrica, vol. 114, pp. 333-333, 2025.
[21]
C. R. Sidrane and J. Tumova,
"TTT: A Temporal Refinement Heuristic for Tenuously Tractable Discrete Time Reachability Problems,"
in 2025 American Control Conference (ACC), 2025, pp. 1288-1293.
[22]
S. Fan et al.,
"Diffusion Trajectory-Guided Policy for Long-Horizon Robot Manipulation,"
IEEE Robotics and Automation Letters, vol. 10, no. 12, pp. 12788-12795, 2025.
[23]
K. Wijk, R. Vinuesa and H. Azizpour,
"SFESS: Score Function Estimators for k-Subset Sampling,"
in The Thirteenth International Conference on Learning Representations, ICLR 2025, Singapore, Apr 24-28, 2025, 2025.
[24]
R. Lanzino et al.,
"Neural Transcoding Vision Transformers for EEG-to-fMRI Synthesis,"
in Computer Vision - ECCV 2024 Workshops, Part XX, 2025, pp. 53-70.
[25]
I. Hakkinen et al.,
"Medical Image Segmentation with SAM-Generated Annotations,"
in Computer Vision - ECCV 2024 Workshops, Part XXII, 2025, pp. 51-62.
[26]
Y. Ma et al.,
"Measuring User Experience Through Speech Analysis: Insights from HCI Interviews,"
in Proceedings of the Extended Abstracts of the CHI Conference on Human Factors in Computing Systems, CHI EA 2025, 2025.
[27]
S. Jin et al.,
"PACA: Perspective-Aware Cross-Attention Representation for Zero-shot Scene Rearrangement,"
in Proceedings IEEE/CVF Winter Conference on Applications of Computer Vision, WACV 2025, 2025, pp. 6559-6569.
[28]
H. Azuma, Y. Matsui and A. Maki,
"ZoDi: Zero-Shot Domain Adaptation with Diffusion-Based Image Transfer,"
in Computer Vision - ECCV 2024 Workshops, Part XVIII, 2025, pp. 151-167.
[29]
Y. Ma et al.,
"Advancing User-Voice Interaction: Exploring Emotion-Aware Voice Assistants Through a Role-Swapping Approach,"
in Distributed, Ambient and Pervasive Interactions, DAPI 2025, Part I, 2025, pp. 303-320.
[30]
Y. Zhao, S. Gerard and Y. Ban,
"TS-SatFire: A Multi-Task Satellite Image Time-Series Dataset for Wildfire Detection and Prediction,"
Scientific Data, vol. 12, no. 1, 2025.
[31]
O. Zaland et al.,
"One-Shot Federated Learning with Classifier-Free Diffusion Models,"
in 2025 IEEE International Conference on Multimedia and Expo: Journey to the Center of Machine Imagination, ICME 2025 - Conference Proceedings, 2025.
[32]
F. Ahmad, J. Styrud and V. Krueger,
"Addressing Failures in Robotics Using Vision-Based Language Models (VLMs) and Behavior Trees (BT),"
in European Robotics Forum 2025, 2025, pp. 281-287.
[33]
H. Fang and H. Azizpour,
"Leveraging Satellite Image Time Series for Accurate Extreme Event Detection,"
in 2025 IEEE/CVF Winter Conference on Applications of Computer Vision Workshops, WACVW, 2025, pp. 489-498.
[34]
C. Ceylan,
"Towards Unsupervised, Analysable and Scalable Node Embedding Models for Transaction Networks,"
Doctoral thesis, Stockholm: KTH Royal Institute of Technology, TRITA-EECS-AVL, 2025:100, 2025.
[35]
F. Zangeneh,
"Camera Relocalization through Distribution Modeling,"
Doctoral thesis, Stockholm: KTH Royal Institute of Technology, TRITA-EECS-AVL, 2025:106, 2025.
[36]
X. Zhu et al.,
"Towards Automated Assembly Quality Inspection with Synthetic Data and Domain Randomization,"
in Proceedings: IEEE/CVF International Conference on Computer Vision Workshop, ICCVW, 2025, 2025, pp. 1395-1403.
[37]
X. Zhu,
"Towards Automated Parts Recognition in Manufacturing with Synthetic Data,"
Doctoral thesis, Stockholm: KTH Royal Institute of Technology, TRITA-EECS-AVL, 2025:105, 2025.
[38]
Q. Yang et al.,
"S2-Diffusion: Generalizing from Instance-level to Category-level Skills in Robot Manipulation,"
IEEE Robotics and Automation Letters, vol. 10, no. 12, s. 12995-13002, 2025.
[39]
Q. Zhang et al.,
"HiMo: High-Speed Objects Motion Compensation in Point Clouds,"
IEEE Transactions on Robotics, vol. 41, pp. 5896-5911, 2025.
[40]
S. Qamar et al.,
"ScaleFusionNet: transformer-guided multi-scale feature fusion for skin lesion segmentation,"
Scientific Reports, vol. 15, no. 1, 2025.
[41]
Y. Xu et al.,
"Skor-Xg: Skeleton-Oriented Expected Goal Estimation in Soccer,"
in Proceedings - 2025 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, CVPRW 2025, 2025, pp. 5957-5967.
[42]
M. Kartašev et al.,
"SMaRCSim: Maritime Robotics Simulation Modules,"
in 2025 Symposium on Maritime Informatics and Robotics, MARIS 2025, 2025.
[43]
Z. Gong et al.,
"Bridging Cultures: A Framework for Facial Expression and Empathy,"
in IEEE International Conference on Multimedia and Expo Workshops: Journey to the Center of Machine Imagination, ICMEW 2025 - Proceedings, 2025.
[44]
L. Bruns et al.,
"ACE-G: Improving Generalization of Scene Coordinate Regression Through Query Pre-Training,"
in Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2025, pp. 26751-26761.
[45]
L. Bruns, J. Zhang and P. Jensfelt,
"Neural Graph Map: Dense Mapping with Efficient Loop Closure Integration,"
in 2025 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), 2025, pp. 2900-2909.
[46]
L. Bruns,
"Improving Spatial Understanding Through Learning and Optimization,"
Doctoral thesis, Stockholm: KTH Royal Institute of Technology, TRITA-EECS-AVL, 2025:97, 2025.
[47]
P. Isaev and P. Hammer,
"NARS-GPT: An Integrated Reasoning System for Natural Language Interactions,"
in Intelligent Systems and Applications - Proceedings of the 2025 Intelligent Systems Conference IntelliSys, 2025, pp. 404-420.
[48]
X. Wang et al.,
"LineArt: A Knowledge-guided Training-free High-quality Appearance Transfer for Design Drawing with Diffusion Model,"
in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2025, pp. 2912-2923.
[49]
H. Lu et al.,
"Grasping a Handful: Sequential Multi-Object Dexterous Grasp Generation,"
IEEE Robotics and Automation Letters, vol. 10, no. 11, pp. 11880-11887, 2025.
[50]
C. Liu et al.,
"Message from the General and Program Chairs CVPR 2025,"
in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2025, pp. ccclxxxii-ccclxxxiii.