Publications by Gabriel Skantze
Peer-reviewed
Articles
[1] J. Kejriwal et al., "Does a robot's gaze behavior affect entrainment in HRI?," Computing and Informatics, vol. 43, no. 5, pp. 1256-1284, 2024.
[2] A. Borg et al., "Enhancing clinical reasoning skills for medical students: a qualitative comparison of LLM-powered social robotic versus computer-based virtual patients within rheumatology," Rheumatology International, 2024.
[3] B. Irfan, S. Kuoppamäki and G. Skantze, "Recommendations for designing conversational companion robots with older adults through foundation models," Frontiers in Robotics and AI, vol. 11, 2024.
[4] C. Mishra et al., "Does a robot's gaze aversion affect human gaze aversion?," Frontiers in Robotics and AI, vol. 10, 2023.
[5] C. Mishra et al., "Real-time emotion generation in human-robot dialogue using large language models," Frontiers in Robotics and AI, vol. 10, 2023.
[6] F. Förster et al., "Working with troubles and failures in conversation between humans and robots: workshop report," Frontiers in Robotics and AI, vol. 10, 2023.
[7] P. Blomsma, G. Skantze and M. Swerts, "Backchannel Behavior Influences the Perceived Personality of Human and Artificial Communication Partners," Frontiers in Artificial Intelligence, vol. 5, 2022.
[8] S. Ahlberg et al., "Co-adaptive Human-Robot Cooperation: Summary and Challenges," Unmanned Systems, vol. 10, no. 2, pp. 187-203, 2022.
[9] G. Skantze and B. Willemsen, "CoLLIE: Continual Learning of Language Grounding from Language-Image Embeddings," Journal of Artificial Intelligence Research, vol. 74, pp. 1201-1223, 2022.
[10] A. Axelsson, H. Buschmeier and G. Skantze, "Modeling Feedback in Interaction With Conversational Agents—A Review," Frontiers in Computer Science, vol. 4, 2022.
[11] A. Axelsson and G. Skantze, "Multimodal User Feedback During Adaptive Robot-Human Presentations," Frontiers in Computer Science, vol. 3, 2022.
[12] G. Skantze, "Turn-taking in Conversational Systems and Human-Robot Interaction: A Review," Computer Speech & Language, vol. 67, 2021.
[13] G. Skantze, "Real-Time Coordination in Human-Robot Interaction Using Face and Voice," AI Magazine, vol. 37, no. 4, pp. 19-31, 2016.
[14] H. Cuayahuitl, K. Komatani and G. Skantze, "Introduction for Speech and language for interactive robots," Computer Speech & Language, vol. 34, no. 1, pp. 83-86, 2015.
[15] R. Meena, G. Skantze and J. Gustafson, "Data-driven models for timing feedback responses in a Map Task dialogue system," Computer Speech & Language, vol. 28, no. 4, pp. 903-922, 2014.
[16] G. Skantze, A. Hjalmarsson and C. Oertel, "Turn-taking, feedback and joint attention in situated human-robot interaction," Speech Communication, vol. 65, pp. 50-66, 2014.
[17] N. Mirnig et al., "Face-To-Face With A Robot: What do we actually talk about?," International Journal of Humanoid Robotics, vol. 10, no. 1, art. no. 1350011, 2013.
[18] S. Al Moubayed, G. Skantze and J. Beskow, "The Furhat Back-Projected Humanoid Head - Lip Reading, Gaze and Multi-Party Interaction," International Journal of Humanoid Robotics, vol. 10, no. 1, art. no. 1350005, 2013.
[19] G. Skantze and A. Hjalmarsson, "Towards incremental speech generation in conversational systems," Computer Speech & Language, vol. 27, no. 1, pp. 243-262, 2013.
[20] D. Schlangen and G. Skantze, "A General, Abstract Model of Incremental Dialogue Processing," Dialogue and Discourse, vol. 2, no. 1, pp. 83-111, 2011.
[21] G. Skantze, "Exploring human error recovery strategies: Implications for spoken dialogue systems," Speech Communication, vol. 45, no. 3, pp. 325-341, 2005.
Conference papers
[22] A. Borg, I. Parodis and G. Skantze, "Creating Virtual Patients using Robots and Large Language Models: A Preliminary Study with Medical Students," in HRI 2024 Companion - Companion of the 2024 ACM/IEEE International Conference on Human-Robot Interaction, 2024, pp. 273-277.
[23] S. Ashkenazi et al., "Goes to the Heart: Speaking the User's Native Language," in HRI 2024 Companion - Companion of the 2024 ACM/IEEE International Conference on Human-Robot Interaction, 2024, pp. 214-218.
[24] Y. Wang et al., "How Much Does Nonverbal Communication Conform to Entropy Rate Constancy? A Case Study on Listener Gaze in Interaction," in 62nd Annual Meeting of the Association for Computational Linguistics, ACL 2024 - Proceedings of the Conference, 2024, pp. 3533-3545.
[25] K. Inoue et al., "Multilingual Turn-taking Prediction Using Voice Activity Projection," in 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation, LREC-COLING 2024 - Main Conference Proceedings, 2024, pp. 11873-11883.
[26] B. Willemsen and G. Skantze, "Referring Expression Generation in Visually Grounded Dialogue with Discourse-aware Comprehension Guiding," in 17th International Natural Language Generation Conference (INLG), 2024, pp. 453-469.
[27] A. Axelsson et al., "Robots in autonomous buses: Who hosts when no human is there?," in HRI 2024 Companion - Companion of the 2024 ACM/IEEE International Conference on Human-Robot Interaction, 2024, pp. 1278-1280.
[28] E. Ekstedt et al., "Automatic Evaluation of Turn-taking Cues in Conversational Speech Synthesis," in Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH 2023, 2023, pp. 5481-5485.
[29] C. Figueroa, M. Ochs and G. Skantze, "Classification of Feedback Functions in Spoken Dialog Using Large Language Models and Prosodic Features," in 27th Workshop on the Semantics and Pragmatics of Dialogue, 2023, pp. 15-24.
[30] T. Offrede et al., "Do Humans Converge Phonetically When Talking to a Robot?," in Proceedings of the 20th International Congress of Phonetic Sciences, Prague 2023, 2023, pp. 3507-3511.
[31] A. Axelsson and G. Skantze, "Do you follow? A fully automated system for adaptive robot presenters," in HRI 2023: Proceedings of the 2023 ACM/IEEE International Conference on Human-Robot Interaction, 2023, pp. 102-111.
[32] A. M. Kamelabad and G. Skantze, "I Learn Better Alone! Collaborative and Individual Word Learning With a Child and Adult Robot," in Proceedings of the 2023 ACM/IEEE International Conference on Human-Robot Interaction, 2023, pp. 368-377.
[33] C. Figueroa, Š. Beňuš and G. Skantze, "Prosodic Alignment in Different Conversational Feedback Functions," in Proceedings of the 20th International Congress of Phonetic Sciences, Prague 2023, 2023, pp. 1514-1518.
[34] B. Willemsen, L. Qian and G. Skantze, "Resolving References in Visually-Grounded Dialogue via Text Generation," in Proceedings of the 24th Meeting of the Special Interest Group on Discourse and Dialogue, 2023, pp. 457-469.
[35] B. Jiang, E. Ekstedt and G. Skantze, "Response-conditioned Turn-taking Prediction," in Findings of the Association for Computational Linguistics, ACL 2023, 2023, pp. 12241-12248.
[36] E. Ekstedt and G. Skantze, "Show & Tell: Voice Activity Projection and Turn-taking," in Interspeech 2023, 2023, pp. 2020-2021.
[37] G. Skantze and A. S. Doğruöz, "The Open-domain Paradox for Chatbots: Common Ground as the Basis for Human-like Dialogue," in Proceedings of the 24th Annual Meeting of the Special Interest Group on Discourse and Dialogue, 2023, pp. 605-614.
[38] K. Inoue et al., "Towards Objective Evaluation of Socially-Situated Conversational Robots: Assessing Human-Likeness through Multimodal User Behaviors," in ICMI 2023 Companion: Companion Publication of the 25th International Conference on Multimodal Interaction, 2023, pp. 86-90.
[39] A. Axelsson and G. Skantze, "Using Large Language Models for Zero-Shot Natural Language Generation from Knowledge Graphs," in Proceedings of the Workshop on Multimodal, Multilingual Natural Language Generation and Multilingual WebNLG Challenge (MM-NLG 2023), 2023, pp. 39-54.
[40] B. Jiang, E. Ekstedt and G. Skantze, "What makes a good pause? Investigating the turn-holding effects of fillers," in Proceedings of the 20th International Congress of Phonetic Sciences (ICPhS), 2023, pp. 3512-3516.
[41] M. P. Aylett et al., "Why is my Agent so Slow? Deploying Human-Like Conversational Turn-Taking," in HAI 2023 - Proceedings of the 11th Conference on Human-Agent Interaction, 2023, pp. 490-492.
[42] C. Figueroa et al., "Annotation of Communicative Functions of Short Feedback Tokens in Switchboard," in 2022 Language Resources and Evaluation Conference, LREC 2022, 2022, pp. 1849-1859.
[43] B. Willemsen, D. Kalpakchi and G. Skantze, "Collecting Visually-Grounded Dialogue with A Game Of Sorts," in Proceedings of the 13th Conference on Language Resources and Evaluation, 2022, pp. 2257-2268.
[44] M. Elgarf et al., "CreativeBot: a Creative Storyteller robot to stimulate creativity in children," in ICMI '22: Proceedings of the 2022 International Conference on Multimodal Interaction, 2022, pp. 540-548.
[45] E. Ekstedt and G. Skantze, "How Much Does Prosody Help Turn-taking? Investigations using Voice Activity Projection Models," in Proceedings of the 23rd Annual Meeting of the Special Interest Group on Discourse and Dialogue, 2022, pp. 541-551.
[46] G. Skantze and C. Mishra, "Knowing where to look: A planning-based architecture to automate the gaze behavior of social robots," in 31st IEEE International Conference on Robot and Human Interactive Communication, RO-MAN 2022, Napoli, Italy, August 29 - September 2, 2022.
[47] E. Ekstedt and G. Skantze, "Voice Activity Projection: Self-supervised Learning of Turn-taking Events," in Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH 2022, 2022, pp. 5190-5194.
[48] G. Skantze, "Conversational interaction with social robots," in ACM/IEEE International Conference on Human-Robot Interaction, 2021.
[49] A. S. Doğruöz and G. Skantze, "How "open" are the conversations with open-domain chatbots? A proposal for Speech Event based evaluation," in SIGDIAL 2021: 22nd Annual Meeting of the Special Interest Group on Discourse and Dialogue, 2021, pp. 392-402.
[50] M. Elgarf, G. Skantze and C. Peters, "Once Upon a Story: Can a Creative Storyteller Robot Stimulate Creativity in Children?," in Proceedings of the 21st ACM International Conference on Intelligent Virtual Agents (IVA), 2021, pp. 60-67.
[51] E. Ekstedt and G. Skantze, "Projection of Turn Completion in Incremental Spoken Dialogue Systems," in SIGDIAL 2021: 22nd Annual Meeting of the Special Interest Group on Discourse and Dialogue, Singapore (virtual), 29-31 July 2021, 2021, pp. 431-437.
[52] O. Ibrahim and G. Skantze, "Revisiting robot directed speech effects in spontaneous Human-Human-Robot interactions," in Human Perspectives on Spoken Human-Machine Interaction, 2021.
[53] E. Ekstedt and G. Skantze, "TurnGPT: a Transformer-based Language Model for Predicting Turn-taking in Spoken Dialog," in Findings of the Association for Computational Linguistics: EMNLP 2020, 2020, pp. 2981-2990.
[54] N. Axelsson and G. Skantze, "Using knowledge graphs and behaviour trees for feedback-aware presentation agents," in Proceedings of Intelligent Virtual Agents 2020, 2020.
[55] T. Shore and G. Skantze, "Using lexical alignment and referring ability to address data sparsity in situated dialog reference resolution," in Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, EMNLP 2018, 2020, pp. 2288-2297.
[56] P. Jonell et al., "Crowdsourcing a self-evolving dialog graph," in CUI '19: Proceedings of the 1st International Conference on Conversational User Interfaces, 2019.
[57] O. Ibrahim et al., "Fundamental frequency accommodation in multi-party human-robot game interactions: The effect of winning or losing," in Proceedings of Interspeech 2019, 2019, pp. 3980-3984.
[58] T. Shore, T. Androulakaki and G. Skantze, "KTH Tangrams: A Dataset for Research on Alignment and Conceptual Pacts in Task-Oriented Dialogue," in LREC 2018 - 11th International Conference on Language Resources and Evaluation, 2019, pp. 768-775.
[59] N. Axelsson and G. Skantze, "Modelling Adaptive Presentations in Human-Robot Interaction using Behaviour Trees," in 20th Annual Meeting of the Special Interest Group on Discourse and Dialogue: Proceedings of the Conference, 2019, pp. 345-352.
[60] D. Kontogiorgos et al., "The Effects of Embodiment and Social Eye-Gaze in Conversational Agents," in Proceedings of the 41st Annual Conference of the Cognitive Science Society (CogSci), 2019.
[61] D. Kontogiorgos et al., "A Multimodal Corpus for Mutual Gaze and Joint Attention in Multiparty Situated Interaction," in Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018), 2018, pp. 119-127.
[62] C. Li et al., "Effects of Posture and Embodiment on Social Distance in Human-Agent Interaction in Mixed Reality," in Proceedings of the 18th International Conference on Intelligent Virtual Agents, 2018, pp. 191-196.
[63] C. Peters et al., "Investigating Social Distances between Humans, Virtual Humans and Virtual Robots in Mixed Reality," in Proceedings of the 17th International Conference on Autonomous Agents and MultiAgent Systems, 2018, pp. 2247-2249.
[64] M. Roddy, G. Skantze and N. Harte, "Investigating speech features for continuous turn-taking prediction using LSTMs," in Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH, 2018, pp. 586-590.
[65] M. Roddy, G. Skantze and N. Harte, "Multimodal Continuous Turn-Taking Prediction Using Multiscale RNNs," in ICMI 2018 - Proceedings of the 2018 International Conference on Multimodal Interaction, 2018, pp. 186-190.
[66] D. Kontogiorgos et al., "Multimodal reference resolution in collaborative assembly tasks," 2018.
[67] C. Peters et al., "Towards the use of mixed reality for HRI design via virtual robots," in HRI '20: Companion of the 2020 ACM/IEEE International Conference on Human-Robot Interaction, 2018.
[68] J. Lopes, O. Engwall and G. Skantze, "A First Visit to the Robot Language Café," in Proceedings of the ISCA Workshop on Speech and Language Technology in Education, 2017.
[69] R. Johansson, G. Skantze and A. Jönsson, "A psychotherapy training environment with virtual patients implemented using the Furhat robot platform," in 17th International Conference on Intelligent Virtual Agents, IVA 2017, 2017, pp. 184-187.
[70] V. Avramova et al., "A virtual poster presenter using mixed reality," in 17th International Conference on Intelligent Virtual Agents, IVA 2017, 2017, pp. 25-28.
[71] T. Shore and G. Skantze, "Enhancing reference resolution in dialogue using participant feedback," in Proc. GLU 2017 International Workshop on Grounding Language Understanding, 2017, pp. 78-82.
[72] G. Skantze, "Predicting and Regulating Participation Equality in Human-robot Conversations: Effects of Age and Gender," in Proceedings of the 2017 ACM/IEEE International Conference on Human-Robot Interaction, 2017, pp. 196-204.
[73] G. Skantze, "Towards a General, Continuous Model of Turn-taking in Spoken Dialogue using LSTM Recurrent Neural Networks," in Proceedings of SIGDIAL 2017 - 18th Annual Meeting of the Special Interest Group on Discourse and Dialogue, 2017.
[74] M. Johansson et al., "Making Turn-Taking Decisions for an Active Listening Robot for Memory Training," in Social Robotics (ICSR 2016), 2016, pp. 940-949.
[75] S. Georgiladakis et al., "Root Cause Analysis of Miscommunication Hotspots in Spoken Dialogue Systems," in Interspeech 2016, 2016.
[76] G. Skantze, M. Johansson and J. Beskow, "A Collaborative Human-Robot Game as a Test-bed for Modelling Multi-party, Situated Interaction," in Intelligent Virtual Agents (IVA 2015), 2015, pp. 348-351.
[77] R. Meena et al., "Automatic Detection of Miscommunication in Spoken Dialogue Systems," in Proceedings of the 16th Annual Meeting of the Special Interest Group on Discourse and Dialogue (SIGDIAL), 2015, pp. 354-363.
[78] J. Lopes et al., "Detecting Repetitions in Spoken Dialogue Systems Using Phonetic Distances," in Interspeech 2015, 2015, pp. 1805-1809.
[79] G. Skantze, M. Johansson and J. Beskow, "Exploring Turn-taking Cues in Multi-party Human-robot Discussions about Objects," in Proceedings of the 2015 ACM International Conference on Multimodal Interaction, 2015.
[80] G. Skantze and M. Johansson, "Modelling situated human-robot interaction using IrisTK," in Proceedings of the SIGDIAL 2015 Conference, 2015, pp. 165-167.
[81] M. Johansson and G. Skantze, "Opportunities and obligations to take turns in collaborative multi-party human-robot interaction," in SIGDIAL 2015 - 16th Annual Meeting of the Special Interest Group on Discourse and Dialogue, Proceedings of the Conference, 2015, pp. 305-314.
[82] M. Johansson, G. Skantze and J. Gustafson, "Comparison of human-human and human-robot turn-taking behaviour in multi-party situated interaction," in UM3I '14: Proceedings of the 2014 Workshop on Understanding and Modeling Multiparty, Multimodal Interactions, 2014, pp. 21-26.
[83] R. Meena et al., "Crowdsourcing Street-level Geographic Information Using a Spoken Dialogue System," in Proceedings of the SIGDIAL 2014 Conference, 2014, pp. 2-11.
[84] S. Al Moubayed et al., "Human-robot Collaborative Tutoring Using Multiparty Multimodal Spoken Dialogue," in 9th Annual ACM/IEEE International Conference on Human-Robot Interaction, Bielefeld, Germany, 2014.
[85] S. Al Moubayed, J. Beskow and G. Skantze, "Spontaneous spoken dialogues with the Furhat human-like robot head," in HRI '14: Proceedings of the 2014 ACM/IEEE International Conference on Human-Robot Interaction, 2014, p. 326.
[86] S. Al Moubayed et al., "Tutoring Robots: Multiparty Multimodal Social Dialogue With an Embodied Tutor," in 9th International Summer Workshop on Multimodal Interfaces, Lisbon, Portugal, 2014.
[87] S. Al Moubayed et al., "UM3I 2014: International workshop on understanding and modeling multiparty, multimodal interactions," in ICMI 2014 - Proceedings of the 2014 International Conference on Multimodal Interaction, 2014, pp. 537-538.
[88] G. Skantze, C. Oertel and A. Hjalmarsson, "User Feedback in Human-Robot Dialogue: Task Progression and Uncertainty," in Proceedings of the HRI Workshop on Timing in Human-Robot Interaction, 2014.
[89] R. Meena et al., "Using a Spoken Dialogue System for Crowdsourcing Street-level Geographic Information," in 2nd Workshop on Action, Perception and Language, SLTC 2014, 2014.
[90] R. Meena, G. Skantze and J. Gustafson, "A Data-driven Model for Timing Feedback in a Map Task Dialogue System," in 14th Annual Meeting of the Special Interest Group on Discourse and Dialogue - SIGdial, 2013, pp. 375-383.
[91] G. Skantze, A. Hjalmarsson and C. Oertel, "Exploring the effects of gaze and pauses in situated human-robot interaction," in 14th Annual Meeting of the Special Interest Group on Discourse and Dialogue: SIGDIAL 2013, 2013.
[92] M. Johansson, G. Skantze and J. Gustafson, "Head Pose Patterns in Multiparty Human-Robot Team-Building Interactions," in Social Robotics: 5th International Conference, ICSR 2013, Bristol, UK, October 27-29, 2013, Proceedings, 2013, pp. 351-360.
[93] R. Meena, G. Skantze and J. Gustafson, "Human Evaluation of Conceptual Route Graphs for Interpreting Spoken Route Descriptions," in Proceedings of the 3rd International Workshop on Computational Models of Spatial Language Interpretation and Generation (CoSLI), 2013, pp. 30-35.
[94] S. Al Moubayed, J. Beskow and G. Skantze, "The Furhat Social Companion Talking Head," in Interspeech 2013 - Show and Tell, 2013, pp. 747-749.
[95] R. Meena, G. Skantze and J. Gustafson, "The Map Task Dialogue System: A Test-bed for Modelling Human-Like Dialogue," in 14th Annual Meeting of the Special Interest Group on Discourse and Dialogue - SIGdial, 2013, pp. 366-368.
[96] G. Skantze, C. Oertel and A. Hjalmarsson, "User feedback in human-robot interaction: Prosody, gaze and timing," in Proceedings of Interspeech 2013, 2013, pp. 1901-1905.
[97] R. Meena, G. Skantze and J. Gustafson, "A Chunking Parser for Semantic Interpretation of Spoken Route Directions in Human-Robot Dialogue," in Proceedings of the 4th Swedish Language Technology Conference (SLTC 2012), 2012, pp. 55-56.
[98] G. Skantze, "A Testbed for Examining the Timing of Feedback using a Map Task," in Proceedings of the Interdisciplinary Workshop on Feedback Behaviors in Dialog, 2012.
[99] R. Meena, G. Skantze and J. Gustafson, "A data-driven approach to understanding spoken route directions in human-robot dialogue," in 13th Annual Conference of the International Speech Communication Association 2012, INTERSPEECH 2012, 2012, pp. 226-229.
[100] M. Blomberg et al., "Children and adults in dialogue with the robot head Furhat - corpus collection and initial analysis," in Proceedings of WOCCI, 2012.
[101] S. Al Moubayed et al., "Furhat: A Back-projected Human-like Robot Head for Multiparty Human-Machine Interaction," in Cognitive Behavioural Systems: COST 2102 International Training School, Dresden, Germany, February 21-26, 2011, Revised Selected Papers, 2012, pp. 114-130.
[102] G. Skantze et al., "Furhat at Robotville: A Robot Head Harvesting the Thoughts of the Public through Multi-party Dialogue," in Proceedings of the Workshop on Real-time Conversation with Virtual Agents (IVA-RCVA), 2012.
[103] S. Al Moubayed et al., "Furhat goes to Robotville: a large-scale multiparty human-robot interaction data collection in a public space," in Proceedings of the LREC Workshop on Multimodal Corpora, 2012.
[104] G. Skantze and S. Al Moubayed, "IrisTK: A statechart-based toolkit for multi-party face-to-face interaction," in ICMI'12 - Proceedings of the ACM International Conference on Multimodal Interaction, 2012, pp. 69-75.
[105] S. Al Moubayed, G. Skantze and J. Beskow, "Lip-reading: Furhat audio visual intelligibility of a back projected animated face," in Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 2012, pp. 196-203.
[106] S. Al Moubayed et al., "Multimodal Multiparty Social Interaction with the Furhat Head," in 14th ACM International Conference on Multimodal Interaction, Santa Monica, CA, 2012, pp. 293-294.
[107] S. Al Moubayed and G. Skantze, "Perception of Gaze Direction for Situated Interaction," in Proceedings of the 4th Workshop on Eye Gaze in Intelligent Human Machine Interaction, Gaze-In 2012, 2012.
[108] S. Al Moubayed and G. Skantze, "Effects of 2D and 3D Displays on Turn-taking Behavior in Multiparty Human-Computer Dialog," in SemDial 2011: Proceedings of the 15th Workshop on the Semantics and Pragmatics of Dialogue, 2011, pp. 192-193.
[109] M. Johnson-Roberson et al., "Enhanced Visual Scene Understanding through Human-Robot Dialog," in 2011 IEEE/RSJ International Conference on Intelligent Robots and Systems, 2011, pp. 3342-3348.
[110] S. Al Moubayed and G. Skantze, "Turn-taking Control Using Gaze in Multiparty Human-Computer Dialogue: Effects of 2D and 3D Displays," in Proceedings of the International Conference on Audio-Visual Speech Processing 2011, 2011, pp. 99-102.
[111] M. Johansson, G. Skantze and J. Gustafson, "Understanding route directions in human-robot dialogue," in Proceedings of SemDial, 2011, pp. 19-27.
[112] M. Johnson-Roberson et al., "Enhanced visual scene understanding through human-robot dialog," in Dialog with Robots: AAAI 2010 Fall Symposium, 2010.
[113] D. Schlangen et al., "Middleware for Incremental Processing in Conversational Agents," in Proceedings of SIGDIAL 2010: the 11th Annual Meeting of the Special Interest Group on Discourse and Dialogue, 2010, pp. 51-54.
[114] G. Skantze and A. Hjalmarsson, "Towards Incremental Speech Generation in Dialogue Systems," in Proceedings of the SIGDIAL 2010 Conference: 11th Annual Meeting of the Special Interest Group on Discourse and Dialogue, 2010, pp. 1-8.
[115] D. Schlangen and G. Skantze, "A general, abstract model of incremental dialogue processing," in EACL 2009 - 12th Conference of the European Chapter of the Association for Computational Linguistics, Proceedings, 2009, pp. 710-718.
[116] G. Skantze and J. Gustafson, "Attention and interaction control in a human-human-computer dialogue setting," in Proceedings of SIGDIAL 2009: the 10th Annual Meeting of the Special Interest Group on Discourse and Dialogue, 2009, pp. 310-313.
[117] G. Skantze and D. Schlangen, "Incremental dialogue processing in a micro-domain," in Proceedings of the 12th Conference of the European Chapter of the ACL, 2009, pp. 745-753.
[118] G. Skantze and J. Gustafson, "Multimodal interaction control in the MonAMI Reminder," in Proceedings of DiaHolmia: 2009 Workshop on the Semantics and Pragmatics of Dialogue, 2009, pp. 127-128.
[119] J. Beskow et al., "The MonAMI Reminder: a spoken dialogue system for face-to-face interaction," in Proceedings of the 10th Annual Conference of the International Speech Communication Association, INTERSPEECH 2009, 2009, pp. 300-303.
[120] J. Beskow et al., "Innovative interfaces in MonAMI: The Reminder," in Perception in Multimodal Dialogue Systems, Proceedings, 2008, pp. 272-275.
[121] G. Skantze, "Making grounding decisions: Data-driven estimation of dialogue costs and confidence thresholds," in Proceedings of the 8th SIGdial Workshop on Discourse and Dialogue, 2007, pp. 206-210.
[122] G. Skantze, J. Edlund and R. Carlson, "Talking with Higgins: Research challenges in a spoken dialogue system," in Perception and Interactive Technologies, Proceedings, 2006, pp. 193-196.
[123] Å. Wallers, J. Edlund and G. Skantze, "The effect of prosodic features on the interpretation of synthesised backchannels," in Perception and Interactive Technologies, Proceedings, 2006, pp. 183-187.
[124] G. Skantze, D. House and J. Edlund, "User Responses to Prosodic Variation in Fragmentary Grounding Utterances in Dialog," in Interspeech 2006 and 9th International Conference on Spoken Language Processing, 2006, pp. 2002-2005.
[125] G. Skantze, "Galatea: a discourse modeller supporting concept-level error handling in spoken dialogue systems," in 6th SIGdial Workshop on Discourse and Dialogue, 2005, pp. 178-189.
[126] J. Edlund, D. House and G. Skantze, "The effects of prosodic features on the interpretation of clarification ellipses," in Proceedings of Interspeech 2005: Eurospeech, 2005, pp. 2389-2392.
[127] G. Skantze and J. Edlund, "Early error detection on word level," in Proceedings of the ISCA Tutorial and Research Workshop (ITRW) on Robustness Issues in Conversational Interaction, 2004.
[128] J. Edlund, G. Skantze and R. Carlson, "Higgins: a spoken dialogue system for investigating error handling techniques," in Proceedings of the International Conference on Spoken Language Processing, ICSLP 04, 2004, pp. 229-231.
[129] G. Skantze and J. Edlund, "Robust interpretation in the Higgins spoken dialogue system," in Proceedings of the ISCA Tutorial and Research Workshop (ITRW) on Robustness Issues in Conversational Interaction, 2004.
Book chapters
[130] G. Skantze, J. Gustafson and J. Beskow, "Multimodal Conversational Interaction with Robots," in The Handbook of Multimodal-Multisensor Interfaces, Volume 3: Language Processing, Software, Commercialization, and Emerging Directions, Sharon Oviatt, Björn Schuller, Philip R. Cohen, Daniel Sonntag, Gerasimos Potamianos and Antonio Krüger, eds., ACM Press, 2019.
[131] J. Beskow et al., "Multimodal Interaction Control," in Computers in the Human Interaction Loop, Alexander Waibel and Rainer Stiefelhagen, eds., Berlin/Heidelberg: Springer, 2009, pp. 143-158.
[132] G. Skantze, "Galatea: A discourse modeller supporting concept-level error handling in spoken dialogue systems," in Recent Trends in Discourse and Dialogue, L. Dybkjær and W. Minker, eds., Dordrecht: Springer Science+Business Media B.V., 2008.
Non-peer-reviewed
Articles
[133] D. Traum et al., "Special issue on multimodal processing and robotics for dialogue systems (Part II)," Advanced Robotics, vol. 38, no. 4, pp. 193-194, 2024.
[134] D. Traum et al., "Special Issue on Multimodal processing and robotics for dialogue systems (Part 1)," Advanced Robotics, vol. 37, no. 21, pp. 1347-1348, 2023.
Conference papers
[135] S. Al Moubayed et al., "UM3I 2014 chairs' welcome," in UM3I 2014 - Proceedings of the 2014 ACM Workshop on Understanding and Modeling Multiparty, Multimodal Interactions, Co-located with ICMI 2014, 2014, p. iii.
[136] S. Al Moubayed et al., "Talking with Furhat - multi-party interaction with a back-projected robot head," in Proceedings of Fonetik 2012, 2012, pp. 109-112.
[137] J. Beskow et al., "Speech technology in the European project MonAMI," in Proceedings of FONETIK 2008, 2008, pp. 33-36.
[138] G. Skantze, D. House and J. Edlund, "Grounding and prosody in dialog," in Working Papers 52: Proceedings of Fonetik 2006, 2006, pp. 117-120.
[139] R. Carlson et al., "Towards human-like behaviour in spoken dialog systems," in Proceedings of the Swedish Language Technology Conference (SLTC 2006), 2006.
[140] J. Edlund, D. House and G. Skantze, "Prosodic Features in the Perception of Clarification Ellipses," in Proceedings of Fonetik 2005: The XVIIIth Swedish Phonetics Conference, 2005, pp. 107-110.
Theses
[141] G. Skantze, "Error Handling in Spoken Dialogue Systems: Managing Uncertainty, Grounding and Miscommunication," Doctoral thesis, Stockholm: KTH, Trita-CSC-A 2007:14, 2007.
Last synchronized with DiVA: 2024-12-01 01:04:31