
Publications by Joakim Gustafsson

Peer-reviewed

Articles

[1]
Kontogiorgos, D., Abelho Pereira, A. T. & Gustafsson, J. (2021). Grounding behaviours with conversational interfaces: effects of embodiment and failures. Journal on Multimodal User Interfaces.
[3]
Jonell, P., Moell, B., Håkansson, K., Henter, G. E., Kucherenko, T., Mikheeva, O. ... Beskow, J. (2021). Multimodal Capture of Patient Behaviour for Improved Detection of Early Dementia : Clinical Feasibility and Preliminary Results. Frontiers in Computer Science, 3.
[4]
Oertel, C., Jonell, P., Kontogiorgos, D., Mora, K. F., Odobez, J.-M. & Gustafsson, J. (2021). Towards an Engagement-Aware Attentive Artificial Listener for Multi-Party Interactions. Frontiers in Robotics and AI, 8.
[5]
Meena, R., Skantze, G. & Gustafsson, J. (2014). Data-driven models for timing feedback responses in a Map Task dialogue system. Computer speech & language (Print), 28(4), 903-922.
[6]
Mirnig, N., Weiss, A., Skantze, G., Al Moubayed, S., Gustafson, J., Beskow, J. ... Tscheligi, M. (2013). Face-To-Face With A Robot : What do we actually talk about? International Journal of Humanoid Robotics, 10(1), 1350011.
[7]
Neiberg, D., Salvi, G. & Gustafson, J. (2013). Semi-supervised methods for exploring the acoustics of simple productive feedback. Speech Communication, 55(3), 451-469.
[8]
Edlund, J., Gustafson, J., Heldner, M. & Hjalmarsson, A. (2008). Towards human-like spoken dialogue systems. Speech Communication, 50(8-9), 630-645.
[9]
Boye, J., Gustafson, J. & Wirén, M. (2006). Robust spoken language understanding in a computer game. Speech Communication, 48(3-4), 335-353.
[10]
Gustafson, J. & Bell, L. (2000). Speech technology on trial : Experiences from the August system. Natural Language Engineering, 6(3-4), 273-286.

Conference papers

[11]
Tånnander, C., Edlund, J., Gustafsson, J. (2024). Revisiting Three Text-to-Speech Synthesis Experiments with a Web-Based Audience Response System. In 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation, LREC-COLING 2024 - Main Conference Proceedings. (pp. 14111-14121). European Language Resources Association (ELRA).
[12]
Lameris, H., Székely, É., Gustafsson, J. (2024). The Role of Creaky Voice in Turn Taking and the Perception of Speaker Stance: Experiments Using Controllable TTS. In 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation, LREC-COLING 2024 - Main Conference Proceedings. (pp. 16058-16065). European Language Resources Association (ELRA).
[13]
Wang, S., Henter, G. E., Gustafsson, J., Székely, É. (2023). A Comparative Study of Self-Supervised Speech Representations in Read and Spontaneous TTS. In ICASSPW 2023: 2023 IEEE International Conference on Acoustics, Speech and Signal Processing Workshops, Proceedings. Institute of Electrical and Electronics Engineers (IEEE).
[14]
Peña, P. R., Doyle, P. R., Ip, E. Y., Di Liberto, G., Higgins, D., McDonnell, R., Branigan, H., Gustafsson, J., McMillan, D., Moore, R. J., Cowan, B. R. (2023). A Special Interest Group on Developing Theories of Language Use in Interaction with Conversational User Interfaces. In CHI 2023: Extended Abstracts of the 2023 CHI Conference on Human Factors in Computing Systems. Association for Computing Machinery (ACM).
[15]
Ekstedt, E., Wang, S., Székely, É., Gustafsson, J., Skantze, G. (2023). Automatic Evaluation of Turn-taking Cues in Conversational Speech Synthesis. In Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH 2023. (pp. 5481-5485). International Speech Communication Association.
[16]
Lameris, H., Gustafsson, J., Székely, É. (2023). Beyond style : synthesizing speech with pragmatic functions. In Interspeech 2023. (pp. 3382-3386). International Speech Communication Association.
[17]
Gustafsson, J., Székely, É., Alexanderson, S., Beskow, J. (2023). Casual chatter or speaking up? Adjusting articulatory effort in generation of speech and animation for conversational characters. In 2023 IEEE 17th International Conference on Automatic Face and Gesture Recognition, FG 2023. Institute of Electrical and Electronics Engineers (IEEE).
[18]
Gustafsson, J., Székely, É., Beskow, J. (2023). Generation of speech and facial animation with controllable articulatory effort for amusing conversational characters. In 23rd ACM International Conference on Intelligent Virtual Agents (IVA 2023). Institute of Electrical and Electronics Engineers (IEEE).
[19]
Miniotaitė, J., Wang, S., Beskow, J., Gustafson, J., Székely, É., Abelho Pereira, A. T. (2023). Hi robot, it's not what you say, it's how you say it. In 2023 32nd IEEE International Conference on Robot and Human Interactive Communication, RO-MAN. (pp. 307-314). Institute of Electrical and Electronics Engineers (IEEE).
[20]
Kirkland, A., Gustafsson, J., Székely, É. (2023). Pardon my disfluency : The impact of disfluency effects on the perception of speaker competence and confidence. In Interspeech 2023. (pp. 5217-5221). International Speech Communication Association.
[21]
Lameris, H., Mehta, S., Henter, G. E., Gustafsson, J., Székely, É. (2023). Prosody-Controllable Spontaneous TTS with Neural HMMs. In International Conference on Acoustics, Speech, and Signal Processing (ICASSP). Institute of Electrical and Electronics Engineers (IEEE).
[22]
Székely, É., Gustafsson, J., Torre, I. (2023). Prosody-controllable gender-ambiguous speech synthesis : a tool for investigating implicit bias in speech perception. In Interspeech 2023. (pp. 1234-1238). International Speech Communication Association.
[23]
Székely, É., Wang, S., Gustafsson, J. (2023). So-to-Speak : an exploratory platform for investigating the interplay between style and prosody in TTS. In Interspeech 2023. (pp. 2016-2017). International Speech Communication Association.
[24]
Wang, S., Gustafsson, J., Székely, É. (2022). Evaluating Sampling-based Filler Insertion with Spontaneous TTS. In LREC 2022: Thirteenth International Conference on Language Resources and Evaluation. (pp. 1960-1969). European Language Resources Association (ELRA).
[25]
Moell, B., O'Regan, J., Mehta, S., Kirkland, A., Lameris, H., Gustafsson, J., Beskow, J. (2022). Speech Data Augmentation for Improving Phoneme Transcriptions of Aphasic Speech Using Wav2Vec 2.0 for the PSST Challenge. In The RaPID4 Workshop: Resources and ProcessIng of linguistic, para-linguistic and extra-linguistic Data from people with various forms of cognitive/psychiatric/developmental impairments. (pp. 62-70). Marseille, France.
[26]
Kirkland, A., Lameris, H., Székely, É., Gustafsson, J. (2022). Where's the uh, hesitation? : The interplay between filled pause location, speech rate and fundamental frequency in perception of confidence. In INTERSPEECH 2022. (pp. 4990-4994). International Speech Communication Association.
[27]
Kontogiorgos, D., Tran, M., Gustafsson, J., Soleymani, M. (2021). A Systematic Cross-Corpus Analysis of Human Reactions to Robot Conversational Failures. In ICMI 2021 - Proceedings of the 2021 International Conference on Multimodal Interaction. (pp. 112-120). Association for Computing Machinery (ACM).
[28]
Wang, S., Alexanderson, S., Gustafsson, J., Beskow, J., Henter, G. E., Székely, É. (2021). Integrated Speech and Gesture Synthesis. In ICMI 2021 - Proceedings of the 2021 International Conference on Multimodal Interaction. (pp. 177-185). Association for Computing Machinery (ACM).
[29]
Kirkland, A., Włodarczak, M., Gustafsson, J., Székely, É. (2021). Perception of smiling voice in spontaneous speech synthesis. In Proceedings of Speech Synthesis Workshop (SSW11). (pp. 108-112). International Speech Communication Association.
[30]
Székely, É., Edlund, J., Gustafsson, J. (2020). Augmented Prompt Selection for Evaluation of Spontaneous Speech Synthesis. In Proceedings of The 12th Language Resources and Evaluation Conference. (pp. 6368-6374). European Language Resources Association.
[31]
Kontogiorgos, D., Abelho Pereira, A. T., Sahindal, B., van Waveren, S., Gustafson, J. (2020). Behavioural Responses to Robot Conversational Failures. In HRI '20: Proceedings of the 2020 ACM/IEEE International Conference on Human-Robot Interaction. ACM Digital Library.
[32]
Székely, É., Henter, G. E., Beskow, J., Gustafsson, J. (2020). Breathing and Speech Planning in Spontaneous Speech Synthesis. In 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). (pp. 7649-7653). IEEE.
[33]
Kontogiorgos, D., Sibirtseva, E., Gustafson, J. (2020). Chinese whispers : A multimodal dataset for embodied language grounding. In LREC 2020 - 12th International Conference on Language Resources and Evaluation, Conference Proceedings. (pp. 743-749). European Language Resources Association (ELRA).
[34]
Abelho Pereira, A. T., Oertel, C., Fermoselle, L., Mendelson, J., Gustafson, J. (2020). Effects of Different Interaction Contexts when Evaluating Gaze Models in HRI. Presented at the ACM/IEEE International Conference on Human-Robot Interaction (HRI), March 23-26, 2020, Cambridge, England. (pp. 131-138). Association for Computing Machinery (ACM).
[35]
Kontogiorgos, D., van Waveren, S., Wallberg, O., Abelho Pereira, A. T., Leite, I., Gustafson, J. (2020). Embodiment Effects in Interactions with Failing Robots. In CHI '20: Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems. ACM Digital Library.
[36]
Håkansson, K., Beskow, J., Kjellström, H., Gustafsson, J., Bonnard, A., Rydén, M., Stormoen, S., Hagman, G., Akenine, U., Peres, K. M., Henter, G. E., Kivipelto, M. (2020). Robot-assisted detection of subclinical dementia : progress report and preliminary findings. In 2020 Alzheimer's Association International Conference. ALZ...
[37]
Székely, É., Henter, G. E., Gustafson, J. (2019). Casting to Corpus : Segmenting and Selecting Spontaneous Dialogue for TTS with a CNN-LSTM Speaker-Dependent Breath Detector. In 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). (pp. 6925-6929). IEEE.
[38]
Kontogiorgos, D., Abelho Pereira, A. T., Gustafson, J. (2019). Estimating Uncertainty in Task Oriented Dialogue. In ICMI 2019 - Proceedings of the 2019 International Conference on Multimodal Interaction. (pp. 414-418). ACM Digital Library.
[39]
Székely, É., Henter, G. E., Beskow, J., Gustafson, J. (2019). How to train your fillers: uh and um in spontaneous speech synthesis. Presented at the 10th ISCA Speech Synthesis Workshop.
[40]
Székely, É., Henter, G. E., Beskow, J., Gustafson, J. (2019). Off the cuff : Exploring extemporaneous speech delivery with TTS. In Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH. (pp. 3687-3688). International Speech Communication Association.
[41]
Malisz, Z., Berthelsen, H., Beskow, J., Gustafson, J. (2019). PROMIS: a statistical-parametric speech synthesis system with prominence control via a prominence network. In Proceedings of SSW 10 - The 10th ISCA Speech Synthesis Workshop. Vienna.
[42]
Abelho Pereira, A. T., Oertel, C., Fermoselle, L., Mendelson, J., Gustafson, J. (2019). Responsive Joint Attention in Human-Robot Interaction. In Proceedings 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2019. (pp. 1080-1087). Institute of Electrical and Electronics Engineers (IEEE).
[43]
Wagner, P., Beskow, J., Betz, S., Edlund, J., Gustafson, J., Henter, G. E., Le Maguer, S., Malisz, Z., Székely, É., Tånnander, C. (2019). Speech Synthesis Evaluation : State-of-the-Art Assessment and Suggestion for a Novel Research Program. In Proceedings of the 10th Speech Synthesis Workshop (SSW10).
[44]
Székely, É., Henter, G. E., Beskow, J., Gustafson, J. (2019). Spontaneous conversational speech synthesis from found data. In Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH. (pp. 4435-4439). ISCA.
[45]
Tånnander, C., Fallgren, P., Edlund, J., Gustafson, J. (2019). Spot the pleasant people! Navigating the cocktail party buzz. In Proceedings Interspeech 2019, 20th Annual Conference of the International Speech Communication Association. (pp. 4220-4224).
[46]
Kontogiorgos, D., Skantze, G., Abelho Pereira, A. T., Gustafson, J. (2019). The Effects of Embodiment and Social Eye-Gaze in Conversational Agents. In Proceedings of the 41st Annual Conference of the Cognitive Science Society (CogSci).
[47]
Kontogiorgos, D., Abelho Pereira, A. T., Gustafson, J. (2019). The Trade-off between Interaction Time and Social Facilitation with Collaborative Social Robots. In The Challenges of Working on Social Robots that Collaborate with People.
[48]
Kontogiorgos, D., Abelho Pereira, A. T., Andersson, O., Koivisto, M., Gonzalez Rabal, E., Vartiainen, V., Gustafson, J. (2019). The effects of anthropomorphism and non-verbal social behaviour in virtual assistants. In IVA 2019 - Proceedings of the 19th ACM International Conference on Intelligent Virtual Agents. (pp. 133-140). Association for Computing Machinery (ACM).
[49]
Malisz, Z., Henter, G. E., Valentini-Botinhao, C., Watts, O., Beskow, J., Gustafson, J. (2019). The speech synthesis phoneticians need is both realistic and controllable. In Proceedings from FONETIK 2019. Stockholm.
[50]
Sibirtseva, E., Kontogiorgos, D., Nykvist, O., Karaoǧuz, H., Leite, I., Gustafson, J., Kragic, D. (2018). A Comparison of Visualisation Methods for Disambiguating Verbal Requests in Human-Robot Interaction. In Proceedings 27th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN) 2018. IEEE.
[51]
Kontogiorgos, D., Avramova, V., Alexanderson, S., Jonell, P., Oertel, C., Beskow, J., Skantze, G., Gustafson, J. (2018). A Multimodal Corpus for Mutual Gaze and Joint Attention in Multiparty Situated Interaction. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018). (pp. 119-127). Paris.
[52]
Jonell, P., Oertel, C., Kontogiorgos, D., Beskow, J., Gustafson, J. (2018). Crowdsourced Multimodal Corpora Collection Tool. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018). (pp. 728-734). Paris.
[53]
Kragic, D., Gustafson, J., Karaoǧuz, H., Jensfelt, P., Krug, R. (2018). Interactive, collaborative robots : Challenges and opportunities. In IJCAI International Joint Conference on Artificial Intelligence. (pp. 18-25). International Joint Conferences on Artificial Intelligence.
[54]
Kontogiorgos, D., Sibirtseva, E., Pereira, A., Skantze, G., Gustafson, J. (2018). Multimodal reference resolution in collaborative assembly tasks. ACM Digital Library.
[55]
Székely, É., Wagner, P., Gustafson, J. (2018). The Wrylie-board : Mapping acoustic space of expressive feedback to attitude markers. In Proc. IEEE Spoken Language Technology conference.
[56]
Malisz, Z., Berthelsen, H., Beskow, J., Gustafson, J. (2017). Controlling prominence realisation in parametric DNN-based speech synthesis. In Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH 2017. (pp. 1079-1083). International Speech Communication Association.
[57]
Oertel, C., Jonell, P., Kontogiorgos, D., Mendelson, J., Beskow, J., Gustafson, J. (2017). Crowd-Sourced Design of Artificial Attentive Listeners. In Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH. (pp. 854-858). International Speech Communication Association.
[58]
Jonell, P., Oertel, C., Kontogiorgos, D., Beskow, J., Gustafson, J. (2017). Crowd-powered design of virtual attentive listeners. In 17th International Conference on Intelligent Virtual Agents, IVA 2017. (pp. 188-191). Springer.
[59]
Oertel, C., Jonell, P., Kontogiorgos, D., Mendelson, J., Beskow, J., Gustafson, J. (2017). Crowdsourced design of artificial attentive listeners. Presented at INTERSPEECH: Situated Interaction, August 20-24, 2017.
[60]
Heldner, M., Gustafson, J., Strömbergsson, S. (2017). Message from the technical program chairs. In Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH. International Speech Communication Association.
[61]
Szekely, E., Mendelson, J., Gustafson, J. (2017). Synthesising uncertainty : The interplay of vocal effort and hesitation disfluencies. In Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH. (pp. 804-808). International Speech Communication Association.
[62]
Oertel, C., Jonell, P., Haddad, K. E., Szekely, E., Gustafson, J. (2017). Using crowd-sourcing for the design of listening agents : Challenges and opportunities. In ISIAA 2017 - Proceedings of the 1st ACM SIGCHI International Workshop on Investigating Social Interactions with Artificial Agents, Co-located with ICMI 2017. (pp. 37-38). Association for Computing Machinery (ACM).
[63]
Edlund, J., Gustafson, J. (2016). Hidden resources - Strategies to acquire and exploit potential spoken language resources in national archives. In Proceedings of the 10th International Conference on Language Resources and Evaluation, LREC 2016. (pp. 4531-4534). European Language Resources Association (ELRA).
[64]
Johansson, M., Hori, T., Skantze, G., Hothker, A., Gustafson, J. (2016). Making Turn-Taking Decisions for an Active Listening Robot for Memory Training. In Social Robotics (ICSR 2016). (pp. 940-949). Springer.
[65]
Oertel, C., Gustafson, J., Black, A. (2016). On Data Driven Parametric Backchannel Synthesis for Expressing Attentiveness in Conversational Agents. In Proceedings of Multimodal Analyses enabling Artificial Agents in Human-Machine Interaction (MA3HMI), satellite workshop of ICMI 2016.
[66]
Oertel, C., David Lopes, J., Yu, Y., Funes, K., Gustafson, J., Black, A., Odobez, J.-M. (2016). Towards Building an Attentive Artificial Listener: On the Perception of Attentiveness in Audio-Visual Feedback Tokens. In Proceedings of the 18th ACM International Conference on Multimodal Interaction (ICMI 2016). Tokyo, Japan.
[67]
Oertel, C., Gustafson, J., Black, A. (2016). Towards Building an Attentive Artificial Listener: On the Perception of Attentiveness in Feedback Utterances. In Proceedings of Interspeech 2016. San Francisco, USA.
[68]
Edlund, J., Tånnander, C., Gustafson, J. (2015). Audience response system-based assessment for analysis-by-synthesis. In Proc. of ICPhS 2015. ICPhS.
[69]
Meena, R., David Lopes, J., Skantze, G., Gustafson, J. (2015). Automatic Detection of Miscommunication in Spoken Dialogue Systems. In Proceedings of the 16th Annual Meeting of the Special Interest Group on Discourse and Dialogue (SIGDIAL). (pp. 354-363).
[70]
Oertel, C., Funes, K., Gustafson, J., Odobez, J.-M. (2015). Deciphering the Silent Participant : On the Use of Audio-Visual Cues for the Classification of Listener Categories in Group Discussions. In Proceedings of ICMI 2015. ACM Digital Library.
[71]
Lopes, J., Salvi, G., Skantze, G., Abad, A., Gustafson, J., Batista, F., Meena, R., Trancoso, I. (2015). Detecting Repetitions in Spoken Dialogue Systems Using Phonetic Distances. In INTERSPEECH 2015. (pp. 1805-1809).
[72]
Bollepalli, B., Urbain, J., Raitio, T., Gustafson, J., Cakmak, H. (2014). A comparative evaluation of vocoding techniques for HMM-based laughter synthesis. Presented at the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), May 4-9, 2014, Florence, Italy. (pp. 255-259).
[73]
Johansson, M., Skantze, G., Gustafson, J. (2014). Comparison of human-human and human-robot turn-taking behaviour in multi-party situated interaction. In UM3I '14: Proceedings of the 2014 workshop on Understanding and Modeling Multiparty, Multimodal Interactions. (pp. 21-26). Istanbul, Turkey.
[74]
Meena, R., Boye, J., Skantze, G., Gustafson, J. (2014). Crowdsourcing Street-level Geographic Information Using a Spoken Dialogue System. In Proceedings of the SIGDIAL 2014 Conference. (pp. 2-11). Association for Computational Linguistics.
[75]
Edlund, J., Edelstam, F., Gustafson, J. (2014). Human pause and resume behaviours for unobtrusive humanlike in-car spoken dialogue systems. In Proceedings of the EACL 2014 Workshop on Dialogue in Motion (DM). (pp. 73-77). Gothenburg, Sweden.
[76]
Al Moubayed, S., Beskow, J., Bollepalli, B., Gustafson, J., Hussen-Abdelaziz, A., Johansson, M., Koutsombogera, M., Lopes, J. D., Novikova, J., Oertel, C., Skantze, G., Stefanov, K., Varol, G. (2014). Human-robot Collaborative Tutoring Using Multiparty Multimodal Spoken Dialogue. Presented at the 9th Annual ACM/IEEE International Conference on Human-Robot Interaction, Bielefeld, Germany. IEEE conference proceedings.
[77]
Dalmas, T., Götze, J., Gustafsson, J., Janarthanam, S., Kleindienst, J., Mueller, C., Stent, A., Vlachos, A., Artzi, Y., Benotti, L., Boye, J., Clark, S., Curin, J., Dethlefs, N., Edlund, J., Goldwasser, D., Heeman, P., Jurcicek, F., Kelleher, J., Komatani, K., Kwiatkowski, T., Larsson, S., Lemon, O., Lenke, N., Macek, J., Macek, T., Mooney, R., Ramachandran, D., Rieser, V., Shi, H., Tenbrink, T., Williams, J. (2014). Introduction. In Proceedings 2014 Workshop on Dialogue in Motion, DM 2014. Association for Computational Linguistics (ACL).
[78]
Meena, R., Boye, J., Skantze, G., Gustafson, J. (2014). Using a Spoken Dialogue System for Crowdsourcing Street-level Geographic Information. Presented at the 2nd Workshop on Action, Perception and Language, SLTC 2014.
[79]
Oertel, C., Funes, K., Sheiki, S., Odobez, J.-M., Gustafson, J. (2014). Who will get the grant? : A multimodal corpus for the analysis of conversational behaviours in group interviews. In UM3I 2014 - Proceedings of the 2014 ACM Workshop on Understanding and Modeling Multiparty, Multimodal Interactions, Co-located with ICMI 2014. (pp. 27-32). Association for Computing Machinery (ACM).
[80]
Meena, R., Skantze, G., Gustafson, J. (2013). A Data-driven Model for Timing Feedback in a Map Task Dialogue System. In 14th Annual Meeting of the Special Interest Group on Discourse and Dialogue - SIGdial. (pp. 375-383). Metz, France.
[81]
Al Moubayed, S., Edlund, J., Gustafson, J. (2013). Analysis of gaze and speech patterns in three-party quiz game interaction. In Interspeech 2013. (pp. 1126-1130). The International Speech Communication Association (ISCA).
[82]
Johansson, M., Skantze, G., Gustafson, J. (2013). Head Pose Patterns in Multiparty Human-Robot Team-Building Interactions. In Social Robotics: 5th International Conference, ICSR 2013, Bristol, UK, October 27-29, 2013, Proceedings. (pp. 351-360). Springer.
[83]
Meena, R., Skantze, G., Gustafson, J. (2013). Human Evaluation of Conceptual Route Graphs for Interpreting Spoken Route Descriptions. In Proceedings of the 3rd International Workshop on Computational Models of Spatial Language Interpretation and Generation (CoSLI). (pp. 30-35). Potsdam, Germany.
[84]
Bollepalli, B., Beskow, J., Gustafsson, J. (2013). Non-Linear Pitch Modification in Voice Conversion using Artificial Neural Networks. In Advances in nonlinear speech processing: 6th International Conference, NOLISP 2013, Mons, Belgium, June 19-21, 2013, proceedings. (pp. 97-103). Springer Berlin/Heidelberg.
[85]
Edlund, J., Al Moubayed, S., Tånnander, C., Gustafson, J. (2013). Temporal precision and reliability of audience response system based annotation. In Proc. of Multimodal Corpora 2013.
[86]
Oertel, C., Salvi, G., Götze, J., Edlund, J., Gustafson, J., Heldner, M. (2013). The KTH Games Corpora : How to Catch a Werewolf. In IVA 2013 Workshop Multimodal Corpora: Beyond Audio and Video: MMC 2013.
[87]
Meena, R., Skantze, G., Gustafson, J. (2013). The Map Task Dialogue System : A Test-bed for Modelling Human-Like Dialogue. In 14th Annual Meeting of the Special Interest Group on Discourse and Dialogue - SIGdial. (pp. 366-368). Metz, France.
[88]
Meena, R., Skantze, G., Gustafson, J. (2012). A Chunking Parser for Semantic Interpretation of Spoken Route Directions in Human-Robot Dialogue. In Proceedings of the 4th Swedish Language Technology Conference (SLTC 2012). (pp. 55-56). Lund, Sweden.
[89]
Meena, R., Skantze, G., Gustafson, J. (2012). A data-driven approach to understanding spoken route directions in human-robot dialogue. In 13th Annual Conference of the International Speech Communication Association 2012, INTERSPEECH 2012. (pp. 226-229).
[90]
Blomberg, M., Skantze, G., Al Moubayed, S., Gustafson, J., Beskow, J., Granström, B. (2012). Children and adults in dialogue with the robot head Furhat - corpus collection and initial analysis. In Proceedings of WOCCI. Portland, OR: The International Society for Computers and Their Applications (ISCA).
[91]
Neiberg, D., Gustafson, J. (2012). Cues to perceived functions of acted and spontaneous feedback expressions. In Proceedings of the Interdisciplinary Workshop on Feedback Behaviors in Dialog. (pp. 53-56).
[92]
Neiberg, D., Gustafson, J. (2012). Exploring the implications for feedback of a neurocognitive theory of overlapped speech. In Proceedings of the Workshop on Feedback Behaviors in Dialog. (pp. 57-60).
[93]
Skantze, G., Al Moubayed, S., Gustafson, J., Beskow, J., Granström, B. (2012). Furhat at Robotville : A Robot Head Harvesting the Thoughts of the Public through Multi-party Dialogue. In Proceedings of the Workshop on Real-time Conversation with Virtual Agents IVA-RCVA.
[94]
Al Moubayed, S., Beskow, J., Granström, B., Gustafson, J., Mirnig, N., Skantze, G., Tscheligi, M. (2012). Furhat goes to Robotville: a large-scale multiparty human-robot interaction data collection in a public space. In Proc. of LREC Workshop on Multimodal Corpora. Istanbul, Turkey.
[95]
Oertel, C., Wlodarczak, M., Edlund, J., Wagner, P., Gustafson, J. (2012). Gaze Patterns in Turn-Taking. In 13th Annual Conference of the International Speech Communication Association 2012, INTERSPEECH 2012, Vol 3. (pp. 2243-2246). Portland, Oregon, US.
[96]
Bollepalli, B., Beskow, J., Gustafson, J. (2012). HMM based speech synthesis system for Swedish Language. In The Fourth Swedish Language Technology Conference. Lund, Sweden.
[97]
Edlund, J., Oertel, C., Gustafson, J. (2012). Investigating negotiation for load-time in the GetHomeSafe project. In Proc. of Workshop on Innovation and Applications in Speech Technology (IAST). (pp. 45-48). Dublin, Ireland.
[98]
Al Moubayed, S., Skantze, G., Beskow, J., Stefanov, K., Gustafson, J. (2012). Multimodal Multiparty Social Interaction with the Furhat Head. Presented at the 14th ACM International Conference on Multimodal Interaction, Santa Monica, CA. (pp. 293-294). Association for Computing Machinery (ACM).
[99]
Edlund, J., Heldner, M., Gustafson, J. (2012). On the effect of the acoustic environment on the accuracy of perception of speaker orientation from auditory cues alone. In 13th Annual Conference of the International Speech Communication Association 2012, INTERSPEECH 2012, Vol 2. (pp. 1482-1485).
[100]
Boye, J., Fredriksson, M., Götze, J., Gustafson, J., Königsmann, J. (2012). Walk this way : Spatial grounding for city exploration. In IWSDS.
[101]
Edlund, J., Heldner, M., Gustafson, J. (2012). Who am I speaking at? : perceiving the head orientation of speakers from acoustic cues alone. In Proc. of LREC Workshop on Multimodal Corpora 2012. Istanbul, Turkey.
[102]
Neiberg, D., Gustafson, J. (2011). A Dual Channel Coupled Decoder for Fillers and Feedback. In INTERSPEECH 2011, 12th Annual Conference of the International Speech Communication Association. (pp. 3097-3100).
[103]
Johnson-Roberson, M., Bohg, J., Skantze, G., Gustafsson, J., Carlson, R., Kragic, D., Rasolzadeh, B. (2011). Enhanced Visual Scene Understanding through Human-Robot Dialog. In 2011 IEEE/RSJ International Conference on Intelligent Robots and Systems. (pp. 3342-3348). IEEE.
[104]
Neiberg, D., Gustafson, J. (2011). Predicting Speaker Changes and Listener Responses With and Without Eye-contact. In INTERSPEECH 2011, 12th Annual Conference of the International Speech Communication Association. (pp. 1576-1579). Florence, Italy.
[105]
Neiberg, D., Ananthakrishnan, G., Gustafson, J. (2011). Tracking pitch contours using minimum jerk trajectories. In INTERSPEECH 2011, 12th Annual Conference of the International Speech Communication Association. (pp. 2056-2059).
[106]
Johansson, M., Skantze, G., Gustafson, J. (2011). Understanding route directions in human-robot dialogue. In Proceedings of SemDial. (pp. 19-27). Los Angeles, CA.
[107]
Gustafson, J., Neiberg, D. (2010). Directing conversation using the prosody of mm and mhm. In Proceedings of SLTC 2010. (pp. 15-16). Linköping, Sweden.
[108]
Johnson-Roberson, M., Bohg, J., Kragic, D., Skantze, G., Gustafson, J., Carlson, R. (2010). Enhanced visual scene understanding through human-robot dialog. In Dialog with Robots: AAAI 2010 Fall Symposium.
[109]
Beskow, J., Edlund, J., Granström, B., Gustafsson, J., House, D. (2010). Face-to-Face Interaction and the KTH Cooking Show. In Development of multimodal interfaces: Active listening and synchrony. (pp. 157-168).
[110]
Neiberg, D., Gustafson, J. (2010). Modeling Conversational Interaction Using Coupled Markov Chains. In Proceedings of DiSS-LPSS Joint Workshop 2010.
[111]
Gustafson, J., Neiberg, D. (2010). Prosodic cues to engagement in non-lexical response tokens in Swedish. In Proceedings of DiSS-LPSS Joint Workshop 2010. Tokyo, Japan.
[112]
Schötz, S., Beskow, J., Bruce, G., Granström, B., Gustafson, J. (2010). Simulating Intonation in Regional Varieties of Swedish. In Speech Prosody 2010. Chicago, USA.
[113]
Neiberg, D., Gustafson, J. (2010). The Prosody of Swedish Conversational Grunts. In 11th Annual Conference of the International Speech Communication Association: Spoken Language Processing for All, INTERSPEECH 2010. (pp. 2562-2565).
[114]
Skantze, G., Gustafson, J. (2009). Attention and interaction control in a human-human-computer dialogue setting. In Proceedings of SIGDIAL 2009: the 10th Annual Meeting of the Special Interest Group in Discourse and Dialogue. (pp. 310-313).
[115]
Gustafson, J., Merkes, M. (2009). Eliciting interactional phenomena in human-human dialogues. In Proceedings of the SIGDIAL 2009 Conference: 10th Annual Meeting of the Special Interest Group on Discourse and Dialogue. (pp. 298-301).
[116]
Skantze, G., Gustafson, J. (2009). Multimodal interaction control in the MonAMI Reminder. In Proceedings of DiaHolmia: 2009 Workshop on the Semantics and Pragmatics of Dialogue. (pp. 127-128).
[117]
Beskow, J., Edlund, J., Granström, B., Gustafson, J., Skantze, G., Tobiasson, H. (2009). The MonAMI Reminder : a spoken dialogue system for face-to-face interaction. In Proceedings of the 10th Annual Conference of the International Speech Communication Association, INTERSPEECH 2009. (pp. 300-303). Brighton, U.K.
[118]
Gustafson, J., Edlund, J. (2008). EXPROS : A toolkit for exploratory experimentation with prosody in customized diphone voices. In Perception in Multimodal Dialogue Systems, Proceedings. (pp. 293-296).
[119]
Beskow, J., Edlund, J., Granström, B., Gustafson, J., Skantze, G. (2008). Innovative interfaces in MonAMI : The Reminder. In Perception in Multimodal Dialogue Systems, Proceedings. (pp. 272-275).
[120]
Gustafson, J., Heldner, M., Edlund, J. (2008). Potential benefits of human-like dialogue behaviour in the call routing domain. In Perception in Multimodal Dialogue Systems, Proceedings. (pp. 240-251).
[121]
Strangert, E., Gustafson, J. (2008). Subject ratings, acoustic measurements and synthesis of good-speaker characteristics. In Proceedings of Interspeech 2008. (pp. 1688-1691).
[122]
Strangert, E., Gustafson, J. (2008). What makes a good speaker? : Subject ratings, acoustic measurements and perceptual evaluations. In Proc. Annu. Conf. Int. Speech. Commun. Assoc., INTERSPEECH. (pp. 1688-1691).
[123]
Bell, L., Gustafson, J. (2007). Children's convergence in referring expressions to graphical objects in a speech-enabled computer game. In 8th Annual Conference of the International Speech Communication Association. (pp. 2788-2791). Antwerp, Belgium.
[124]
Edlund, J., Heldner, M., Gustafson, J. (2006). Two faces of spoken dialogue systems. In Interspeech 2006 - ICSLP Satellite Workshop Dialogue on Dialogues: Multidisciplinary Evaluation of Advanced Speech-based Interactive Systems. Pittsburgh, PA, USA.
[125]
Boye, J., Gustafson, J. (2005). How to do dialogue in a fairy-tale world. In Proceedings of the 6th SIGdial Workshop on Discourse and Dialogue.
[126]
Gustafson, J., Boye, J., Fredriksson, M., Johannesson, L., Königsmann, J. (2005). Providing computer game characters with conversational abilities. In Intelligent Virtual Agents, Proceedings. (pp. 37-51). Kos, Greece.
[127]
Bell, L., Boye, J., Gustafson, J., Heldner, M., Lindström, A., Wirén, M. (2005). The Swedish NICE Corpus : Spoken dialogues between children and embodied characters in a computer game scenario. In 9th European Conference on Speech Communication and Technology. (pp. 2765-2768). Lisbon, Portugal.
[128]
Boye, J., Wirén, M., Gustafson, J. (2004). Contextual reasoning in multimodal dialogue systems : two case studies. In Proceedings of the 8th Workshop on the Semantics and Pragmatics of Dialogue, Catalogue'04. (pp. 19-21). Barcelona.
[129]
Gustafson, J., Bell, L., Boye, J., Lindström, A., Wirén, M. (2004). The NICE fairy-tale game system. In Proceedings of the 5th SIGdial Workshop on Discourse and Dialogue at HLT-NAACL 2004. Boston.
[130]
Gustafson, J., Sjölander, K. (2004). Voice creations for conversational fairy-tale characters. In Proc. 5th ISCA Speech Synthesis Workshop. (pp. 145-150). Pittsburgh.
[131]
Gustafson, J., Bell, L., Boye, J., Edlund, J., Wirén, M. (2002). Constraint Manipulation and Visualization in a Multimodal Dialogue System. In Proceedings of MultiModal Dialogue in Mobile Environments.
[132]
Gustafson, J., Sjölander, K. (2002). Voice Transformations for Improving Children's Speech Recognition in a Publicly Available Dialogue System. In Proceedings of ICSLP 02. (pp. 297-300). International Speech Communication Association.
[133]
Bell, L., Boye, J., Gustafson, J. (2001). Real-time Handling of Fragmented Utterances. In Proceedings of the NAACL Workshop on Adaptation in Dialogue Systems.
[134]
Bell, L., Eklund, R., Gustafson, J. (2000). A Comparison of Disfluency Distribution in a Unimodal and a Multimodal Speech Interface. In Proceedings of ICSLP 00.
[135]
Bell, L., Boye, J., Gustafson, J., Wirén, M. (2000). Modality Convergence in a Multimodal Dialogue System. In Proceedings of Götalog. (pp. 29-34).
[136]
Bell, L., Gustafson, J. (2000). Positive and Negative User Feedback in a Spoken Dialogue Corpus. In Proceedings of ICSLP 00.
[137]
Bell, L., Gustafson, J. (1999). Repetition and its phonetic realizations : investigating a Swedish database of spontaneous computer directed speech. In Proceedings of the XIVth International Congress of Phonetic Sciences. (pp. 1221).
[138]
Gustafson, J., Larsson, A., Carlson, R., Hellman, K. (1997). How do System Questions Influence Lexical Choices in User Answers? In Proceedings of Eurospeech '97, 5th European Conference on Speech Communication and Technology, Rhodes, Greece, 22-25 September 1997. (pp. 2275-2278). Grenoble: European Speech Communication Association (ESCA).

Book chapters

[139]
Skantze, G., Gustafson, J. & Beskow, J. (2019). Multimodal Conversational Interaction with Robots. In Sharon Oviatt, Björn Schuller, Philip R. Cohen, Daniel Sonntag, Gerasimos Potamianos, Antonio Krüger (Eds.), The Handbook of Multimodal-Multisensor Interfaces, Volume 3: Language Processing, Software, Commercialization, and Emerging Directions. ACM Press.
[140]
Boye, J., Fredriksson, M., Götze, J., Gustafson, J. & Königsmann, J. (2014). Walk this way : Spatial grounding for city exploration. In Natural interaction with robots, knowbots and smartphones (pp. 59-67). Springer-Verlag.
[141]
Edlund, J. & Gustafson, J. (2010). Ask the experts : Part II: Analysis. In Juel Henrichsen, Peter (Ed.), Linguistic Theory and Raw Sound (pp. 183-198). Frederiksberg: Samfundslitteratur.
[142]
Gustafson, J. & Edlund, J. (2010). Ask the experts - Part I: Elicitation. In Juel Henrichsen, Peter (Ed.), Linguistic Theory and Raw Sound (pp. 169-182). Samfundslitteratur.
[143]
Edlund, J., Heldner, M. & Gustafson, J. (2005). Utterance segmentation and turn-taking in spoken dialogue systems. In Fisseni, B.; Schmitz, H.-C.; Schröder, B.; Wagner, P. (Eds.), Computer Studies in Language and Speech (pp. 576-587). Frankfurt am Main, Germany: Peter Lang.

Non-peer-reviewed

Conference papers

[144]
Lameris, H., Mehta, S., Henter, G. E., Kirkland, A., Moëll, B., O'Regan, J., Gustafsson, J., Székely, É. (2022). Spontaneous Neural HMM TTS with Prosodic Feature Modification. In Proceedings of Fonetik 2022.
[145]
Edlund, J., Al Moubayed, S., Tånnander, C., Gustafson, J. (2013). Audience response system based annotation of speech. In Proceedings of Fonetik 2013. (pp. 13-16). Linköping: Linköping University.
[146]
Al Moubayed, S., Beskow, J., Blomberg, M., Granström, B., Gustafson, J., Mirnig, N., Skantze, G. (2012). Talking with Furhat - multi-party interaction with a back-projected robot head. In Proceedings of Fonetik 2012. (pp. 109-112). Gothenburg, Sweden.
[148]
Edlund, J., Gustafson, J., Beskow, J. (2010). Cocktail : a demonstration of massively multi-component audio environments for illustration and analysis. In SLTC 2010, The Third Swedish Language Technology Conference (SLTC 2010): Proceedings of the Conference.
[149]
Beskow, J., Edlund, J., Gustafson, J., Heldner, M., Hjalmarsson, A., House, D. (2010). Modelling humanlike conversational behaviour. In SLTC 2010: The Third Swedish Language Technology Conference (SLTC 2010), Proceedings of the Conference. (pp. 9-10). Linköping, Sweden.
[150]
Neiberg, D., Gustafson, J. (2010). Prosodic Characterization and Automatic Classification of Conversational Grunts in Swedish. In Working Papers 54: Proceedings from Fonetik 2010.
[151]
Beskow, J., Edlund, J., Gustafson, J., Heldner, M., Hjalmarsson, A., House, D. (2010). Research focus : Interactional aspects of spoken face-to-face communication. In Proceedings from Fonetik, Lund, June 2-4, 2010. (pp. 7-10). Lund, Sweden: Lund University.
[152]
Schötz, S., Beskow, J., Bruce, G., Granström, B., Gustafson, J., Segerup, M. (2010). Simulating Intonation in Regional Varieties of Swedish. In Fonetik 2010. Lund, Sweden.
[153]
Beskow, J., Gustafson, J. (2009). Experiments with Synthesis of Swedish Dialects. In Proceedings of Fonetik 2009. (pp. 28-29). Stockholm: Stockholm University.
[154]
Gustafson, J., Edlund, J. (2008). EXPROS : Tools for exploratory experimentation with prosody. In Proceedings of FONETIK 2008. (pp. 17-20). Gothenburg, Sweden.
[155]
Strangert, E., Gustafson, J. (2008). Improving speaker skill in a resynthesis experiment. In Proceedings FONETIK 2008: The XXIst Swedish Phonetics Conference. (pp. 69-72).
[156]
Beskow, J., Edlund, J., Granström, B., Gustafson, J., Jonsson, O., Skantze, G. (2008). Speech technology in the European project MonAMI. In Proceedings of FONETIK 2008. (pp. 33-36). Gothenburg, Sweden: University of Gothenburg.

Book chapters

[157]
Bertenstam, J., Blomberg, M., Carlson, R., Elenius, K., Granström, B., Gustafson, J. ... Ström, N. (1995). Spoken dialogue data collected in the Waxholm project. In Quarterly progress and status report: April 15, 1995 / Speech Transmission Laboratory (1st ed., pp. 50-73). Stockholm: KTH.

Theses

Other

[159]
Wang, S., Henter, G. E., Gustafsson, J., Székely, É. (2023). A comparative study of self-supervised speech representations in read and spontaneous TTS. (Manuscript).
[160]
Jonell, P., Mendelson, J., Storskog, T., Hagman, G., Östberg, P., Leite, I. ... Kjellström, H. (2017). Machine Learning and Social Robotics for Detecting Early Signs of Dementia.
Last synchronized with DiVA:
2024-11-17 01:10:44