Publications by Olov Engwall
Peer-reviewed
Articles
[1]
O. Engwall, R. Cumbal and A. R. Majlesi, "Socio-cultural perception of robot backchannels," Frontiers in Robotics and AI, vol. 10, 2023.
[2]
R. Cumbal et al., "Stereotypical nationality representations in HRI : perspectives from international young adults," Frontiers in Robotics and AI, vol. 10, 2023.
[3]
O. Engwall et al., "Identification of Low-engaged Learners in Robot-led Second Language Conversations with Adults," ACM Transactions on Human-Robot Interaction, vol. 11, no. 2, 2022.
[4]
O. Engwall and J. David Lopes, "Interaction and collaboration in robot-assisted language learning for adults," Computer Assisted Language Learning, vol. 35, no. 5-6, pp. 1273-1309, 2022.
[5]
O. Engwall, J. D. Águas Lopes and R. Cumbal, "Is a Wizard-of-Oz Required for Robot-Led Conversation Practice in a Second Language?," International Journal of Social Robotics, 2022.
[6]
O. Engwall et al., "Learner and teacher perspectives on robot-led L2 conversation practice," ReCALL, vol. 34, no. 3, pp. 344-359, 2022.
[7]
S. Dabbaghchian et al., "Simulation of vowel-vowel utterances using a 3D biomechanical-acoustic model," International Journal for Numerical Methods in Biomedical Engineering, vol. 37, no. 1, 2021.
[8]
O. Engwall, J. David Lopes and A. Åhlund, "Robot interaction styles for conversation practice in second language learning," International Journal of Social Robotics, 2020.
[9]
M. Arnela et al., "MRI-based vocal tract representations for the three-dimensional finite element synthesis of diphthongs," IEEE Transactions on Audio, Speech, and Language Processing, vol. 27, no. 12, pp. 2173-2182, 2019.
[10]
S. Dabbaghchian et al., "Reconstruction of vocal tract geometries from biomechanical simulations," International Journal for Numerical Methods in Biomedical Engineering, 2018.
[11]
M. Arnela et al., "Influence of lips on the production of vowels based on finite element simulations and experiments," Journal of the Acoustical Society of America, vol. 139, no. 5, s. 2852-2859, 2016.
[12]
M. Arnela et al., "Influence of vocal tract geometry simplifications on the numerical simulation of vowel sounds," Journal of the Acoustical Society of America, vol. 140, no. 3, s. 1707-1718, 2016.
[13]
C. Koniaris, G. Salvi och O. Engwall, "On mispronunciation analysis of individual foreign speakers using auditory periphery models," Speech Communication, vol. 55, no. 5, s. 691-706, 2013.
[14]
O. Engwall, "Analysis of and feedback on phonetic features in pronunciation training with a virtual teacher," Computer Assisted Language Learning, vol. 25, no. 1, s. 37-64, 2012.
[15]
G. Ananthakrishnan, O. Engwall och D. Neiberg, "Exploring the Predictability of Non-Unique Acoustic-to-Articulatory Mappings," IEEE Transactions on Audio, Speech, and Language Processing, vol. 20, no. 10, s. 2672-2682, 2012.
[16]
G. Ananthakrishnan och O. Engwall, "Mapping between acoustic and articulatory gestures," Speech Communication, vol. 53, no. 4, s. 567-589, 2011.
[17]
H. Kjellström och O. Engwall, "Audiovisual-to-articulatory inversion," Speech Communication, vol. 51, no. 3, s. 195-209, 2009.
[18]
J. Beskow et al., "Visualization of speech and audio for hearing-impaired persons," Technology and Disability, vol. 20, no. 2, s. 97-107, 2008.
[19]
O. Engwall och O. Bälter, "Pronunciation feedback from real and virtual language teachers," Computer Assisted Language Learning, vol. 20, no. 3, s. 235-262, 2007.
[20]
O. Engwall et al., "Designing the user interface of the computer-based speech training system ARTUR based on early user tests," Behavior and Information Technology, vol. 25, no. 4, s. 353-365, 2006.
[21]
O. Engwall, "Combining MRI, EMA and EPG measurements in a three-dimensional tongue model," Speech Communication, vol. 41, no. 2-3, s. 303-329, 2003.
Conference papers
[22]
M. Jansson et al., "An initial exploration of semi-automated tutoring : How AI could be used as support for online human tutors," in Proceedings of the Fourteenth International Conference on Networked Learning, 2024.
[23]
R. Cumbal and O. Engwall, "Speaking Transparently : Social Robots in Educational Settings," in Companion of the 2024 ACM/IEEE International Conference on Human-Robot Interaction (HRI '24 Companion), March 11-14, 2024, Boulder, CO, USA, 2024.
[24]
R. Cumbal et al., "Shaping unbalanced multi-party interactions through adaptive robot backchannels," in IVA 2022 - Proceedings of the 22nd ACM International Conference on Intelligent Virtual Agents, 2022.
[25]
S. Gillet et al., "Robot Gaze Can Mediate Participation Imbalance in Groups with Different Skill Levels," in Proceedings of the 2021 ACM/IEEE International Conference on Human-Robot Interaction, 2021, pp. 303-311.
[26]
R. Cumbal et al., "“You don’t understand me!” : Comparing ASR Results for L1 and L2 Speakers of Swedish," in Proceedings Interspeech 2021, 2021, pp. 96-100.
[27]
R. Cumbal, J. David Lopes and O. Engwall, "Detection of Listener Uncertainty in Robot-Led Second Language Conversation Practice," in Proceedings ICMI '20: International Conference on Multimodal Interaction, 2020.
[28]
R. Cumbal, J. Lopes and O. Engwall, "Uncertainty in robot assisted second language conversation practice," in HRI '20: Companion of the 2020 ACM/IEEE International Conference on Human-Robot Interaction, 2020, pp. 171-173.
[29]
J. Lopes, O. Engwall and G. Skantze, "A First Visit to the Robot Language Café," in Proceedings of the ISCA workshop on Speech and Language Technology in Education, 2017.
[30]
M. Arnela et al., "A semi-polar grid strategy for the three-dimensional finite element simulation of vowel-vowel sequences," in Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH 2017, 2017, pp. 3477-3481.
[31]
S. Dabbaghchian et al., "Synthesis of VV utterances from muscle activation to sound with a 3d model," in Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH 2017, 2017, pp. 3497-3501.
[32]
M. Arnela et al., "Finite element generation of vowel sounds using dynamic complex three-dimensional vocal tracts," i Proceedings of the 23rd international congress on sound and vibration : From ancient to modern acoustics, 2016.
[33]
S. Dabbaghchian et al., "Using a Biomechanical Model and Articulatory Data for the Numerical Production of Vowels," i Interspeech 2016, 2016, s. 3569-3573.
[34]
S. Dabbaghchian, M. Arnela och O. Engwall, "SIMPLIFICATION OF VOCAL TRACT SHAPES WITH DIFFERENT LEVELS OF DETAIL," i Proceedings of the 18th International Congress of Phonetic Sciences. Glasgow, UK, 2015, s. 1-5.
[35]
C. Koniaris, O. Engwall och G. Salvi, "Auditory and Dynamic Modeling Paradigms to Detect L2 Mispronunciations," i 13th Annual Conference of the International Speech Communication Association 2012, INTERSPEECH 2012, Vol 1, 2012, s. 898-901.
[36]
C. Koniaris, O. Engwall och G. Salvi, "On the Benefit of Using Auditory Modeling for Diagnostic Evaluation of Pronunciations," i International Symposium on Automatic Detection of Errors in Pronunciation Training (IS ADEPT), Stockholm, Sweden, June 6-8, 2012, 2012, s. 59-64.
[37]
O. Engwall, "Pronunciation analysis by acoustic-to-articulatory feature inversion," i Proceedings of the International Symposium on Automatic detection of Errors in Pronunciation Training, 2012, s. 79-84.
[38]
C. Koniaris och O. Engwall, "Perceptual differentiation modeling explains phoneme mispronunciation by non-native speakers," i ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings, 2011, s. 5704-5707.
[39]
C. Koniaris och O. Engwall, "Phoneme Level Non-Native Pronunciation Analysis by an Auditory Model-based Native Assessment Scheme," i 12th Annual Conference of the International Speech Communication Association, INTERSPEECH 2011, 2011, s. 1157-1160.
[40]
G. Ananthakrishnan och O. Engwall, "Resolving Non-uniqueness in the Acoustic-to-Articulatory Mapping," i ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings, 2011, s. 4628-4631.
[41]
G. Ananthakrishnan et al., "Using an Ensemble of Classifiers for Mispronunciation Feedback," i Proceedings of SLaTE, 2011.
[42]
S. Picard et al., "Detection of Specific Mispronunciations using Audiovisual Features," i Auditory-Visual Speech Processing (AVSP) 2010, 2010.
[43]
O. Engwall, "Is there a McGurk effect for tongue reading?," i Proceedings of AVSP : International Conferenceon Audio-Visual Speech Processing, 2010.
[44]
G. Ananthakrishnan et al., "Predicting Unseen Articulations from Multi-speaker Articulatory Models," i Proceedings of the 11th Annual Conference of the International Speech Communication Association, INTERSPEECH 2010, 2010, s. 1588-1591.
[45]
O. Engwall och P. Wik, "Are real tongue movements easier to speech read than synthesized?," i INTERSPEECH 2009 : 10TH ANNUAL CONFERENCE OF THE INTERNATIONAL SPEECH COMMUNICATION ASSOCIATION 2009, 2009, s. 824-827.
[46]
O. Engwall och P. Wik, "Can you tell if tongue movements are real or synthetic?," i Proceedings of AVSP, 2009.
[47]
G. Ananthakrishnan, D. Neiberg och O. Engwall, "In search of Non-uniqueness in the Acoustic-to-Articulatory Mapping," i INTERSPEECH 2009 : 10TH ANNUAL CONFERENCE OF THE INTERNATIONAL SPEECH COMMUNICATION ASSOCIATION 2009, 2009, s. 2799-2802.
[48]
N. Katsamanis et al., "Audiovisual speech inversion by switching dynamical modeling Governed by a Hidden Markov Process," i Proceedings of EUSIPCO, 2008.
[49]
O. Engwall, "Can audio-visual instructions help learners improve their articulation? : an ultrasound study of short term changes," i INTERSPEECH 2008 : 9TH ANNUAL CONFERENCE OF THE INTERNATIONAL SPEECH COMMUNICATION ASSOCIATION 2008, 2008, s. 2631-2634.
[50]
P. Wik och O. Engwall, "Can visualization of internal articulators support speech perception?," i INTERSPEECH 2008 : 9TH ANNUAL CONFERENCE OF THE INTERNATIONAL SPEECH COMMUNICATION ASSOCIATION 2008, VOLS 1-5, 2008, s. 2627-2630.
[51]
G. Ananthakrishnan och O. Engwall, "Important regions in the articulator trajectory," i Proceedings of International Seminar on Speech Production, 2008, s. 305-308.
[52]
D. Neiberg, G. Ananthakrishnan och O. Engwall, "The Acoustic to Articulation Mapping : Non-linear or Non-unique?," i INTERSPEECH 2008 : 9TH ANNUAL CONFERENCE OF THE INTERNATIONAL SPEECH COMMUNICATION ASSOCIATION 2008, 2008, s. 1485-1488.
[53]
H. Kjellström et al., "Audio-visual phoneme classification for pronunciation training applications," i INTERSPEECH 2007 : 8TH ANNUAL CONFERENCE OF THE INTERNATIONAL SPEECH COMMUNICATION ASSOCIATION, 2007, s. 57-60.
[54]
O. Engwall, "Evaluation of speech inversion using an articulatory classifier," i In Proceedings of the Seventh International Seminar on Speech Production, 2006, s. 469-476.
[55]
O. Engwall et al., "Feedback management in the pronunciation training system ARTUR," i Proceedings of CHI 2006, 2006, s. 231-234.
[56]
O. Engwall, V. Delvaux och T. Metens, "Interspeaker Variation in the Articulation of French Nasal Vowels," i In Proceedings of the Seventh International Seminar on Speech Production, 2006, s. 3-10.
[57]
H. Kjellström, O. Engwall och O. Bälter, "Reconstructing Tongue Movements from Audio and Video," i INTERSPEECH 2006 AND 9TH INTERNATIONAL CONFERENCE ON SPOKEN LANGUAGE PROCESSING, Vol. 1-5, 2006, s. 2238-2241.
[58]
O. Engwall, "Articulatory synthesis using corpus-based estimation of line spectrum pairs," i 9th European Conference on Speech Communication and Technology, 2005, s. 1909-1912.
[59]
E. Eriksson et al., "Design Recommendations for a Computer-Based Speech Training System Based on End User Interviews," i Proceedings of the Tenth International Conference on Speech and Computers, 2005, s. 483-486.
[60]
O. Engwall, "Introducing visual cues in acoustic-to-articulatory inversion," i Interspeech 2005 : 9th European Conference on Speech Communication and Technology, 2005, s. 3205-3208.
[61]
O. Bälter et al., "Wizard-of-Oz Test of ARTUR - a Computer-Based Speech Training System with Articulation Correction," i proceedings of ASSETS 2005, 2005, s. 36-43.
[62]
O. Engwall et al., "Design strategies for a virtual language tutor," i INTERSPEECH 2004, ICSLP, 8th International Conference on Spoken Language Processing, Jeju Island, Korea, October 4-8, 2004, 2004, s. 1693-1696.
[63]
O. Engwall, "From real-time MRI to 3D tongue movements," i INTERSPEECH 2004 : ICSLP 8th International Conference on Spoken Language Processing, 2004, s. 1109-1112.
[64]
O. Engwall, "Speaker adaptation of a three-dimensional tongue model," i INTERSPEECH 2004 : ICSLP 8th International Conference on Spoken Language Processing, 2004, s. 465-468.
Book chapters
[65]
O. Engwall, "Augmented Reality Talking Heads as a Support for Speech Perception and Production," in Augmented Reality : Some Emerging Application Areas, Nee, Andrew Yeh Ching, Ed. IN-TECH, 2011, pp. 89-114.
[66]
O. Engwall, "Assessing MRI measurements : Effects of sustentation, gravitation and coarticulation," in Speech Production : Models, Phonetic Processes and Techniques, Harrington, J.; Tabain, M., Eds. New York : Psychology Press, 2006, pp. 301-314.
Non-peer-reviewed
Articles
[67]
O. Engwall et al., "Editorial : Socially, culturally and contextually aware robots," Frontiers in Robotics and AI, vol. 10, 2023.
[68]
G. Ananthakrishnan, P. Wik and O. Engwall, "Detecting confusable phoneme pairs for Swedish language learners depending on their first language," TMH-QPSR, vol. 51, no. 1, pp. 89-92, 2011.
[69]
O. Engwall, "Feedback strategies of human and virtual tutors in pronunciation training," TMH-QPSR, vol. 48, no. 1, pp. 11-34, 2006.
[70]
O. Engwall, "Dynamical Aspects of Coarticulation in Swedish Fricatives : A Combined EMA and EPG Study," TMH Quarterly Status and Progress Report, pp. 49-73, 2000.
[71]
O. Engwall and P. Badin, "Collecting and Analysing Two- and Three-dimensional MRI data for Swedish," TMH Quarterly Status and Progress Report, pp. 11-38, 1999.
Conference papers
[73]
M. Arnela et al., "Effects of vocal tract geometry simplifications on the numerical simulation of vowels," i PAN EUROPEAN VOICE CONFERENCE ABSTRACT BOOK : Proceedings e report 104, 2015, s. 177.
[74]
S. Dabbaghchian, I. Nilsson och O. Engwall, "From Tongue Movement Data to Muscle Activation – A Preliminary Study of Artisynth's Inverse Modelling," i Parametric Modeling of Human Anatomy, PMHA 14, Aug 22-23, 2014, Vancouver, BC, CA, 2014.
[75]
J. Beskow, O. Engwall och B. Granström, "Resynthesis of Facial and Intraoral Articulation fromSimultaneous Measurements," i Proceedings of the 15th International Congress of phonetic Sciences (ICPhS'03), 2003.
[76]
O. Engwall, "Evaluation of a System for Concatenative Articulatory Visual Synthesis," i Proceedings of the ICSLP, 2002.
[77]
O. Engwall, "Synthesizing Static Vowels and Dynamic Sounds Using a 3D Vocal Tract Model," i Proceedings of the 4th ISCA workshop on Speech Synthesis, 2001, s. 81-86.
[78]
O. Engwall och P. Badin, "An MRI Study of Swedish Fricatives : Coarticulatory effects," i Proceedings of the 5th Speech Production Seminar, 2000, s. 297-300.
[79]
O. Engwall, "Are Static MRI Data Representative of Dynamic Speech? : Results from a Comparative Study Using MRI, EMA, and EPG," i Proceedings of the 6th ICSLP, 2000, s. 17-20.
Book chapters
[80]
O. Engwall, "Datoranimerade talande ansikten," in Människans ansikten : Emotion, interaktion och konst, Adelswärd, V.; Forstorp, P-A., Eds. Stockholm : Carlssons Bokförlag, 2012.
[81]
O. Engwall, "Bättre tala än texta - talteknologi nu och i framtiden," in Tekniken bakom språket, Domeij, Rickard, Ed. Stockholm : Norstedts Akademiska Förlag, 2008, pp. 98-118.
Theses
[82]
O. Engwall, "Tongue Talking : Studies in Intraoral Speech Synthesis," Doctoral thesis, Stockholm : KTH, TRITA-TMH, 2002:04, 2002.
Other
[84]
O. Engwall et al., "Identification of low-engaged learners in robot-led second language conversations with adults," (Manuskript).
[85]
O. Engwall et al., "Learner and teacher perspectives on robot-led L2 conversation practice," (Manuskript).
Last synchronised with DiVA:
2024-12-15 03:12:13