
Improving behavioral and cognitive processes during lectures with speech-to-text recognition technology: a study on visual attention, learning attention and meditation

Speaker: Prof. Rustam Shadiev (Nanjing Normal University)

Time and Date: 14:45- , Apr. 11, 2017

Place: Room 308, Genetics Building, Handan Campus




Attention and meditation are important behavioral and cognitive processes during learning. For example, through attention, a learner selects and processes important information while inhibiting the reception of unimportant information. Meditation, that is, concentration in a calm state of mind, can, when its level is high, increase a learner's ability to pay attention and help the learner better absorb and retain learning material. To the best of our knowledge, little attention has been paid to these behavioral and cognitive processes during lectures delivered in a foreign language. In particular, little is known about learners' levels of attention and meditation during such lectures, or whether educational technologies can facilitate them. This issue must be addressed if we hope to create an ideal learning environment that fosters both attention and meditation, so we carried out two studies to bridge this gap. Non-native English-speaking undergraduate and graduate students participated in our studies. In the first study, students were randomly assigned to either a control or an experimental group, with 30 students in each. Two lectures, both in English but at different levels of difficulty, were given in a classroom environment. Students in the control group received a lecture containing only a video of the instructor and slides; students in the experimental group received the video of the instructor and slides as well as texts of the lecture generated by speech-to-text recognition (STR) technology. We explored how effectively providing STR-texts to students during lectures in a foreign language enhanced learning, attention, and meditation, and examined this effectiveness further with regard to foreign language ability and gender. Finally, students' perceptions of STR-texts were surveyed.
In the second study, 21 students watched the same lectures with STR-texts provided, and their visual attention was investigated. We also explored students' visual attention to the STR-texts and compared their use of STR-texts across different learner characteristics. The following main findings were obtained. First, STR-texts had a positive effect on students' learning performance, attention, and meditation. Most students perceived STR-texts as useful for learning, because they received the instructional content in both verbal and visual forms, which made it more comprehensible and easier to process. During lectures with STR-texts, high-ability and female students showed higher levels of attention and meditation in most cases than their counterparts. The visual attention analysis showed that most students made extensive use of STR-texts to enhance their comprehension of the lecture content. In particular, STR-texts significantly improved the learning achievement of students with low EFL ability. All participants, regardless of their EFL ability, learning style preference, and gender, learned with the aid of STR-texts. Finally, we found that as the difficulty of a lecture increased, participants directed their visual attention mostly to the STR-texts, rather than to other media, in order to better comprehend the lecture content. Based on these results, we offer several suggestions and implications for educators and researchers in the field.




Rustam Shadiev, an Uzbekistan national, received his Ph.D. in educational technology from National Central University, Taiwan, and is currently a professor at the School of Education Science, Nanjing Normal University. Since 2010 he has published 22 SSCI papers, with several more accepted for publication. He has led several research projects funded by Taiwan's Ministry of Education and serves as a reviewer for major SSCI journals including Computers & Education, British Journal of Educational Technology, and Educational Technology & Society. His research focuses on advanced learning technologies, currently on two topics: using speech recognition and evaluation software to develop students' language skills (e.g., presentation skills), and measuring students' attention with head-mounted EEG devices.


Copyright © 2000–2011 School of Information Science and Technology