The Transformer, an attention-based encoder-decoder network, has recently become the prevailing model for automatic speech recognition because of its high recognition accuracy. However, the Transformer converges slowly during training. To address this problem, a structure called the Dual-Residual Transformer Network (DRTNet), which converges quickly, is proposed. In DRTNet, inspired by the structure proposed in ResNet, a direct path is added to the encoder and decoder layers to propagate features. This architecture also fuses features, which tends to improve model performance. Specifically, the input of the current layer is the integration of the input and output of the previous layer. The proposed DRTNet was evaluated empirically on two public datasets, AISHELL-1 and HKUST. Experimental results on both datasets show that DRTNet converges faster and performs better.
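The feature-fusion rule in the abstract (the current layer's input is the integration of the previous layer's input and output) can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: `layer` is a hypothetical stand-in for a full Transformer encoder/decoder layer (attention plus feed-forward network), and element-wise addition is assumed as the "integration" operation.

```python
import numpy as np

def layer(x, w):
    # Hypothetical stand-in for one Transformer layer
    # (self-attention + feed-forward); a simple nonlinear map here.
    return np.tanh(x @ w)

def dual_residual_stack(x, weights):
    """Sketch of the dual-residual propagation described in the abstract:
    the input to layer l+1 is the sum of layer l's input and layer l's
    output, adding a direct feature path alongside the layer itself."""
    prev_input = x
    prev_output = layer(x, weights[0])
    for w in weights[1:]:
        cur_input = prev_input + prev_output  # fuse previous input and output
        cur_output = layer(cur_input, w)
        prev_input, prev_output = cur_input, cur_output
    return prev_input + prev_output
```

Because each layer receives both the transformed features and an unmodified copy of its predecessor's input, gradients have a shorter path back to early layers, which is the mechanism the abstract credits for the faster convergence.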
Author (s): Duan, Zhikui; Gao, Guozhi; Chen, Jiawei; Li, Shiren; Ruan, Jinbiao; Yang, Guangguang; Yu, Xinmei
Affiliation:
Foshan University, Foshan, China
Publication Date:
2022-10-06
Permalink: https://aes2.org/publications/elibrary-page/?id=22013
Duan, Zhikui; Gao, Guozhi; Chen, Jiawei; Li, Shiren; Ruan, Jinbiao; Yang, Guangguang; Yu, Xinmei; 2022; Dual-Residual Transformer Network for Speech Recognition [PDF]; Foshan University, Foshan, China; Paper; Available from: https://aes2.org/publications/elibrary-page/?id=22013
@article{duan2022dualresidual,
  author={Duan, Zhikui and Gao, Guozhi and Chen, Jiawei and Li, Shiren and Ruan, Jinbiao and Yang, Guangguang and Yu, Xinmei},
  journal={Journal of the Audio Engineering Society},
  title={Dual-Residual Transformer Network for Speech Recognition},
  year={2022},
  volume={70},
  number={10},
  pages={871--881},
  month={October},
}
TY  - JOUR
TI  - Dual-Residual Transformer Network for Speech Recognition
AU  - Duan, Zhikui
AU  - Gao, Guozhi
AU  - Chen, Jiawei
AU  - Li, Shiren
AU  - Ruan, Jinbiao
AU  - Yang, Guangguang
AU  - Yu, Xinmei
JO  - Journal of the Audio Engineering Society
VL  - 70
IS  - 10
SP  - 871
EP  - 881
PY  - 2022
Y1  - 2022/10
ER  -