AES E-Library

Dual-Residual Transformer Network for Speech Recognition

The Transformer, an attention-based encoder-decoder network, has recently become the prevailing model for automatic speech recognition because of its high recognition accuracy. However, the Transformer converges slowly. To address this problem, a structure with fast convergence, called the Dual-Residual Transformer Network (DRTNet), is proposed. Inspired by the structure of ResNet, DRTNet adds a direct path to the encoder and decoder layers to propagate features. This architecture also fuses features, which tends to improve model performance: specifically, the input of the current layer is the integration of the input and output of the previous layer. The proposed DRTNet is evaluated empirically on two public datasets, AISHELL-1 and HKUST. Experimental results on both datasets show that DRTNet converges faster and performs better.
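The dual-residual rule described in the abstract can be sketched as follows. This is a hypothetical illustration, not the authors' code: `layer` stands in for a full Transformer encoder layer (here a toy elementwise map), and the stack feeds each layer the sum of the previous layer's input and output, as the abstract describes.

```python
# Hypothetical sketch of the dual-residual connection described in the
# abstract: the input to layer l is the integration (here, the sum) of
# the input and output of layer l-1, a ResNet-style direct path.

def layer(x):
    # Stand-in for a Transformer encoder layer (assumption: any
    # per-position transform; here a toy affine map).
    return [0.5 * v + 1.0 for v in x]

def dual_residual_stack(x, num_layers):
    """Propagate x through num_layers layers, where each layer
    receives prev_input + prev_output (the direct path in DRTNet)."""
    inp = x
    for _ in range(num_layers):
        out = layer(inp)
        # Fuse the previous input and output to form the next input.
        inp = [a + b for a, b in zip(inp, out)]
    return inp

print(dual_residual_stack([1.0, 2.0], 3))  # → [8.125, 11.5]
```

The direct path lets features (and gradients) bypass each layer, which is the mechanism the abstract credits for the faster convergence.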

 

Permalink: https://aes2.org/publications/elibrary-page/?id=22013


