AES E-Library

Enhancing Speaker Verification through Trainable Audio Features

This study presents a novel pipeline for speaker verification that exploits trainable audio features to enhance identification accuracy and efficiency. Unlike traditional systems that rely heavily on spectrograms and handcrafted features such as Mel-Frequency Cepstral Coefficients (MFCCs), our approach incorporates a learnable audio frontend encoder, a speaker embedding system, and a decoder for signal reconstruction. The pipeline processes raw audio inputs directly, dynamically adapting its filters to extract reliable speaker-specific features. By incorporating trainable features, the system not only captures a broader range of speaker information but also offers the flexibility to operate on either the direct output of the audio encoder or the signal reconstructed by the decoder. This capability allows for an in-depth analysis of the impact of direct feature extraction on speaker identification tasks. The proposed pipeline is evaluated on a frequently used speaker verification dataset. Our findings demonstrate significant improvements over the baseline feature extraction method, highlighting the potential of trainable features in redefining the landscape of speech processing systems.
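The abstract outlines a three-stage architecture: a trainable frontend that replaces handcrafted features, a speaker embedding network, and a decoder that reconstructs the signal. The following is a minimal PyTorch sketch of such a pipeline, written only to illustrate the data flow; the class names, layer choices (a 1-D convolution as the learnable filterbank, average pooling before the embedding projection), and all dimensions are illustrative assumptions, not the authors' exact architecture.

```python
# Illustrative sketch only: layer types and sizes are assumptions, not the paper's design.
import torch
import torch.nn as nn
import torch.nn.functional as F


class TrainableFrontend(nn.Module):
    """Learnable filterbank applied directly to the raw waveform (in place of MFCCs)."""

    def __init__(self, n_filters: int = 64, kernel_size: int = 251, stride: int = 160):
        super().__init__()
        # 1-D convolution whose kernels act as trainable analysis filters.
        self.filters = nn.Conv1d(1, n_filters, kernel_size, stride=stride,
                                 padding=kernel_size // 2)

    def forward(self, waveform: torch.Tensor) -> torch.Tensor:
        # waveform: (batch, 1, samples) -> features: (batch, n_filters, frames)
        return torch.relu(self.filters(waveform))


class SpeakerEmbedder(nn.Module):
    """Maps frontend features to a fixed-size, L2-normalized speaker embedding."""

    def __init__(self, n_filters: int = 64, embed_dim: int = 192):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool1d(1)          # simple temporal pooling
        self.proj = nn.Linear(n_filters, embed_dim)  # project to embedding space

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        pooled = self.pool(features).squeeze(-1)     # (batch, n_filters)
        return F.normalize(self.proj(pooled), dim=-1)


class Decoder(nn.Module):
    """Reconstructs a waveform from frontend features (the optional reconstruction path)."""

    def __init__(self, n_filters: int = 64, kernel_size: int = 251, stride: int = 160):
        super().__init__()
        self.deconv = nn.ConvTranspose1d(n_filters, 1, kernel_size, stride=stride,
                                         padding=kernel_size // 2)

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        return self.deconv(features)


# Usage: score a trial by cosine similarity of the two embeddings.
frontend, embedder, decoder = TrainableFrontend(), SpeakerEmbedder(), Decoder()
wav_a, wav_b = torch.randn(1, 1, 16000), torch.randn(1, 1, 16000)  # 1 s at 16 kHz
emb_a, emb_b = embedder(frontend(wav_a)), embedder(frontend(wav_b))
score = F.cosine_similarity(emb_a, emb_b).item()
reconstruction = decoder(frontend(wav_a))  # reconstructed-signal branch
```

In this sketch, speaker embeddings can be computed either from the encoder output directly or from a re-encoded reconstruction, mirroring the two operating modes the abstract describes.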

 

Permalink: https://aes2.org/publications/elibrary-page/?id=22591

