Multi-channel target speech enhancement using labeled random finite sets and deep learning under reverberant environments
Author
Datta, Jayanta
Dehghan Firoozabadi, Ali
Zabala-Blanco, David
Castillo Soria, Francisco Ruben
Adams, Martin
Perez, Claudio
Date
2023
Abstract
We proposed a multi-channel speech enhancement procedure for reverberant conditions based on acoustic source tracking and beamforming. Deep learning was applied both to improve the construction of the measurement set for labeled random finite set (RFS)-based target source localization and tracking, and to predict the time-frequency (T-F) mask used to enhance the target speech. During source localization, the steered response power phase transform (SRP-PHAT) was used to construct the measurement set for the labeled RFS-based source tracking framework. However, owing to noise and reverberation effects, the constructed measurement set suffered from impairments that degraded the performance of the source tracking algorithm. Accurate location estimates of the target source in motion are crucial to the subsequent speech enhancement framework. Owing to its de-noising capability, a deep learning algorithm was applied to compensate for these impairments and construct an improved measurement set. This enabled the source tracking framework to estimate the location of the target source with improved accuracy. Furthermore, a deep learning framework was applied in the speech enhancement stage to predict the T-F mask corresponding to the target source. T-F masking, originally used in computational auditory scene analysis (CASA), was treated as a speech enhancement method in which weights are assigned to the bins of the T-F representation of the received mixture signal to enhance the target speech it contains. By using the information from the source tracking sub-system, a deep learning framework was used to predict the T-F mask corresponding to the target source in the spectral domain. The inclusion of such a neural T-F mask prediction sub-system within the speech enhancement stage improved the target source separation of the time-varying beamformer. Computer simulation results showed that applying deep learning algorithms in both the source localization and the final speech enhancement stages dynamically localized the acoustic sources and constructed effective time-varying beamformers and T-F masks. Thus, the speech corresponding to the target source was enhanced under reverberant conditions with multiple interferers.
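To make the localization step concrete, below is a minimal sketch of how an SRP-PHAT map over a grid of candidate locations could be evaluated to produce a location measurement for the labeled RFS tracker. It assumes a frame-wise STFT input and free-field propagation; the function names, the single-peak measurement extraction, and all parameters are illustrative assumptions, not taken from the paper.

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s

def srp_phat_map(frame_stft, mic_positions, grid, freqs):
    """Frame-wise SRP-PHAT power over a grid of candidate source locations.

    frame_stft    : (M, K) complex STFT of one frame for M microphones, K bins
    mic_positions : (M, 3) microphone coordinates in metres
    grid          : (G, 3) candidate source locations in metres
    freqs         : (K,)  bin frequencies in Hz
    Returns (G,) SRP-PHAT power for each candidate location.
    """
    M = mic_positions.shape[0]
    power = np.zeros(grid.shape[0])
    for i in range(M):
        for j in range(i + 1, M):
            # PHAT-weighted cross-spectrum of the microphone pair
            cross = frame_stft[i] * np.conj(frame_stft[j])
            cross /= np.abs(cross) + 1e-12
            # TDOA of every grid point for this pair
            d_i = np.linalg.norm(grid - mic_positions[i], axis=1)
            d_j = np.linalg.norm(grid - mic_positions[j], axis=1)
            tau = (d_i - d_j) / SPEED_OF_SOUND                 # (G,)
            # Steer the cross-spectrum to each candidate location and accumulate
            steer = np.exp(2j * np.pi * np.outer(tau, freqs))  # (G, K)
            power += np.real(steer @ cross)
    return power

def measurement_from_frame(frame_stft, mic_positions, grid, freqs):
    """Peak of the SRP-PHAT map, used as one location measurement for the
    labeled-RFS tracker (a simplified, single-peak variant for illustration)."""
    power = srp_phat_map(frame_stft, mic_positions, grid, freqs)
    return grid[np.argmax(power)]
```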
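Similarly, the sketch below illustrates mask-driven enhancement: a network-predicted T-F mask weights the bins of the multi-channel mixture STFT through mask-weighted spatial covariance estimates feeding an MVDR-style beamformer. This is one common way of combining a predicted T-F mask with a time-varying beamformer, assumed here for illustration only; it is not necessarily the exact formulation used in the paper, and the mask itself is taken as given (e.g., produced by the neural mask-prediction sub-system).

```python
import numpy as np

def mask_based_mvdr(mixture_stft, mask, ref_mic=0):
    """Sketch of a mask-driven MVDR beamformer.

    mixture_stft : (M, T, K) multi-channel complex STFT of the mixture
    mask         : (T, K) target mask in [0, 1] predicted by the network
    ref_mic      : index of the reference microphone
    Returns (T, K) single-channel enhanced STFT of the target source.
    """
    M, T, K = mixture_stft.shape
    out = np.zeros((T, K), dtype=complex)
    eye = 1e-6 * np.eye(M)  # diagonal loading for numerical stability
    for k in range(K):
        X = mixture_stft[:, :, k]                        # (M, T)
        m = mask[:, k]                                   # (T,)
        # Mask-weighted spatial covariances of target and noise/interference
        phi_s = (X * m) @ X.conj().T / (m.sum() + 1e-12)
        phi_n = (X * (1 - m)) @ X.conj().T / ((1 - m).sum() + 1e-12)
        # MVDR weights via Phi_n^{-1} Phi_s, normalized by its trace
        num = np.linalg.solve(phi_n + eye, phi_s)        # (M, M)
        w = num[:, ref_mic] / (np.trace(num) + 1e-12)    # (M,)
        out[:, k] = w.conj() @ X                         # beamformed output
    return out
```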
Source
2023 IEEE 5th Eurasia Conference on IOT, Communication and Engineering (ECICE), 640-645
DOI
doi.org/10.1109/ECICE59523.2023.10382971