In the previous blog post, we looked at SpeechBrain: its features, pretrained models, and speech recognition in different languages with SpeechBrain.
Today, we are going to look at multi-speaker separation and recognition in detail.
What is Multi-Speaker Separation and Recognition?
Imagine listening to a recording in which several people are talking, but you want to hear only one particular speaker. Traditionally, this required high-end software, or working with sound engineers or audio professionals, to extract just the voice you wanted. The emergence of artificial intelligence has made this task easy: roughly 13 lines of code are enough to perform multi-speaker separation.
Let’s get into the code for a simple multi-speaker separation and recognition example.
I used SpeechBrain pretrained models, downloaded the audio files from the Azure GitHub repository, and mixed them into a single file using Audacity.
You can find my full code in Google Colab as well as here.
I have included only an image of the audio file; please open the Google Colab notebook to play it (I did not want to upload large files here).
I downloaded the files from the GitHub repository and mixed the two files into one WAV file using Audacity.
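If you would rather not use Audacity, the same mixing step can be sketched in a few lines with NumPy and SciPy. The file names in the comments are hypothetical; the helper simply trims both recordings to the same length, sums them, and rescales if the result clips:

```python
# Sketch: overlap two mono recordings into one mixture, as an
# alternative to mixing tracks manually in Audacity.
import numpy as np
from scipy.io import wavfile

def mix_signals(a, b):
    """Trim both signals to the shorter length, sum them, and
    rescale so the mixture stays within [-1, 1]."""
    n = min(len(a), len(b))
    mix = a[:n].astype(np.float32) + b[:n].astype(np.float32)
    peak = np.max(np.abs(mix))
    return mix / peak if peak > 1.0 else mix

# Usage with hypothetical file names (16-bit PCM assumed):
# sr1, s1 = wavfile.read("speaker1_clean.wav")
# sr2, s2 = wavfile.read("speaker2_clean.wav")
# wavfile.write("mixed.wav", sr1,
#               mix_signals(s1 / 32768.0, s2 / 32768.0))
```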
You can see the output file after the code below.
Resampling the audio from 16000 Hz to 8000 Hz
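This message appears because the SepFormer model was trained on 8 kHz audio, so SpeechBrain resamples the 16 kHz input before separating it. The same conversion can be sketched with SciPy's polyphase resampler (a stand-in here for what SpeechBrain does internally):

```python
# Sketch: downsample a 16 kHz signal to 8 kHz with a polyphase filter.
import numpy as np
from scipy.signal import resample_poly

sr_in, sr_out = 16000, 8000
t = np.arange(sr_in) / sr_in          # one second of audio at 16 kHz
tone = np.sin(2 * np.pi * 440 * t)    # a 440 Hz test tone

# Resample by the ratio 8000/16000 = 1/2.
tone_8k = resample_poly(tone, up=1, down=2)
print(len(tone), "->", len(tone_8k))  # 16000 -> 8000
```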
Again, I have included only an image of the audio file; please open the Google Colab notebook to play it.
Has anyone tried SpeechBrain?
Please share your experience in the comments below.
Further Reading
Posts on Artificial Intelligence, Deep Learning, Machine Learning, and Design Thinking:
Artificial Intelligence Chatbot Using Neural Network and Natural Language Processing