
Non-parallel Emotional Voice Conversion for
Unseen Speaker-emotion Pairs

By Nirmesh J. Shah, Senior Research Scientist at Sony Research India

29th March 2023

In this blog, Nirmesh J. Shah summarises the paper titled ‘Non-parallel Emotional Voice Conversion for Unseen Speaker-emotion Pairs’, co-authored with Dr. Naoya Takahashi of Sony Group Corporation, Japan, which proposes a method for converting the emotion of speakers for whom no emotional training data is available.

Collaborative Background: At Sony Research India, we are given opportunities to work and collaborate with experts across the global Sony Group of Companies. As experts in developing speech technologies for Indian languages, we took the opportunity to forge a close collaboration with Dr. Naoya Takahashi, one of the leading experts in the audio/speech domain at Sony Group Corporation, Japan, to develop an Emotional Voice Conversion system.

What is an Emotional Voice Conversion System and its applications?

An emotional voice conversion (EVC) system converts the emotional style of a given speech signal from one emotion to another, without modifying the linguistic content of the signal. EVC technology has potential applications in movie dubbing, conversational assistants, cross-lingual speech synthesis, and more.
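As a quick illustration of this contract, here is a minimal Python sketch: the emotional style is the only thing that changes, while the content (and its length) is preserved. Note that `EVCModel` and its `convert` method are hypothetical placeholders for illustration, not the paper’s actual API; a real system would re-synthesise the waveform, typically via mel-spectrograms and a vocoder.

```python
import numpy as np

class EVCModel:
    """Stand-in for a trained emotional voice conversion model (hypothetical)."""

    def convert(self, wav: np.ndarray, emotion: str) -> np.ndarray:
        # A real model would re-synthesise `wav` in the target emotional
        # style while keeping the linguistic content; this placeholder
        # simply returns the input unchanged.
        assert emotion in {"neutral", "happy", "sad", "angry"}
        return wav

model = EVCModel()
neutral_wav = np.zeros(16000, dtype=np.float32)  # 1 s of dummy audio @ 16 kHz
happy_wav = model.convert(neutral_wav, emotion="happy")
assert happy_wav.shape == neutral_wav.shape      # linguistic content preserved
```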

Why is it important to develop EVC for unseen Speaker-emotion pairs?

Most previous EVC approaches can convert the emotion of a speaker only when that speaker’s emotional data is available at training or testing time, i.e., for seen speaker-emotion combinations. However, collecting emotional speech from target speakers is often expensive, time-consuming, and sometimes impossible. In this paper, we address the problem of converting the emotion of speakers for whom we only possess neutral-emotion data, by leveraging emotional speech data from other supporting speakers.

How have we achieved EVC for unseen speaker-emotion pairs?

We first modify the StarGANv2-VC architecture to convert speaker and emotion styles simultaneously in a unified model, using two encoders to learn speaker-style and emotion-style embeddings, along with dual domain source classifiers that classify the source speaker and the source emotion style. We then devise training strategies to achieve EVC for unseen speaker-emotion pairs (EVC-USEP) by using emotional data from supporting speakers. Specifically, we propose a Virtual Domain Pairing (VDP) training strategy, which randomly generates speaker-emotion combinations that are not present in the real data, without compromising the min-max game between the discriminator and the generator in adversarial training. In addition, we propose a fake-pair masking (FPM) strategy to ensure that the discriminator does not overfit because of the fake pairs. We refer to our proposed system as EVC-USEP throughout the paper.

Figure 1: Block diagram of the proposed EVC-USEP architecture.
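To make the VDP and FPM ideas above concrete, here is a minimal PyTorch-style sketch. The pair-sampling scheme, the `seen_pairs` bookkeeping, and the exact discriminator loss form are illustrative assumptions on our part, not the implementation from the paper.

```python
import torch

def sample_virtual_pairs(num_speakers, num_emotions, batch_size):
    """VDP: randomly pair speaker and emotion codes, which can produce
    combinations that never co-occur in the real data (virtual domains)."""
    spk = torch.randint(num_speakers, (batch_size,))
    emo = torch.randint(num_emotions, (batch_size,))
    return spk, emo

def fake_pair_mask(spk, emo, seen_pairs):
    """FPM: 1.0 for speaker-emotion pairs observed in the real data,
    0.0 for virtual (fake) pairs."""
    mask = [float((s.item(), e.item()) in seen_pairs)
            for s, e in zip(spk, emo)]
    return torch.tensor(mask)

def masked_disc_fake_loss(disc_logits_fake, mask):
    """Discriminator loss on generated samples, with virtual-pair terms
    masked out so the discriminator is never penalised on domains for
    which no real data exists (preventing overfitting to fake pairs)."""
    fake_loss = torch.nn.functional.softplus(disc_logits_fake)
    return (fake_loss * mask).sum() / mask.sum().clamp(min=1.0)

# Toy usage: two speakers, two emotions, only three pairs seen in real data.
seen = {(0, 0), (1, 0), (1, 1)}
spk, emo = sample_virtual_pairs(num_speakers=2, num_emotions=2, batch_size=4)
mask = fake_pair_mask(spk, emo, seen)
fake_logits = torch.randn(4)  # stand-in for discriminator outputs
loss = masked_disc_fake_loss(fake_logits, mask)
```

The key design point is that the generator still receives gradients for virtual pairs (which is how unseen speaker-emotion conversion is learned), while the discriminator’s loss is simply not computed on domains it has never seen real samples from.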

Key Results and Findings

We presented our results on a Hindi emotional speech database. Demo audio samples are available at https://demosamplesites.github.io/EVCUP/.

We conducted two subjective tests, namely a mean opinion score (MOS) test and an ABX test, to evaluate the quality of the converted voices and the emotion conversion, respectively. For objective evaluation, we used an emotion classification network to measure the accuracy of emotion conversion, along with speaker similarity scores. Both objective and subjective evaluations confirm that the proposed method successfully converts the emotion of the target speakers, outperforming the baselines in emotion similarity, speaker similarity, and quality of the converted voices, while achieving decent naturalness.

Table 1: Subjective and objective evaluation results. MOS values for quality are shown along with the margin of error corresponding to a 95% confidence interval.
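As a side note on how a margin of error like the one in Table 1 can be obtained, here is a minimal sketch using a normal approximation at 95% confidence; the listener scores below are made-up placeholders, not values from the paper.

```python
import math
import statistics

def mos_with_margin(scores, z=1.96):
    """Return (mean opinion score, margin of error) for a list of 1-5
    listener ratings, using a normal approximation at 95% confidence."""
    mean = statistics.mean(scores)
    sem = statistics.stdev(scores) / math.sqrt(len(scores))  # standard error
    return mean, z * sem

scores = [4, 5, 3, 4, 4, 5, 3, 4]  # hypothetical listener ratings
mos, margin = mos_with_margin(scores)
print(f"MOS = {mos:.2f} ± {margin:.2f}")
```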

Figure 2: ABX Subjective Evaluation for Emotion Similarity.

To know more about Sony Research India’s research publications, visit the ‘Publications’ section on our ‘Open Innovation’ page:

Open Innovation with Sony R&D – Sony Research India
