
GAUDIO STUDIO Sound Separation Tips - A Sound Engineer's Guide 🐝

2024.03.26 by Bright Kwon

Hello, this is Bright, a sound engineer from Gaudio Lab!

 

These days, many fields are utilizing AI to increase productivity, and I'm sure you've come across AI tools at one point or another. But have you ever wondered how a sound engineer at an audio AI company utilizes AI?

 

I remember back when I was young, I had to struggle through Google searches to practice MR production and mixing. I recall the difficulty of separating MRs myself or downloading multi-tracks shared as learning materials. The process was cumbersome, and even the results I finished with such effort weren't very good quality. 😭

 

But now, with the era of AI, all that hardship has become a thing of the past! Especially with the commercialization of AI technologies for separating audio sources, many tasks in the audio industry have become much simpler. As a sound engineer, I think it's a great era where we can fully focus on creativity.

 

Today, I'd like to introduce various tips for GAUDIO STUDIO, one of the tools I use the most. It boasts top-notch performance among AI audio separation services, and if you follow along step by step, you too can become a top-notch sound engineer like me. 😎

 

 

 

🍯 Tip 1 - Create an MR

 

Step 1 - Separating the vocals

How do I remove vocals from the music in GAUDIO STUDIO?


This is one of the most common questions I get asked, and I believe many people use GAUDIO STUDIO primarily for MR production for events like karaoke, celebrations, and more.

 

 

[Image: vocal removal - only 'Vocals' and 'Other Instruments' selected]

 

 

[Image: instrument selection - all instruments selected]

 

 

In GAUDIO STUDIO, you can separate a song by selecting the instruments you want (vocals, drums, bass, electric guitar, piano, and other instruments). So if you separate out just the vocals, everything that remains is your MR, right?

 

The AI will take care of the rest, making it easy to create MRs with just a few simple clicks!
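If you'd rather assemble the MR yourself from individually downloaded stems, the mixing step is just a sum with a clip guard. Below is a minimal numpy sketch; the two stems are synthetic stand-in arrays, and in practice you'd load each downloaded WAV (for example with soundfile) instead:

```python
# Sketch: mix separated stems (all instruments except vocals) back into an MR.
# Each stem is assumed to be a float array in [-1, 1] at the same sample rate.
import numpy as np

def mix_stems(stems):
    """Sum the stems and normalize only if the mix would clip."""
    mix = np.sum(stems, axis=0)
    peak = np.max(np.abs(mix))
    if peak > 1.0:
        mix = mix / peak  # pull the whole mix down so the peak sits at 1.0
    return mix

# Toy example: two one-second 44.1 kHz "stems"
sr = 44100
t = np.linspace(0, 1, sr, endpoint=False)
drums = 0.6 * np.sin(2 * np.pi * 110 * t)
bass = 0.6 * np.sin(2 * np.pi * 55 * t)

mr = mix_stems([drums, bass])
```

Normalizing only on overflow keeps the original balance between stems intact whenever the sum already fits in range.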

 

 

Step 2 - Key up / down

How do I customize the key of my MR?

 

If you don't already have a music editing program, I recommend Audacity - it's free, has tons of hidden features, and I used it a lot during my student days.

 

Now that you're all set up, let's try it step by step!

 

 

[Image: Audacity file selection]

 

[Image: Audacity track selection]

 

First, click [File] → [Import] → [Audio] at the top to import the sound source, then double-click on the loaded file to select it entirely.

 

 

 

[Image: Audacity pitch shift]

 

[Image: Audacity pitch selection]

 

Then, go to [Effect] → [Pitch & Tempo] → [Change Pitch] to adjust the key and that's it!

You can also fine-tune it, so play around with it a few times to get it to the pitch you want.

 

If you've followed along this far, does something sound a little off? Or do you want to create a higher-quality MR that stands out from the rest?

 

There's one thing we often overlook: drums aren't pitched!
Because of this, if you change the key with the drum track included, the drum sounds get shifted too, which degrades the overall quality.

 

😎 Now, here's a trick: adjust the key of the other instruments without the drum track, then mix the unshifted drums back in. That odd dissonance should be gone!

 

 

Step 3 - Put it to use

So what more can I do with this? 

 

After separating the MR and adjusting the keys, you can create content like this.

 

 

Do you get the idea? You can create duets with singers who have different vocal keys!

 

If you further process the separated vocals with a Voice Conversion AI model, you can also create AI cover content, which is trending these days. Of course, the better the quality of the separated vocals, the better the trained results, which is why I've heard that many people rely on GAUDIO STUDIO for this. 👀

 

Aren't you curious about your favorite singer singing songs by other artists?

 😎 There are endless possibilities for using GAUDIO STUDIO like this.

 

 

 

 

🍯 Tip 2 - Adjusting a specific track in an already recorded song

 

This time, let me show you an example of how you can use GAUDIO STUDIO in situations you might encounter in your daily life.

 

 

Situation 1 - You've just finished recording a really great ensemble, but the drums are just too loud!

 

In such cases, separate just the drum track and lower its volume, and the other instruments will come through. Similarly, taming an excessively thumping beat in concert footage can bring out the artist's voice.

Even recordings that once seemed impossible to separate, or footage where adjusting a specific sound seemed out of reach, can now be mixed beautifully and uploaded!
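As a sketch of that volume rebalance, assuming each separated stem has been loaded as a float array (the arrays below are synthetic stand-ins for real stems):

```python
# Sketch: turn down an over-loud drum stem, then remix with the other stems.
import numpy as np

def remix_with_gain(stems, gains_db):
    """Apply a per-stem gain in dB, then sum the stems."""
    out = None
    for stem, db in zip(stems, gains_db):
        scaled = stem * (10.0 ** (db / 20.0))  # dB -> linear amplitude
        out = scaled if out is None else out + scaled
    return out

sr = 44100
t = np.linspace(0, 1, sr, endpoint=False)
drums = 0.9 * np.sin(2 * np.pi * 100 * t)    # too loud in the original mix
vocals = 0.3 * np.sin(2 * np.pi * 440 * t)

# Pull the drums down by 6 dB, leave the vocals untouched
mix = remix_with_gain([drums, vocals], [-6.0, 0.0])
```

Working in dB rather than raw multipliers keeps the adjustment intuitive: -6 dB is roughly half the amplitude, -20 dB exactly one tenth.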

 

 

 

 

Situation 2 - You filmed a vlog in a cafe and copyrighted music was recorded along with it!

 

If you've ever filmed an outdoor vlog for YouTube and music from a store was picked up in the recording, it could be detected as copyrighted content, which could limit your monetization. Until now, you've probably just turned down the volume or raised your voice, and if that didn't work, you might have ended up deleting all the audio and recording narration separately.

 

😎 Now, you don't need to do that anymore. Just separate your voice and cleanly remove the unwanted music.

 

 

With just GAUDIO STUDIO, you no longer have to suffer from unexpected copyright issues!

 

 

 

How did you find the endless applications of AI music separation I've introduced?

I'm often amazed at how tasks that were difficult or required tremendous effort in the past are now so easily accomplished.

 

Why not use the magic of GAUDIO STUDIO to create and enjoy your own unique content? 

GAUDIO STUDIO will continue to evolve until, in the not-too-distant future, all track stems will be neatly separated when you just insert a stereo file. 

 

We look forward to your continued interest and enjoyment!

 

 

 

 
