
Th3rdEyeXR Interview with Adam Levenson from VRX Boston 2018

2018.10.30 · by Gaudio Lab


Adam Levenson, Vice President of Business Development at G’Audio Lab, discusses the evolution of the XR industry and how audio can be adapted for XR experiences.


Th3rdEyeXR was established by Th3rd Coast Media Solutions’ co-founders Joseph Van Harken and Jeff Joanisse when they noticed a gap in industry coverage targeting the professional audience. It provides easy-to-understand expert analysis and news content on Virtual Reality (VR), Augmented Reality (AR), Mixed Reality (MR), and eXtended Reality (XR). Purpose-driven use cases and applications are geared specifically toward the corporate professional and business-oriented audience: REAL solutions and REAL applications for REAL-world problems using Virtual and Augmented Realities, applicable to a company or an industry today.

The Audio Voice: Hearables and Augmented Reality

By J. Martins, Editor-in-Chief

You might have noticed that in my previous The Audio Voice write-up about Headphone Technology and Markets I didn’t use the term “hearables,” since that would take the discussion to a whole other level. That was a deliberate choice, because I wanted to address the topic separately. On the consumer electronics side, we have in-ears and truly wireless stereo earbuds; on the other side of the equation, we have the audiology and medical market, with hearing aids now expanding into Personal Sound Amplification Products (PSAPs) and hearing enhancers that will soon be available over the counter. But hearables are (or should be) different. How different?

Curiously, the same day we sent out that edition of The Audio Voice with my editorial on the topic, we received a press release from an American company, ZVOX Audio. The company, which has so far worked in the field of sound intelligibility for TVs and soundbars, decided to be among the first to grab the OTC opportunity and design an affordable solution purposely built for voice enhancement, targeting the very specific market of older people. It is a curious example of the diversity of approaches this segment will see in the coming months.

The Jabra Elite 65t can be considered state-of-the-art true wireless stereo earbuds striving to become hearables, benefiting from the extensive experience in wireless audio and hearing aids of the GN Group, owner of Jabra and ReSound, among other brands.

The term “hearables” has so far been associated with the evolution of consumer-oriented in-ears toward something that could be health-related, not necessarily medical. As consultant Nick Hunn wrote in his report “The Market for Hearable Devices 2016-2020,” published in November 2016, “The real hearables revolution began in 2014 when two European companies launched crowdfunding campaigns for earbuds.
In Sweden, Earin acquired funding for a pair of Bluetooth earbuds that would stream audio. Approximately 1,500 km farther south, in Munich, another startup, Bragi, raised the unprecedented sum of $3.39 million for a far more ambitious hearable device, the Dash.” As Hunn explains and audioXpress reported at the time, the Dash could stream music like Earin, but it “could also store and play music without the presence of a phone, as well as housing a host of biometric sensors, which would feed back data to a range of fitness applications.”

Things have evolved from there, and many industry analysts have noted that many of the subsequent TWS crowdfunded projects were based on the assumption that they were “the first to have the idea,” when in fact multiple companies, including some of the largest audiology and consumer electronics manufacturers, had been exploring the field and held important patents on key aspects of technology implementation and design. Not surprisingly, after Bragi successfully launched its Dash Pro “hearables,” the company decided to collaborate with the leading US audiology company Starkey, launching The Dash Pro tailored by Starkey Hearing Technologies. Bragi itself holds an important patent portfolio in this area, and the company makes no assumption that there will be confusion between the two markets, as Bragi’s founder and CEO, Nikolaj Hviid, stated in the interview I did with him at CES 2018. In that interview, Nikolaj even expressed the wish that medically oriented hearing aid companies would enter the “hearing enhancement” space, proudly stating, “Hearing aids are not a luxury item, because it is a tool for those people. They need it and it’s very important to them. But there’s this huge unsatisfied group of people that have an issue that they need to deal with. I even see many people that would buy our product, will eventually go and buy a hearing aid.
Because they will realize that what we can do for them is OK, but that they actually need more. What we do will also help making hearing aids more accessible.”

That’s why it’s important to address the market for hearables from its own intrinsic perspective and key selling arguments, outside of the “hearing enhancement” applications. As Hunn clarified in the aforementioned 2016 report, “Previously I defined a hearable as any device that included wireless connectivity, as the differentiating factor between wired and wireless headphones. That included wireless stereo headphones and mono Bluetooth earpieces, but excluded most hearing aids which had no wireless connection to a phone. In just two years, the picture has become far more complex. When I coined the word ‘hearables’ at the beginning of 2014, the wireless headphone market was still niche, and no one had considered sound isolation, audio curation, or translation as real consumer opportunities. All of those are now in development or already shipping. So now I’m considering anything that fits in or on an ear that contains a wireless link, whether that’s for audio, or remote control of audio augmentation.” And he states, “…the real innovation in hearables will come from other earbud developers, not least because of their willingness to add biometrics. The intimate, relatively isolated contact that earbuds provide, along with the stabilizing effect on balance from the semi-circular canals in our ears, means that the ear is one of the best locations for sensing many physiological parameters.
Whilst some of the biometrics will not be applicable to headphones, some will be, and we will see them incorporated in new headphone designs.”

“The applications being considered are more diverse than what we’ve seen so far with other wearables,” he adds, mentioning specifically “the rise of voice communications for Internet of Voice (IoV) applications” with voice detection and processing, while detailing the enormous potential in audio curation and augmented hearing, together with hearing protection and isolation, as the main drivers for market development and expansion of hearables.

As always, Apple gets the fundamentals right first. With the AirPods, Apple focused on what people needed from wireless earbuds and made it work. They didn’t overpromise, kept it simple, and delivered, and the AirPods get consistently great reviews from users. No wonder Apple commands the segment, owning 83% of this market.

Augmented hearing was precisely the topic of a presentation I attended at the IBC 2018 show by Gaudio Lab’s VP of Business Development, Adam Levenson, titled “Augmented Reality Audio: The Next Generation of Hearables.” In this session, Levenson presented a forward-looking glimpse into the future of Augmented Reality (AR) audio.

As a starting point, Levenson asked the audience a simple question: what are hearables? He defines them as products at the crossroads of headphone and hearing aid technologies and Augmented Reality (AR).
In his presentation, he referenced Poppy Crum, Chief Scientist at Dolby, who said hearables sit at the “convergence between entertainment, lifestyle, and hearing health,” and David Cannington, co-founder of Nuheara, who said a hearable is a device to “control the elements of your physical environment,” to “orchestrate your soundscape,” and to provide an “on-the-fly personalized hearing experience.”

Levenson went on to detail that, apart from the basic functions of music streaming and input-level adjustment from connected devices, hearables should support audio enhancement features, with personalization based on a hearing test, noise cancellation, and noise reduction. More important, hearables should have “smart capabilities” and support speech amplification, listening directivity, translation, voice assistants, and biometric tracking – all things we’ve seen in multiple products so far.

In the session, Levenson also said, “Imagine having the ability to adjust the mix of your daily listening experience with a technology that reduces ambient noise, amplifies and focuses speech, adjusts EQ to match your unique hearing profile, and interactively layers music with your voice assistant. This is AR Audio, and the latest hearables on the market are already tapping into this potential. But this is just the beginning. The future of AR Audio is Artificial Intelligence (AI), advanced DSP, and procedural sound.

“Machine learning will teach AR Audio systems to recognize the sound of a dog barking, a jackhammer, a sports bar crowd, and many more common sound events. With a constantly expanding database of recognized sounds, AI will power adaptive and precise noise cancellation. An understanding of language and accent will enable enhanced speech intelligibility. Intelligent DSP systems will smooth variations in volume levels, and situational awareness will allow us to zoom in and focus on specific sound sources.
In the near future, procedurally generated sound effects will attach to virtual objects and respond to physics.”

Levenson’s company, Gaudio Lab, is developing and licensing technology in this domain, and it recently announced an interesting related SDK targeting loudness management for streaming services, from over-the-top video to music streaming platforms. The company has a strong R&D background in the development of spatial audio technologies, and its founders co-invented and developed the MPEG-H 3D Audio Binaural Renderer that is now part of an ISO/IEC international standard.

Since I was unaware of the company and its work, I was intrigued by the topics addressed in the IBC session, and I met Levenson after his talk for a brief interview, where he shared his conviction that “the software solution for hearables is extremely complex.”

Levenson believes that hearables hold a lot of potential for augmented reality (AR), and his company is looking at offering unique solutions from an engineering perspective. “I think the whole area of AR in audio is inevitable. How it’s going to look, how this is going to become a consumer product, is still very much unknown.”

“It’s a difficult challenge and it requires a lot of knowledge. Like active noise cancellation. Consumers are now familiar with it. We use it and we love it. That tech is out there, but unless you are licensing from Bose, you are going to have to create it, and it is complex. And to make it work on a hearable device, with a very specific chipset, is not an easy challenge. You need DSP knowledge, spatial audio knowledge, machine learning knowledge; it’s a really wide range of skills and knowledge that you have to have in order to address the software problem for hearables.”

“When you look at startups, companies like Nuheara and Bragi, they are struggling with noise cancellation. There’s latency, and as a result you get phasing.
But what these guys are doing is amazing. They still have not conquered the latency issue, and even Bose is having problems with their hearable product. And battery life… And the form factors. No one is there yet. Even companies like Starkey, which are doing amazing work in hearing aids, are trying to put this amazingly complex software solution together. Because all of us are missing this piece or that piece. But someone is going to do it!”

I fully agree with Levenson when he states that hearables should evolve toward the “Augmented Reality Audio” approach, powered by machine learning (ML) algorithms, digital signal processing (DSP), and binaural rendering. In his opinion, this will enable new features like situational responsiveness, focused listening, selective hearing, and the positioning of virtual objects, and it should happen automatically in response to “environment classification” rather than through manual adjustment by the user.

I believe we will see multiple companies exploring this hearables potential, both on the consumer electronics side and among the audiology leaders. As Bragi has shown with its Dash Pro, and Starkey Hearing Technologies has announced with its Livio AI platform, this will be a multipurpose and completely new augmented concept.

The result of a multi-year engineering effort combining artificial intelligence (AI) with advancements in sensors and hearing technology, Starkey’s new Livio AI platform pairs brain and body health tracking with advanced environmental detection to create what the company calls “Hearing Reality technology.”

When Starkey’s President Brandon Sawalich says, “we are not in the hearing aids business, we are in the communication business,” that is a clear sign of where things are heading. The Livio AI platform exploits all the embedded processing power and AI capabilities, and goes as far as pitching new language translation possibilities.
Starkey calls the concept “healthables,” and the company talks about a new “hearing reality.”

In the end, I just hope these companies prepare themselves to deal with the “reality” of not knowing exactly how consumers will react to all the exciting new possibilities. If only they could use AI for that…
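As a thought experiment, the environment-classification loop Levenson describes (recognize a sound event, then adapt the noise cancellation) can be sketched in a few lines. Everything below — the event classes, the band-energy feature, the centroids, and the ANC presets — is a made-up illustration of the idea, not Gaudio Lab’s, Bragi’s, or anyone else’s actual algorithm:

```python
import numpy as np

# Hypothetical environment classes and the ANC preset each one triggers.
ANC_PRESETS = {
    "speech":     {"anc_strength": 0.3, "speech_boost_db": 6.0},
    "jackhammer": {"anc_strength": 1.0, "speech_boost_db": 0.0},
    "crowd":      {"anc_strength": 0.7, "speech_boost_db": 3.0},
}

# Toy "trained" centroids over 4 frequency bands: low-heavy = jackhammer,
# mid-heavy = speech, flat = diffuse crowd noise.
CENTROIDS = {
    "jackhammer": np.array([0.70, 0.20, 0.07, 0.03]),
    "speech":     np.array([0.20, 0.50, 0.20, 0.10]),
    "crowd":      np.array([0.25, 0.25, 0.25, 0.25]),
}

def band_energies(signal, n_bands=4):
    """Crude feature: normalized energy in n_bands equal-width bands."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    e = np.array([b.sum() for b in np.array_split(spectrum, n_bands)])
    return e / (e.sum() + 1e-12)

def classify_environment(signal, sr=16000):
    """Nearest-centroid stand-in for a trained sound-event classifier."""
    feat = band_energies(signal)
    return min(CENTROIDS, key=lambda k: np.linalg.norm(feat - CENTROIDS[k]))

def anc_settings_for(signal, sr=16000):
    """Map the recognized environment to its ANC preset."""
    return ANC_PRESETS[classify_environment(signal, sr)]

# Example: a 100 Hz rumble lands in the low-frequency class.
sr = 16000
t = np.arange(sr) / sr
rumble = np.sin(2 * np.pi * 100 * t)
print(classify_environment(rumble, sr))  # -> jackhammer
```

A production system would replace the nearest-centroid step with a trained model running on the device’s DSP and would update the presets continuously rather than once per clip.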

2018.10.25
Gaudio Spatial Upmix Enables Immersive Sound on LG VELVET, LG’s Latest Smartphone

June 22, 2020 | Source: LG Newsroom

Berkeley, CA — LG Electronics’ newest smartphone, the LG VELVET, began launching in European markets in June. LG VELVET is part of a new branding strategy for LG Mobile, which adopts more expressive names that better fit each smartphone. This stylish smartphone comes with several unique highlights, such as a distinctive design with a water-droplet-inspired camera, front-back symmetrical curves, and a 6.8-inch, 20.5:9 aspect-ratio OLED Cinematic FullVision display that optimizes the viewing experience. With a strong emphasis on immersive audio, VELVET integrates stereo speakers and the LG 3D Sound Engine, the latter capable of analyzing the type of content being played and optimizing the audio output accordingly. Here, Gaudio Lab’s Spatial Upmix enables the LG 3D Sound Engine to deliver a fully immersive experience over headphones or earphones. Spatial Upmix on VELVET adds dimension and pushes sound in every direction, creating an immersive audio scene from legacy stereophonic content such as music, movies, and games.

What is Spatial Upmix, and why is it different from legacy 3D audio?
Existing 3D audio technologies distort the original audio signal and suffer from sound quality issues. Just as too much editing or over-filtering harms the original picture, the same happens when producing 3D audio: matrix-mixing the L/R signals and adding excessive reverb to create a 3D effect introduce distortions such as spectral coloration and phase distortion, which is why audiophiles avoid such processing. Spatial Upmix, by contrast, gives your ears a fully immersive experience, as if you were in the scene, without manipulating the audio signal.
Unlike legacy 3D audio technologies, it first extracts each sound object from the stereo mix and then renders (spatializes) it into a 3D sound scene on the basis of the binaural rendering technology adopted as part of the next-generation audio standard ISO/IEC 23008-3 (MPEG-H).

How does Spatial Upmix benefit LG VELVET users?
“Immersive sensory experience wherever sound goes,” said Ben Chon, Ph.D., CSO of Gaudio Lab. “When watching movies and dramas, you would feel as if you are in the same event with the characters as the story unfolds. And when listening to music, you would feel the music come alive and surround you rather than static and flat in one spot.”
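The extract-then-respatialize idea can be illustrated with a deliberately naive sketch: a mid/side split stands in for the object extraction described above, and a constant-power pan law plus a small interaural time difference stands in for true HRTF-based binaural rendering (which is what MPEG-H actually specifies). All function names and parameter values are illustrative assumptions, not Gaudio Lab’s implementation:

```python
import numpy as np

def mid_side(left, right):
    """Crude stand-in for object extraction: split a stereo mix into a
    center (mid) component and an ambience (side) component."""
    return 0.5 * (left + right), 0.5 * (left - right)

def binaural_pan(mono, pan, sr=48000, max_itd_s=0.0007):
    """Place a mono source with a constant-power pan law (interaural
    level difference) plus a sample delay on the far ear (interaural
    time difference). pan runs from -1 (hard left) to +1 (hard right)."""
    theta = (pan + 1) * np.pi / 4
    gain_l, gain_r = np.cos(theta), np.sin(theta)
    itd = int(round(abs(pan) * max_itd_s * sr))   # far-ear delay, samples
    left, right = gain_l * mono, gain_r * mono
    if pan > 0:      # source on the right: the left ear hears it later
        left = np.concatenate([np.zeros(itd), left])[:len(mono)]
    elif pan < 0:    # source on the left: the right ear hears it later
        right = np.concatenate([np.zeros(itd), right])[:len(mono)]
    return left, right

def spatial_upmix(left, right, sr=48000):
    """Toy upmix: keep the center image in front, spread ambience wide."""
    mid, side = mid_side(left, right)
    c_l, c_r = binaural_pan(mid, 0.0, sr)     # dialog/vocals stay centered
    a_l, a_r = binaural_pan(side, -0.9, sr)   # ambience pushed left...
    b_l, b_r = binaural_pan(-side, 0.9, sr)   # ...and mirrored right
    return c_l + a_l + b_l, c_r + a_r + b_r
```

Real HRTF convolution adds the spectral cues (pinna filtering) that this pan-plus-delay model cannot reproduce, which is why a standardized binaural renderer is the harder part of the problem.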

2020.06.22