Meta (Facebook)
Research Scientist - Acoustic and Multi-Modal Scene Understanding
At Meta’s Reality Labs Research, our goal is to make world-class consumer virtual, augmented, and mixed reality experiences. Come work alongside industry-leading scientists and engineers to create the technology that makes VR, AR and smart wearables pervasive and universal. Join the adventure of a lifetime as we make science fiction real and change the world. We are a world-class team of researchers and engineers creating the future of augmented and virtual reality, which together will become as universal and essential as smartphones and personal computers are today. And just as personal computers have done over the past 45 years, AR, VR and smart wearables will ultimately change everything about how we work, play, and connect.
We are developing all the technologies needed to enable breakthrough Smartglasses, AR glasses and VR headsets, including optics and displays, computer vision, audio, graphics, brain-computer interfaces, haptic interaction, eye/hand/face/body tracking, perception science, and true telepresence. Some of those will advance much faster than others, but they all need to happen to enable AR and VR that are so compelling that they become an integral part of our lives.
The Audio team within RL Research is looking for an experienced Research Scientist with an in-depth understanding of real-time, efficient signal processing and machine learning on audio and multi-modal signals to join our team. Working alongside a team of dedicated researchers, developers, and engineers, you will conduct core and applied research on technologies that use wearable computing to improve listeners' hearing abilities under challenging listening conditions. You will operate at the intersection of egocentric perception, acoustics, computer vision, and signal processing algorithms, with hardware and software co-design.