The SONIC Lab develops multiple types of hearing devices, including new hearing aid technologies, central auditory prostheses, and novel neuromodulation approaches for tinnitus treatment. All of these efforts require a detailed understanding of how the brain processes and perceives inputs delivered by different devices and forms of energy transmission (i.e., sound, electrical, ultrasound, and laser inputs). Furthermore, for artificial inputs intended to restore natural sound perception, such as ultrasound/laser hearing aid or central auditory implant stimulation, it remains unclear what types of stimulation patterns are needed to mimic brain activity well enough for useful everyday hearing and, ultimately, near-normal hearing. These stimulation patterns must be effective in humans; however, detailed investigation of how different stimuli affect brain coding can only be performed in animals. The SONIC Lab addresses this translational gap in two stages. In the first stage, we perform animal experiments to demonstrate that these artificial inputs can transmit sufficient information to elicit auditory brain patterns (e.g., in the auditory cortex) that mimic those evoked by complex natural sounds, such as conspecific vocalizations (guinea pig calls, for example, share spectrotemporal features with human speech). We are developing encoding-decoding neural models that leverage statistical point process methods and deep learning approaches within a closed-loop paradigm (e.g., driven by neural recordings from the auditory cortex) to identify these optimal stimulation patterns, whether for ultrasound stimulation of the peripheral auditory system or electrical stimulation of the auditory nerve and brain.
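The closed-loop idea above can be sketched in simplified form: record a target brain response to a natural sound, then iteratively adjust the artificial stimulation pattern so that the evoked response converges toward that target. The sketch below is a minimal illustration only; the forward model, similarity score, and simple hill-climbing search are stand-ins for the lab's actual point-process and deep-learning models and for real cortical recordings.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical target: a cortical response pattern evoked by a natural
# vocalization (a fixed random vector stands in for recorded activity).
target_response = rng.normal(size=16)

def evoked_response(stim_params):
    """Stand-in forward model mapping stimulation parameters to a noisy
    simulated cortical response. In the real paradigm, this is the animal's
    brain observed through multi-channel auditory cortex recordings."""
    return np.tanh(stim_params) + 0.05 * rng.normal(size=stim_params.size)

def similarity(response):
    """Score how closely an evoked pattern matches the natural-sound
    target (negative mean squared error; higher is better)."""
    return -np.mean((response - target_response) ** 2)

# Closed loop: perturb the stimulation pattern, keep changes that move
# the evoked response toward the target pattern.
stim = np.zeros(16)
init_score = similarity(evoked_response(stim))
best = init_score
for _ in range(300):
    candidate = stim + 0.1 * rng.normal(size=16)
    score = similarity(evoked_response(candidate))
    if score > best:
        stim, best = candidate, score

print(f"similarity improved from {init_score:.3f} to {best:.3f}")
```

In practice, the search over stimulation patterns would use far more expressive encoding-decoding models than this random perturbation loop, but the stimulate-record-score-update cycle is the same.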
Once we identify stimulation patterns that are effective in animals, we will have greater confidence that such patterns also exist for humans; indeed, the human brain may adapt to and learn new artificial inputs better than the animal brain. We will also perform experiments in animal models with larger heads and brains to more closely mimic the effects expected in humans (e.g., the mechanical vibrations elicited by ultrasound stimulation of the fluids in a larger head and cochlea, or the electrical activation fields in a larger auditory nerve and brain). Because it is not yet clear whether the same parameters will translate to humans, in the second stage we will explore different stimulation patterns directly in humans, using the animal findings as a starting point. We can then implement a closed-loop algorithm driven by the subjects' perceptual responses and EEG activity to identify effective stimulation patterns that achieve useful and, ideally, near-normal hearing.
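A human closed loop driven by perceptual responses is often built on adaptive psychophysical procedures. As a simplified illustration (not the lab's stated method), the sketch below uses a standard 2-down-1-up staircase to home in on a listener's detection threshold for a stimulation level; the simulated listener and all numeric values are hypothetical stand-ins for a real subject's button-press reports.

```python
import random

random.seed(1)

TRUE_THRESHOLD = 40.0  # hypothetical level (a.u.) the simulated listener can just detect

def subject_detects(level):
    """Stand-in for a perceptual report: detection probability rises
    smoothly around the true threshold (logistic psychometric function).
    In a real experiment this is the subject's yes/no response."""
    p = 1.0 / (1.0 + 10 ** ((TRUE_THRESHOLD - level) / 5.0))
    return random.random() < p

# 2-down-1-up adaptive staircase: lower the level after two consecutive
# detections, raise it after any miss; converges near the ~71% point
# of the psychometric function.
level, step, streak = 60.0, 4.0, 0
history = []
for _ in range(80):
    if subject_detects(level):
        streak += 1
        if streak == 2:
            level -= step
            streak = 0
    else:
        level += step
        streak = 0
    history.append(level)

estimate = sum(history[-20:]) / 20  # average of final levels as the threshold estimate
print(f"estimated threshold: {estimate:.1f}")
```

In the envisioned second stage, such perceptual tracking would be combined with EEG-derived measures so the loop can adapt stimulation parameters beyond a single detection threshold.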
In addition to brain recordings, we can apply closed-loop techniques to real-time recordings of basilar membrane vibrations in the cochlea while presenting ultrasound or laser stimulation to the peripheral ear. We are developing a phase-differential optical coherence tomography (OCT) imaging modality that leverages the high spatial resolution and sensitivity of OCT to measure the micro-motion of the cochlear membranes.
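In phase-sensitive OCT vibrometry, the nanometer-scale axial motion of a reflecting membrane is recovered from the interferometric phase change between successive A-scans via d = λ·Δφ/(4πn). The sketch below illustrates this standard relation on a simulated 1 kHz, 10 nm membrane vibration; the source wavelength, refractive index, and A-scan rate are assumed values for illustration, not the lab's actual system specifications.

```python
import numpy as np

# Assumed parameters for illustration only:
WAVELENGTH = 1300e-9   # center wavelength of the OCT source (m)
N_MEDIUM = 1.35        # approximate refractive index of cochlear fluid

def phase_to_displacement(delta_phase):
    """Convert an interferometric phase change (rad) between successive
    A-scans to axial displacement of the reflector (m), using the
    standard phase-sensitive OCT relation d = lambda * dphi / (4*pi*n)."""
    return WAVELENGTH * delta_phase / (4 * np.pi * N_MEDIUM)

# Simulate a membrane vibrating sinusoidally at 1 kHz with 10 nm amplitude,
# then recover the amplitude from the resulting phase signal.
fs = 100_000                       # assumed A-scan rate (Hz)
t = np.arange(0, 0.01, 1 / fs)     # 10 ms of samples
displacement = 10e-9 * np.sin(2 * np.pi * 1000 * t)
phase = 4 * np.pi * N_MEDIUM * displacement / WAVELENGTH  # forward relation

recovered = phase_to_displacement(phase)
amplitude_nm = (recovered.max() - recovered.min()) / 2 * 1e9
print(f"recovered vibration amplitude: {amplitude_nm:.2f} nm")
```

The sub-nanometer displacement sensitivity implied by this phase relation is what makes phase-differential OCT suitable for sensing cochlear micro-motion in real time.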
Funding: NSF IGERT and NRT Graduate Fellowship Programs, NIH T32 Graduate Fellowship Program, Institute for Translational Neuroscience, Discretionary Lab Funds.
Collaborators: Eric Plourde, PhD (Electrical Engineering, Université de Sherbrooke); Zhi Yang, PhD (Biomedical Engineering, University of Minnesota); Catherine Qi Zhao, PhD (Computer Science & Engineering, University of Minnesota); Taner Akkin, PhD (Biomedical Engineering, University of Minnesota).