
Hybrid Neural Autoencoders for Stimulus Encoding in Visual and Other Sensory Neuroprostheses.

Authors

Granley Jacob, Relic Lucas, Beyeler Michael

Affiliations

Department of Computer Science, University of California, Santa Barbara.

Department of Computer Science, University of California, Santa Barbara; Department of Psychological & Brain Sciences, University of California, Santa Barbara.

Publication

Adv Neural Inf Process Syst. 2022 Dec;35:22671-22685.

Abstract

Sensory neuroprostheses are emerging as a promising technology to restore lost sensory function or augment human capabilities. However, sensations elicited by current devices often appear artificial and distorted. Although current models can predict the neural or perceptual response to an electrical stimulus, an optimal stimulation strategy solves the inverse problem: what is the required stimulus to produce a desired response? Here, we frame this as an end-to-end optimization problem, where a deep neural network stimulus encoder is trained to invert a known and fixed forward model that approximates the underlying biological system. As a proof of concept, we demonstrate the effectiveness of this hybrid neural autoencoder (HNA) in visual neuroprostheses. We find that HNA produces high-fidelity patient-specific stimuli representing handwritten digits and segmented images of everyday objects, and significantly outperforms conventional encoding strategies across all simulated patients. Overall, this is an important step towards the long-standing challenge of restoring high-quality vision to people living with incurable blindness and may prove a promising solution for a variety of neuroprosthetic technologies.
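The core idea in the abstract — train a neural stimulus encoder end-to-end through a fixed, differentiable forward model of the biological system — can be sketched as follows. This is a minimal PyTorch illustration under toy assumptions: `StimulusEncoder`, `ForwardModel`, and all layer sizes are hypothetical stand-ins, not the authors' architecture (the paper uses a psychophysically validated phosphene model as its forward model).

```python
# Hybrid neural autoencoder sketch: a trainable encoder maps a target
# percept to an electrode stimulus; a frozen forward model maps the
# stimulus back to a predicted percept. Only the encoder is optimized.
import torch
import torch.nn as nn

class StimulusEncoder(nn.Module):
    """Trainable deep encoder: target percept -> electrode stimulus."""
    def __init__(self, n_pixels=64, n_electrodes=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_pixels, 32), nn.ReLU(),
            nn.Linear(32, n_electrodes), nn.Sigmoid())  # amplitudes in (0, 1)

    def forward(self, target):
        return self.net(target)

class ForwardModel(nn.Module):
    """Known, fixed approximation of the biological system (frozen)."""
    def __init__(self, n_electrodes=16, n_pixels=64):
        super().__init__()
        self.proj = nn.Linear(n_electrodes, n_pixels)
        self.proj.weight.data.mul_(4.0)  # toy choice: widen output range
        for p in self.parameters():
            p.requires_grad = False  # forward model stays fixed

    def forward(self, stimulus):
        return torch.sigmoid(self.proj(stimulus))

torch.manual_seed(0)
encoder, phi = StimulusEncoder(), ForwardModel()
opt = torch.optim.Adam(encoder.parameters(), lr=1e-2)

# Surrogate low-rank targets standing in for e.g. handwritten digits.
basis = torch.rand(8, 64)
targets = torch.rand(128, 8) @ basis / 8.0

losses = []
for _ in range(200):
    opt.zero_grad()
    percept = phi(encoder(targets))  # encode, then simulate perception
    loss = nn.functional.mse_loss(percept, targets)
    loss.backward()  # gradients flow *through* the frozen forward model
    opt.step()
    losses.append(loss.item())

print(f"reconstruction loss: {losses[0]:.4f} -> {losses[-1]:.4f}")
```

The key design point is that freezing the forward model's parameters still lets gradients propagate through it to the encoder, so the encoder learns to produce whatever stimulus makes the *simulated* percept match the target.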


Similar Articles

2. Human-in-the-Loop Optimization for Deep Stimulus Encoding in Visual Prostheses. Adv Neural Inf Process Syst. 2023 Dec;36:79376-79398.
3. Towards a Smart Bionic Eye: AI-powered artificial vision for the treatment of incurable blindness. J Neural Eng. 2022 Dec 7;19(6). doi: 10.1088/1741-2552/aca69d.
4. Decoding electroencephalographic responses to visual stimuli compatible with electrical stimulation. APL Bioeng. 2024 Jun 12;8(2):026123. doi: 10.1063/5.0195680. eCollection 2024 Jun.
5. Optimization of Neuroprosthetic Vision via End-to-End Deep Reinforcement Learning. Int J Neural Syst. 2022 Nov;32(11):2250052. doi: 10.1142/S0129065722500526. Epub 2022 Nov 4.
6. Deep Residual Autoencoders for Expectation Maximization-Inspired Dictionary Learning. IEEE Trans Neural Netw Learn Syst. 2021 Jun;32(6):2415-2429. doi: 10.1109/TNNLS.2020.3005348. Epub 2021 Jun 2.
7. Reconstruction of natural visual scenes from neural spikes with deep neural networks. Neural Netw. 2020 May;125:19-30. doi: 10.1016/j.neunet.2020.01.033. Epub 2020 Feb 8.
8. End-to-end optimization of prosthetic vision. J Vis. 2022 Feb 1;22(2):20. doi: 10.1167/jov.22.2.20.
9. Peripheral neurostimulation for encoding artificial somatosensations. Eur J Neurosci. 2022 Nov;56(10):5888-5901. doi: 10.1111/ejn.15822. Epub 2022 Sep 25.
10. Lower-Limb Amputees Adjust Quiet Stance in Response to Manipulations of Plantar Sensation. Front Neurosci. 2021 Feb 18;15:611926. doi: 10.3389/fnins.2021.611926. eCollection 2021.

Cited By

1. Aligning Visual Prosthetic Development With Implantee Needs. Transl Vis Sci Technol. 2024 Nov 4;13(11):28. doi: 10.1167/tvst.13.11.28.
3. Human-in-the-Loop Optimization for Deep Stimulus Encoding in Visual Prostheses. Adv Neural Inf Process Syst. 2023 Dec;36:79376-79398.
4. Aligning visual prosthetic development with implantee needs. medRxiv. 2024 Oct 28:2024.03.12.24304186. doi: 10.1101/2024.03.12.24304186.
5. Axonal stimulation affects the linear summation of single-point perception in three Argus II users. J Neural Eng. 2024 Apr 8;21(2):026031. doi: 10.1088/1741-2552/ad31c4.
7. An actor-model framework for visual sensory encoding. Nat Commun. 2024 Jan 27;15(1):808. doi: 10.1038/s41467-024-45105-5.
8. Neuromorphic hardware for somatosensory neuroprostheses. Nat Commun. 2024 Jan 16;15(1):556. doi: 10.1038/s41467-024-44723-3.
9. Axonal stimulation affects the linear summation of single-point perception in three Argus II users. medRxiv. 2023 Dec 26:2023.07.21.23292908. doi: 10.1101/2023.07.21.23292908.

References

1. End-to-end optimization of prosthetic vision. J Vis. 2022 Feb 1;22(2):20. doi: 10.1167/jov.22.2.20.
2. Human-in-the-loop optimization of visual prosthetic stimulation. J Neural Eng. 2022 Jun 20;19(3). doi: 10.1088/1741-2552/ac7615.
3. A Computational Model of Phosphene Appearance for Epiretinal Prostheses. Annu Int Conf IEEE Eng Med Biol Soc. 2021 Nov;2021:4477-4481. doi: 10.1109/EMBC46164.2021.9629663.
6. What do blind people "see" with retinal prostheses? Observations and qualitative reports of epiretinal implant users. PLoS One. 2021 Feb 10;16(2):e0229189. doi: 10.1371/journal.pone.0229189. eCollection 2021.
7. Testing Vision Is Not Testing For Vision. Transl Vis Sci Technol. 2020 Dec 18;9(13):32. doi: 10.1167/tvst.9.13.32. eCollection 2020 Dec.
8. Stimulation Strategies for Improving the Resolution of Retinal Prostheses. Front Neurosci. 2020 Mar 26;14:262. doi: 10.3389/fnins.2020.00262. eCollection 2020.
9. Development of visual neuroprostheses: trends and challenges. Bioelectron Med. 2018 Aug 13;4:12. doi: 10.1186/s42234-018-0013-8. eCollection 2018.
10. An update on retinal prostheses. Clin Neurophysiol. 2020 Jun;131(6):1383-1398. doi: 10.1016/j.clinph.2019.11.029. Epub 2019 Dec 10.
