A team of scientists at Baylor College of Medicine in the United States has designed a brain implant that lets blind people perceive the shapes of letters, simulated directly in the brain.
The device works by bypassing the eyes entirely, transmitting visual information from a camera straight to electrodes implanted in the brain. The researchers say it is a first step toward a "visual prosthesis" that could, in the distant future, allow blind people to regain full sight.
Notably, with the help of complex sequences of electrical pulses delivered to their brains, the participants in the study were easily able to see the outlines of letters.
Daniel Yoshor, one of the researchers, says: "When we used electrical stimulation to trace letters directly onto patients' brains, they could easily see the letter shapes, and they could pick out the intended letters from among others without error."
He adds: "Interestingly, they described what they saw through the device as glowing dots or lines forming letters, something like skywriting."
Skywriting is the practice of writing words in the sky using smoke released from a fluid carried on board an aircraft.
Blind people often learn letters by having someone trace their shapes on the palm of the hand, and this practice inspired the researchers to build the device. The researchers hope such a tool will make a real difference in the lives of blind and low-vision people, leaving them less dependent on the help of others when getting around.
In this study, electrodes were implanted in the visual cortex of four low-vision and two blind participants, who were able to trace shapes on their visual cortex without any extra effort. In these experiments, the participants were able to identify as many as 86 letter forms per minute.
Brain Implant Bypasses Eyes To Help Blind People See
Early humans were hunters, and their vision systems evolved to support the chase. When gazing out at an unchanging landscape, their brains didn’t get excited by the incoming information. But if a gazelle leapt from the grass, their visual cortices lit up.
That neural emphasis on movement may be the key to restoring sight to blind people. Daniel Yoshor, the new chair of neurosurgery at the University of Pennsylvania’s Perelman School of Medicine, is taking cues from human evolution as he devises ways to use a brain implant to stimulate the visual cortex. “We’re capitalizing on that inherent bias the brain has in perception,” he tells IEEE Spectrum.
He recently described his experiments with “dynamic current steering” at the Bioelectronic Medicine Summit, and also published the research in the journal Cell in May. By tracing shapes with electricity onto the brain’s surface, his team is producing a relatively clear and functional kind of bionic vision.
Yoshor is involved in an early feasibility study of the Orion implant, developed by Los Angeles-based Second Sight, a company that has been at the forefront of technology workarounds for people with vision problems.
In 2013, the U.S. Food and Drug Administration approved Second Sight’s retinal implant system, the Argus II, which uses an eyeglass-mounted video camera that sends information to an electrode array in the eye’s retina. Users have reported seeing light and dark, often enough to navigate on a street or find the brightness of a face turned toward them. But it’s far from normal vision, and in May 2019 the company announced that it would suspend production of the Argus II to focus on its next product.
The company has had a hard time over the past year: At the end of March it announced that it was winding down operations, citing the impact of COVID-19 on its ability to secure financing. But in subsequent months it announced a new business strategy, an initial public offering of stock, and finally in September the resumption of clinical trials for its Orion implant.
The Orion system uses the same type of eyeglass-mounted video camera, but it sends information to an electrode array atop the brain’s visual cortex. In theory, it could help many more people than a retinal implant: The Argus II was approved only for people with an eye disease called retinitis pigmentosa, in which the photoreceptor cells in the retina are damaged but the rest of the visual system remains intact and able to convey signals to the brain. The Orion system, by sending info straight to the brain, could help people with more widespread damage to the eye or optic nerve.
Six patients have received the Orion implant thus far, and each now has an array of 60 electrodes that tries to represent the image transmitted by the camera. But imagine a digital image made up of 60 pixels—you can’t get much resolution.
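To make the resolution limit concrete, here is a small illustrative sketch (not the Orion's actual signal path — the implant's electrodes are not a literal pixel grid): it average-pools a finer bitmap of the letter "E" down to a 10×6 grid, i.e. 60 "pixels", showing how coarse an image 60 points of information can carry.

```python
def letter_E(rows=20, cols=12):
    """Build a simple binary bitmap of the letter 'E' (illustrative)."""
    bitmap = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            # left stem, top bar, bottom bar, middle bar
            if c < 3 or r < 3 or r >= rows - 3 or abs(r - rows // 2) < 2:
                bitmap[r][c] = 1
    return bitmap

def downsample(bitmap, out_rows, out_cols):
    """Average-pool a binary bitmap down to out_rows x out_cols,
    thresholding each block at 50 percent coverage."""
    rows, cols = len(bitmap), len(bitmap[0])
    rstep, cstep = rows // out_rows, cols // out_cols
    out = []
    for r in range(out_rows):
        row = []
        for c in range(out_cols):
            block = [bitmap[r * rstep + i][c * cstep + j]
                     for i in range(rstep) for j in range(cstep)]
            row.append(1 if 2 * sum(block) >= len(block) else 0)
        out.append(row)
    return out

# Render the 60-"pixel" version: still recognizable, but only barely.
for row in downsample(letter_E(), 10, 6):
    print("".join("#" if v else "." for v in row))
```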
Yoshor says his work on dynamic current steering began with “the fact that getting info into the brain with static stimulation just didn’t work that well.” He says that one possibility is that more electrodes would solve the problem, and wonders aloud about what he could do with hundreds of thousands of electrodes in the brain, or even 1 million. “We’re dying to try that, when our engineering catches up with our experimental imagination,” he says.
Until that kind of hardware is available, Yoshor is focusing on the software that directs the electrodes to send electrical pulses to the neurons. His team has conducted experiments with two blind Second Sight volunteers as well as with sighted people (epilepsy patients who have temporary electrodes in their brains to map their seizures).
One way to understand dynamic current steering, Yoshor says, is to think of a trick that doctors commonly use to test perception—they trace letter shapes on a patient’s palm. “If you just press a ‘Z’ shape into the hand, it’s very hard to detect what that is,” he says. “But if you draw it, the brain can detect it instantaneously.” Yoshor’s technology does something similar, grounded in well-known information about how a person’s visual field maps to specific areas of their brain. Researchers have constructed this retinotopic map by stimulating specific spots of the visual cortex and asking people where they see a bright spot of light, called a phosphene.
The static form of stimulation that disappointed Yoshor essentially tries to create an image from phosphenes. But, says Yoshor, “when we do that kind of stimulation, it’s hard for patients to combine phosphenes to a visual form. Our brains just don’t work that way, at least with the crude forms of stimulation that we’re currently able to employ.” He believes that phosphenes cannot be used like pixels in a digital image.
With dynamic current steering, the electrodes stimulate the brain in sequence to trace a shape in the visual field. Yoshor’s early experiments have used letters as a proof of concept: Both blind and sighted people were able to recognize such letters as M, N, U, and W. The system has the added advantage of being able to stimulate points in between the sparse electrodes, he adds. By gradually shifting the amount of current going to each (imagine electrode A first getting 100 percent while electrode B gets zero percent, then shifting to ratios of 80:20, 50:50, 20:80, 0:100), the system activates neurons in the gaps. “We can program that sequence of stimulation, it’s very easy,” he says. “It goes zipping across the brain.”
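The ratio-shifting idea above can be sketched in a few lines. This is a minimal illustration of the principle, not Second Sight's control software: the retinotopic coordinates and the model of the perceived phosphene as the current-weighted average of two electrode positions are simplifying assumptions for the sake of the example.

```python
# Hypothetical retinotopic map: electrode id -> (x, y) in the visual field.
RETINOTOPY = {"A": (0.0, 0.0), "B": (1.0, 0.0)}

def steering_schedule(ratios=(1.0, 0.8, 0.5, 0.2, 0.0)):
    """Yield (current_A, current_B, phosphene_xy) for each step,
    gradually shifting current from electrode A to electrode B.
    The perceived phosphene is modeled (simplistically) as the
    current-weighted average of the two electrodes' positions."""
    ax, ay = RETINOTOPY["A"]
    bx, by = RETINOTOPY["B"]
    for w in ratios:  # w = fraction of total current on electrode A
        x = w * ax + (1 - w) * bx
        y = w * ay + (1 - w) * by
        yield (w, 1 - w, (x, y))

for cur_a, cur_b, pos in steering_schedule():
    print(f"A={cur_a:.0%}  B={cur_b:.0%}  ->  phosphene near {pos}")
```

As the 100:0 ratio steps down to 0:100, the modeled phosphene sweeps through the gap between the two electrodes, which is the intuition behind activating neurons the sparse array cannot reach directly.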
Second Sight didn’t respond to requests for comment for this article, so it’s unclear whether the company is interested in integrating Yoshor’s stimulation technique into its technology.
But Second Sight isn’t the only entity working on a cortical vision prosthetic. One active project is at Monash University in Australia, where a team has been preparing for clinical trials of its Gennaris bionic vision system.
Arthur Lowery, director of the Monash Vision Group and a professor of electrical and computer systems engineering, says that Yoshor’s research seems promising. “The ultimate goal is for the brain to perceive as much information as possible. The use of sequential stimulation to convey different information with the same electrodes is very interesting, for this reason,” he tells IEEE Spectrum in an email. “Of course, it raises other questions about how many electrodes should be simultaneously activated when presenting, say, moving images.”
Yoshor thinks the system will eventually be able to handle complex moving shapes with the aid of today’s advances in computer vision and AI, particularly if there are more electrodes in the brain to represent the images. He imagines a microprocessor that converts whatever image the person encounters in daily life into a pattern of dynamic stimulation.
Perhaps, he speculates, the system could even have different settings for different situations. “There could be a navigation mode that helps people avoid obstacles when they’re walking; another mode for conversation where the prosthetic would rapidly trace the contours of the face,” he says. That’s a far-off goal, but Yoshor says he sees it clearly.