
On April 2, researchers in California unveiled an AI-driven system that enables people with paralysis to speak in near real time using a naturalistic voice. The advance in brain-computer interface (BCI) research was developed by teams at the University of California, Berkeley, and the University of California, San Francisco.

The system pairs neural interfaces that measure brain activity with AI algorithms that reconstruct speech. Unlike earlier models, it generates speech almost instantaneously, achieving an unprecedented degree of fluency and naturalness for a neuroprosthesis. As Gopala Anumanchipalli, a principal investigator on the project, put it, “Our real-time method represents significant progress.”

The approach works with a range of brain-sensing interfaces, including high-density electrode arrays, microelectrodes, and non-invasive sensors that measure muscle activity. It samples neural data from the motor cortex, the region that controls speech production, and the AI decodes those signals into audible speech in under a second.
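The low latency comes from decoding in small windows and emitting audio incrementally rather than waiting for a full sentence. The sketch below illustrates that streaming idea only; the chunk size, channel count, and the stub decoder are illustrative assumptions, not the published system, which uses trained neural networks on real recordings.

```python
import numpy as np

# Hypothetical sketch of a streaming neural-speech decoding loop.
# Window length, channel count, and sample rate are assumptions.
CHUNK_MS = 80          # decode in short windows to keep latency low
N_CHANNELS = 253       # e.g., a high-density electrode grid (assumed)
SAMPLE_RATE = 16_000   # output audio sample rate (assumed)

def decode_chunk(features: np.ndarray) -> np.ndarray:
    """Stand-in decoder: maps one window of neural features to audio.

    A real system would run a trained model here; this stub simply
    returns silence of the correct duration to show the data flow.
    """
    n_audio = int(SAMPLE_RATE * CHUNK_MS / 1000)
    return np.zeros(n_audio, dtype=np.float32)

def stream_speech(neural_stream):
    """Consume neural feature windows and yield audio chunk by chunk,
    so sound is produced while the user is still forming speech."""
    for features in neural_stream:
        yield decode_chunk(features)

# Simulated recording: thirteen 80 ms windows of neural features.
fake_stream = (np.random.randn(N_CHANNELS) for _ in range(13))
audio = np.concatenate(list(stream_speech(fake_stream)))
```

Because each window is decoded independently as it arrives, the first audio is available after one window (tens of milliseconds) instead of after the whole utterance, which is the key difference from earlier sentence-at-a-time decoders.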

The advance could greatly improve quality of life for people with conditions such as ALS or severe paralysis, giving them a more intuitive way to communicate. Though still in development, the technology has the potential to transform communication for people with speech impairments.

