• supersquirrel@sopuli.xyz
    10 hours ago

    The tech, developed by researchers at the University of California, Davis (UC Davis), was trialed with a study participant who suffers from amyotrophic lateral sclerosis (ALS). It captured raw neural signals through four microelectrode arrays surgically implanted into the region of the brain responsible for physically producing speech. Combined with low-latency processing and an AI-driven decoding model, the participant’s speech was synthesized in real time through a speaker.
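    For anyone curious what that pipeline looks like in code, here’s a toy sketch of the windowed, real-time decode loop described above. Everything here (channel counts, window size, the linear decoder, function names) is an illustrative assumption, not the actual UC Davis implementation:

    ```python
    import numpy as np

    # Illustrative constants -- NOT the real system's parameters.
    N_ARRAYS = 4              # four microelectrode arrays, per the article
    CHANNELS_PER_ARRAY = 64   # assumed channel count
    SAMPLES_PER_WINDOW = 80   # ~80 ms windows at 1 kHz keeps latency low

    def extract_features(window: np.ndarray) -> np.ndarray:
        """Reduce a raw neural window to one activity value per channel
        (mean absolute amplitude, as a crude stand-in for real features)."""
        return np.abs(window).mean(axis=-1)

    def decode_symbol(features: np.ndarray, weights: np.ndarray) -> int:
        """Toy linear decoder: map channel features to one speech-sound id."""
        return int(np.argmax(weights @ features))

    def stream_decode(signal: np.ndarray, weights: np.ndarray) -> list[int]:
        """Process the signal window by window, emitting a decoded symbol
        as soon as each window closes -- the 'real time' part. A real
        system would hand each symbol to a speech synthesizer immediately."""
        out = []
        for start in range(0, signal.shape[-1] - SAMPLES_PER_WINDOW + 1,
                           SAMPLES_PER_WINDOW):
            window = signal[..., start:start + SAMPLES_PER_WINDOW]
            out.append(decode_symbol(extract_features(window), weights))
        return out

    # Demo on random "neural" data: ~1.6 s of signal, 8 toy sound classes.
    rng = np.random.default_rng(0)
    n_channels = N_ARRAYS * CHANNELS_PER_ARRAY
    sig = rng.standard_normal((n_channels, 1600))
    w = rng.standard_normal((8, n_channels))
    ids = stream_decode(sig, w)
    print(len(ids))  # one decoded symbol per 80-sample window
    ```

    The point of the sketch is the structure: decoding happens per short window as the signal streams in, rather than waiting for a whole utterance, which is what makes the synthesized voice feel conversational.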

    wait, a use of AI that isn’t creepy or disgusting!

    A unicorn has been found!

    To be clear, this means the system isn’t trying to read the participant’s thoughts, but rather translating the brain signals produced when he tries to use his muscles to speak.

    The system also sounds like the participant, thanks to a voice-cloning algorithm trained on audio samples captured before he developed ALS.

    Honestly seems like the perfect use of an LLM-style AI that functions as a collage mimic. Concatenation, baby!