


When a letter was cued by the red crosshair, the participant looked at the cue and immediately attempted to flex the corresponding finger of her right (contralateral) hand. We included a null condition ‘X’, during which the participant looked at the target but did not move her fingers. Visual feedback indicated the decoded finger 1.5 s after cue presentation. To randomize the gaze location, cues were located on a grid (three rows, four columns) in a pseudorandom order. The red crosshair was jittered to minimize visual occlusion. (b) Confusion matrix showing robust BCI finger control (86% overall accuracy, 4016 trials aggregated over 10 sessions). Each entry (i, j) in the matrix corresponds to the ratio of movement i trials that were classified as movement j. (c–f) Mean firing rates for four example neurons, color-coded by attempted finger movement and smoothed with a Gaussian smoothing kernel (50 ms standard deviation). Shaded areas indicate 95% confidence intervals (across trials of one session).

We used functional magnetic resonance imaging (fMRI) to identify cortical regions involved in imagined reaching and grasping actions. The participant performed two complementary tasks to ensure activation was robust across paradigms. In the first task, following an intertrial interval, the subject was cued with a specific imagined movement (precision grasp, power grasp, or reach without hand shaping). Following the cue, a cylindrical object was displayed. If the object was intact, the subject imagined performing the cued movement; if the object was broken, the subject withheld movement. In the second task, eight 30 s blocks were presented per run. During the first 15 s of each block, common objects were presented every 3 s in varying spatial locations. Figure 1—figure supplement 1 and legend text have been reproduced from Figure S1 of Aflalo et al., 2020. The original image and legend text are published under the terms of the Creative Commons Attribution-NonCommercial 4.0 International license (CC BY-NC 4.0).
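The confusion-matrix convention described in the legend — entry (i, j) is the fraction of trials with cued movement i that were decoded as movement j — can be sketched as follows. This is a minimal illustration with toy labels, not the paper's analysis code; the function name and the three-class example are assumptions for demonstration.

```python
import numpy as np

def row_normalized_confusion(cued, decoded, n_classes):
    """Entry (i, j) is the ratio of trials cued as movement i
    that were classified (decoded) as movement j."""
    counts = np.zeros((n_classes, n_classes))
    for i, j in zip(cued, decoded):
        counts[i, j] += 1
    row_sums = counts.sum(axis=1, keepdims=True)
    # Avoid division by zero for movement classes with no trials.
    return counts / np.maximum(row_sums, 1)

# Toy example: 3 movement classes, 6 trials (not real data).
cued = np.array([0, 0, 1, 1, 2, 2])
decoded = np.array([0, 0, 1, 2, 2, 2])

cm = row_normalized_confusion(cued, decoded, 3)
accuracy = np.mean(cued == decoded)  # overall accuracy = diagonal mass
```

Because each row is normalized by the number of trials of that cued movement, each row sums to 1, and the overall accuracy reported in panel (b) corresponds to the trial-weighted diagonal of this matrix.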

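The Gaussian smoothing of firing rates mentioned in the legend (50 ms standard deviation) amounts to convolving a binned spike train with a unit-area Gaussian kernel. The sketch below shows one common way to do this; the bin width, function name, and toy spike train are assumptions for illustration, not taken from the paper.

```python
import numpy as np

def smooth_firing_rate(spike_counts, bin_ms=10.0, sigma_ms=50.0):
    """Smooth a binned spike train with a Gaussian kernel and
    return a firing-rate estimate in Hz."""
    sigma_bins = sigma_ms / bin_ms
    half = int(np.ceil(4 * sigma_bins))  # truncate kernel at 4 sigma
    t = np.arange(-half, half + 1)
    kernel = np.exp(-0.5 * (t / sigma_bins) ** 2)
    kernel /= kernel.sum()  # unit area: total spike count is preserved
    rate_hz = spike_counts * (1000.0 / bin_ms)  # counts/bin -> Hz
    return np.convolve(rate_hz, kernel, mode="same")

# Toy spike train: one spike in a 1 s window, 10 ms bins (not real data).
counts = np.zeros(100)
counts[50] = 1.0
rate = smooth_firing_rate(counts)
```

Normalizing the kernel to unit area means smoothing spreads each spike out in time without changing the integrated rate, so the smoothed trace still integrates to the total spike count.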