Context-dependent learning interferes with visuomotor transformations for manipulation planning

Qiushi Fu, Marco Santello

Research output: Contribution to journal › Article › peer-review



How the CNS transforms visual information about object properties into motor commands for manipulation is not well understood. We designed a novel apparatus and protocols in which human subjects had to learn manipulations in two different contexts. The first task involved manipulating a U-shaped object that afforded two actions depending on which part of the object was grasped. The second task involved manipulating two L-shaped objects presented at different orientations. In both experiments, subjects learned the manipulation over consecutive trials in one context before switching to a different context. For both objects and tasks, visual geometric cues were effective in eliciting anticipatory control, with little error at the beginning of learning in the first context. However, subjects failed to use visual information to the same extent when switching to the second context: sensorimotor memory built through eight consecutive repetitions in the first context strongly interfered with subjects' ability to use visual cues again once the context changed. A follow-up experiment, in which subjects were exposed to a pseudorandom sequence of context switches with the U-shaped object, revealed that the interference caused by the preceding context persisted even when subjects switched context after only one trial. Our results suggest that learning generalization of dexterous manipulation is fundamentally limited by context-specific learning of motor actions and by competition between vision-based motor planning and sensorimotor memory.

Original language: English (US)
Pages (from-to): 15086-15092
Number of pages: 7
Journal: Journal of Neuroscience
Issue number: 43
State: Published - Oct 24 2012

ASJC Scopus subject areas

  • Neuroscience (all)

