Feedback-dependent generalization
Generalization provides a window into the representational changes that occur during motor learning. Neural network models have been instrumental in revealing how the neural representation constrains the extent of generalization. Specifically, two key features are thought to define the pattern of generalization. First, generalization is constrained by the properties of the underlying neural units; with directionally tuned units, the extent of generalization is limited by the width of the tuning functions. Second, error signals are used to update a sensorimotor map to align the desired and actual output, with a gradient-descent learning rule ensuring that the error produces changes in those units responsible for the error. In prior studies, task-specific effects in generalization have been attributed to differences in neural tuning functions. Here we ask whether differences in generalization functions may instead arise from task-specific error signals. We systematically varied visual error information in a visuomotor adaptation task and found that this manipulation led to qualitative differences in generalization. A neural network model suggests that these differences result from error feedback processing operating on a homogeneous and invariant set of tuning functions. Consistent with novel predictions derived from the model, increasing the number of training directions led to specific distortions of the generalization function. Taken together, the behavioral and modeling results offer a parsimonious account of generalization based on the use of feedback information to update a sensorimotor map with stable tuning functions.
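To make the class of model described here concrete (directionally tuned units, a linear readout serving as the sensorimotor map, and a gradient-descent update driven by the visual error), the sketch below adapts to a rotation at a single training direction and then probes generalization at neighboring directions. This is a minimal illustration, not the study's implementation; the unit count, tuning width, rotation size, and learning rate are assumed values chosen only for demonstration.

```python
import numpy as np

# Minimal sketch of an error-driven generalization model.
# All parameter values below are illustrative assumptions, not the paper's.

n_units = 60                                       # directionally tuned units
preferred = np.linspace(0, 2 * np.pi, n_units, endpoint=False)
sigma = np.deg2rad(30.0)                           # tuning width (assumed)

def activity(direction):
    """Gaussian tuning on the circle: each unit's response to a movement direction."""
    d = np.angle(np.exp(1j * (direction - preferred)))   # wrapped angular difference
    return np.exp(-0.5 * (d / sigma) ** 2)

w = np.zeros(n_units)                              # readout weights (the sensorimotor map)
rotation = np.deg2rad(30.0)                        # imposed visuomotor rotation (assumed)
lr = 0.05                                          # learning rate (assumed)
train_dirs = np.deg2rad([0.0])                     # single training direction

# Gradient-descent update: the visual error changes only the weights of units
# that were active for the movement that produced the error (credit assignment).
for _ in range(200):
    for t in train_dirs:
        a = activity(t)
        error = rotation - w @ a                   # residual error after current compensation
        w += lr * error * a

# Generalization: compensation expressed at untrained probe directions.
probes = np.deg2rad(np.arange(-180, 181, 15))
generalization = np.array([w @ activity(p) for p in probes])
print(np.round(np.rad2deg(generalization), 1))
```

With a single training direction, the learned compensation peaks at the trained direction and falls off with the assumed tuning width; adding entries to train_dirs illustrates how training at multiple directions can reshape (and distort) the generalization function, in the spirit of the prediction tested here.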