It is tempting to infer the behavior of individual neurons from the behavior of individual voxels in an fMRI experiment. For instance, voxel tuning functions (VTFs) measure the magnitude of the BOLD response across a range of stimulus features (e.g., orientation), producing results that resemble individual neural tuning functions (NTFs) from single-cell recordings: like a simple cell in V1, a voxel will prefer a particular orientation. However, a voxel likely reflects a mixture of different kinds of neurons with different preferred orientations. Forward encoding models (e.g., Brouwer and Heeger, 2009, 2011) take a general linear model (GLM) approach to this problem, specifying the contribution of different neural sub-populations (e.g., neurons preferring different orientations) to each voxel. However, these models cannot identify changes in the shape of the neural tuning function because they assume a fixed NTF shape; for instance, they could not reveal whether the NTF sharpens with perceptual learning. To address this limitation, we developed a hierarchical Bayesian model that infers not only the relative proportions of neural sub-populations contributing to a voxel, but also the shape of the NTF and how that shape changes across conditions. To test the validity of this approach, we collected fMRI data while subjects viewed oriented gratings at low and high contrast, and we compared three alternative forms of NTF modulation by stimulus contrast: additive shift, multiplicative gain, and bandwidth sharpening. To the naked eye, the VTFs appeared to show an additive shift from low to high contrast. However, the hierarchical Bayesian model indicated that this shift was caused by multiplicative gain in the underlying NTFs, in line with single-cell recordings. Beyond orientation, this approach could determine the form of neural modulation in many fMRI experiments that test multiple points along a well-established dimension of variation (e.g., speed of motion, angle of motion, isoluminant hue).
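To make the inference problem concrete, the following is a minimal sketch of the generative structure described above, using illustrative notation (a von Mises NTF with concentration $\kappa$, voxel weights $w_{vk}$, and contrast-specific parameters $b_c$, $g_c$, $\kappa_c$) rather than the authors' own. A voxel's tuning function is modeled as a weighted mixture of NTFs centered on $K$ preferred orientations:

$$
\mathrm{VTF}_v(\theta) \;=\; \sum_{k=1}^{K} w_{vk}\, f(\theta;\theta_k) + \varepsilon_v,
\qquad
f(\theta;\theta_k) \;=\; \exp\!\big(\kappa \cos 2(\theta - \theta_k)\big),
$$

where the factor of 2 reflects the 180° periodicity of orientation and $\varepsilon_v$ is measurement noise. Standard forward encoding fixes $f$ and estimates only the weights $w_{vk}$; the hierarchical Bayesian model additionally infers $\kappa$. Under these assumptions, the three candidate forms of modulation by contrast condition $c$ can be written as

$$
f_c(\theta) = f(\theta) + b_c \ \ \text{(additive shift)},
\qquad
f_c(\theta) = g_c\, f(\theta) \ \ \text{(multiplicative gain)},
\qquad
f_c(\theta) = \exp\!\big(\kappa_c \cos 2(\theta - \theta_k)\big) \ \ \text{(bandwidth sharpening)}.
$$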