Forward encoding model

Functional magnetic resonance imaging records brain activity in spatially distinct voxels, but this segmentation is misaligned with the brain’s functionally meaningful boundaries. Some voxels record activity from different types of tissue – both neural and non-neural – and even voxels that exclusively sample gray matter can span functionally distinct cortex. For example, a 3T scanner typically yields voxels with edge lengths in the range of 1.5-3 mm, whereas orientation columns have an average width of 0.8 mm (Yacoub, Harel, and Uğurbil 2008). Studying orientation columns at such coarse resolution requires statistical tools.

One statistical tool models each voxel’s activity as a linear combination of the activity of a small number of neural channels (Brouwer and Heeger 2009; Kay et al. 2008). These models are called forward models because they describe how channel activity is transformed into voxel activity. In early sensory cortex, the channels are analogous to cortical columns; in later cortex, they are more abstract dimensions of a representational space. Developing a forward model requires assuming not only how many channels contribute to a voxel’s activity, but also the tuning properties of those channels. With these assumptions, regression allows inferring the contribution of each channel to each voxel’s activity. Let \(N\) be the number of observations for each voxel, \(M\) be the number of voxels, and \(K\) be the number of channels within a voxel. The forward model specifies that the data (\(B\), \(M \times N\)) result from a weighted combination of the assumed channel responses (\(C\), \(K \times N\)), where the weights (\(W\), \(M \times K\)) are unknown.

\[ B = WC \]
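To make the dimensions concrete, the following is a minimal simulation of the forward model in Python with NumPy. The specific values of \(N\), \(M\), and \(K\), the random weights, and the placeholder channel responses are all illustrative assumptions, not values implied by the model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions, chosen only for illustration.
N, M, K = 240, 50, 8   # observations, voxels, channels

# Assumed channel responses C (K x N).  A random placeholder stands in for
# responses generated by an assumed tuning function (a concrete choice is
# sketched below).
C = rng.random((K, N))

# Unknown voxel-by-channel weights W (M x K), simulated here.
W = rng.random((M, K))

# Forward model: voxel data are a weighted combination of channel
# responses, plus noise to mimic measurement error.
B = W @ C + 0.1 * rng.standard_normal((M, N))
```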

Multiplying the data by the pseudoinverse of the channel matrix gives a least-squares estimate of the weight matrix:

\[ \widehat{W} = BC^T(CC^T)^{-1} \]
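Continuing the simulation above, this estimate can be computed as a direct translation of the formula or, equivalently, with a least-squares solver; the explicit inverse is shown only to mirror the equation.

```python
# Direct translation of the estimator: multiply the data by the
# pseudoinverse of the assumed channel-response matrix.
W_hat = B @ C.T @ np.linalg.inv(C @ C.T)

# Equivalent least-squares estimate, avoiding the explicit inverse.
W_hat = np.linalg.lstsq(C.T, B.T, rcond=None)[0].T
```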

Assumptions about \(C\) are assumptions about how the channels encode stimuli. Different encoding schemes can be instantiated with different \(C\), and any method for comparing linear models could be used to compare the schemes.
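As one illustration of such an assumption, the sketch below builds \(C\) from rectified-cosine tuning functions raised to a power, a basis in the spirit of those used for orientation and color channels (e.g., Brouwer and Heeger 2009). The number of channels and the exponent are arbitrary choices made for the example, not recommendations.

```python
import numpy as np

def channel_responses(orientations_deg, n_channels=8, exponent=6):
    """Assumed channel responses (n_channels x n_observations).

    Each channel is a rectified cosine raised to a power, centered on an
    evenly spaced preferred orientation.  The number of channels and the
    exponent are illustrative assumptions.
    """
    centers = np.linspace(0.0, 180.0, n_channels, endpoint=False)
    # Orientation has a 180-degree period, so wrap differences into
    # [-90, 90) before evaluating the tuning function.
    diff = (orientations_deg[None, :] - centers[:, None] + 90.0) % 180.0 - 90.0
    return np.clip(np.cos(np.deg2rad(diff)), 0.0, None) ** exponent

# One orientation per observation.  Substituting this matrix for the random
# placeholder in the earlier sketch (and regenerating B) would instantiate
# an orientation-encoding scheme.
orientations = np.random.default_rng(0).uniform(0.0, 180.0, size=240)
C_orientation = channel_responses(orientations)
```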

The forward encoding model enables comparison of static encoding schemes, but neural encoding schemes are dynamic. Attentional fluctuations, perceptual learning, and stimulation history all modulate neural tuning functions (McAdams and Maunsell 1999; Reynolds, Pasternak, and Desimone 2000; Siegel, Buschman, and Miller 2015; Yang and Maunsell 2004). To explore such modulations with functional magnetic resonance imaging, some researchers have inverted the encoding model (Garcia, Srinivasan, and Serences 2013; Rahmati, Saber, and Curtis 2018; Saproo and Serences 2014; Scolari, Byers, and Serences 2012; Sprague and Serences 2013; Vo, Sprague, and Serences 2017). The inversion follows a cross-validation procedure. The weight matrix is estimated with only some of the data (e.g., all data excluding a single run). The held-out data, \(B_H\), contains observations from all experimental conditions across which the tuning functions might vary. The encoding model is then inverted by multiplying the pseudoinverse of the estimated weight matrix with the held-out data to estimate a new channel response matrix.

\[ \widehat{C} = (\widehat{W}^T\widehat{W})^{-1}\widehat{W}^T B_H \]
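Continuing the simulation, here is a minimal sketch of that procedure: hold out a subset of observations, estimate the weights from the remainder, and recover channel responses for the held-out data. The random 80/20 split is an assumption made for brevity; in practice the split typically follows scanner runs.

```python
rng = np.random.default_rng(1)

# Illustrative split of observations into training and held-out sets.
train = rng.random(N) < 0.8
B_train, C_train = B[:, train], C[:, train]
B_hold = B[:, ~train]

# Estimate the weights from the training data only.
W_hat = B_train @ C_train.T @ np.linalg.inv(C_train @ C_train.T)

# Invert the encoding model: multiply the held-out data by the
# pseudoinverse of the estimated weight matrix.
C_hat = np.linalg.inv(W_hat.T @ W_hat) @ W_hat.T @ B_hold

# Equivalent least-squares form.
C_hat = np.linalg.lstsq(W_hat, B_hold, rcond=None)[0]
```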

The new channel response matrix estimates how the channels respond in each experimental condition.

Although validation studies demonstrated that the inverted encoding model enables inferences that recapitulate some modulations observed with electrophysiology (Sprague et al. 2018; Sprague, Saproo, and Serences 2015), the inversion also misleads inferences about certain fundamental modulations (Gardner and Liu 2019; Liu, Cable, and Gardner 2018). In particular, increasing the contrast of an oriented stimulus increases the gain of orientation-tuned neurons without altering their tuning bandwidth (Alitto and Usrey 2004; Sclar and Freeman 1982; Skottun et al. 1987), but the inverted encoding model incorrectly suggests that higher contrast decreases bandwidth (Liu, Cable, and Gardner 2018). These inferences are misled because the estimated channel responses are constrained by the initial assumptions about \(C\) (Gardner and Liu 2019). Using the encoding model to study modulations therefore requires a way to estimate the contribution of each channel without assuming a fixed channel response function.

References

Alitto, Henry J, and W Martin Usrey. 2004. “Influence of Contrast on Orientation and Temporal Frequency Tuning in Ferret Primary Visual Cortex.” Journal of Neurophysiology 91 (6): 2797–2808.
Brouwer, Gijs Joost, and David J Heeger. 2009. “Decoding and Reconstructing Color from Responses in Human Visual Cortex.” Journal of Neuroscience 29 (44): 13992–4003.
Garcia, Javier O, Ramesh Srinivasan, and John T Serences. 2013. “Near-Real-Time Feature-Selective Modulations in Human Cortex.” Current Biology 23 (6): 515–22.
Gardner, Justin L, and Taosheng Liu. 2019. “Inverted Encoding Models Reconstruct an Arbitrary Model Response, Not the Stimulus.” eNeuro 6 (2).
Kay, Kendrick N, Thomas Naselaris, Ryan J Prenger, and Jack L Gallant. 2008. “Identifying Natural Images from Human Brain Activity.” Nature 452 (7185): 352.
Liu, Taosheng, Dylan Cable, and Justin L Gardner. 2018. “Inverted Encoding Models of Human Population Response Conflate Noise and Neural Tuning Width.” Journal of Neuroscience 38 (2): 398–408.
McAdams, Carrie J, and John HR Maunsell. 1999. “Effects of Attention on Orientation-Tuning Functions of Single Neurons in Macaque Cortical Area V4.” Journal of Neuroscience 19 (1): 431–41.
Rahmati, Masih, Golbarg T Saber, and Clayton E Curtis. 2018. “Population Dynamics of Early Visual Cortex During Working Memory.” Journal of Cognitive Neuroscience 30 (2): 219–33.
Reynolds, John H, Tatiana Pasternak, and Robert Desimone. 2000. “Attention Increases Sensitivity of V4 Neurons.” Neuron 26 (3): 703–14.
Saproo, Sameer, and John T Serences. 2014. “Attention Improves Transfer of Motion Information Between V1 and MT.” Journal of Neuroscience 34 (10): 3586–96.
Sclar, G, and RD Freeman. 1982. “Orientation Selectivity in the Cat’s Striate Cortex Is Invariant with Stimulus Contrast.” Experimental Brain Research 46 (3): 457–61.
Scolari, Miranda, Anna Byers, and John T Serences. 2012. “Optimal Deployment of Attentional Gain During Fine Discriminations.” Journal of Neuroscience 32 (22): 7723–33.
Siegel, Markus, Timothy J Buschman, and Earl K Miller. 2015. “Cortical Information Flow During Flexible Sensorimotor Decisions.” Science 348 (6241): 1352–55.
Skottun, Bernt C, Arthur Bradley, Gary Sclar, Izumi Ohzawa, and Ralph D Freeman. 1987. “The Effects of Contrast on Visual Orientation and Spatial Frequency Discrimination: A Comparison of Single Cells and Behavior.” Journal of Neurophysiology 57 (3): 773–86.
Sprague, Thomas C, Kirsten CS Adam, Joshua J Foster, Masih Rahmati, David W Sutterer, and Vy A Vo. 2018. “Inverted Encoding Models Assay Population-Level Stimulus Representations, Not Single-Unit Neural Tuning.” eNeuro 5 (3).
Sprague, Thomas C, Sameer Saproo, and John T Serences. 2015. “Visual Attention Mitigates Information Loss in Small-and Large-Scale Neural Codes.” Trends in Cognitive Sciences 19 (4): 215–26.
Sprague, Thomas C, and John T Serences. 2013. “Attention Modulates Spatial Priority Maps in the Human Occipital, Parietal and Frontal Cortices.” Nature Neuroscience 16 (12): 1879.
Vo, Vy A, Thomas C Sprague, and John T Serences. 2017. “Spatial Tuning Shifts Increase the Discriminability and Fidelity of Population Codes in Visual Cortex.” Journal of Neuroscience 37 (12): 3386–3401.
Yacoub, Essa, Noam Harel, and Kâmil Uğurbil. 2008. “High-Field fMRI Unveils Orientation Columns in Humans.” Proceedings of the National Academy of Sciences 105 (30): 10607–12.
Yang, Tianming, and John HR Maunsell. 2004. “The Effect of Perceptual Learning on Neuronal Responses in Monkey Visual Area V4.” Journal of Neuroscience 24 (7): 1617–26.
