Greetings, visitor! Please note that this site is a work in progress and we are scientists, not web designers; we remain ever-hopeful that someday we will get around to making it prettier. [TO DO: Insert ironic 1990s-style "under construction" animated GIF.]
Spring 2021: Hey, guess who has about 12 thumbs and didn't update the news part of this website for several years? That would be us. Wow, so much has happened in that time... I think there was a pandemic at some point? Hey, remember early on in that, when we were all watching Tiger King? Weird times. Anyway, sorry about the lack of updates... maybe we'll be better about that in the future. (Arrested Development narrator: "They won't.")
Sept 2016: Good news, everyone! In collaboration with the lab of Mike Dodd at UNL and some fantastic colleagues at the University of Delaware and the University of Nevada, Reno, the lab will be sharing in a new $6 million grant to do some cool research projects and help build up the infrastructure of cognitive neuroscience research at UNL. In related news, effective immediately, lab meeting refreshments will consist of champagne and caviar. Monocles will be provided.
Positions: The lab is not currently recruiting any postdocs, graduate students, or other full-time personnel. However, as Yoda said -- always in motion is the future, so things could change anytime. It is also possible that we forgot to update the website again, so if you are particularly interested in joining us, you can always email Matt for the latest news.
We are also kind of full up on undergraduate helpers at the moment, but again, if you feel that information is out of date or are especially interested in joining us, just email Matt to discuss opportunities to get involved.
(If you are here because you are interested in volunteering as a research participant, please see the Contact page for details on that.)
Our group has many interests, but most of our research is broadly organized around the theme of studying the interplay between two aspects of cognition: the internal world of thoughts and memories that come from inside our heads (encompassed by the umbrella term "reflection") and the way that we process sensory (mostly visual) information from the external environment ("perception").
After reading the previous paragraph, you may find yourself saying, "Wait a second -- so your research focuses on 'seeing stuff' and 'thinking about stuff'? Doesn't that cover just about EVERYTHING in psychology?" Well, not everything -- but you'd be correct in noting that these research themes have deep and wide-ranging connections to many different topics in cognitive psychology and cognitive neuroscience (notably working memory, executive function, long-term memory, visual processing, and mental imagery), and have potential applications in the study of aging, mental illness, childhood development, and so on.
So within these broad themes, there are numerous interesting questions we can ask, but one of the specific areas we focus on is attention. You're probably aware that perceptual (e.g., visual) attention is used to select a subset of incoming sensory information for deeper processing, while the rest of the information flooding our senses at any given moment is mostly ignored. The same is true in the reflective domain -- when focused internally, attention is a mechanism for focusing on and shifting between thoughts and memory items, and as such it plays an important role in guiding and shaping the so-called "stream of consciousness." One of the main interests in our lab is how attention operates similarly or differently in perception versus reflection. Are perceptual attention and reflective attention basically the same thing, just focused in different directions (outward versus inward)? To what extent do their neural mechanisms overlap? Do they have similar consequences for behavior?
To address these questions, we use a number of different techniques. Sometimes we use functional MRI (fMRI), sometimes we use electroencephalography (EEG), sometimes we use good old-fashioned behavior (i.e., pressing buttons on a keyboard in response to a computer-based task). Occasionally, we incorporate measures such as eyetracking as well. Sometimes we employ fancy statistical techniques and heroic feats of computer programming -- specifically deep learning, which we've been focusing a lot of our efforts on recently -- but other questions can be answered with more straightforward designs and relatively basic statistics. There's something for everyone!
To make things a bit more concrete, here are some of our published findings so far (see Publications page for references and PDFs of papers):
- We have found that thinking even briefly of a visual stimulus such as a face or scene affects activity in brain areas traditionally associated with visual perception. For example, thinking of a face increases activity in the "fusiform face area" (FFA) and thinking of a scene increases activity in the "parahippocampal place area" (PPA). [Johnson et al., 2007]
- Furthermore, directing reflective attention to one item can actually suppress brain activity associated with another item. For example, when participants were shown a face and a scene to keep in memory, and then were told to focus their mental attention exclusively on the face (and forget about the scene), fMRI activity in the PPA decreased relative to a condition in which participants saw the face and scene, but received no further instruction to pay particular attention to either item. [Johnson & Johnson, 2009]
- We have found that in some circumstances, directing reflective attention to an item can temporarily inhibit the ability to return attention to it later. For example, if participants were shown the words "chili" and "wrestler" and then directed to think of the word "chili" while ignoring "wrestler," they were actually slower to read the word "chili" when it was shown on screen a moment later (relative to trials on which "wrestler" was shown instead). While perhaps a bit counterintuitive, we believe this effect is related to a known phenomenon of perceptual attention called "inhibition of return," which in turn suggests a deep-seated link between the mechanisms of perceptual attention and reflective attention. [Johnson et al., 2013]
- We have used multivariate pattern analysis (MVPA) techniques to decode which of several specific scene pictures a person is viewing or imagining, using activity in several scene-specific areas traditionally considered "visual" brain regions. Furthermore, we showed that the same patterns of activity experienced in these areas during perception are then "replayed" when people later recall a specific picture. This suggests that the way the brain represents information about visual items is similar regardless of whether people are actually perceiving the item in question, or merely recalling and forming a mental image of a previous perceptual experience. [Johnson & Johnson, 2014]
- We have used EEG to show that a brief act of reflective attention (thinking back to one of two recently presented visual items) can be broken down into two main temporal subcomponents -- the first likely associated with the initiation of attention, and the second likely associated with the activation of "visual" brain areas that encode the mental representation of the item. Furthermore, we have used MVPA to show that the category of item someone is thinking about (in this study, a scene, a face, or a word) can be decoded even during very short temporal intervals. [Johnson et al., 2015]
- We have used fMRI and more MVPA to show that seemingly random "drift" in neural activity during a working memory delay period determines whether or not an individual will remember the item accurately. In other words, we can not only decode based on brain activity what item you're maintaining in visual short-term memory -- we can decode how accurately you're remembering it or in what manner you're misremembering it. [Lim et al., 2019]
- We have developed a new deep-learning-based technique for MVPA, which we call "paired trial classification" or PTC. Essentially, instead of classifying a single trial of brain activity directly into a category, we present the classifier with TWO trials of brain activity, which could be from either the same category or different categories, and train it to tell us whether they are the same or different. This has a number of potential advantages for MVPA studies. [Williams et al., 2020]
- We have produced and distributed our own deep learning toolbox for what we call deep MVPA or dMVPA! This is intended to facilitate the same kinds of things as projects like PyMVPA and CoSMoMVPA, but using deep learning rather than conventional MVPA approaches. For more details, see the toolbox website and/or our paper. [Kuntzelman et al., 2021]
- We have shown behaviorally that two processes often discussed in working memory models -- refreshing and removal -- may not be dissociable from each other if they are studied under carefully controlled conditions. Our conjecture is that both could be reframed as different manifestations of a single process of reallocating mental attention within working memory. [Lintz & Johnson, in press]
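For the curious, the paired trial classification idea described above can be sketched in a few lines of code. This is just a toy illustration with made-up "brain activity" data, using scikit-learn's small neural network classifier rather than the lab's actual deep learning setup; all variable names and parameter choices here are our own for the example, not taken from the PTC paper or the dMVPA toolbox.

```python
import numpy as np
from itertools import combinations
from sklearn.neural_network import MLPClassifier

def make_pairs(trials, labels):
    """Turn single trials into pairs labeled same (1) / different (0)."""
    X, y = [], []
    for i, j in combinations(range(len(trials)), 2):
        # Present the classifier with TWO trials at once (concatenated)
        X.append(np.concatenate([trials[i], trials[j]]))
        # The target is not the category itself, but same/different
        y.append(int(labels[i] == labels[j]))
    return np.array(X), np.array(y)

# Toy "brain activity": 40 trials x 10 features, two categories
# whose mean activity patterns differ
rng = np.random.default_rng(0)
labels = np.repeat([0, 1], 20)
trials = rng.normal(labels[:, None] * 2.0, 1.0, size=(40, 10))

X, y = make_pairs(trials, labels)
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=3000,
                    random_state=0).fit(X, y)
print(clf.score(X, y))  # training accuracy on the same/different task
```

Note that the same/different judgment is not linearly separable in the concatenated input (it has an XOR-like structure), which is one reason a nonlinear classifier such as a neural network is a natural fit for this kind of pairing scheme.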
And here are some examples of projects we are currently working on:
- Investigating the relationship between neural reinstantiation (i.e., how much brain activity patterns from when an item was perceived are "replayed" when recalling the item later) and the subjective experience of perceptual and reflective vividness.
- Predicting how different memory representations within visual short-term memory can bias and interfere with one another, modeling how those biases and interference patterns might be affected by the spatial and featural relationships between the items in memory, and examining the effect that reflective attention can have on those biases and interference patterns.
- Exploring how representations and brain activity patterns change over repeated exposures to complex audiovisual information.
- Improving deep learning techniques for dMVPA and the decoding of brain activity patterns.
- Using dMVPA to develop "smarter" artifact-rejection techniques for EEG.
The above are just a few examples of projects for which we are currently collecting or analyzing data, but we have many more projects in various stages of development or planned for the future. Once more, if you'd like to play a role in making these projects successful, please get in touch to discuss opportunities for working in the lab!
We have fun.
Food! It's what we eat!
We do also do work! But mostly, it seems, we eat food!