Academic Stuff: I received my PhD in 2012 from Arizona State University. Since that time, I have been an Assistant and then Associate Professor at Louisiana State University and now I'm an Associate Professor at New Mexico State University. I supervise the EMMA Lab and love teaching courses on Memory, Cognition, Eye Movements/Eye Tracking, Experimental Methods, and Psychophysiology. Please see below for some of my current research interests, or the Cool New Stuff page for ongoing results.
Non-Academic Stuff: I enjoy traveling and learning about new people and places. Although I love the outdoors, especially hiking, I have never kept a plant alive longer than a month. I am also a big lover of dogs, especially dachshunds, but I'll spoil anything with three or four legs and a tail (or nub).
Unfamiliar Face Recognition
Recognizing someone you know is easy, and it remains easy under myriad circumstances. Recognizing an unfamiliar person, however, is far more challenging. Although I explore these topics using long-term memory paradigms, I am also interested in the processes by which people compare unfamiliar individuals to their photo IDs. This task is not only remarkably fallible, but also vulnerable to changes in the base rate of fake/stolen IDs: Individuals "miss" identity mismatches far more frequently when they rarely encounter them (Papesh & Goldinger, 2014). Our current research explores why this happens, and whether experts are immune to these prevalence effects.
More recently, we have begun to explore individual differences in matching ability. For example, we are using a battery of tasks to determine which cognitive processes predict performance, because our existing research indicates that face-matching ability is unaffected by concurrent working memory (WM) loads (see right).
Baseline-corrected pupil size as White participants encoded White and Asian faces, with separate lines for subsequent memory responses.
Baseline-corrected peak pupil size as participants encoded items for a subsequent memory test.
Participants' accuracy when 2, 4, or 6 visual items were concurrently held in working memory. Performance is conditionalized on correct WM retrieval after the match/mismatch decision.
The LC-NE System, Attention, and Memory
Pupil size is sensitive to more than just ambient lighting and emotions; it is also sensitive to ongoing cognitive processes. The neural underpinning of these latter changes is the locus coeruleus (LC), which is the sole source of norepinephrine (NE) for the brain. A major research goal of mine is to determine how and why pupil size tracks changes in cognitive activity, and how this might be related to changes in the tonic and phasic activity of the LC system. Current projects explore the influence of reward/motivation, fluency, goal pursuit, and awareness on dynamic changes in pupil size during tasks with implicit or explicit reliance on episodic memory.
The data depicted to the left are from two puzzling experiments:
Top: In a classic Remember/Know task, in which participants encoded Asian and White faces for a subsequent Remember/Know/New (RKN) test, participants' pupil sizes were smaller for items that were subsequently remembered with high fidelity (a similar pattern emerged using the process-dissociation procedure).
Bottom: In a straightforward encoding-retrieval study using spoken words (Papesh et al., 2012), participants' pupil sizes during encoding were closely related to the subsequent confidence with which they retrieved their memories. Subsequently weak (and forgotten) items were associated with smaller pupils, and subsequently strong (more confident) items were associated with larger pupils.
Because strong memories (and R-based memories) should be easy to encode and retrieve, our ongoing projects are aimed at figuring out this little puzzle, and we're relying on the relationship between pupil size and the LC-NE neuromodulatory system to do so.
Eye Movements in Cognitive Processes
Although it is obvious that eye movements aid information acquisition (after all, your fovea only covers a small portion of the visual environment!), it is less obvious that they may also aid cognitive processes. My research on eye movements in cognitive processes focuses on the role of non-visually guided eye movement patterns during memory retrieval and problem solving. Previous work in my lab has revealed that reinstated patterns of eye movements across encoding and retrieval were associated with enhanced retrieval success. This, however, only occurred when fixations were internally generated; external cues did not facilitate memory retrieval. We are currently investigating what types of information and retrieval strategies are influenced by (or revealed in) gaze patterns.
Retrieval and Rejection Dynamics
What is easier: knowing that you know, or knowing that you don't know? This is one of the questions that I have recently begun exploring by adopting a modified version of the museum paradigm (e.g., St. Jacques & Schacter, 2013). Participants engage in familiar or novel activities while wearing GoPro cameras, and they subsequently try to determine which photos/videos were taken from their cameras versus someone else's. I am also exploring how mouse movements can reveal the underlying decision dynamics for tasks such as this, as well as for more laboratory-based memory paradigms. The overarching goal is to document the cognitive processes involved in retrieval and rejection, and the time course followed by each process.
I have a number of ongoing collaborations, both with researchers at NMSU and elsewhere. Many of these projects focus on two key themes, single-cell neural recordings of memory and visual search, although other projects explore mind wandering during reading, intentional forgetting, and the influence of emotional context on memory.
With John T. Wixted and Larry Squire (University of California, San Diego), Stephen D. Goldinger (ASU), and Kris Smith (Barrow Neurological Institute), we explore the neural representation of memory at the level of individual neurons (and populations of neurons). Our previous work has found evidence for sparse distributed representations of episodic memories in the human hippocampus (Wixted et al., 2014), consistent with long-standing theories in cognitive neuroscience.
We have also found that these sparsely distributed representations are not perfectly specific (see above). In other words, the Jennifer Aniston cell may be the Jennifer Aniston/Snakes/Planes cell. In Valdez, Papesh et al. (2015), we documented that most cells do not respond to one and only one concept. Instead, they respond to an average of 13 concepts, and mutual information analyses over the entire population of recorded cells can predict on-screen concepts with 63% accuracy. We're currently examining data from a variety of additional memory tests, and are preparing a new submission for the Proceedings of the National Academy of Sciences.
With Dr. John Chang (Banner MD Anderson Cancer Center) and Steve Goldinger (ASU), I am exploring visual search in one of the noisiest signal detection tasks out there: cancer screening. After surveying practicing radiologists, we found that they experience both cognitive and contextual demands during their regular work days. For example, every doctor rated interruptions as the number one challenge impacting radiological reading, yet little research explores this topic. In recent years, CT imaging has expanded in use relative to X-ray, yet most of what we know about medical image screening comes from flat, 2D images. We have developed a series of laboratory and workflow experiments designed to monitor attention and early-stage cancer detection as radiologists face the cognitive and contextual factors that they subjectively identify as damaging to their performance. Using eye tracking and AI, we aim to better understand radiological screening in order to improve early-stage cancer detection.