Summary: By training computer models to predict everyday activities, researchers found that comprehension improved when the models responded to uncertainty rather than to prediction errors alone. The finding challenges the idea that the brain relies only on surprise to process events and suggests it may use two distinct mechanisms. Related memory research shows that recognizing event boundaries improves recall, particularly in older adults, and work is under way to help people identify these boundaries more reliably in order to strengthen memory. The results could deepen our understanding of cognitive processing and inform interventions for age-related memory loss, underscoring the close ties between event segmentation and memory storage.
Key Facts:
Computer models comprehended everyday activities better when they responded to uncertainty.
Finding the boundaries of events is a powerful predictor of memory retention.
Memory decline is often associated with event processing difficulties in older adults.
Source: WUSTL
Life is made up of little things: brewing coffee in the morning, letting the dog outside, opening a laptop, letting the dog back in. Add them all up and you have a full day.
Our brains are dedicated to observing and processing the events that make up our everyday lives, said Jeff Zacks, chair of the Department of Psychological & Brain Sciences and the Edgar James Swift Professor in Arts and Sciences.
Understanding the beginning and end of events is essential to comprehending the world, according to Zacks.
Zacks and other researchers from the McKelvey School of Engineering and Arts & Sciences examine this important aspect of human cognition in two new papers.
Zacks led a study in which computer models watched more than 25 hours of footage of people going about their daily lives, like cleaning the kitchen or preparing food, and then predicted what would happen next.
Unexpectedly, the study found that the computer models performed best when they addressed uncertainty. The model’s comprehension was enhanced by resetting and reevaluating the scene when it was particularly uncertain about what would happen next.
Co-authors of the study, which will be published in an upcoming edition of PNAS Nexus, include Tan Nguyen, a graduate student in Zacks’s Dynamic Cognition Laboratory; Matt Bezdek, a senior scientist in the lab; Aaron Bobick, the James M. McKelvey Professor and dean of the McKelvey School of Engineering; Todd Braver, the William R. Stuckenberg Professor in Human Values and Moral Development; and Samuel Gershman, a Harvard neuroscientist.
The human brain, Zacks had previously hypothesized, was particularly sensitive to the little surprises that occur throughout our lives. According to his theory, people reevaluate a scene each time they notice something unexpected, a response known as “prediction error.”
That theory was called into question when the successful computer model turned out to rely more on uncertainty than on prediction errors. “This is science,” Zacks said. “When new information becomes available, we update our theories.”
Nguyen noted that surprises still matter and that the idea of prediction error need not be abandoned entirely. “We are beginning to believe that the brain uses both mechanisms,” he said. “It’s not an either-or situation. Each model has the potential to add something special to our understanding of human cognition.”
Postdoctoral researcher Maverick Smith of the Dynamic Cognition Lab is also delving deeper into the relationship between memory and event comprehension. Smith co-wrote a review paper in Nature Reviews Psychology with Heather Bailey, a former postdoc at WashU who is currently an associate professor at Kansas State University. The paper compiled the mounting evidence that long-term memory is closely linked to the capacity to correctly and logically distinguish between one event and another.
“The ability to recognize when events begin and end varies greatly from person to person, and those variations can significantly predict how much people remember later on,” Smith said.
“Our goal is to develop an intervention that can help people segment events, which may enhance memory.”
Like Zacks, Smith uses video clips to gain a better understanding of how the brain interprets events. His videos feature people shopping, setting up printers, and performing other mundane tasks rather than cooking and cleaning.
In a variety of experiments, viewers press a button whenever they recognize the start or end of a specific event. Smith then uses a series of written questions to gauge participants’ recall of the videos.
According to Smith’s research, older adults typically struggle more to process events, which may contribute to age-related memory loss. “Perhaps we can step in and help them remember the things that happened in their lives better,” he said.
In order to better understand how the brain processes and remembers events, Zacks, Nguyen, Smith, and other members of the Department of Psychological & Brain Sciences have big plans.
Zacks’ team is using fMRI brain imaging to monitor the real-time responses of 45 study participants to videos of everyday occurrences. “We’re researching the actual neural dynamics of these cognitive processes,” Zacks said.
Another ongoing study is tracking eye movements to gain new insight into how humans perceive the world. “People spend a lot of time looking at and thinking about people’s hands when they watch an everyday activity,” Zacks explained.
By making it simpler to distinguish between events, Smith is currently utilizing video-based experiments to see if he can enhance the memory of study participants, including elderly individuals and those suffering from Alzheimer’s disease. His ultimate goal is to comprehend the process by which event observations are retained in long-term memory.
“There are undoubtedly some people who are better than others at breaking events down into meaningful chunks,” Smith said. “We are still unsure whether it is possible to enhance that capacity and whether doing so will result in better memory.”
About this neuroscience and memory research news
Author: Leah Shaffer
Source: WUSTL
Contact: Leah Shaffer at WUSTL
Image: The image is credited to Neuroscience News
Original Research: Open access.
“Modeling human activity comprehension at human scale: Prediction, segmentation, and categorization” by Jeff Zacks et al. PNAS Nexus
Abstract
Modeling human activity comprehension at human scale: Prediction, segmentation, and categorization
Humans form sequences of event models, representations of the current situation, to predict how an activity will unfold. Multiple mechanisms have been proposed for how the cognitive system determines when to segment the stream of behavior and switch from one active event model to another.
Here, we created a computational model that learns information about event classes (event schemas) by combining recurrent neural networks for short-term dynamics with Bayesian inference over event classes for event-to-event transitions.
By representing event schemas, this architecture constructs a series of event models. It was trained with a single pass through 18 h of naturalistic human activities, and a further 3.5 h of activities were used to test each variant for agreement with human segmentation and categorization.
The architecture learned to predict human activity, and its segmentation and categorization approached human-like performance.
To better simulate human event segmentation, we then compared two variants of this architecture: one that transitioned when the active event model produced a high prediction error, and one that transitioned when it produced high prediction uncertainty.
Despite receiving no feedback about segmentation or categorization, both variants learned to segment and categorize events, and the prediction uncertainty variant provided a somewhat closer match to human segmentation and categorization.
These results suggest that event model transitioning based on prediction uncertainty or prediction error can reproduce two important features of human event comprehension.
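To make the architecture and its two transition rules more concrete, the sketch below is a minimal, hypothetical illustration in Python with NumPy. It is not the authors' implementation: the class names, the untrained random weights, the squared-error surprise measure, the entropy-based uncertainty measure, and the thresholds are all illustrative assumptions. The general shape follows the abstract: each event schema is a small recurrent network that predicts the next moment of an activity, a Bayesian-style posterior tracks which schema best explains what is happening, and the active event model is swapped out when either prediction error or prediction uncertainty crosses a threshold.

import numpy as np

class EventSchemaRNN:
    """One event schema: a small recurrent net that predicts the next scene vector.
    Weights are random here; in the actual study they would be learned from video."""
    def __init__(self, obs_dim, hidden_dim, rng):
        self.Wxh = rng.normal(0.0, 0.1, (hidden_dim, obs_dim))
        self.Whh = rng.normal(0.0, 0.1, (hidden_dim, hidden_dim))
        self.Why = rng.normal(0.0, 0.1, (obs_dim, hidden_dim))
        self.h = np.zeros(hidden_dim)

    def reset(self):
        # Start a fresh event model (the "resetting and reevaluating" described above).
        self.h = np.zeros_like(self.h)

    def predict_next(self, obs):
        self.h = np.tanh(self.Wxh @ obs + self.Whh @ self.h)
        return self.Why @ self.h

class SchemaLibrary:
    """Holds several event schemas and a Bayesian-style belief about which is active."""
    def __init__(self, n_schemas, obs_dim, hidden_dim, noise=1.0, seed=0):
        rng = np.random.default_rng(seed)
        self.schemas = [EventSchemaRNN(obs_dim, hidden_dim, rng) for _ in range(n_schemas)]
        self.log_prior = np.full(n_schemas, -np.log(n_schemas))  # uniform prior over schemas
        self.noise = noise

    def posterior(self, predictions, next_obs):
        # Gaussian likelihood of the observed frame under each schema's prediction.
        log_lik = np.array([-np.sum((p - next_obs) ** 2) / (2 * self.noise ** 2)
                            for p in predictions])
        log_post = self.log_prior + log_lik
        log_post -= np.logaddexp.reduce(log_post)  # normalize in log space
        return np.exp(log_post)

def prediction_error(pred, obs):
    """'Surprise': distance between the active model's prediction and what happened."""
    return np.sum((pred - obs) ** 2)

def prediction_uncertainty(posterior):
    """Uncertainty: entropy of the belief over which event schema is active."""
    p = np.clip(posterior, 1e-12, 1.0)
    return -np.sum(p * np.log(p))

def segment(frames, library, rule="uncertainty", threshold=1.0):
    """Return frame indices where the model transitions to a new event model."""
    boundaries = []
    active = 0  # index of the currently active schema
    obs = frames[0]
    for t in range(1, len(frames)):
        preds = [s.predict_next(obs) for s in library.schemas]
        post = library.posterior(preds, frames[t])
        if rule == "error":
            signal = prediction_error(preds[active], frames[t])  # prediction-error variant
        else:
            signal = prediction_uncertainty(post)  # prediction-uncertainty variant
        if signal > threshold:  # event boundary: switch the active event model
            boundaries.append(t)
            active = int(np.argmax(post))
            library.schemas[active].reset()
        obs = frames[t]
    return boundaries

# Toy usage: random feature vectors stand in for frames of everyday-activity video,
# and the threshold is arbitrary; real thresholds would be tuned against human data.
if __name__ == "__main__":
    frames = np.random.default_rng(1).normal(size=(200, 8))
    lib = SchemaLibrary(n_schemas=4, obs_dim=8, hidden_dim=16)
    print(segment(frames, lib, rule="uncertainty", threshold=1.2))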