
A model of how the brain stores information has demonstrated how memories deteriorate with age. Attractor networks are theoretical constructs that model how the brain stores memories. A new study of these networks examines how memories are initially retained and eventually lost. Mathematical analysis and simulations show that, as memories age, the patterns of activity that store them transform into chaotic, hard-to-read patterns before disappearing into random noise. It is not yet known whether this behavior occurs in real brains, but the researchers propose to look for it by observing how neural activity develops over time during memory-retrieval tasks.
In both artificial and biological neural networks, memories are stored and retrieved as patterns in how signals are transmitted among the many nodes (neurons) of a network. The output value of each node in an artificial neural network depends on the inputs it receives from the other nodes to which it is connected. Similarly, a biological neuron's inputs determine both the frequency and the probability of its "firing" (sending an electrical signal). In a further parallel with biology, the links connecting the nodes, analogous to synapses, carry "weights" that can amplify or suppress the signals they transmit. A link's weight depends on the degree of correlation between the activities of the two nodes it connects, and the weights can change as new memories are stored.
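To make the node-and-weight picture concrete, here is a minimal Python sketch of a single artificial node and a Hebbian-style weight change. It illustrates the general idea only, not the model used in the study, and all names in it are made up for this example.

```python
import numpy as np

rng = np.random.default_rng(0)

inputs = rng.choice([-1.0, 1.0], size=5)   # activities of 5 connected nodes
weights = rng.normal(0.0, 0.1, size=5)     # "synaptic" weights on each link

# The node's output depends on the weighted sum of its inputs:
# it "fires" (+1) if the total input is positive, stays silent (-1) otherwise.
output = 1.0 if weights @ inputs > 0 else -1.0

# Hebbian-style update: a link's weight grows when the two nodes it connects
# are active together and shrinks when their activities disagree, which is
# how stored information can reshape the weights.
learning_rate = 0.05
weights += learning_rate * output * inputs
```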
In attractor networks, the values of the signals transmitted between nodes are taken to correspond to the firing rates of real neurons; these firing rates then serve as inputs that determine the responses of the receiving neurons. Such signals flow continuously through the network. To imprint a "memory" on the network, researchers take a long binary number (representing the remembered item) and assign one of its digits to each node. They can then monitor how the network's activity changes as the weights are adjusted. The memory is encoded when the signals moving through the nodes eventually settle into a repeating pattern of activity known as an attractor state.
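The imprinting process can be illustrated with a classic Hopfield-style toy network. This is a minimal sketch under the assumption of binary (+1/-1) node activities and a simple Hebbian weight rule, not the more elaborate rate model used in the study.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 100

# The "memory": a long binary number with one digit (+1/-1) per node.
memory = rng.choice([-1, 1], size=N)

# Adjust the weights so the memory becomes an attractor (Hebbian rule).
W = np.outer(memory, memory) / N
np.fill_diagonal(W, 0)                  # no self-connections

# Start from arbitrary activity and let signals circulate through the nodes.
state = rng.choice([-1, 1], size=N)
for _ in range(20):
    state = np.sign(W @ state)
    state[state == 0] = 1

# The activity settles into the repeating pattern encoding the memory.
print("overlap with stored memory:", abs(state @ memory) / N)  # close to 1.0
```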
Can a Memory Be Retrieved?
The memory can be retrieved by applying to the nodes a new binary number that is mathematically related to the one making up the memory, for example a partial or corrupted version of it. This input can cause the network's activity to relax into the corresponding attractor state. An attractor network can typically store many distinct memories, each associated with a different attractor state, and the network's activity can switch between these states.
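Retrieval can be sketched in the same toy network: store several memories, then feed in a corrupted copy of one of them and watch the activity relax to the matching attractor state. Again, this is an illustrative Hopfield-style example, not the study's model.

```python
import numpy as np

rng = np.random.default_rng(2)
N, P = 200, 5
memories = rng.choice([-1, 1], size=(P, N))      # five distinct memories

W = (memories.T @ memories) / N                  # sum of Hebbian terms
np.fill_diagonal(W, 0)

# A cue mathematically related to memory 0: the same pattern with 10% of
# its digits flipped.
cue = memories[0].copy()
flip = rng.choice(N, size=N // 10, replace=False)
cue[flip] *= -1

state = cue
for _ in range(20):
    state = np.sign(W @ state)
    state[state == 0] = 1

print("overlap with memory 0:", (state @ memories[0]) / N)  # close to 1.0
```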
Earlier studies predicted that a network imprinted with sharply defined, stable attractor states would exhibit much quieter activity than is observed in biological neural networks. Research on attractor networks has also shown that they are prone to "catastrophic forgetting": if too many memories are imprinted, none of them can be recalled.
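Catastrophic forgetting is easy to reproduce in the classic Hopfield toy network, whose capacity is roughly 0.138 patterns per node. The sketch below (illustrative only) shows recall collapsing once too many memories are imprinted.

```python
import numpy as np

rng = np.random.default_rng(3)
N = 200

def recall_overlap(P):
    """Store P memories, cue the first one exactly, report recall quality."""
    memories = rng.choice([-1, 1], size=(P, N))
    W = (memories.T @ memories) / N
    np.fill_diagonal(W, 0)
    state = memories[0].copy()
    for _ in range(30):
        state = np.sign(W @ state)
        state[state == 0] = 1
    return abs(state @ memories[0]) / N

print("below capacity:", recall_overlap(10))   # ~1.0, clean recall
print("above capacity:", recall_overlap(60))   # well below 1.0, recall fails
```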
The researchers studied how this behavior changes if memories are not permanent. Under their rule for updating the weights, the weights created when a memory is imprinted gradually decay as new memories are added. Two different kinds of memory states emerge from their simulations. As memories are imprinted in sequence, the most recent ones are associated with "fixed point" attractors, distinct and stable patterns somewhat like the regular orbits of the planets around the Sun. The second type, chaotic attractors, appears as memory states age and weaken; these are more like weather patterns, in that their activity never repeats exactly. While neural networks that can both learn and forget had been studied before, this transition from fixed-point to chaotic dynamics had not previously been documented.
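A generic way to make weights impermanent, assumed here purely for illustration (the study's actual update rule is more elaborate), is to scale down all existing weights each time a new memory is imprinted, so older contributions gradually erode:

```python
import numpy as np

rng = np.random.default_rng(4)
N, P, lam = 200, 40, 0.9                 # lam < 1 makes old weights decay

W = np.zeros((N, N))
memories = []
for _ in range(P):                       # imprint P memories in sequence
    p = rng.choice([-1.0, 1.0], size=N)
    memories.append(p)
    W = lam * W + np.outer(p, p) / N     # old terms shrink, new term added
np.fill_diagonal(W, 0)

def recall(pattern, steps=30):
    state = pattern.copy()
    for _ in range(steps):
        state = np.sign(W @ state)
        state[state == 0] = 1
    return (state @ pattern) / N

print("newest memory:", recall(memories[-1]))  # ~1.0, still a clean attractor
print("oldest memory:", recall(memories[0]))   # far below 1.0, largely faded
```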
The apparent randomness of a chaotic attractor grows as the network accumulates more memories, until the oldest attractor is swallowed by the background noise. At that point the memory can no longer be retrieved; it is completely "lost". The findings show that "forgetting" in this network first involves a transition from regular to chaotic activity, followed by a slow blending into noise over a long decay time. In this scenario there is also no catastrophic forgetting, since old memories fade away on their own.
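Repeating the decaying-weights sketch above and sweeping over memory age shows the gradual character of this kind of forgetting: recall quality falls off smoothly with age, and even imprinting far more patterns than the classic capacity never wipes out all memories at once. As before, this toy example only contrasts gradual with catastrophic forgetting; it does not reproduce the chaotic dynamics reported in the study.

```python
import numpy as np

rng = np.random.default_rng(5)
N, P, lam = 200, 200, 0.9               # many more patterns than 0.138 * N

W = np.zeros((N, N))
memories = []
for _ in range(P):
    p = rng.choice([-1.0, 1.0], size=N)
    memories.append(p)
    W = lam * W + np.outer(p, p) / N
np.fill_diagonal(W, 0)

def recall(pattern, steps=30):
    state = pattern.copy()
    for _ in range(steps):
        state = np.sign(W @ state)
        state[state == 0] = 1
    return (state @ pattern) / N

# Age 0 is the most recently stored memory; overlap decays smoothly with age
# instead of collapsing for all memories at once.
for age in (0, 5, 10, 20, 40):
    print(f"age {age:3d}:", recall(memories[-1 - age]))
```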
The Forgetting Process in the Human Brain
According to the researchers, if this "forgetting" process takes place in the brain, then old memories must be stored as chaotic and increasingly noisy states, so fluctuations in cell firing times should be larger when these memories are recalled. They suggest this hypothesis could be tested by observing brain activity during memory tasks in which the interval between presenting the input and asking the human or animal subject to recall it is made progressively longer.
Neuroscientist Tilo Schwalger of the Technical University of Berlin thinks the findings could plausibly carry over to networks in animal brains and that the predictions should be testable. According to neuroscientist Mastrogiuseppe of the bioscience organization Champalimaud Research in Portugal, the work "sits at the crossroads of two main areas of study in theoretical neuroscience: one is about memory; the other is about disordered neural activity in the brain". Mastrogiuseppe adds that the new findings point to a potential link between these two phenomena.
Source: journals.aps.org/prx/abstract/10.1103/PhysRevX.13.011009
Updated: 29/01/2023 08:50