
In this emerging field of computer science, scientists are modeling the brain to make computers faster and more effective. Over the past few decades we have witnessed a technological revolution brought about by the creation of computer processors based on silicon and other semiconductor materials.
Computers that once filled entire rooms have since been reduced to single chips. The driving force behind this trend has been Moore's law, the observation made by Gordon Moore in 1965 that the number of components per integrated chip doubles roughly every two years, yielding ever faster computers.
But as computational demands grow with advances in computers, robotics, the Internet of Things (IoT), and smart machines, the semiconductor industry is approaching the point where computer chips cannot be miniaturized much further. There is, after all, a physical limit to how many transistors can fit on a single chip.
As a result, computer scientists are turning to an entirely new approach, known as "neuromorphic computing," in which computers are designed to process information and interact with the outside world in ways that resemble the human brain.
This field of study is gaining popularity and is considered a foundational step toward next-generation computer hardware and artificial intelligence systems. We cover everything you need to know about this emerging field and what it means for the future of computer science.
How does the brain process and store information?
Before moving on to neuromorphic devices and their applications, it is worth discussing the biological phenomenon that motivates this field: synaptic plasticity, the extraordinary capacity of the human brain to change and adapt in response to new information. To appreciate this, we first need a sketch of how our own "computing center" operates.
The messenger cells of the brain are called neurons. They are all interconnected by synapses, the junctions that link them into a vast network through which electrical impulses and chemical signals are transmitted. Neurons communicate with each other using "spikes": short bursts of electrical activity lasting a few milliseconds.
Memory in a computer can be increased simply by adding more memory cells, but in the brain, memories are formed by strengthening the connections between neurons and by creating new ones. When two neurons become more tightly coupled, we say that the synaptic weight of the connection between them increases. Our brain contains roughly 10¹² neurons, connected to one another through some 10¹⁵ synapses. These connections, and the strength of communication across them, fluctuate over time with the stimuli, or spikes, they receive, allowing the brain to adapt to a changing environment and to form and preserve memories.
It is crucial to understand potentiation and depression, the two key mechanisms of synaptic plasticity, by which synaptic connections gradually strengthen or weaken; they play a central role in learning and memory. These changes can occur on timescales ranging from seconds to hours or longer.
Higher-frequency spikes, such as those generated when learning a new skill, are hypothesized to be linked to the formation of long-term memory through the strengthening, or potentiation, of certain synapses. Lower-frequency stimuli, by contrast, cause depression, weakening the connection (the synaptic weight) at the relevant synaptic junction, which is akin to forgetting something once learned.
It should be stressed that this is a bit of an oversimplification: potentiation and depression depend not only on the frequency of the spikes but also on their timing. For example, when many neurons send spikes to a synapse at the same time, the synaptic weight increases much faster than it would from a slow succession of pulses.
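To make this concrete, here is a minimal sketch, in Python, of pair-based spike-timing-dependent plasticity (STDP), one common way timing-dependent potentiation and depression is modeled; all constants are illustrative assumptions rather than values from any particular study.

```python
import numpy as np

# Pair-based STDP sketch. A_PLUS, A_MINUS, and TAU are illustrative
# assumptions, not measurements from any biological or hardware system.
A_PLUS, A_MINUS = 0.05, 0.055   # maximum weight change per spike pair
TAU = 20.0                      # plasticity time constant, in ms

def stdp_delta_w(t_pre, t_post):
    """Weight change for one pre/post spike pair (times in ms)."""
    dt = t_post - t_pre
    if dt > 0:   # pre fires just before post -> potentiation (strengthen)
        return A_PLUS * np.exp(-dt / TAU)
    else:        # post fires before pre -> depression (weaken)
        return -A_MINUS * np.exp(dt / TAU)

w = 0.5  # initial synaptic weight
# Closely timed pre-before-post pairs strengthen the synapse step by step.
for t in (0.0, 50.0, 100.0):
    w += stdp_delta_w(t_pre=t, t_post=t + 2.0)
print(f"after correlated spiking: w = {w:.3f}")
```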
Researchers have to think outside the box to deliberately mimic this process because it is so sophisticated and complex.
How does a neuromorphic computer work?
The von Neumann architecture used to build modern computers is based on ideas first developed by Alan Turing in the 1930s. This configuration keeps the memory and data-processing units separate, which slows performance, since data must be shuttled back and forth between them, and consumes power unnecessarily.
Neuromorphic computers, on the other hand, use chip architectures that combine computation and memory in a single component. In terms of hardware, the field is expanding rapidly and encompasses cutting-edge new designs, novel materials, and new computing components.
Researchers from around the world are working to create synthetic networks of neurons and synapses that mimic the flexibility of the brain, using both organic and inorganic materials. Most large-scale neuromorphic computers currently in existence, such as IBM's TrueNorth, Intel's Loihi, and BrainScaleS-2, use transistors based on well-established metal-oxide-semiconductor technology.
Von Neumann computers use transistors as one of their basic electronic building blocks. There are hundreds of types of transistors, the most common being the metal-oxide-semiconductor field-effect transistor, or MOSFET. Within a computer chip, a MOSFET functions primarily as a switch (and, to a lesser extent, an amplifier) for electrical currents.
Each transistor can thus be in an on or off state, blocking or allowing current to flow, which corresponds to a binary 1 or 0. This operating principle makes storing and processing information remarkably straightforward, which is why electronic memory cells and logic gates have become essential components of our digital world.
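As an idealized illustration (ignoring all device physics), the sketch below treats a MOSFET as a pure on/off switch and composes two of them into a NAND gate, one of the logic gates mentioned above; the function names are ours, chosen for clarity.

```python
# Idealized model: a MOSFET as an on/off switch, and two such switches
# in series forming a NAND gate. This only illustrates the binary (1/0)
# operating principle described above, not real transistor behavior.

def mosfet(gate: int) -> bool:
    """The switch conducts when the gate voltage is 'high' (1)."""
    return gate == 1

def nand(a: int, b: int) -> int:
    # Output is pulled low only when BOTH series switches conduct.
    return 0 if (mosfet(a) and mosfet(b)) else 1

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", nand(a, b))
```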
However, the electrical signals in our brain are not simply 0s and 1s. A synaptic connection, for example, can take on a range of different "weights," or strengths.
Many devices have been created to emulate this on a neuromorphic computer. One particular type of transistor, known as a polymer synaptic transistor, includes an "active layer" that modulates the signal passing between units. The conductivity of this layer, and therefore the signal it outputs, depends on the specific composition of the conductive polymer used to form it.
When voltage pulses of a certain frequency are applied across such transistors, the active layer changes, causing depressions or amplifications of the electrical signal comparable to spikes in brain activity. This is essentially where plasticity comes in: each spike carries numerical information in its frequency, timing, size, and shape. Spikes can be converted to binary values and vice versa, though the exact process for doing so is still under investigation.
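As a rough illustration of this frequency-dependent behavior, the toy model below nudges a conductance upward when pulses arrive quickly (potentiation) and lets it decay when they arrive slowly (depression). The frequency threshold and step sizes are assumptions made for illustration, not data from any real synaptic transistor.

```python
# Toy rate-dependent conductance model: fast pulse trains strengthen the
# channel, slow ones weaken it. F_THRESHOLD and `step` are illustrative
# assumptions, not device measurements.

F_THRESHOLD = 10.0   # Hz; assumed crossover between potentiation/depression

def update_conductance(g, pulse_rate_hz, step=0.02):
    if pulse_rate_hz > F_THRESHOLD:
        return min(1.0, g + step)   # high-frequency pulses: strengthen
    return max(0.0, g - step)       # low-frequency pulses: weaken

g = 0.5
for rate in [50, 50, 50, 2, 2]:    # a burst of fast pulses, then slow ones
    g = update_conductance(g, rate)
    print(f"rate {rate:>2} Hz -> conductance {g:.2f}")
```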
Neuromorphic hardware is not limited to transistors, either. Researchers have reported increasingly creative ways to mimic the structure of the brain using artificial components such as memristors, capacitors, and spintronic devices, along with some intriguing attempts to perform neuromorphic computing using fungi.
How is a neuromorphic computer programmed?
Neuromorphic computers frequently perform computational tasks using artificial neural networks (ANNs). Among the many variants of ANNs, spiking neural networks (SNNs) are of particular interest: they are built from synthetic neurons that communicate by exchanging electrical signals known as "spikes," and they incorporate time into their models. These systems use less energy because an artificial neuron transmits information only when the spikes it receives exceed a certain threshold.
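This threshold behavior can be sketched with the leaky integrate-and-fire (LIF) neuron, a standard textbook model for SNN building blocks. The parameters below are illustrative, and the random input stands in for spikes arriving from upstream neurons.

```python
import numpy as np

# Leaky integrate-and-fire (LIF) neuron sketch: inputs accumulate in a
# "membrane potential" that leaks over time, and the neuron emits a spike
# only when the potential crosses a threshold. Parameters are illustrative.

def lif_run(input_current, threshold=1.0, leak=0.9):
    v, spikes = 0.0, []
    for i in input_current:
        v = leak * v + i          # leaky integration of incoming signal
        if v >= threshold:        # fire only above threshold...
            spikes.append(1)
            v = 0.0               # ...then reset the membrane potential
        else:
            spikes.append(0)      # otherwise stay silent (saving energy)
    return spikes

rng = np.random.default_rng(0)
print(lif_run(rng.uniform(0, 0.4, size=20)))
```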
Before the network can start working, it must be programmed, or in other words, trained. This is done by feeding it data from which it can learn, and the learning strategy varies with the type of ANN. For example, if a network is being trained to recognize cats or dogs in photographs, thousands of images labeled "cat" or "dog" can be fed to it so that it learns to identify them on its own in future work. Identification of this kind requires extremely laborious calculations over the color values of every pixel in an image.
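Schematically, such training looks like the sketch below, where random vectors stand in for labeled "cat"/"dog" images and a simple logistic model stands in for a full network; a real pipeline would use a deep network and far more data.

```python
import numpy as np

# Schematic supervised training loop. Random vectors play the role of
# labeled images; the labels and data are synthetic, for illustration only.

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 64))            # 200 fake "images", 64 features
y = (X[:, 0] > 0).astype(float)           # toy labels: 1 = "cat", 0 = "dog"

w, b, lr = np.zeros(64), 0.0, 0.1
for epoch in range(100):
    p = 1 / (1 + np.exp(-(X @ w + b)))    # predicted probability of "cat"
    grad = p - y                          # error signal (cross-entropy grad)
    w -= lr * X.T @ grad / len(y)         # adjust weights from the error
    b -= lr * grad.mean()

acc = (((1 / (1 + np.exp(-(X @ w + b)))) > 0.5) == y).mean()
print(f"training accuracy: {acc:.2f}")
```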
There is a wide variety of ANNs, and which one to use depends on the user's needs. Although SNNs are attractive for their low power consumption, they are generally difficult to train, largely because of their complex neuronal dynamics and the non-differentiable nature of the spiking process.
Where is neuromorphic computing used?
According to experts, neuromorphic devices will complement rather than replace traditional computer hardware, especially when it comes to solving certain technological problems. That said, there are claims that neuromorphic computers can simulate Boolean logic, a fundamental concept in any modern programming language, which suggests that they could potentially perform general-purpose computing.
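As a simple illustration of that claim, a single threshold unit, a crude stand-in for a spiking neuron, can realize AND and OR depending on its weights and threshold; the values below are hand-picked for illustration, not taken from any published neuromorphic system.

```python
# A threshold unit as a stand-in for a spiking neuron: it "fires" (1) only
# when the weighted sum of its inputs reaches the threshold. Hand-picked
# weights make the same unit compute AND or OR.

def threshold_neuron(inputs, weights, threshold):
    return 1 if sum(i * w for i, w in zip(inputs, weights)) >= threshold else 0

AND = lambda a, b: threshold_neuron((a, b), (1, 1), threshold=2)
OR  = lambda a, b: threshold_neuron((a, b), (1, 1), threshold=1)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "AND:", AND(a, b), "OR:", OR(a, b))
```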
In any case, neuromorphic computing will be most compelling in areas and applications where the brain outperforms traditional computers in energy efficiency and computational speed.
These include applying artificial intelligence (AI) to perform cognitive tasks such as voice or image recognition efficiently, as well as opening up new possibilities in robotics, sensing, and healthcare, to name a few.
Although the field is still in its infancy and there are hurdles to overcome, neuromorphic computing is growing in popularity and offers a viable alternative to traditional computer systems.
Source: Advanced Science News
Updated: 14/03/2023 15:25