Achievements of 2023: Three breakthroughs in the history of computer science

The UK-based Collins Dictionary has named ‘AI’ its Word of the Year for 2023.

This year saw tremendous advances in AI technology. Major companies such as Apple, Adobe, Microsoft, and Google have all announced plans to expand the use of AI in their products.

Apart from AI, quantum computing has also seen significant progress this year.

Let’s look at three advances in computer science from 2023 that are likely to shape the development of various technologies in the coming years.

1.  Hyper-dimensional computing

This year we have seen remarkable innovations such as ‘ChatGPT’ and ‘DALL-E’. From conversing with human-like fluency to creating pictures, videos, and music, AI can now do many things that were once exclusively human.

Considering all this, many people may assume that nothing can slow the development of artificial intelligence. But today’s AI systems are mainly based on ‘artificial neural networks’, which have several limitations.

In particular, these AI-based tools struggle when they have to reach a decision through logical reasoning.

Our brains can construct arguments using similes, analogies, and comparisons. That is why we do not need to grow new neurons to interpret something new that we see; instead, new ideas form in the brain through comparison with past memories and knowledge.

Artificial neural networks cannot work this way. When new information is introduced into such a network, new ‘nodes’ are needed to represent it. Nodes play a role loosely analogous to neurons in the brain, though they operate quite differently.

The more nodes a neural network devotes to a piece of information, the more refined its statistical picture of that information becomes, and the better the AI performs.

Because of this reliance on statistics, this way of working is called ‘statistical AI’.

There is, however, another approach to artificial intelligence, called ‘symbolic AI’. As the name suggests, this type of AI represents ideas with the help of symbols.

Statistical and symbolic AI work in completely different ways. So different, in fact, that the two cannot easily be made to work together.

But researchers are attempting exactly that: harnessing the power of statistical and symbolic AI together. The approach through which this is being attempted is called ‘hyper-dimensional computing’.

In March of this year, IBM researchers in Zurich, Switzerland achieved a major breakthrough in research on hyper-dimensional computing.

According to IBM researcher Abbas Rahimi, working with hyper-dimensional computing is quite challenging, because statistical and symbolic AI operate so differently. If the two can be made to work together, however, the benefits could be substantial.

A major advantage is that with hyper-dimensional computing, the AI does not need to create new nodes to understand new information or concepts.
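The core idea can be illustrated with a small sketch. In hyper-dimensional computing, each concept is a very long random vector; symbols are combined by ‘binding’ (elementwise multiplication) and ‘bundling’ (majority vote), and recognized by similarity rather than by adding new nodes. This is a minimal illustrative example of the general technique, not IBM’s actual system; the vector names and sizes are chosen for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 10_000  # hypervector dimensionality; random pairs are nearly orthogonal

def hv():
    """A random bipolar hypervector of +1/-1 entries."""
    return rng.choice([-1, 1], size=D)

def bind(a, b):
    """Bind a role to a value (elementwise multiply); the result is
    dissimilar to both inputs, like a new composite symbol."""
    return a * b

def bundle(*vs):
    """Superpose several vectors (majority sign); the result stays
    similar to each of its parts."""
    return np.sign(np.sum(vs, axis=0))

def sim(a, b):
    """Normalized dot product: near 1 for similar vectors, near 0 for unrelated."""
    return np.dot(a, b) / D

# Encode the record {color: red, shape: circle} symbolically
color, shape = hv(), hv()
red, circle = hv(), hv()
record = bundle(bind(color, red), bind(shape, circle))

# Binding the record with the 'color' role recovers something close to 'red'
probe = bind(record, color)
print(sim(probe, red))     # clearly positive
print(sim(probe, circle))  # near zero
```

No new nodes were created to store the record: the same fixed-size vectors represent every concept, and queries work by comparing similarities.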

Hyper-dimensional computing is still at the research stage. In 2023, however, researchers produced evidence that it can be used in practice.

Artificial intelligence will greatly accelerate if it is possible to use hyper-dimensional computing in AI platforms designed for various tasks. At the same time it will be easier for humans to understand the operation of AI.

AI will also require less power to operate. According to Rahimi, AI built this way could consume 10 to 100 times less electricity than today’s systems.

Currently, the computer and ICT industry is responsible for 2 to 4 percent of global carbon emissions. In the future, power-efficient hyper-dimensional computing may bring about a major change in this picture.

2.  New algorithms for quantum computing

Quantum computing is still in the research stage. However, how quantum computers should work has been studied for several decades, and various quantum algorithms are being developed for them.

Simply put, a quantum algorithm is an algorithm designed to run on a quantum computer.

These quantum algorithms are tested on various models of quantum computers.

Future quantum computers are expected to perform certain tasks much faster than ordinary computers. For decades, researchers have proposed ideas about what could be done with them.

For example, in the 1990s, MIT mathematician Peter Shor realized that quantum computers could factor large numbers into their prime factors very quickly.

For this he devised a special algorithm, now widely known as ‘Shor’s algorithm’.

Several algorithms have been developed for factoring large numbers. Among them, Shor’s algorithm has remained the fastest known for the past 30 years.
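Shor’s insight was to reduce factoring to ‘order finding’: given a number N and a base a, find the smallest r with a^r ≡ 1 (mod N), then extract a factor with a greatest-common-divisor computation. The quantum computer is only needed for the order-finding step. The sketch below shows that reduction, with the order found by slow classical brute force (the numbers and base are small demonstration choices):

```python
from math import gcd

def order(a, n):
    """Smallest r > 0 with a**r % n == 1. Done by brute force here, which
    is exponentially slow; the quantum part of Shor's algorithm finds r fast."""
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def shor_classical(n, a):
    """Classical reduction from factoring n to the order of a modulo n.
    Returns a nontrivial factor of n, or None if this base a is unlucky."""
    g = gcd(a, n)
    if g > 1:
        return g          # lucky: a already shares a factor with n
    r = order(a, n)
    if r % 2 == 1:
        return None       # odd order: try another base
    y = pow(a, r // 2, n)
    if y == n - 1:
        return None       # trivial square root: try another base
    return gcd(y - 1, n)  # a nontrivial factor of n

print(shor_classical(15, 7))  # prints 3, a factor of 15
```

Everything here runs on an ordinary computer; the speedup of Shor’s algorithm comes entirely from replacing the brute-force `order` function with a quantum subroutine.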

But that picture changed this year. In August 2023, New York University computer scientist Oded Regev presented an algorithm asymptotically faster than Shor’s.

Regev’s original goal was simply to improve Shor’s algorithm and increase its speed.

However, quantum computers are still being researched and are not yet practical for real-world tasks. As a result, like Shor’s algorithm, Regev’s new algorithm remains theoretical for now. Once quantum computing reaches practical use, though, these algorithms could make many tasks far easier.

3.  Unexpected behavior of AI language models

The systems that let AI communicate in human language are called ‘language models’. Language models vary in capability: some work quite efficiently, others less so.

All of these models, however, are trained on enormous amounts of data. This is why they are called ‘large language models’.

When a language model is small, it is not very capable. As more data is fed into it, however, it develops a variety of skills. That much is expected: the creators train these models precisely so that they develop those skills.

But capabilities are appearing in AI language models beyond what their creators intended. These unexpected skills are called emergent abilities, a phenomenon known as ‘emergence’.

Emergence is a phenomenon seen in large systems: what the small parts of a system cannot do individually, they can do collectively.

There are many examples of emergence in nature. Through emergence, inanimate atoms join together to form living organisms, and countless water molecules come together to form waves. The coordinated, rhythmic movement of birds flying in flocks can also be explained through emergence.

Researchers are noticing that such emergence also appears in language models. The vast number of digital nodes in an AI language model come together to reveal new skills, skills researchers had not anticipated.

Exploiting this emergence, researchers are getting AI language models to solve problems the models were never explicitly taught to handle.

For example, let’s think of a string of emojis: 👧🐟🐠🐡

A study conducted this year gave these four emojis to three AI language models and asked each to guess the name of a movie. The three models gave three different answers.

The least complex language model, with the fewest nodes, failed to guess any movie name; instead, it gave a meaningless answer.

A somewhat more complex language model, with more nodes, answered “The Emoji Movie” (an animated film released in 2017). Of course, jumping from a string of emojis to an ‘emoji movie’ is not a great leap.

However, the movie most people would think of after seeing those four emojis is the 2003 animated film ‘Finding Nemo’.

The third and most complex AI language model guessed exactly this title, even though it had not been trained for this task.

An AI solving a problem without prior training for that specific task is called ‘zero-shot learning’.

Emergence and zero-shot learning were researched extensively throughout 2023, and researchers are now more interested in the topic than ever before.

Attempts are being made to improve AI through emergence and zero-shot learning. However, no one yet clearly understands why these unexpected abilities appear in complex AI language models.

According to the researchers, these unpredictable behaviors of AI carry both promise and risk. Researchers continue working to better understand emergence in language models, so as to mitigate future risks and exploit the potential. This effort will undoubtedly continue for years to come.
