The word “algorithm” is widely used in the press to denote the opaque functioning of search engines and social networks.

But what exactly are we talking about? What is an algorithm? The concept has evolved throughout history, from Euclid to the algorithms of GAFAM. Can algorithms solve any problem? What guarantees do we have about their behavior? What are their social effects?

## Born in Persia in the 9th century

Etymology traces the history of the algorithm back to the Persian scholar Muhammad ibn Mūsā al-Khwārizmī, who published the first manuals for solving equations around 820 AD. These methods, at the root of algebra, typically address practical computational problems: questions of inheritance or of land measurement.

His works were translated into Latin in the 12th century and popularized by figures such as the Italian mathematician Leonardo Fibonacci. The term “algorithm” comes from the Latinized form of his name, “Algoritmi” or “Algorismi”. More than a thousand years earlier, Euclid had already described a method for computing the greatest common divisor of two numbers in his *Elements*.
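Euclid's procedure translates directly into a few lines of modern code. Here is a minimal sketch in Python (the function name is ours, purely for illustration):

```python
def gcd(a, b):
    """Euclid's algorithm: repeatedly replace the pair (a, b)
    by (b, a mod b) until the remainder reaches zero."""
    while b != 0:
        a, b = b, a % b
    return a

print(gcd(252, 105))  # → 21
```

Each step strictly shrinks the second number, so the procedure is guaranteed to finish in a finite number of steps, which is exactly what makes it an algorithm.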

## In the 20th century, the algorithm gives rise to a branch of mathematics

However, it wasn't until the beginning of the 20th century that the concept of the algorithm was formalized. In his famous address at the second International Congress of Mathematicians in Paris in 1900, the German mathematician David Hilbert proposed 23 open problems, 23 challenges for the mathematical community, whose resolution would have a significant impact on the development of mathematics over the following decades. The tenth problem concerns the existence of “a method by which, by means of a finite number of operations, the existence of an integer solution of a polynomial equation with integer coefficients can be determined” (so-called “Diophantine” equations). What is actually at stake is the existence of an algorithm.

Thanks to the founding work of Alan Turing and Alonzo Church, among others, algorithms become mathematical objects in their own right. In his 1936 article, Turing gives a definition of the “computability” of a function: there must exist a machine that produces its value in a finite number of elementary steps, guided by a system of transitions and by the contents of a tape that plays the role of memory. This is the famous “Turing machine”.

Turing also establishes the connection between the computability of a function and the provability of a mathematical statement within a system of axioms. Theoretical computer science becomes a branch of mathematics.

The power of algorithms turns out to be limited, and some problems prove to be undecidable: no algorithm can solve them. In 1970, Julia Robinson and Yuri Matiyasevich finally settled Hilbert's tenth problem: the solvability of Diophantine equations is an undecidable problem!

In the 1970s, hierarchies of problems were established according to the time and the memory space an algorithm needs to solve them: this is the birth of complexity theory.

## What does an algorithm look like?

Algorithms are often compared to recipes: a set of precise instructions that yield results in a finite number of steps.

The image is apt, but it obscures a fundamental aspect: an algorithm takes data as input (numbers, text, relations), and some of its instructions are conditional. The steps actually executed depend on the data, and the resulting sequence of operations can be hard to predict. The instructions can be given in various well-defined forms (flowcharts, specification languages) or even, with the necessary precautions, in natural language.

We all learned in elementary school an algorithm for multiplying two numbers, without the aid of any advanced formalism. Algorithms are in principle intended to be implemented as programs, in a programming language understandable by a computer, but an algorithm exists independently of this translation.
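That schoolbook procedure, multiplying digit by digit and propagating carries, can be written out explicitly. The sketch below (in Python, with hypothetical names) mirrors the pencil-and-paper method:

```python
def schoolbook_multiply(x, y):
    """Multiply two non-negative integers digit by digit with carries,
    as taught in elementary school."""
    xs = [int(d) for d in str(x)][::-1]   # least significant digit first
    ys = [int(d) for d in str(y)][::-1]
    result = [0] * (len(xs) + len(ys))
    for i, dx in enumerate(xs):
        carry = 0
        for j, dy in enumerate(ys):
            total = result[i + j] + dx * dy + carry
            result[i + j] = total % 10    # keep one digit
            carry = total // 10           # propagate the rest
        result[i + len(ys)] += carry
    # reassemble the digit list into an integer
    return int("".join(map(str, result[::-1])))

print(schoolbook_multiply(1234, 5678))  # → 7006652
```

The same method works whether it is carried out with a pencil or by a processor, which is precisely the sense in which the algorithm exists independently of any particular implementation.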

To understand the place algorithms occupy in our modern lives, and to grasp which of their results and behaviors we can actually guarantee, it helps to distinguish several families of algorithms.

First, we can distinguish a family of algorithms that have become almost invisible in our daily lives. These are exact algorithms for perfectly defined tasks whose results are easy to verify: multiplying two numbers, sorting a list of names in alphabetical order, storing and retrieving information efficiently, converting an analog signal into a digital one, interpreting a program.
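Sorting illustrates how easily the results of this family can be checked: verifying that a list is in order takes a single pass, regardless of how the sorting was performed. A minimal Python illustration:

```python
names = ["Turing", "Church", "Hilbert", "Robinson", "Matiyasevich"]

# Sorting is a perfectly specified task...
sorted_names = sorted(names)
print(sorted_names)

# ...and its result is trivial to verify:
# each element must not exceed the next one.
assert all(a <= b for a, b in zip(sorted_names, sorted_names[1:]))
```

This easy verifiability is what distinguishes this first family from the learning-based algorithms discussed further on.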

These basic algorithms have been studied since the beginnings of computer science. Yet they remain the subject of active research, and many mysteries persist around the complexity of some elementary operations. For example, the exact complexity of multiplying two integers is still unclear from a theoretical point of view: we currently cannot prove that multiplication necessarily takes longer than addition! The best known multiplication algorithm was published only very recently.

Optimization algorithms constitute a second important family. They solve problems in which we seek parameters or a configuration that maximizes or minimizes a quantity called the “objective function”. Concrete applications include finding the shortest path between two points, scheduling the phases of a project so as to minimize its duration, selecting antenna locations to cover a given area at the lowest cost, or tuning the parameters of a network's routers to minimize latency.
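For the shortest-path example, a classic approach is Dijkstra's algorithm, which explores nodes in order of increasing distance from the source. The sketch below (Python, with a small made-up graph) shows the idea:

```python
import heapq

def shortest_path_length(graph, source, target):
    """Dijkstra's algorithm: repeatedly visit the unexplored node
    closest to the source, using a priority queue."""
    distances = {source: 0}
    queue = [(0, source)]
    while queue:
        dist, node = heapq.heappop(queue)
        if node == target:
            return dist
        if dist > distances.get(node, float("inf")):
            continue  # stale queue entry, already found a shorter route
        for neighbour, weight in graph.get(node, []):
            new_dist = dist + weight
            if new_dist < distances.get(neighbour, float("inf")):
                distances[neighbour] = new_dist
                heapq.heappush(queue, (new_dist, neighbour))
    return float("inf")  # target unreachable

# A small weighted graph: edges with their lengths (hypothetical data).
graph = {
    "A": [("B", 1), ("C", 4)],
    "B": [("C", 2), ("D", 5)],
    "C": [("D", 1)],
}
print(shortest_path_length(graph, "A", "D"))  # → 4
```

Here the objective function is explicit, the total length of the path, so the quality of any proposed solution can be measured exactly.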

The goals of the algorithms in these two families are measurable, and their results come with mathematical guarantees. Formal methods make it possible to rigorously verify the properties of an algorithm. Linear optimization algorithms, for instance, are well understood.

A third, more specialized family consists of cryptographic algorithms, which aim to guarantee the security of communications and transactions. This security often rests on assumptions about the complexity of algorithmic problems. The famous RSA algorithm (named after its inventors Ronald Rivest, Adi Shamir and Leonard Adleman), for example, bases the security of electronic commerce on the assumption that there is no efficient algorithm for decomposing a number into its prime factors.
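The principle can be illustrated with a toy version of RSA using deliberately tiny primes (the textbook values below are for illustration only; real keys use numbers hundreds of digits long):

```python
# Toy RSA with tiny primes -- insecure, illustration only.
p, q = 61, 53
n = p * q                # public modulus: 3233
phi = (p - 1) * (q - 1)  # 3120
e = 17                   # public exponent, coprime with phi
d = pow(e, -1, phi)      # private exponent: modular inverse of e

message = 65
cipher = pow(message, e, n)  # encryption: m^e mod n
plain = pow(cipher, d, n)    # decryption: c^d mod n
assert plain == message

# The security assumption: recovering d requires factoring n.
# Here 3233 = 61 x 53 is trivial to factor; for a 2048-bit n,
# no efficient factoring algorithm is known.
```

With primes this small, anyone can break the scheme by trial division; the entire security argument rests on the factoring assumption mentioned above.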

Some procedures stemming from artificial intelligence research do not easily yield to rigorous analysis.

## Algorithms change nature with artificial intelligence

Among them, classification algorithms try to assign input data to a category corresponding to an external reality. For example, an animal recognition algorithm takes an image as input, in the form of an array of pixels, and must determine whether the image shows a cat or a dolphin. The task is not formally well defined: an image may be ambiguous, and people themselves may disagree on the answer. The relevance of these algorithms depends on an unformalized external reality, and their accuracy or precision can only be assessed empirically.
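To see why such classifiers can only be judged empirically, consider a deliberately simple nearest-neighbour classifier (Python, with made-up data; this is not the method any particular system uses):

```python
def nearest_neighbour_classify(sample, labelled_data):
    """Assign to a sample the label of its closest known example.
    There is no formal specification of the 'right' answer:
    quality can only be measured against held-out labelled data."""
    def squared_distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    best = min(labelled_data, key=lambda item: squared_distance(item[0], sample))
    return best[1]

# Hypothetical two-feature data points with labels.
data = [((1.0, 1.0), "cat"), ((1.2, 0.8), "cat"), ((8.0, 9.0), "dolphin")]
print(nearest_neighbour_classify((1.1, 0.9), data))  # → cat
```

Nothing in the code defines what a “cat” is; the answer is only as good as the labelled examples, which is exactly the dependence on an unformalized external reality described above.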

Likewise, prediction algorithms attempt to anticipate the evolution of quantities measured in the physical world, or of behavior in a population. They are used, for example, in finance to predict the evolution of markets, or in marketing to show visitors to a website the products or advertisements most likely to catch their attention. Here again, the relevance of the results is verified experimentally, not mathematically. So much so that in 2006, Netflix launched a competition with a million-dollar prize to improve the performance of its movie-rating prediction algorithm.

The development of these algorithms makes extensive use of probabilistic models, as well as of structures that are difficult to analyze rigorously. This is especially true of the neural networks used in what is now called “deep learning”, a name that refers to the number of layers in these networks. The networks implicitly encode a memory of the data supplied during a learning phase, which then makes it possible to classify new input data.

## What can we demand from algorithms?

The ubiquity of algorithms legitimately raises fears. What guarantees can we demand? The first type of guarantee concerns efficiency: how long must we wait for an answer? How much memory is required? These questions are well studied and have rigorous, though partial, formulations and answers. Gaps in our understanding of algorithmic complexity, in particular, leave open the possibility of unprecedented attacks that would compromise cryptography based on the RSA algorithm.

The traditional question of performance is closely linked to questions of resource consumption, with ecological implications. We can give this question a broader framing and ask about the resources consumed by software and servers. In the field of cryptographic algorithms, certain mechanisms at the heart of cryptocurrencies, in particular the “proof of work” principle, have a dramatic energy cost.
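The “proof of work” principle can be sketched in a few lines: search for a value whose hash starts with a required number of zeros, a brute-force search whose cost grows exponentially with that requirement (Python sketch, hypothetical names):

```python
import hashlib

def proof_of_work(data, difficulty):
    """Find a nonce such that sha256(data + nonce) starts with
    `difficulty` zero hex digits. Each extra digit multiplies the
    expected number of hash evaluations by 16 -- hence the energy cost."""
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{data}{nonce}".encode()).hexdigest()
        if digest.startswith("0" * difficulty):
            return nonce, digest
        nonce += 1

nonce, digest = proof_of_work("block-42", 4)  # "block-42" is made-up content
print(nonce, digest)
```

Checking a proposed nonce takes a single hash evaluation, while finding one takes many; it is this deliberate asymmetry, repeated across an entire network of miners, that translates into the energy impact mentioned above.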

When the objectives are easily verifiable, as for a sorting algorithm, or explicitly quantified, as for an optimization algorithm, there is an objective measure of the quality of the solution. But in the case of optimization algorithms, the choice of the objective function, that is, of the quantity being optimized, can have significant human consequences.
