Math Problem Solving with Artificial Intelligence

AI and Mathematics

Hogben and Reinhart of Iowa State University were pleasantly surprised in March. Adam Wagner, a postdoctoral researcher at Tel Aviv University, emailed them a solution to a question they had posted only a week earlier, and he arrived at it without the usual mathematical reasoning or solitary effort. What he used was a game-playing machine.

Hogben said, “I was very happy to get an answer to my question, and even happier that Wagner did it with artificial intelligence.”

While artificial intelligence had contributed to mathematics before, Wagner's technique was new.

He turned the search for an answer to Hogben and Reinhart's question into a game, and used a strategy that other researchers have successfully applied to popular strategy games such as chess.

“I've seen all these articles about companies like DeepMind, for example, that produce systems that can play chess, Go, and Atari games at truly superhuman levels,” Wagner says.

“I thought, how amazing would it be if you could find a way to use these self-learning algorithms in mathematics as well?”

Using a similar technique, Wagner set out to generate counterexamples: examples that contradict a mathematical conjecture and thereby prove it false.

He recast the search for counterexamples as a guessing game, then put his algorithm to the test on dozens of open problems in mathematics.

"I believe this is really nice work," said University of Sydney professor Geordie Williamson, who has previously blended machine learning and math work.

Machine learning programs "teach" computers certain abilities. Both Wagner and DeepMind use reinforcement learning, which takes a hands-on approach to training by letting the computer perform a task (such as playing a game) over and over. The programmer intervenes only to score the computer's performance. As a result, the computer adjusts its strategy as it discovers which approaches earn higher scores.

Reinforcement learning has proven to be an effective method for developing models for complex strategic games. Wagner's plan to apply this to research mathematics was surprisingly simple.

To see how reinforcement learning can be used to find counterexamples, consider the following situation. Suppose a conjecture claims that the expression 2x - x² is negative for every real number x.

This conjecture is false, and you can prove it wrong by producing a value of x for which the expression is positive (a counterexample). In fact, any x strictly between 0 and 2 will do: on that interval 2x - x² is positive, peaking at x = 1.
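
One quick way to check this is to complete the square: 2x - x² = 1 - (x - 1)², which is positive exactly when x lies strictly between 0 and 2 and reaches its maximum value of 1 at x = 1.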

Wagner could use reinforcement learning to do this by setting his model loose on a game in which it guesses a real number x. After each game, the model is awarded a score equal to 2x - x². At first the model guesses wildly, because it does not yet know which numbers earn the higher scores.

After enough rounds of play, however, a pattern emerges: the closer x is to 1, the higher the score.

Once the model learns to follow that pattern and guesses values between 0 and 2, it has inevitably stumbled onto a counterexample.
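
To make the play-and-score loop concrete, here is a toy Python sketch of such a counterexample hunt. It is not Wagner's code; the simple sample-and-refine update below is only a stand-in for the learning algorithm he actually used, and all names in it are illustrative.

import random

# Toy counterexample search for the conjecture "2x - x^2 is negative for all x".
# Stand-in for the learning step: sample guesses from a distribution, keep the
# best scorers, and shift the distribution toward them.

def score(x):
    # The reward for a guess is simply the value of 2x - x^2; the conjecture
    # is refuted as soon as any guess scores above zero.
    return 2 * x - x ** 2

mean, spread = random.uniform(-10.0, 10.0), 5.0   # start with wild guesses
for generation in range(50):
    guesses = [random.gauss(mean, spread) for _ in range(100)]
    elite = sorted(guesses, key=score, reverse=True)[:10]   # top 10% of guesses
    mean = sum(elite) / len(elite)     # move the search toward higher scores
    spread = max(0.9 * spread, 1e-3)   # gradually narrow the search

best = max(elite, key=score)
if score(best) > 0:
    print(f"Counterexample: x = {best:.3f} gives 2x - x^2 = {score(best):.3f} > 0")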

Wagner applied the same basic technique to dozens of problems, changing only the scoring and the moves the computer was allowed to make. The problems all came from discrete mathematics, which deals with separate, distinct objects, such as integers, rather than the continuum of the number line.

“All of these games are just a finite sequence of finite decisions,” Wagner explained. (Allowing an infinite number of steps in games would have added further complications.)

One of those problems was a question posed by Brualdi and Cao about a particular set of matrices. The "312 matrix" is the 3 x 3 matrix that rearranges the entries of a three-dimensional vector, sending (a, b, c) to (c, a, b). A 0-1 matrix is said to avoid the 312 pattern if there is no way to delete some of its rows and columns and end up with the 312 matrix.

Brualdi and Cao were particularly interested in a number called the "permanent" of the matrix, a quantity computed by multiplying and adding the matrix's entries, much like a determinant but without the alternating signs. They sought to determine which of the matrices avoiding the 312 pattern has the largest permanent, and how large that permanent can be, and they made predictions for square matrices of every size.
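
For concreteness, here is a small Python sketch of the two objects in play, following the definitions as paraphrased above. The function names are mine, and the exact pattern-avoidance convention Brualdi and Cao use may differ in detail.

from itertools import combinations, permutations
from math import prod

# The 3 x 3 "312 matrix": the permutation matrix that sends (a, b, c) to (c, a, b).
P312 = [[0, 0, 1],
        [1, 0, 0],
        [0, 1, 0]]

def permanent(m):
    # Like the determinant, but every term in the sum is added; no sign changes.
    n = len(m)
    return sum(prod(m[i][p[i]] for i in range(n)) for p in permutations(range(n)))

def avoids_312(m):
    # Per the description above: a 0-1 matrix avoids the 312 pattern if deleting
    # rows and columns can never leave behind the 312 matrix itself.
    n = len(m)
    for rows in combinations(range(n), 3):
        for cols in combinations(range(n), 3):
            if [[m[r][c] for c in cols] for r in rows] == P312:
                return False
    return True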

To answer their question, Wagner devised a game for his model: build a 0-1 matrix, choosing 0 or 1 for each entry. The model's score was the size of the resulting permanent, with points deducted if the matrix failed to avoid the 312 pattern. Once the matrices were 4 x 4 or larger, the model discovered examples that exceeded Brualdi and Cao's predictions.
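
Continuing the sketch above, here is one plausible way a finished matrix could be scored in such a game. The specific penalty scheme is an assumption for illustration, not a detail reported in the article.

def game_score(matrix, penalty=1_000):
    # Reward the permanent of the finished 0-1 matrix, but punish any matrix
    # that fails to avoid the 312 pattern so the model learns to stay legal.
    # (The exact scoring Wagner used is not specified here.)
    if not avoids_312(matrix):
        return permanent(matrix) - penalty
    return permanent(matrix)

# Example: the 4 x 4 identity matrix avoids the 312 pattern and has permanent 1.
identity = [[1 if i == j else 0 for j in range(4)] for i in range(4)]
print(game_score(identity))   # -> 1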

The new research is a fascinating proof of concept, though its contributions to mathematics have been modest so far.

"None of the conjectures [resolved by the model] were super important," Wagner said.

Even in Brualdi and Cao's example, the model needed some help once the matrices grew large.

Mathematicians are not about to hand their field over to machines. In the meantime, those who want to benefit from artificial intelligence will be watching for ways to incorporate it into their research.

Source: Quanta Magazine
