
Bulletin of the American Mathematical Society

The Bulletin publishes expository articles on contemporary mathematical research, written in a way that gives insight to mathematicians who may not be experts in the particular topic. The Bulletin also publishes reviews of selected books in mathematics and short articles in the Mathematical Perspectives section, both by invitation only.

ISSN 1088-9485 (online) ISSN 0273-0979 (print)


Working with machines in mathematics

by Alex Davies
Bull. Amer. Math. Soc. 61 (2024), 387-394
DOI: https://doi.org/10.1090/bull/1843
Published electronically: May 15, 2024

Abstract:

Machine learning is making significant contributions to many fields, but how can it be used as a tool for mathematicians? This article explores the emerging role of machine learning in mathematical research, highlighting how its perceptual capabilities can augment human intuition and lead to new discoveries.
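The abstract alludes to a workflow that has appeared in recent work of this kind: train a model to predict one attribute of a mathematical object from others, and treat clearly better-than-chance accuracy as evidence of a relationship worth formalizing by hand. The following is a purely illustrative sketch of that idea, not code from the article: the data are synthetic stand-ins rather than real invariants, and an ordinary least-squares fit stands in for the neural networks typically used.

```python
# Illustrative sketch only (not from the article): detect a possible
# relationship by checking whether one quantity is predictable from others.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: x holds easily computed attributes of each object,
# y is the quantity we suspect might depend on them. Both are synthetic here.
n = 2000
x = rng.normal(size=(n, 5))
y = 2.0 * x[:, 0] - 0.5 * x[:, 3] + 0.1 * rng.normal(size=n)  # hidden relationship

# Hold out part of the data to test generalization.
x_train, x_test = x[:1500], x[1500:]
y_train, y_test = y[:1500], y[1500:]

# Fit a simple linear model by least squares (a stand-in for a neural network).
X = np.column_stack([x_train, np.ones(len(x_train))])
coef, *_ = np.linalg.lstsq(X, y_train, rcond=None)

# Held-out R^2 well above 0 suggests a genuine relationship to investigate.
pred = np.column_stack([x_test, np.ones(len(x_test))]) @ coef
ss_res = np.sum((y_test - pred) ** 2)
ss_tot = np.sum((y_test - y_test.mean()) ** 2)
print("held-out R^2:", round(1 - ss_res / ss_tot, 3))

# The fitted weights act as a crude attribution signal: large coefficients
# point at which attributes drive the prediction, suggesting where a
# mathematician might look for a precise statement.
print("coefficients:", np.round(coef[:-1], 2))
```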
Bibliographic Information
  • Alex Davies
  • Affiliation: Google DeepMind, London, United Kingdom
  • MR Author ID: 1537022
  • Email: adavies@google.com
  • Received by editor(s): April 30, 2024
  • Published electronically: May 15, 2024
  • © Copyright 2024 American Mathematical Society
  • Journal: Bull. Amer. Math. Soc. 61 (2024), 387-394
  • MSC (2020): Primary 68-XX
  • DOI: https://doi.org/10.1090/bull/1843
  • MathSciNet review: 4751007