begin quote from:
https://www.science.org/doi/10.1126/science.abq1158
Competition-level code generation with AlphaCode
Machine learning systems can program too
Computer programming competitions are popular tests among programmers: they require critical thinking informed by experience and the creation of solutions to unforeseen problems, both key aspects of human intelligence that remain challenging for machine learning models to mimic. Using self-supervised learning and an encoder-decoder transformer architecture, Li et al. developed AlphaCode, a deep-learning model that achieves approximately human-level performance on the Codeforces platform, which regularly hosts these competitions and attracts numerous participants worldwide (see the Perspective by Kolter). The development of such code-generation systems could have a large impact on programmers' productivity. It may even change the culture of programming by shifting human work toward formulating problems, with machine learning responsible for generating and executing the code. —YS
Abstract
Programming
is a powerful and ubiquitous problem-solving tool. Systems that can
assist programmers or even generate programs themselves could make
programming more productive and accessible. Recent transformer-based
neural network models show impressive code generation abilities yet
still perform poorly on more complex tasks requiring problem-solving
skills, such as competitive programming problems. Here, we introduce
AlphaCode, a system for code generation that achieved an average ranking
in the top 54.3% in simulated evaluations on recent programming
competitions on the Codeforces platform. AlphaCode solves problems by
generating millions of diverse programs using specially trained
transformer-based networks and then filtering and clustering those
programs down to at most 10 submissions. This result marks the
first time an artificial intelligence system has performed competitively
in programming competitions.
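The generate, filter, and cluster loop described in the abstract can be sketched as follows. This is a rough outline under stated assumptions, not AlphaCode's code: model.sample and run are hypothetical helpers (sample one candidate program; execute a program on an input and return its output), and the ranking heuristic shown here, submitting one program from each of the largest behavioral clusters, is a simplification of the paper's procedure.

from collections import defaultdict

def pick_submissions(model, problem, example_tests, probe_inputs,
                     num_samples=1_000_000, max_submissions=10):
    # 1. Sample a large, diverse pool of candidate programs from the model.
    candidates = (model.sample(problem) for _ in range(num_samples))

    # 2. Filter: keep only programs that pass the problem's example tests.
    passing = [prog for prog in candidates
               if all(run(prog, inp) == out for inp, out in example_tests)]

    # 3. Cluster the surviving programs by their behavior on extra probe
    #    inputs, so semantically equivalent programs fall into one cluster.
    clusters = defaultdict(list)
    for prog in passing:
        signature = tuple(run(prog, inp) for inp in probe_inputs)
        clusters[signature].append(prog)

    # 4. Submit one representative from each of the largest clusters,
    #    up to the contest limit of 10 submissions.
    ranked = sorted(clusters.values(), key=len, reverse=True)
    return [cluster[0] for cluster in ranked[:max_submissions]]

Clustering by behavior rather than by source text means the 10 submissions cover distinct candidate solutions instead of near-duplicates of the same program.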