gwn (2022-12-15 19:46:24) Permalink
Very high quality analysis & writing. I greatly enjoyed the article, and agree with all the points (which doesn't happen very often). Thanks

Jan (2023-05-10 05:28:35) Permalink
It still stands. I do think that there is a less ambitious use of AI that generates a lot of dumb code. My peek at industry suggests that it accounts for a significant percentage of the code out there.

One cool challenge for AI would be: can you migrate code from Python 2 to 3? (I.e. given a specification and the original Python 2 code, can you guess the changes needed to port it to Python 3?)


Laurence Tratt (2023-05-10 06:59:43) Permalink
@Jan Probably unsurprisingly, given how fast the field is advancing, there is already AI work on translating between languages, e.g. from my colleague at King's, Jie Zhang: "Leveraging Automated Unit Tests for Unsupervised Code Translation". It's a neat idea, but at the moment it's difficult to have confidence in the correctness of the output. Python 2->3 could be an interesting restriction of the problem where it's possible to have greater confidence. Still, I must admit that my personal bet is that we'll get more use out of current AI techniques' approximating nature in places where we can better tolerate approximations -- and I also suspect that there are probably more places where we can tolerate approximations than we realise.

Steve Phelps (2024-01-06 10:10:35) Permalink
Generative AI can work well for programming when there is a cheap, low-risk, automated and unbiased process for ranking the quality of solutions: eliminating those that fail to run, then ranking the remainder. For example, the metric could be expected time to solve a variety of mazes, which can be estimated via Monte Carlo simulation. The LLM then becomes the generate component within a generate-and-test architecture; we set an appropriate temperature and ask the LLM to produce a population of candidate solutions. Moreover, solutions can be iteratively refined. This gives us an analogue of genetic programming, but for the generate component we use a machine-learning interpolation of an existing space of related programs rather than the space of all possible programs. This is explored in Lehman et al. (2023).
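To make the generate-and-test architecture concrete, here is a minimal sketch in Python. Everything in it is illustrative: `llm_generate` is a hypothetical stand-in for sampling candidate programs from an LLM at some temperature (here it just returns a fixed population of candidate source strings, some deliberately broken), and the "cheap, automated, unbiased" ranking step is a simple fraction-of-test-cases-passed score rather than a maze-solving-time estimate.

```python
def llm_generate(prompt, n):
    """Hypothetical LLM sampler: returns n candidate solutions as source
    strings. A real system would call a model with a nonzero temperature."""
    pool = [
        "def solve(xs):\n    return sorted(xs)",
        "def solve(xs):\n    return xs.sort()",                # returns None
        "def solve(xs):\n    return sorted(xs, reverse=True)", # wrong order
        "def solve(xs):\n    return list(xs)[::-1]",           # wrong in general
    ]
    return (pool * (n // len(pool) + 1))[:n]

def compile_candidate(src):
    """Return the candidate's solve() function, or None if it fails to
    even load -- the 'eliminate those that fail to run' step."""
    ns = {}
    try:
        exec(src, ns)
        return ns["solve"]
    except Exception:
        return None

def score(fn, cases):
    """Cheap, automated, unbiased ranking: fraction of test cases solved.
    Runtime errors simply count as failures."""
    ok = 0
    for inp, want in cases:
        try:
            if fn(list(inp)) == want:
                ok += 1
        except Exception:
            pass
    return ok / len(cases)

def generate_and_test(prompt, cases, n=8):
    """One generation of the generate-and-test loop: sample a population,
    discard non-running candidates, rank the rest, return the best."""
    candidates = llm_generate(prompt, n)
    runnable = [(src, fn) for src in candidates
                if (fn := compile_candidate(src)) is not None]
    ranked = sorted(runnable, key=lambda p: score(p[1], cases), reverse=True)
    return ranked[0] if ranked else None

cases = [([3, 1, 2], [1, 2, 3]), ([5, 4], [4, 5])]
best_src, best_fn = generate_and_test("sort a list ascending", cases)
```

Iterative refinement, as in Lehman et al.'s evolution-through-large-models setup, would feed the surviving candidates back into the prompt as parents for the next generation.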

Lehman, J., Gordon, J., Jain, S., Ndousse, K., Yeh, C., & Stanley, K. O. (2023). Evolution through large models. In Handbook of Evolutionary Machine Learning (pp. 331-366). Singapore: Springer Nature Singapore.