Much has been written about programming as a fairly inaccessible medium; Bret Victor's framing of programming as "blindly manipulating symbols" is difficult to refute. Even high-level languages require the programmer to think through at least two layers of translation: first from an idea into programming concepts, and then from those concepts into the language's syntax.
The release of GPT-3 sparked exciting exploration around programming as a medium. While some demos showcased bypassing code completely, I found myself drawn to a more practical application: might we mitigate the second layer of translation by letting people type programming concepts in plain English that are then "compiled" into code? Tangibly, this would mean less Googling for syntax we simply can't recall from memory, for example.
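To make the "compiling" idea concrete, here is a minimal sketch of how an inquiry might be framed for GPT-3 as a few-shot prompt. The function name and example pairs are my own illustration, not the actual prompt used in the prototype:

```javascript
// Hedged sketch: "compile" a plain-English inquiry by wrapping it in a
// few-shot prompt, whose completion (from GPT-3) would be the code.
function buildPrompt(inquiry) {
  return [
    "// Translate English into JavaScript.",
    "// English: create an empty array",
    "// JavaScript: const items = [];",
    `// English: ${inquiry}`,
    "// JavaScript:",
  ].join("\n");
}

// The resulting string would be sent to the GPT-3 completions endpoint.
console.log(buildPrompt("sort an array of numbers ascending"));
```

The model then continues the pattern, emitting the JavaScript for the final "English:" line.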
I prototyped this idea by hacking together my own text editor. The interface is an Electron app powered by CodeMirror, and the magic is courtesy of GPT-3 with assistance from a lexical analyzer.
More on the latter: you'll notice the user is able to ask about "houses", and GPT-3 correctly responds with array methods. This is not GPT-3 itself recognizing that "houses" is an array (inferring that on the fly would add too much latency); instead, a JavaScript lexical analyzer acts as a pre-processing step. It determines the type of each "keyword" in the file (e.g. array), then replaces the keyword with its type before passing the inquiry to GPT-3.
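The pre-processing step can be sketched in a few lines. This is my own simplified illustration, assuming a regex-based "lexer" over top-level declarations (the prototype's actual analyzer is presumably more thorough); the function names are hypothetical:

```javascript
// Infer a rough type for each top-level declaration in a JS source file,
// keyed by identifier name. A real lexical analyzer would tokenize
// properly; a regex over declarations is enough to show the idea.
function inferTypes(source) {
  const types = {};
  const decl = /(?:const|let|var)\s+(\w+)\s*=\s*(\[|\{|["'`]|\d)/g;
  let m;
  while ((m = decl.exec(source)) !== null) {
    const [, name, first] = m;
    types[name] =
      first === "[" ? "array" :
      first === "{" ? "object" :
      first === '"' || first === "'" || first === "`" ? "string" :
      "number";
  }
  return types;
}

// Replace each known identifier in the inquiry with its inferred type,
// so the prompt sent to GPT-3 asks about "array" rather than "houses".
function preprocessInquiry(inquiry, types) {
  return inquiry.replace(/\w+/g, (word) => types[word] ?? word);
}

const file = `const houses = ["brick", "straw", "sticks"];`;
console.log(preprocessInquiry("how do I sort houses?", inferTypes(file)));
// → "how do I sort array?"
```

The key design point is that the type lookup happens locally and instantly, so only the already-generalized inquiry incurs a round trip to GPT-3.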
For now this is just a prototype on my machine, but I suspect it's only a matter of time before something analogous appears in production text editors, e.g. as an extension of VSCode's IntelliSense.
If you're interested in AI's impact on programming as a medium, I recommend checking out Andrej Karpathy's "Software 2.0".