May 5, 2024

tl;dr: check out LuaVST on ChatGPT if you want to generate some VST plugins.

I've been doing weekly beats this year and it has been a lot of eustressful fun (my best song so far is "smectite canyon gambit"). I found a nice positive feedback loop between composing electronic music and writing software for dopeloop.ai: composing helps me figure out which features are important on the software side.

screenshot.png

In recent weeks I've been tinkering with Protoplug, an open source tool that lets you write VST plugins in Lua. It turns out Lua is efficient enough to do DSP on modern CPUs. You can write the code interactively in the embedded editor, which makes for a smooth iterative workflow. I'm using Protoplug with OpenMPT as the host, running on Wine, and really enjoying it.
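To give a sense of what a Protoplug script looks like, here is a hedged sketch of a minimal sine generator in the spirit of the bundled examples. The processBlock signature and per-channel buffer indexing follow the Protoplug documentation as I understand it, so double-check against the example scripts that ship with it; the sample rate here is a hard-coded assumption, and the `plugin` stub on the first lines only exists so the sketch runs outside the host.

```lua
-- Sketch of a minimal Protoplug generator, modelled on the project's
-- sine example; exact API details may differ between versions.
-- In a real script the first line would be: require "include/protoplug"
plugin = plugin or {}      -- stub so the sketch runs outside the host

local phase = 0
local freq  = 440
local rate  = 44100        -- assumed sample rate for this sketch

-- Protoplug calls processBlock with the channel buffers and the
-- index of the last sample in the block.
function plugin.processBlock (samples, smax)
    for i = 0, smax do
        local s = 0.5 * math.sin(2 * math.pi * phase)
        samples[0][i] = s  -- left channel
        samples[1][i] = s  -- right channel
        phase = phase + freq / rate
        if phase >= 1 then phase = phase - 1 end
    end
end
```

The phase accumulator wraps at 1.0 rather than accumulating forever, which keeps floating-point precision stable over long playback.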

After tinkering for a bit I had the idea of feeding the Protoplug API and some examples to ChatGPT to see if it could generate plugins from a written description. If you want to try it yourself you can go here: LuaVST GPT. Note: it uses GPT-4 and I haven't tested it with GPT-3.5. You will need to install the Protoplug VST into your host and then copy the code from the chat session into the VST's built-in editor.

Results

So how good is it? I don't like AI hype. I'm going to try to be objective and honest.

  1. Good: it can generate plugin boilerplate really well. If you just want to get something up and running that is a bit more tailored than copy-pasting one of the examples, it works well. You can say something like "create me a plugin that pitches all incoming MIDI notes down by one octave" or "create me a plugin that generates a pure sine tone at 440Hz" and it will do a reasonable job that is usually bug-free.
  2. Okay: it can modify your existing code. If you can't be bothered looking through the API for how to implement something you might get a pretty good first pass out of it by pasting your code in and asking for a change. For more complex changes to the code it is probably going to create a lot of bugs. One thing that would significantly improve this would be automatically feeding any errors back to the GPT. At the moment you have to copy-paste errors and often you will figure out what is wrong faster than the AI will.
  3. Bad: ask it to do something complex like "simulate a full TB-303 with incoming MIDI and take into account the non-linearities as documented by Devilfish creator Robin Whittle in 1999" and it will do a very poor job. Even the first part of that ("simulate a TB-303") is too much to ask of it. I tried a few different prompt variants and it couldn't get there. I think this is where the AI hype falls down. At this point in time only a human practitioner with years of experience, a nuanced understanding, and the ability to iteratively listen to the output as they code is able to work their way towards a really good, bug-free implementation of a complex plugin.
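For reference, the octave-down prompt in point 1 maps to only a few lines of Lua. This sketch shows the general shape of such a plugin; the MIDI-event method names (eachEvent, isNoteOn, isNoteOff, getNote, setNote) are from my memory of Protoplug's bundled MIDI examples, so treat them as assumptions and check the real scripts.

```lua
-- Sketch: transpose all incoming MIDI notes down one octave.
-- In a real script the first line would be: require "include/protoplug"
plugin = plugin or {}      -- stub so the sketch runs outside the host

local SEMITONES = -12

-- Clamp so the transposed value stays a valid MIDI note number (0..127).
local function transpose (note, semis)
    local n = note + semis
    if n < 0 then n = 0 elseif n > 127 then n = 127 end
    return n
end

function plugin.processBlock (samples, smax, midiBuf)
    for ev in midiBuf:eachEvent() do
        if ev:isNoteOn() or ev:isNoteOff() then
            ev:setNote(transpose(ev:getNote(), SEMITONES))
        end
    end
end
```

Note-offs are transposed along with note-ons; if you shift only the note-ons, the matching note-offs never arrive at the shifted pitch and notes hang.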

An example of a session that went well: I used an online graphing calculator to come up with a distortion algorithm, then gave the equation to the GPT and asked it to write a plugin. I tweaked the code a little, but on the whole it was a good implementation and did what I wanted. A distortion algorithm is one of the simpler types of plugin to code from scratch, of course.
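The original equation isn't reproduced here, so as a stand-in, this is what a waveshaping plugin of that kind tends to boil down to: a pure transfer function applied per sample. The cubic soft clip below is a generic example of a curve you might sketch in a graphing calculator, not the one from my session.

```lua
-- Sketch of a waveshaping distortion plugin. The transfer curve is a
-- generic cubic soft clip, used here only as an illustration.
-- In a real script the first line would be: require "include/protoplug"
plugin = plugin or {}      -- stub so the sketch runs outside the host

-- y = x - x^3/3 inside [-1, 1], clamped to +/- 2/3 outside.
local function shape (x)
    if x > 1 then return 2/3
    elseif x < -1 then return -2/3
    else return x - x * x * x / 3 end
end

function plugin.processBlock (samples, smax)
    for i = 0, smax do
        samples[0][i] = shape(samples[0][i])
        samples[1][i] = shape(samples[1][i])
    end
end
```

Because the curve is odd-symmetric it produces mostly odd harmonics, which is the classic soft-clipping sound.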

In the end building this has saved me some time and typing. I am able to work with the output from the GPT and get fairly useful advice from it without having to keep the whole API in my own head. This feels like a microcosm of the larger usefulness of modern LLMs: productivity-boosting but not job-destroying.