Prompt versioning with LLMs
Published under the AI category.
The templates used to generate prompts for my GPT 3.5-powered chatbot are versioned in a custom-made system. This requirement came to mind after I wrote the initial logic to query sources and return a result that references them. I decided that all prompts should be saved separately from my application code so that I wouldn't overwrite them during testing and lose the history of the prompts I had been working with.
My chatbot has a custom Python script that takes in a list of prompts -- a System prompt, and an arbitrary number of Assistant and User prompts that you can give to the GPT 3.5 API as instructions -- and saves them to a JSON file alongside an associated UUID and the time at which the prompt was generated. The most recent prompt added to the JSON file is used in the web application that powers the chatbot, unless otherwise specified. Every time a question is answered by the bot, the UUID of the prompt is saved in a database alongside the question and answer.
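The post does not include the script itself, but the described behaviour can be sketched roughly as follows. The file name `prompts.json`, the function name, and the message contents are all my assumptions, not the author's code; the message format matches what the GPT 3.5 chat API accepts.

```python
import json
import uuid
from datetime import datetime, timezone
from pathlib import Path

PROMPT_STORE = Path("prompts.json")  # hypothetical store file name

def save_prompt_version(messages: list[dict]) -> str:
    """Append a new prompt version (a System prompt plus any number of
    Assistant and User prompts) to the JSON store and return its UUID."""
    store = json.loads(PROMPT_STORE.read_text()) if PROMPT_STORE.exists() else []
    version = {
        "id": str(uuid.uuid4()),
        "created_at": datetime.now(timezone.utc).isoformat(),
        "messages": messages,
    }
    store.append(version)
    PROMPT_STORE.write_text(json.dumps(store, indent=2))
    return version["id"]

# Example prompt list: a System instruction and a User template.
prompt_id = save_prompt_version([
    {"role": "system", "content": "Answer using only the provided sources."},
    {"role": "user", "content": "{question}"},
])
```

Appending to the store rather than overwriting is what preserves the full history; the returned UUID is what gets logged next to each question and answer.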
With this system, I am able to:
- Maintain a history of prompts, allowing me to refer back to previous versions.
- View what prompts were used to generate a response through the application, giving me more information to use in debugging. Although I have not used this feature yet, I expect there may be a time when being able to tie a query I have made to an exact prompt will be useful.
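The debugging lookup described above amounts to a simple join between the answer log and the prompt store. This is a minimal sketch using an in-memory SQLite table; the schema, column names, and sample values are all assumptions for illustration.

```python
import sqlite3

# Hypothetical schema: every answered question is logged alongside the
# UUID of the prompt version that produced it.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE answers (question TEXT, answer TEXT, prompt_uuid TEXT)"
)
conn.execute(
    "INSERT INTO answers VALUES (?, ?, ?)",
    (
        "What is a webmention?",
        "A webmention is a way to notify a site you linked to it.",
        "123e4567-e89b-12d3-a456-426614174000",
    ),
)

# Debugging: recover the exact prompt version behind a given answer,
# then look that UUID up in the JSON prompt store.
row = conn.execute(
    "SELECT prompt_uuid FROM answers WHERE question = ?",
    ("What is a webmention?",),
).fetchone()
```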
I expect many projects that use LLMs will implement some kind of versioning system for the aforementioned reasons, particularly those where prompts power key application features.
When I want to upgrade the core prompt that powers my application, I edit the prompt in a JSON object inside a Python file, then run the script. This appends the new prompt to the JSON store alongside all of the previous prompts. To use the new prompt, I need to restart the web application.
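Because new prompts are appended in order, "use the most recent prompt" reduces to reading the last entry at startup, which is also why a restart is needed to pick up changes. A sketch, again assuming a `prompts.json` store with the shape used above:

```python
import json
from pathlib import Path

STORE = Path("prompts.json")  # hypothetical store file

def load_latest_prompt(store_path: Path = STORE) -> dict:
    """Return the most recently appended prompt version. The web app
    calls this once at startup, so a restart picks up new prompts."""
    versions = json.loads(store_path.read_text())
    return versions[-1]  # entries are appended chronologically

# Example store with two versions; the loader returns the newer one.
STORE.write_text(json.dumps([
    {"id": "v1", "messages": []},
    {"id": "v2", "messages": []},
]))
latest = load_latest_prompt()
```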
Every time I rebuild the index for my project, I generate a new dataset version folder. I can generate multiple dataset files inside each version. For example, I have a dataset that stores my wiki but not my blog; I have another dataset that stores both. Prompt versions are explicitly tied to both a dataset version and a dataset file. This gives me the ability to replicate previous states of the LLM app should I require it.
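Tying a prompt version to a dataset version and file can be as simple as recording both identifiers in the prompt entry. The record below is an illustration; the folder name, file name, and UUID are invented, not taken from the author's repository.

```python
import json

# Hypothetical prompt-version record linking the prompt to the dataset
# version folder and dataset file it was used with. Replaying a past
# state of the app means loading this dataset version and file, then
# querying with the recorded messages.
record = {
    "prompt_uuid": "123e4567-e89b-12d3-a456-426614174000",
    "dataset_version": "2023-04-01",       # version folder from an index rebuild
    "dataset_file": "wiki_and_blog.json",  # one of the files in that version
    "messages": [
        {"role": "system", "content": "Answer using only the provided sources."}
    ],
}

serialized = json.dumps(record, indent=2)
```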
The code for my prompt versioning system is open source as part of my LLM Chatbot GitHub repository, where I am presently experimenting with using GPT 3.5 to answer questions from my blog.
Respond to this post by sending a Webmention.
Have a comment? Email me at readers@jamesg.blog.