2023-11-04
I've spent the last few days building PromptBox, a utility that lets you maintain libraries of LLM prompt templates, which can be filled in and submitted from the command line. The templates are just TOML files like this:
# File: summarize.pb.toml
description = "Summarize some files"

# This can also be template_path to read from another file.
template = '''
Create a {{style}} summary of the below files
which are on the topic of {{topic}}. The summary should be about {{ len }} sentences long.

{% for f in file -%}
File {{ f.filename }}:
{{ f.contents }}

{%- endfor %}
'''

[model]
# These model options can also be defined in a config file to apply to the whole directory of templates.
model = "gpt-3.5-turbo"
temperature = 0.7
# Also supports top_p, frequency_penalty, presence_penalty, stop, and max_tokens

[options]
len = { type = "int", description = "The length of the summary", default = 4 }
topic = { type = "string", description = "The topic of the summary" }
style = { type = "string", default = "concise" }
file = { type = "file", array = true, description = "The files to summarize" }
Each of these options becomes a CLI argument that fills in the corresponding template variable.
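With the template above, an invocation might look something like this; the flag spellings are inferred from the option definitions rather than taken from the docs, so check promptbox --help for the real interface. Options with defaults, like style, can be omitted:

promptbox run summarize --topic "Rust async" --len 3 --file ch1.md --file ch2.md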
It works with OpenAI for the usual case, but you can also run it against LM Studio or Ollama if you like local LLMs. If you give it a try, let me know what you think!