Journals for 2023-11-04

I've spent the last few days building PromptBox, a utility for maintaining libraries of LLM prompt templates that can be filled in and submitted from the command line. The templates are just TOML files like this one:

# File: summarize.pb.toml

description = "Summarize some files"

# This can also be template_path to read from another file.
template = '''
Create a {{ style }} summary of the files below
which are on the topic of {{ topic }}. The summary should be about {{ len }} sentences long.

{% for f in file -%}
File {{ f.filename }}:
{{ f.contents }}


{%- endfor %}
'''

[model]
# These model options can also be defined in a config file to apply to the whole directory of templates.
model = "gpt-3.5-turbo"
temperature = 0.7
# Also supports top_p, frequency_penalty, presence_penalty, stop, and max_tokens

[options]
len = { type = "int", description = "The length of the summary", default = 4 }
topic = { type = "string", description = "The topic of the summary" }
style = { type = "string", default = "concise" }
file = { type = "file", array = true, description = "The files to summarize" }
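
For example, given a topic of "Rust" and two input files, with style and len left at their defaults, the rendered prompt comes out roughly like this (give or take the template engine's exact whitespace trimming):

Create a concise summary of the files below
which are on the topic of Rust. The summary should be about 4 sentences long.

File notes.md:
(contents of notes.md)

File ideas.md:
(contents of ideas.md)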

Each of these options becomes a CLI option that fills in the corresponding placeholder in the template.
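
As a sketch, an invocation looks something like this (assuming a run subcommand, and that array options like file can be passed repeatedly; treat the exact syntax as illustrative):

promptbox run summarize --topic "Rust" --file notes.md --file ideas.md

Options with a default, like style and len here, can be omitted.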

It works with OpenAI for the usual case, but you can also run it against LM Studio or Ollama if you like local LLMs. If you give it a try, let me know what you think!


Thanks for reading! If you have any questions or comments, please send me a note on Twitter.