PromptBox

  • This utility maintains libraries of LLM prompt templates that can be filled in and submitted from the command line.
  • GitHub, Website
  • A sample prompt. Each option under [options] becomes a CLI flag that can fill in the template; an example invocation and a directory-level config are sketched after the template.
    • description = "Summarize some files"
      
      # This can also be template_path to read from another file.
      template = '''
      Create a {{ style }} summary of the files below,
      which are on the topic of {{ topic }}. The summary should be about {{ len }} sentences long.
      
      {% for f in file -%}
      File {{ f.filename }}:
      {{ f.contents }}
      
      
      {%- endfor %}
      '''
      
      [model]
      # These model options can also be defined in a config file to apply to the whole directory of templates.
      model = "gpt-3.5-turbo"
      temperature = 0.7
      # Also supports top_p, frequency_penalty, presence_penalty, stop, and max_tokens
      
      [options]
      len = { type = "int", description = "The length of the summary", default = 4 }
      topic = { type = "string", description = "The topic of the summary" }
      style = { type = "string", default = "concise" }
      file = { type = "file", array = true, description = "The files to summarize" }
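  • An example invocation, sketched under some assumptions: the template name "summarize" and the flag values are made up, but each flag name comes from the [options] table above, and wildcard expansion for array file options and appending stdin input are both features noted in the task list below.
    • promptbox run summarize --topic "Rust error handling" --style detailed --len 3 --file 'notes/*.md'
      
      # Piped-in input is appended to the rendered prompt.
      cat extra-notes.txt | promptbox run summarize --topic "Rust error handling"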
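  • A directory-level config might look like the sketch below, per the comment in the [model] section above. The lookup paths come from the testing list, and the top_level key is inferred from the "stop at the top_level config" task item, so the exact schema may differ.
    • # ./promptbox.toml (or ./promptbox/promptbox.toml)
      
      # Inferred from the task list: stop config resolution from continuing into parent directories.
      top_level = true
      
      [model]
      # These apply to every template in this directory; templates can override them.
      model = "gpt-3.5-turbo"
      temperature = 0.7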
      
  • Task List

    • Up Next

      • Handle HTTP 429 (rate limit) responses from OpenAI
      • Verbose mode should print token stats at the end
      • Testing
        • Stop at the top_level config
        • Resolution of model options between different configs
        • Don't require a config in every directory
        • Malformed configs raise an error
        • Malformed templates raise an error
        • Templates resolved in order from the current directory
        • Look under ./promptbox.toml and ./promptbox/promptbox.toml
        • Prompts can be in subdirectories
        • Prepend
        • Append
        • Prepend and append
        • All types of arguments
        • Bool arguments are always optional
        • Required arguments (switch from required to optional)
        • Array arguments
        • Template model options should override config model options
        • Make sure invoking with command-line options and template options at the same time works
        • System prompt, embedded and in a separate file
        • JSON mode
    • Soon

    • Later

      • Save all invocations in a database?
      • Allow templates to reference partials in same directory
      • Allow templates to reference partials in parent template directories
      • Define ChatGPT functions in the prompt? Probably skip this
      • bash/zsh autocompletion of template names
      • Can we autocomplete options as well once the template name is present?
      • Recall previous invocations
      • Option to trim context in the middle with a <truncated> message or something like that?
    • Done

      • Chop off too-large context, option to keep beginning or end — v0.1.1 Dec 1st, 2023
        • Should also be able to specify which inputs to slice off, i.e. keep the fixed template intact but remove some of the piped-in input
        • Ideally have per-model context values.
          • Ollama can get this from the API.
          • OpenAI has few enough models that we can do some basic pattern matching to make a guess
          • But we need the ability to specify a lower cap too, e.g. maybe we never actually want to send 128K tokens to GPT-4
      • Token counter functionality — v0.1.1 Nov 30th, 2023
      • Set up CI and distribution — Nov 21st, 2023
      • Streaming support for OpenAI — Nov 14th, 2023
      • Append any additional positional arguments
      • Append input from stdin
      • Support format="json"
      • Streaming support for Ollama
      • Integrate with Ollama
      • Option type to paste a file's contents in, and allow wildcards for array file options
      • Send request to model
      • Move the main command to be a "run" subcommand
      • Basic functionality
      • Define CLI options in template file
      • Fix: help output always shows openai_key (maybe due to .env?)

Thanks for reading! If you have any questions or comments, please send me a note on Twitter.

Please also consider subscribing to my weekly-ish newsletter, where I write short essays, announce new articles, and share other interesting things I've found.