PromptBox

  • This utility maintains libraries of LLM prompt templates that can be filled in and submitted from the command line.
  • GitHub, Website
  • A sample prompt template. Each entry in the [options] table below becomes a CLI flag that fills in the corresponding template placeholder; an example invocation follows the sample.
    • description = "Summarize some files"
      
      # This can also be template_path to read from another file.
      template = '''
      Create a {{style}} summary of the below files
      which are on the topic of {{topic}}. The summary should be about {{ len }} sentences long.
      
      {% for f in file -%}
      File {{ f.filename }}:
      {{ f.contents }}
      
      
      {%- endfor %}
      '''
      
      [model]
      # These model options can also be defined in a config file to apply to the whole directory of templates.
      model = "gpt-3.5-turbo"
      temperature = 0.7
      # Also supports top_p, frequency_penalty, presence_penalty, stop, and max_tokens
      
      [options]
      len = { type = "int", description = "The length of the summary", default = 4 }
      topic = { type = "string", description = "The topic of the summary" }
      style = { type = "string", default = "concise" }
      file = { type = "file", array = true, description = "The files to summarize" }
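
    • Example invocation (illustrative, not from these notes; check the README for the exact file naming and flag spellings): with the template above saved in a template directory as something like summarize.pb.toml, it might be run as

      promptbox run summarize --topic "Rust CLI tools" --style detailed --len 3 --file README.md --file src/main.rs

      Any piped-in stdin or extra positional arguments are appended to the prompt.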
      
  • Task List

    • Up Next

    • Soon

      • Verbose mode should print token stats at the end
      • List command
        • List all templates in a directory
        • Should also take a filter
        • Short mode for completion
        • By default, print the template name and description in a table
      • Show command to output the information from a template
      • "run" command detection is fragile
    • Later/Maybe

      • Save all invocations in a database? (will do as part of Chronicle switch)
      • Allow templates to reference partials in same directory
      • Allow templates to reference partials in parent template directories
      • Define ChatGPT functions in the prompt? Probably skip this, more appropriate for some other project
      • bash/zsh autocompletion of template names
      • Can we autocomplete options as well once the template name is present?
      • Recall previous invocations
      • Option to trim context in the middle with a <truncated> message or something like that
    • Done

      • Pass images to Ollama — v0.3.0 Dec 13th, 2023
      • Support for GPT4 Vision — v0.3.0 Dec 13th, 2023
      • Support OpenRouter — v0.2.0 Dec 8th, 2023
        • OpenRouter offers an OpenAI-compatible API, which is probably the easiest way to add this.
      • Set prompt format, context length, etc. per model - v0.2.0 Dec 8th, 2023
        • Done specifically for Together right now, can expand this to generic host at some point
        • Support standard formats and allow custom formats too
        • Needed for some providers that don't apply the template for you, or that don't provide accurate info about context length and other things.
        • This can be defined in the model definition once it can be an object (see Support multiple hosts - v0.2.0 Dec 8th, 2023); a rough sketch follows below.
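        • A rough sketch of what this might look like in a template's [model] section (the { host, model } object form comes from the Support multiple hosts item below; prompt_format and context_length are guessed field names, not taken from these notes):

          [model]
          # The model given as an object so the host rides along with the name.
          model = { host = "together", model = "mistralai/Mixtral-8x7B-Instruct-v0.1" }
          # Hypothetical names for the per-model overrides described in this item:
          # the prompt template format to apply and a cap on context length.
          prompt_format = "chatml"
          context_length = 4096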
      • Support together.xyz model host - v0.2.0 Dec 8th, 2023
        • Fetch model info from https://api.together.xyz/models/info
        • Short term cache on the model info
        • Get the config from the model info to determine how to format the prompt, stop tokens, etc.
          • Some of the configs here actually don't include things like system prompt...
            • Maybe just build templates into the tool and let them be specified in the config somehow
          • Looks like context length is missing from some models as well
        • max_tokens seems to have a very small default; need to set this higher to be useful
      • Support multiple hosts - v0.2.0 Dec 8th, 2023
        • Allow defining hosts beyond the built-in hosts (see the config sketch at the end of this item)
          • API URL
          • Request Format (currently just OpenAI and Ollama format but probably more in the future)
          • Environment variable that holds the API key
        • Ability to configure the default host for non-GPT-3.5/4 models (whereas now Ollama is the default)
        • Need a way to specify in the model name which host to use
          • Actually the way to do this is to allow the model name to be either a string or a { host: Option<String>, model: String } structure.
        • Tests
          • Config file overrides specific fields of built-in hosts
            • e.g. host.openai.api_key = "DIFFERENT_VAR_NAME"
          • Adding new hosts from the config file
          • Use default provider when none is specified
          • Set default_host to something else
          • Complain when default_host refers to a nonexistent host
          • Alias handling
            • Alias can be a full model spec
            • Model can be a full model spec whose model name references an alias that is itself a full model spec. In this case, fetch the alias and merge the remaining fields together
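        • Illustrative config sketch tying the above together (only host.openai.api_key and default_host appear in these notes; the other key names are guesses):

          # promptbox.toml

          # Default host used when a model doesn't name one.
          default_host = "myhost"

          # Override a specific field of a built-in host, e.g. which environment
          # variable holds the API key.
          [host.openai]
          api_key = "DIFFERENT_VAR_NAME"

          # Define a new host: its API URL, request format (OpenAI- or Ollama-style),
          # and the environment variable holding its API key.
          [host.myhost]
          url = "https://llm.example.com/v1"
          format = "openai"
          api_key = "MYHOST_API_KEY"

          # An alias can be a plain model name or a full model spec.
          [alias]
          mixtral = { host = "myhost", model = "mistralai/Mixtral-8x7B-Instruct-v0.1" }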
      • Testing
        • Stop at the top-level config
        • Resolution of model options between different configs
        • Don't require a config in every directory
        • Malformed configs raise an error
        • Malformed templates throw an error
        • templates resolved in order from the current directory
        • Look under ./promptbox.toml and ./promptbox/promptbox.toml
        • Prompts can be in subdirectories
        • Prepend
        • Append
        • Prepend and append
        • all types of arguments
        • Bool arguments are always optional
        • required arguments (switch from required to optional)
        • Array arguments
        • Template model options should override config model options
        • Make sure invoking with command-line options and template options at the same time works
        • system prompt, embedded and in separate file
        • json mode
      • Handle 429 from OpenAI — v0.1.2 Dec 4th, 2023
      • Chop off too-large context, option to keep beginning or end — v0.1.1 Dec 1st, 2023
        • Should also be able to specify which inputs to slice off, i.e. keep the fixed template intact but remove some of the piped-in input
        • Ideally have per-model context values.
          • Ollama can get this from the API.
          • OpenAI has few enough models that we can do some basic pattern matching to make a guess
          • But need ability to specify a lower cap too, e.g. maybe we never actually want to send 128K tokens to GPT4
      • Token counter functionality — v0.1.1 Nov 30th, 2023
      • Set up CI and distribution — Nov 21st, 2023
      • Streaming support for OpenAI — Nov 14th, 2023
      • Append any additional positional arguments
      • Append input from stdin
      • Support format="json"
      • Streaming support for Ollama
      • Integrate with Ollama
      • Option type to paste a file's contents in, and allow wildcards for array file options
      • Send request to model
      • Move the main command to be a "run" subcommand
      • Basic functionality
      • Define CLI options in template file
      • Help output always shows openai_key (maybe due to .env?)

Thanks for reading! If you have any questions or comments, please send me a note on Twitter.