I've been reorganizing my code snippets repo, and among other things I've updated my dark mode support for modern SvelteKit.

This supports SSR by persisting both the user's choice and the default setting in cookies, to avoid that "flash of light". Check it out here.

I also set up CI and websites for PromptBox and sqlweld, where they can easily be downloaded.


  • AWS announced a new S3 storage tier designed for low-latency analytics among other things. If I ever start working on Smelter again this will be useful. 7x the cost of normal S3, but I'm sure that's worth it for certain systems.
  • Placekey looks like a nice solution for address entity resolution. It has a generous free tier and cheap paid tiers as well. Heard about it on the Mapscaping Podcast.



Got the first version of Glance up and running. A bit more work to be done on design and then I'll be ready to write more mini-apps to feed in all the info I want to see... at a glance.



Spent some time yesterday and today updating my sqlx JSON companion crate to support sqlx 0.7. I also added a type that makes it easier to get a Box<RawValue> out of the query.

Also finally added tests! And wrote up some notes on using JSON in sqlx.



I created a small utility named sqlweld for assembling SQL query files from liquid templates with partials, so I can get statically-defined queries while still allowing reuse for common things like permissions checks. This way I can write raw SQL in my projects while still getting code reuse on my queries.
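Roughly, the idea looks like this. This is a hypothetical template, not one from sqlweld's docs; standard Liquid uses {% render %} to pull in partials, though sqlweld's exact partial syntax may differ.

```sql
-- get_doc.sql.liquid (hypothetical file name)
SELECT docs.id, docs.title
FROM docs
WHERE docs.id = $1
  -- shared permissions check pulled in from a partial,
  -- so every query doesn't have to repeat it
  AND {% render "permissions_check" %}
```

Rendering this produces a plain .sql file that can be checked into the repo and reviewed like any other statically-defined query.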

The first real use of this will be on Glance, which I'm finally starting on in earnest. Check out sqlweld on Github!



Updated SBBP so that I can add videos directly from the app, and also queue up multiple videos for download. Whisper currently doesn't do great on technical terms, so that needs some improvement. I think there's a way to do that with OpenAI's API, but it's not obvious how to do it with the Huggingface high-level packages, so I may need to dive into the internals to see if it can be done.
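For the hosted API, at least, this is supported directly: OpenAI's transcription endpoint takes a prompt parameter that biases the decoder toward whatever vocabulary it contains. A sketch, where the glossary helper, term list, and file name are placeholders rather than SBBP's actual code:

```python
# Sketch: nudging OpenAI's hosted Whisper toward technical vocabulary
# via its documented `prompt` parameter.

TECH_TERMS = ["sqlx", "SvelteKit", "Tauri", "PostgreSQL", "MotherDuck"]

def build_prompt(terms):
    # Whisper treats the prompt as preceding context, so a short
    # comma-separated glossary is enough to bias its spelling.
    return "Glossary: " + ", ".join(terms)

def transcribe(audio_path):
    from openai import OpenAI  # local import; needs the openai package
    client = OpenAI()
    with open(audio_path, "rb") as f:
        result = client.audio.transcriptions.create(
            model="whisper-1",
            file=f,
            prompt=build_prompt(TECH_TERMS),
        )
    return result.text
```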



Somehow I missed that the WebP format includes a separate lossless encoding mode. I've made two updates to Pic Store as a result.

  1. Added configurable quality to the conversion profile; a quality of 100 will use lossless encoding when the input is a PNG.
  2. Updated the "reconvert" endpoint to read the conversion profile again, so that changes will take effect.
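Pic Store itself isn't written in Python, but the quality-100 rule can be sketched with Pillow, which exposes lossless WebP encoding via lossless=True:

```python
# Sketch of the conversion rule, not Pic Store's actual code.

def use_lossless(src_format, quality):
    # Quality 100 on a PNG input means "keep it pixel-perfect".
    return quality == 100 and src_format.upper() == "PNG"

def convert_to_webp(path, out_path, quality):
    from PIL import Image  # local import; needs Pillow
    img = Image.open(path)
    img.save(
        out_path,
        format="WEBP",
        quality=quality,
        lossless=use_lossless(img.format or "", quality),
    )
```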

With this, I can finally upload good screenshots from SBBP as well. I used it to read through a MotherDuck talk on hybrid query execution and see the slides alongside the transcript.




I also sped up the image similarity process by 100x, by just running it all in one Python execution instead of starting a new invocation for every checked pair of images.
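The shape of the change, sketched with a stand-in similarity function — the real comparison is SSIM via scikit-image, and the point is that every pair is scored inside one interpreter instead of one process launch per pair:

```python
# Sketch of the batching change: score every consecutive pair of
# screenshots in a single Python process. `similarity` is a placeholder
# metric (fraction of matching elements); SBBP uses SSIM.

def similarity(a, b):
    matches = sum(1 for x, y in zip(a, b) if x == y)
    return matches / max(len(a), 1)

def dedupe(frames, threshold=0.9):
    """Keep a frame only when it differs enough from the last kept one."""
    kept = []
    for frame in frames:
        if not kept or similarity(kept[-1], frame) < threshold:
            kept.append(frame)
    return kept

frames = ["aaaa", "aaab", "bbbb", "bbbb", "cccc"]
print(dedupe(frames))  # the identical second "bbbb" is dropped
```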

Overall I'm really happy with how this is turning out. Still need to make the design better but it's already very useful for things like long conference talks.



I added structural image similarity to SBBP, so that screenshots from the video that are too similar to the previous one are automatically dropped. This makes the viewing experience much better, since you only see a new image when something has really changed since the previous one.


Calculating Structural Similarity of Images

Turns out scikit-image in Python makes this really easy.

import sys

import cv2
from skimage.metrics import structural_similarity as ssim

img1 = cv2.imread(sys.argv[1])
img2 = cv2.imread(sys.argv[2])
# channel_axis=2 tells skimage these are multichannel (BGR) images
sim = ssim(img1, img2, channel_axis=2)



I started a new project called "Should have been a Blog Post," or SBBP for short. This application downloads a Youtube video, extracts screenshots every 10 seconds, and runs the audio through Whisper. The text is then presented alongside the screenshots, so you can just read through the transcript and see the visuals alongside, instead of needing to spend an hour watching a single video.
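The screenshot-extraction step can be sketched like this, assuming ffmpeg is on the PATH; the helper and file names are illustrative, not SBBP's actual code. The fps=1/10 filter asks ffmpeg for one frame every 10 seconds.

```python
# Sketch of extracting a screenshot every 10 seconds with ffmpeg.
import subprocess

def frame_args(video_path, out_dir, interval_secs=10):
    return [
        "ffmpeg", "-i", video_path,
        "-vf", f"fps=1/{interval_secs}",
        f"{out_dir}/frame_%04d.png",
    ]

def extract_frames(video_path, out_dir):
    subprocess.run(frame_args(video_path, out_dir), check=True)
```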

Check it out on Github.



Added the ability in PromptBox to append strings from the command line and from stdin, in addition to what's in the prompt template. Next up will be the ability to submit images to GPT-4 Vision and Ollama (once the pending PR for it is merged).

I uploaded some sample prompt templates as well.


  • Spent the weekend playing some with Dagster for downloading and processing medical device approval data from the FDA. It's a nice system; the data asset model and partitioning fits well with how I like to think about things. Definitely worth checking out if you need to write data pipelines of some complexity.



Wrote a small note on how to Preview a CSV In-Browser with Papa Parse.


  • pipx is a program that exists solely to install and run Python executable packages, each in their own venv. Takes a lot of pain out of the nonsense and frustration that continues to characterize using Python.
  • Rye does seem to work better for some packages, particularly those that refuse to run on the latest version of Python, which is basically anything ML-related for a few months after the release.



I’ve been playing with using PromptBox for code generation given context from the repository. Having some trouble figuring out a good output format though.

I have been able to get it to output diffs, sort of. The main problem is that it loses track of how long the diff is supposed to be compared to what it puts in the header, so patch rejects it.

Maybe there’s some better way? Maybe I just need a post processing step to fix the diff or apply it some other way? I'll have to play with it some more.


Every year I help out with the sound at my church's Christmas play. This year we're low on help, and the play is much more involved than your average Christmas play, so I find myself both running the sound board and triggering the music/effects. This was a good excuse to automate the latter task, and a good opportunity to explore Tauri a bit.

SoundQueue is a small program that reads in a manifest of a bunch of sound files, and lets you easily play them one at a time with the press of the space bar, queueing up the next sound when one finishes playing. It allows custom volume and in/out points for each sound as well.

I was pretty happy with the productivity of Tauri. Despite not being too familiar with it, this app took only a few hours to make.



I've spent the last few days building PromptBox, a utility that lets you maintain libraries of LLM prompt templates, which can be filled in and submitted from the command line. The templates are just TOML files like this.

# File: summarize.pb.toml

description = "Summarize some files"

# This can also be template_path to read from another file.
template = '''
Create a {{style}} summary of the below files
which are on the topic of {{topic}}. The summary should be about {{ len }} sentences long.

{% for f in file -%}
File {{ f.filename }}:
{{ f.contents }}

{%- endfor %}
'''

# These model options can also be defined in a config file to apply to the whole directory of templates.
model = "gpt-3.5-turbo"
temperature = 0.7
# Also supports top_p, frequency_penalty, presence_penalty, stop, and max_tokens

len = { type = "int", description = "The length of the summary", default = 4 }
topic = { type = "string", description = "The topic of the summary" }
style = { type = "string", default = "concise" }
file = { type = "file", array = true, description = "The files to summarize" }

Each of these options becomes a CLI option which can help fill in the template.

It works with OpenAI for the usual case, but you can also run it against LM Studio or Ollama if you like local LLMs. If you give it a try, let me know what you think!




You can use git log --diff-filter=D --name-only | grep .changeset to find the names of all the changeset files that were ever in a repository. Found on Waylon Walker's site.

❯ git log --diff-filter=D --name-only | grep .changeset




  • pg_bm25 is a PostgreSQL extension that uses Tantivy to provide BM25-based full text search in PostgreSQL.



I had an idea today for a Github bot that you can enable to automatically reply to new PRs with some message, for when you’re on vacation or otherwise unable to look at it for a while.

The downside, which any seasoned open source maintainer will sense, is the implication that at any other time you will always respond promptly.



Today marks the end of my nine-year tenure with Carevoyance. It's been quite a ride with lots of ups and downs, but a great experience overall, and since the company has been handed over to H1 and scaled up, I've been getting that "founder itch" to start something new.

For the short term, I'm going to take some time off and work on a bunch of side projects and other stuff, but will likely be back at it again before too long!



Yesterday I released v0.4 of svelte-maplibre, and this is the first release for which I didn’t write any of the code. Great to have other contributors joining in!



I'm playing with the DeBERTa-v3-base-mnli-fever-anli model a bit to do intent detection for Buzzy. The idea here is that I can figure out if I should do a web search to add some context to help answer a factual question, or maybe a bit of math, or if the question doesn't require any of that. First impressions are that it should work very well.
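A sketch of what this looks like with the transformers zero-shot classification pipeline. The candidate intents here are illustrative guesses, and the hub ID is what I believe to be the MoritzLaurer upload of this model:

```python
# Sketch: zero-shot intent detection with an NLI model.

INTENTS = ["needs a web search", "needs arithmetic", "plain conversation"]

def top_intent(scored):
    """Pick the best label from a {label: score} dict."""
    return max(scored, key=scored.get)

def classify(question):
    from transformers import pipeline  # heavyweight import, kept local
    clf = pipeline(
        "zero-shot-classification",
        model="MoritzLaurer/DeBERTa-v3-base-mnli-fever-anli",
    )
    result = clf(question, candidate_labels=INTENTS)
    return dict(zip(result["labels"], result["scores"]))
```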



A useful Github trick is to watch a repo, but with custom notifications so I only get notified on releases.

(Screenshot: the custom notification settings for a watched repo)


  • How Fast to Hire by Sarah Guo focuses on navigating the transition around finding product-market fit, and hiring appropriately on both sides of that transition.



The upcoming runes system in Svelte 5 looks really nice. I think this will fix most or all of the biggest bugs in Svelte 3 and 4.

I did have some concerns about pages that need to edit complex objects, but it looks like you can write a function to "rune-ify" any object without too much trouble. I made a small project in the Svelte 5 preview REPL to try it out. There are probably some bugs there but that relieved my main concern.

Looking at the compiled code with runes, everything is a lot simpler too. No more need for dirty tracking or passing nested context into components for slots. Slot functions are just closures that directly access the runes in the outer component's state now.

The internal scheduler looks more complex than before, but that's to be expected. Importantly, there are far fewer moving parts and the moving parts aren't being generated by the compiler for every new project, which should both reduce potential for bugs and make it easier to test.




Buzzy is built out enough now that I could show the prototype to my kids. My 6-year-old said I should change the name to Alexa, or maybe Google :)

Up next will be intent detection and web search to help the model answer factual questions, but first I redid some of the web app internals to use a real state machine. First time using XState's type generation helpers, and it works well!



After some Python dependency hassles, I ended up using NVIDIA's FastConformer CTC model for the speech-to-text side of Buzzy. It works great and is roughly 8x faster than Whisper when running without a GPU.

On the speech generation side, I ended up using Mycroft's Mimic 3. There are a number of different voices to choose from, and some sound better than others, but it's super fast and the quality is more than acceptable. I found that many of the voices sound better if you use a lengthScale of 1.2 to slow it down a bit.
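For reference, a sketch of invoking the Mimic 3 CLI with a slower length scale. The voice name is a placeholder, and the flag spelling is from memory of mimic3's options, so double-check against its --help output:

```python
# Sketch: calling the mimic3 CLI with a 1.2 length scale to slow
# the voice down. Assumes the `mimic3` executable is installed and,
# I believe, writes WAV audio to stdout by default.
import subprocess

def mimic3_args(text, voice="en_US/vctk_low", length_scale=1.2):
    return [
        "mimic3", "--voice", voice,
        "--length-scale", str(length_scale),
        text,
    ]

def speak(text):
    return subprocess.run(
        mimic3_args(text), capture_output=True, check=True
    ).stdout
```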



I initially was using Bun for Buzzy, but had to switch back to Node until some Vite ecosystem issues are resolved. I really liked Bun though, and look forward to using it more in the future. And of course, the first thing I hit when going back to Node was an ESM issue ;)



So my latest side project is Buzzy, an AI assistant geared toward answering my kids' questions. They like talking to Alexa but it frequently gives irrelevant results, so I'm hoping to give them something both smarter and more fun.

Any voice assistant needs wake word detection, and while it would be fun to train my own model here, PicoVoice works great out of the box. A tad expensive if you're a solopreneur without a budget, but totally free for personal use.

Buzzy lives at https://github.com/dimfeld/buzzy and I'll continue updating here as things get going!




  • Where to set the standard by Sarah Guo is an exhortation to move quickly. The biggest advantage startups have when trying to break into existing markets is an agility that larger, more established companies cannot match. This short essay urges you to embrace that advantage, and not to worry about high standards scaring people away. High performers embrace high standards.
  • Excuse me, is there a problem? by Jason Cohen is a great overview of analyzing the market for your potential startup. But the big contribution from this article is a quantitative framework for rating your business idea and classifying it as appropriate for a high-growth, VC-funded scale-up, a medium-growth bootstrapped business, or just something not worth pursuing.



Yesterday I wrote some notes on PostgreSQL Row Level Security and it ended up on HN. It got onto the front page, which resulted in this crazy graph on my Plausible stats.


If this happens more regularly I'll have to upgrade my plan! πŸ˜†