Journals for



Today marks the end of my nine-year tenure with Carevoyance. It's been quite a ride with lots of ups and downs, but a great experience overall. Now that the company has been handed over to H1 and scaled up, I've been getting that "founder itch" to start something new.

For the short term, I'm going to take some time off and work on a bunch of side projects and other stuff, but will likely be back at it again before too long!



Yesterday I released v0.4 of svelte-maplibre, and this is the first release for which I didn’t write any of the code. Great to have other contributors joining in!



I'm playing with the DeBERTa-v3-base-mnli-fever-anli model to do intent detection for Buzzy. The idea is to figure out whether a question calls for a web search to add factual context, a bit of math, or neither of those. First impressions are that it should work very well.
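For reference, a minimal sketch of how this kind of NLI-based intent detection could look with Hugging Face's zero-shot classification pipeline. The candidate labels, the threshold, and the `pick_intent` helper are all hypothetical illustrations, not Buzzy's actual configuration.

```python
# Sketch of zero-shot intent detection with an NLI model via the
# Hugging Face "zero-shot-classification" pipeline. The labels and
# threshold here are made-up examples.

CANDIDATE_INTENTS = [
    "a factual question that needs a web search",
    "a math problem",
    "general conversation",
]

def pick_intent(labels, scores, threshold=0.5):
    # Take the top-scoring label; fall back to plain conversation
    # when the model isn't confident about any intent.
    best_label, best_score = max(zip(labels, scores), key=lambda p: p[1])
    return best_label if best_score >= threshold else "general conversation"

def detect_intent(utterance):
    # Imported lazily because transformers (and the model download)
    # are heavyweight; the helper above works without them.
    from transformers import pipeline

    classifier = pipeline(
        "zero-shot-classification",
        model="MoritzLaurer/DeBERTa-v3-base-mnli-fever-anli",
    )
    result = classifier(utterance, CANDIDATE_INTENTS)
    return pick_intent(result["labels"], result["scores"])
```

The nice part of the zero-shot approach is that adding a new intent is just adding a string to the list, with no retraining.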



A useful GitHub trick is to watch a repo with custom notifications, so I only get notified on releases.

[Screenshot: GitHub's custom watch notification settings]


  • How Fast to Hire by Sarah Guo focuses on navigating the transition around finding product-market fit, and hiring appropriately on both sides of that transition.



The upcoming runes system in Svelte 5 looks really nice. I think this will fix most or all of the biggest bugs in Svelte 3 and 4.

I did have some concerns about pages that need to edit complex objects, but it looks like you can write a function to "rune-ify" any object without too much trouble. I made a small project in the Svelte 5 preview REPL to try it out. There are probably some bugs there but that relieved my main concern.

Looking at the compiled code with runes, everything is a lot simpler too. No more need for dirty tracking or passing nested context into components for slots. Slot functions are just closures that directly access the runes in the outer component's state now.

The internal scheduler looks more complex than before, but that's to be expected. Importantly, there are far fewer moving parts and the moving parts aren't being generated by the compiler for every new project, which should both reduce potential for bugs and make it easier to test.




Buzzy is built out enough now that I could show the prototype to my kids. My 6-year-old said I should change the name to Alexa, or maybe Google :)

Up next will be intent detection and web search to help the model answer factual questions, but first I redid some of the web app internals to use a real state machine. This was my first time using XState's type generation helpers, and they work well!



After some Python dependency hassles, I ended up using NVIDIA's FastConformer CTC model for the speech-to-text side of Buzzy. It works great and is roughly 8x faster than Whisper when running without a GPU.
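For reference, running a pretrained FastConformer CTC checkpoint through NVIDIA NeMo looks roughly like the sketch below. The checkpoint name is the public English CTC model; the real-time-factor helper is just an illustration of one way to compare STT engines on CPU, not a measurement from Buzzy itself.

```python
# Sketch of CPU speech-to-text with a pretrained FastConformer CTC
# checkpoint, assuming NVIDIA's NeMo toolkit is installed.

def transcribe(wav_paths):
    # Imported lazily because NeMo is a heavyweight dependency.
    import nemo.collections.asr as nemo_asr

    model = nemo_asr.models.ASRModel.from_pretrained(
        "nvidia/stt_en_fastconformer_ctc_large"
    )
    return model.transcribe(wav_paths)

def real_time_factor(processing_seconds, audio_seconds):
    # RTF below 1.0 means faster than real time; comparing RTFs is one
    # way to quantify a speedup like "8x faster than Whisper".
    return processing_seconds / audio_seconds
```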

On the speech generation side, I ended up using Mycroft's Mimic 3. There are a number of different voices to choose from, and some sound better than others, but it's super fast and the quality is more than acceptable. I found that many of the voices sound better if you use a lengthScale of 1.2 to slow it down a bit.
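A sketch of how that looks against Mimic 3's local web server, assuming it's running on its default port; the voice name is just one of the available examples, and the `lengthScale` query parameter is where the 1.2 slowdown goes.

```python
# Sketch of text-to-speech via Mimic 3's local HTTP server, assuming
# a default install listening on localhost:59125. The voice name is
# an example, not necessarily the one Buzzy uses.
from urllib.parse import urlencode
from urllib.request import Request, urlopen

def mimic3_tts_url(voice, length_scale=1.2, base="http://localhost:59125"):
    # lengthScale > 1.0 slows speech down; 1.2 makes many voices
    # sound noticeably more natural.
    params = urlencode({"voice": voice, "lengthScale": length_scale})
    return f"{base}/api/tts?{params}"

def synthesize(text, voice="en_US/vctk_low"):
    # POST the text to the server; the response body is WAV audio.
    req = Request(
        mimic3_tts_url(voice),
        data=text.encode("utf-8"),
        method="POST",
    )
    with urlopen(req) as resp:
        return resp.read()
```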



I initially was using Bun for Buzzy, but had to switch back to Node until some Vite ecosystem issues are resolved. I really liked Bun though, and look forward to using it more in the future. And of course, the first thing I hit when going back to Node was an ESM issue ;)



So my latest side project is Buzzy, an AI assistant geared toward answering my kids' questions. They like talking to Alexa but it frequently gives irrelevant results, so I'm hoping to give them something both smarter and more fun.

Any voice assistant needs wake word detection and while it would be fun to train my own model here, PicoVoice works great out of the box. A tad expensive if you're a solopreneur without a budget, but totally free for personal use.
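The detection loop with Picovoice's Porcupine engine is pleasantly small; a sketch is below. The `"porcupine"` keyword is one of the free built-in ones, so a custom "Buzzy" wake word would instead come from a model file trained in the Picovoice console, and the audio capture itself (e.g. with pvrecorder) is omitted.

```python
# Sketch of wake word detection with Picovoice's Porcupine engine,
# using a built-in keyword. Audio capture is left out; pcm_samples is
# assumed to be 16-bit PCM at porcupine.sample_rate.

def frames(pcm_samples, frame_length):
    # Porcupine only accepts frames of exactly frame_length samples,
    # so split the stream and drop any trailing partial frame.
    return [
        pcm_samples[i : i + frame_length]
        for i in range(0, len(pcm_samples) - frame_length + 1, frame_length)
    ]

def heard_wake_word(access_key, pcm_samples):
    # Imported lazily; creating the engine needs a Picovoice access key
    # (free for personal use).
    import pvporcupine

    porcupine = pvporcupine.create(access_key=access_key, keywords=["porcupine"])
    try:
        for frame in frames(pcm_samples, porcupine.frame_length):
            # process() returns the index of the detected keyword,
            # or -1 when nothing was heard in this frame.
            if porcupine.process(frame) >= 0:
                return True
        return False
    finally:
        porcupine.delete()
```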

Buzzy lives at and I'll continue updating here as things get going!