Journals for




  • Where to set the standard by Sarah Guo is an exhortation to move quickly. The biggest advantage startups have when breaking into existing markets is an agility that larger, more established companies cannot match. This short essay urges you to embrace that advantage and not worry about high standards scaring people away. High performers embrace high standards.
  • Excuse me, is there a problem? by Jason Cohen is a great overview of analyzing the market for your potential startup. But the article's big contribution is a quantitative framework for rating your business idea and classifying it as a candidate for a high-growth, VC-funded scale-up, a medium-growth bootstrapped business, or something not worth pursuing.



Yesterday I wrote some notes on PostgreSQL Row Level Security and it ended up on HN. It made the front page, which produced this crazy graph in my Plausible stats.


If this happens more regularly I'll have to upgrade my plan! 😆



Yesterday I set up a custom command bar for Neovim, and I also wrote up some details on how to do it for yourself: Creating a Custom Command Bar in Neovim



  • Building LLM Applications for Production is a bit light on the actual topic, but that's forgivable since the community as a whole is still figuring out best practices for using LLMs in production. Regardless, it is a good overview of the LLM landscape and various tools and methods around it. On the title topic, I did enjoy the discussion of unit testing.
  • One thing I’ve also found useful is to ask models to give examples for which it would give a certain label. For example, I can ask the model to give me examples of texts for which it’d give a score of 4. Then I’d input these examples into the LLM to see if it’ll indeed output 4.
  • Scott Alexander's review of a book about IRBs is a good read if you're interested in research practices and process. But this quote was too jaw-dropping to miss.
  • maybe it was unethical to do RCTs on ventilator settings at all. He asked whether they might be able to give every patient the right setting while still doing the study. The study team tried to explain to him that they didn’t know which was the right setting, that was why they had to do the study. He wouldn’t budge.
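The label round-trip check quoted above, from the LLM-applications article, can be sketched with a stub model. `askModel` is a hypothetical stand-in for whatever LLM client you actually use; the stubbed responses are made up so the sketch runs without an API key.

```javascript
// Round-trip check: ask the model for example texts it would score 4,
// then re-score those examples and see whether the label comes back
// unchanged. `askModel` is a hypothetical stand-in for a real LLM call.
function askModel(prompt) {
  // Stubbed responses so the sketch is runnable without an API key.
  if (prompt.startsWith("Give three example texts")) {
    return [
      "Great service, minor delays.",
      "Mostly happy, one hiccup.",
      "Solid, not perfect.",
    ];
  }
  // Pretend the model scores each of its own examples as 4.
  return 4;
}

function labelRoundTrip(targetScore) {
  const examples = askModel(
    `Give three example texts you would score ${targetScore}`
  );
  // Re-score each example and keep only the mismatches.
  return examples
    .map((text) => ({ text, score: askModel(`Score this text 1-5: ${text}`) }))
    .filter((r) => r.score !== targetScore);
}

const mismatches = labelRoundTrip(4);
console.log(
  mismatches.length === 0 ? "consistent" : `found ${mismatches.length} mismatches`
);
```

With a real model, any mismatches are a cheap signal that the scoring prompt is ambiguous.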



I wrote a short essay yesterday titled Are LLMs Databases?, and posted it up here too. Ultimately I don't think there's any real comparison, but it's still a useful question to think about a bit.

In more practical news, I've been thinking about how using LLMs to generate SQL seems like an especially potent way for existing applications to integrate LLMs and AI in a way that's actually useful.

Users ask the questions they care about, without having to wait for you to build the answer. The most popular questions can become full features. I'll likely be exploring this more in the future.
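One rough shape this integration could take, sketched below under assumptions: `generateSql` is a hypothetical stand-in for the actual model call, and the guard logic is a minimal illustration, not a complete SQL sanitizer.

```javascript
// Sketch of wiring an LLM into an application as a SQL generator.
// The key idea is the guard: only a single, read-only SELECT statement
// ever reaches the database.
function generateSql(question, schema) {
  // Stub: a real implementation would prompt the model with the schema
  // and the user's question and return the model's SQL.
  return "SELECT country, COUNT(*) AS orders FROM orders GROUP BY country;";
}

function safeQuery(question, schema) {
  const sql = generateSql(question, schema).trim();
  // Reject anything that isn't a single SELECT statement.
  const singleStatement = !sql.replace(/;\s*$/, "").includes(";");
  const isSelect = /^select\b/i.test(sql);
  const hasWrites = /\b(insert|update|delete|drop|alter|create)\b/i.test(sql);
  if (!singleStatement || !isSelect || hasWrites) {
    throw new Error(`Refusing to run generated SQL: ${sql}`);
  }
  return sql; // hand off to your database driver here
}

console.log(safeQuery("Which countries order the most?", "orders(country, total)"));
```

In production you'd also want a read-only database role as a second line of defense, rather than trusting string checks alone.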


🔗 fast.ai released part 2 of their Practical Deep Learning for Coders free course. This one covers building a model like Stable Diffusion from scratch. I'm looking forward to going through it, since the foundations needed to implement those techniques are said to apply to many other types of models, but a more important lesson jumped out at me here.

Instead of starting with the foundations of transformers, attention, autoencoders, and so on, the course adopts a top-down method of learning. In this style you start by introducing the full solution to a problem, then proceed to examine its component parts. If you've ever been reading a book or taking a course and had trouble figuring out why or how a particular chapter's content is useful, you can probably see why this style of learning is helpful.

Starting with a full solution leaves some things unexplained or fuzzy at first, but I think it has a few advantages.
  • It’s clear how new concepts fit into the bigger picture, because they can be introduced in the context of the initial, full example.
  • A working example also suggests ways to explore new concepts that would otherwise be presented in a vacuum, instead of just promising that it will all come together later.
  • It makes it easier to jump right in and get something working right away, and for those who like to experiment, lets them start doing so more quickly.

I’ve been thinking about top-down learning so much because it applies directly to the spatial data book I’ve been writing.

After seeing how the course is structured, this week I added a new chapter right at the beginning of the book. This new chapter is a working example that quickly introduces GeoJSON and D3 and presents a very simple web page that renders a map.

Simple Country Boundary Map

This map is nothing special — just some country boundaries and a few points — but serves a very important purpose by incorporating most of the concepts explored in more detail later in the book.

There are a lot of foundational concepts to cover when starting out with spatial data: the structure of GeoJSON, how to find and load spatial data into your application, and more. Now the reader can go into these foundational chapters with some idea of what’s actually going on, instead of having to wait until the last quarter of the book to make something they can actually see on the screen.
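To make the GeoJSON structure mentioned above concrete, here is a minimal hand-written FeatureCollection of the kind that opening chapter renders. The polygon and point are made up for illustration; real boundary data would come from a source like Natural Earth, and the D3 rendering call is shown only as a comment.

```javascript
// A minimal GeoJSON FeatureCollection: one made-up polygon "country"
// and one point, mirroring the boundaries-plus-points map described.
const world = {
  type: "FeatureCollection",
  features: [
    {
      type: "Feature",
      properties: { name: "Exampleland" },
      geometry: {
        type: "Polygon",
        // One ring of [longitude, latitude] pairs, closed back on itself.
        coordinates: [[[0, 0], [10, 0], [10, 8], [0, 8], [0, 0]]],
      },
    },
    {
      type: "Feature",
      properties: { name: "Capital" },
      geometry: { type: "Point", coordinates: [5, 4] },
    },
  ],
};

// In the browser, D3 turns each feature into an SVG path roughly like:
//   const path = d3.geoPath(d3.geoEquirectangular());
//   svg.selectAll("path").data(world.features).join("path").attr("d", path);

console.log(world.features.map((f) => f.geometry.type));
```

Seeing the whole structure up front is exactly the top-down payoff: the later chapters on finding and loading real data slot into a shape the reader has already used.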

Aside from this new chapter, I've been updating the "Types of Map Visualization" chapter. This had been a very quick overview, but now I've been adding more examples and more pitfalls for each of the types of maps. The chapter will probably end up three times as long as it was, but I expect it will be much more useful and hopefully provide some guidance through what can be a tricky process even for seasoned data visualization practitioners.


  • Malleable software in the Age of LLMs by Geoffrey Litt looks forward to how LLMs may bring software development to the masses, not so much in the sense of replacing existing software developers, but in making it much easier for non-technical people to create small one-off applications that solve specific problems. Worth some time to read and think about.