Journals for



Added support today for writing JPEGs and reading HEIC files. The HEIC input will need a bit of tweaking but it’s mostly there.

Once the Pic Store MVP is wrapped up I’m going to add support for it to my Logseq exporter, and then maybe implement the server mode for Effectum.

As for the public project roadmap, I think Linear is the way to go there. Should be easy enough to pull down the issues from their API and add some pages here to display them.



Thinking about how to better represent the progress and plans for my various projects. I moved a lot of my internal TODO lists over to Github issues, which works all right, and I can even create a view that shows issues from multiple repositories at once. This is especially nice now that I'm building up a small ecosystem of projects, some of which use others as dependencies.

I also tried Linear briefly and while I love the interface, there doesn't seem to be any way to "build in public" with it. So I'm torn about that. Maybe I'll make a small program to export issues from the API and write out some Markdown with the project status, which I can then publish here.



Implemented job recovery in Effectum today, so that if the process restarts unexpectedly the jobs can be retried. I also did a bit of cleanup to prepare for the eventual addition of a server mode. The aim is to allow switching from an embedded task queue to one hosted on a server, and not have to significantly change the API, so that you can keep complexity down at first but then scale up with minimal fuss when necessary.

Pic Store is now running on Axum 0.6, which was just released. For the feature set I was using, it was a pretty seamless change. I like that there's a lot more clarity now between extractors that consume the request body (e.g. JSON parsing) and those that don't touch it at all.



Added "lookup by hash" to Pic Store today. This allows the client to determine if an image already exists in the store, without knowing the ID in advance.
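As a sketch of how a client might use this, hash the file locally and query by that digest. Note that the `/api/images/by-hash/` path, the hostname, and the choice of SHA-256 here are my own placeholders for illustration, not necessarily Pic Store's actual API:

```shell
# Compute the image's digest, then ask the server whether it already has it.
# Endpoint path and hash algorithm are assumptions, not Pic Store's real API.
hash=$(sha256sum photo.jpg | cut -d ' ' -f 1)
curl -s "https://pics.example.com/api/images/by-hash/$hash"
```

If the server responds with the existing image's ID, the client can skip the upload entirely.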



Didn't have time to mention it here last night but the Let's Encrypt DNS challenge tool is all working. It starts the challenge, adds the required DNS entry to Vercel, and then uploads the resulting certificate to DigitalOcean and enables it.

It's a bit over-engineered — I mean, a shell script run every 60 days would probably have been sufficient — but this will make it easier to add support for more hosts and services types in the future for other projects.

I spent an hour trying to figure out why Let's Encrypt was rejecting my DNS record; in the end it turned out to be just a typo (_acme_challenge instead of _acme-challenge). But overall the process wasn't too bad.

I'll get around to a full writeup in the near future, I hope, but in the meantime here's the Github repo.


In getting S3 upload for Pic Store to work, I found that DigitalOcean Spaces defaults all files to private, with no way to change the default from the web UI. But you can use s3cmd to do it by setting the S3 bucket policy. First, create a policy JSON file:

    {
        "Version": "2008-10-17",
        "Statement": [
            {
                "Sid": "AddPerm",
                "Effect": "Allow",
                "Principal": "*",
                "Action": "s3:GetObject",
                "Resource": "arn:aws:s3:::YOUR_BUCKET_NAME/*"
            }
        ]
    }

Then you can apply it to the bucket like so:

    s3cmd --access_key=$ACCESS_KEY --secret_key=$SECRET_KEY \
        --host=$REGION.digitaloceanspaces.com \
        --host-bucket=$BUCKET.$REGION.digitaloceanspaces.com \
        --region $REGION \
        setpolicy policy.json s3://$BUCKET

It's a bit quirky: the DO web interface still shows the files as private, but they are actually public, and the policy also applies to files uploaded in the future.

With all that done, the good news is that Pic Store is now uploading files to cloud storage. Many thanks to Hugo for figuring out this policy file solution and writing about it!



Pic Store image uploading is all working now, but I ran into issues with actually setting up a CDN to back it. I initially wanted to use Backblaze B2 as the object store, which is set up to work with Cloudflare CDN. But Cloudflare wants you to switch your whole domain over to them for it to work, and I don't want to move my DNS setup away from Vercel.

DigitalOcean also prefers that you switch your nameservers to them, but they do offer an alternative option if you supply your own SSL certificate.

So today's free time will be spent building a small utility that talks to LetsEncrypt, Vercel DNS, and DigitalOcean CDN to handle generating the certificate and installing it. There are a few Rust crates that handle the ACME protocol already, so hopefully it should be a quick project. I'll write up a small article once it's done.



Logseq's latest release no longer uses its pages-metadata.edn file, which my note exporter was using to track the created and updated dates of each exported page. Since that file is no longer updated, I changed the program to track the dates in a SQLite database.

For each page, the exporter calculates a hash, and if the hash has changed, it updates the database with the file's last modified time. Nice and easy. For older pages, it can still import the initial dates from pages-metadata.edn. Integrating SQLite into the exporter was a pretty smooth experience; I'm hoping to use it more often.
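The core of the scheme looks roughly like this, sketched with the sqlite3 CLI. The table, column names, and use of SHA-256 are my own stand-ins; the real exporter is a separate program and will differ:

```shell
# Sketch: hash-based change detection backed by SQLite.
# Schema and names are hypothetical, for illustration only.
db=pages.db
sqlite3 "$db" 'CREATE TABLE IF NOT EXISTS pages (path TEXT PRIMARY KEY, hash TEXT, updated INTEGER);'

page=notes/example.md
hash=$(sha256sum "$page" | cut -d ' ' -f 1)
old=$(sqlite3 "$db" "SELECT hash FROM pages WHERE path = '$page';")
if [ "$hash" != "$old" ]; then
    mtime=$(stat -c %Y "$page")   # GNU stat; macOS uses stat -f %m
    sqlite3 "$db" "INSERT INTO pages (path, hash, updated) VALUES ('$page', '$hash', $mtime)
        ON CONFLICT(path) DO UPDATE SET hash = excluded.hash, updated = excluded.updated;"
fi
```

Pages whose hash is unchanged keep their stored timestamp, which is exactly the behavior pages-metadata.edn used to provide.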

Of course, now that I have all the pages in a database, I can add other data, and that brings up more fun questions about what else I can do with it.


I set up Let's Encrypt at work today. Our nginx configuration is somewhat complex and we also run it inside a Docker container, so the process was a bit unusual. I wrote about it here.



Copilot works well for writing repetitive CRUD endpoints in Pic Store, but it's even better to not write them at all. I've been experimenting with macros to abstract away the details of CRUD operations, including the relevant permissions checks, filtering on team/project, and so on.

Seems to be working well so far, and it removes a lot of the boilerplate. I may take this a step further and generate the entire endpoint function with a macro, but I haven't decided yet.


Speaking of boilerplate, Hillel Wayne recently wrote a short article on the origin of the term.



Got the Pic Store bootstrap command working today. Postgres's ability to defer constraint checking until the end of a transaction is very handy here, when you're loading a bunch of JSON files and don't want to sort them topologically according to the foreign key relationships between the tables.
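The pattern looks roughly like this (table and constraint names here are hypothetical, not Pic Store's schema): the foreign key has to be declared DEFERRABLE up front, and then a transaction can postpone the checks until COMMIT, so parent and child rows can be inserted in any order.

```sql
-- Hypothetical schema: the FK must be declared DEFERRABLE to be deferrable later.
ALTER TABLE images
    ADD CONSTRAINT images_project_id_fkey
    FOREIGN KEY (project_id) REFERENCES projects (id)
    DEFERRABLE INITIALLY IMMEDIATE;

BEGIN;
SET CONSTRAINTS ALL DEFERRED;
-- Insert rows from the JSON files in whatever order they arrive;
-- children may reference parents that haven't been inserted yet.
COMMIT;  -- all deferred constraints are checked here
```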

I've also been getting a steady stream of Github notification emails for the past hour, all from the same PR at work that someone else is working on. I had an idea a while back to make an Email Digest Service and this is pushing me closer to doing just that. At least Pic Store is getting close enough to MVP that I can push through and start using it soon.


AWS Route53 lets you create "alias" records that work like a CNAME, but just redirect to another AWS resource (including other Route53 entries). This incurs less cost than a normal CNAME, since resolving the alias doesn't count as an extra lookup.
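For reference, an alias record in a Route53 change batch looks roughly like this; the domain and DNSName are illustrative, and the HostedZoneId shown is the fixed zone ID AWS documents for CloudFront targets (other target types use their own):

```json
{
    "Changes": [{
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "www.example.com.",
            "Type": "A",
            "AliasTarget": {
                "HostedZoneId": "Z2FDTNDATAQYW2",
                "DNSName": "d111111abcdef8.cloudfront.net.",
                "EvaluateTargetHealth": false
            }
        }
    }]
}
```

Note there's no TTL field: alias records take their TTL from the target resource.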



Today I wrote image reading and writing tests for Pic Store. I came across a number of issues reading AVIF files; these were eventually fixed by switching directly to the `libavif` crate, which seems to do a better job parsing certain AVIF files that aren't quite up to spec.

I also made a fork of the `imageinfo` crate to help with detecting these files, but I need to go back and see if my fix is really correct before I submit it upstream.



Started up development on Pic Store again. I switched the task queue over to Effectum and got it to the point where the server starts, creates the database, and a simple authenticated request succeeds. A lot of the core code for the project is already implemented, so there will be some testing to see if everything actually works, and then I can start on a simple web interface and tools to make it convenient to use.

I'm thinking of a Vite plugin that will automatically upload the images if needed and generate a full `<picture>` tag, plus some CLI/GUI utilities to make it convenient to upload and reference images from other contexts, such as when writing a document.

If it's possible, I would love to be able to drag an image straight into Logseq, and have that handle the upload and URL pasting. Will have to see if the plugin system allows intercepting events like that.

I've also been rethinking Ergo a bit. While I previously did a lot of work on the JavaScript task model and got it mostly working, the development experience still paled compared to any real development environment. I achieved a fair amount there, but I think I'm going to rip out a lot of that support and focus instead on the "hosted state machine" model I had originally envisioned.

Since I already have a working JS engine in the project, the state machines will continue to support bits of JS to assist in evaluating conditions and such. Tasks with heavier scripting will be runnable as actions that spawn external processes, and can then return values that can trigger further actions in the state machines.

From there, I can create other types of tasks that are basically wrappers around certain types of state machines, and hopefully come up with something that's both intuitive to use and actually useful.




Use with care, but `git clean -f` will delete all untracked files from a repository.
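A dry run with `-n` first is a good habit before the destructive version. A quick demo in a throwaway repo (in real use you'd run these in your own checkout):

```shell
# Demo in a scratch repo so nothing real gets deleted.
cd "$(mktemp -d)" && git init -q
touch scratch.txt   # an untracked file
git clean -n        # dry run: prints "Would remove scratch.txt"
git clean -f        # actually removes it
```

Adding `-d` (`git clean -fd`) also removes untracked directories.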



The initial release of Effectum is out! Pretty happy with how it turned out, and my first application will be to use it for background image processing in Pic Store. Look for more about that in the coming weeks.

One interesting issue I encountered immediately after releasing v0.1.0 was that the trait bounds on the job runner were overly strict. I had required `Sync` on the future returned by the job runner function, when nothing in the library actually required that, which meant a job runner function could not hold a non-`Sync` value across an await point. Fortunately this was just a matter of removing the trait bound and adding a test whose job runner function holds a `Cell` across an await.



When it comes time to actually document code for public consumption, Rust's `missing_docs` lint is great. I don't have to search out every place that needs documentation; I can just let the compiler yell at me about it.

In related news, Effectum's first release is basically done. All the basic tests are there and I got a big performance improvement by batching database operations together, so it can process about 50,000 do-nothing jobs per second on my laptop. Going to clean some things up and then the first actual release should come early next week.

Some future features will include auto-recurring jobs, the ability to cancel or modify jobs that haven't started running yet, and support for running the task queue as its own server.