
Didn't have time to mention it here last night, but the Let's Encrypt DNS challenge tool is all working. It starts the challenge, adds the required DNS entry to Vercel, and then uploads the resulting certificate to DigitalOcean and enables it.
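The middle step of that flow — publishing the challenge token as a TXT record via Vercel's DNS API — can be sketched in shell. This is just a sketch, not the tool's actual code: `DOMAIN` and `CHALLENGE_TOKEN` are made-up placeholders, and the real tool makes the API call from code rather than curl.

```shell
# Hypothetical values; the real ones come from the ACME client.
DOMAIN="example.com"
CHALLENGE_TOKEN="value-lets-encrypt-asked-us-to-publish"

# The TXT record payload for Vercel's DNS records endpoint
# (POST /v2/domains/{domain}/records). A short TTL keeps the
# challenge record from lingering in resolver caches.
PAYLOAD=$(printf '{"type":"TXT","name":"_acme-challenge","value":"%s","ttl":60}' "$CHALLENGE_TOKEN")
echo "$PAYLOAD"

# With a real API token, the request would look something like:
# curl -s -X POST "https://api.vercel.com/v2/domains/${DOMAIN}/records" \
#   -H "Authorization: Bearer ${VERCEL_TOKEN}" \
#   -H "Content-Type: application/json" \
#   -d "$PAYLOAD"
```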

It's a bit over-engineered (a shell script run every 60 days would probably have been sufficient), but this will make it easier to add support for more hosts and service types in the future for other projects.

I spent an hour trying to figure out why Let's Encrypt was rejecting my DNS record, and in the end it turned out to be just a typo in my DNS record (_acme_challenge instead of _acme-challenge). But overall the process wasn't too bad.
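For anyone debugging the same kind of rejection, it helps to query the record yourself before triggering validation. A quick sketch, using a hypothetical example.com domain:

```shell
DOMAIN="example.com"                 # hypothetical; substitute your own
RECORD="_acme-challenge.${DOMAIN}"   # hyphen after _acme, not an underscore
echo "$RECORD"

# Ask a public resolver whether the TXT record has propagated yet:
# dig +short TXT "$RECORD" @1.1.1.1
```

If that query comes back empty, or the label doesn't match exactly, Let's Encrypt will reject the challenge too.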

I'll get around to a full writeup in the near future, I hope, but in the meantime here's the GitHub repo.


In getting S3 upload for Pic Store to work, I found that DigitalOcean Spaces defaults all files to private, with no way to change the default from the web UI. But you can use s3cmd to do it by setting the S3 bucket policy. First, create a policy JSON file:

    "Version": "2008-10-17",
    "Statement": [
        "Sid": "AddPerm",
        "Effect": "Allow",
        "Principal": "*",
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3::YOUR_BUCKET_NAME/*"

Then you can apply it to the bucket like so:

    s3cmd --access_key=$ACCESS_KEY --secret_key=$SECRET_KEY \
        --host=$REGION.digitaloceanspaces.com \
        --host-bucket=$BUCKET.$REGION.digitaloceanspaces.com \
        --region $REGION \
        setpolicy policy.json s3://$BUCKET

It's a bit quirky: the DO web interface still shows the files as private, but they are actually public, and the policy applies to files uploaded in the future as well.
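Since the UI can't be trusted here, the simplest way to convince yourself the files really are public is an anonymous request against the bucket's public URL. A sketch with made-up bucket, region, and object names:

```shell
# Hypothetical names; substitute your own.
BUCKET="my-bucket"
REGION="nyc3"
OBJECT="images/test.png"

URL="https://${BUCKET}.${REGION}.digitaloceanspaces.com/${OBJECT}"
echo "$URL"

# An unauthenticated request should return HTTP 200 once the policy
# is in place (and 403 before it):
# curl -sI "$URL"
```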

With all that done, the good news is that Pic Store is now uploading files to cloud storage. Many thanks to Hugo for figuring out this policy file solution and writing about it!

Thanks for reading! If you have any questions or comments, please send me a note on Twitter.