  • Writing Functions Generic across Connection Types

    • Sometimes you want a function that can take a connection pool, a connection, or a transaction. This is possible in sqlx, but it's not totally obvious how to do it.
    • Executor Trait

      • For functions that make just a single query, you can accept any implementation of the Executor trait (here PgExecutor, the alias specialized for Postgres)
      • async fn do_query(db: impl PgExecutor<'_>) -> Result<Vec<Stuff>, sqlx::Error> {
            sqlx::query_as!(Stuff, "SELECT * FROM stuff")
                .fetch_all(db)
                .await
        }
      • The main issue here is that Rust doesn't give you an easy way to reuse that impl PgExecutor value: passing it to a query consumes it, and a reference to it won't itself implement PgExecutor. Still, this works fine for any function that makes only one database query, and it provides the simplest experience for the caller.
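      • For illustration, a sketch of the caller's side (assuming the do_query function above and a recent sqlx version; this needs a live database, so treat it as a sketch rather than a tested example):

```rust
use sqlx::PgPool;

// Sketch only: assumes the `do_query(impl PgExecutor<'_>)` function above.
async fn callers(pool: &PgPool) -> Result<(), sqlx::Error> {
    // A &PgPool is itself an Executor; each query checks out a connection.
    do_query(pool).await?;

    // A mutable reference to a checked-out connection also works.
    let mut conn = pool.acquire().await?;
    do_query(&mut *conn).await?;

    // So does a transaction, by reborrowing its inner connection.
    let mut tx = pool.begin().await?;
    do_query(&mut *tx).await?;
    tx.commit().await?;
    Ok(())
}
```

        Note that each call hands over its own fresh reference (or the pool itself), which is why this pattern sidesteps the "can't pass it twice" problem within a single function body.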
    • Dereferencing to Connection

      • This method requires you to manually acquire a connection from the pool first, but it then allows you to call the function with either a plain connection or a transaction.
      • Here, you need to write &mut *db any time you use the connection. This looks a bit weird, but it reborrows the mutable reference for each query, and takes advantage of the fact that all the relevant objects (pool connections, transactions) implement DerefMut to give a mutable reference to the underlying Connection.
      • async fn do_query(db: &mut PgConnection) -> Result<Vec<Stuff>, sqlx::Error> {
            sqlx::query!("DELETE FROM stuff WHERE NOT active")
                .execute(&mut *db)
                .await?;
            sqlx::query_as!(Stuff, "SELECT * FROM stuff")
                .fetch_all(&mut *db)
                .await
        }

        async fn call_it(db: sqlx::PgPool) -> Result<(), sqlx::Error> {
            let mut tx = db.begin().await?;
            // Get the underlying connection from the transaction and pass it in.
            do_query(&mut *tx).await?;
            tx.commit().await?;
            Ok(())
        }
  • Deserializing JSON

    • Custom Types

      • This is largely unintuitive if you don't want to go through a serde_json::Value. The trick is to rename the column in the query to include type information, which sqlx will use to decide which type to decode the column into.
      • #[derive(Serialize, Deserialize)]
        pub struct SomeJsonField {
            // Whatever fields match the JSON structure
            pub name: String,
            pub some_param: Option<String>,
            pub count: i32,
        }

        pub struct QueryResult {
            pub id: i32,
            pub name: String,
            pub params: sqlx::types::Json<SomeJsonField>,
        }

        async fn do_query(pool: &PgPool) -> Result<Vec<QueryResult>, sqlx::Error> {
            sqlx::query_as!(
                QueryResult,
                r##"SELECT id, name,
                    params as "params: sqlx::types::Json<SomeJsonField>"
                  FROM some_table"##
            )
            .fetch_all(pool)
            .await
        }
      • I also have a crate, sqlx-transparent-json-decode, which provides a macro to implement the Decode trait for JSON columns for any structure that implements Deserialize.
      • With this crate, you can import the sqlx_json_decode macro, then add sqlx_json_decode!(SomeJsonField) to the above code, and then you won't have to wrap the type in sqlx::types::Json anymore. This can be convenient when using one of these structures for both database results and other purposes.
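      • As a sketch (reusing the types from the example above), the usage looks roughly like this:

```rust
use serde::{Deserialize, Serialize};
use sqlx_transparent_json_decode::sqlx_json_decode;

#[derive(Serialize, Deserialize)]
pub struct SomeJsonField {
    pub name: String,
    pub some_param: Option<String>,
    pub count: i32,
}

// Generates a Decode impl that deserializes the JSON column directly.
sqlx_json_decode!(SomeJsonField);

pub struct QueryResult {
    pub id: i32,
    pub name: String,
    // No sqlx::types::Json wrapper needed now.
    pub params: SomeJsonField,
}
```

        The type override in the query then becomes simply `params as "params: SomeJsonField"`.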
    • Box<RawValue>

      • Sometimes you just want to get the raw JSON out of a field without needing to create a bunch of objects from it. For example, a web server that is passing the JSON straight down to the client without even looking at it. serde_json provides the RawValue type for this, but it is a reference type, and so needs to be wrapped in a Box or something to be usable once the database connection is released.
      • sqlx provides an implementation to decode a &RawValue, but not a Box<RawValue>, and so the sqlx-transparent-json-decode crate also offers a wrapper type around Box<RawValue> that implements Decode.
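      • A sketch of what that looks like (I'm calling the wrapper BoxedRawValue here from memory; check the crate's documentation for the exact name):

```rust
use serde::Serialize;
// The wrapper type name here is an assumption; see the crate docs.
use sqlx_transparent_json_decode::BoxedRawValue;

#[derive(Serialize)]
pub struct QueryResult {
    pub id: i32,
    // Never parsed into a structure; serialized back out as the original JSON.
    pub params: BoxedRawValue,
}
```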
  • Migrations

    • To update the stored checksum of a migration after its file has changed, you can recalculate the hash:
      • cat migrations/FILENAME.sql | openssl dgst -sha384 | cut -d ' ' -f 2
    • Then change the checksum for that row in the database:
      • update _sqlx_migrations
        set checksum = '\x<the checksum>'
        where version = '<the version>';
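    • The two steps can be combined into a small helper; a sketch, assuming sqlx's usual `<version>_<name>.sql` migration filename format (it prints the UPDATE statement rather than running it, so you can review it first):

```shell
# Sketch: print the UPDATE statement that fixes the stored checksum
# for an edited migration file. Run the output against your database.
sqlx_fix_checksum() {
  file="$1"
  # sqlx stores a SHA-384 hash of the file contents.
  checksum=$(openssl dgst -sha384 < "$file" | cut -d ' ' -f 2)
  # The version is the leading timestamp portion of the filename.
  version=$(basename "$file" | sed 's/_.*//')
  printf '%s\n' "update _sqlx_migrations set checksum='\x${checksum}' where version='${version}';"
}
```

      For example, `sqlx_fix_checksum migrations/20230101000000_create_stuff.sql` (a hypothetical filename) prints the statement to paste into psql.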

Thanks for reading! If you have any questions or comments, please send me a note on Twitter.