Use `refinery_cli` against a folder of `.sql` migrations.
I got tired of commenting out my code when I just wanted to rerun the initial migration.
Plain SQL is a lot more flexible than the `barrel` syntax.
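For reference, refinery discovers migrations by file name using its `V{version}__{name}.sql` convention, so the folder looks something like this (the migration names below are just examples, not the project's real files):

```
migrations/
├── V1__initial.sql
└── V2__add_owners_table.sql
```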
Caches the responses of each GET handler in a separate capacity-limited cache (as a
custom cloneable `CachedResponse` struct). Subsequent requests build a
`Response` from the cached bytes instead of re-querying the database and
re-serializing the JSON. This greatly speeds up the list endpoints and
`get_interior_ref_list`.
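Roughly what that looks like, as a minimal sketch: it assumes the `lru` crate for the capacity-limited caches, and the fields of `CachedResponse` are guesses rather than the real struct:

```rust
use lru::LruCache;
use std::num::NonZeroUsize;

/// Pre-serialized response bytes; cloning this out of the cache is much
/// cheaper than re-querying the database and re-serializing the JSON.
#[derive(Clone)]
struct CachedResponse {
    status: u16,
    content_type: String,
    body: Vec<u8>,
}

fn main() {
    // One capacity-limited cache per GET handler, e.g. keyed by record id.
    let mut interior_ref_list_cache: LruCache<i32, CachedResponse> =
        LruCache::new(NonZeroUsize::new(100).unwrap());

    interior_ref_list_cache.put(
        1,
        CachedResponse {
            status: 200,
            content_type: "application/json".to_string(),
            body: br#"{"refs":[]}"#.to_vec(),
        },
    );

    // On a hit, build the `Response` straight from the cached bytes.
    if let Some(cached) = interior_ref_list_cache.get(&1) {
        let _response_body = cached.body.clone();
    }
}
```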
Also caches the api-key-to-id mapping for `Owner`s in order to speed up frequent
authentications.
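A sketch of that lookup path, using a plain `HashMap` and guessed key/id types (the real cache may well be capacity-limited like the others):

```rust
use std::collections::HashMap;

struct OwnerIdCache {
    owner_ids_by_api_key: HashMap<String, i32>,
}

impl OwnerIdCache {
    /// Check the cache first; fall back to `db_lookup` (a stand-in for the
    /// real query) only on a miss, then remember the result.
    fn authenticate(
        &mut self,
        api_key: &str,
        db_lookup: impl FnOnce(&str) -> Option<i32>,
    ) -> Option<i32> {
        if let Some(&id) = self.owner_ids_by_api_key.get(api_key) {
            return Some(id); // cache hit: no database round trip
        }
        let id = db_lookup(api_key)?;
        self.owner_ids_by_api_key.insert(api_key.to_owned(), id);
        Some(id)
    }
}
```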
Each create handler clears the entire list response cache. Each delete handler
does the same, and additionally evicts the cached response for the deleted key.
Deleting an owner also removes their entry from the `owner_ids_by_api_key`
cache.
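Sketched against hypothetical caches (again assuming the `lru` crate; names mirror the devlog but the exact layout is a guess), the invalidation rules look like:

```rust
use lru::LruCache;

struct Caches {
    list_responses: LruCache<String, Vec<u8>>,
    responses_by_id: LruCache<i32, Vec<u8>>,
    owner_ids_by_api_key: LruCache<String, i32>,
}

impl Caches {
    /// Create handlers: any cached list response may now be missing the new
    /// row, so drop them all.
    fn on_create(&mut self) {
        self.list_responses.clear();
    }

    /// Delete handlers: clear the list responses and evict just the deleted
    /// record's cached response.
    fn on_delete(&mut self, id: i32) {
        self.list_responses.clear();
        self.responses_by_id.pop(&id);
    }

    /// Deleting an owner also invalidates their api-key-to-id entry.
    fn on_delete_owner(&mut self, id: i32, api_key: &str) {
        self.on_delete(id);
        self.owner_ids_by_api_key.pop(api_key);
    }
}
```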
Ran into some limitations of sqlx while trying to bulk-create interior_refs. I
also discovered how slow inserting hundreds of rows into postgres is, so I'm
planning on saving the interior_refs data in a jsonb column instead, which
seems to be much faster.
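A sketch of the planned jsonb approach, assuming sqlx with its `postgres` and `json` features enabled; the table, columns, and `InteriorRef` fields here are made up for illustration:

```rust
use serde::{Deserialize, Serialize};
use sqlx::PgPool;

/// Hypothetical shape of one interior ref; the real fields differ.
#[derive(Serialize, Deserialize)]
struct InteriorRef {
    base_mod_name: String,
    position_x: f32,
}

/// One INSERT binding a single jsonb value replaces hundreds of row inserts.
async fn create_interior_ref_list(
    pool: &PgPool,
    owner_id: i32,
    refs: &[InteriorRef],
) -> Result<i32, sqlx::Error> {
    let (id,): (i32,) = sqlx::query_as(
        "INSERT INTO interior_ref_lists (owner_id, ref_list)
         VALUES ($1, $2)
         RETURNING id",
    )
    .bind(owner_id)
    .bind(sqlx::types::Json(refs))
    .fetch_one(pool)
    .await?;
    Ok(id)
}
```

The trade-off is that individual refs are no longer separate rows you can query relationally, which seems fine for data that's only ever read back as a whole list.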
Pretty comfortable with the choice of crates now, so it's time to start
committing.
Currently the API only returns generic errors, but returning good, descriptive errors is important.