Shops can now have multiple shelves, each saving its page, filter, sort, and other state to the server. Buttons for the shelves are reconstructed in the plugin
during the load-shop procedure.
Fixed issues with Bincode responses not actually being readable, oops. Also fixed handling of Bincode requests.
Added `TypedCache` for DRYing up GET request content-type handling.
Added `DeserializedBody` for DRYing up POST/PATCH request content-type handling.
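A minimal sketch of the kind of content-type dispatch these helpers centralize. The names and the `application/octet-stream` mapping for Bincode are assumptions for illustration, not the real API.

```rust
// Illustrative stand-in for the dispatch inside `TypedCache` /
// `DeserializedBody`: pick a serialization format from a request
// header, defaulting to JSON when the header is missing or unknown.
#[derive(Debug, PartialEq, Clone, Copy)]
enum ContentType {
    Json,
    Bincode,
}

fn content_type_from_header(header: Option<&str>) -> ContentType {
    match header {
        // Assumed mapping: Bincode bodies arrive as octet-stream.
        Some(value) if value.contains("application/octet-stream") => ContentType::Bincode,
        _ => ContentType::Json,
    }
}

fn main() {
    assert_eq!(content_type_from_header(Some("application/json")), ContentType::Json);
    assert_eq!(content_type_from_header(Some("application/octet-stream")), ContentType::Bincode);
    assert_eq!(content_type_from_header(None), ContentType::Json);
}
```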
Removed "Unsaved" structs since I could just mutate Posted structs instead.
Added improved error reporting and stopped sending unfiltered internal error data.
Upgraded sqlx to the proper 0.4.1 release.
This required a ton of changes, including updating error handling and separating the models into intermediate representations so that fields marked non-null in the database are not `Option` in the final model.
The update allows using `query_as!` in `interior_ref_list` and `merchandise_list`, and lets model functions take a generic `Executor` param that can accept a db pool connection, a transaction, or a plain db connection. This should allow me to implement my old `Model` trait again.
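The generic-executor pattern can be sketched with stand-in types (this is not real sqlx, just an illustration of writing a model function once and calling it with any executor):

```rust
// Stand-in trait for sqlx's `Executor`; the real one is generic over
// lifetimes and a `Database` type.
trait Executor {
    fn execute(&mut self, query: &str) -> u64;
}

// Stand-ins for a pooled connection and a transaction.
struct PoolConnection;
struct Transaction;

impl Executor for PoolConnection {
    fn execute(&mut self, _query: &str) -> u64 { 1 }
}
impl Executor for Transaction {
    fn execute(&mut self, _query: &str) -> u64 { 1 }
}

// A model function written once, callable with either executor type.
fn save_interior_ref_list(executor: &mut impl Executor) -> u64 {
    executor.execute("INSERT INTO interior_ref_lists ...")
}

fn main() {
    assert_eq!(save_interior_ref_list(&mut PoolConnection), 1);
    assert_eq!(save_interior_ref_list(&mut Transaction), 1);
}
```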
Also compile times are magically 20x faster?!?
Uses `tokio::spawn` to delay updating the cache while the server responds to the request.
Because `tokio::spawn` can run on another thread, references passed to it need to be `'static`, so I initialized the cache in `lazy_static`.
Now the client can opt out of receiving the whole JSON body if it hasn't changed since they last requested.
Right now, only the `ETag` and `If-None-Match` headers are implemented, which isn't very RFC-spec compliant, but it's all I need so I don't care.
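The conditional-request check can be sketched like this (a simplification: the real server presumably derives the ETag during caching, and the hash function here is just std's `DefaultHasher`):

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Derive an ETag from the response body bytes.
fn etag_for(body: &[u8]) -> String {
    let mut hasher = DefaultHasher::new();
    body.hash(&mut hasher);
    format!("\"{:x}\"", hasher.finish())
}

// 304 Not Modified when the client's If-None-Match matches the current
// ETag; otherwise 200 with the full body.
fn status_for(body: &[u8], if_none_match: Option<&str>) -> u16 {
    match if_none_match {
        Some(tag) if tag == etag_for(body) => 304,
        _ => 200,
    }
}

fn main() {
    let body = br#"{"shops":[]}"#;
    let tag = etag_for(body);
    assert_eq!(status_for(body, Some(tag.as_str())), 304);
    assert_eq!(status_for(body, None), 200);
    assert_eq!(status_for(body, Some("\"stale\"")), 200);
}
```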
Creates the transaction record and updates the merchandise quantity in one db transaction.
Managed to do the merchandise update in one UPDATE query, but the error that's thrown when an item to buy is not found is pretty confusing, so I convert it to a 404.
I also added some DB indexes.
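The not-found conversion can be sketched as checking the affected-row count of the single UPDATE; the error type and helper name are illustrative, not the real handler code, and this simplifies whatever confusing error the database actually raises.

```rust
// Illustrative error type; rendered as an HTTP status by the server.
#[derive(Debug, PartialEq)]
enum ApiError {
    NotFound, // mapped to HTTP 404
}

// After the single UPDATE that decrements the merchandise quantity,
// zero affected rows means the item to buy doesn't exist.
fn check_buy_result(rows_affected: u64) -> Result<(), ApiError> {
    if rows_affected == 0 {
        Err(ApiError::NotFound)
    } else {
        Ok(())
    }
}

fn main() {
    assert_eq!(check_buy_result(0), Err(ApiError::NotFound));
    assert_eq!(check_buy_result(1), Ok(()));
}
```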
Use `refinery_cli` against a folder of `.sql` migrations.
I got tired of commenting out my code when I just wanted to rerun the initial migration.
Plain SQL is a lot more flexible than the `barrel` syntax.
Caches responses of each GET handler in a separate capacity-limited cache (as a
custom clone-able `CachedResponse` struct). Subsequent requests will build a
`Response` from the cached bytes instead of re-querying the database and
re-serializing the JSON. This greatly speeds up the list endpoints and
`get_interior_ref_list`.
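A minimal sketch of a capacity-limited cache of cloneable responses, assuming simple oldest-first eviction (the real `CachedResponse` would also hold status and headers, not just body bytes):

```rust
use std::collections::{HashMap, VecDeque};

// Clone-able cached response; cloning hands back the stored bytes
// instead of re-querying the database and re-serializing JSON.
#[derive(Clone, Debug, PartialEq)]
struct CachedResponse {
    body: Vec<u8>,
}

struct Cache {
    capacity: usize,
    map: HashMap<String, CachedResponse>,
    order: VecDeque<String>, // insertion order, for simple eviction
}

impl Cache {
    fn new(capacity: usize) -> Self {
        Cache { capacity, map: HashMap::new(), order: VecDeque::new() }
    }

    fn get(&self, key: &str) -> Option<CachedResponse> {
        self.map.get(key).cloned()
    }

    fn insert(&mut self, key: String, response: CachedResponse) {
        // At capacity: evict the oldest entry before inserting.
        if self.map.len() >= self.capacity {
            if let Some(oldest) = self.order.pop_front() {
                self.map.remove(&oldest);
            }
        }
        self.order.push_back(key.clone());
        self.map.insert(key, response);
    }
}

fn main() {
    let mut cache = Cache::new(2);
    cache.insert("a".to_string(), CachedResponse { body: b"1".to_vec() });
    cache.insert("b".to_string(), CachedResponse { body: b"2".to_vec() });
    cache.insert("c".to_string(), CachedResponse { body: b"3".to_vec() }); // evicts "a"
    assert!(cache.get("a").is_none());
    assert!(cache.get("c").is_some());
}
```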
Also caches the api-key-to-id mapping for `Owner`s in order to speed up frequent
authentications.
Each create handler clears the entire list response cache. Each delete handler
also clears the entire list response cache and deletes the cached response for
that key. Deleting an owner also deletes their entry in the
`owner_ids_by_api_key` cache.
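The invalidation rules above can be sketched with plain `HashMap`s standing in for the capacity-limited caches (struct and method names here are illustrative):

```rust
use std::collections::HashMap;

struct Caches {
    list_responses: HashMap<String, Vec<u8>>,
    responses_by_id: HashMap<i32, Vec<u8>>,
    owner_ids_by_api_key: HashMap<String, i32>,
}

impl Caches {
    // A new row can appear on any list page, so drop all list responses.
    fn on_create(&mut self) {
        self.list_responses.clear();
    }

    // Deletes also drop the cached response for the deleted key.
    fn on_delete(&mut self, id: i32) {
        self.list_responses.clear();
        self.responses_by_id.remove(&id);
    }

    // Deleting an owner additionally removes their api-key-to-id entry.
    fn on_delete_owner(&mut self, id: i32, api_key: &str) {
        self.on_delete(id);
        self.owner_ids_by_api_key.remove(api_key);
    }
}

fn main() {
    let mut caches = Caches {
        list_responses: HashMap::new(),
        responses_by_id: HashMap::new(),
        owner_ids_by_api_key: HashMap::new(),
    };
    caches.list_responses.insert("shops?page=1".to_string(), b"[]".to_vec());
    caches.responses_by_id.insert(7, b"{}".to_vec());
    caches.owner_ids_by_api_key.insert("key".to_string(), 7);
    caches.on_create();
    caches.on_delete_owner(7, "key");
    assert!(caches.list_responses.is_empty());
    assert!(!caches.responses_by_id.contains_key(&7));
    assert!(!caches.owner_ids_by_api_key.contains_key("key"));
}
```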