Add caching with LRU cache under mutex

Caches responses of each GET handler in a separate capacity-limited cache (as a
custom clone-able `CachedResponse` struct). Subsequent requests will build a
`Response` from the cached bytes instead of re-querying the database and
re-serializing the JSON. This greatly speeds up the list endpoints and
`get_interior_ref_list`.
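
A minimal sketch of that shape, assuming the `lru` crate for the capacity-limited store (the `CachedResponse` name comes from the commit message; the `ResponseCache` wrapper and its methods are illustrative, not the actual implementation):

```rust
use std::num::NonZeroUsize;
use std::sync::{Arc, Mutex};

use lru::LruCache; // assumed dependency; any capacity-limited map would do

/// Clone-able snapshot of an already-serialized response, so a handler can
/// replay the bytes without re-querying the database or re-serializing JSON.
#[derive(Debug, Clone)]
pub struct CachedResponse {
    pub status: u16,
    pub body: Vec<u8>,
}

/// One capacity-limited LRU cache per GET handler, guarded by a mutex so it
/// can be shared across concurrent handler tasks.
#[derive(Clone)]
pub struct ResponseCache {
    inner: Arc<Mutex<LruCache<i32, CachedResponse>>>,
}

impl ResponseCache {
    pub fn new(capacity: usize) -> Self {
        Self {
            inner: Arc::new(Mutex::new(LruCache::new(
                NonZeroUsize::new(capacity).expect("capacity must be non-zero"),
            ))),
        }
    }

    /// Return a clone of the cached response for `key`, if present.
    pub fn get(&self, key: i32) -> Option<CachedResponse> {
        self.inner.lock().unwrap().get(&key).cloned()
    }

    /// Insert or refresh the cached response for `key`.
    pub fn insert(&self, key: i32, response: CachedResponse) {
        self.inner.lock().unwrap().put(key, response);
    }

    /// Drop the cached response for `key`, if present.
    pub fn evict(&self, key: i32) {
        self.inner.lock().unwrap().pop(&key);
    }

    /// Drop every entry (used when a write invalidates the whole cache).
    pub fn clear(&self) {
        self.inner.lock().unwrap().clear();
    }
}
```

A handler would check `get` first and only fall through to sqlx on a miss, storing the freshly serialized bytes with `insert` before returning.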

Also caches the api-key-to-id mapping for `Owner`s in order to speed up frequent
authentications.
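
The authentication fast path might look something like the following; the `owner_ids_by_api_key` name is from the commit message, while the key type, the helper function, and the stand-in database future are assumptions:

```rust
use std::num::NonZeroUsize;
use std::sync::{Arc, Mutex};

use lru::LruCache;

/// Hypothetical api-key-to-owner-id cache; in practice it would sit alongside
/// the response caches in the shared `Caches` struct.
pub type OwnerIdCache = Arc<Mutex<LruCache<String, i32>>>;

pub fn new_owner_id_cache(capacity: usize) -> OwnerIdCache {
    Arc::new(Mutex::new(LruCache::new(
        NonZeroUsize::new(capacity).expect("capacity must be non-zero"),
    )))
}

/// Resolve an owner id from an api key, only hitting the database on a miss.
/// `fetch_from_db` stands in for whatever sqlx query the real handler runs.
pub async fn cached_owner_id<F>(
    cache: &OwnerIdCache,
    api_key: &str,
    fetch_from_db: F,
) -> Option<i32>
where
    F: std::future::Future<Output = Option<i32>>,
{
    // The lock is only held for the lookup; it is released before the await.
    if let Some(id) = cache.lock().unwrap().get(api_key).copied() {
        return Some(id);
    }
    let id = fetch_from_db.await?;
    cache.lock().unwrap().put(api_key.to_string(), id);
    Some(id)
}
```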

Each create handler clears the entire list response cache. Each delete handler
also clears the entire list response cache and deletes the cached response for
that key. Deleting an owner also deletes their entry in the
`owner_ids_by_api_key` cache.
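
Building on those sketches, the invalidation rules above could hang off a single `Caches` struct that groups everything the handlers share (every field name here except `owner_ids_by_api_key` is a guess):

```rust
/// Groups the caches that handlers share via the request environment.
/// Builds on the `ResponseCache` and `OwnerIdCache` sketches above.
#[derive(Clone)]
pub struct Caches {
    pub owner: ResponseCache,                   // single-record GET responses
    pub interior_ref_list: ResponseCache,       // single-record GET responses
    pub list_owners: ResponseCache,             // list endpoint responses
    pub list_interior_ref_lists: ResponseCache, // list endpoint responses
    pub owner_ids_by_api_key: OwnerIdCache,
}

impl Caches {
    /// Create handlers: a new row makes every cached list response stale,
    /// while existing single-record responses remain valid.
    pub fn invalidate_for_create(&self) {
        self.list_owners.clear();
        self.list_interior_ref_lists.clear();
    }

    /// Delete handlers: clear the list caches and drop the cached response
    /// for the deleted record's key.
    pub fn invalidate_for_delete(&self, cache: &ResponseCache, id: i32) {
        self.invalidate_for_create();
        cache.evict(id);
    }

    /// Deleting an owner additionally removes their api-key mapping so the
    /// stale key can no longer authenticate from the cache.
    pub fn invalidate_for_owner_delete(&self, id: i32, api_key: &str) {
        self.invalidate_for_delete(&self.owner, id);
        self.owner_ids_by_api_key.lock().unwrap().pop(api_key);
    }
}
```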
2020-08-01 00:25:04 -04:00
parent 68b04b4f4c
commit 519fcb4c5a
14 changed files with 470 additions and 123 deletions

@@ -12,12 +12,15 @@ use tracing_subscriber::fmt::format::FmtSpan;
 use url::Url;
 use warp::Filter;
+mod caches;
 mod db;
 mod filters;
 mod handlers;
 mod models;
 mod problem;
+use caches::Caches;
 #[derive(Clap)]
 #[clap(version = "0.1.0", author = "Tyler Hallada <tyler@hallada.net>")]
 struct Opts {
@@ -28,6 +31,7 @@ struct Opts {
 #[derive(Debug, Clone)]
 pub struct Environment {
     pub db: PgPool,
+    pub caches: Caches,
     pub api_url: Url,
 }
@@ -38,6 +42,7 @@ impl Environment {
                 .max_size(5)
                 .build(&env::var("DATABASE_URL")?)
                 .await?,
+            caches: Caches::initialize(),
             api_url,
         })
     }
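
The hunks above wire a `Caches` value into the shared `Environment` through `Caches::initialize()`. Continuing the sketch, that constructor may be little more than choosing capacities (the numbers below are arbitrary); note that because `Environment` derives `Debug`, the real `Caches` type also needs a `Debug` impl, which the sketch omits:

```rust
impl Caches {
    /// Hypothetical constructor matching the `Caches::initialize()` call added
    /// in the diff; the capacities are illustrative, not taken from the commit.
    pub fn initialize() -> Self {
        Self {
            owner: ResponseCache::new(100),
            interior_ref_list: ResponseCache::new(100),
            list_owners: ResponseCache::new(100),
            list_interior_ref_lists: ResponseCache::new(100),
            owner_ids_by_api_key: new_owner_id_cache(100),
        }
    }
}
```

Because each cache is just an `Arc` handle internally, cloning `Environment` for every request stays cheap and all clones share the same underlying entries.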