crawlnicle

My personalized news and blog aggregator. Taking back control over the algorithm. Pining for the days of Google Reader. An excuse to write more Rust.

Development Instructions

Prerequisites

Install these requirements to get started developing crawlnicle.

First-time setup

  1. Create postgres user and database:

    createuser crawlnicle
    createdb crawlnicle
    sudo -u postgres -i psql
    postgres=# ALTER DATABASE crawlnicle OWNER TO crawlnicle;
    postgres=# ALTER USER crawlnicle CREATEDB;
    postgres=# \password crawlnicle
    
    # Or, on Windows in PowerShell:
    
    & 'C:\Program Files\PostgreSQL\13\bin\createuser.exe' -U postgres crawlnicle
    & 'C:\Program Files\PostgreSQL\13\bin\createdb.exe' -U postgres crawlnicle
    & 'C:\Program Files\PostgreSQL\13\bin\psql.exe' -U postgres
    postgres=# ALTER DATABASE crawlnicle OWNER TO crawlnicle;
    postgres=# ALTER USER crawlnicle CREATEDB;
    postgres=# \password crawlnicle
    
  2. Save the password somewhere safe, then add a .env file to the project directory with the following contents:

    RUST_LOG=crawlnicle=debug,cli=debug,lib=debug,tower_http=debug,sqlx=debug
    HOST=127.0.0.1
    PORT=3000
    PUBLIC_URL=http://localhost:3000
    DATABASE_URL=postgresql://crawlnicle:<password>@localhost/crawlnicle
    DATABASE_MAX_CONNECTIONS=5
    REDIS_URL=redis://localhost
    TITLE=crawlnicle
    MAX_MEM_LOG_SIZE=1000000
    CONTENT_DIR=./content
    SMTP_SERVER=smtp.gmail.com
    SMTP_USER=user
    SMTP_PASSWORD=password
    EMAIL_FROM="crawlnicle <no-reply@mail.crawlnicle.com>"
    SESSION_SECRET=64-bytes-of-secret
    IP_SOURCE=ConnectInfo
    
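The SESSION_SECRET line above is a placeholder and needs a real value. One way to generate 64 random bytes (hex-encoded, so 128 characters) is with openssl, assuming it is installed:

```shell
# Generate 64 random bytes, hex-encoded, for use as SESSION_SECRET.
# Hex encoding doubles the length: 64 bytes -> 128 characters.
openssl rand -hex 64
```

Paste the output as the SESSION_SECRET value in .env.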
  3. Run just migrate (or sqlx migrate run) to apply all of the database migrations.
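One caveat about the DATABASE_URL in step 2: if the password contains characters that are reserved in URLs (such as @, /, or :), it must be percent-encoded before it goes into the connection string. A quick way to do that, assuming python3 is available:

```shell
# Percent-encode a password for safe use inside DATABASE_URL.
# 'p@ss/word' is only an example value, not a suggested password.
python3 -c 'import sys, urllib.parse; print(urllib.parse.quote(sys.argv[1], safe=""))' 'p@ss/word'
# prints: p%40ss%2Fword
```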

Running in Development

Run just watch to build and run the server while watching the source files for changes and recompiling when modifications are made.

The server also triggers the browser to reload the page when the server binary is updated and the server is restarted.

It also separately watches the files in frontend/; changes there trigger a transpilation with bun and then rebuild the server binary so that it includes the new JS bundle names.

Alternatively, you can run cargo run directly after building the frontend JavaScript with just build-dev-frontend.

Building for Production

You can also build the binary in release mode for running in production with the just build command. This first builds the minified frontend JavaScript (just build-frontend) and then builds the Rust binary with cargo build --release.

Using the CLI

This project also comes with a CLI binary which allows you to manipulate the database directly without needing to go through the REST API server. Run cli --help to see all of the available commands.