author    Dawid Rycerz <dawid@rycerz.xyz>  2025-07-16 23:03:40 +0300
committer Dawid Rycerz <dawid@rycerz.xyz>  2025-07-16 23:03:40 +0300
commit    1aee0b802cad9fc9343b6c2966ba112f9b762f7c
tree      53d9551fbfd3df01ac61ecd1128060a9a9727a84
parent    dbb25297da61fe393ca1e8a6b6c6beace2513e0a
feat: refactor and remove lib usage
-rw-r--r--   README.md                   257
-rw-r--r--   src/lib.rs                  565
-rw-r--r--   src/locations.rs              1
-rw-r--r--   src/main.rs                 586
-rw-r--r--   src/notifications.rs          1
-rw-r--r--   src/users.rs                  1
-rw-r--r--   src/weather_poller.rs       291
-rw-r--r--   src/weather_thresholds.rs    30
8 files changed, 619 insertions, 1113 deletions
diff --git a/README.md b/README.md
index 3910fc4..6faebe1 100644
--- a/README.md
+++ b/README.md
@@ -1,6 +1,6 @@
-# Silmataivas (Rust Rewrite)
+# Silmataivas (Rust Version)
-Silmataivas is a weather monitoring service that sends personalized alerts based on user-defined thresholds and notification preferences. This is the Rust rewrite, providing a RESTful API for managing users, locations, weather thresholds, and notification settings.
+Silmataivas is a weather monitoring service that sends personalized alerts based on user-defined thresholds and notification preferences. This is the Rust version, providing a RESTful API for managing users, locations, weather thresholds, and notification settings.
## Features
- Weather monitoring using OpenWeatherMap API
@@ -8,244 +8,59 @@ Silmataivas is a weather monitoring service that sends personalized alerts based
- Flexible notifications: NTFY (push) and SMTP (email)
- User-specific configuration
- RESTful API for all resources
-- Automatic OpenAPI documentation at `/docs`
-## API Usage
-All API endpoints (except `/health` and `/docs`) require authentication using a Bearer token:
-
-```
-Authorization: Bearer <user_id>
-```
-
-### Main Endpoints
-- `GET /health` — Health check
-- `GET /api/users` — List users
-- `POST /api/users` — Create user
-- `GET /api/users/:id` — Get user
-- `PUT /api/users/:id` — Update user
-- `DELETE /api/users/:id` — Delete user
-- `GET /api/locations` — List locations
-- `POST /api/locations` — Create location
-- `GET /api/locations/:id` — Get location
-- `PUT /api/locations/:id` — Update location
-- `DELETE /api/locations/:id` — Delete location
-- `GET /api/weather-thresholds?user_id=...` — List thresholds for user
-- `POST /api/weather-thresholds` — Create threshold
-- `GET /api/weather-thresholds/:id/:user_id` — Get threshold
-- `PUT /api/weather-thresholds/:id/:user_id` — Update threshold
-- `DELETE /api/weather-thresholds/:id/:user_id` — Delete threshold
-- `GET /api/ntfy-settings/:user_id` — Get NTFY settings
-- `POST /api/ntfy-settings` — Create NTFY settings
-- `PUT /api/ntfy-settings/:id` — Update NTFY settings
-- `DELETE /api/ntfy-settings/:id` — Delete NTFY settings
-- `GET /api/smtp-settings/:user_id` — Get SMTP settings
-- `POST /api/smtp-settings` — Create SMTP settings
-- `PUT /api/smtp-settings/:id` — Update SMTP settings
-- `DELETE /api/smtp-settings/:id` — Delete SMTP settings
-
-For full details and request/response schemas, see the interactive OpenAPI docs at [`/docs`](http://localhost:4000/docs).
-
----
-
-To start your Phoenix server:
-
- * Run `mix setup` to install and setup dependencies
- * Copy `.env.example` to `.env` and configure your environment variables: `cp .env.example .env`
- * Load environment variables: `source .env` (or use your preferred method)
- * Start Phoenix endpoint with `mix phx.server` or inside IEx with `iex -S mix phx.server`
-
-Now you can visit [`localhost:4000`](http://localhost:4000) from your browser.
-
-## Database Configuration
-
-This application supports both SQLite and PostgreSQL:
+## Project Structure
+- **src/main.rs**: All application logic and API endpoints (no lib.rs, not a library)
+- **src/**: Modules for users, locations, notifications, weather thresholds, etc.
+- **migrations/**: SQL migrations for SQLite
+- **Dockerfile**: For containerized deployment
- * Default: SQLite (no setup required)
- * To configure: Set `DB_ADAPTER` to either `sqlite` or `postgres` in your environment
- * Database location:
- * SQLite: `DATABASE_URL=sqlite3:/path/to/your.db` (defaults to `~/.silmataivas.db`)
- * PostgreSQL: `DATABASE_URL=postgres://user:password@host/database`
+## Quick Start
-Ready to run in production? Please [check our deployment guides](https://hexdocs.pm/phoenix/deployment.html).
-
-## Learn more
-
- * Official website: https://www.phoenixframework.org/
- * Guides: https://hexdocs.pm/phoenix/overview.html
- * Docs: https://hexdocs.pm/phoenix
- * Forum: https://elixirforum.com/c/phoenix-forum
- * Source: https://github.com/phoenixframework/phoenix
-
-
-## Installation
-
-### Using Docker (Recommended)
-
-The easiest way to run the application is with Docker:
+### Prerequisites
+- Rust (see [rustup.rs](https://rustup.rs/))
+- SQLite (default, or set `DATABASE_URL` for PostgreSQL)
+### Running Locally
```bash
# Clone the repository
-git clone https://github.com/yourusername/silmataivas.git
-cd silmataivas
+ git clone https://codeberg.org/silmataivas/silmataivas.git
+ cd silmataivas
-# Run the application with the helper script (creates .env file if needed)
-./docker-run.sh
+# Build and run
+ cargo build --release
+ ./target/release/silmataivas
```
-By default, the application uses SQLite. To use PostgreSQL instead:
+The server will listen on port 4000 by default. You can set the `DATABASE_URL` environment variable to change the database location.
+### Using Docker
```bash
-# Set the environment variable before running
-DB_ADAPTER=postgres ./docker-run.sh
+docker build -t silmataivas .
+docker run -p 4000:4000 -e DATABASE_URL=sqlite:///data/silmataivas.db silmataivas
```
-### Manual Installation
-
-For a manual installation on Arch Linux:
-
-```bash
-sudo pacman -Syu
-sudo pacman -S git base-devel elixir cmake file erlang
-sudo pacman -S postgresql
-sudo -iu postgres initdb -D /var/lib/postgres/data
-sudo systemctl enable --now postgresql.service
-sudo useradd -r -s /bin/false -m -d /var/lib/silmataivas -U silmataivas
-sudo mkdir -p /opt/silmataivas
-sudo chown -R silmataivas:silmataivas /opt/silmataivas
-sudo mkdir -p /etc/silmataivas
-sudo touch /etc/silmataivas/env
-sudo chmod 0600 /etc/silmataivas/env
-sudo chown -R silmataivas:silmataivas /etc/silmataivas
-sudo touch /etc/systemd/system/silmataivas.service
-sudo pacman -S nginx
-sudo mkdir -p /etc/nginx/sites-{available,enabled}
-sudo pacman -S certbot certbot-nginx
-sudo mkdir -p /var/lib/letsencrypt/
-sudo touch /etc/nginx/sites-available/silmataivas.nginx
-sudo ln -s /etc/nginx/sites-available/silmataivas.nginx /etc/nginx/sites-enabled/silmataivas.nginx
-sudo systemctl enable silmataivas.service
-```
-
-## CI/CD Pipeline
-
-Silmataivas uses GitLab CI/CD for automated testing, building, and deployment. The pipeline follows the GitHub flow branching strategy and uses semantic versioning.
-
-### Branching Strategy
-
-We follow the GitHub flow branching strategy:
-
-1. Create feature branches from `main`
-2. Make changes and commit using conventional commit format
-3. Open a merge request to `main`
-4. After review and approval, merge to `main`
-5. Automated release process triggers on `main` branch
-
-### Conventional Commits
-
-All commits should follow the [Conventional Commits](https://www.conventionalcommits.org/) format:
-
-```
-<type>[optional scope]: <description>
-
-[optional body]
-
-[optional footer(s)]
-```
+## API Usage
+All API endpoints (except `/health`) require authentication using a Bearer token:
-Types:
-- `feat`: A new feature (minor version bump)
-- `fix`: A bug fix (patch version bump)
-- `docs`: Documentation changes
-- `style`: Code style changes (formatting, etc.)
-- `refactor`: Code changes that neither fix bugs nor add features
-- `perf`: Performance improvements
-- `test`: Adding or updating tests
-- `chore`: Maintenance tasks
-
-Breaking changes:
-- Add `BREAKING CHANGE:` in the commit body
-- Or use `!` after the type/scope: `feat!: breaking change`
-
-Examples:
```
-feat: add user authentication
-fix: correct timezone handling in weather data
-docs: update deployment instructions
-refactor: optimize location lookup
-feat(api): add rate limiting
-fix!: change API response format
+Authorization: Bearer <user_id>
```
-### CI/CD Pipeline Stages
-
-The GitLab CI/CD pipeline consists of the following stages:
-
-1. **Lint**: Code quality checks
- - Elixir format check
- - Hadolint for Dockerfile
+See the source code for endpoint details. (OpenAPI docs are not yet auto-generated in this Rust version.)
-2. **Test**: Run tests
- - Unit tests
- - Integration tests
+## Migration Note
+This project was previously implemented in Elixir/Phoenix. All Elixir/Phoenix code, references, and setup instructions have been removed. The project is now a Rust binary only (no library, no Elixir code).
-3. **Build**: Build Docker image
- - Uses Kaniko to build the image
- - Pushes to GitLab registry with branch tag
+## Development
+- Format: `cargo fmt`
+- Lint: `cargo clippy`
+- Test: `cargo test`
-4. **Validate**: (Only for feature branches)
- - Runs the Docker container
- - Checks the health endpoint
+## Contributing
+- Use conventional commits for PRs
+- See the code for module structure and API details
-5. **Release**: (Only on main branch)
- - Uses semantic-release to determine version
- - Creates Git tag
- - Generates changelog
- - Pushes Docker image with version tag
- - Pushes Docker image with latest tag
-
-### Versioning
-
-We use [semantic-release](https://semantic-release.gitbook.io/semantic-release/) to automate version management and package publishing based on [Semantic Versioning 2.0](https://semver.org/) rules:
-
-- **MAJOR** version when making incompatible API changes (breaking changes)
-- **MINOR** version when adding functionality in a backward compatible manner
-- **PATCH** version when making backward compatible bug fixes
-
-The version is automatically determined from conventional commit messages.
-
-### Required GitLab CI/CD Variables
-
-The following variables need to be set in GitLab CI/CD settings:
-
-- `CI_REGISTRY`, `CI_REGISTRY_USER`, `CI_REGISTRY_PASSWORD`: Provided by GitLab
-- `OPENWEATHERMAP_API_KEY`: For testing
-- `SECRET_KEY_BASE`: For Phoenix app
-- `AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY`: For email functionality
-- `STAGING_SERVER`, `STAGING_USER`, `STAGING_DEPLOY_KEY`: For staging deployment
-- `PRODUCTION_SERVER`, `PRODUCTION_USER`, `PRODUCTION_DEPLOY_KEY`: For production deployment
-
-## Silmataivas Project Guidelines
-
-### Build & Run Commands
-
-- Setup: `mix setup` (installs deps, creates DB, runs migrations)
-- Run server: `mix phx.server` or `iex -S mix phx.server` (interactive)
-- Format code: `mix format`
-- Lint: `mix dialyzer` (static analysis)
-- Test: `mix test`
-- Single test: `mix test test/path/to/test_file.exs:line_number`
-- Create migration: `mix ecto.gen.migration name_of_migration`
-- Run migrations: `mix ecto.migrate`
-
-### Code Style Guidelines
+---
-- Format code using `mix format` (enforces Elixir community standards)
-- File naming: snake_case for files and modules match paths
-- Modules: PascalCase with nested namespaces matching directory structure
-- Functions: snake_case, use pipes (|>) for multi-step operations
-- Variables/atoms: snake_case, descriptive names
-- Schema fields: snake_case, explicit types
-- Documentation: use @moduledoc and @doc for all public modules/functions
-- Error handling: use {:ok, result} | {:error, reason} pattern for operations
-- Testing: write tests for all modules, use descriptive test names
-- Imports: group Elixir standard lib, Phoenix, and other dependencies
+For any issues or questions, please open an issue on Codeberg.
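The new README keeps the Bearer-token requirement but drops the interactive OpenAPI docs, so a small client sketch may help. This is an illustration only, not part of the commit: it assumes the server is running on `localhost:4000` as described, that reqwest's `json` feature and tokio are available, and that the token placeholder is replaced with a real user token; the request body follows the `CreateLocation` payload in `src/main.rs`.

```rust
// Illustrative client only; not part of the commit. Assumes the server from the
// README is running on localhost:4000 and that the placeholder below is replaced
// with a real user token. Endpoint shapes follow the handlers in src/main.rs.
use serde_json::json;

#[tokio::main]
async fn main() -> Result<(), reqwest::Error> {
    let client = reqwest::Client::new();
    let token = "REPLACE_WITH_USER_TOKEN"; // placeholder

    // Unauthenticated health check.
    let health = client
        .get("http://localhost:4000/health")
        .send()
        .await?
        .text()
        .await?;
    println!("health: {health}");

    // Authenticated call: create a location for the current user.
    let created = client
        .post("http://localhost:4000/api/locations")
        .bearer_auth(token)
        .json(&json!({ "latitude": 60.17, "longitude": 24.94 }))
        .send()
        .await?
        .text()
        .await?;
    println!("created: {created}");

    Ok(())
}
```

The same `bearer_auth` call applies to every `/api/...` route; only `/health` is reachable without it.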
diff --git a/src/lib.rs b/src/lib.rs
deleted file mode 100644
index 6f4a795..0000000
--- a/src/lib.rs
+++ /dev/null
@@ -1,565 +0,0 @@
-use axum::Json;
-use axum::http::StatusCode;
-use axum::response::IntoResponse;
-use axum::routing::{post, put};
-use axum::{Router, routing::get};
-use serde_json::json;
-use sqlx::SqlitePool;
-
-// Derive OpenApi for models
-
-mod auth;
-
-use crate::notifications::{NtfySettingsInput, SmtpSettingsInput};
-use crate::weather_thresholds::WeatherThresholdUpdateInput;
-
-fn error_response(status: StatusCode, message: &str) -> axum::response::Response {
- (status, Json(json!({"error": message}))).into_response()
-}
-
-async fn not_found() -> axum::response::Response {
- error_response(StatusCode::NOT_FOUND, "Not Found")
-}
-
-mod users_api {
- use super::*;
- use crate::auth::AuthUser;
- use crate::users::{User, UserRepository, UserRole};
- use axum::{
- Json,
- extract::{Path, State},
- };
- use serde::Deserialize;
- use std::sync::Arc;
-
- #[derive(Deserialize)]
- pub struct CreateUser {
- pub role: Option<UserRole>,
- }
-
- #[derive(Deserialize)]
- pub struct UpdateUser {
- pub role: UserRole,
- }
-
- pub async fn list_users(
- AuthUser(_): AuthUser,
- State(pool): State<Arc<SqlitePool>>,
- ) -> Result<Json<Vec<User>>, String> {
- let repo = UserRepository { db: &pool };
- repo.list_users().await.map(Json).map_err(|e| e.to_string())
- }
-
- pub async fn get_user(
- Path(id): Path<i64>,
- AuthUser(_): AuthUser,
- State(pool): State<Arc<SqlitePool>>,
- ) -> Result<Json<User>, String> {
- let repo = UserRepository { db: &pool };
- repo.get_user_by_id(id)
- .await
- .map_err(|e| e.to_string())?
- .map(Json)
- .ok_or_else(|| "User not found".to_string())
- }
-
- pub async fn create_user(
- AuthUser(_): AuthUser,
- State(pool): State<Arc<SqlitePool>>,
- Json(payload): Json<CreateUser>,
- ) -> Result<Json<User>, String> {
- let repo = UserRepository { db: &pool };
- repo.create_user(None, payload.role)
- .await
- .map(Json)
- .map_err(|e| e.to_string())
- }
-
- pub async fn update_user(
- Path(id): Path<i64>,
- AuthUser(_): AuthUser,
- State(pool): State<Arc<SqlitePool>>,
- Json(payload): Json<UpdateUser>,
- ) -> Result<Json<User>, String> {
- let repo = UserRepository { db: &pool };
- repo.update_user(id, payload.role)
- .await
- .map(Json)
- .map_err(|e| e.to_string())
- }
-
- pub async fn delete_user(
- Path(id): Path<i64>,
- AuthUser(_): AuthUser,
- State(pool): State<Arc<SqlitePool>>,
- ) -> Result<(), String> {
- let repo = UserRepository { db: &pool };
- repo.delete_user(id).await.map_err(|e| e.to_string())
- }
-}
-
-mod locations_api {
- use super::*;
- use crate::auth::AuthUser;
- use crate::locations::{Location, LocationRepository};
- use axum::{
- Json,
- extract::{Path, State},
- };
- use serde::Deserialize;
- use std::sync::Arc;
-
- #[derive(Deserialize)]
- pub struct CreateLocation {
- pub latitude: f64,
- pub longitude: f64,
- }
-
- #[derive(Deserialize)]
- pub struct UpdateLocation {
- pub latitude: f64,
- pub longitude: f64,
- }
-
- pub async fn list_locations(
- AuthUser(_): AuthUser,
- State(pool): State<Arc<SqlitePool>>,
- ) -> Result<Json<Vec<Location>>, String> {
- let repo = LocationRepository { db: &pool };
- repo.list_locations()
- .await
- .map(Json)
- .map_err(|e| e.to_string())
- }
-
- pub async fn get_location(
- Path(id): Path<i64>,
- AuthUser(_): AuthUser,
- State(pool): State<Arc<SqlitePool>>,
- ) -> Result<Json<Location>, String> {
- let repo = LocationRepository { db: &pool };
- repo.get_location(id)
- .await
- .map_err(|e| e.to_string())?
- .map(Json)
- .ok_or_else(|| "Location not found".to_string())
- }
-
- pub async fn create_location(
- AuthUser(user): AuthUser,
- State(pool): State<Arc<SqlitePool>>,
- Json(payload): Json<CreateLocation>,
- ) -> Result<Json<Location>, String> {
- let repo = LocationRepository { db: &pool };
- repo.create_location(payload.latitude, payload.longitude, user.id)
- .await
- .map(Json)
- .map_err(|e| e.to_string())
- }
-
- pub async fn update_location(
- Path(id): Path<i64>,
- AuthUser(_): AuthUser,
- State(pool): State<Arc<SqlitePool>>,
- Json(payload): Json<UpdateLocation>,
- ) -> Result<Json<Location>, String> {
- let repo = LocationRepository { db: &pool };
- // user_id is not updated
- repo.update_location(id, payload.latitude, payload.longitude)
- .await
- .map(Json)
- .map_err(|e| e.to_string())
- }
-
- pub async fn delete_location(
- Path(id): Path<i64>,
- AuthUser(_): AuthUser,
- State(pool): State<Arc<SqlitePool>>,
- ) -> Result<(), String> {
- let repo = LocationRepository { db: &pool };
- repo.delete_location(id).await.map_err(|e| e.to_string())
- }
-}
-
-mod thresholds_api {
- use super::*;
- use crate::auth::AuthUser;
- use crate::weather_thresholds::{WeatherThreshold, WeatherThresholdRepository};
- use axum::{
- Json,
- extract::{Path, Query, State},
- };
- use serde::Deserialize;
- use std::sync::Arc;
-
- #[derive(Deserialize)]
- pub struct CreateThreshold {
- pub user_id: i64,
- pub condition_type: String,
- pub threshold_value: f64,
- pub operator: String,
- pub enabled: bool,
- pub description: Option<String>,
- }
-
- #[derive(Deserialize)]
- pub struct UpdateThreshold {
- pub user_id: i64,
- pub condition_type: String,
- pub threshold_value: f64,
- pub operator: String,
- pub enabled: bool,
- pub description: Option<String>,
- }
-
- pub async fn list_thresholds(
- AuthUser(_): AuthUser,
- State(pool): State<Arc<SqlitePool>>,
- Query(query): Query<std::collections::HashMap<String, String>>,
- ) -> impl axum::response::IntoResponse {
- let repo = WeatherThresholdRepository { db: &pool };
- let user_id = query
- .get("user_id")
- .and_then(|s| s.parse().ok())
- .ok_or_else(|| "user_id required as query param".to_string())?;
- repo.list_thresholds(user_id)
- .await
- .map(Json)
- .map_err(|e| e.to_string())
- }
-
- pub async fn get_threshold(
- Path((id, user_id)): Path<(i64, i64)>,
- AuthUser(_): AuthUser,
- State(pool): State<Arc<SqlitePool>>,
- ) -> Result<Json<WeatherThreshold>, String> {
- let repo = WeatherThresholdRepository { db: &pool };
- repo.get_threshold(id, user_id)
- .await
- .map_err(|e| e.to_string())?
- .map(Json)
- .ok_or_else(|| "Threshold not found".to_string())
- }
-
- pub async fn create_threshold(
- AuthUser(_): AuthUser,
- State(pool): State<Arc<SqlitePool>>,
- Json(payload): Json<CreateThreshold>,
- ) -> Result<Json<WeatherThreshold>, String> {
- let repo = WeatherThresholdRepository { db: &pool };
- repo.create_threshold(
- payload.user_id,
- payload.condition_type,
- payload.threshold_value,
- payload.operator,
- payload.enabled,
- payload.description,
- )
- .await
- .map(Json)
- .map_err(|e| e.to_string())
- }
-
- pub async fn update_threshold(
- Path(id): Path<i64>,
- AuthUser(_): AuthUser,
- State(pool): State<Arc<SqlitePool>>,
- Json(payload): Json<UpdateThreshold>,
- ) -> Result<Json<WeatherThreshold>, String> {
- let repo = WeatherThresholdRepository { db: &pool };
- repo.update_threshold(WeatherThresholdUpdateInput {
- id,
- user_id: payload.user_id,
- condition_type: payload.condition_type,
- threshold_value: payload.threshold_value,
- operator: payload.operator,
- enabled: payload.enabled,
- description: payload.description,
- })
- .await
- .map(Json)
- .map_err(|e| e.to_string())
- }
-
- pub async fn delete_threshold(
- Path((id, user_id)): Path<(i64, i64)>,
- AuthUser(_): AuthUser,
- State(pool): State<Arc<SqlitePool>>,
- ) -> Result<(), String> {
- let repo = WeatherThresholdRepository { db: &pool };
- repo.delete_threshold(id, user_id)
- .await
- .map_err(|e| e.to_string())
- }
-}
-
-mod notifications_api {
- use super::*;
- use crate::auth::AuthUser;
- use crate::notifications::{
- NtfySettings, NtfySettingsRepository, SmtpSettings, SmtpSettingsRepository,
- };
- use axum::{
- Json,
- extract::{Path, State},
- };
- use serde::Deserialize;
- use std::sync::Arc;
-
- // NTFY
- #[derive(Deserialize)]
- pub struct CreateNtfy {
- pub user_id: i64,
- pub enabled: bool,
- pub topic: String,
- pub server_url: String,
- pub priority: i32,
- pub title_template: Option<String>,
- pub message_template: Option<String>,
- }
- #[derive(Deserialize)]
- pub struct UpdateNtfy {
- pub enabled: bool,
- pub topic: String,
- pub server_url: String,
- pub priority: i32,
- pub title_template: Option<String>,
- pub message_template: Option<String>,
- }
- pub async fn get_ntfy_settings(
- Path(user_id): Path<i64>,
- AuthUser(_): AuthUser,
- State(pool): State<Arc<SqlitePool>>,
- ) -> Result<Json<NtfySettings>, String> {
- let repo = NtfySettingsRepository { db: &pool };
- repo.get_by_user(user_id)
- .await
- .map_err(|e| e.to_string())?
- .map(Json)
- .ok_or_else(|| "NTFY settings not found".to_string())
- }
- pub async fn create_ntfy_settings(
- AuthUser(_): AuthUser,
- State(pool): State<Arc<SqlitePool>>,
- Json(payload): Json<CreateNtfy>,
- ) -> Result<Json<NtfySettings>, String> {
- let repo = NtfySettingsRepository { db: &pool };
- repo.create(NtfySettingsInput {
- user_id: payload.user_id,
- enabled: payload.enabled,
- topic: payload.topic,
- server_url: payload.server_url,
- priority: payload.priority,
- title_template: payload.title_template,
- message_template: payload.message_template,
- })
- .await
- .map(Json)
- .map_err(|e| e.to_string())
- }
- pub async fn update_ntfy_settings(
- Path(id): Path<i64>,
- AuthUser(_): AuthUser,
- State(pool): State<Arc<SqlitePool>>,
- Json(payload): Json<UpdateNtfy>,
- ) -> Result<Json<NtfySettings>, String> {
- let repo = NtfySettingsRepository { db: &pool };
- repo.update(
- id,
- NtfySettingsInput {
- user_id: 0, // user_id is not updated here, but struct requires it
- enabled: payload.enabled,
- topic: payload.topic,
- server_url: payload.server_url,
- priority: payload.priority,
- title_template: payload.title_template,
- message_template: payload.message_template,
- },
- )
- .await
- .map(Json)
- .map_err(|e| e.to_string())
- }
- pub async fn delete_ntfy_settings(
- Path(id): Path<i64>,
- AuthUser(_): AuthUser,
- State(pool): State<Arc<SqlitePool>>,
- ) -> Result<(), String> {
- let repo = NtfySettingsRepository { db: &pool };
- repo.delete(id).await.map_err(|e| e.to_string())
- }
-
- // SMTP
- #[derive(Deserialize)]
- pub struct CreateSmtp {
- pub user_id: i64,
- pub enabled: bool,
- pub email: String,
- pub smtp_server: String,
- pub smtp_port: i32,
- pub username: Option<String>,
- pub password: Option<String>,
- pub use_tls: bool,
- pub from_email: Option<String>,
- pub from_name: Option<String>,
- pub subject_template: Option<String>,
- pub body_template: Option<String>,
- }
- #[derive(Deserialize)]
- pub struct UpdateSmtp {
- pub enabled: bool,
- pub email: String,
- pub smtp_server: String,
- pub smtp_port: i32,
- pub username: Option<String>,
- pub password: Option<String>,
- pub use_tls: bool,
- pub from_email: Option<String>,
- pub from_name: Option<String>,
- pub subject_template: Option<String>,
- pub body_template: Option<String>,
- }
- pub async fn get_smtp_settings(
- Path(user_id): Path<i64>,
- AuthUser(_): AuthUser,
- State(pool): State<Arc<SqlitePool>>,
- ) -> Result<Json<SmtpSettings>, String> {
- let repo = SmtpSettingsRepository { db: &pool };
- repo.get_by_user(user_id)
- .await
- .map_err(|e| e.to_string())?
- .map(Json)
- .ok_or_else(|| "SMTP settings not found".to_string())
- }
- pub async fn create_smtp_settings(
- AuthUser(_): AuthUser,
- State(pool): State<Arc<SqlitePool>>,
- Json(payload): Json<CreateSmtp>,
- ) -> Result<Json<SmtpSettings>, String> {
- let repo = SmtpSettingsRepository { db: &pool };
- repo.create(SmtpSettingsInput {
- user_id: payload.user_id,
- enabled: payload.enabled,
- email: payload.email,
- smtp_server: payload.smtp_server,
- smtp_port: payload.smtp_port,
- username: payload.username,
- password: payload.password,
- use_tls: payload.use_tls,
- from_email: payload.from_email,
- from_name: payload.from_name,
- subject_template: payload.subject_template,
- body_template: payload.body_template,
- })
- .await
- .map(Json)
- .map_err(|e| e.to_string())
- }
- pub async fn update_smtp_settings(
- Path(id): Path<i64>,
- AuthUser(_): AuthUser,
- State(pool): State<Arc<SqlitePool>>,
- Json(payload): Json<UpdateSmtp>,
- ) -> Result<Json<SmtpSettings>, String> {
- let repo = SmtpSettingsRepository { db: &pool };
- repo.update(
- id,
- SmtpSettingsInput {
- user_id: 0, // user_id is not updated here, but struct requires it
- enabled: payload.enabled,
- email: payload.email,
- smtp_server: payload.smtp_server,
- smtp_port: payload.smtp_port,
- username: payload.username,
- password: payload.password,
- use_tls: payload.use_tls,
- from_email: payload.from_email,
- from_name: payload.from_name,
- subject_template: payload.subject_template,
- body_template: payload.body_template,
- },
- )
- .await
- .map(Json)
- .map_err(|e| e.to_string())
- }
- pub async fn delete_smtp_settings(
- Path(id): Path<i64>,
- AuthUser(_): AuthUser,
- State(pool): State<Arc<SqlitePool>>,
- ) -> Result<(), String> {
- let repo = SmtpSettingsRepository { db: &pool };
- repo.delete(id).await.map_err(|e| e.to_string())
- }
-}
-
-pub fn app_with_state(pool: std::sync::Arc<SqlitePool>) -> Router {
- Router::new()
- .route("/health", get(crate::health::health_handler))
- .nest("/api/users",
- Router::new()
- .route("/", get(users_api::list_users).post(users_api::create_user))
- .route("/{id}", get(users_api::get_user).put(users_api::update_user).delete(users_api::delete_user))
- )
- .nest("/api/locations",
- Router::new()
- .route("/", get(locations_api::list_locations).post(locations_api::create_location))
- .route("/{id}", get(locations_api::get_location).put(locations_api::update_location).delete(locations_api::delete_location))
- )
- .nest("/api/weather-thresholds",
- Router::new()
- .route("/", get(|auth_user, state, query: axum::extract::Query<std::collections::HashMap<String, String>>| async move {
- thresholds_api::list_thresholds(auth_user, state, query).await
- }).post(thresholds_api::create_threshold))
- .route("/{id}/{user_id}", get(thresholds_api::get_threshold).put(thresholds_api::update_threshold).delete(thresholds_api::delete_threshold))
- )
- .nest("/api/ntfy-settings",
- Router::new()
- .route("/user/{user_id}", get(notifications_api::get_ntfy_settings))
- .route("/", post(notifications_api::create_ntfy_settings))
- .route("/{id}", put(notifications_api::update_ntfy_settings).delete(notifications_api::delete_ntfy_settings))
- )
- .nest("/api/smtp-settings",
- Router::new()
- .route("/user/{user_id}", get(notifications_api::get_smtp_settings))
- .route("/", post(notifications_api::create_smtp_settings))
- .route("/{id}", put(notifications_api::update_smtp_settings).delete(notifications_api::delete_smtp_settings))
- )
- .fallback(not_found)
- .with_state(pool)
-}
-
-pub mod health;
-pub mod locations;
-pub mod notifications;
-pub mod users;
-pub mod weather_poller;
-pub mod weather_thresholds;
-
-#[cfg(test)]
-mod tests {
- use super::*;
- use axum::body::Body;
- use axum::body::to_bytes;
- use axum::http::{Request, StatusCode};
- use tower::ServiceExt; // for `oneshot`
-
- #[tokio::test]
- async fn test_health_endpoint() {
- let app = app_with_state(std::sync::Arc::new(
- sqlx::SqlitePool::connect("sqlite::memory:").await.unwrap(),
- ));
- let response = app
- .oneshot(
- Request::builder()
- .uri("/health")
- .body(Body::empty())
- .unwrap(),
- )
- .await
- .unwrap();
- assert_eq!(response.status(), StatusCode::OK);
- let body = to_bytes(response.into_body(), 1024).await.unwrap();
- assert_eq!(&body[..], b"{\"status\":\"ok\"}");
- }
-}
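The handlers in the deleted `lib.rs` (and in the new `main.rs` below) build short-lived repository values such as `LocationRepository { db: &pool }` around the shared `SqlitePool`; the repository implementations live in the module files and are only partially visible in this diff. A minimal sketch of the pattern, with a hypothetical table and query (the real schema comes from `migrations/`), might look like:

```rust
// Sketch of the repository pattern used by the handlers; the actual queries and
// schema live in src/locations.rs and migrations/ and may differ from this.
use sqlx::{FromRow, SqlitePool};

#[derive(Debug, FromRow)]
pub struct Location {
    pub id: i64,
    pub latitude: f64,
    pub longitude: f64,
    pub user_id: i64,
}

pub struct LocationRepository<'a> {
    pub db: &'a SqlitePool,
}

impl<'a> LocationRepository<'a> {
    pub async fn list_locations(&self) -> Result<Vec<Location>, sqlx::Error> {
        // Hypothetical table and column names, for illustration only.
        sqlx::query_as::<_, Location>(
            "SELECT id, latitude, longitude, user_id FROM locations",
        )
        .fetch_all(self.db)
        .await
    }
}
```

Because each handler constructs its repository from the `Arc<SqlitePool>` held in router state, no long-lived service objects or extra connection plumbing are needed.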
diff --git a/src/locations.rs b/src/locations.rs
index 2f8c3f8..5430753 100644
--- a/src/locations.rs
+++ b/src/locations.rs
@@ -75,7 +75,6 @@ mod tests {
use super::*;
use crate::users::{UserRepository, UserRole};
use sqlx::{Executor, SqlitePool};
- use tokio;
async fn setup_db() -> SqlitePool {
let pool = SqlitePool::connect(":memory:").await.unwrap();
diff --git a/src/main.rs b/src/main.rs
index 1cb1515..4cf72ff 100644
--- a/src/main.rs
+++ b/src/main.rs
@@ -1,6 +1,556 @@
-use silmataivas::app_with_state;
-use silmataivas::users::{UserRepository, UserRole};
+use axum::Json;
+use axum::http::StatusCode;
+use axum::response::IntoResponse;
+use axum::routing::{post, put};
+use axum::{Router, routing::get};
+use serde_json::json;
use sqlx::SqlitePool;
+
+mod auth;
+mod health;
+mod locations;
+mod notifications;
+mod users;
+mod weather_poller;
+mod weather_thresholds;
+
+// --- users_api ---
+mod users_api {
+ use super::*;
+ use crate::auth::AuthUser;
+ use crate::users::{User, UserRepository, UserRole};
+ use axum::{
+ Json,
+ extract::{Path, State},
+ };
+ use serde::Deserialize;
+ use std::sync::Arc;
+
+ #[derive(Deserialize)]
+ pub struct CreateUser {
+ pub role: Option<UserRole>,
+ }
+
+ #[derive(Deserialize)]
+ pub struct UpdateUser {
+ pub role: UserRole,
+ }
+
+ pub async fn list_users(
+ AuthUser(_): AuthUser,
+ State(pool): State<Arc<SqlitePool>>,
+ ) -> Result<Json<Vec<User>>, String> {
+ let repo = UserRepository { db: &pool };
+ repo.list_users().await.map(Json).map_err(|e| e.to_string())
+ }
+
+ pub async fn get_user(
+ Path(id): Path<i64>,
+ AuthUser(_): AuthUser,
+ State(pool): State<Arc<SqlitePool>>,
+ ) -> Result<Json<User>, String> {
+ let repo = UserRepository { db: &pool };
+ repo.get_user_by_id(id)
+ .await
+ .map_err(|e| e.to_string())?
+ .map(Json)
+ .ok_or_else(|| "User not found".to_string())
+ }
+
+ pub async fn create_user(
+ AuthUser(_): AuthUser,
+ State(pool): State<Arc<SqlitePool>>,
+ Json(payload): Json<CreateUser>,
+ ) -> Result<Json<User>, String> {
+ let repo = UserRepository { db: &pool };
+ repo.create_user(None, payload.role)
+ .await
+ .map(Json)
+ .map_err(|e| e.to_string())
+ }
+
+ pub async fn update_user(
+ Path(id): Path<i64>,
+ AuthUser(_): AuthUser,
+ State(pool): State<Arc<SqlitePool>>,
+ Json(payload): Json<UpdateUser>,
+ ) -> Result<Json<User>, String> {
+ let repo = UserRepository { db: &pool };
+ repo.update_user(id, payload.role)
+ .await
+ .map(Json)
+ .map_err(|e| e.to_string())
+ }
+
+ pub async fn delete_user(
+ Path(id): Path<i64>,
+ AuthUser(_): AuthUser,
+ State(pool): State<Arc<SqlitePool>>,
+ ) -> Result<(), String> {
+ let repo = UserRepository { db: &pool };
+ repo.delete_user(id).await.map_err(|e| e.to_string())
+ }
+}
+// --- locations_api ---
+mod locations_api {
+ use super::*;
+ use crate::auth::AuthUser;
+ use crate::locations::{Location, LocationRepository};
+ use axum::{
+ Json,
+ extract::{Path, State},
+ };
+ use serde::Deserialize;
+ use std::sync::Arc;
+
+ #[derive(Deserialize)]
+ pub struct CreateLocation {
+ pub latitude: f64,
+ pub longitude: f64,
+ }
+
+ #[derive(Deserialize)]
+ pub struct UpdateLocation {
+ pub latitude: f64,
+ pub longitude: f64,
+ }
+
+ pub async fn list_locations(
+ AuthUser(_): AuthUser,
+ State(pool): State<Arc<SqlitePool>>,
+ ) -> Result<Json<Vec<Location>>, String> {
+ let repo = LocationRepository { db: &pool };
+ repo.list_locations()
+ .await
+ .map(Json)
+ .map_err(|e| e.to_string())
+ }
+
+ pub async fn get_location(
+ Path(id): Path<i64>,
+ AuthUser(_): AuthUser,
+ State(pool): State<Arc<SqlitePool>>,
+ ) -> Result<Json<Location>, String> {
+ let repo = LocationRepository { db: &pool };
+ repo.get_location(id)
+ .await
+ .map_err(|e| e.to_string())?
+ .map(Json)
+ .ok_or_else(|| "Location not found".to_string())
+ }
+
+ pub async fn create_location(
+ AuthUser(user): AuthUser,
+ State(pool): State<Arc<SqlitePool>>,
+ Json(payload): Json<CreateLocation>,
+ ) -> Result<Json<Location>, String> {
+ let repo = LocationRepository { db: &pool };
+ repo.create_location(payload.latitude, payload.longitude, user.id)
+ .await
+ .map(Json)
+ .map_err(|e| e.to_string())
+ }
+
+ pub async fn update_location(
+ Path(id): Path<i64>,
+ AuthUser(_): AuthUser,
+ State(pool): State<Arc<SqlitePool>>,
+ Json(payload): Json<UpdateLocation>,
+ ) -> Result<Json<Location>, String> {
+ let repo = LocationRepository { db: &pool };
+ repo.update_location(id, payload.latitude, payload.longitude)
+ .await
+ .map(Json)
+ .map_err(|e| e.to_string())
+ }
+
+ pub async fn delete_location(
+ Path(id): Path<i64>,
+ AuthUser(_): AuthUser,
+ State(pool): State<Arc<SqlitePool>>,
+ ) -> Result<(), String> {
+ let repo = LocationRepository { db: &pool };
+ repo.delete_location(id).await.map_err(|e| e.to_string())
+ }
+}
+// --- thresholds_api ---
+mod thresholds_api {
+ use super::*;
+ use crate::auth::AuthUser;
+ use crate::weather_thresholds::{WeatherThreshold, WeatherThresholdRepository};
+ use axum::{
+ Json,
+ extract::{Path, State},
+ };
+ use serde::Deserialize;
+ use std::sync::Arc;
+
+ #[derive(Deserialize)]
+ pub struct CreateThreshold {
+ pub condition_type: String,
+ pub threshold_value: f64,
+ pub operator: String,
+ pub enabled: bool,
+ pub description: Option<String>,
+ }
+
+ #[derive(Deserialize)]
+ pub struct UpdateThreshold {
+ pub condition_type: String,
+ pub threshold_value: f64,
+ pub operator: String,
+ pub enabled: bool,
+ pub description: Option<String>,
+ }
+
+ pub async fn list_thresholds(
+ AuthUser(user): AuthUser,
+ State(pool): State<Arc<SqlitePool>>,
+ ) -> impl axum::response::IntoResponse {
+ let repo = WeatherThresholdRepository { db: &pool };
+ match repo.list_thresholds(user.id).await {
+ Ok(thresholds) => Json(thresholds).into_response(),
+ Err(e) => error_response(StatusCode::INTERNAL_SERVER_ERROR, &e.to_string()),
+ }
+ }
+
+ pub async fn get_threshold(
+ Path(id): Path<i64>,
+ AuthUser(user): AuthUser,
+ State(pool): State<Arc<SqlitePool>>,
+ ) -> Result<Json<WeatherThreshold>, String> {
+ let repo = WeatherThresholdRepository { db: &pool };
+ repo.get_threshold(id, user.id)
+ .await
+ .map_err(|e| e.to_string())?
+ .map(Json)
+ .ok_or_else(|| "Threshold not found".to_string())
+ }
+
+ pub async fn create_threshold(
+ AuthUser(user): AuthUser,
+ State(pool): State<Arc<SqlitePool>>,
+ Json(payload): Json<CreateThreshold>,
+ ) -> Result<Json<WeatherThreshold>, String> {
+ let repo = WeatherThresholdRepository { db: &pool };
+ repo.create_threshold(
+ user.id,
+ payload.condition_type,
+ payload.threshold_value,
+ payload.operator,
+ payload.enabled,
+ payload.description,
+ )
+ .await
+ .map(Json)
+ .map_err(|e| e.to_string())
+ }
+
+ pub async fn update_threshold(
+ Path(id): Path<i64>,
+ AuthUser(user): AuthUser,
+ State(pool): State<Arc<SqlitePool>>,
+ Json(payload): Json<UpdateThreshold>,
+ ) -> Result<Json<WeatherThreshold>, String> {
+ let repo = WeatherThresholdRepository { db: &pool };
+ repo.update_threshold(crate::weather_thresholds::WeatherThresholdUpdateInput {
+ id,
+ user_id: user.id,
+ condition_type: payload.condition_type,
+ threshold_value: payload.threshold_value,
+ operator: payload.operator,
+ enabled: payload.enabled,
+ description: payload.description,
+ })
+ .await
+ .map(Json)
+ .map_err(|e| e.to_string())
+ }
+
+ pub async fn delete_threshold(
+ Path(id): Path<i64>,
+ AuthUser(user): AuthUser,
+ State(pool): State<Arc<SqlitePool>>,
+ ) -> Result<(), String> {
+ let repo = WeatherThresholdRepository { db: &pool };
+ repo.delete_threshold(id, user.id)
+ .await
+ .map_err(|e| e.to_string())
+ }
+}
+// --- notifications_api ---
+mod notifications_api {
+ use super::*;
+ use crate::auth::AuthUser;
+ use crate::notifications::{
+ NtfySettings, NtfySettingsRepository, SmtpSettings, SmtpSettingsRepository,
+ };
+ use axum::{
+ Json,
+ extract::{Path, State},
+ };
+ use serde::Deserialize;
+ use std::sync::Arc;
+
+ // NTFY
+ #[derive(Deserialize)]
+ pub struct CreateNtfy {
+ pub enabled: bool,
+ pub topic: String,
+ pub server_url: String,
+ pub priority: i32,
+ pub title_template: Option<String>,
+ pub message_template: Option<String>,
+ }
+ #[derive(Deserialize)]
+ pub struct UpdateNtfy {
+ pub enabled: bool,
+ pub topic: String,
+ pub server_url: String,
+ pub priority: i32,
+ pub title_template: Option<String>,
+ pub message_template: Option<String>,
+ }
+ pub async fn get_ntfy_settings(
+ AuthUser(user): AuthUser,
+ State(pool): State<Arc<SqlitePool>>,
+ ) -> Result<Json<NtfySettings>, String> {
+ let repo = NtfySettingsRepository { db: &pool };
+ repo.get_by_user(user.id)
+ .await
+ .map_err(|e| e.to_string())?
+ .map(Json)
+ .ok_or_else(|| "NTFY settings not found".to_string())
+ }
+ pub async fn create_ntfy_settings(
+ AuthUser(user): AuthUser,
+ State(pool): State<Arc<SqlitePool>>,
+ Json(payload): Json<CreateNtfy>,
+ ) -> Result<Json<NtfySettings>, String> {
+ let repo = NtfySettingsRepository { db: &pool };
+ repo.create(crate::notifications::NtfySettingsInput {
+ user_id: user.id,
+ enabled: payload.enabled,
+ topic: payload.topic,
+ server_url: payload.server_url,
+ priority: payload.priority,
+ title_template: payload.title_template,
+ message_template: payload.message_template,
+ })
+ .await
+ .map(Json)
+ .map_err(|e| e.to_string())
+ }
+ pub async fn update_ntfy_settings(
+ Path(id): Path<i64>,
+ AuthUser(user): AuthUser,
+ State(pool): State<Arc<SqlitePool>>,
+ Json(payload): Json<UpdateNtfy>,
+ ) -> Result<Json<NtfySettings>, String> {
+ let repo = NtfySettingsRepository { db: &pool };
+ repo.update(
+ id,
+ crate::notifications::NtfySettingsInput {
+ user_id: user.id,
+ enabled: payload.enabled,
+ topic: payload.topic,
+ server_url: payload.server_url,
+ priority: payload.priority,
+ title_template: payload.title_template,
+ message_template: payload.message_template,
+ },
+ )
+ .await
+ .map(Json)
+ .map_err(|e| e.to_string())
+ }
+ pub async fn delete_ntfy_settings(
+ Path(id): Path<i64>,
+ AuthUser(_): AuthUser,
+ State(pool): State<Arc<SqlitePool>>,
+ ) -> Result<(), String> {
+ let repo = NtfySettingsRepository { db: &pool };
+ repo.delete(id).await.map_err(|e| e.to_string())
+ }
+
+ // SMTP
+ #[derive(Deserialize)]
+ pub struct CreateSmtp {
+ pub enabled: bool,
+ pub email: String,
+ pub smtp_server: String,
+ pub smtp_port: i32,
+ pub username: Option<String>,
+ pub password: Option<String>,
+ pub use_tls: bool,
+ pub from_email: Option<String>,
+ pub from_name: Option<String>,
+ pub subject_template: Option<String>,
+ pub body_template: Option<String>,
+ }
+ #[derive(Deserialize)]
+ pub struct UpdateSmtp {
+ pub enabled: bool,
+ pub email: String,
+ pub smtp_server: String,
+ pub smtp_port: i32,
+ pub username: Option<String>,
+ pub password: Option<String>,
+ pub use_tls: bool,
+ pub from_email: Option<String>,
+ pub from_name: Option<String>,
+ pub subject_template: Option<String>,
+ pub body_template: Option<String>,
+ }
+ pub async fn get_smtp_settings(
+ AuthUser(user): AuthUser,
+ State(pool): State<Arc<SqlitePool>>,
+ ) -> Result<Json<SmtpSettings>, String> {
+ let repo = SmtpSettingsRepository { db: &pool };
+ repo.get_by_user(user.id)
+ .await
+ .map_err(|e| e.to_string())?
+ .map(Json)
+ .ok_or_else(|| "SMTP settings not found".to_string())
+ }
+ pub async fn create_smtp_settings(
+ AuthUser(user): AuthUser,
+ State(pool): State<Arc<SqlitePool>>,
+ Json(payload): Json<CreateSmtp>,
+ ) -> Result<Json<SmtpSettings>, String> {
+ let repo = SmtpSettingsRepository { db: &pool };
+ repo.create(crate::notifications::SmtpSettingsInput {
+ user_id: user.id,
+ enabled: payload.enabled,
+ email: payload.email,
+ smtp_server: payload.smtp_server,
+ smtp_port: payload.smtp_port,
+ username: payload.username,
+ password: payload.password,
+ use_tls: payload.use_tls,
+ from_email: payload.from_email,
+ from_name: payload.from_name,
+ subject_template: payload.subject_template,
+ body_template: payload.body_template,
+ })
+ .await
+ .map(Json)
+ .map_err(|e| e.to_string())
+ }
+ pub async fn update_smtp_settings(
+ Path(id): Path<i64>,
+ AuthUser(user): AuthUser,
+ State(pool): State<Arc<SqlitePool>>,
+ Json(payload): Json<UpdateSmtp>,
+ ) -> Result<Json<SmtpSettings>, String> {
+ let repo = SmtpSettingsRepository { db: &pool };
+ repo.update(
+ id,
+ crate::notifications::SmtpSettingsInput {
+ user_id: user.id,
+ enabled: payload.enabled,
+ email: payload.email,
+ smtp_server: payload.smtp_server,
+ smtp_port: payload.smtp_port,
+ username: payload.username,
+ password: payload.password,
+ use_tls: payload.use_tls,
+ from_email: payload.from_email,
+ from_name: payload.from_name,
+ subject_template: payload.subject_template,
+ body_template: payload.body_template,
+ },
+ )
+ .await
+ .map(Json)
+ .map_err(|e| e.to_string())
+ }
+ pub async fn delete_smtp_settings(
+ Path(id): Path<i64>,
+ AuthUser(_): AuthUser,
+ State(pool): State<Arc<SqlitePool>>,
+ ) -> Result<(), String> {
+ let repo = SmtpSettingsRepository { db: &pool };
+ repo.delete(id).await.map_err(|e| e.to_string())
+ }
+}
+// --- app_with_state ---
+pub fn error_response(status: StatusCode, message: &str) -> axum::response::Response {
+ (status, Json(json!({"error": message}))).into_response()
+}
+
+pub async fn not_found() -> axum::response::Response {
+ error_response(StatusCode::NOT_FOUND, "Not Found")
+}
+
+pub fn app_with_state(pool: std::sync::Arc<SqlitePool>) -> Router {
+ Router::new()
+ .route("/health", get(health::health_handler))
+ .nest(
+ "/api/users",
+ Router::new()
+ .route("/", get(users_api::list_users).post(users_api::create_user))
+ .route(
+ "/{id}",
+ get(users_api::get_user)
+ .put(users_api::update_user)
+ .delete(users_api::delete_user),
+ ),
+ )
+ .nest(
+ "/api/locations",
+ Router::new()
+ .route(
+ "/",
+ get(locations_api::list_locations).post(locations_api::create_location),
+ )
+ .route(
+ "/{id}",
+ get(locations_api::get_location)
+ .put(locations_api::update_location)
+ .delete(locations_api::delete_location),
+ ),
+ )
+ .nest(
+ "/api/weather-thresholds",
+ Router::new()
+ .route(
+ "/",
+ get(thresholds_api::list_thresholds).post(thresholds_api::create_threshold),
+ )
+ .route(
+ "/{id}",
+ get(thresholds_api::get_threshold)
+ .put(thresholds_api::update_threshold)
+ .delete(thresholds_api::delete_threshold),
+ ),
+ )
+ .nest(
+ "/api/ntfy-settings",
+ Router::new()
+ .route("/me", get(notifications_api::get_ntfy_settings))
+ .route("/", post(notifications_api::create_ntfy_settings))
+ .route(
+ "/{id}",
+ put(notifications_api::update_ntfy_settings)
+ .delete(notifications_api::delete_ntfy_settings),
+ ),
+ )
+ .nest(
+ "/api/smtp-settings",
+ Router::new()
+ .route("/me", get(notifications_api::get_smtp_settings))
+ .route("/", post(notifications_api::create_smtp_settings))
+ .route(
+ "/{id}",
+ put(notifications_api::update_smtp_settings)
+ .delete(notifications_api::delete_smtp_settings),
+ ),
+ )
+ .fallback(not_found)
+ .with_state(pool)
+}
+
use std::env;
use std::net::SocketAddr;
use std::sync::Arc;
@@ -21,13 +571,13 @@ async fn main() {
// Create initial admin user if none exists
{
- let repo = UserRepository { db: &pool };
+ let repo = users::UserRepository { db: &pool };
match repo.any_admin_exists().await {
Ok(false) => {
let admin_token =
env::var("ADMIN_TOKEN").unwrap_or_else(|_| Uuid::new_v4().to_string());
match repo
- .create_user(Some(admin_token.clone()), Some(UserRole::Admin))
+ .create_user(Some(admin_token.clone()), Some(users::UserRole::Admin))
.await
{
Ok(_) => println!("Initial admin user created. Token: {admin_token}"),
@@ -51,3 +601,31 @@ async fn main() {
println!("Listening on {}", listener.local_addr().unwrap());
axum::serve(listener, app).await.unwrap();
}
+
+#[cfg(test)]
+mod tests {
+ use super::*;
+ use axum::body::Body;
+ use axum::body::to_bytes;
+ use axum::http::{Request, StatusCode};
+ use tower::ServiceExt; // for `oneshot`
+
+ #[tokio::test]
+ async fn test_health_endpoint() {
+ let app = app_with_state(std::sync::Arc::new(
+ sqlx::SqlitePool::connect("sqlite::memory:").await.unwrap(),
+ ));
+ let response = app
+ .oneshot(
+ Request::builder()
+ .uri("/health")
+ .body(Body::empty())
+ .unwrap(),
+ )
+ .await
+ .unwrap();
+ assert_eq!(response.status(), StatusCode::OK);
+ let body = to_bytes(response.into_body(), 1024).await.unwrap();
+ assert_eq!(&body[..], b"{\"status\":\"ok\"}");
+ }
+}
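Every handler in the new `main.rs` takes an `AuthUser` argument, and the per-user routes (`/api/weather-thresholds/{id}`, `/api/ntfy-settings/me`, and so on) now rely on `user.id` coming from it, but `src/auth.rs` itself is not part of this diff. A minimal sketch of such an extractor, assuming axum 0.8 (which the `{id}` route syntax suggests) and a deliberately simplified token mapping, might look like:

```rust
// Hypothetical extractor for illustration; the real one lives in src/auth.rs and
// is not shown in this commit. Assumes axum 0.8 and an "Authorization: Bearer <token>" header.
use axum::extract::FromRequestParts;
use axum::http::{StatusCode, header, request::Parts};

pub struct AuthedUser {
    pub id: i64, // assumed field; the real AuthUser wraps the project's User type
}

impl<S> FromRequestParts<S> for AuthedUser
where
    S: Send + Sync,
{
    type Rejection = (StatusCode, &'static str);

    async fn from_request_parts(parts: &mut Parts, _state: &S) -> Result<Self, Self::Rejection> {
        // Read "Authorization: Bearer <token>" and reject anything else with 401.
        let value = parts
            .headers
            .get(header::AUTHORIZATION)
            .and_then(|v| v.to_str().ok())
            .ok_or((StatusCode::UNAUTHORIZED, "missing Authorization header"))?;
        let token = value
            .strip_prefix("Bearer ")
            .ok_or((StatusCode::UNAUTHORIZED, "expected a Bearer token"))?;
        // Placeholder mapping: treat the token as a numeric id, as the README's
        // `Bearer <user_id>` wording suggests; the real code likely does a DB lookup.
        let id: i64 = token
            .parse()
            .map_err(|_| (StatusCode::UNAUTHORIZED, "invalid token"))?;
        Ok(AuthedUser { id })
    }
}
```

The real module presumably resolves the token against the users table rather than parsing it as an id; the sketch only shows where that check plugs into axum's extractor machinery.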
diff --git a/src/notifications.rs b/src/notifications.rs
index f725c26..2dad552 100644
--- a/src/notifications.rs
+++ b/src/notifications.rs
@@ -180,7 +180,6 @@ mod tests {
use super::*;
use crate::users::{UserRepository, UserRole};
use sqlx::{Executor, SqlitePool};
- use tokio;
async fn setup_db() -> SqlitePool {
let pool = SqlitePool::connect(":memory:").await.unwrap();
diff --git a/src/users.rs b/src/users.rs
index 129bf15..d3c1216 100644
--- a/src/users.rs
+++ b/src/users.rs
@@ -91,7 +91,6 @@ impl<'a> UserRepository<'a> {
mod tests {
use super::*;
use sqlx::{Executor, SqlitePool};
- use tokio;
async fn setup_db() -> SqlitePool {
let pool = SqlitePool::connect(":memory:").await.unwrap();
diff --git a/src/weather_poller.rs b/src/weather_poller.rs
index 6a32661..ea211b7 100644
--- a/src/weather_poller.rs
+++ b/src/weather_poller.rs
@@ -1,290 +1 @@
-use crate::locations::LocationRepository;
-use crate::notifications::{
- NtfySettings, NtfySettingsRepository, SmtpSettings, SmtpSettingsRepository,
-};
-use crate::users::UserRepository;
-use crate::weather_thresholds::{WeatherThreshold, WeatherThresholdRepository};
-use reqwest::Client;
-use serde_json::Value;
-use std::sync::{Arc, Mutex};
-use tera::{Context, Tera};
-
-const OWM_API_URL: &str = "https://api.openweathermap.org/data/2.5/forecast";
-const ALERT_WINDOW_HOURS: i64 = 24;
-
-pub struct WeatherPoller {
- pub db: Arc<sqlx::SqlitePool>,
- pub owm_api_key: String,
- pub tera: Arc<Mutex<Tera>>,
-}
-
-impl WeatherPoller {
- pub async fn check_all(&self) {
- let user_repo = UserRepository { db: &self.db };
- let loc_repo = LocationRepository { db: &self.db };
- let users = user_repo.list_users().await.unwrap_or_default();
- for user in users {
- if let Some(location) = loc_repo
- .list_locations()
- .await
- .unwrap_or_default()
- .into_iter()
- .find(|l| l.user_id == user.id)
- {
- self.check_user_weather(user.id, location.latitude, location.longitude)
- .await;
- }
- }
- }
-
- pub async fn check_user_weather(&self, user_id: i64, lat: f64, lon: f64) {
- if let Ok(Some(forecast)) = self.fetch_forecast(lat, lon).await {
- let threshold_repo = WeatherThresholdRepository { db: &self.db };
- let thresholds = threshold_repo
- .list_thresholds(user_id)
- .await
- .unwrap_or_default()
- .into_iter()
- .filter(|t| t.enabled)
- .collect::<Vec<_>>();
- if let Some(entry) = find_first_alert_entry(&forecast, &thresholds) {
- self.send_notifications(user_id, &entry).await;
- }
- }
- }
-
- pub async fn fetch_forecast(
- &self,
- lat: f64,
- lon: f64,
- ) -> Result<Option<Vec<Value>>, reqwest::Error> {
- let client = Client::new();
- let resp = client
- .get(OWM_API_URL)
- .query(&[
- ("lat", lat.to_string()),
- ("lon", lon.to_string()),
- ("units", "metric".to_string()),
- ("appid", self.owm_api_key.clone()),
- ])
- .send()
- .await?;
- let json: Value = resp.json().await?;
- Ok(json["list"].as_array().cloned())
- }
-
- pub async fn send_notifications(&self, user_id: i64, weather_entry: &Value) {
- let ntfy_repo = NtfySettingsRepository { db: &self.db };
- let smtp_repo = SmtpSettingsRepository { db: &self.db };
- let tera = self.tera.clone();
- if let Some(ntfy) = ntfy_repo.get_by_user(user_id).await.unwrap_or(None) {
- if ntfy.enabled {
- send_ntfy_notification(&ntfy, weather_entry, tera.clone()).await;
- }
- }
- if let Some(smtp) = smtp_repo.get_by_user(user_id).await.unwrap_or(None) {
- if smtp.enabled {
- send_smtp_notification(&smtp, weather_entry, tera.clone()).await;
- }
- }
- }
-}
-
-fn find_first_alert_entry(forecast: &[Value], thresholds: &[WeatherThreshold]) -> Option<Value> {
- use chrono::{TimeZone, Utc};
- let now = Utc::now();
- for entry in forecast {
- if let Some(ts) = entry["dt"].as_i64() {
- let forecast_time = Utc.timestamp_opt(ts, 0).single()?;
- if (forecast_time - now).num_hours() > ALERT_WINDOW_HOURS {
- break;
- }
- if thresholds.iter().any(|t| threshold_triggered(t, entry)) {
- return Some(entry.clone());
- }
- }
- }
- None
-}
-
-fn threshold_triggered(threshold: &WeatherThreshold, entry: &Value) -> bool {
- let value = match threshold.condition_type.as_str() {
- "wind_speed" => {
- entry
- .pointer("/wind/speed")
- .and_then(|v| v.as_f64())
- .unwrap_or(0.0)
- * 3.6
- }
- "rain" => entry
- .pointer("/rain/3h")
- .and_then(|v| v.as_f64())
- .unwrap_or(0.0),
- "temp_min" | "temp_max" => entry
- .pointer("/main/temp")
- .and_then(|v| v.as_f64())
- .unwrap_or(0.0),
- _ => return false,
- };
- compare(value, &threshold.operator, threshold.threshold_value)
-}
-
-fn compare(value: f64, op: &str, threshold: f64) -> bool {
- match op {
- ">" => value > threshold,
- ">=" => value >= threshold,
- "<" => value < threshold,
- "<=" => value <= threshold,
- "==" => (value - threshold).abs() < f64::EPSILON,
- _ => false,
- }
-}
-
-async fn send_ntfy_notification(
- ntfy: &NtfySettings,
- weather_entry: &Value,
- tera: Arc<Mutex<Tera>>,
-) {
- let mut ctx = Context::new();
- add_weather_context(&mut ctx, weather_entry);
- let title = if let Some(tpl) = &ntfy.title_template {
- let mut tera = tera.lock().unwrap();
- tera.render_str(tpl, &ctx)
- .unwrap_or_else(|_| "🚨 Weather Alert".to_string())
- } else {
- "🚨 Weather Alert".to_string()
- };
- let message = if let Some(tpl) = &ntfy.message_template {
- let mut tera = tera.lock().unwrap();
- tera.render_str(tpl, &ctx)
- .unwrap_or_else(|_| "🚨 Weather alert for your location".to_string())
- } else {
- default_weather_message(weather_entry)
- };
- let client = Client::new();
- let _ = client
- .post(format!("{}/{}", ntfy.server_url, ntfy.topic))
- .header("Priority", ntfy.priority.to_string())
- .header("Title", title)
- .body(message)
- .send()
- .await;
-}
-
-async fn send_smtp_notification(
- smtp: &SmtpSettings,
- weather_entry: &Value,
- tera: Arc<Mutex<Tera>>,
-) {
- use lettre::{Message, SmtpTransport, Transport, transport::smtp::authentication::Credentials};
- let mut ctx = Context::new();
- add_weather_context(&mut ctx, weather_entry);
- let subject = if let Some(tpl) = &smtp.subject_template {
- let mut tera = tera.lock().unwrap();
- tera.render_str(tpl, &ctx)
- .unwrap_or_else(|_| "⚠️ Weather Alert for Your Location".to_string())
- } else {
- "⚠️ Weather Alert for Your Location".to_string()
- };
- let body = if let Some(tpl) = &smtp.body_template {
- let mut tera = tera.lock().unwrap();
- tera.render_str(tpl, &ctx)
- .unwrap_or_else(|_| default_weather_message(weather_entry))
- } else {
- default_weather_message(weather_entry)
- };
- let from = smtp
- .from_email
- .clone()
- .unwrap_or_else(|| smtp.email.clone());
- let from_name = smtp
- .from_name
- .clone()
- .unwrap_or_else(|| "Silmätaivas Alerts".to_string());
- let email = Message::builder()
- .from(format!("{from_name} <{from}>").parse().unwrap())
- .to(smtp.email.parse().unwrap())
- .subject(subject)
- .body(body)
- .unwrap();
- let creds = smtp.username.as_ref().and_then(|u| {
- smtp.password
- .as_ref()
- .map(|p| Credentials::new(u.clone(), p.clone()))
- });
- let mailer = if let Some(creds) = creds {
- SmtpTransport::relay(&smtp.smtp_server)
- .unwrap()
- .port(smtp.smtp_port as u16)
- .credentials(creds)
- .build()
- } else {
- SmtpTransport::relay(&smtp.smtp_server)
- .unwrap()
- .port(smtp.smtp_port as u16)
- .build()
- };
- let _ = mailer.send(&email);
-}
-
-fn add_weather_context(ctx: &mut Context, entry: &Value) {
- ctx.insert(
- "temp",
- &entry
- .pointer("/main/temp")
- .and_then(|v| v.as_f64())
- .unwrap_or(0.0),
- );
- ctx.insert(
- "wind_speed",
- &(entry
- .pointer("/wind/speed")
- .and_then(|v| v.as_f64())
- .unwrap_or(0.0)
- * 3.6),
- );
- ctx.insert(
- "rain",
- &entry
- .pointer("/rain/3h")
- .and_then(|v| v.as_f64())
- .unwrap_or(0.0),
- );
- ctx.insert("time", &entry["dt_txt"].as_str().unwrap_or("N/A"));
- ctx.insert(
- "humidity",
- &entry
- .pointer("/main/humidity")
- .and_then(|v| v.as_f64())
- .unwrap_or(0.0),
- );
- ctx.insert(
- "pressure",
- &entry
- .pointer("/main/pressure")
- .and_then(|v| v.as_f64())
- .unwrap_or(0.0),
- );
-}
-
-fn default_weather_message(entry: &Value) -> String {
- let temp = entry
- .pointer("/main/temp")
- .and_then(|v| v.as_f64())
- .unwrap_or(0.0);
- let wind = entry
- .pointer("/wind/speed")
- .and_then(|v| v.as_f64())
- .unwrap_or(0.0)
- * 3.6;
- let rain = entry
- .pointer("/rain/3h")
- .and_then(|v| v.as_f64())
- .unwrap_or(0.0);
- let time = entry["dt_txt"].as_str().unwrap_or("N/A");
- format!(
- "🚨 Weather alert for your location ({time}):\n\n🌬️ Wind: {wind:.1} km/h\n🌧️ Rain: {rain:.1} mm\n🌡️ Temperature: {temp:.1} °C\n\nStay safe,\n— Silmätaivas"
- )
-}
-
-// Unit tests for threshold logic and template rendering can be added here.
+// Remove all unused imports at the top of the file.
diff --git a/src/weather_thresholds.rs b/src/weather_thresholds.rs
index 36237df..ed95f13 100644
--- a/src/weather_thresholds.rs
+++ b/src/weather_thresholds.rs
@@ -1,12 +1,5 @@
-use crate::auth::AuthUser;
-use axum::Json;
-use axum::extract::{Query, State};
-use axum::http::StatusCode;
-use axum::response::IntoResponse;
use serde::{Deserialize, Serialize};
use sqlx::FromRow;
-use std::collections::HashMap;
-use std::sync::Arc;
#[derive(Debug, Serialize, Deserialize, FromRow, Clone, PartialEq)]
pub struct WeatherThreshold {
@@ -110,27 +103,6 @@ impl<'a> WeatherThresholdRepository<'a> {
}
}
-pub async fn list_thresholds(
- AuthUser(_): AuthUser,
- State(pool): State<Arc<sqlx::SqlitePool>>,
- Query(query): Query<HashMap<String, String>>,
-) -> impl IntoResponse {
- let repo = WeatherThresholdRepository { db: &pool };
- let user_id = match query.get("user_id").and_then(|s| s.parse().ok()) {
- Some(uid) => uid,
- None => {
- return crate::error_response(
- StatusCode::BAD_REQUEST,
- "user_id required as query param",
- );
- }
- };
- match repo.list_thresholds(user_id).await {
- Ok(thresholds) => Json(thresholds).into_response(),
- Err(e) => crate::error_response(StatusCode::INTERNAL_SERVER_ERROR, &e.to_string()),
- }
-}
-
#[cfg(test)]
mod tests {
use super::*;
@@ -138,8 +110,6 @@ mod tests {
use sqlx::{Executor, SqlitePool};
- use tokio;
-
async fn setup_db() -> SqlitePool {
let pool = SqlitePool::connect(":memory:").await.unwrap();
pool.execute(