author     Dawid Rycerz <dawid@rycerz.xyz>  2025-07-16 23:03:40 +0300
committer  Dawid Rycerz <dawid@rycerz.xyz>  2025-07-16 23:03:40 +0300
commit     1aee0b802cad9fc9343b6c2966ba112f9b762f7c (patch)
tree       53d9551fbfd3df01ac61ecd1128060a9a9727a84 /README.md
parent     dbb25297da61fe393ca1e8a6b6c6beace2513e0a (diff)
feat: refactor and remove lib usage
Diffstat (limited to 'README.md')
-rw-r--r--  README.md | 257
1 file changed, 36 insertions(+), 221 deletions(-)
diff --git a/README.md b/README.md
index 3910fc4..6faebe1 100644
--- a/README.md
+++ b/README.md
@@ -1,6 +1,6 @@
-# Silmataivas (Rust Rewrite)
+# Silmataivas (Rust Version)
-Silmataivas is a weather monitoring service that sends personalized alerts based on user-defined thresholds and notification preferences. This is the Rust rewrite, providing a RESTful API for managing users, locations, weather thresholds, and notification settings.
+Silmataivas is a weather monitoring service that sends personalized alerts based on user-defined thresholds and notification preferences. This is the Rust version, providing a RESTful API for managing users, locations, weather thresholds, and notification settings.
## Features
- Weather monitoring using OpenWeatherMap API
@@ -8,244 +8,59 @@ Silmataivas is a weather monitoring service that sends personalized alerts based
- Flexible notifications: NTFY (push) and SMTP (email)
- User-specific configuration
- RESTful API for all resources
-- Automatic OpenAPI documentation at `/docs`
-## API Usage
-All API endpoints (except `/health` and `/docs`) require authentication using a Bearer token:
-
-```
-Authorization: Bearer <user_id>
-```
-
-### Main Endpoints
-- `GET /health` — Health check
-- `GET /api/users` — List users
-- `POST /api/users` — Create user
-- `GET /api/users/:id` — Get user
-- `PUT /api/users/:id` — Update user
-- `DELETE /api/users/:id` — Delete user
-- `GET /api/locations` — List locations
-- `POST /api/locations` — Create location
-- `GET /api/locations/:id` — Get location
-- `PUT /api/locations/:id` — Update location
-- `DELETE /api/locations/:id` — Delete location
-- `GET /api/weather-thresholds?user_id=...` — List thresholds for user
-- `POST /api/weather-thresholds` — Create threshold
-- `GET /api/weather-thresholds/:id/:user_id` — Get threshold
-- `PUT /api/weather-thresholds/:id/:user_id` — Update threshold
-- `DELETE /api/weather-thresholds/:id/:user_id` — Delete threshold
-- `GET /api/ntfy-settings/:user_id` — Get NTFY settings
-- `POST /api/ntfy-settings` — Create NTFY settings
-- `PUT /api/ntfy-settings/:id` — Update NTFY settings
-- `DELETE /api/ntfy-settings/:id` — Delete NTFY settings
-- `GET /api/smtp-settings/:user_id` — Get SMTP settings
-- `POST /api/smtp-settings` — Create SMTP settings
-- `PUT /api/smtp-settings/:id` — Update SMTP settings
-- `DELETE /api/smtp-settings/:id` — Delete SMTP settings
-
-For full details and request/response schemas, see the interactive OpenAPI docs at [`/docs`](http://localhost:4000/docs).
-
----
-
-To start your Phoenix server:
-
- * Run `mix setup` to install and setup dependencies
- * Copy `.env.example` to `.env` and configure your environment variables: `cp .env.example .env`
- * Load environment variables: `source .env` (or use your preferred method)
- * Start Phoenix endpoint with `mix phx.server` or inside IEx with `iex -S mix phx.server`
-
-Now you can visit [`localhost:4000`](http://localhost:4000) from your browser.
-
-## Database Configuration
-
-This application supports both SQLite and PostgreSQL:
+## Project Structure
+- **src/main.rs**: All application logic and API endpoints (no lib.rs, not a library)
+- **src/**: Modules for users, locations, notifications, weather thresholds, etc.
+- **migrations/**: SQL migrations for SQLite
+- **Dockerfile**: For containerized deployment
- * Default: SQLite (no setup required)
- * To configure: Set `DB_ADAPTER` to either `sqlite` or `postgres` in your environment
- * Database location:
- * SQLite: `DATABASE_URL=sqlite3:/path/to/your.db` (defaults to `~/.silmataivas.db`)
- * PostgreSQL: `DATABASE_URL=postgres://user:password@host/database`
+## Quick Start
-Ready to run in production? Please [check our deployment guides](https://hexdocs.pm/phoenix/deployment.html).
-
-## Learn more
-
- * Official website: https://www.phoenixframework.org/
- * Guides: https://hexdocs.pm/phoenix/overview.html
- * Docs: https://hexdocs.pm/phoenix
- * Forum: https://elixirforum.com/c/phoenix-forum
- * Source: https://github.com/phoenixframework/phoenix
-
-
-## Installation
-
-### Using Docker (Recommended)
-
-The easiest way to run the application is with Docker:
+### Prerequisites
+- Rust (see [rustup.rs](https://rustup.rs/))
+- SQLite (default, or set `DATABASE_URL` for PostgreSQL)
+### Running Locally
```bash
# Clone the repository
-git clone https://github.com/yourusername/silmataivas.git
-cd silmataivas
+git clone https://codeberg.org/silmataivas/silmataivas.git
+cd silmataivas
-# Run the application with the helper script (creates .env file if needed)
-./docker-run.sh
+# Build and run
+cargo build --release
+./target/release/silmataivas
```
-By default, the application uses SQLite. To use PostgreSQL instead:
+The server listens on port 4000 by default. Set the `DATABASE_URL` environment variable to change the database location.
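+
+For example (a sketch only: the exact connection-string formats accepted depend on the database driver, and the paths and credentials below are placeholders):
+
+```bash
+# SQLite file in the working directory
+DATABASE_URL=sqlite://silmataivas.db ./target/release/silmataivas
+
+# PostgreSQL
+DATABASE_URL=postgres://user:password@host/database ./target/release/silmataivas
+```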
+### Using Docker
```bash
-# Set the environment variable before running
-DB_ADAPTER=postgres ./docker-run.sh
+docker build -t silmataivas .
+docker run -p 4000:4000 -e DATABASE_URL=sqlite:///data/silmataivas.db silmataivas
```
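+
+To keep the SQLite database outside the container, a volume can be mounted at the `/data` path used in the `DATABASE_URL` above (a sketch; the host path is arbitrary):
+
+```bash
+# persist the database on the host under ./data
+docker run -p 4000:4000 \
+  -v "$(pwd)/data:/data" \
+  -e DATABASE_URL=sqlite:///data/silmataivas.db \
+  silmataivas
+```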
-### Manual Installation
-
-For a manual installation on Arch Linux:
-
-```bash
-sudo pacman -Syu
-sudo pacman -S git base-devel elixir cmake file erlang
-sudo pacman -S postgresql
-sudo -iu postgres initdb -D /var/lib/postgres/data
-sudo systemctl enable --now postgresql.service
-sudo useradd -r -s /bin/false -m -d /var/lib/silmataivas -U silmataivas
-sudo mkdir -p /opt/silmataivas
-sudo chown -R silmataivas:silmataivas /opt/silmataivas
-sudo mkdir -p /etc/silmataivas
-sudo touch /etc/silmataivas/env
-sudo chmod 0600 /etc/silmataivas/env
-sudo chown -R silmataivas:silmataivas /etc/silmataivas
-sudo touch /etc/systemd/system/silmataivas.service
-sudo pacman -S nginx
-sudo mkdir -p /etc/nginx/sites-{available,enabled}
-sudo pacman -S certbot certbot-nginx
-sudo mkdir -p /var/lib/letsencrypt/
-sudo touch /etc/nginx/sites-available/silmataivas.nginx
-sudo ln -s /etc/nginx/sites-available/silmataivas.nginx /etc/nginx/sites-enabled/silmataivas.nginx
-sudo systemctl enable silmataivas.service
-```
-
-## CI/CD Pipeline
-
-Silmataivas uses GitLab CI/CD for automated testing, building, and deployment. The pipeline follows the GitHub flow branching strategy and uses semantic versioning.
-
-### Branching Strategy
-
-We follow the GitHub flow branching strategy:
-
-1. Create feature branches from `main`
-2. Make changes and commit using conventional commit format
-3. Open a merge request to `main`
-4. After review and approval, merge to `main`
-5. Automated release process triggers on `main` branch
-
-### Conventional Commits
-
-All commits should follow the [Conventional Commits](https://www.conventionalcommits.org/) format:
-
-```
-<type>[optional scope]: <description>
-
-[optional body]
-
-[optional footer(s)]
-```
+## API Usage
+All API endpoints (except `/health`) require authentication using a Bearer token:
-Types:
-- `feat`: A new feature (minor version bump)
-- `fix`: A bug fix (patch version bump)
-- `docs`: Documentation changes
-- `style`: Code style changes (formatting, etc.)
-- `refactor`: Code changes that neither fix bugs nor add features
-- `perf`: Performance improvements
-- `test`: Adding or updating tests
-- `chore`: Maintenance tasks
-
-Breaking changes:
-- Add `BREAKING CHANGE:` in the commit body
-- Or use `!` after the type/scope: `feat!: breaking change`
-
-Examples:
```
-feat: add user authentication
-fix: correct timezone handling in weather data
-docs: update deployment instructions
-refactor: optimize location lookup
-feat(api): add rate limiting
-fix!: change API response format
+Authorization: Bearer <user_id>
```
-### CI/CD Pipeline Stages
-
-The GitLab CI/CD pipeline consists of the following stages:
-
-1. **Lint**: Code quality checks
- - Elixir format check
- - Hadolint for Dockerfile
+See the source code for endpoint details. (OpenAPI docs are not yet auto-generated in this Rust version.)
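+
+As an illustration (the endpoint paths are taken from the previous endpoint list and may have changed; `<user_id>` stands in for a real user id):
+
+```bash
+# health check, no token required
+curl http://localhost:4000/health
+
+# authenticated request
+curl -H "Authorization: Bearer <user_id>" http://localhost:4000/api/users
+```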
-2. **Test**: Run tests
- - Unit tests
- - Integration tests
+## Migration Note
+This project was previously implemented in Elixir/Phoenix. All Elixir/Phoenix code, references, and setup instructions have been removed. The project is now a Rust binary only (no library, no Elixir code).
-3. **Build**: Build Docker image
- - Uses Kaniko to build the image
- - Pushes to GitLab registry with branch tag
+## Development
+- Format: `cargo fmt`
+- Lint: `cargo clippy`
+- Test: `cargo test`
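+
+The three commands above can be chained as a quick pre-commit check (the flags shown are common defaults, not project requirements):
+
+```bash
+cargo fmt -- --check && cargo clippy -- -D warnings && cargo test
+```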
-4. **Validate**: (Only for feature branches)
- - Runs the Docker container
- - Checks the health endpoint
+## Contributing
+- Use [Conventional Commits](https://www.conventionalcommits.org/) for PRs (see the example below)
+- See the code for module structure and API details
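+
+For example, a patch-level fix in Conventional Commits format:
+
+```bash
+git commit -m "fix: correct timezone handling in weather data"
+```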
-5. **Release**: (Only on main branch)
- - Uses semantic-release to determine version
- - Creates Git tag
- - Generates changelog
- - Pushes Docker image with version tag
- - Pushes Docker image with latest tag
-
-### Versioning
-
-We use [semantic-release](https://semantic-release.gitbook.io/semantic-release/) to automate version management and package publishing based on [Semantic Versioning 2.0](https://semver.org/) rules:
-
-- **MAJOR** version when making incompatible API changes (breaking changes)
-- **MINOR** version when adding functionality in a backward compatible manner
-- **PATCH** version when making backward compatible bug fixes
-
-The version is automatically determined from conventional commit messages.
-
-### Required GitLab CI/CD Variables
-
-The following variables need to be set in GitLab CI/CD settings:
-
-- `CI_REGISTRY`, `CI_REGISTRY_USER`, `CI_REGISTRY_PASSWORD`: Provided by GitLab
-- `OPENWEATHERMAP_API_KEY`: For testing
-- `SECRET_KEY_BASE`: For Phoenix app
-- `AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY`: For email functionality
-- `STAGING_SERVER`, `STAGING_USER`, `STAGING_DEPLOY_KEY`: For staging deployment
-- `PRODUCTION_SERVER`, `PRODUCTION_USER`, `PRODUCTION_DEPLOY_KEY`: For production deployment
-
-## Silmataivas Project Guidelines
-
-### Build & Run Commands
-
-- Setup: `mix setup` (installs deps, creates DB, runs migrations)
-- Run server: `mix phx.server` or `iex -S mix phx.server` (interactive)
-- Format code: `mix format`
-- Lint: `mix dialyzer` (static analysis)
-- Test: `mix test`
-- Single test: `mix test test/path/to/test_file.exs:line_number`
-- Create migration: `mix ecto.gen.migration name_of_migration`
-- Run migrations: `mix ecto.migrate`
-
-### Code Style Guidelines
+---
-- Format code using `mix format` (enforces Elixir community standards)
-- File naming: snake_case for files; module names match file paths
-- Modules: PascalCase with nested namespaces matching directory structure
-- Functions: snake_case, use pipes (|>) for multi-step operations
-- Variables/atoms: snake_case, descriptive names
-- Schema fields: snake_case, explicit types
-- Documentation: use @moduledoc and @doc for all public modules/functions
-- Error handling: use {:ok, result} | {:error, reason} pattern for operations
-- Testing: write tests for all modules, use descriptive test names
-- Imports: group Elixir standard lib, Phoenix, and other dependencies
+For any issues or questions, please open an issue on Codeberg.