- `calendar_feeds.py`: ICS/webcal feed loading and parsing
- `settings.py`: env-driven runtime settings
- `frontend/src/App.tsx`: client routes and page composition
- `frontend/src/MarkdownContent.tsx`: safe markdown renderer used in lessons and discussions
- `scripts/start.sh`: main startup command for local runs
## Repo Layout Notes
- The root repository is the site application.
- `examples/quadrature-encoder-course/` is a separate nested git repo used as sample content. It is intentionally ignored by the root repo and should stay that way.
## First-Time Setup
### Python
```bash
python3 -m venv .venv
.venv/bin/pip install -r requirements.txt
```
### Frontend
```bash
cd frontend
~/.bun/bin/bun install
```
## Environment
Runtime configuration is loaded from shell env, then `.env`, then `.env.local` through `scripts/start.sh`.
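As a rough illustration of that layering, here is a minimal sketch. The `load_env` helper is made up, and the precedence rule (later sources override earlier ones, so `.env.local` wins) is an assumption about `scripts/start.sh`, not its actual logic:

```python
import os
from pathlib import Path


def load_env(*files: str) -> dict:
    """Merge runtime settings: start from the shell environment, then
    layer each file on top, later files overriding earlier ones.
    Deliberately minimal parser: no quoting or `export` handling."""
    merged = dict(os.environ)
    for name in files:
        path = Path(name)
        if not path.exists():
            continue
        for line in path.read_text().splitlines():
            line = line.strip()
            # Skip blanks, comments, and lines without an assignment.
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            merged[key.strip()] = value.strip()
    return merged


settings = load_env(".env", ".env.local")
```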
- Browser sign-in uses Forgejo OAuth/OIDC. `APP_BASE_URL` must match the URL opened in the browser, `CORS_ALLOW_ORIGINS` should include that origin, and the Forgejo OAuth app must include `/api/auth/forgejo/callback` under that base URL.
- Browser OAuth requests only identity scopes. The backend stores the resulting Forgejo token in an encrypted `HttpOnly` cookie and may use it only after enforcing public-repository checks for writes.
- `FORGEJO_TOKEN` is optional and should be treated as a read-only local fallback for the public content cache. Browser sessions and API token calls may write issues/comments only after verifying the target repo is public.
- `/api/prototype` uses a server-side cache for public Forgejo content. `FORGEJO_CACHE_TTL_SECONDS=0` disables it; successful discussion replies invalidate it.
- General discussion creation requires `FORGEJO_GENERAL_DISCUSSION_REPO`. Linked discussions are created in the content repo and include canonical app URLs in the Forgejo issue body.
- Forgejo webhooks should POST to `/api/forgejo/webhook`; when `FORGEJO_WEBHOOK_SECRET` is set, the backend validates Forgejo/Gitea-style HMAC headers.
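That HMAC check can be sketched roughly as follows. This is an illustrative helper, not the backend's actual code; it assumes the Gitea/Forgejo convention of a signature header carrying the hex HMAC-SHA256 of the raw request body:

```python
import hashlib
import hmac


def verify_webhook_signature(secret: str, body: bytes, signature: str) -> bool:
    """Validate a Forgejo/Gitea-style webhook signature.

    The signature header carries the hex-encoded HMAC-SHA256 of the raw
    request body, keyed with the shared secret (FORGEJO_WEBHOOK_SECRET)."""
    expected = hmac.new(secret.encode(), body, hashlib.sha256).hexdigest()
    # compare_digest avoids leaking timing information.
    return hmac.compare_digest(expected, signature)
```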
## Deployment
- Container port mapping: host `8800` to container `8000`
- Reverse proxy: LXC `102` routes `discourse.onl` to `192.168.1.220:8800`
The local `.env.proxmox` file contains Proxmox credentials and LXC settings. It is ignored by git and must not be printed, committed, or copied into the app container.
The deployed app uses `/opt/robot-u-site/.env` on the LXC. That file contains Forgejo OAuth settings, `AUTH_SECRET_KEY`, optional `FORGEJO_TOKEN` for the server-side public content cache, calendar feeds, and the deployed `APP_BASE_URL`. Treat it as secret material and do not print values.
The current deployed OAuth redirect URI is:
```text
https://discourse.onl/api/auth/forgejo/callback
```
Forgejo OAuth sign-in from the public URL requires that exact callback URL to be allowed in the Forgejo OAuth app.
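For illustration, the authorize redirect the backend sends the browser to might be built like this. The `authorize_url` helper is made up; `/login/oauth/authorize` is the standard Gitea/Forgejo OAuth2 endpoint, and the real handler also requests identity scopes and verifies `state` on return:

```python
import urllib.parse


def authorize_url(forgejo_base: str, client_id: str, app_base_url: str, state: str) -> str:
    """Build the Forgejo OAuth authorize redirect. The redirect_uri must
    exactly match the callback registered in the Forgejo OAuth app."""
    params = urllib.parse.urlencode({
        "client_id": client_id,
        "redirect_uri": f"{app_base_url}/api/auth/forgejo/callback",
        "response_type": "code",
        "state": state,
    })
    return f"{forgejo_base}/login/oauth/authorize?{params}"
```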
Important deployment notes:
- The LXC was initially created with gateway/DNS `192.168.1.1`, but this network uses `192.168.1.2`. If package installs hang or outbound network fails, check `ip route` and `/etc/resolv.conf` first.
- Proxmox persistent LXC config was updated so `net0` uses `gw=192.168.1.2`, and nameserver is `192.168.1.2`.
- Docker inside the unprivileged LXC requires Proxmox features `nesting=1,keyctl=1`; those are set on the current container.
- Ubuntu package installs were made reliable by adding `/etc/apt/apt.conf.d/99force-ipv4` with `Acquire::ForceIPv4 "true";`.
- The current LXC has `512MiB` memory and `512MiB` swap. It runs the app, but large builds or future services may need more memory.
- `FORGEJO_TOKEN` is needed server-side if anonymous Forgejo API discovery returns no content. Without that token, `/api/prototype` can return zero courses/posts/discussions even though the app is healthy.
Do not overwrite `/opt/robot-u-site/.env` during rsync. Update it deliberately when runtime config changes.
Current production env notes:
- `/opt/robot-u-site/.env` should use `APP_BASE_URL=https://discourse.onl`.
- `AUTH_COOKIE_SECURE=true` is required for the public HTTPS site.
- `CORS_ALLOW_ORIGINS=https://discourse.onl` is the current public origin.
- A pre-domain backup exists on the app LXC at `/opt/robot-u-site/.env.backup.20260415T101957Z`.
CI state:
- `.forgejo/workflows/ci.yml` runs on `docker`.
- The `check` job manually installs `CI_REPO_SSH_KEY`, clones `git@aksal.cloud:Robot-U/robot-u-site.git`, installs `uv` and Bun, then runs Python and frontend checks.
- The `deploy` job runs after `check` on `push` events, installs `DEPLOY_SSH_KEY`, clones the repo, rsyncs it to `root@192.168.1.220:/opt/robot-u-site/`, rebuilds Docker Compose, and checks `/health`.
- The repo has a read-only deploy key and matching Forgejo Actions secret for CI clone.
- The app LXC has a CI deploy public key in `root`'s `authorized_keys`, and the matching private key is stored in the Forgejo Actions secret `DEPLOY_SSH_KEY`.
- `scripts/bootstrap_lxc_deploy_key.py` recreates or rotates the LXC deploy key. It uses `FORGEJO_API_TOKEN`, appends the generated public key to the LXC user's `authorized_keys`, verifies SSH, and stores the generated private key in `DEPLOY_SSH_KEY`.
- The deploy rsync excludes `.env` and `.env.*`, so production runtime secrets and backups on `/opt/robot-u-site` are preserved.
## Reverse Proxy LXC 102
The reverse proxy host is Proxmox LXC `102`:
- LXC hostname: `reverse-proxy`
- LXC IP: `192.168.1.203/24`
- Gateway: `192.168.1.2`
- Main jobs: nginx reverse proxy, LiteLLM proxy, and custom Porkbun DDNS script
- nginx service: `nginx.service`
- LiteLLM service: `litellm.service`
- Porkbun service: `porkbun-ddns.service`
- Robot U public site: `discourse.onl`
- Robot U nginx config: `/etc/nginx/sites-available/discourse.onl`
- Robot U certificate: `/etc/letsencrypt/live/discourse.onl/`
- Robot U upstream: `http://192.168.1.220:8800`
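For orientation, the site file likely follows the usual certbot plus reverse-proxy shape. This is an illustrative sketch only, not a copy of the real config; directives and header choices are assumptions:

```nginx
server {
    listen 443 ssl;
    server_name discourse.onl;

    # Certbot-managed certificate paths from the bullet above.
    ssl_certificate     /etc/letsencrypt/live/discourse.onl/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/discourse.onl/privkey.pem;

    location / {
        # Upstream is the app LXC's mapped container port.
        proxy_pass http://192.168.1.220:8800;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```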
Do not bundle unrelated maintenance. If asked to update LiteLLM, do not change nginx or Porkbun DNS config unless explicitly requested. As of the last LiteLLM update, `porkbun-ddns.service` was failed and was intentionally left untouched.
The `discourse.onl` nginx site and its Porkbun DDNS instance were created on April 15, 2026 following the existing `aksal.cloud` pattern:
- Service user/group: `porkbun-discourse:porkbun-discourse`
- Service: `porkbun-ddns-discourse-onl.service`
- Timer: `porkbun-ddns-discourse-onl.timer`
- Managed records: `A discourse.onl` and `A *.discourse.onl`
- Current managed IP as of setup: `64.30.74.112`
The `discourse.onl` copy of `updateDNS.sh` was patched locally to make Porkbun curl calls use `--fail` and stronger retries, preventing transient 503 HTML bodies from being concatenated with JSON. A PR with the same fix was opened against the upstream Porkbun DDNS repo: `https://aksal.cloud/Amargius_Commons/porkbun_ddns_script/pulls/1`.
Direct SSH to `root@192.168.1.203`, `litellm@192.168.1.203`, or `root@192.168.1.200` may not work from this workspace. If SSH fails, use the Proxmox API credentials in the ignored `.env.proxmox` file to open a Proxmox node terminal and run `pct exec 102 -- ...`.
Proxmox API terminal access pattern:
1. Read `.env.proxmox`; never print credentials.
2. `POST /api2/json/access/ticket` with the Proxmox username/password.
3. `POST /api2/json/nodes/proxmox/termproxy` using the returned ticket and CSRF token.
4. Connect to `wss://<proxmox-host>:8006/api2/json/nodes/proxmox/vncwebsocket?port=<port>&vncticket=<ticket>`.
When upgrading LiteLLM, stop the service first. Container `102` has only `512MiB` RAM and tends to use swap, and stopping the proxy keeps pip from competing with the running process.
In the April 15, 2026 update, LiteLLM was upgraded from `1.81.15` to `1.83.7`. After the upgrade, `/health/liveliness` returned `"I'm alive!"`, `/health/readiness` reported `db=connected`, and `pip check` reported no broken requirements. Startup logs may briefly print `Unable to connect to DB. DATABASE_URL found in environment, but prisma package not found.`; treat readiness and the Prisma process/import check as the source of truth before deciding it is an actual failure.