This repository deploys through the local RNet/Gitea Actions runner documented at:
http://192.168.1.13/#python
Repository:  https://git.sh-inc.ru/avm/tg-bots.git
Branches:    python for tests, main for deployment
Host IP:     192.168.1.2
Container:   103 (tg-bots, 112)
Domains:     tg.sh-inc.ru, tg.sh-inc.dev
Secrets:     TELEGRAM_API_ID, TELEGRAM_API_HASH
The workflow writes a runtime .env during deployment. The file is not committed.
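That step can be sketched roughly as below; the target path default and the exact variable list are assumptions for illustration, not taken from the workflow itself.

```shell
# Hedged sketch of the workflow's runtime .env step. ENV_FILE and the
# variable list are assumptions; the real workflow may differ.
ENV_FILE="${ENV_FILE:-$PWD/.env}"   # in deployment this would live in the app dir
umask 077                           # the file carries secrets; keep it owner-only
cat > "$ENV_FILE" <<EOF
TELEGRAM_API_ID=${TELEGRAM_API_ID:-}
TELEGRAM_API_HASH=${TELEGRAM_API_HASH:-}
EOF
echo "wrote $ENV_FILE"
```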
cd /path/to/tg-bots
pve-deploy ensure 103 tg-bots 8192 4 64
bash deploy/proxmox/configure-lxc.sh 103
pve-deploy deploy 103 . deploy/docker-compose.lxc.yml
APP_IP="$(bash deploy/proxmox/ct-ip.sh 103)"
APP_UPSTREAM="http://${APP_IP}:8000" bash deploy/nginx/update-nginx-ui.sh 103
curl -fsS "http://${APP_IP}:8000/api/v1/health"
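If the container is still starting, the health probe above can be wrapped in a small retry loop. This helper is a sketch, not part of the deploy tooling; the attempt count and delay defaults are arbitrary.

```shell
# Hedged sketch: poll a health URL until it answers or attempts run out.
wait_for_health() {
  url="$1"; attempts="${2:-30}"; delay="${3:-2}"
  i=1
  while [ "$i" -le "$attempts" ]; do
    if curl -fsS "$url" >/dev/null 2>&1; then
      echo "healthy after $i attempt(s)"
      return 0
    fi
    i=$((i + 1))
    sleep "$delay"
  done
  echo "health check failed after $attempts attempts" >&2
  return 1
}

# Usage (APP_IP from deploy/proxmox/ct-ip.sh):
# wait_for_health "http://${APP_IP}:8000/api/v1/health"
```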
The LXC configure script mounts the OMV media share through NFSv4:
192.168.1.23:/media -> /mnt/omw-media
OMV showmount -e exposes the NFSv3 path as /export/media, but /export is the
NFSv4 pseudo-root (fsid=0), so NFSv4 clients must mount /media.
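For reference, a manual mount matching what the configure script sets up might look like this. The mount options are assumptions for illustration; only the server, source path, and mountpoint come from the notes above.

```
# manual mount (note /media, not /export/media, under NFSv4):
#   mount -t nfs4 192.168.1.23:/media /mnt/omw-media
# equivalent /etc/fstab entry (options are assumptions):
192.168.1.23:/media  /mnt/omw-media  nfs4  defaults,_netdev  0  0
```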
The compose file mounts it read-only into app containers as:
/mnt/omw-media:/shared/media:ro
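In compose terms that bind looks roughly like the fragment below; the service name `app` is an assumption, not taken from the repo's compose file.

```yaml
services:
  app:
    volumes:
      # host NFSv4 mount -> read-only path inside the container
      - /mnt/omw-media:/shared/media:ro
```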
Telegram egress provider configs are file-backed and should live under:
/data/telegram-egress
Relevant runtime env values:
TELEGRAM_EGRESS_MODE=direct
TELEGRAM_EGRESS_ENABLED=false
TELEGRAM_EGRESS_PROVIDER=
TELEGRAM_EGRESS_STATE_DIR=/data/telegram-egress
TELEGRAM_EGRESS_CONTROL_URL=
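A small pre-deploy sanity check over these values can be sketched as below. This helper is not part of the repo; the rule it enforces (an enabled egress needs a named provider and an existing state dir) is an assumption about how the settings relate.

```shell
# Hedged sketch: validate egress env consistency before deploying.
check_egress_env() {
  if [ "${TELEGRAM_EGRESS_ENABLED:-false}" = "true" ]; then
    if [ -z "${TELEGRAM_EGRESS_PROVIDER:-}" ]; then
      echo "TELEGRAM_EGRESS_ENABLED=true but TELEGRAM_EGRESS_PROVIDER is empty" >&2
      return 1
    fi
    if [ ! -d "${TELEGRAM_EGRESS_STATE_DIR:-/data/telegram-egress}" ]; then
      echo "state dir missing: ${TELEGRAM_EGRESS_STATE_DIR:-/data/telegram-egress}" >&2
      return 1
    fi
  fi
  return 0
}
```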
Current deployment note:
deploy/docker-compose.lxc.telegram-egress.yml runs telegram-egress on Gluetun and moves the app to internal port 8001. Use http://127.0.0.1:8081 for the local Bot API and http://127.0.0.1:8000 for the Gluetun control server.
Example deploy command for the VPN-enabled stack:
pve-deploy deploy 103 . deploy/docker-compose.lxc.telegram-egress.yml
APP_IP="$(bash deploy/proxmox/ct-ip.sh 103)"
ssh "root@${APP_IP}" \
"cd /opt/app && docker compose -f deploy/docker-compose.lxc.telegram-egress.yml up -d --build --force-recreate --remove-orphans"
curl -fsS "http://${APP_IP}:8000/api/v1/health"