If a bot token previously had a webhook configured (e.g. from a prior
install), getUpdates returns nothing and chat ID detection fails.
Adding a deleteWebhook call before polling fixes fresh installs that
reuse an existing bot token.
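A minimal sketch of the fix, assuming a $BOT_TOKEN variable (the real script's variable names may differ). Telegram disables getUpdates while a webhook is set, so the webhook has to be cleared before polling starts:

```shell
# Placeholder token; the real one comes from the user's config.
BOT_TOKEN="123456:TEST-TOKEN"

api_url() {
  # Build a Bot API method URL, e.g. api_url getUpdates
  echo "https://api.telegram.org/bot${BOT_TOKEN}/$1"
}

# Clear any stale webhook first (idempotent, safe when none is configured):
#   curl -s "$(api_url deleteWebhook)" >/dev/null
# Then polling works again on installs that reuse the token:
#   curl -s "$(api_url getUpdates)?timeout=30"
api_url deleteWebhook
```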
The Telegram bot's CPU alert was reading from docker stats (per-container
CPU), producing nonsensical values like 183%, 228%, 297% on multi-core
systems. Replaced it with a /proc/stat delta-based system-wide CPU
reading, the same method the TUI dashboard uses.
Changes:
- CPU alert now reads system-wide CPU from /proc/stat (0-100% range)
- Threshold raised from 90% to 96%
- Added TELEGRAM_CPU_ALERT toggle (option 6 in Telegram settings menu)
- RAM alert kept separate, still uses docker stats (per-container memory)
- /settings Telegram command now shows CPU alert status
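A minimal sketch of the /proc/stat delta method described above (the real script's field handling may differ): sample the aggregate "cpu" line twice and take busy jiffies over total jiffies. Unlike per-container docker stats, this always lands in 0-100 regardless of core count:

```shell
read_cpu() {
  # busy = user+nice+system+irq+softirq; total = busy + idle + iowait
  awk '/^cpu /{busy=$2+$3+$4+$7+$8; print busy, busy+$5+$6}' /proc/stat
}

read busy1 total1 <<< "$(read_cpu)"
sleep 1
read busy2 total2 <<< "$(read_cpu)"

# Busy share of the interval, as a whole percentage
cpu_pct=$(awk -v b=$((busy2 - busy1)) -v t=$((total2 - total1)) \
  'BEGIN { printf "%.0f", (t > 0) ? 100 * b / t : 0 }')
echo "$cpu_pct"
```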
load_servers() used `label`, `conn`, `auth_type` as while-read loop
variables, which clobbered identically-named locals in
add_server_interactive() due to bash dynamic scoping. This caused the
server label to be empty when adding a 2nd+ server.
Rename loop variables to `_l`, `_c`, `_a` to avoid the collision.
Closes #39
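A condensed reproduction of the bug, with simplified stand-ins for the real functions. Bash is dynamically scoped: a callee that assigns a variable without its own `local` declaration writes straight into its caller's locals:

```shell
load_servers() {
  # fixed: _l/_c/_a; naming these label/conn/auth_type would overwrite
  # the caller's identically-named locals below
  while IFS='|' read -r _l _c _a; do
    :   # process one configured server entry
  done <<< "server-1|user@host|key"
}

add_server_interactive() {
  local label="my-new-server" conn auth_type
  load_servers            # before the rename, this call emptied $label
  echo "$label"           # survives with the _l/_c/_a loop variables
}

add_server_interactive
```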
awk printf "%.2f" outputs "1,00" instead of "1.00" on systems with
comma-decimal locales (Turkish, German, etc.), causing docker to
reject the --cpus value. Force LC_ALL=C for the awk call.
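Sketch of the locale fix: pin LC_ALL=C for just the awk invocation, so printf always uses a period as the decimal separator no matter what locale the user's shell runs under:

```shell
# Under tr_TR.UTF-8 or de_DE.UTF-8 the unpinned call prints "1,50",
# which docker run --cpus "$cpus" rejects.
cpus=$(LC_ALL=C awk 'BEGIN { printf "%.2f", 1.5 }')
echo "$cpus"
```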
- Capture and display docker run stderr instead of silently discarding it
- Suppress docker volume create / docker rm stdout leaks (snowflake-data)
- Auto-pull snowflake image if not cached locally before docker run
- Add Phase 4 to update flow: pull snowflake image alongside conduit image
- Fix same stdout leaks in start_conduit/restart_conduit/recreate_containers
- Cache get_net_speed() every 2 cycles (saves 500ms on alternate refreshes)
- Single-pass awk for 6h/12h/24h connection snapshots (1 read instead of 3)
- Parallelize Snowflake Prometheus metric fetches across instances
- Cache systemctl init system detection per session
- Reuse running_count for service status instead of an extra docker ps call
- Remove dead get_connection_snapshot() function
Major features:
- Snowflake proxy management with per-country stats and Prometheus metrics
- Multi-server dashboard TUI with live refresh, bulk actions, server management
- Non-root SSH support with automatic sudo prefix and passwordless sudo detection
- Data cap monitoring with per-direction (upload/download/total) enforcement
- Remote server table with CPU(temp), upload, download columns
Improvements:
- TB support in all byte formatters (format_bytes, _fmt_bytes, format_gb)
- Snowflake timeout display in detailed views, "connections served" labeling
- Stricter Docker container name matching in service status check
- check_alerts() now aggregates CPU/RAM across all containers
- Network interface detection uses keyword matching instead of fragile positional parsing
- Tracker stuck-container Telegram notification uses direct curl (standalone fix)
- Timeout exit code check corrected (SIGTERM=143, not SIGKILL=137)
- Hardened backup filename quoting in docker sh -c
- README updated for v1.3 with full English and Farsi changelogs
- Split CPU display into App CPU (containers) and System CPU (host)
- Added CPU temperature from hwmon (cross-platform)
- Added system RAM usage alongside app RAM
- Added get_container_stats() to telegram script for proper aggregation
- Fixed build_report() to aggregate all containers instead of head -1
Use the in-memory offset as the primary tracker instead of relying solely
on the last_update_id file. The file is only read on the first call, for
persistence across restarts. Commands process correctly even if the file
is not writable.
Fixes #34
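A sketch of the offset strategy with stand-in paths and function names: the in-memory OFFSET variable is authoritative, last_update_id is read once at startup, and writes are best-effort, so an unwritable file no longer stalls command processing:

```shell
OFFSET_FILE=$(mktemp)   # stand-in for the real last_update_id path
OFFSET=""

current_offset() {
  # Seed from disk only on the first call; afterwards memory wins.
  if [ -z "$OFFSET" ] && [ -s "$OFFSET_FILE" ]; then
    OFFSET=$(cat "$OFFSET_FILE")
  fi
  echo "${OFFSET:-0}"
}

advance_offset() {
  OFFSET=$1
  echo "$OFFSET" > "$OFFSET_FILE" 2>/dev/null || true   # persistence is optional
}

advance_offset 42
advance_offset 43      # still tracked in memory even if the write fails
current_offset
```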
- Tracker used wrong container names (intgpsiphonclient instead of conduit)
- Tracker parsed numClients instead of [STATS] Connected: format
- Tracker script was never regenerated on update (the function was inside a heredoc)
- Added regen-tracker command and --update-components now calls it
- Dashboard skips recording when tracker is active (no double entries)
Tracker was not loading settings.conf, so CONTAINER_COUNT defaulted to 1.
This caused connection history to record 0 for all entries, resulting in
wrong average and 6h/12h/24h snapshot values.
- Read CPU temp from hwmon (coretemp/k10temp) instead of thermal_zone0
- Average all CPU core temperatures for accurate reading
- Support Intel, AMD, and ARM thermal drivers
- Cache average connections for 5 minutes to reduce file I/O
- Split dashboard footer into two lines for better readability
- Add peak connections tracking (persistent, resets on container restart)
- Add average connections display on dashboard
- Add 6h/12h/24h connection history snapshots
- Background tracker records connections 24/7 (not just when dashboard open)
- Implement temporal sampling: 15s capture, 15s sleep, 2x multiplier (~40-50% CPU reduction)
- Add info page [4] explaining Peak, Avg, and history stats
- Smart data reset: only resets when ALL containers restart
- Update README with v1.2.1 features (English & Farsi)
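The temporal sampling bullet can be sketched as follows, with sample_window standing in for the real tcpdump-based capture: capture for 15s, sleep 15s, and scale the observed count by 2 to estimate the full 30s window:

```shell
CAPTURE_SECS=15; SLEEP_SECS=15; MULTIPLIER=2

sample_window() {
  # real version: timeout "$CAPTURE_SECS" tcpdump ... | count connections
  echo 21
}

raw=$(sample_window)
estimate=$(( raw * MULTIPLIER ))   # extrapolate over the skipped half-window
echo "$estimate"
# sleep "$SLEEP_SECS"  # idle half of the cycle: this is where CPU is saved
```

The trade-off is some statistical noise in exchange for roughly halving capture time.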
- Simplify RAM recommendations to focus on CPU
- Fix tracker double-counting by capturing only on external interface
instead of all interfaces (-i any → -i $CAPTURE_IFACE)
- Increase docker logs tail from 30 to 200 for reliable stats capture
- Add stats caching to show last known values when logs are busy
- Reduce dashboard refresh interval from 4s to 10s for lower CPU load
- Add docker stats caching (refresh every 20s) to reduce docker overhead
- Simplify recommendation to 1 container per core, limited by RAM (1 per GB)
- Show [1-32] range so users know min/max limits
- Add rec_containers calculation in manage_containers() (was missing)
- Block adding containers when already at 32 maximum
- Use docker --filter for reliable health check instead of grep
- Allow unlimited container count with dynamic recommendations based on CPU/RAM
- Add hard cap of 32 containers to prevent excessive scaling
- Fix regex patterns for container/volume cleanup (conduit-2, conduit-data-2, etc.)
- Clean up stale per-container settings on scale-down
- Update README recommendations
Co-authored-by: Antoine <a.bayard@live.fr>
Adds a toggle in the Settings & Tools menu to enable/disable the traffic
tracker. When disabled, it saves ~15-25% CPU on busy servers. Features
that depend on the tracker (live peers, advanced stats, country breakdown)
show helpful messages directing users to re-enable it.
Compare md5 hash before/after regenerating tracker script to avoid
unnecessary restarts and traffic data loss on every menu open.
Also consolidate daemon-reload calls in show_menu auto-fix block.
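A sketch of the hash-compare guard, with generate_tracker standing in for the real generator and stand-in paths: only move the regenerated script into place (and restart the service) when its md5 differs from the installed copy:

```shell
TRACKER=$(mktemp)                          # stand-in for the installed tracker
generate_tracker() { echo '#!/bin/sh'; }   # stand-in for the real generator

generate_tracker > "$TRACKER"              # installed copy

new=$(mktemp)
generate_tracker > "$new"                  # freshly regenerated copy

if [ "$(md5sum < "$TRACKER")" = "$(md5sum < "$new")" ]; then
  rm -f "$new"
  action=unchanged     # skip restart; traffic data is preserved
else
  mv "$new" "$TRACKER"
  action=changed       # restart of the tracker service goes here
fi
echo "$action"
```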
Show service status based on actual container state instead of systemd
oneshot status. Remove systemctl calls from restart to prevent tracker
flip-flopping.
Replace Requires=docker.service with Wants=docker.service so services
can start when Docker is installed via snap instead of apt. Auto-patch
existing service files on menu load for users upgrading from older versions.
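The unit-file change amounts to one directive swap; a sketch of the relevant lines (unit name and description are illustrative):

```ini
[Unit]
Description=Conduit traffic tracker
# Wants= pulls Docker in when available but does not hard-fail the unit
# when the docker.service name is absent (snap-installed Docker names it
# differently); Requires= would refuse to start at all in that case.
Wants=docker.service
After=docker.service
```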
- Atomic settings.conf writes (write to tmp, then mv)
- Secure temp dirs with mktemp (5 locations)
- Add set -eo pipefail for pipe failure detection
- Add timeout 10 to all docker stats calls
- Update version from 1.2-Beta to 1.2
- Update URL from beta-releases to main
- README updated for stable release
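The atomic-write bullet follows the classic write-to-temp-then-rename pattern; a sketch with stand-in paths. Since mv within one filesystem is a rename, readers see either the old or the new settings.conf, never a torn write:

```shell
CONF_DIR=$(mktemp -d)            # stand-in for the real install dir
CONF="$CONF_DIR/settings.conf"

save_settings() {
  local tmp
  # Temp file in the SAME directory, so the mv stays on one filesystem
  tmp=$(mktemp "$CONF_DIR/settings.XXXXXX") || return 1
  printf 'CONTAINER_COUNT=%s\n' "$1" > "$tmp"
  mv "$tmp" "$CONF"              # atomic replace
}

save_settings 4
cat "$CONF"
```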
- Increase --tail from 50 to 400 in all Telegram docker logs calls
- Add connecting peer count to match TUI format
- Format: "Clients: X connected, Y connecting" consistently
Features:
- Per-container CPU and memory limits via Settings menu
- Resource limit prompts when adding containers in Container Management
- Smart defaults based on system specs (cores, RAM)
- Limits persist in settings.conf and apply on container create/recreate
- Settings table shows CPU/Memory columns alongside max-clients/bandwidth
- Resource limit changes detected on restart/start, triggering container recreation
Performance optimizations:
- Parallelize all docker logs calls across containers (background jobs)
- Run docker stats, system stats, and net speed concurrently
- Batch docker inspect calls instead of per-container
- Parallel container stop/remove with -t 3 timeout (was 10s default)
- All screens optimized: Status, Container Management, Advanced Stats, Live Peers
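The fan-out pattern behind the parallelization bullets can be sketched like this: run each per-container fetch as a background job writing to its own temp file, then wait once. fetch_logs stands in for the real timeout-wrapped docker logs call:

```shell
fetch_logs() { sleep 0.1; echo "[STATS] Connected: $1"; }  # stand-in

tmp=$(mktemp -d)
for i in 1 2 3; do
  ( fetch_logs "$i" > "$tmp/conduit-$i" ) &    # one job per container
done
wait   # total wall time ~ the slowest fetch, not the sum

lines=$(cat "$tmp"/conduit-* | wc -l)
echo "$lines"
```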
Bug fixes:
- Normalize grep pattern to [STATS] across all screens
- Clean temp dirs before reuse to prevent stale data reads
- Check exit status on container remove operations
Co-Authored-By: Claude <noreply@anthropic.com>
- Add /containers command: per-container status with peers, data up/down
- Add /restart_N, /stop_N, /start_N commands for remote container control
- Add /uptime command: per-container uptime + 24h availability
- Self-updating script: menu option downloads latest conduit.sh from GitHub
- Add --update-components flag for script/tracker/telegram regeneration
- Extract recreate_containers() for reusable container recreation
- Interactive Docker image update with user confirmation
- 24h availability window instead of all-time percentage
- Uptime streak tracking in reports
- Container restart counts in reports
- Top countries by connected peers section
- Separate "Top by upload" vs "Top by peers" in reports
- Defensive variable defaults (CONTAINER_COUNT, DATA_CAP_GB)
- Fix get_container_name in Telegram script (conduit-N not conduitN)
- Rename Telegram "Active clients" to "Unique IPs served"
- Remove cumulative IP count from peers view columns
- Show only estimated current clients per country
- Traffic header now says "current session" to indicate it resets on restart
- TOP 5 UPLOAD header now says "cumulative" to show it persists across restarts
- Prepend server label + public IP to all Telegram messages (multi-server support)
- Configurable server label via menu option 8 (defaults to hostname)
- Wall-clock aligned report scheduling with configurable start hour
- Start hour selection in both interval menu and setup wizard
- Markdown-escape server label in both main and standalone scripts
- Preserve start hour and server label on wizard cancel/failure paths
New features:
- Uptime tracking with availability % in reports
- Alert system (CPU/RAM >90%, all containers down, zero peers 2h)
- Daily and weekly summary reports with bandwidth/peers/uptime stats
- Telegram bot commands (/status, /peers, /help)
- Toggle menu for alerts, daily/weekly summaries (options 5-7)
- Health check: tracker service, tcpdump, GeoIP, data validation
- Cumulative data log rotation with monthly archives (3-month retention)
Improvements:
- Smart restart: only recreate containers when settings change
- Stopped containers resumed with docker start instead of recreate
- Upgrade path regenerates Telegram script automatically
- Update conduit backs up tracker data and refreshes Telegram service
- Daily/weekly summary timestamps persist across service restarts
Bug fixes:
- Empty container list no longer triggers docker stats on all host containers
- process_commands recovers from malformed Telegram API responses
- Tracker service stopped before data backup to prevent write races
- Docker logs calls wrapped with timeout to prevent hangs
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Fix [s] Start in container menu to resume stopped containers with
docker start instead of docker run (name already in use error)
- Add safety check in run_conduit_container() to remove existing
containers before docker run to prevent conflicts
- Fix [t] Stop and [x] Restart to check exit codes and show accurate status
- Add warning in main menu Start (option 5) when recreating stopped containers