- Inline telegram_disable_service in uninstall() since the function
is defined inside the MANAGEMENT heredoc and not available in the
outer script scope
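A minimal sketch of the scoping problem (only telegram_disable_service and the MANAGEMENT heredoc name come from the script; the path, service name, and function body are illustrative):

    # The installer emits the management script through a quoted heredoc,
    # so functions defined inside it exist only in the generated file:
    cat > /usr/local/bin/conduit <<'MANAGEMENT'
    telegram_disable_service() {
        systemctl disable --now conduit-telegram 2>/dev/null
    }
    MANAGEMENT
    # Back in the outer script, calling telegram_disable_service fails
    # with "command not found"; hence uninstall() inlines the commands.
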
- Add port forwarding tip on MTProto proxy settings page
- Iran Connectivity Status: real-time monitoring using IODA, OONI,
irinter.net, and Cloudflare Radar data, with 7-day charts
- Iran Connectivity Test: diagnostics across 88 servers in 24 cities
with quick test, full report, stability test, and MTU discovery
- Psiphon Network Stats: global analytics with daily users, bytes
transferred, running proxies, and country distribution charts
- CLI Commands Reference page and reorganized Info & Help menu
- Updated license and screenshots
Replace the 70-line two-method checker (API + 530KB download fallback)
with a 20-line SHA-only approach. Self-heals a missing baseline on first
menu open. Add hex validation and --connect-timeout to all SHA saves.
When multiple servers share the same bot token, /status and /peers now
show a per-server inline button instead of responding immediately.
Only the server whose label matches the tapped callback responds; the
others silently ignore it. Fixes duplicate/endless responses.
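A hedged sketch of the flow (sendMessage and reply_markup are standard Bot API; the callback_data format and variable names are illustrative):

    # Each server answers /status with a button carrying its own label...
    curl -s "https://api.telegram.org/bot${BOT_TOKEN}/sendMessage" \
        -d chat_id="${CHAT_ID}" \
        --data-urlencode text="Run /status on which server?" \
        --data-urlencode reply_markup='{"inline_keyboard":[[{"text":"'"$SERVER_LABEL"'","callback_data":"status:'"$SERVER_LABEL"'"}]]}'
    # ...and in the poll loop, only the matching server acts on the tap:
    [[ "$callback_data" == "status:${SERVER_LABEL}" ]] || continue
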
Root cause: after an update, .update_sha was deleted. The first menu open
then stored whatever the remote SHA was as the baseline, silently
swallowing any commits pushed between the update and that first open.
Fix: save commit SHA via GitHub API during install/update (baseline = what
was installed). Menu checker only compares, never establishes baseline.
Falls through to Method 2 (md5) if .update_sha is missing.
- cumulative_data: single awk pass for per-country bytes + grand totals
- tracker_snapshot: single awk pass for bytes, unique IP counts, and totals
- Remove unused cumulative_ips parsing (dead code — populated but never displayed)
- Fix awk dedup bug: ft++/tt++ were outside if-block, counting total lines instead of unique IPs
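The shape of that fix, sketched (the IP field position is an assumption):

    # Before: the counter ran for every line, so it counted lines
    #   { if (!seen[$2]) seen[$2] = 1 }  { ft++ }
    # After: increment only on first sight of each IP
    awk '{ if (!seen[$2]++) ft++ } END { print ft + 0 }' "$snapshot"
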
Primary check uses GitHub API (Accept: application/vnd.github.sha)
for a ~40-byte response instead of downloading the full ~530KB file.
Falls back to full-file hash comparison if API is unreachable.
- Hex validation prevents false positives from proxy/CDN HTML responses
- All badge writes guarded with 2>/dev/null (no terminal noise)
- Separate connect-timeout (5s) from max-time for fast failure detection
- SHA baseline reset on install/update for clean re-establishment
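A sketch of the whole check under the constraints above (OWNER/REPO, the branch, and file locations are placeholders):

    remote=$(curl -fsS --connect-timeout 5 --max-time 10 \
        -H "Accept: application/vnd.github.sha" \
        "https://api.github.com/repos/OWNER/REPO/commits/main")
    # Hex validation: proxy/CDN error pages are HTML, never a 40-char SHA
    [[ "$remote" =~ ^[0-9a-f]{40}$ ]] || exit 0
    baseline=$(cat "$CONF_DIR/.update_sha" 2>/dev/null)
    [[ -n "$baseline" && "$remote" != "$baseline" ]] &&
        touch "$CONF_DIR/.update_badge" 2>/dev/null
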
Match the dashboard's estimation logic: use snapshot country distribution
proportionally scaled to the actual connected client count, instead of
showing raw snapshot counts that don't align with the connected total.
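A minimal sketch of that scaling, assuming the snapshot holds country/count pairs:

    # Each country's share of the snapshot, rescaled to the live total
    awk -v total="$connected_now" '
        { count[$1] = $2; sum += $2 }
        END { for (c in count) if (sum > 0)
                  printf "%s %d\n", c, count[c] / sum * total + 0.5 }
    ' "$snapshot_file"
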
When no .update_hash exists, compare VERSION strings to distinguish
fresh installs (same version = save baseline, no badge) from old
installs pre-hash-support (different version = show badge).
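Sketched, with hypothetical helper names:

    if [[ ! -f "$CONF_DIR/.update_hash" ]]; then
        if [[ "$remote_version" == "$VERSION" ]]; then
            save_update_baseline    # fresh install: establish quietly, no badge
        else
            show_update_badge       # pre-hash install: a newer version exists
        fi
    fi
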
- Top by peers: count unique IPs instead of lines (was 2x inflated from FROM+TO entries)
- Total lifetime IPs: use cumulative_ips file instead of tracker_snapshot (was showing last 15s cycle count)
- Add MTProxy status + traffic to both Telegram report functions when enabled and running
- Add Telegram MTProto Proxy (mtg v2) with TOML config, fake-TLS,
share link & QR code, send to Telegram bot, resource management
- Add background auto-update checker with menu badge and optional
48h cron-based automatic updates (--auto flag)
- Add MTProxy status line in main live dashboard when enabled
- Pin mtg image to nineseconds/mtg:2.1.7 for stability
- Add flock concurrency guard to prevent simultaneous updates
- Fix MTProto container startup (switch from CLI flags to TOML config)
- Clean up orphaned config on MTProto remove and full uninstall
- Gate initial setup QR/link display on successful container start
- Remove duplicate session traffic line from MTProto submenu
- Remove extra blank line in status display for better screen fit
- Move screenshots to screenshots/ folder
- Update README with v1.3.1 features (English + Farsi)
- Bump version badge to 1.3.1
If `conduit` is launched without arguments in a non-interactive context
(no terminal), show help and exit instead of entering the TUI menu,
which could otherwise spin as a stray process.
If a bot token previously had a webhook configured (e.g. from a prior
install), getUpdates returns nothing and chat ID detection fails.
Adding a deleteWebhook call before polling fixes fresh installs that
reuse an existing bot token.
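The fix is one extra call before the first poll; both endpoints are standard Bot API methods:

    # A leftover webhook makes getUpdates return nothing; clear it first
    curl -s "https://api.telegram.org/bot${BOT_TOKEN}/deleteWebhook" >/dev/null
    curl -s "https://api.telegram.org/bot${BOT_TOKEN}/getUpdates?timeout=30"
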
The Telegram bot's CPU alert was reading from docker stats (per-container
CPU), producing nonsensical values like 183%, 228%, 297% on multi-core
systems. Replaced with /proc/stat delta-based system-wide CPU — the same
method the TUI dashboard uses.
Changes:
- CPU alert now reads system-wide CPU from /proc/stat (0-100% range; see the sketch below)
- Threshold raised from 90% to 96%
- Added TELEGRAM_CPU_ALERT toggle (option 6 in Telegram settings menu)
- RAM alert kept separate, still uses docker stats (per-container memory)
- /settings Telegram command now shows CPU alert status
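A sketch of the /proc/stat delta (first-line fields: cpu user nice system idle iowait irq softirq ...; irq/softirq/steal are folded into the ignored tail here for brevity):

    read -r _ u1 n1 s1 i1 w1 _ < /proc/stat
    sleep 1
    read -r _ u2 n2 s2 i2 w2 _ < /proc/stat
    busy=$(( (u2 + n2 + s2) - (u1 + n1 + s1) ))
    idle=$(( (i2 + w2) - (i1 + w1) ))
    (( busy + idle > 0 )) && cpu_pct=$(( 100 * busy / (busy + idle) ))  # 0-100
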
load_servers() used `label`, `conn`, `auth_type` as while-read loop
variables, which clobbered identically-named locals in
add_server_interactive() due to bash dynamic scoping. This caused the
server label to be empty when adding a 2nd+ server.
Rename loop variables to `_l`, `_c`, `_a` to avoid the collision.
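A minimal reproduction, simplified from the real functions (the data file is illustrative):

    load_servers() {
        # 'label' is not declared local, so read assigns straight into the
        # caller's local of the same name (bash is dynamically scoped)
        while IFS='|' read -r label conn auth_type; do :; done < servers.conf
    }
    add_server_interactive() {
        local label="new-server"
        load_servers
        echo "$label"    # empty (or the file's last label), not "new-server"
    }
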
Closes #39
awk printf "%.2f" outputs "1,00" instead of "1.00" on systems with
comma-decimal locales (Turkish, German, etc.), causing docker to
reject the --cpus value. Force LC_ALL=C for the awk call.
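Sketched (variable names illustrative):

    # In a comma-decimal locale awk prints "1,50" and docker rejects --cpus;
    # the C locale guarantees a dot
    cpu_limit=$(LC_ALL=C awk -v n="$cores" -v f="$frac" 'BEGIN { printf "%.2f", n * f }')
    # later: docker run --cpus "$cpu_limit" ...
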
- Capture and display docker run stderr instead of silently discarding it (see the sketch below)
- Suppress docker volume create / docker rm stdout leaks (snowflake-data)
- Auto-pull snowflake image if not cached locally before docker run
- Add Phase 4 to update flow: pull snowflake image alongside conduit image
- Fix same stdout leaks in start_conduit/restart_conduit/recreate_containers
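A sketch of the stderr/stdout handling (mount path and error handling are illustrative; the 2>&1 >/dev/null order captures stderr while discarding stdout):

    docker image inspect "$SNOWFLAKE_IMAGE" >/dev/null 2>&1 || docker pull "$SNOWFLAKE_IMAGE"
    docker volume create snowflake-data >/dev/null 2>&1     # silence stdout leak
    if ! err=$(docker run -d --name snowflake -v snowflake-data:/data \
            "$SNOWFLAKE_IMAGE" 2>&1 >/dev/null); then
        echo "docker run failed: $err"                      # surface, don't swallow
    fi
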
- Cache get_net_speed() every 2 cycles (saves 500ms on alternate refreshes)
- Single-pass awk for 6h/12h/24h connection snapshots (1 read instead of 3)
- Parallelize Snowflake Prometheus metric fetches across instances (sketch below)
- Cache systemctl init system detection per session
- Reuse running_count for service status instead of extra docker ps call
- Remove dead get_connection_snapshot() function
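The parallel fetch, sketched (ports and paths assumed):

    tmp=$(mktemp -d)
    for port in "${metric_ports[@]}"; do
        curl -fsS --max-time 3 "http://127.0.0.1:${port}/metrics" \
            > "$tmp/$port" 2>/dev/null &       # one background job per instance
    done
    wait                                       # wall time ~= slowest single fetch
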
Major features:
- Snowflake proxy management with per-country stats and Prometheus metrics
- Multi-server dashboard TUI with live refresh, bulk actions, server management
- Non-root SSH support with automatic sudo prefix and passwordless sudo detection
- Data cap monitoring with per-direction (upload/download/total) enforcement
- Remote server table with CPU(temp), upload, download columns
Improvements:
- TB support in all byte formatters (format_bytes, _fmt_bytes, format_gb)
- Snowflake timeout display in detailed views, "connections served" labeling
- Stricter Docker container name matching in service status check
- check_alerts() now aggregates CPU/RAM across all containers
- Network interface detection uses keyword matching instead of fragile position
- Tracker stuck-container Telegram notification uses direct curl (standalone fix)
- Timeout exit code check corrected (SIGTERM=143, not SIGKILL=137)
- Hardened backup filename quoting in docker sh -c
- README updated for v1.3 with full English and Farsi changelogs
- Split CPU display into App CPU (containers) and System CPU (host)
- Added CPU temperature from hwmon (cross-platform)
- Added system RAM usage alongside app RAM
- Added get_container_stats() to telegram script for proper aggregation
- Fixed build_report() to aggregate all containers instead of head -1
Use an in-memory offset as the primary tracker instead of relying solely
on the last_update_id file. The file is only read on the first call, for
persistence across restarts. Commands process correctly even if the file
is not writable.
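Sketched (the endpoint is standard Bot API; file path and names are illustrative):

    # Warm-start from disk once; afterwards memory is authoritative
    [[ -z "$offset" && -f "$OFFSET_FILE" ]] && offset=$(<"$OFFSET_FILE")
    resp=$(curl -s "https://api.telegram.org/bot${BOT_TOKEN}/getUpdates?offset=$((offset + 1))&timeout=30")
    # ...handle updates, tracking the highest update_id seen...
    offset=$max_update_id
    printf '%s' "$offset" > "$OFFSET_FILE" 2>/dev/null || true  # best-effort persist
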
Fixes #34
- Tracker used wrong container names (intgpsiphonclient instead of conduit)
- Tracker parsed numClients instead of [STATS] Connected: format
- Tracker script was never regenerated on update (function was inside heredoc)
- Added regen-tracker command and --update-components now calls it
- Dashboard skips recording when tracker is active (no double entries)
Tracker was not loading settings.conf, so CONTAINER_COUNT defaulted to 1.
This caused connection history to record 0 for all entries, resulting in
wrong average and 6h/12h/24h snapshot values.
- Read CPU temp from hwmon (coretemp/k10temp) instead of thermal_zone0
- Average all CPU core temperatures for accurate reading
- Support Intel, AMD, and ARM thermal drivers
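A sketch of the hwmon read (standard sysfs layout; values are millidegrees; the real matching also covers additional ARM driver names):

    for hw in /sys/class/hwmon/hwmon*; do
        case $(cat "$hw/name" 2>/dev/null) in
            coretemp|k10temp|cpu_thermal) cat "$hw"/temp*_input 2>/dev/null ;;
        esac
    done | awk '{ s += $1; n++ } END { if (n) printf "%.0f\n", s / n / 1000 }'
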
- Cache average connections for 5 minutes to reduce file I/O
- Split dashboard footer into two lines for better readability
- Add peak connections tracking (persistent, resets on container restart)
- Add average connections display on dashboard
- Add 6h/12h/24h connection history snapshots
- Background tracker records connections 24/7 (not just when dashboard open)
- Implement temporal sampling: 15s capture, 15s sleep, 2x multiplier (~40-50% CPU reduction; sketch below)
- Add info page [4] explaining Peak, Avg, and history stats
- Smart data reset: only resets when ALL containers restart
- Update README with v1.2.1 features (English & Farsi)
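The sampling loop from the temporal-sampling item, sketched with hypothetical helpers:

    while :; do
        count=$(capture_connections 15)    # hypothetical: 15s capture window
        record_history "$(( count * 2 ))"  # 2x compensates the 50% duty cycle
        sleep 15                           # idle half: the ~40-50% CPU saving
    done
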
- Simplify RAM recommendations to focus on CPU
- Fix tracker double-counting by capturing only on external interface
instead of all interfaces (-i any → -i $CAPTURE_IFACE)
- Increase docker logs tail from 30 to 200 for reliable stats capture
- Add stats caching to show last known values when logs are busy
- Reduce dashboard refresh interval from 4s to 10s for lower CPU load
- Add docker stats caching (refresh every 20s) to reduce docker overhead
- Simplify recommendation to 1 container per core, limited by RAM (1 per GB)
- Show [1-32] range so users know min/max limits
- Add rec_containers calculation in manage_containers() (was missing)
- Block adding containers when already at 32 maximum
- Use docker --filter for reliable health check instead of grep
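The filter-based check, sketched against the repo's conduit-N naming:

    if [[ -n "$(docker ps -q --filter "name=^conduit-${i}$" --filter "status=running")" ]]; then
        status="running"    # exact-name, running-only match; no grep needed
    fi
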
- Allow unlimited container count with dynamic recommendations based on CPU/RAM
- Add hard cap of 32 containers to prevent excessive scaling
- Fix regex patterns for container/volume cleanup (conduit-2, conduit-data-2, etc.)
- Clean up stale per-container settings on scale-down
- Update README recommendations
Co-authored-by: Antoine <a.bayard@live.fr>
Adds a toggle in the Settings & Tools menu to enable/disable the traffic
tracker. When disabled, it saves ~15-25% CPU on busy servers. Features
that depend on the tracker (live peers, advanced stats, country breakdown)
show helpful messages directing users to re-enable it.
Compare the tracker script's md5 hash before/after regeneration to avoid
unnecessary restarts and traffic-data loss on every menu open.
Also consolidate daemon-reload calls in the show_menu auto-fix block.
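Sketched (generator and service names assumed):

    write_tracker_script "$tmp_script"          # hypothetical regenerator
    old=$(md5sum "$TRACKER_SCRIPT" 2>/dev/null | awk '{ print $1 }')
    new=$(md5sum "$tmp_script" | awk '{ print $1 }')
    if [[ "$old" != "$new" ]]; then
        mv -f "$tmp_script" "$TRACKER_SCRIPT"
        systemctl restart conduit-tracker       # restart only on real change
    else
        rm -f "$tmp_script"                     # unchanged: keep traffic data
    fi
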
Show service status based on actual container state instead of systemd
oneshot status. Remove systemctl calls from restart to prevent tracker
flip-flopping.
Replace Requires=docker.service with Wants=docker.service so services
can start when Docker is installed via snap instead of apt. Auto-patch
existing service files on menu load for users upgrading from older versions.
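The auto-patch, sketched (unit file paths assumed):

    # Wants= lets units start when docker.service doesn't exist (snap installs)
    if grep -q '^Requires=docker.service' /etc/systemd/system/conduit*.service 2>/dev/null; then
        sed -i 's/^Requires=docker.service/Wants=docker.service/' \
            /etc/systemd/system/conduit*.service
        systemctl daemon-reload
    fi
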
- Atomic settings.conf writes (write to tmp, then mv; sketch below)
- Secure temp dirs with mktemp (5 locations)
- Add set -eo pipefail for pipe failure detection
- Add timeout 10 to all docker stats calls
- Update version from 1.2-Beta to 1.2
- Update URL from beta-releases to main
- README updated for stable release
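The atomic-write pattern from the first item, sketched:

    # mv within one filesystem is atomic: readers never see a partial file
    tmp=$(mktemp "${SETTINGS_FILE}.XXXXXX")
    printf '%s\n' "$new_contents" > "$tmp" && mv -f "$tmp" "$SETTINGS_FILE"
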
- Increase --tail from 50 to 400 in all Telegram docker logs calls
- Add connecting peer count to match TUI format
- Format: "Clients: X connected, Y connecting" consistently
Features:
- Per-container CPU and memory limits via Settings menu
- Resource limit prompts when adding containers in Container Management
- Smart defaults based on system specs (cores, RAM)
- Limits persist in settings.conf and apply on container create/recreate
- Settings table shows CPU/Memory columns alongside max-clients/bandwidth
- Resource limit changes detected on restart/start, triggering container recreation
Performance optimizations:
- Parallelize all docker logs calls across containers (background jobs; see the sketch below)
- Run docker stats, system stats, and net speed concurrently
- Batch docker inspect calls instead of per-container
- Parallel container stop/remove with -t 3 timeout (was 10s default)
- All screens optimized: Status, Container Management, Advanced Stats, Live Peers
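The parallel docker logs pattern, sketched (tail depth illustrative):

    tmp=$(mktemp -d)
    for c in "${containers[@]}"; do
        docker logs --tail 400 "$c" > "$tmp/$c.log" 2>&1 &  # job per container
    done
    wait    # total wait ~= the slowest container, not the sum
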
Bug fixes:
- Normalize grep pattern to [STATS] across all screens
- Clean temp dirs before reuse to prevent stale data reads
- Check exit status on container remove operations
Co-Authored-By: Claude <noreply@anthropic.com>