feat: Add peak, average, and connection history tracking (v1.2.1)

- Add peak connections tracking (persistent, resets on container restart)
- Add average connections display on dashboard
- Add 6h/12h/24h connection history snapshots
- Background tracker records connections 24/7 (not just when dashboard open)
- Implement temporal sampling: 15s capture, 15s sleep, 2x multiplier (~40-50% CPU reduction)
- Add info page [4] explaining Peak, Avg, and history stats
- Smart data reset: only resets when ALL containers restart
- Update README with v1.2.1 features (English & Farsi)
- Simplify RAM recommendations to focus on CPU
Author: SamNet-dev
Date:   2026-02-05 15:39:12 -06:00
Parent: a2baa16a9b
Commit: 03f69f4b04
2 changed files with 676 additions and 110 deletions


@@ -10,7 +10,7 @@
M A N A G E R
```
![Version](https://img.shields.io/badge/version-1.2-blue)
![Version](https://img.shields.io/badge/version-1.2.1-blue)
![License](https://img.shields.io/badge/license-MIT-green)
![Platform](https://img.shields.io/badge/platform-Linux-orange)
![Docker](https://img.shields.io/badge/Docker-Required-2496ED?logo=docker&logoColor=white)
@@ -41,6 +41,18 @@ wget https://raw.githubusercontent.com/SamNet-dev/conduit-manager/main/conduit.s
sudo bash conduit.sh
```
## What's New in v1.2.1
- **Peak & Average Tracking** — Dashboard shows peak and average connected clients since container start
- **Connection History** — See client counts from 6h, 12h, and 24h ago on the dashboard
- **24/7 Background Recording** — Tracker records connection stats even when dashboard is closed
- **Temporal Sampling** — Tracker captures for 15s, sleeps 15s, and multiplies byte counts by 2x for a ~40-50% CPU reduction (see the sketch after this list)
- **CPU Temperature Display** — System CPU load now shows temperature when available
- **Unlimited Containers** — Removed 5-container limit, scale based on your hardware
- **Tracker Toggle** — Option to disable tracker for additional CPU savings
- **Smart Data Reset** — Peak, average, and history reset only when ALL containers restart
- **New Info Page** — Added "Peak, Average & Client History" guide explaining all stats
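
For those curious how the temporal sampling works, here is a minimal sketch distilled from the tracker code in `conduit.sh` (variable names match the script; the per-IP batching, GeoIP merge, and stuck-container checks are omitted, and the interface is hard-coded here only for illustration):

```bash
# Simplified temporal-sampling cycle: capture for 15s, process, sleep 15s.
# The sleep phase is where the CPU savings come from; byte counts are then
# scaled by TRAFFIC_MULTIPLIER to approximate continuous capture.
SAMPLE_CAPTURE_TIME=15   # seconds tcpdump runs per cycle
SAMPLE_SLEEP_TIME=15     # seconds with no capture at all
TRAFFIC_MULTIPLIER=2     # compensates for the 50% duty cycle
CAPTURE_IFACE="eth0"     # assumption: conduit.sh auto-detects the real interface

while true; do
    # Capture phase: timeout ends tcpdump after SAMPLE_CAPTURE_TIME seconds.
    timeout "$SAMPLE_CAPTURE_TIME" tcpdump -tt -l -ni "$CAPTURE_IFACE" -n -q -s 64 \
        "(tcp or udp) and not port 22" 2>/dev/null |
      awk -v mult="$TRAFFIC_MULTIPLIER" '
        $NF ~ /^[0-9]+$/ { bytes += $NF }
        END { printf "sampled bytes (scaled): %.0f\n", bytes * mult }'
    # Sleep phase: tcpdump is not running, so capture costs no CPU here.
    sleep "$SAMPLE_SLEEP_TIME"
done
```

In the real tracker the multiplier is applied to the per-IP byte counts during the awk merge pass (the `MULT` variable), so per-country totals and rates remain comparable to continuous capture.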
## What's New in v1.2
- **Per-Container Resource Limits** — Set CPU and memory limits per container via Settings menu with smart defaults
@@ -56,20 +68,21 @@ sudo bash conduit.sh
## Features
- **One-Click Deployment** — Automatically installs Docker and configures everything
- **Multi-Container Scaling** — Run 1–5 containers to maximize your server's capacity
- **Scalable Containers** — Run unlimited containers based on your server's capacity
- **Multi-Distro Support** — Works on Ubuntu, Debian, CentOS, Fedora, Arch, Alpine, openSUSE
- **Auto-Start on Boot** — Supports systemd, OpenRC, and SysVinit
- **Live Dashboard** — Real-time connection stats with CPU/RAM monitoring and per-country client breakdown
- **Live Dashboard** — Real-time stats with peak, average, CPU/RAM, temperature, and per-country breakdown
- **Connection History** — Track client counts over time with 6h, 12h, 24h snapshots
- **Advanced Stats** — Top countries by connected peers, download, upload, and unique IPs with bar charts
- **Live Peer Traffic** — Real-time traffic table by country with speed, total bytes, and IP/client counts
- **Background Tracker** — Continuous traffic monitoring via systemd service with GeoIP resolution
- **Background Tracker** — 24/7 traffic and connection monitoring via systemd service with GeoIP resolution
- **Telegram Bot** — On-demand `/status`, `/peers`, `/uptime`, `/containers` and remote container management via Telegram
- **Per-Container Settings** — Configure max-clients, bandwidth, CPU, and memory per container
- **Resource Limits** — Set CPU and memory limits with smart defaults based on system specs
- **Easy Management** — Powerful CLI commands or interactive menu
- **Backup & Restore** — Backup and restore your node identity keys
- **Health Checks** — Comprehensive diagnostics for troubleshooting
- **Info & Help** — Built-in multi-page guide explaining how everything works
- **Info & Help** — Built-in multi-page guide explaining traffic, stats, and how everything works
- **Complete Uninstall** — Clean removal of all components including Telegram service
## Supported Distributions
@@ -141,7 +154,7 @@ The interactive menu (`conduit menu`) provides access to all features:
| Option | Description |
|--------|-------------|
| **1** | View status dashboard — real-time stats with active clients and top upload by country |
| **1** | View status dashboard — real-time stats with peak, average, 6h/12h/24h history, active clients |
| **2** | Live connection stats — streaming stats from Docker logs |
| **3** | View logs — raw Docker log output |
| **4** | Live peers by country — per-country traffic table with speed and client counts |
@@ -152,7 +165,7 @@ The interactive menu (`conduit menu`) provides access to all features:
| **9** | Settings & Tools — resource limits, QR code, backup, restore, health check, Telegram, uninstall |
| **c** | Manage containers — add or remove containers (up to 5) |
| **a** | Advanced stats — top 5 charts for peers, download, upload, unique IPs |
| **i** | Info & Help — multi-page guide with tracker, stats, containers, privacy, about |
| **i** | Info & Help — multi-page guide explaining traffic, network, stats, peak/avg/history |
| **0** | Exit |
## Configuration Options
@@ -164,14 +177,16 @@ The interactive menu (`conduit menu`) provides access to all features:
| `cpu` | Unlimited | 0.1–N cores | CPU limit per container (e.g. 1.0 = one core) |
| `memory` | Unlimited | 64m–system RAM | Memory limit per container (e.g. 256m, 1g) |
**Recommended values based on server hardware:**
**Recommended values based on CPU:**
| CPU Cores | RAM | Recommended Containers | Max Clients (per container) |
|-----------|-----|------------------------|-----------------------------|
| 1 Core | < 1 GB | 1 | 100 |
| 2 Cores | 2 GB | 1–2 | 200 |
| 4 Cores | 4 GB+ | 2–3 | 400 |
| 8+ Cores | 8 GB+ | 3+ | 800 |
| CPU Cores | Recommended Containers | Max Clients (per container) |
|-----------|------------------------|-----------------------------|
| 1 Core | 1 | 100 |
| 2 Cores | 1–2 | 200 |
| 4 Cores | 2–4 | 400 |
| 8+ Cores | 4+ | 800 |
> **RAM:** Minimum 512MB. For 3+ containers, 4GB+ recommended.
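
The per-container `cpu` and `memory` settings above presumably translate to Docker's standard resource flags; the exact `docker run` invocation isn't shown in this diff, so the following is an illustrative sketch only (the container name and limit values are examples):

```bash
# Illustrative only: how the per-container cpu/memory settings could map to
# Docker's standard resource flags. The actual arguments built by conduit.sh
# are not shown in this diff; the name and values below are examples.
docker run -d --name intgpsiphonclient1 --network=host \
  --cpus="1.0" \
  --memory="256m" \
  ghcr.io/ssmirr/conduit/conduit:latest
```

`--cpus 1.0` caps the container at roughly one core's worth of CPU time, and `--memory 256m` is a hard cgroup limit; both apply per container, so total usage scales with how many containers you run.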
## Installation Options
@@ -265,6 +280,18 @@ wget https://raw.githubusercontent.com/SamNet-dev/conduit-manager/main/conduit.s
sudo bash conduit.sh
```
## تازه‌های نسخه 1.2.1
- **ردیابی پیک و میانگین** — نمایش بیشترین و میانگین کلاینت‌های متصل از زمان شروع کانتینر
- **تاریخچه اتصال** — مشاهده تعداد کلاینت‌ها در ۶، ۱۲ و ۲۴ ساعت گذشته
- **ضبط ۲۴/۷ پس‌زمینه** — ردیاب آمار اتصال را حتی وقتی داشبورد بسته است ثبت می‌کند
- **نمونه‌برداری زمانی** — ردیاب ۱۵ ثانیه ضبط، ۱۵ ثانیه استراحت با ضریب ۲ برای ~۴۰-۵۰٪ کاهش CPU
- **نمایش دمای CPU** — نمایش دما در کنار مصرف CPU سیستم
- **کانتینر نامحدود** — حذف محدودیت ۵ کانتینر، مقیاس‌بندی بر اساس سخت‌افزار
- **غیرفعال‌سازی ردیاب** — گزینه خاموش کردن ردیاب برای صرفه‌جویی بیشتر CPU
- **ریست هوشمند داده** — پیک، میانگین و تاریخچه فقط وقتی همه کانتینرها ریستارت شوند ریست می‌شوند
- **صفحه راهنمای جدید** — توضیح پیک، میانگین و تاریخچه کلاینت‌ها
## تازه‌های نسخه 1.2
- **محدودیت منابع هر کانتینر** — تنظیم محدودیت CPU و حافظه برای هر کانتینر با پیش‌فرض‌های هوشمند
@@ -279,20 +306,21 @@ sudo bash conduit.sh
## ویژگی‌ها
- **نصب با یک کلیک** — داکر و تمام موارد مورد نیاز به صورت خودکار نصب می‌شود
- **مقیاس‌پذیری چند کانتینره** — اجرای ۱ تا ۵ کانتینر برای حداکثر استفاده از سرور
- **مقیاس‌پذیری نامحدود** — اجرای کانتینرهای نامحدود بر اساس ظرفیت سرور
- **پشتیبانی از توزیع‌های مختلف** — اوبونتو، دبیان، سنت‌اواس، فدورا، آرچ، آلپاین، اوپن‌سوزه
- **راه‌اندازی خودکار** — پس از ریستارت سرور، سرویس به صورت خودکار اجرا می‌شود
- **داشبورد زنده** — نمایش لحظه‌ای وضعیت، تعداد کاربران، مصرف CPU و RAM
- **داشبورد زنده** — نمایش لحظه‌ای پیک، میانگین، CPU، RAM، دما و تفکیک کشوری
- **تاریخچه اتصال** — ردیابی تعداد کلاینت‌ها با اسنپ‌شات ۶، ۱۲ و ۲۴ ساعته
- **آمار پیشرفته** — نمودار میله‌ای برترین کشورها بر اساس اتصال، دانلود، آپلود و IP
- **مانیتورینگ ترافیک** — جدول لحظه‌ای ترافیک بر اساس کشور با سرعت و تعداد کلاینت
- **ردیاب پس‌زمینه** — سرویس ردیابی مداوم ترافیک با تشخیص جغرافیایی
- **ردیاب پس‌زمینه** — سرویس ردیابی ۲۴/۷ ترافیک و اتصالات با تشخیص جغرافیایی
- **ربات تلگرام** — دستورات `/status`، `/peers`، `/uptime`، `/containers` و مدیریت کانتینر از راه دور (اختیاری)
- **تنظیمات هر کانتینر** — پیکربندی حداکثر کاربران، پهنای باند، CPU و حافظه برای هر کانتینر
- **محدودیت منابع** — تنظیم محدودیت CPU و حافظه با پیش‌فرض‌های هوشمند
- **مدیریت آسان** — دستورات قدرتمند CLI یا منوی تعاملی
- **پشتیبان‌گیری و بازیابی** — پشتیبان‌گیری و بازیابی کلیدهای هویت نود
- **بررسی سلامت** — تشخیص جامع برای عیب‌یابی
- **راهنما و اطلاعات** — راهنمای چندصفحه‌ای داخلی
- **راهنما و اطلاعات** — راهنمای چندصفحه‌ای توضیح ترافیک، آمار و نحوه کارکرد
- **حذف کامل** — پاکسازی تمام فایل‌ها و تنظیمات شامل سرویس تلگرام
## پشتیبانی از macOS
@@ -350,7 +378,7 @@ conduit help # راهنما
| گزینه | توضیحات |
|-------|---------|
| **1** | داشبورد وضعیت — آمار لحظه‌ای با کلاینت‌های فعال و آپلود برتر |
| **1** | داشبورد وضعیت — آمار لحظه‌ای با پیک، میانگین، تاریخچه ۶/۱۲/۲۴ ساعته |
| **2** | آمار زنده اتصال — استریم آمار از لاگ داکر |
| **3** | مشاهده لاگ — خروجی لاگ داکر |
| **4** | ترافیک زنده به تفکیک کشور — جدول ترافیک با سرعت و تعداد کلاینت |
@@ -361,7 +389,7 @@ conduit help # راهنما
| **9** | تنظیمات و ابزارها — محدودیت منابع، QR کد، پشتیبان‌گیری، بازیابی، تلگرام، حذف نصب |
| **c** | مدیریت کانتینرها — اضافه یا حذف (تا ۵) |
| **a** | آمار پیشرفته — نمودار برترین کشورها |
| **i** | راهنما — توضیحات ردیاب، آمار، کانتینرها، حریم خصوصی |
| **i** | راهنما — توضیحات ترافیک، شبکه، آمار، پیک/میانگین/تاریخچه |
| **0** | خروج |
## تنظیمات
@@ -373,14 +401,16 @@ conduit help # راهنما
| `cpu` | نامحدود | 0.1–N هسته | محدودیت CPU هر کانتینر (مثلاً 1.0 = یک هسته) |
| `memory` | نامحدود | 64m–حافظه سیستم | محدودیت حافظه هر کانتینر (مثلاً 256m، 1g) |
**مقادیر پیشنهادی بر اساس سخت‌افزار سرور:**
**مقادیر پیشنهادی بر اساس CPU:**
| پردازنده | رم | کانتینر پیشنهادی | حداکثر کاربران (هر کانتینر) |
|----------|-----|-------------------|----------------------------|
| ۱ هسته | کمتر از ۱ گیگ | ۱ | ۱۰۰ |
| ۲ هسته | ۲ گیگ | ۱–۲ | ۲۰۰ |
| ۴ هسته | ۴ گیگ+ | ۲–۳ | ۴۰۰ |
| ۸+ هسته | ۸ گیگ+ | ۳+ | ۸۰۰ |
| پردازنده | کانتینر پیشنهادی | حداکثر کاربران (هر کانتینر) |
|----------|-------------------|----------------------------|
| ۱ هسته | ۱ | ۱۰۰ |
| ۲ هسته | ۱–۲ | ۲۰۰ |
| ۴ هسته | ۲–۴ | ۴۰۰ |
| ۸+ هسته | ۴+ | ۸۰۰ |
> **رم:** حداقل ۵۱۲ مگابایت. برای ۳+ کانتینر، ۴ گیگابایت+ پیشنهاد می‌شود.
## گزینه‌های نصب


@@ -1,7 +1,7 @@
#!/bin/bash
#
# ╔═══════════════════════════════════════════════════════════════════╗
# ║ 🚀 PSIPHON CONDUIT MANAGER v1.2 ║
# ║ 🚀 PSIPHON CONDUIT MANAGER v1.2.1 ║
# ║ ║
# ║ One-click setup for Psiphon Conduit ║
# ║ ║
@@ -31,7 +31,7 @@ if [ -z "$BASH_VERSION" ]; then
exit 1
fi
VERSION="1.2"
VERSION="1.2.1"
CONDUIT_IMAGE="ghcr.io/ssmirr/conduit/conduit:latest"
INSTALL_DIR="${INSTALL_DIR:-/opt/conduit}"
BACKUP_DIR="$INSTALL_DIR/backups"
@@ -867,7 +867,7 @@ create_management_script() {
# Reference: https://github.com/ssmirr/conduit/releases/latest
#
VERSION="1.2"
VERSION="1.2.1"
INSTALL_DIR="REPLACE_ME_INSTALL_DIR"
BACKUP_DIR="$INSTALL_DIR/backups"
CONDUIT_IMAGE="ghcr.io/ssmirr/conduit/conduit:latest"
@@ -1293,7 +1293,7 @@ show_dashboard() {
echo -e "\033[K"
fi
echo -e "${BOLD}Refreshes every 10 seconds. Press any key to return to menu...${NC}\033[K"
echo -e "${BOLD}Refreshes every 10 seconds. ${CYAN}[i]${NC} ${DIM}What do these numbers mean?${NC} ${DIM}[any key] Menu${NC}\033[K"
# Clear any leftover lines below the dashboard content (Erase to End of Display)
# This only cleans up if the dashboard gets shorter
@@ -1303,9 +1303,13 @@ show_dashboard() {
# Wait 10 seconds for keypress (balances responsiveness with CPU usage)
# Redirect from /dev/tty ensures it works when the script is piped
if read -t 10 -n 1 -s < /dev/tty 2>/dev/null; then
if read -t 10 -n 1 -s key < /dev/tty 2>/dev/null; then
if [[ "$key" == "i" || "$key" == "I" ]]; then
show_dashboard_info
else
stop_dashboard=1
fi
fi
done
echo -ne "\033[?25h" # Show cursor
@@ -1560,6 +1564,119 @@ SNAPSHOT_FILE="$PERSIST_DIR/tracker_snapshot"
C_START_FILE="$PERSIST_DIR/container_start"
GEOIP_CACHE="$PERSIST_DIR/geoip_cache"
# Temporal sampling configuration (capture 15s, sleep 15s, multiply by 2)
# This reduces CPU usage by ~40-50% while maintaining accurate traffic estimates
SAMPLE_CAPTURE_TIME=15 # Seconds to capture packets
SAMPLE_SLEEP_TIME=15 # Seconds to sleep between captures
TRAFFIC_MULTIPLIER=2 # Multiply byte counts to compensate for sampling
# Connection tracking files
CONN_HISTORY_FILE="$PERSIST_DIR/connection_history"
CONN_HISTORY_START="$PERSIST_DIR/connection_history_start"
PEAK_CONN_FILE="$PERSIST_DIR/peak_connections"
LAST_CONN_RECORD=0
CONN_RECORD_INTERVAL=300 # Record every 5 minutes
# Get earliest container start time (for reset detection)
get_container_start() {
local earliest=""
local count=${CONTAINER_COUNT:-1}
for i in $(seq 1 $count); do
local cname
if [ "$count" -eq 1 ]; then
cname="intgpsiphonclient"
else
cname="intgpsiphonclient${i}"
fi
local start=$(docker inspect --format='{{.State.StartedAt}}' "$cname" 2>/dev/null | cut -d'.' -f1)
[ -z "$start" ] && continue
if [ -z "$earliest" ] || [[ "$start" < "$earliest" ]]; then
earliest="$start"
fi
done
echo "$earliest"
}
# Check if containers restarted and reset data if needed
check_container_restart() {
local current_start=$(get_container_start)
[ -z "$current_start" ] && return
# Check history file
if [ -f "$CONN_HISTORY_START" ]; then
local saved=$(cat "$CONN_HISTORY_START" 2>/dev/null)
if [ "$saved" != "$current_start" ]; then
# Container restarted - clear history and peak
rm -f "$CONN_HISTORY_FILE" "$PEAK_CONN_FILE" 2>/dev/null
echo "$current_start" > "$CONN_HISTORY_START"
fi
else
echo "$current_start" > "$CONN_HISTORY_START"
fi
}
# Count current connections from docker logs (lightweight)
count_connections() {
local total_conn=0
local total_cing=0
local count=${CONTAINER_COUNT:-1}
for i in $(seq 1 $count); do
local cname
if [ "$count" -eq 1 ]; then
cname="intgpsiphonclient"
else
cname="intgpsiphonclient${i}"
fi
# Quick tail of recent logs
local stats=$(docker logs --tail 50 "$cname" 2>&1 | grep -o 'numClients":[0-9]*' | tail -1 | grep -o '[0-9]*')
local cing=$(docker logs --tail 50 "$cname" 2>&1 | grep -o 'connectingClients":[0-9]*' | tail -1 | grep -o '[0-9]*')
total_conn=$((total_conn + ${stats:-0}))
total_cing=$((total_cing + ${cing:-0}))
done
echo "$total_conn|$total_cing"
}
# Record connection history and update peak
record_connections() {
local now=$(date +%s)
# Only record every 5 minutes
if [ $((now - LAST_CONN_RECORD)) -lt $CONN_RECORD_INTERVAL ]; then
return
fi
LAST_CONN_RECORD=$now
# Check for container restart
check_container_restart
# Get current connections
local counts=$(count_connections)
local connected=$(echo "$counts" | cut -d'|' -f1)
local connecting=$(echo "$counts" | cut -d'|' -f2)
# Record to history
echo "${now}|${connected}|${connecting}" >> "$CONN_HISTORY_FILE"
# Prune old entries (keep 25 hours)
local cutoff=$((now - 90000))
if [ -f "$CONN_HISTORY_FILE" ]; then
awk -F'|' -v cutoff="$cutoff" '$1 >= cutoff' "$CONN_HISTORY_FILE" > "${CONN_HISTORY_FILE}.tmp" 2>/dev/null
mv -f "${CONN_HISTORY_FILE}.tmp" "$CONN_HISTORY_FILE" 2>/dev/null
fi
# Update peak if needed
local current_peak=0
if [ -f "$PEAK_CONN_FILE" ]; then
current_peak=$(tail -1 "$PEAK_CONN_FILE" 2>/dev/null)
current_peak=${current_peak:-0}
fi
if [ "$connected" -gt "$current_peak" ] 2>/dev/null; then
local start=$(cat "$CONN_HISTORY_START" 2>/dev/null)
echo "$start" > "$PEAK_CONN_FILE"
echo "$connected" >> "$PEAK_CONN_FILE"
fi
}
# Detect local IPs
get_local_ips() {
ip -4 addr show 2>/dev/null | awk '/inet /{split($2,a,"/"); print a[1]}' | tr '\n' '|'
@@ -1679,17 +1796,18 @@ process_batch() {
done < "$PERSIST_DIR/batch_ips"
# Step 2: Single awk pass — merge batch into cumulative_data + write snapshot
$AWK_BIN -F'|' -v snap="${SNAPSHOT_TMP:-$SNAPSHOT_FILE}" '
BEGIN { OFMT = "%.0f"; CONVFMT = "%.0f" }
# MULT applies traffic multiplier for temporal sampling (capture 15s, sleep 15s = multiply by 2)
$AWK_BIN -F'|' -v snap="${SNAPSHOT_TMP:-$SNAPSHOT_FILE}" -v MULT="$TRAFFIC_MULTIPLIER" '
BEGIN { OFMT = "%.0f"; CONVFMT = "%.0f"; if (MULT == "") MULT = 1 }
FILENAME == ARGV[1] { geo[$1] = $2; next }
FILENAME == ARGV[2] { existing[$1] = $2 "|" $3; next }
FILENAME == ARGV[3] {
dir = $1; ip = $2; bytes = $3 + 0
dir = $1; ip = $2; bytes = ($3 + 0) * MULT
c = geo[ip]
if (c == "") c = "Unknown"
if (dir == "FROM") from_bytes[c] += bytes
else to_bytes[c] += bytes
# Also collect snapshot lines
# Also collect snapshot lines (with multiplied bytes for rate display)
print dir "|" c "|" bytes "|" ip > snap
next
}
@@ -1818,16 +1936,18 @@ Container ${safe_cname} was stuck (no peers for $((idle_time/3600))h) and has be
done
}
# Main capture loop: tcpdump -> awk -> batch process
# Main capture loop with temporal sampling: capture -> process -> sleep -> repeat
# This reduces CPU usage by ~40-50% while maintaining accurate traffic estimates
LAST_BACKUP=0
while true; do
BATCH_FILE="$PERSIST_DIR/batch_tmp"
> "$BATCH_FILE"
while true; do
if IFS= read -t 60 -r line; then
# Capture phase: run tcpdump for SAMPLE_CAPTURE_TIME seconds
# timeout kills tcpdump after the specified time, AWK END block flushes remaining data
while IFS= read -r line; do
if [ "$line" = "SYNC_MARKER" ]; then
# Process entire batch at once
# Process batch when we receive sync marker
if [ -s "$BATCH_FILE" ]; then
> "${SNAPSHOT_FILE}.new"
SNAPSHOT_TMP="${SNAPSHOT_FILE}.new"
@@ -1836,6 +1956,7 @@ while true; do
fi
fi
> "$BATCH_FILE"
# Periodic backup every 3 hours
NOW=$(date +%s)
if [ $((NOW - LAST_BACKUP)) -ge 10800 ]; then
@@ -1843,31 +1964,11 @@ while true; do
[ -s "$IPS_FILE" ] && cp "$IPS_FILE" "$PERSIST_DIR/cumulative_ips.bak"
LAST_BACKUP=$NOW
fi
# Check for stuck containers every 15 minutes
if [ $((NOW - LAST_STUCK_CHECK)) -ge "$STUCK_CHECK_INTERVAL" ]; then
check_stuck_containers
LAST_STUCK_CHECK=$NOW
fi
continue
fi
else
echo "$line" >> "$BATCH_FILE"
else
# read timed out or EOF — check stuck containers even with no traffic
rc=$?
if [ $rc -gt 128 ]; then
# Timeout — no traffic, still check for stuck containers
NOW=$(date +%s)
if [ $((NOW - LAST_STUCK_CHECK)) -ge "$STUCK_CHECK_INTERVAL" ]; then
check_stuck_containers
LAST_STUCK_CHECK=$NOW
fi
else
# EOF — tcpdump exited, break to outer loop to restart
break
fi
fi
done < <($TCPDUMP_BIN -tt -l -ni "$CAPTURE_IFACE" -n -q -s 96 "(tcp or udp) and not port 22" 2>/dev/null | $AWK_BIN -v local_ip="$LOCAL_IP" '
BEGIN { last_sync = 0; OFMT = "%.0f"; CONVFMT = "%.0f" }
done < <(timeout "$SAMPLE_CAPTURE_TIME" $TCPDUMP_BIN -tt -l -ni "$CAPTURE_IFACE" -n -q -s 64 "(tcp or udp) and not port 22" 2>/dev/null | $AWK_BIN -v local_ip="$LOCAL_IP" '
BEGIN { OFMT = "%.0f"; CONVFMT = "%.0f" }
{
# Parse timestamp
ts = $1 + 0
@@ -1901,7 +2002,7 @@ while true; do
if (src ~ /^(10\.|172\.(1[6-9]|2[0-9]|3[01])\.|192\.168\.|127\.|0\.|169\.254\.)/) src=""
if (dst ~ /^(10\.|172\.(1[6-9]|2[0-9]|3[01])\.|192\.168\.|127\.|0\.|169\.254\.)/) dst=""
# Determine direction
# Determine direction and accumulate
if (src == local_ip && dst != "" && dst != local_ip) {
to[dst] += len
} else if (dst == local_ip && src != "" && src != local_ip) {
@@ -1911,19 +2012,28 @@ while true; do
} else if (dst != "" && dst != local_ip) {
to[dst] += len
}
# Sync every 30 seconds
if (last_sync == 0) last_sync = ts
if (ts - last_sync >= 30) {
}
END {
# Flush all accumulated data when tcpdump exits (after timeout)
for (ip in from) { if (from[ip] > 0) print "FROM|" ip "|" from[ip] }
for (ip in to) { if (to[ip] > 0) print "TO|" ip "|" to[ip] }
print "SYNC_MARKER"
delete from; delete to; last_sync = ts; fflush()
}
fflush()
}')
# If tcpdump exits, wait and retry
sleep 5
# Check for stuck containers during each cycle
NOW=$(date +%s)
if [ $((NOW - LAST_STUCK_CHECK)) -ge "$STUCK_CHECK_INTERVAL" ]; then
check_stuck_containers
LAST_STUCK_CHECK=$NOW
fi
# Record connection history and peak (every 5 min, lightweight)
record_connections
# Sleep phase: pause before next capture cycle
# This is where CPU savings come from - tcpdump not running during sleep
sleep "$SAMPLE_SLEEP_TIME"
done
TRACKER_SCRIPT
@@ -2471,6 +2581,373 @@ get_net_speed() {
fi
}
# Show detailed info about dashboard metrics
# Info page 1: Traffic & Bandwidth Explained
show_info_traffic() {
clear
echo -e "${CYAN}══════════════════════════════════════════════════════════════${NC}"
echo -e "${CYAN} TRAFFIC & BANDWIDTH EXPLAINED${NC}"
echo -e "${CYAN}══════════════════════════════════════════════════════════════${NC}"
echo ""
echo -e "${YELLOW}Traffic (current session)${NC}"
echo -e " ${BOLD}Source:${NC} Container logs ([STATS] lines from Conduit)"
echo -e " ${BOLD}Measures:${NC} Application-level payload data"
echo -e " ${BOLD}Meaning:${NC} Actual content delivered to/from users"
echo -e " ${BOLD}Resets:${NC} When containers restart"
echo ""
echo -e "${YELLOW}Top 5 Upload/Download (cumulative)${NC}"
echo -e " ${BOLD}Source:${NC} Network tracker (tcpdump on interface)"
echo -e " ${BOLD}Measures:${NC} Network-level bytes on the wire"
echo -e " ${BOLD}Meaning:${NC} Actual bandwidth used (what your ISP sees)"
echo -e " ${BOLD}Resets:${NC} Via Settings > Reset tracker data"
echo ""
echo -e "${CYAN}══════════════════════════════════════════════════════════════${NC}"
echo -e "${BOLD}WHY ARE THESE NUMBERS DIFFERENT?${NC}"
echo -e "${CYAN}══════════════════════════════════════════════════════════════${NC}"
echo ""
echo -e " The tracker typically shows ${YELLOW}5-20x more${NC} traffic than container stats."
echo -e " This is ${GREEN}normal${NC} for encrypted tunneling proxies like Conduit."
echo ""
echo -e " ${BOLD}The difference is protocol overhead:${NC}"
echo -e " • TLS/encryption framing"
echo -e " • Tunnel protocol headers"
echo -e " • TCP acknowledgments (ACKs)"
echo -e " • Keep-alive packets"
echo -e " • Connection handshakes"
echo -e " • Retransmissions"
echo ""
echo -e " ${BOLD}Example:${NC}"
echo -e " Container reports: 10 GB payload delivered"
echo -e " Network actual: 60 GB bandwidth used"
echo -e " Overhead ratio: 6x (typical for encrypted tunnels)"
echo ""
echo -e "${CYAN}══════════════════════════════════════════════════════════════${NC}"
read -n 1 -s -r -p " Press any key to go back..." < /dev/tty
}
# Info page 2: Network Mode & Docker
show_info_network() {
clear
echo -e "${CYAN}══════════════════════════════════════════════════════════════${NC}"
echo -e "${CYAN} NETWORK MODE & DOCKER${NC}"
echo -e "${CYAN}══════════════════════════════════════════════════════════════${NC}"
echo ""
echo -e "${YELLOW}Why --network=host mode?${NC}"
echo ""
echo -e " Conduit containers run with ${YELLOW}--network=host${NC} for best performance."
echo -e " This mode gives containers direct access to the host's network stack,"
echo -e " eliminating Docker's network bridge overhead and reducing latency."
echo ""
echo -e "${YELLOW}The trade-off${NC}"
echo ""
echo -e " Docker cannot track per-container network I/O in host mode."
echo -e " Running 'docker stats' will show ${DIM}0B / 0B${NC} for network - this is"
echo -e " expected behavior, not a bug."
echo ""
echo -e "${YELLOW}Our solution${NC}"
echo ""
echo -e " • ${BOLD}Container traffic:${NC} Parsed from Conduit's own [STATS] log lines"
echo -e " • ${BOLD}Network traffic:${NC} Captured via tcpdump on the host interface"
echo -e " • Both methods work reliably with --network=host mode"
echo ""
echo -e "${CYAN}══════════════════════════════════════════════════════════════${NC}"
echo -e "${BOLD}TECHNICAL DETAILS${NC}"
echo -e "${CYAN}══════════════════════════════════════════════════════════════${NC}"
echo ""
echo -e " ${BOLD}Container stats:${NC}"
echo -e " Parsed from: docker logs [container] | grep '[STATS]'"
echo -e " Fields: Up (upload), Down (download), Connected, Uptime"
echo -e " Scope: Per-container, aggregated for display"
echo ""
echo -e " ${BOLD}Tracker stats:${NC}"
echo -e " Captured by: tcpdump on primary network interface"
echo -e " Processed: GeoIP lookup for country attribution"
echo -e " Storage: /opt/conduit/traffic_stats/cumulative_data"
echo ""
echo -e "${CYAN}══════════════════════════════════════════════════════════════${NC}"
read -n 1 -s -r -p " Press any key to go back..." < /dev/tty
}
# Info page 4: Peak, Average & Client History
show_info_client_stats() {
clear
echo -e "${CYAN}══════════════════════════════════════════════════════════════${NC}"
echo -e "${CYAN} PEAK, AVERAGE & CLIENT HISTORY${NC}"
echo -e "${CYAN}══════════════════════════════════════════════════════════════${NC}"
echo ""
echo -e "${YELLOW}What these numbers mean${NC}"
echo ""
echo -e " ${BOLD}Peak${NC} Highest number of connected clients since container"
echo -e " started. Useful to see your maximum capacity usage."
echo ""
echo -e " ${BOLD}Avg${NC} Average connected clients over time. Gives you a"
echo -e " realistic picture of typical load."
echo ""
echo -e " ${BOLD}6h/12h/24h${NC} How many clients were connected at that time ago."
echo -e " Shows '-' if no data exists for that time."
echo ""
echo -e "${YELLOW}When does data reset?${NC}"
echo ""
echo -e " All stats reset when ${BOLD}ALL${NC} containers restart."
echo -e " If only some containers restart, data is preserved."
echo -e " Closing the dashboard does ${BOLD}NOT${NC} reset any data."
echo ""
echo -e "${YELLOW}Tracker ON vs OFF${NC}"
echo ""
echo -e " ┌──────────────┬─────────────────────┬─────────────────────┐"
echo -e " │ ${BOLD}Feature${NC} │ ${GREEN}Tracker ON${NC} │ ${RED}Tracker OFF${NC} │"
echo -e " ├──────────────┼─────────────────────┼─────────────────────┤"
echo -e " │ Peak │ Records 24/7 │ Only when dashboard │"
echo -e " │ │ │ is open │"
echo -e " ├──────────────┼─────────────────────┼─────────────────────┤"
echo -e " │ Avg │ All time average │ Only times when │"
echo -e " │ │ │ dashboard was open │"
echo -e " ├──────────────┼─────────────────────┼─────────────────────┤"
echo -e " │ 6h/12h/24h │ Shows data even if │ Shows '-' if dash │"
echo -e " │ │ dashboard was closed│ wasn't open then │"
echo -e " └──────────────┴─────────────────────┴─────────────────────┘"
echo ""
echo -e " ${DIM}Tip: Keep tracker enabled for complete, accurate stats.${NC}"
echo ""
echo -e "${CYAN}══════════════════════════════════════════════════════════════${NC}"
read -n 1 -s -r -p " Press any key to go back..." < /dev/tty
}
# Info page 3: Which Numbers To Use
show_info_which_numbers() {
clear
echo -e "${CYAN}══════════════════════════════════════════════════════════════${NC}"
echo -e "${CYAN} WHICH NUMBERS SHOULD I USE?${NC}"
echo -e "${CYAN}══════════════════════════════════════════════════════════════${NC}"
echo ""
echo -e "${YELLOW}For bandwidth & cost planning${NC}"
echo ""
echo -e " Use ${BOLD}Top 5 Upload/Download${NC} (tracker) numbers"
echo ""
echo -e " → This is what your ISP bills you for"
echo -e " → This is your actual network usage"
echo -e " → Use this for server cost calculations"
echo -e " → Use this to monitor bandwidth caps"
echo ""
echo -e "${YELLOW}For user impact metrics${NC}"
echo ""
echo -e " Use ${BOLD}Traffic (current session)${NC} numbers"
echo ""
echo -e " → This is actual content delivered to users"
echo -e " → This matches Conduit's internal reporting"
echo -e " → Use this to measure user activity"
echo -e " → Use this to compare with Psiphon stats"
echo ""
echo -e "${YELLOW}Quick reference${NC}"
echo ""
echo -e " ┌─────────────────────┬─────────────────────────────────────┐"
echo -e " │ ${BOLD}Question${NC} │ ${BOLD}Use This${NC} │"
echo -e " ├─────────────────────┼─────────────────────────────────────┤"
echo -e " │ ISP bandwidth used? │ Top 5 (tracker) │"
echo -e " │ User data served? │ Traffic (session) │"
echo -e " │ Monthly costs? │ Top 5 (tracker) │"
echo -e " │ Users helped? │ Traffic (session) + Connections │"
echo -e " └─────────────────────┴─────────────────────────────────────┘"
echo ""
echo -e "${CYAN}══════════════════════════════════════════════════════════════${NC}"
read -n 1 -s -r -p " Press any key to go back..." < /dev/tty
}
# Main info menu
show_dashboard_info() {
while true; do
clear
echo -e "${CYAN}══════════════════════════════════════════════════════════════${NC}"
echo -e "${CYAN} UNDERSTANDING YOUR DASHBOARD${NC}"
echo -e "${CYAN}══════════════════════════════════════════════════════════════${NC}"
echo ""
echo -e " Select a topic to learn more:"
echo ""
echo -e " ${CYAN}[1]${NC} Traffic & Bandwidth Explained"
echo -e " ${DIM}Why tracker shows more than container stats${NC}"
echo ""
echo -e " ${CYAN}[2]${NC} Network Mode & Docker"
echo -e " ${DIM}Why we use --network=host and how stats work${NC}"
echo ""
echo -e " ${CYAN}[3]${NC} Which Numbers To Use"
echo -e " ${DIM}Choosing the right metric for your needs${NC}"
echo ""
echo -e " ${CYAN}[4]${NC} Peak, Average & Client History"
echo -e " ${DIM}Understanding Peak, Avg, and 6h/12h/24h stats${NC}"
echo ""
echo -e "${CYAN}══════════════════════════════════════════════════════════════${NC}"
echo ""
echo -e " ${DIM}Press ${NC}${BOLD}1${NC}${DIM}-${NC}${BOLD}4${NC}${DIM} to view a topic, or any other key to go back${NC}"
read -n 1 -s -r key < /dev/tty
case "$key" in
1) show_info_traffic ;;
2) show_info_network ;;
3) show_info_which_numbers ;;
4) show_info_client_stats ;;
*) return ;;
esac
done
}
# Connection history file for tracking connections over time
CONNECTION_HISTORY_FILE="/opt/conduit/traffic_stats/connection_history"
_LAST_HISTORY_RECORD=0
# Peak connections tracking (persistent, resets on container restart)
PEAK_CONNECTIONS_FILE="/opt/conduit/traffic_stats/peak_connections"
_PEAK_CONNECTIONS=0
_PEAK_CONTAINER_START=""
# Get the earliest container start time (used to detect restarts)
get_container_start_time() {
local earliest=""
for i in $(seq 1 ${CONTAINER_COUNT:-1}); do
local cname=$(get_container_name $i 2>/dev/null)
[ -z "$cname" ] && continue
local start=$(docker inspect --format='{{.State.StartedAt}}' "$cname" 2>/dev/null | cut -d'.' -f1)
[ -z "$start" ] && continue
if [ -z "$earliest" ] || [[ "$start" < "$earliest" ]]; then
earliest="$start"
fi
done
echo "$earliest"
}
# Load peak from file (resets if containers restarted)
load_peak_connections() {
local current_start=$(get_container_start_time)
if [ -f "$PEAK_CONNECTIONS_FILE" ]; then
local saved_start=$(head -1 "$PEAK_CONNECTIONS_FILE" 2>/dev/null)
local saved_peak=$(tail -1 "$PEAK_CONNECTIONS_FILE" 2>/dev/null)
# If container start time matches, restore peak
if [ "$saved_start" = "$current_start" ] && [ -n "$saved_peak" ]; then
_PEAK_CONNECTIONS=$saved_peak
_PEAK_CONTAINER_START="$current_start"
return
fi
fi
# Container restarted or no saved data - reset peak
_PEAK_CONNECTIONS=0
_PEAK_CONTAINER_START="$current_start"
save_peak_connections
}
# Save peak to file
save_peak_connections() {
mkdir -p "$(dirname "$PEAK_CONNECTIONS_FILE")" 2>/dev/null
echo "$_PEAK_CONTAINER_START" > "$PEAK_CONNECTIONS_FILE"
echo "$_PEAK_CONNECTIONS" >> "$PEAK_CONNECTIONS_FILE"
}
# Connection history container tracking (resets when containers restart)
CONNECTION_HISTORY_START_FILE="/opt/conduit/traffic_stats/connection_history_start"
_CONNECTION_HISTORY_CONTAINER_START=""
# Check and reset connection history if containers restarted
check_connection_history_reset() {
local current_start=$(get_container_start_time)
# Check if we have a saved container start time
if [ -f "$CONNECTION_HISTORY_START_FILE" ]; then
local saved_start=$(cat "$CONNECTION_HISTORY_START_FILE" 2>/dev/null)
if [ "$saved_start" = "$current_start" ] && [ -n "$saved_start" ]; then
# Same container session, keep history
_CONNECTION_HISTORY_CONTAINER_START="$current_start"
return
fi
fi
# Container restarted or new session - clear history
_CONNECTION_HISTORY_CONTAINER_START="$current_start"
mkdir -p "$(dirname "$CONNECTION_HISTORY_START_FILE")" 2>/dev/null
echo "$current_start" > "$CONNECTION_HISTORY_START_FILE"
# Clear history file
rm -f "$CONNECTION_HISTORY_FILE" 2>/dev/null
}
# Record current connection count to history (called every ~5 minutes)
record_connection_history() {
local connected=$1
local connecting=$2
local now=$(date +%s)
# Only record every 5 minutes (300 seconds)
if [ $(( now - _LAST_HISTORY_RECORD )) -lt 300 ]; then
return
fi
_LAST_HISTORY_RECORD=$now
# Check if containers restarted (reset history if so)
check_connection_history_reset
# Ensure directory exists
mkdir -p "$(dirname "$CONNECTION_HISTORY_FILE")" 2>/dev/null
# Append current snapshot
echo "${now}|${connected}|${connecting}" >> "$CONNECTION_HISTORY_FILE"
# Prune entries older than 25 hours (keep some buffer)
local cutoff=$((now - 90000))
if [ -f "$CONNECTION_HISTORY_FILE" ]; then
awk -F'|' -v cutoff="$cutoff" '$1 >= cutoff' "$CONNECTION_HISTORY_FILE" > "${CONNECTION_HISTORY_FILE}.tmp" 2>/dev/null
mv -f "${CONNECTION_HISTORY_FILE}.tmp" "$CONNECTION_HISTORY_FILE" 2>/dev/null
fi
}
# Get average connections since container started
get_average_connections() {
# Check if containers restarted (clear stale history)
check_connection_history_reset
if [ ! -f "$CONNECTION_HISTORY_FILE" ]; then
echo "-"
return
fi
# Calculate average from all entries in history
local avg=$(awk -F'|' '
NF >= 2 { sum += $2; count++ }
END { if (count > 0) printf "%.0f", sum/count; else print "-" }
' "$CONNECTION_HISTORY_FILE" 2>/dev/null)
echo "${avg:--}"
}
# Get connection snapshot from N hours ago (returns "connected|connecting" or "-|-")
get_connection_snapshot() {
local hours_ago=$1
local now=$(date +%s)
local target=$((now - (hours_ago * 3600)))
local tolerance=1800 # 30 minute tolerance window
# Check if containers restarted (clear stale history)
check_connection_history_reset
if [ ! -f "$CONNECTION_HISTORY_FILE" ]; then
echo "-|-"
return
fi
# Find closest entry to target time within tolerance
local result=$(awk -F'|' -v target="$target" -v tol="$tolerance" '
BEGIN { best_diff = tol + 1; best = "-|-" }
{
diff = ($1 > target) ? ($1 - target) : (target - $1)
if (diff < best_diff) {
best_diff = diff
best = $2 "|" $3
}
}
END { print best }
' "$CONNECTION_HISTORY_FILE" 2>/dev/null)
echo "${result:--|-}"
}
# Global cache for container stats (persists between show_status calls)
declare -A _STATS_CACHE_UP _STATS_CACHE_DOWN _STATS_CACHE_CONN _STATS_CACHE_CING
@@ -2485,6 +2962,11 @@ show_status() {
EL="\033[K" # Erase Line escape code
fi
# Load peak connections from file (only once per session)
if [ -z "$_PEAK_CONTAINER_START" ]; then
load_peak_connections
fi
echo ""
@@ -2565,6 +3047,12 @@ show_status() {
# Export for parent function to reuse (avoids duplicate docker logs calls)
_total_connected=$total_connected
# Update peak connections if current exceeds peak (and save to file)
if [ "$connected" -gt "$_PEAK_CONNECTIONS" ] 2>/dev/null; then
_PEAK_CONNECTIONS=$connected
save_peak_connections
fi
# Aggregate upload/download across all containers
local upload=""
local download=""
@@ -2671,15 +3159,29 @@ show_status() {
local net_display="↓ ${rx_mbps} Mbps ↑ ${tx_mbps} Mbps"
if [ -n "$upload" ] || [ "$connected" -gt 0 ] || [ "$connecting" -gt 0 ]; then
local avg_conn=$(get_average_connections)
local status_line="${BOLD}Status:${NC} ${GREEN}Running${NC}"
[ -n "$uptime" ] && status_line="${status_line} (${uptime})"
status_line="${status_line} ${DIM}|${NC} ${BOLD}Peak:${NC} ${CYAN}${_PEAK_CONNECTIONS}${NC}"
status_line="${status_line} ${DIM}|${NC} ${BOLD}Avg:${NC} ${CYAN}${avg_conn}${NC}"
echo -e "${status_line}${EL}"
echo -e " Containers: ${GREEN}${running_count}${NC}/${CONTAINER_COUNT} Clients: ${GREEN}${connected}${NC} connected, ${YELLOW}${connecting}${NC} connecting${EL}"
echo -e "${EL}"
echo -e "${CYAN}═══ Traffic (current session) ═══${NC}${EL}"
[ -n "$upload" ] && echo -e " Upload: ${CYAN}${upload}${NC}${EL}"
[ -n "$download" ] && echo -e " Download: ${CYAN}${download}${NC}${EL}"
# Record connection history (every 5 min)
record_connection_history "$connected" "$connecting"
# Get connection history snapshots
local snap_6h=$(get_connection_snapshot 6)
local snap_12h=$(get_connection_snapshot 12)
local snap_24h=$(get_connection_snapshot 24)
local conn_6h=$(echo "$snap_6h" | cut -d'|' -f1)
local conn_12h=$(echo "$snap_12h" | cut -d'|' -f1)
local conn_24h=$(echo "$snap_24h" | cut -d'|' -f1)
# Display traffic and history side by side
printf " Upload: ${CYAN}%-12s${NC} ${DIM}|${NC} Clients: ${DIM}6h:${NC}${GREEN}%-4s${NC} ${DIM}12h:${NC}${GREEN}%-4s${NC} ${DIM}24h:${NC}${GREEN}%s${NC}${EL}\n" \
"${upload:-0 B}" "${conn_6h}" "${conn_12h}" "${conn_24h}"
printf " Download: ${CYAN}%-12s${NC} ${DIM}|${NC}${EL}\n" "${download:-0 B}"
echo -e "${EL}"
echo -e "${CYAN}═══ Resource Usage ═══${NC}${EL}"
@@ -5945,6 +6447,7 @@ show_info_menu() {
echo -e " 3. 📦 Containers & Scaling"
echo -e " 4. 🔒 Privacy & Security"
echo -e " 5. 🚀 About Psiphon Conduit"
echo -e " 6. 📈 Dashboard Metrics Explained"
echo ""
echo -e " [b] Back to menu"
echo -e "${CYAN}══════════════════════════════════════════════════════════════${NC}"
@@ -5958,6 +6461,7 @@ show_info_menu() {
3) _info_containers; redraw=true ;;
4) _info_privacy; redraw=true ;;
5) show_about; redraw=true ;;
6) show_dashboard_info; redraw=true ;;
b|"") break ;;
*) echo -e " ${RED}Invalid.${NC}"; sleep 1; redraw=true ;;
esac
@@ -6128,6 +6632,7 @@ show_help() {
echo " menu Open interactive menu (default)"
echo " version Show version information"
echo " about About Psiphon Conduit"
echo " info Dashboard metrics explained"
echo " help Show this help"
}
@@ -6537,35 +7042,62 @@ update_conduit() {
echo -e "${CYAN}═══ UPDATE CONDUIT ═══${NC}"
echo ""
local script_updated=false
# --- Phase 1: Script update ---
echo "Checking for script updates..."
echo -e "${BOLD}Phase 1: Checking for script updates...${NC}"
local update_url="https://raw.githubusercontent.com/SamNet-dev/conduit-manager/main/conduit.sh"
local tmp_script="/tmp/conduit_update_$$.sh"
if curl -sL --max-time 30 --max-filesize 2097152 -o "$tmp_script" "$update_url" 2>/dev/null; then
if curl -fsSL --max-time 30 --max-filesize 2097152 -o "$tmp_script" "$update_url" 2>/dev/null; then
# Validate downloaded script (basic sanity checks)
if grep -q "CONDUIT_IMAGE=" "$tmp_script" && grep -q "create_management_script" "$tmp_script" && bash -n "$tmp_script" 2>/dev/null; then
echo -e "${GREEN}✓ Latest script downloaded${NC}"
local new_version=$(grep -m1 '^VERSION=' "$tmp_script" 2>/dev/null | cut -d'"' -f2)
echo -e " ${GREEN}✓ Downloaded v${new_version:-?} from GitHub${NC}"
echo -e " Installing..."
# Always install latest from GitHub (run new script's update-components)
bash "$tmp_script" --update-components
local update_status=$?
rm -f "$tmp_script"
if [ $update_status -eq 0 ]; then
echo -e "${GREEN}✓ Management script updated${NC}"
echo -e "${GREEN}✓ Tracker service updated${NC}"
echo -e " ${GREEN}✓ Script installed (v${new_version:-?})${NC}"
script_updated=true
else
echo -e "${RED}Script update failed. Continuing with Docker check...${NC}"
echo -e " ${RED}✗ Installation failed${NC}"
fi
else
echo -e "${RED}Downloaded file doesn't look valid. Skipping script update.${NC}"
echo -e " ${RED}Downloaded file invalid or corrupted${NC}"
rm -f "$tmp_script"
fi
else
echo -e "${YELLOW}Could not download latest script. Skipping script update.${NC}"
rm -f "$tmp_script"
echo -e " ${YELLOW}Could not download (check internet connection)${NC}"
rm -f "$tmp_script" 2>/dev/null
fi
# --- Phase 2: Docker image update ---
# --- Phase 2: Restart tracker service (picks up any script changes) ---
echo ""
echo "Checking for Docker image updates..."
echo -e "${BOLD}Phase 2: Updating tracker service...${NC}"
if [ "${TRACKER_ENABLED:-true}" = "true" ]; then
# Regenerate and restart tracker to pick up new code
if command -v systemctl &>/dev/null; then
systemctl restart conduit-tracker.service 2>/dev/null
if systemctl is-active conduit-tracker.service &>/dev/null; then
echo -e " ${GREEN}✓ Tracker service restarted${NC}"
else
echo -e " ${YELLOW}✗ Tracker restart failed (will retry on next start)${NC}"
fi
else
echo -e " ${DIM}Tracker service not available (no systemd)${NC}"
fi
else
echo -e " ${DIM}Tracker is disabled, skipping${NC}"
fi
# --- Phase 3: Docker image update ---
echo ""
echo -e "${BOLD}Phase 3: Checking for Docker image updates...${NC}"
local pull_output
pull_output=$(docker pull "$CONDUIT_IMAGE" 2>&1)
local pull_status=$?
@@ -6574,7 +7106,7 @@ update_conduit() {
if [ $pull_status -ne 0 ]; then
echo -e "${RED}Failed to check for Docker updates. Check your internet connection.${NC}"
echo ""
echo -e "${GREEN}Script update complete.${NC}"
echo -e "${GREEN}Update complete.${NC}"
return 1
fi
@@ -6594,7 +7126,10 @@ update_conduit() {
fi
echo ""
echo -e "${GREEN}Update complete.${NC}"
echo -e "${GREEN}═══ Update complete ═══${NC}"
if [ "$script_updated" = true ]; then
echo -e "${DIM}Note: Some changes may require restarting the menu to take effect.${NC}"
fi
}
case "${1:-menu}" in
@@ -6612,6 +7147,7 @@ case "${1:-menu}" in
restore) restore_key ;;
scale) manage_containers ;;
about) show_about ;;
info) show_dashboard_info ;;
uninstall) uninstall_all ;;
version|-v|--version) show_version ;;
help|-h|--help) show_help ;;
@@ -6956,7 +7492,7 @@ SVCEOF
fi
}
#
# REACHED END OF SCRIPT - VERSION 1.2
# REACHED END OF SCRIPT - VERSION 1.2.1
# ###############################################################################
main "$@"