diff --git a/README.md b/README.md index cd058e2..f04d7bb 100644 --- a/README.md +++ b/README.md @@ -1,5 +1,21 @@ # Conduit Manager +``` + ██████╗ ██████╗ ███╗ ██╗██████╗ ██╗ ██╗██╗████████╗ + ██╔════╝██╔═══██╗████╗ ██║██╔══██╗██║ ██║██║╚══██╔══╝ + ██║ ██║ ██║██╔██╗ ██║██║ ██║██║ ██║██║ ██║ + ██║ ██║ ██║██║╚██╗██║██║ ██║██║ ██║██║ ██║ + ╚██████╗╚██████╔╝██║ ╚████║██████╔╝╚██████╔╝██║ ██║ + ╚═════╝ ╚═════╝ ╚═╝ ╚═══╝╚═════╝ ╚═════╝ ╚═╝ ╚═╝ + M A N A G E R +``` + +![Version](https://img.shields.io/badge/version-1.1-blue) +![License](https://img.shields.io/badge/license-MIT-green) +![Platform](https://img.shields.io/badge/platform-Linux-orange) +![Docker](https://img.shields.io/badge/Docker-Required-2496ED?logo=docker&logoColor=white) +![Bash](https://img.shields.io/badge/Bash-Script-4EAA25?logo=gnubash&logoColor=white) + A powerful management tool for deploying and managing Psiphon Conduit nodes on Linux servers. Help users access the open internet during network restrictions. ## Quick Install @@ -15,21 +31,35 @@ wget https://raw.githubusercontent.com/SamNet-dev/conduit-manager/main/conduit.s sudo bash conduit.sh ``` +## What's New in v1.1 + +- **Multi-Container Support** — Run up to 5 Conduit containers on a single server for higher throughput +- **Background Traffic Tracker** — Continuous tcpdump-based tracker service with per-country GeoIP stats +- **Advanced Stats Page** — Live dashboard with top countries by peers, download, upload, and unique IPs (bar charts, auto-refresh) +- **Live Dashboard Overhaul** — Side-by-side active clients and top upload by country with real-time bars +- **Per-Container Settings** — Configure max-clients and bandwidth individually for each container +- **Container Manager** — Add or remove containers on the fly with auto-refreshing status view +- **Smart Install** — Detects CPU cores and RAM, recommends container count for your hardware +- **Info & Help Hub** — Multi-page guide covering the tracker, stats, containers, privacy, and about +- **Service Auto-Recovery** — Automatically restarts failed conduit.service on script launch +- **Seamless Upgrade** — Existing v1.0.x users can run the new script without reinstalling; old containers are recognized automatically + ## Features -- **One-Click Deployment** - Automatically installs Docker and configures everything -- **Multi-Distro Support** - Works on Ubuntu, Debian, CentOS, Fedora, Arch, Alpine, openSUSE -- **Auto-Start on Boot** - Supports systemd, OpenRC, and SysVinit -- **Live Monitoring** - Real-time connection stats with CPU/RAM monitoring -- **Live Peer Traffic** - Real-time traffic monitoring by country with GeoIP lookup -- **Easy Management** - Powerful CLI commands or interactive menu -- **Backup & Restore** - Backup and restore your node identity key -- **Health Checks** - Comprehensive diagnostics for troubleshooting -- **Complete Uninstall** - Clean removal of all components - -![Conduit Manager Menu](conduitmenu.png) - -![Live Peer Traffic](conduitpeers.png) +- **One-Click Deployment** — Automatically installs Docker and configures everything +- **Multi-Container Scaling** — Run 1–5 containers to maximize your server's capacity +- **Multi-Distro Support** — Works on Ubuntu, Debian, CentOS, Fedora, Arch, Alpine, openSUSE +- **Auto-Start on Boot** — Supports systemd, OpenRC, and SysVinit +- **Live Dashboard** — Real-time connection stats with CPU/RAM monitoring and per-country client breakdown +- **Advanced Stats** — Top countries by connected peers, download, upload, and unique IPs with bar charts 
+- **Live Peer Traffic** — Real-time traffic table by country with speed, total bytes, and IP/client counts +- **Background Tracker** — Continuous traffic monitoring via systemd service with GeoIP resolution +- **Per-Container Settings** — Configure max-clients and bandwidth per container +- **Easy Management** — Powerful CLI commands or interactive menu +- **Backup & Restore** — Backup and restore your node identity keys +- **Health Checks** — Comprehensive diagnostics for troubleshooting +- **Info & Help** — Built-in multi-page guide explaining how everything works +- **Complete Uninstall** — Clean removal of all components ## Supported Distributions @@ -48,7 +78,7 @@ After installation, use the `conduit` command: ### Status & Monitoring ```bash conduit status # Show current status and resource usage -conduit stats # View live statistics (real-time) +conduit stats # View live statistics (real-time dashboard) conduit logs # View raw Docker logs conduit health # Run health check diagnostics conduit peers # Live peer traffic by country (GeoIP) @@ -56,21 +86,21 @@ conduit peers # Live peer traffic by country (GeoIP) ### Container Management ```bash -conduit start # Start the Conduit container -conduit stop # Stop the Conduit container -conduit restart # Restart the Conduit container +conduit start # Start all Conduit containers +conduit stop # Stop all Conduit containers +conduit restart # Restart all Conduit containers conduit update # Update to the latest Conduit image ``` ### Configuration ```bash -conduit settings # Change max-clients and bandwidth +conduit settings # Change max-clients and bandwidth per container conduit menu # Open interactive management menu ``` ### Backup & Restore ```bash -conduit backup # Backup your node identity key +conduit backup # Backup your node identity keys conduit restore # Restore node identity from backup ``` @@ -81,21 +111,40 @@ conduit version # Show version information conduit help # Show help message ``` +## Interactive Menu + +The interactive menu (`conduit menu`) provides access to all features: + +| Option | Description | +|--------|-------------| +| **1** | Live Dashboard — real-time stats with active clients and top upload by country | +| **2** | Start / Stop / Restart containers | +| **3** | Update Conduit image | +| **4** | Live Peer Traffic — per-country traffic table with speed and client counts | +| **5** | Container Settings — configure max-clients and bandwidth per container | +| **6** | Manage Containers — add or remove containers (up to 5) | +| **7** | Backup & Restore node identity | +| **8** | Health Check diagnostics | +| **9** | Uninstall | +| **a** | Advanced Stats — top 5 charts for peers, download, upload, unique IPs | +| **i** | Info & Help — multi-page guide with tracker, stats, containers, privacy, about | +| **0** | Exit | + ## Configuration Options | Option | Default | Range | Description | |--------|---------|-------|-------------| -| `max-clients` | 200 | 1-1000 | Maximum concurrent proxy clients | -| `bandwidth` | 5 | 1-40, -1 | Bandwidth limit per peer (Mbps). Use -1 for unlimited. | +| `max-clients` | 200 | 1–1000 | Maximum concurrent proxy clients per container | +| `bandwidth` | 5 | 1–40, -1 | Bandwidth limit per peer (Mbps). Use -1 for unlimited. 
| -**Recommended values based on server CPU:** +**Recommended values based on server hardware:** -| CPU Cores | Max Clients | -|-----------|-------------| -| 8+ Cores | 800 | -| 4 Cores | 400 | -| 2 Cores | 200 | -| 1 Core | 100 | +| CPU Cores | RAM | Recommended Containers | Max Clients (per container) | +|-----------|-----|------------------------|-----------------------------| +| 1 Core | < 1 GB | 1 | 100 | +| 2 Cores | 2 GB | 1–2 | 200 | +| 4 Cores | 4 GB+ | 2–3 | 400 | +| 8+ Cores | 8 GB+ | 3–5 | 800 | ## Installation Options @@ -113,25 +162,32 @@ sudo bash conduit.sh --uninstall sudo bash conduit.sh --help ``` +## Upgrading from v1.0.x + +Just run the new script. When prompted, select **"Open management menu"** — your existing container is recognized automatically. No reinstall needed. The background tracker service starts when you next start/restart from the menu. + ## Requirements - Linux server (any supported distribution) - Root/sudo access - Internet connection -- Minimum 512MB RAM (1GB+ recommended) +- Minimum 512MB RAM (1GB+ recommended for multi-container) ## How It Works -1. **Detection** - Identifies your Linux distribution and init system -2. **Docker Setup** - Installs Docker if not present -3. **Container Deployment** - Pulls and runs the official Psiphon Conduit image -5. **Auto-Start Configuration** - Sets up systemd/OpenRC/SysVinit service -6. **CLI Installation** - Creates the `conduit` management command +1. **Detection** — Identifies your Linux distribution and init system +2. **Docker Setup** — Installs Docker if not present +3. **Hardware Check** — Detects CPU/RAM and recommends container count +4. **Container Deployment** — Pulls and runs the official Psiphon Conduit image +5. **Auto-Start Configuration** — Sets up systemd/OpenRC/SysVinit service +6. **Tracker Service** — Starts background traffic tracker with GeoIP resolution +7. **CLI Installation** — Creates the `conduit` management command ## Security - **Secure Backups**: Node identity keys are stored with restricted permissions (600) - **No Telemetry**: The manager collects no data and sends nothing externally +- **Local Tracking Only**: Traffic stats are stored locally and never transmitted --- @@ -139,7 +195,7 @@ sudo bash conduit.sh --help # راهنمای فارسی - مدیریت کاندوییت -ابزار قدرتمند برای راه‌اندازی و مدیریت نود سایفون کاندوییت روی سرورهای لینوکس. +ابزار قدرتمند برای راه‌اندازی و مدیریت نود سایفون کاندوییت روی سرورهای لینوکس. به کاربران کمک کنید تا در زمان محدودیت‌های اینترنتی به اینترنت آزاد دسترسی داشته باشند. 
## نصب سریع @@ -156,24 +212,42 @@ wget https://raw.githubusercontent.com/SamNet-dev/conduit-manager/main/conduit.s sudo bash conduit.sh ``` +## تازه‌های نسخه 1.1 + +- **پشتیبانی از چند کانتینر** — اجرای تا ۵ کانتینر روی یک سرور برای ظرفیت بیشتر +- **ردیاب ترافیک پس‌زمینه** — سرویس ردیابی مداوم با آمار جغرافیایی به تفکیک کشور +- **صفحه آمار پیشرفته** — داشبورد زنده با نمودار میله‌ای برای برترین کشورها +- **داشبورد بازطراحی شده** — نمایش کلاینت‌های فعال و آپلود برتر به تفکیک کشور +- **تنظیمات هر کانتینر** — پیکربندی جداگانه حداکثر کاربران و پهنای باند +- **مدیریت کانتینرها** — اضافه یا حذف کانتینر به صورت آنی +- **نصب هوشمند** — تشخیص CPU و RAM و پیشنهاد تعداد کانتینر مناسب +- **بخش راهنما** — راهنمای چندصفحه‌ای شامل ردیاب، آمار، کانتینرها، حریم خصوصی و درباره ما +- **بازیابی خودکار سرویس** — ریستارت خودکار سرویس در صورت خرابی +- **ارتقا بدون نصب مجدد** — کاربران نسخه قبلی بدون نیاز به نصب مجدد می‌توانند آپدیت کنند + ## ویژگی‌ها -- **نصب با یک کلیک** - داکر و تمام موارد مورد نیاز به صورت خودکار نصب می‌شود -- **پشتیبانی از توزیع‌های مختلف** - اوبونتو، دبیان، سنت‌اواس، فدورا، آرچ، آلپاین -- **راه‌اندازی خودکار** - پس از ریستارت سرور، سرویس به صورت خودکار اجرا می‌شود -- **مانیتورینگ زنده** - نمایش تعداد کاربران متصل و مصرف منابع -- **مانیتورینگ ترافیک** - نمایش لحظه‌ای ترافیک بر اساس کشور با GeoIP -- **مدیریت آسان** - دستورات قدرتمند CLI یا منوی تعاملی -- **پشتیبان‌گیری و بازیابی** - پشتیبان‌گیری و بازیابی کلید هویت نود -- **بررسی سلامت** - تشخیص جامع برای عیب‌یابی -- **حذف کامل** - پاکسازی تمام فایل‌ها و تنظیمات +- **نصب با یک کلیک** — داکر و تمام موارد مورد نیاز به صورت خودکار نصب می‌شود +- **مقیاس‌پذیری چند کانتینره** — اجرای ۱ تا ۵ کانتینر برای حداکثر استفاده از سرور +- **پشتیبانی از توزیع‌های مختلف** — اوبونتو، دبیان، سنت‌اواس، فدورا، آرچ، آلپاین +- **راه‌اندازی خودکار** — پس از ریستارت سرور، سرویس به صورت خودکار اجرا می‌شود +- **داشبورد زنده** — نمایش لحظه‌ای وضعیت، تعداد کاربران، مصرف CPU و RAM +- **آمار پیشرفته** — نمودار میله‌ای برترین کشورها بر اساس اتصال، دانلود، آپلود و IP +- **مانیتورینگ ترافیک** — جدول لحظه‌ای ترافیک بر اساس کشور با سرعت و تعداد کلاینت +- **ردیاب پس‌زمینه** — سرویس ردیابی مداوم ترافیک با تشخیص جغرافیایی +- **تنظیمات هر کانتینر** — پیکربندی حداکثر کاربران و پهنای باند برای هر کانتینر +- **مدیریت آسان** — دستورات قدرتمند CLI یا منوی تعاملی +- **پشتیبان‌گیری و بازیابی** — پشتیبان‌گیری و بازیابی کلیدهای هویت نود +- **بررسی سلامت** — تشخیص جامع برای عیب‌یابی +- **راهنما و اطلاعات** — راهنمای چندصفحه‌ای داخلی +- **حذف کامل** — پاکسازی تمام فایل‌ها و تنظیمات ## دستورات CLI ### وضعیت و مانیتورینگ ```bash conduit status # نمایش وضعیت و مصرف منابع -conduit stats # آمار زنده (لحظه‌ای) +conduit stats # داشبورد زنده (لحظه‌ای) conduit logs # لاگ‌های داکر conduit health # بررسی سلامت سیستم conduit peers # ترافیک بر اساس کشور (GeoIP) @@ -181,22 +255,22 @@ conduit peers # ترافیک بر اساس کشور (GeoIP) ### مدیریت کانتینر ```bash -conduit start # شروع کانتینر -conduit stop # توقف کانتینر -conduit restart # ریستارت کانتینر +conduit start # شروع تمام کانتینرها +conduit stop # توقف تمام کانتینرها +conduit restart # ریستارت تمام کانتینرها conduit update # به‌روزرسانی به آخرین نسخه ``` ### پیکربندی ```bash -conduit settings # تغییر تنظیمات +conduit settings # تغییر تنظیمات هر کانتینر conduit menu # منوی تعاملی ``` ### پشتیبان‌گیری و بازیابی ```bash -conduit backup # پشتیبان‌گیری از کلید نود -conduit restore # بازیابی کلید نود از پشتیبان +conduit backup # پشتیبان‌گیری از کلیدهای نود +conduit restore # بازیابی کلیدهای نود از پشتیبان ``` ### نگهداری @@ -206,28 +280,65 @@ conduit version # 
نمایش نسخه conduit help # راهنما ``` +## منوی تعاملی + +| گزینه | توضیحات | +|-------|---------| +| **1** | داشبورد زنده — آمار لحظه‌ای با کلاینت‌های فعال و آپلود برتر | +| **2** | شروع / توقف / ریستارت کانتینرها | +| **3** | به‌روزرسانی ایمیج | +| **4** | ترافیک زنده — جدول ترافیک به تفکیک کشور | +| **5** | تنظیمات کانتینر — پیکربندی هر کانتینر | +| **6** | مدیریت کانتینرها — اضافه یا حذف (تا ۵) | +| **7** | پشتیبان‌گیری و بازیابی | +| **8** | بررسی سلامت | +| **9** | حذف نصب | +| **a** | آمار پیشرفته — نمودار برترین کشورها | +| **i** | راهنما — توضیحات ردیاب، آمار، کانتینرها، حریم خصوصی | +| **0** | خروج | + ## تنظیمات | گزینه | پیش‌فرض | محدوده | توضیحات | |-------|---------|--------|---------| -| `max-clients` | 200 | 1-1000 | حداکثر کاربران همزمان | -| `bandwidth` | 5 | 1-40, -1 | محدودیت پهنای باند (Mbps). برای نامحدود -1 وارد کنید. | +| `max-clients` | 200 | ۱–۱۰۰۰ | حداکثر کاربران همزمان برای هر کانتینر | +| `bandwidth` | 5 | ۱–۴۰ یا ۱- | محدودیت پهنای باند (Mbps). برای نامحدود ۱- وارد کنید. | -**مقادیر پیشنهادی بر اساس پردازنده (CPU):** +**مقادیر پیشنهادی بر اساس سخت‌افزار سرور:** -| تعداد هسته | حداکثر کاربران | -|------------|----------------| -| +8 هسته | 800 | -| 4 هسته | 400 | -| 2 هسته | 200 | -| 1 هسته | 100 | +| پردازنده | رم | کانتینر پیشنهادی | حداکثر کاربران (هر کانتینر) | +|----------|-----|-------------------|----------------------------| +| ۱ هسته | کمتر از ۱ گیگ | ۱ | ۱۰۰ | +| ۲ هسته | ۲ گیگ | ۱–۲ | ۲۰۰ | +| ۴ هسته | ۴ گیگ+ | ۲–۳ | ۴۰۰ | +| ۸+ هسته | ۸ گیگ+ | ۳–۵ | ۸۰۰ | + +## ارتقا از نسخه 1.0.x + +فقط اسکریپت جدید را اجرا کنید. وقتی سوال پرسیده شد، گزینه **«Open management menu»** را انتخاب کنید. کانتینر موجود شما به صورت خودکار شناسایی می‌شود. نیازی به نصب مجدد نیست. ## پیش‌نیازها - سرور لینوکس - دسترسی root یا sudo - اتصال اینترنت -- حداقل 512 مگابایت رم +- حداقل ۵۱۲ مگابایت رم (۱ گیگ+ برای چند کانتینر پیشنهاد می‌شود) + +## نحوه عملکرد + +1. **تشخیص** — شناسایی توزیع لینوکس و سیستم init +2. **نصب داکر** — در صورت نبود، داکر نصب می‌شود +3. **بررسی سخت‌افزار** — تشخیص CPU و RAM و پیشنهاد تعداد کانتینر +4. **راه‌اندازی کانتینر** — دانلود و اجرای ایمیج رسمی سایفون +5. **پیکربندی سرویس** — تنظیم سرویس خودکار (systemd/OpenRC/SysVinit) +6. **سرویس ردیاب** — شروع ردیاب ترافیک پس‌زمینه +7. 
**نصب CLI** — ایجاد دستور مدیریت `conduit` + +## امنیت + +- **پشتیبان‌گیری امن**: کلیدهای هویت نود با دسترسی محدود (600) ذخیره می‌شوند +- **بدون تلمتری**: هیچ داده‌ای جمع‌آوری یا ارسال نمی‌شود +- **ردیابی محلی**: آمار ترافیک فقط به صورت محلی ذخیره شده و هرگز ارسال نمی‌شود diff --git a/conduit.sh b/conduit.sh index 9875077..0a56e9b 100644 --- a/conduit.sh +++ b/conduit.sh @@ -1,12 +1,12 @@ #!/bin/bash # # ╔═══════════════════════════════════════════════════════════════════╗ -# ║ 🚀 PSIPHON CONDUIT MANAGER v1.0.2 ║ +# ║ 🚀 PSIPHON CONDUIT MANAGER v1.1 ║ # ║ ║ # ║ One-click setup for Psiphon Conduit ║ # ║ ║ # ║ • Installs Docker (if needed) ║ -# ║ • Runs Conduit in Docker with live stats ║ +# ║ • Runs Conduit in Docker with live stats # ║ • Auto-start on boot via systemd/OpenRC/SysVinit ║ # ║ • Easy management via CLI or interactive menu ║ # ║ ║ @@ -16,7 +16,7 @@ # Usage: # curl -sL https://raw.githubusercontent.com/SamNet-dev/conduit-manager/main/conduit.sh | sudo bash # -# Reference: https://github.com/ssmirr/conduit/releases/tag/d8522a8 +# Reference: https://github.com/ssmirr/conduit/releases/latest # Conduit CLI options: # -m, --max-clients int maximum number of proxy clients (1-1000) (default 200) # -b, --bandwidth float bandwidth limit per peer in Mbps (1-40, or -1 for unlimited) (default 5) @@ -25,14 +25,14 @@ set -e -# Ensure we're running in bash (not sh/dash) +# Require bash if [ -z "$BASH_VERSION" ]; then echo "Error: This script requires bash. Please run with: bash $0" exit 1 fi -VERSION="1.0.2" -CONDUIT_IMAGE="ghcr.io/ssmirr/conduit/conduit:d8522a8" +VERSION="1.1" +CONDUIT_IMAGE="ghcr.io/ssmirr/conduit/conduit:latest" INSTALL_DIR="${INSTALL_DIR:-/opt/conduit}" BACKUP_DIR="$INSTALL_DIR/backups" FORCE_REINSTALL=false @@ -43,7 +43,9 @@ GREEN='\033[0;32m' YELLOW='\033[1;33m' BLUE='\033[0;34m' CYAN='\033[0;36m' +MAGENTA='\033[0;35m' BOLD='\033[1m' +DIM='\033[2m' NC='\033[0m' #═══════════════════════════════════════════════════════════════════════ @@ -90,7 +92,7 @@ detect_os() { HAS_SYSTEMD=false PKG_MANAGER="unknown" - # Detect OS from /etc/os-release + # Detect OS if [ -f /etc/os-release ]; then . /etc/os-release OS="$ID" @@ -109,7 +111,7 @@ detect_os() { OS=$(uname -s | tr '[:upper:]' '[:lower:]') fi - # Determine OS family and package manager + # Map OS family and package manager case "$OS" in ubuntu|debian|linuxmint|pop|elementary|zorin|kali|raspbian) OS_FAMILY="debian" @@ -141,11 +143,10 @@ detect_os() { ;; esac - # Check for systemd if command -v systemctl &>/dev/null && [ -d /run/systemd/system ]; then HAS_SYSTEMD=true fi - + log_info "Detected: $OS ($OS_FAMILY family), Package manager: $PKG_MANAGER" if command -v podman &>/dev/null && ! command -v docker &>/dev/null; then @@ -160,8 +161,7 @@ install_package() { case "$PKG_MANAGER" in apt) - # Make update failure non-fatal but log it - apt-get update -q || log_warn "apt-get update failed, attempting to install regardless..." + apt-get update -q || log_warn "apt-get update failed, attempting install anyway..." if apt-get install -y -q "$package"; then log_success "$package installed successfully" else @@ -217,20 +217,17 @@ install_package() { } check_dependencies() { - # Check for bash if [ "$OS_FAMILY" = "alpine" ]; then if ! command -v bash &>/dev/null; then - log_info "Installing bash (required for this script)..." + log_info "Installing bash..." apk add --no-cache bash 2>/dev/null fi fi - # Check for curl if ! 
command -v curl &>/dev/null; then install_package curl || log_warn "Could not install curl automatically" fi - # Check for basic tools if ! command -v awk &>/dev/null; then case "$PKG_MANAGER" in apt) install_package gawk || log_warn "Could not install gawk" ;; @@ -239,7 +236,6 @@ check_dependencies() { esac fi - # Check for free command if ! command -v free &>/dev/null; then case "$PKG_MANAGER" in apt|dnf|yum) install_package procps || log_warn "Could not install procps" ;; @@ -249,7 +245,6 @@ check_dependencies() { esac fi - # Check for tput (ncurses) if ! command -v tput &>/dev/null; then case "$PKG_MANAGER" in apt) install_package ncurses-bin || log_warn "Could not install ncurses-bin" ;; @@ -258,26 +253,32 @@ check_dependencies() { esac fi - # Check for tcpdump if ! command -v tcpdump &>/dev/null; then install_package tcpdump || log_warn "Could not install tcpdump automatically" fi - # Check for GeoIP tools - if ! command -v geoiplookup &>/dev/null; then + # GeoIP (geoiplookup or mmdblookup fallback) + if ! command -v geoiplookup &>/dev/null && ! command -v mmdblookup &>/dev/null; then case "$PKG_MANAGER" in - apt) - # geoip-bin and geoip-database for newer systems + apt) install_package geoip-bin || log_warn "Could not install geoip-bin" install_package geoip-database || log_warn "Could not install geoip-database" ;; - dnf|yum) - # On RHEL/CentOS + dnf|yum) if ! rpm -q epel-release &>/dev/null; then - log_info "Enabling EPEL repository for GeoIP..." $PKG_MANAGER install -y epel-release &>/dev/null || true fi - install_package GeoIP || log_warn "Could not install GeoIP." + if ! install_package GeoIP 2>/dev/null; then + # AL2023/Fedora: fallback to libmaxminddb + log_info "Legacy GeoIP not available, trying libmaxminddb..." + install_package libmaxminddb || log_warn "Could not install libmaxminddb" + if [ ! -f /usr/share/GeoIP/GeoLite2-Country.mmdb ] && [ ! -f /var/lib/GeoIP/GeoLite2-Country.mmdb ]; then + mkdir -p /usr/share/GeoIP + local mmdb_url="https://raw.githubusercontent.com/P3TERX/GeoLite.mmdb/download/GeoLite2-Country.mmdb" + curl -sL "$mmdb_url" -o /usr/share/GeoIP/GeoLite2-Country.mmdb 2>/dev/null || \ + log_warn "Could not download GeoLite2-Country.mmdb" + fi + fi ;; pacman) install_package geoip || log_warn "Could not install geoip." ;; zypper) install_package GeoIP || log_warn "Could not install GeoIP." ;; @@ -285,18 +286,18 @@ check_dependencies() { *) log_warn "Could not install geoiplookup automatically" ;; esac fi + + if ! 
command -v qrencode &>/dev/null; then + install_package qrencode || log_warn "Could not install qrencode automatically" + fi } get_ram_mb() { - # Get RAM in MB local ram="" - - # Try free command first if command -v free &>/dev/null; then ram=$(free -m 2>/dev/null | awk '/^Mem:/{print $2}') fi - # Fallback: parse /proc/meminfo if [ -z "$ram" ] || [ "$ram" = "0" ]; then if [ -f /proc/meminfo ]; then local kb=$(awk '/^MemTotal:/{print $2}' /proc/meminfo 2>/dev/null) @@ -306,7 +307,6 @@ get_ram_mb() { fi fi - # Ensure minimum of 1 if [ -z "$ram" ] || [ "$ram" -lt 1 ] 2>/dev/null; then echo 1 else @@ -322,7 +322,6 @@ get_cpu_cores() { cores=$(grep -c ^processor /proc/cpuinfo) fi - # Safety check if [ -z "$cores" ] || [ "$cores" -lt 1 ] 2>/dev/null; then echo 1 else @@ -332,7 +331,6 @@ get_cpu_cores() { calculate_recommended_clients() { local cores=$(get_cpu_cores) - # Logic: 100 clients per CPU core, max 1000 local recommended=$((cores * 100)) if [ "$recommended" -gt 1000 ]; then echo 1000 @@ -346,6 +344,7 @@ calculate_recommended_clients() { #═══════════════════════════════════════════════════════════════════════ prompt_settings() { + while true; do local ram_mb=$(get_ram_mb) local cpu_cores=$(get_cpu_cores) local recommended=$(calculate_recommended_clients) @@ -370,7 +369,6 @@ prompt_settings() { echo -e " ${YELLOW}--bandwidth${NC} Bandwidth per peer in Mbps (1-40, or -1 for unlimited)" echo "" - # Max clients prompt echo -e "${CYAN}───────────────────────────────────────────────────────────────${NC}" echo -e " Enter max-clients (1-1000)" echo -e " Press Enter for recommended: ${GREEN}${recommended}${NC}" @@ -388,7 +386,6 @@ prompt_settings() { echo "" - # Bandwidth prompt echo -e "${CYAN}───────────────────────────────────────────────────────────────${NC}" echo -e " Do you want to set ${BOLD}UNLIMITED${NC} bandwidth? (Recommended for servers)" echo -e " ${YELLOW}Note: High bandwidth usage may attract attention.${NC}" @@ -424,6 +421,45 @@ prompt_settings() { fi fi + echo "" + + # Detect CPU cores and RAM for recommendation + local cpu_cores=$(nproc 2>/dev/null || grep -c ^processor /proc/cpuinfo 2>/dev/null || echo 1) + local ram_mb=$(awk '/MemTotal/{printf "%.0f", $2/1024}' /proc/meminfo 2>/dev/null || echo 512) + local rec_containers=2 + if [ "$cpu_cores" -le 1 ] || [ "$ram_mb" -lt 1024 ]; then + rec_containers=1 + elif [ "$cpu_cores" -ge 4 ] && [ "$ram_mb" -ge 4096 ]; then + rec_containers=3 + fi + + echo -e "${CYAN}───────────────────────────────────────────────────────────────${NC}" + echo -e " How many Conduit containers to run? (1-5)" + echo -e " More containers = more connections served" + echo "" + echo -e " ${DIM}System: ${cpu_cores} CPU core(s), ${ram_mb}MB RAM${NC}" + if [ "$cpu_cores" -le 1 ] || [ "$ram_mb" -lt 1024 ]; then + echo -e " ${YELLOW}⚠ Low-end system detected. 
Recommended: 1 container.${NC}" + echo -e " ${YELLOW} Multiple containers may cause high CPU and instability.${NC}" + elif [ "$cpu_cores" -le 2 ]; then + echo -e " ${DIM}Recommended: 1-2 containers for this system.${NC}" + else + echo -e " ${DIM}Recommended: up to ${rec_containers} containers for this system.${NC}" + fi + echo "" + echo -e " Press Enter for default: ${GREEN}${rec_containers}${NC}" + echo -e "${CYAN}───────────────────────────────────────────────────────────────${NC}" + read -p " containers: " input_containers < /dev/tty || true + + if [ -z "$input_containers" ]; then + CONTAINER_COUNT=$rec_containers + elif [[ "$input_containers" =~ ^[1-5]$ ]]; then + CONTAINER_COUNT=$input_containers + else + log_warn "Invalid input. Using default: ${rec_containers}" + CONTAINER_COUNT=$rec_containers + fi + echo "" echo -e "${CYAN}───────────────────────────────────────────────────────────────${NC}" echo -e " ${BOLD}Your Settings:${NC}" @@ -433,13 +469,16 @@ prompt_settings() { else echo -e " Bandwidth: ${GREEN}${BANDWIDTH}${NC} Mbps" fi + echo -e " Containers: ${GREEN}${CONTAINER_COUNT}${NC}" echo -e "${CYAN}───────────────────────────────────────────────────────────────${NC}" echo "" - + read -p " Proceed with these settings? [Y/n] " confirm < /dev/tty || true if [[ "$confirm" =~ ^[Nn] ]]; then - prompt_settings + continue fi + break + done } #═══════════════════════════════════════════════════════════════════════ @@ -454,32 +493,30 @@ install_docker() { log_info "Installing Docker..." - # Check OS family for specific requirements if [ "$OS_FAMILY" = "rhel" ]; then - log_info "Installing RHEL-specific Docker dependencies..." + log_info "Adding Docker repo for RHEL..." $PKG_MANAGER install -y -q dnf-plugins-core 2>/dev/null || true dnf config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo 2>/dev/null || true fi - # Alpine if [ "$OS_FAMILY" = "alpine" ]; then - apk add --no-cache docker docker-cli-compose 2>/dev/null + if ! apk add --no-cache docker docker-cli-compose 2>/dev/null; then + log_error "Failed to install Docker on Alpine" + return 1 + fi rc-update add docker boot 2>/dev/null || true service docker start 2>/dev/null || rc-service docker start 2>/dev/null || true else - # Use official Docker install if ! curl -fsSL https://get.docker.com | sh; then log_error "Official Docker installation script failed." log_info "Try installing docker manually: https://docs.docker.com/engine/install/" return 1 fi - # Enable and start Docker if [ "$HAS_SYSTEMD" = "true" ]; then systemctl enable docker 2>/dev/null || true systemctl start docker 2>/dev/null || true else - # Fallback for non-systemd (SysVinit, OpenRC, etc.) if command -v update-rc.d &>/dev/null; then update-rc.d docker defaults 2>/dev/null || true elif command -v chkconfig &>/dev/null; then @@ -491,7 +528,6 @@ install_docker() { fi fi - # Wait for Docker to be ready sleep 3 local retries=27 while ! 
docker info &>/dev/null && [ $retries -gt 0 ]; do @@ -508,39 +544,23 @@ install_docker() { } -#═══════════════════════════════════════════════════════════════════════ -# check_and_offer_backup_restore() - Check for existing backup keys -#═══════════════════════════════════════════════════════════════════════ -# Backup location: /opt/conduit/backups/ -# Key file format: conduit_key_YYYYMMDD_HHMMSS.json -# -# Returns: -# 0 - Backup was restored (or none existed) -# 1 - User declined restore (fresh install) -#═══════════════════════════════════════════════════════════════════════ +# Check for backup keys and offer restore during install check_and_offer_backup_restore() { - if [ ! -d "$BACKUP_DIR" ]; then - return 0 + return 0 fi - # Find the most recent backup file local latest_backup=$(ls -t "$BACKUP_DIR"/conduit_key_*.json 2>/dev/null | head -1) if [ -z "$latest_backup" ]; then - return 0 + return 0 fi - # Extract timestamp from filename for display local backup_filename=$(basename "$latest_backup") local backup_date=$(echo "$backup_filename" | sed -E 's/conduit_key_([0-9]{8})_([0-9]{6})\.json/\1/') local backup_time=$(echo "$backup_filename" | sed -E 's/conduit_key_([0-9]{8})_([0-9]{6})\.json/\2/') - - # Format date for display (YYYYMMDD -> YYYY-MM-DD) local formatted_date="${backup_date:0:4}-${backup_date:4:2}-${backup_date:6:2}" local formatted_time="${backup_time:0:2}:${backup_time:2:2}:${backup_time:4:2}" - - # Prompt user about restoring the backup echo "" echo -e "${CYAN}═══════════════════════════════════════════════════════════════════${NC}" echo -e "${CYAN} 📁 PREVIOUS NODE IDENTITY BACKUP FOUND${NC}" @@ -564,12 +584,26 @@ check_and_offer_backup_restore() { echo "" log_info "Restoring node identity from backup..." - # Ensure the Docker volume exists docker volume create conduit-data 2>/dev/null || true - docker run --rm -v conduit-data:/home/conduit/data -v "$BACKUP_DIR":/backup alpine \ - sh -c "cp /backup/$backup_filename /home/conduit/data/conduit_key.json && chown -R 1000:1000 /home/conduit/data" - if [ $? -eq 0 ]; then + # Try bind-mount, fall back to docker cp (Snap Docker compatibility) + local restore_ok=false + if docker run --rm -v conduit-data:/home/conduit/data -v "$BACKUP_DIR":/backup alpine \ + sh -c "cp /backup/$backup_filename /home/conduit/data/conduit_key.json && chown -R 1000:1000 /home/conduit/data" 2>/dev/null; then + restore_ok=true + else + log_info "Bind-mount failed (Snap Docker?), trying docker cp..." + local tmp_ctr="conduit-restore-tmp" + docker create --name "$tmp_ctr" -v conduit-data:/home/conduit/data alpine true 2>/dev/null || true + if docker cp "$latest_backup" "$tmp_ctr:/home/conduit/data/conduit_key.json" 2>/dev/null; then + docker run --rm -v conduit-data:/home/conduit/data alpine \ + chown -R 1000:1000 /home/conduit/data 2>/dev/null || true + restore_ok=true + fi + docker rm -f "$tmp_ctr" 2>/dev/null || true + fi + + if [ "$restore_ok" = "true" ]; then log_success "Node identity restored successfully!" echo "" return 0 @@ -586,52 +620,49 @@ check_and_offer_backup_restore() { fi } -# run_conduit() - Pull image, verify digest, and start container run_conduit() { - log_info "Starting Conduit container..." + local count=${CONTAINER_COUNT:-1} + log_info "Starting Conduit ($count container(s))..." 
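+    # Naming scheme used by the loop below (and mirrored by the generated
+    # conduit CLI helpers): instance 1 keeps the original names
+    # "conduit" / "conduit-data"; extra instances become "conduit-2".."conduit-5"
+    # with volumes "conduit-data-2".."conduit-data-5". Each one shares the host
+    # network and runs the same image, e.g. instance 2 is started roughly as:
+    #   docker run -d --name conduit-2 --restart unless-stopped \
+    #     -v conduit-data-2:/home/conduit/data --network host \
+    #     "$CONDUIT_IMAGE" start --max-clients "$MAX_CLIENTS" --bandwidth "$BANDWIDTH" --stats-file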
- # Check for existing conduit containers (any image containing conduit) - local existing=$(docker ps -a --filter "ancestor=ghcr.io/ssmirr/conduit/conduit" --format "{{.Names}}") - if [ -n "$existing" ] && [ "$existing" != "conduit" ]; then - log_warn "Detected other Conduit containers: $existing" - log_warn "Running multiple instances may cause port conflicts." - fi - - # Stop and remove any existing container - docker rm -f conduit 2>/dev/null || true - - # Pull the official Conduit image from GitHub Container Registry log_info "Pulling Conduit image ($CONDUIT_IMAGE)..." - if ! docker pull $CONDUIT_IMAGE; then + if ! docker pull "$CONDUIT_IMAGE"; then log_error "Failed to pull Conduit image. Check your internet connection." exit 1 fi + for i in $(seq 1 $count); do + local cname="conduit" + local vname="conduit-data" + [ "$i" -gt 1 ] && cname="conduit-${i}" && vname="conduit-data-${i}" - # Ensure volume exists and has correct permissions for the conduit user (uid 1000) - docker volume create conduit-data 2>/dev/null || true - docker run --rm -v conduit-data:/home/conduit/data alpine \ - sh -c "chown -R 1000:1000 /home/conduit/data" 2>/dev/null || true + docker rm -f "$cname" 2>/dev/null || true - # Start the Conduit container - docker run -d \ - --name conduit \ - --restart unless-stopped \ - -v conduit-data:/home/conduit/data \ - --network host \ - $CONDUIT_IMAGE \ - start --max-clients "$MAX_CLIENTS" --bandwidth "$BANDWIDTH" --stats-file + # Ensure volume exists with correct permissions (uid 1000) + docker volume create "$vname" 2>/dev/null || true + docker run --rm -v "${vname}:/home/conduit/data" alpine \ + sh -c "chown -R 1000:1000 /home/conduit/data" 2>/dev/null || true - # Wait for container to initialize - sleep 3 + docker run -d \ + --name "$cname" \ + --restart unless-stopped \ + -v "${vname}:/home/conduit/data" \ + --network host \ + "$CONDUIT_IMAGE" \ + start --max-clients "$MAX_CLIENTS" --bandwidth "$BANDWIDTH" --stats-file - # Verify container is running - if docker ps | grep -q conduit; then - log_success "Conduit container is running" - if [ "$BANDWIDTH" == "-1" ]; then - log_success "Settings: max-clients=$MAX_CLIENTS, bandwidth=Unlimited" + if [ $? -eq 0 ]; then + log_success "$cname started" else - log_success "Settings: max-clients=$MAX_CLIENTS, bandwidth=${BANDWIDTH}Mbps" + log_error "Failed to start $cname" + fi + done + + sleep 3 + if docker ps | grep -q conduit; then + if [ "$BANDWIDTH" == "-1" ]; then + log_success "Settings: max-clients=$MAX_CLIENTS, bandwidth=Unlimited, containers=$count" + else + log_success "Settings: max-clients=$MAX_CLIENTS, bandwidth=${BANDWIDTH}Mbps, containers=$count" fi else log_error "Conduit failed to start" @@ -640,20 +671,26 @@ run_conduit() { fi } -save_settings() { +save_settings_install() { mkdir -p "$INSTALL_DIR" - - # Save settings cat > "$INSTALL_DIR/settings.conf" << EOF MAX_CLIENTS=$MAX_CLIENTS BANDWIDTH=$BANDWIDTH +CONTAINER_COUNT=${CONTAINER_COUNT:-1} +DATA_CAP_GB=0 +DATA_CAP_IFACE= +DATA_CAP_BASELINE_RX=0 +DATA_CAP_BASELINE_TX=0 +DATA_CAP_PRIOR_USAGE=0 EOF - + + chmod 600 "$INSTALL_DIR/settings.conf" 2>/dev/null || true + if [ ! -f "$INSTALL_DIR/settings.conf" ]; then log_error "Failed to save settings. Check disk space and permissions." return 1 fi - + log_success "Settings saved" } @@ -661,8 +698,6 @@ setup_autostart() { log_info "Setting up auto-start on boot..." 
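+    # Boot services now delegate to the installed conduit management command
+    # instead of starting a single docker container directly, so every
+    # configured container comes up together. For example, the systemd unit
+    # below uses:
+    #   ExecStart=/usr/local/bin/conduit start
+    #   ExecStop=/usr/local/bin/conduit stop
+    # The OpenRC and SysVinit scripts call the same wrapper.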
if [ "$HAS_SYSTEMD" = "true" ]; then - # Systemd-based systems - local docker_path=$(command -v docker) cat > /etc/systemd/system/conduit.service << EOF [Unit] Description=Psiphon Conduit Service @@ -672,14 +707,14 @@ Requires=docker.service [Service] Type=oneshot RemainAfterExit=yes -ExecStart=$docker_path start conduit -ExecStop=$docker_path stop conduit +ExecStart=/usr/local/bin/conduit start +ExecStop=/usr/local/bin/conduit stop [Install] WantedBy=multi-user.target EOF - systemctl daemon-reload + systemctl daemon-reload 2>/dev/null || true systemctl enable conduit.service 2>/dev/null || true systemctl start conduit.service 2>/dev/null || true log_success "Systemd service created, enabled, and started" @@ -697,12 +732,12 @@ depend() { } start() { ebegin "Starting Conduit" - docker start conduit + /usr/local/bin/conduit start eend $? } stop() { ebegin "Stopping Conduit" - docker stop conduit + /usr/local/bin/conduit stop eend $? } EOF @@ -725,13 +760,13 @@ EOF case "$1" in start) - docker start conduit + /usr/local/bin/conduit start ;; stop) - docker stop conduit + /usr/local/bin/conduit stop ;; restart) - docker restart conduit + /usr/local/bin/conduit restart ;; status) docker ps | grep -q conduit && echo "Running" || echo "Stopped" @@ -766,26 +801,34 @@ create_management_script() { #!/bin/bash # # Psiphon Conduit Manager -# Reference: https://github.com/ssmirr/conduit/releases/tag/d8522a8 +# Reference: https://github.com/ssmirr/conduit/releases/latest # -VERSION="1.0.2" +VERSION="1.1" INSTALL_DIR="REPLACE_ME_INSTALL_DIR" BACKUP_DIR="$INSTALL_DIR/backups" -CONDUIT_IMAGE="ghcr.io/ssmirr/conduit/conduit:d8522a8" +CONDUIT_IMAGE="ghcr.io/ssmirr/conduit/conduit:latest" # Colors RED='\033[0;31m' GREEN='\033[0;32m' YELLOW='\033[1;33m' CYAN='\033[0;36m' +MAGENTA='\033[0;35m' BOLD='\033[1m' +DIM='\033[2m' NC='\033[0m' # Load settings [ -f "$INSTALL_DIR/settings.conf" ] && source "$INSTALL_DIR/settings.conf" MAX_CLIENTS=${MAX_CLIENTS:-200} BANDWIDTH=${BANDWIDTH:-5} +CONTAINER_COUNT=${CONTAINER_COUNT:-1} +DATA_CAP_GB=${DATA_CAP_GB:-0} +DATA_CAP_IFACE=${DATA_CAP_IFACE:-} +DATA_CAP_BASELINE_RX=${DATA_CAP_BASELINE_RX:-0} +DATA_CAP_BASELINE_TX=${DATA_CAP_BASELINE_TX:-0} +DATA_CAP_PRIOR_USAGE=${DATA_CAP_PRIOR_USAGE:-0} # Ensure we're running as root if [ "$EUID" -ne 0 ]; then @@ -825,21 +868,71 @@ if ! command -v awk &>/dev/null; then echo -e "${YELLOW}Warning: awk not found. 
Some stats may not display correctly.${NC}" fi +# Helper: Get container name by index (1-based) +get_container_name() { + local idx=${1:-1} + if [ "$idx" -eq 1 ]; then + echo "conduit" + else + echo "conduit-${idx}" + fi +} + +# Helper: Get volume name by index (1-based) +get_volume_name() { + local idx=${1:-1} + if [ "$idx" -eq 1 ]; then + echo "conduit-data" + else + echo "conduit-data-${idx}" + fi +} + # Helper: Fix volume permissions for conduit user (uid 1000) fix_volume_permissions() { - docker run --rm -v conduit-data:/home/conduit/data alpine \ - sh -c "chown -R 1000:1000 /home/conduit/data" 2>/dev/null || true + local idx=${1:-0} + if [ "$idx" -eq 0 ]; then + # Fix all volumes + for i in $(seq 1 $CONTAINER_COUNT); do + local vol=$(get_volume_name $i) + docker run --rm -v "${vol}:/home/conduit/data" alpine \ + sh -c "chown -R 1000:1000 /home/conduit/data" 2>/dev/null || true + done + else + local vol=$(get_volume_name $idx) + docker run --rm -v "${vol}:/home/conduit/data" alpine \ + sh -c "chown -R 1000:1000 /home/conduit/data" 2>/dev/null || true + fi } # Helper: Start/recreate conduit container with current settings +get_container_max_clients() { + local idx=${1:-1} + local var="MAX_CLIENTS_${idx}" + local val="${!var}" + echo "${val:-$MAX_CLIENTS}" +} + +get_container_bandwidth() { + local idx=${1:-1} + local var="BANDWIDTH_${idx}" + local val="${!var}" + echo "${val:-$BANDWIDTH}" +} + run_conduit_container() { + local idx=${1:-1} + local name=$(get_container_name $idx) + local vol=$(get_volume_name $idx) + local mc=$(get_container_max_clients $idx) + local bw=$(get_container_bandwidth $idx) docker run -d \ - --name conduit \ + --name "$name" \ --restart unless-stopped \ - -v conduit-data:/home/conduit/data \ + -v "${vol}:/home/conduit/data" \ --network host \ - $CONDUIT_IMAGE \ - start --max-clients "$MAX_CLIENTS" --bandwidth "$BANDWIDTH" --stats-file + "$CONDUIT_IMAGE" \ + start --max-clients "$mc" --bandwidth "$bw" --stats-file } print_header() { @@ -853,15 +946,35 @@ print_header() { print_live_stats_header() { local EL="\033[K" echo -e "${CYAN}╔═══════════════════════════════════════════════════════════════════╗${EL}" - echo -e "║ CONDUIT LIVE STATISTICS ║${EL}" + printf "║ ${NC}🚀 PSIPHON CONDUIT MANAGER v%-5s ${CYAN}CONDUIT LIVE STATISTICS ║${EL}\n" "${VERSION}" echo -e "╠═══════════════════════════════════════════════════════════════════╣${EL}" - printf "║ Max Clients: ${GREEN}%-52s${CYAN}║${EL}\n" "${MAX_CLIENTS}" - if [ "$BANDWIDTH" == "-1" ]; then - printf "║ Bandwidth: ${GREEN}%-52s${CYAN}║${EL}\n" "Unlimited" + # Check for per-container overrides + local has_overrides=false + for i in $(seq 1 $CONTAINER_COUNT); do + local mc_var="MAX_CLIENTS_${i}" + local bw_var="BANDWIDTH_${i}" + if [ -n "${!mc_var}" ] || [ -n "${!bw_var}" ]; then + has_overrides=true + break + fi + done + if [ "$has_overrides" = true ] && [ "$CONTAINER_COUNT" -gt 1 ]; then + for i in $(seq 1 $CONTAINER_COUNT); do + local mc=$(get_container_max_clients $i) + local bw=$(get_container_bandwidth $i) + local bw_d="Unlimited" + [ "$bw" != "-1" ] && bw_d="${bw}Mbps" + local line="$(get_container_name $i): ${mc} clients, ${bw_d}" + printf "║ ${GREEN}%-64s${CYAN}║${EL}\n" "$line" + done else - printf "║ Bandwidth: ${GREEN}%-52s${CYAN}║${EL}\n" "${BANDWIDTH} Mbps" + printf "║ Max Clients: ${GREEN}%-52s${CYAN}║${EL}\n" "${MAX_CLIENTS}" + if [ "$BANDWIDTH" == "-1" ]; then + printf "║ Bandwidth: ${GREEN}%-52s${CYAN}║${EL}\n" "Unlimited" + else + printf "║ Bandwidth: ${GREEN}%-52s${CYAN}║${EL}\n" "${BANDWIDTH} 
Mbps" + fi fi - echo -e "║ ║${EL}" echo -e "╚═══════════════════════════════════════════════════════════════════╝${EL}" echo -e "${NC}\033[K" } @@ -869,16 +982,101 @@ print_live_stats_header() { get_node_id() { - if docker volume inspect conduit-data >/dev/null 2>&1; then - local mountpoint=$(docker volume inspect conduit-data --format '{{ .Mountpoint }}') - if [ -f "$mountpoint/conduit_key.json" ]; then - # Extract privateKeyBase64, decode, take last 32 bytes, encode base64 - # Logic provided by user - cat "$mountpoint/conduit_key.json" | grep "privateKeyBase64" | awk -F'"' '{print $4}' | base64 -d 2>/dev/null | tail -c 32 | base64 | tr -d '=\n' + local vol="${1:-conduit-data}" + if docker volume inspect "$vol" >/dev/null 2>&1; then + local mountpoint=$(docker volume inspect "$vol" --format '{{ .Mountpoint }}' 2>/dev/null) + local key_json="" + if [ -n "$mountpoint" ] && [ -f "$mountpoint/conduit_key.json" ]; then + key_json=$(cat "$mountpoint/conduit_key.json" 2>/dev/null) + else + local tmp_ctr="conduit-nodeid-tmp" + docker rm -f "$tmp_ctr" 2>/dev/null || true + docker create --name "$tmp_ctr" -v "$vol":/data alpine true 2>/dev/null || true + key_json=$(docker cp "$tmp_ctr:/data/conduit_key.json" - 2>/dev/null | tar -xO 2>/dev/null) + docker rm -f "$tmp_ctr" 2>/dev/null || true + fi + if [ -n "$key_json" ]; then + echo "$key_json" | grep "privateKeyBase64" | awk -F'"' '{print $4}' | base64 -d 2>/dev/null | tail -c 32 | base64 | tr -d '=\n' fi fi } +get_raw_key() { + local vol="${1:-conduit-data}" + if docker volume inspect "$vol" >/dev/null 2>&1; then + local mountpoint=$(docker volume inspect "$vol" --format '{{ .Mountpoint }}' 2>/dev/null) + local key_json="" + if [ -n "$mountpoint" ] && [ -f "$mountpoint/conduit_key.json" ]; then + key_json=$(cat "$mountpoint/conduit_key.json" 2>/dev/null) + else + local tmp_ctr="conduit-rawkey-tmp" + docker rm -f "$tmp_ctr" 2>/dev/null || true + docker create --name "$tmp_ctr" -v "$vol":/data alpine true 2>/dev/null || true + key_json=$(docker cp "$tmp_ctr:/data/conduit_key.json" - 2>/dev/null | tar -xO 2>/dev/null) + docker rm -f "$tmp_ctr" 2>/dev/null || true + fi + if [ -n "$key_json" ]; then + echo "$key_json" | grep "privateKeyBase64" | awk -F'"' '{print $4}' + fi + fi +} + +show_qr_code() { + local idx="${1:-}" + # If multiple containers and no index specified, prompt + if [ -z "$idx" ] && [ "$CONTAINER_COUNT" -gt 1 ]; then + echo "" + echo -e "${CYAN}═══ SELECT CONTAINER ═══${NC}" + for ci in $(seq 1 $CONTAINER_COUNT); do + local cname=$(get_container_name $ci) + echo -e " ${ci}. ${cname}" + done + echo "" + read -p " Which container? (1-${CONTAINER_COUNT}): " idx < /dev/tty || true + if ! 
[[ "$idx" =~ ^[1-5]$ ]] || [ "$idx" -gt "$CONTAINER_COUNT" ]; then + echo -e "${RED} Invalid selection.${NC}" + return + fi + fi + [ -z "$idx" ] && idx=1 + local vol=$(get_volume_name $idx) + local cname=$(get_container_name $idx) + + clear + local node_id=$(get_node_id "$vol") + local raw_key=$(get_raw_key "$vol") + echo "" + echo -e "${CYAN}╔═══════════════════════════════════════════════════════════════════╗${NC}" + echo -e "${CYAN}║ CONDUIT ID & QR CODE ║${NC}" + echo -e "${CYAN}╠═══════════════════════════════════════════════════════════════════╣${NC}" + if [ "$CONTAINER_COUNT" -gt 1 ]; then + printf "${CYAN}║${NC} Container: ${BOLD}%-52s${CYAN}║${NC}\n" "$cname" + fi + if [ -n "$node_id" ]; then + printf "${CYAN}║${NC} Conduit ID: ${GREEN}%-52s${CYAN}║${NC}\n" "$node_id" + else + printf "${CYAN}║${NC} Conduit ID: ${YELLOW}%-52s${CYAN}║${NC}\n" "Not available (start container first)" + fi + echo -e "${CYAN}╚═══════════════════════════════════════════════════════════════════╝${NC}" + echo "" + if [ -n "$raw_key" ] && command -v qrencode &>/dev/null; then + local hostname_str=$(hostname 2>/dev/null || echo "conduit") + local claim_json="{\"version\":1,\"data\":{\"key\":\"${raw_key}\",\"name\":\"${hostname_str}\"}}" + local claim_b64=$(echo -n "$claim_json" | base64 | tr -d '\n') + local claim_url="network.ryve.app://(app)/conduits?claim=${claim_b64}" + echo -e "${BOLD} Scan to claim rewards:${NC}" + echo "" + qrencode -t ANSIUTF8 "$claim_url" 2>/dev/null + elif ! command -v qrencode &>/dev/null; then + echo -e "${YELLOW} qrencode not installed. Install with: sudo apt install qrencode${NC}" + echo -e " ${CYAN}Claim rewards at: https://network.ryve.app${NC}" + else + echo -e "${YELLOW} Key not available. Start container first.${NC}" + fi + echo "" + read -n 1 -s -r -p " Press any key to return..." < /dev/tty || true +} + show_dashboard() { local stop_dashboard=0 # Setup trap to catch signals gracefully @@ -901,11 +1099,109 @@ show_dashboard() { show_status "live" - # Show Node ID in its own section - local node_id=$(get_node_id) - if [ -n "$node_id" ]; then - echo -e "${CYAN}═══ CONDUIT ID ═══${NC}\033[K" - echo -e " ${CYAN}${node_id}${NC}\033[K" + # Check data cap + if [ "$DATA_CAP_GB" -gt 0 ] 2>/dev/null; then + local usage=$(get_data_usage) + local used_rx=$(echo "$usage" | awk '{print $1}') + local used_tx=$(echo "$usage" | awk '{print $2}') + local total_used=$((used_rx + used_tx + ${DATA_CAP_PRIOR_USAGE:-0})) + local cap_gb_fmt=$(format_gb $total_used) + echo -e "${CYAN}═══ DATA USAGE ═══${NC}\033[K" + echo -e " Usage: ${YELLOW}${cap_gb_fmt} GB${NC} / ${GREEN}${DATA_CAP_GB} GB${NC}\033[K" + if ! 
check_data_cap; then + echo -e " ${RED}⚠ DATA CAP EXCEEDED - Containers stopped!${NC}\033[K" + fi + echo -e "\033[K" + fi + + # Side-by-side: Active Clients | Top Upload + local snap_file="$INSTALL_DIR/traffic_stats/tracker_snapshot" + local data_file="$INSTALL_DIR/traffic_stats/cumulative_data" + if [ -s "$snap_file" ] || [ -s "$data_file" ]; then + # Get actual total connected clients from docker logs + local dash_clients=0 + local dash_ps=$(docker ps --format '{{.Names}}' 2>/dev/null) + for ci in $(seq 1 $CONTAINER_COUNT); do + local cname=$(get_container_name $ci) + if echo "$dash_ps" | grep -q "^${cname}$"; then + local lg=$(docker logs --tail 50 "$cname" 2>&1 | grep "\[STATS\]" | tail -1) + local cn=$(echo "$lg" | sed -n 's/.*Connected:[[:space:]]*\([0-9]*\).*/\1/p') + [[ "$cn" =~ ^[0-9]+$ ]] && dash_clients=$((dash_clients + cn)) + fi + done + + # Left column: Active Clients per country (estimated from snapshot distribution) + local left_lines=() + if [ -s "$snap_file" ] && [ "$dash_clients" -gt 0 ]; then + local snap_data + snap_data=$(awk -F'|' '{if($2!=""&&$4!="") seen[$2"|"$4]=1} END{for(k in seen){split(k,a,"|");c[a[1]]++} for(co in c) print c[co]"|"co}' "$snap_file" 2>/dev/null | sort -t'|' -k1 -nr | head -5) + local snap_total=0 + if [ -n "$snap_data" ]; then + while IFS='|' read -r cnt co; do + snap_total=$((snap_total + cnt)) + done <<< "$snap_data" + fi + [ "$snap_total" -eq 0 ] && snap_total=1 + if [ -n "$snap_data" ]; then + while IFS='|' read -r cnt country; do + [ -z "$country" ] && continue + country="${country%% - #*}" + local est=$(( (cnt * dash_clients) / snap_total )) + [ "$est" -eq 0 ] && [ "$cnt" -gt 0 ] && est=1 + local pct=$((est * 100 / dash_clients)) + [ "$pct" -gt 100 ] && pct=100 + local bl=$((pct / 20)); [ "$bl" -lt 1 ] && bl=1; [ "$bl" -gt 5 ] && bl=5 + local bf=""; local bp=""; for ((bi=0; bi0) print $3"|"$1}' "$data_file" 2>/dev/null | sort -t'|' -k1 -nr | head -5) + local total_upload=0 + if [ -n "$top5_upload" ]; then + while IFS='|' read -r bytes co; do + total_upload=$((total_upload + bytes)) + done < <(awk -F'|' '{if($1!="" && $3+0>0) print $3"|"$1}' "$data_file" 2>/dev/null) + fi + [ "$total_upload" -eq 0 ] && total_upload=1 + if [ -n "$top5_upload" ]; then + while IFS='|' read -r bytes country; do + [ -z "$country" ] && continue + country="${country%% - #*}" + local pct=$((bytes * 100 / total_upload)) + local bl=$((pct / 20)); [ "$bl" -lt 1 ] && bl=1; [ "$bl" -gt 5 ] && bl=5 + local bf=""; local bp=""; for ((bi=0; bi /dev/tty 2>/dev/null; then + if read -t 4 -n 1 -s < /dev/tty 2>/dev/null; then stop_dashboard=1 fi done @@ -931,14 +1227,53 @@ show_dashboard() { } get_container_stats() { - # Get CPU and RAM usage for conduit container + # Get CPU and RAM usage across all conduit containers # Returns: "CPU_PERCENT RAM_USAGE" - local stats=$(docker stats --no-stream --format "{{.CPUPerc}} {{.MemUsage}}" conduit 2>/dev/null) - if [ -z "$stats" ]; then - echo "0% 0MiB" + if [ "$CONTAINER_COUNT" -le 1 ]; then + local stats=$(docker stats --no-stream --format "{{.CPUPerc}} {{.MemUsage}}" conduit 2>/dev/null) + if [ -z "$stats" ]; then + echo "0% 0MiB" + else + echo "$stats" + fi else - # Extract just the raw numbers/units, simpler format - echo "$stats" + # Aggregate stats across all containers + local total_cpu=0 + local total_mem=0 + local mem_limit="" + local any_found=false + for i in $(seq 1 $CONTAINER_COUNT); do + local name=$(get_container_name $i) + local stats=$(docker stats --no-stream --format "{{.CPUPerc}} {{.MemUsage}}" "$name" 2>/dev/null) 
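+            # "docker stats" output here typically looks like
+            # "3.25% 140.1MiB / 1.944GiB": $1 = CPU percent, $2 = memory used
+            # by this container, $4 = the host memory limit (same for all
+            # containers). CPU is summed as-is; memory values are normalised
+            # to MiB before summing (GiB handled below; this assumes Docker's
+            # usual MiB/GiB units).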
+ if [ -n "$stats" ]; then + any_found=true + local cpu=$(echo "$stats" | awk '{print $1}' | tr -d '%') + total_cpu=$(awk -v a="$total_cpu" -v b="$cpu" 'BEGIN{printf "%.2f", a+b}') + # Parse mem: "123.4MiB / 1.5GiB" + local mem_used=$(echo "$stats" | awk '{print $2}') + if [ -z "$mem_limit" ]; then + mem_limit=$(echo "$stats" | awk '{print $4}') + fi + # Convert to MiB for summing + local mem_val=$(echo "$mem_used" | sed 's/[^0-9.]//g') + if echo "$mem_used" | grep -qi "gib"; then + mem_val=$(awk -v v="$mem_val" 'BEGIN{printf "%.1f", v*1024}') + fi + total_mem=$(awk -v a="$total_mem" -v b="$mem_val" 'BEGIN{printf "%.1f", a+b}') + fi + done + if [ "$any_found" = true ]; then + # Format mem back + local mem_display + if [ "$(echo "$total_mem" | awk '{print ($1 >= 1024)}')" = "1" ]; then + mem_display=$(awk -v m="$total_mem" 'BEGIN{printf "%.2fGiB", m/1024}') + else + mem_display="${total_mem}MiB" + fi + echo "${total_cpu}% ${mem_display} / ${mem_limit}" + else + echo "0% 0MiB" + fi fi } @@ -1003,8 +1338,16 @@ get_system_stats() { } show_live_stats() { - # Check if container is running first - if ! docker ps 2>/dev/null | grep -q "[[:space:]]conduit$"; then + # Check if any container is running + local any_running=false + for i in $(seq 1 $CONTAINER_COUNT); do + local cname=$(get_container_name $i) + if docker ps 2>/dev/null | grep -q "[[:space:]]${cname}$"; then + any_running=true + break + fi + done + if [ "$any_running" = false ]; then print_header echo -e "${RED}Conduit is not running!${NC}" echo "Start it first with option 6 or 'conduit start'" @@ -1012,23 +1355,47 @@ show_live_stats() { return 1 fi - echo -e "${CYAN}Streaming live statistics... Press Ctrl+C to return to menu${NC}" - echo -e "${YELLOW}(showing live logs filtered for [STATS])${NC}" - echo "" - - # Trap Ctrl+C to allow handled exit from the log stream - trap 'echo -e "\n${CYAN}Returning to menu...${NC}"; return' SIGINT - - # Stream logs and filter for [STATS] - # We check if grep supports --line-buffered for smoother output, fallback to standard grep - if grep --help 2>&1 | grep -q -- --line-buffered; then - docker logs -f --tail 20 conduit 2>&1 | grep --line-buffered "\[STATS\]" + if [ "$CONTAINER_COUNT" -le 1 ]; then + # Single container - stream directly + echo -e "${CYAN}Streaming live statistics... Press Ctrl+C to return to menu${NC}" + echo -e "${YELLOW}(showing live logs filtered for [STATS])${NC}" + echo "" + trap 'echo -e "\n${CYAN}Returning to menu...${NC}"; return' SIGINT + if grep --help 2>&1 | grep -q -- --line-buffered; then + docker logs -f --tail 20 conduit 2>&1 | grep --line-buffered "\[STATS\]" + else + docker logs -f --tail 20 conduit 2>&1 | grep "\[STATS\]" + fi + trap - SIGINT else - docker logs -f --tail 20 conduit 2>&1 | grep "\[STATS\]" + # Multi container - show container picker + echo "" + echo -e "${CYAN}Select container to view live stats:${NC}" + echo "" + for i in $(seq 1 $CONTAINER_COUNT); do + local cname=$(get_container_name $i) + local status="${RED}Stopped${NC}" + docker ps 2>/dev/null | grep -q "[[:space:]]${cname}$" && status="${GREEN}Running${NC}" + echo -e " ${i}. ${cname} [${status}]" + done + echo "" + read -p " Select (1-${CONTAINER_COUNT}): " idx < /dev/tty || true + if ! [[ "$idx" =~ ^[0-9]+$ ]] || [ "$idx" -lt 1 ] || [ "$idx" -gt "$CONTAINER_COUNT" ]; then + echo -e "${RED}Invalid selection.${NC}" + return 1 + fi + local target=$(get_container_name $idx) + echo "" + echo -e "${CYAN}Streaming live statistics from ${target}... 
Press Ctrl+C to return${NC}" + echo "" + trap 'echo -e "\n${CYAN}Returning to menu...${NC}"; return' SIGINT + if grep --help 2>&1 | grep -q -- --line-buffered; then + docker logs -f --tail 20 "$target" 2>&1 | grep --line-buffered "\[STATS\]" + else + docker logs -f --tail 20 "$target" 2>&1 | grep "\[STATS\]" + fi + trap - SIGINT fi - - # Reset trap - trap - SIGINT } # format_bytes() - Convert bytes to human-readable format (B, KB, MB, GB) @@ -1056,474 +1423,658 @@ format_bytes() { fi } -# show_peers() - Live peer traffic by country using tcpdump + GeoIP -show_peers() { - # Flag to control the main loop - set to 1 on user interrupt - local stop_peers=0 - trap 'stop_peers=1' SIGINT SIGTERM - - # Verify required dependencies are installed - if ! command -v tcpdump &>/dev/null || ! command -v geoiplookup &>/dev/null; then - echo -e "${RED}Error: tcpdump or geoiplookup not found!${NC}" - echo "Please re-run the main installer to fix dependencies." - read -n 1 -s -r -p "Press any key to return..." < /dev/tty || true - return 1 +# Background tracker helper +is_tracker_active() { + if command -v systemctl &>/dev/null; then + systemctl is-active conduit-tracker.service &>/dev/null + return $? fi + # Fallback: check if tracker process is running + pgrep -f "conduit-tracker.sh" &>/dev/null + return $? +} - # Network interface detection - # Use "any" to capture on all interfaces - local iface="any" +# Generate the background tracker script +regenerate_tracker_script() { + local tracker_script="$INSTALL_DIR/conduit-tracker.sh" + local persist_dir="$INSTALL_DIR/traffic_stats" + mkdir -p "$INSTALL_DIR" "$persist_dir" - # Detect local IP address to determine traffic direction - # Method 1: Query the route to a public IP (most reliable) - # Method 2: Fallback to hostname -I - local local_ip=$(ip route get 1.1.1.1 2>/dev/null | awk '{print $7}') - [ -z "$local_ip" ] && local_ip=$(hostname -I | awk '{print $1}') + cat > "$tracker_script" << 'TRACKER_SCRIPT' +#!/bin/bash +# Psiphon Conduit Background Tracker +set -u - # Clean temporary working files (per-cycle data only) - rm -f /tmp/conduit_peers_current /tmp/conduit_peers_raw - rm -f /tmp/conduit_traffic_from /tmp/conduit_traffic_to - touch /tmp/conduit_traffic_from /tmp/conduit_traffic_to +INSTALL_DIR="/opt/conduit" +PERSIST_DIR="/opt/conduit/traffic_stats" +mkdir -p "$PERSIST_DIR" +STATS_FILE="$PERSIST_DIR/cumulative_data" +IPS_FILE="$PERSIST_DIR/cumulative_ips" +SNAPSHOT_FILE="$PERSIST_DIR/tracker_snapshot" +C_START_FILE="$PERSIST_DIR/container_start" +GEOIP_CACHE="$PERSIST_DIR/geoip_cache" - # Persistent data directory - survives across option 9 sessions - local persist_dir="/opt/conduit/traffic_stats" - mkdir -p "$persist_dir" +# Detect local IPs +get_local_ips() { + ip -4 addr show 2>/dev/null | awk '/inet /{split($2,a,"/"); print a[1]}' | tr '\n' '|' + echo "" +} - # Get container start time to detect restarts - local container_start=$(docker inspect --format='{{.State.StartedAt}}' conduit 2>/dev/null | cut -d'.' -f1) - local stored_start="" - [ -f "$persist_dir/container_start" ] && stored_start=$(cat "$persist_dir/container_start") - - # If container was restarted, reset all cumulative data - if [ "$container_start" != "$stored_start" ]; then - echo "$container_start" > "$persist_dir/container_start" - rm -f "$persist_dir/cumulative_data" "$persist_dir/cumulative_ips" "$persist_dir/session_start" - fi - - # Cumulative data files persist until Conduit restarts - # Format: Country|TotalFrom|TotalTo (bytes received from / sent to) - [ ! 
-f "$persist_dir/cumulative_data" ] && touch "$persist_dir/cumulative_data" - # Format: Country|IP (one line per unique IP seen) - [ ! -f "$persist_dir/cumulative_ips" ] && touch "$persist_dir/cumulative_ips" - - # Session start time - when we first started tracking (persists until Conduit restart) - if [ ! -f "$persist_dir/session_start" ]; then - date +%s > "$persist_dir/session_start" - fi - local session_start=$(cat "$persist_dir/session_start") - - # Enter alternate screen buffer (preserves terminal history) - tput smcup 2>/dev/null || true - # Hide cursor for cleaner display - echo -ne "\033[?25l" - - #═══════════════════════════════════════════════════════════════════ - # Main display loop - runs until user presses a key - #═══════════════════════════════════════════════════════════════════ - while [ $stop_peers -eq 0 ]; do - # Clear screen completely and move to top-left - clear - printf "\033[H" - - #─────────────────────────────────────────────────────────────── - # Header Section - Compact title bar with live status indicator - # Shows: Title, session duration, and [LIVE - last 15s] indicator - #─────────────────────────────────────────────────────────────── - # Calculate how long this view session has been running - local now=$(date +%s) - local duration=$((now - session_start)) - local dur_min=$((duration / 60)) - local dur_sec=$((duration % 60)) - local duration_str=$(printf "%02d:%02d" $dur_min $dur_sec) - - echo -e "${CYAN}╔═══════════════════════════════════════════════════════════════════╗${NC}" - echo -e "║ LIVE PEER TRAFFIC BY COUNTRY ║" - echo -e "${CYAN}╠═══════════════════════════════════════════════════════════════════╣${NC}" - if [ -f /tmp/conduit_peers_current ]; then - # Data is available - show last update time - local update_time=$(date '+%H:%M:%S') - echo -e "║ Last Update: ${update_time} ${GREEN}[LIVE]${NC} ║" - else - # Waiting for first data capture - echo -e "║ Status: ${YELLOW}Initializing...${NC} ║" +# GeoIP lookup with file-based cache +geo_lookup() { + local ip="$1" + # Check cache + if [ -f "$GEOIP_CACHE" ]; then + local cached=$(grep "^${ip}|" "$GEOIP_CACHE" 2>/dev/null | head -1 | cut -d'|' -f2) + if [ -n "$cached" ]; then + echo "$cached" + return fi - echo -e "${CYAN}╚═══════════════════════════════════════════════════════════════════╝${NC}" - echo -e "" - - #─────────────────────────────────────────────────────────────── - # Data Tables - Display TOP 10 countries by traffic volume - # - # "TRAFFIC FROM" = Data received from that country (incoming) - # These are peers connecting TO your Conduit node - # "TRAFFIC TO" = Data sent to that country (outgoing) - # This is data your node sends back to peers - # - # Columns explained: - # Total = Cumulative bytes since this view started - # Speed = Current transfer rate (from last 15-second window) - # IPs = Unique IP addresses (Total seen / Currently active) - # - # Colors: GREEN = incoming traffic, YELLOW = outgoing traffic - # #FreeIran = RED (solidarity highlight) - #─────────────────────────────────────────────────────────────── - if [ -s /tmp/conduit_traffic_from ]; then - # Section 1: Top 10 countries by incoming traffic (data FROM them) - # This shows which countries have peers connecting to your node - echo -e "${GREEN}${BOLD} 📥 TOP 10 TRAFFIC FROM (peers connecting to you)${NC}" - echo -e " ─────────────────────────────────────────────────────────────────────────" - printf " ${BOLD}%-26s${NC} ${GREEN}${BOLD}%10s %12s${NC} %-12s\n" "Country" "Total" "Speed" "IPs (all/now)" - echo -e " 
─────────────────────────────────────────────────────────────────────────" - # Read top 10 entries from incoming-traffic-sorted file - head -10 /tmp/conduit_traffic_from | while read -r line; do - # Parse pipe-delimited fields: Country|TotalFrom|TotalTo|SpeedFrom|SpeedTo|TotalIPs|ActiveIPs - local country=$(echo "$line" | cut -d'|' -f1) - local from_bytes=$(echo "$line" | cut -d'|' -f2) - local from_speed=$(echo "$line" | cut -d'|' -f4) - local total_ips=$(echo "$line" | cut -d'|' -f6) - local active_ips=$(echo "$line" | cut -d'|' -f7) - # Format bytes to human-readable (KB/MB/GB) - local from_fmt=$(format_bytes "$from_bytes") - local from_spd_fmt=$(format_bytes "$from_speed")/s - # Format IP counts - handle empty values - [ -z "$total_ips" ] && total_ips="0" - [ -z "$active_ips" ] && active_ips="0" - local ip_display="${total_ips}/${active_ips}" - # Print row: CYAN country, GREEN values (Total/Speed right-aligned, IPs left-aligned) - printf " ${CYAN}%-26s${NC} ${GREEN}${BOLD}%10s %12s${NC} %-12s\n" "$country" "$from_fmt" "$from_spd_fmt" "$ip_display" - done - echo "" - - # Section 2: Top 10 countries by outgoing traffic (data TO them) - # This shows which countries you're sending the most data to - echo -e "${YELLOW}${BOLD} 📤 TOP 10 TRAFFIC TO (data sent to peers)${NC}" - echo -e " ─────────────────────────────────────────────────────────────────────────" - printf " ${BOLD}%-26s${NC} ${YELLOW}${BOLD}%10s %12s${NC} %-12s\n" "Country" "Total" "Speed" "IPs (all/now)" - echo -e " ─────────────────────────────────────────────────────────────────────────" - # Read top 10 entries from outgoing-traffic-sorted file - head -10 /tmp/conduit_traffic_to | while read -r line; do - # Parse pipe-delimited fields: Country|TotalFrom|TotalTo|SpeedFrom|SpeedTo|TotalIPs|ActiveIPs - local country=$(echo "$line" | cut -d'|' -f1) - local to_bytes=$(echo "$line" | cut -d'|' -f3) - local to_speed=$(echo "$line" | cut -d'|' -f5) - local total_ips=$(echo "$line" | cut -d'|' -f6) - local active_ips=$(echo "$line" | cut -d'|' -f7) - # Format bytes to human-readable (KB/MB/GB) - local to_fmt=$(format_bytes "$to_bytes") - local to_spd_fmt=$(format_bytes "$to_speed")/s - # Format IP counts - handle empty values - [ -z "$total_ips" ] && total_ips="0" - [ -z "$active_ips" ] && active_ips="0" - local ip_display="${total_ips}/${active_ips}" - # Print row: CYAN country, YELLOW values (Total/Speed right-aligned, IPs left-aligned) - printf " ${CYAN}%-26s${NC} ${YELLOW}${BOLD}%10s %12s${NC} %-12s\n" "$country" "$to_fmt" "$to_spd_fmt" "$ip_display" - done - else - # No data yet - show waiting message with padding - echo -e " ${YELLOW}Waiting for first snapshot... (High traffic helps speed this up)${NC}" - for i in {1..20}; do echo ""; done - fi - - echo -e "" - echo -e "${CYAN}════════════════════════════════════════════════════════════════════════════${NC}" - - #═══════════════════════════════════════════════════════════════════ - # Background Traffic Capture - #═══════════════════════════════════════════════════════════════════ - # Uses tcpdump to capture live network packets for 15 seconds - # tcpdump flags: - # -n : Don't resolve hostnames (faster) - # -i : Interface to capture on ("any" = all interfaces) - # -q : Quiet output (less verbose) - # - # The captured output is piped to awk which: - # 1. Extracts source and destination IP addresses - # 2. Extracts packet length from each line - # 3. Filters out private/local IP ranges (RFC 1918) - # 4. Determines traffic direction (from vs to) - # 5. 
Aggregates bytes per IP address - # 6. Outputs: IP|bytes_from_remote|bytes_to_remote - # - # Traffic direction naming (from your server's perspective): - # "from" = bytes received FROM remote IP (remote -> local) - # "to" = bytes sent TO remote IP (local -> remote) - #═══════════════════════════════════════════════════════════════════ - # Wrap pipeline in subshell so $! captures the whole pipeline PID, not just awk - # This ensures the progress indicator runs for the full 15-second capture - ( - timeout 15 tcpdump -ni $iface -q '(tcp or udp)' 2>/dev/null | \ - awk -v local_ip="$local_ip" ' - # Portable awk script - works with mawk, gawk, and busybox awk - /IP/ { - # Parse tcpdump output to extract IPs and packet length - # Example format: "IP 192.168.1.1.443 > 8.8.8.8.12345: TCP, length 1460" - # Or: "IP 10.0.0.1.22 > 203.0.113.5.54321: UDP, length 64" - - src = "" - dst = "" - len = 0 - - # Find the field containing "IP" and extract source/dest - for (i = 1; i <= NF; i++) { - if ($i == "IP") { - # Next field is source IP.port - src_field = $(i+1) - # Field after ">" is dest IP.port - for (j = i+2; j <= NF; j++) { - if ($(j-1) == ">") { - dst_field = $j - # Remove trailing colon if present - gsub(/:$/, "", dst_field) - break - } - } - break - } - } - - # Extract IP from IP.port format (remove last .port segment) - # Example: 192.168.1.1.443 -> 192.168.1.1 - if (src_field != "") { - n = split(src_field, parts, ".") - if (n >= 4) { - src = parts[1] "." parts[2] "." parts[3] "." parts[4] - } - } - if (dst_field != "") { - n = split(dst_field, parts, ".") - if (n >= 4) { - dst = parts[1] "." parts[2] "." parts[3] "." parts[4] - } - } - - # Extract packet length - look for "length N" pattern - for (i = 1; i <= NF; i++) { - if ($i == "length") { - len = $(i+1) + 0 - break - } - } - # Fallback: use last numeric field if no "length" found - if (len == 0) { - for (i = NF; i > 0; i--) { - if ($i ~ /^[0-9]+$/) { - len = $i + 0 - break - } - } - } - - # Skip if we could not parse IPs - if (src == "" && dst == "") next - - # Filter out private/reserved IP ranges (RFC 1918 + others) - # 10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16, 127.0.0.0/8, - # 0.0.0.0/8, 169.254.0.0/16 (link-local) - if (src ~ /^(10\.|172\.(1[6-9]|2[0-9]|3[01])\.|192\.168\.|127\.|0\.|169\.254\.)/) src = "" - if (dst ~ /^(10\.|172\.(1[6-9]|2[0-9]|3[01])\.|192\.168\.|127\.|0\.|169\.254\.)/) dst = "" - - # Determine traffic direction based on local IP - # "traffic_from" = bytes coming FROM remote (incoming to your server) - # "traffic_to" = bytes going TO remote (outgoing from your server) - if (src == local_ip && dst != "" && dst != local_ip) { - # Outgoing: packet going FROM local TO remote - traffic_to[dst] += len - ips[dst] = 1 - } else if (dst == local_ip && src != "" && src != local_ip) { - # Incoming: packet coming FROM remote TO local - traffic_from[src] += len - ips[src] = 1 - } else if (src != "" && src != local_ip) { - # Fallback: non-local source = incoming traffic - traffic_from[src] += len - ips[src] = 1 - } else if (dst != "" && dst != local_ip) { - # Fallback: non-local destination = outgoing traffic - traffic_to[dst] += len - ips[dst] = 1 - } - } - END { - # Output aggregated data: IP|bytes_from|bytes_to - for (ip in ips) { - from_bytes = traffic_from[ip] + 0 # Default to 0 if undefined - to_bytes = traffic_to[ip] + 0 - print ip "|" from_bytes "|" to_bytes - } - }' > /tmp/conduit_peers_raw - ) 2>/dev/null & - - # Store subshell PID for cleanup if user exits early - local tcpdump_pid=$! 
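# A minimal, standalone sketch of the capture-and-aggregate step described in
# the comments above: sample traffic for one window, keep only public peer IPs,
# and sum bytes per remote address. The "any" interface, the 15-second window,
# and the output path are illustrative assumptions, not fixed values.
capture_peer_bytes() {
    local local_ip="$1" out="${2:-/tmp/peer_bytes}"
    timeout 15 tcpdump -ni any -q '(tcp or udp)' 2>/dev/null | \
    awk -v local_ip="$local_ip" '
    /IP/ {
        src = dst = sf = df = ""; len = 0
        for (i = 1; i <= NF; i++) {
            if ($i == "IP")     sf = $(i+1)
            if ($i == ">")      { df = $(i+1); gsub(/:$/, "", df) }
            if ($i == "length") len = $(i+1) + 0
        }
        # fallback when -q output has no "length" keyword: take the last number
        if (len == 0) { for (i = NF; i > 0; i--) if ($i ~ /^[0-9]+$/) { len = $i + 0; break } }
        # strip the trailing .port from "a.b.c.d.port"
        n = split(sf, p, "."); if (n >= 4) src = p[1] "." p[2] "." p[3] "." p[4]
        n = split(df, p, "."); if (n >= 4) dst = p[1] "." p[2] "." p[3] "." p[4]
        # drop private/reserved ranges so only real peers remain
        if (src ~ /^(10\.|172\.(1[6-9]|2[0-9]|3[01])\.|192\.168\.|127\.|169\.254\.)/) src = ""
        if (dst ~ /^(10\.|172\.(1[6-9]|2[0-9]|3[01])\.|192\.168\.|127\.|169\.254\.)/) dst = ""
        if (dst == local_ip && src != "" && src != local_ip)      { from[src] += len; seen[src] = 1 }
        else if (src == local_ip && dst != "" && dst != local_ip) { to[dst]  += len; seen[dst] = 1 }
    }
    END { for (ip in seen) print ip "|" from[ip] + 0 "|" to[ip] + 0 }
    ' > "$out"
}
# Example output line in /tmp/peer_bytes: 203.0.113.5|48213|3921
# (remote IP, bytes received from it, bytes sent to it), which the display
# loop can then resolve to a country with geoiplookup.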
- - #─────────────────────────────────────────────────────────────── - # Progress Indicator Loop - runs for exactly 15 seconds - # Shows animated dots while tcpdump captures data - # Checks for user keypress every second to allow early exit - #─────────────────────────────────────────────────────────────── - local count=0 - while [ $count -lt 15 ]; do - if read -t 1 -n 1 -s <> /dev/tty 2>/dev/null; then - stop_peers=1 - kill $tcpdump_pid 2>/dev/null - break - fi - count=$((count + 1)) - echo -ne "\r [${YELLOW}" - for ((i=0; i/dev/null; then + country=$(geoiplookup "$ip" 2>/dev/null | awk -F: '/Country Edition/{print $2}' | sed 's/^ *//' | cut -d, -f2- | sed 's/^ *//') + elif command -v mmdblookup &>/dev/null; then + local mmdb="" + for f in /usr/share/GeoIP/GeoLite2-Country.mmdb /var/lib/GeoIP/GeoLite2-Country.mmdb; do + [ -f "$f" ] && mmdb="$f" && break done + if [ -n "$mmdb" ]; then + country=$(mmdblookup --file "$mmdb" --ip "$ip" country names en 2>/dev/null | grep -o '"[^"]*"' | tr -d '"') + fi + fi + [ -z "$country" ] && country="Unknown" + # Cache it (limit cache size) + if [ -f "$GEOIP_CACHE" ]; then + local cache_lines=$(wc -l < "$GEOIP_CACHE" 2>/dev/null || echo 0) + if [ "$cache_lines" -gt 10000 ]; then + tail -5000 "$GEOIP_CACHE" > "$GEOIP_CACHE.tmp" && mv "$GEOIP_CACHE.tmp" "$GEOIP_CACHE" + fi + fi + echo "${ip}|${country}" >> "$GEOIP_CACHE" + echo "$country" +} - # Wait for tcpdump to finish (should already be done after 15s) - wait $tcpdump_pid 2>/dev/null +# Check for container restart — reset data if restarted +container_start=$(docker inspect --format='{{.State.StartedAt}}' conduit 2>/dev/null | cut -d'.' -f1) +stored_start="" +[ -f "$C_START_FILE" ] && stored_start=$(cat "$C_START_FILE" 2>/dev/null) +if [ "$container_start" != "$stored_start" ]; then + echo "$container_start" > "$C_START_FILE" + rm -f "$STATS_FILE" "$IPS_FILE" "$GEOIP_CACHE" "$SNAPSHOT_FILE" +fi +touch "$STATS_FILE" "$IPS_FILE" - # Exit loop if user requested stop - if [ $stop_peers -eq 1 ]; then break; fi +# Detect tcpdump and awk paths +TCPDUMP_BIN=$(command -v tcpdump 2>/dev/null || echo "tcpdump") +AWK_BIN=$(command -v gawk 2>/dev/null || command -v awk 2>/dev/null || echo "awk") - #═══════════════════════════════════════════════════════════════════ - # GeoIP Resolution and Country Aggregation (Cumulative) - #═══════════════════════════════════════════════════════════════════ - # Process the raw IP data: - # 1. Read each IP with its from/to bytes from this cycle - # 2. Resolve IP to country using geoiplookup - # 3. Add to cumulative totals (persisted in temp file) - # 4. Track unique IPs per country (cumulative and active) - # 5. Calculate bandwidth speed (bytes per second from 15s window) - # 6. Create sorted output files for display - # - # Traffic direction naming: - # "from" = bytes received FROM remote IP (incoming to your server) - # "to" = bytes sent TO remote IP (outgoing from your server) - #═══════════════════════════════════════════════════════════════════ - if [ -s /tmp/conduit_peers_raw ]; then - # Associative arrays for this capture cycle - MUST unset first! 
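# Quick illustration of the pitfall called out here: re-running `declare -A`
# on an array that already exists does NOT empty it, so without the explicit
# `unset` each 15-second cycle would keep the previous cycle's byte counts.
# Hypothetical names, shown only to make the behaviour concrete:
demo_cycle_reset() {
    for cycle in 1 2; do
        unset counts              # drop this line and cycle 2 prints 10, not 5
        declare -A counts
        counts[IR]=$(( ${counts[IR]:-0} + 5 ))
        echo "cycle ${cycle}: IR=${counts[IR]} bytes"
    done
}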
- # In bash, 'declare -A' does NOT clear existing arrays, causing accumulation bug - unset cycle_from cycle_to cycle_ips ip_to_country - declare -A cycle_from # Bytes received FROM each country this cycle - declare -A cycle_to # Bytes sent TO each country this cycle - declare -A cycle_ips # IPs seen this cycle per country (for active count) - declare -A ip_to_country # Map IP -> country for deduplication +# Detect local IP +LOCAL_IP=$(ip route get 1.1.1.1 2>/dev/null | awk '{for(i=1;i<=NF;i++) if($i=="src") print $(i+1)}') +[ -z "$LOCAL_IP" ] && LOCAL_IP=$(hostname -I 2>/dev/null | awk '{print $1}') - # Process each IP from the raw capture data - # Raw format: IP|bytes_from|bytes_to - while IFS='|' read -r ip from_bytes to_bytes; do - [ -z "$ip" ] && continue +# Main capture loop: tcpdump -> awk -> process +while true; do + # Process tcpdump output in awk, sync every 15 seconds + while IFS= read -r line; do + if [ "$line" = "SYNC_MARKER" ]; then + continue + fi + # Parse: direction|IP|bytes + local_dir=$(echo "$line" | cut -d'|' -f1) + local_ip_addr=$(echo "$line" | cut -d'|' -f2) + local_bytes=$(echo "$line" | cut -d'|' -f3) + [ -z "$local_ip_addr" ] && continue - # Resolve IP to country using GeoIP database - local country_info=$(geoiplookup "$ip" 2>/dev/null | awk -F: '/Country Edition/{print $2}' | sed 's/^ //') - [ -z "$country_info" ] && country_info="Unknown" + # Resolve country + country=$(geo_lookup "$local_ip_addr") - # Normalize certain country names for display - country_info=$(echo "$country_info" | sed 's/Iran, Islamic Republic of/Iran - #FreeIran/' | sed 's/Moldova, Republic of/Moldova/') + # Normalize country names + case "$country" in + *"Iran, Islamic Republic of"*) country="Iran - #FreeIran" ;; + *"Moldova, Republic of"*) country="Moldova" ;; + esac - # Store IP to country mapping for later - ip_to_country["$ip"]="$country_info" - - # Aggregate this cycle's traffic by country - cycle_from["$country_info"]=$((${cycle_from["$country_info"]:-0} + from_bytes)) - cycle_to["$country_info"]=$((${cycle_to["$country_info"]:-0} + to_bytes)) - - # Track active IPs this cycle (append IP to country's IP list) - cycle_ips["$country_info"]="${cycle_ips["$country_info"]} $ip" - done < /tmp/conduit_peers_raw - - # Load existing cumulative traffic data from persistent storage - unset cumul_from cumul_to - declare -A cumul_from - declare -A cumul_to - if [ -s "$persist_dir/cumulative_data" ]; then - while IFS='|' read -r country cfrom cto; do - [ -z "$country" ] && continue - cumul_from["$country"]=$cfrom - cumul_to["$country"]=$cto - done < "$persist_dir/cumulative_data" + # Update cumulative data + if [ -f "$STATS_FILE" ]; then + existing=$(grep "^${country}|" "$STATS_FILE" 2>/dev/null | head -1) + if [ -n "$existing" ]; then + old_from=$(echo "$existing" | cut -d'|' -f2) + old_to=$(echo "$existing" | cut -d'|' -f3) + if [ "$local_dir" = "FROM" ]; then + new_from=$((old_from + local_bytes)) + new_to=$old_to + else + new_from=$old_from + new_to=$((old_to + local_bytes)) + fi + # Update in place using temp file + grep -v "^${country}|" "$STATS_FILE" > "$STATS_FILE.tmp" 2>/dev/null || true + echo "${country}|${new_from}|${new_to}" >> "$STATS_FILE.tmp" + mv "$STATS_FILE.tmp" "$STATS_FILE" + else + if [ "$local_dir" = "FROM" ]; then + echo "${country}|${local_bytes}|0" >> "$STATS_FILE" + else + echo "${country}|0|${local_bytes}" >> "$STATS_FILE" + fi fi + fi - # Add this cycle's traffic to cumulative totals - for country in "${!cycle_from[@]}"; do - 
cumul_from["$country"]=$((${cumul_from["$country"]:-0} + ${cycle_from["$country"]})) - cumul_to["$country"]=$((${cumul_to["$country"]:-0} + ${cycle_to["$country"]})) - done + # Update cumulative IPs + if ! grep -q "^${country}|${local_ip_addr}$" "$IPS_FILE" 2>/dev/null; then + echo "${country}|${local_ip_addr}" >> "$IPS_FILE" + fi - # Save updated cumulative traffic data to persistent storage - > "$persist_dir/cumulative_data" - for country in "${!cumul_from[@]}"; do - echo "${country}|${cumul_from[$country]}|${cumul_to[$country]}" >> "$persist_dir/cumulative_data" - done + # Write snapshot for speed calculation + echo "${local_dir}|${country}|${local_bytes}|${local_ip_addr}" >> "$SNAPSHOT_FILE" - # Update cumulative IP tracking (add new IPs seen this cycle) - for ip in "${!ip_to_country[@]}"; do - local country="${ip_to_country[$ip]}" - # Check if this IP|Country combo already exists - if ! grep -q "^${country}|${ip}$" "$persist_dir/cumulative_ips" 2>/dev/null; then - echo "${country}|${ip}" >> "$persist_dir/cumulative_ips" + done < <($TCPDUMP_BIN -tt -l -ni any -n -q "(tcp or udp) and not port 22" 2>/dev/null | $AWK_BIN -v local_ip="$LOCAL_IP" ' + BEGIN { last_sync = 0 } + { + # Parse timestamp + ts = $1 + 0 + if (ts == 0) next + + # Find IP keyword and extract src/dst + src = ""; dst = "" + for (i = 1; i <= NF; i++) { + if ($i == "IP") { + sf = $(i+1) + for (j = i+2; j <= NF; j++) { + if ($(j-1) == ">") { + df = $j + gsub(/:$/, "", df) + break + } + } + break + } + } + # Extract IP from IP.port + if (sf != "") { n=split(sf,p,"."); if(n>=4) src=p[1]"."p[2]"."p[3]"."p[4] } + if (df != "") { n=split(df,p,"."); if(n>=4) dst=p[1]"."p[2]"."p[3]"."p[4] } + + # Get length + len = 0 + for (i=1; i<=NF; i++) { if ($i=="length") { len=$(i+1)+0; break } } + if (len==0) { for (i=NF; i>0; i--) { if ($i ~ /^[0-9]+$/) { len=$i+0; break } } } + + # Skip private IPs + if (src ~ /^(10\.|172\.(1[6-9]|2[0-9]|3[01])\.|192\.168\.|127\.|0\.|169\.254\.)/) src="" + if (dst ~ /^(10\.|172\.(1[6-9]|2[0-9]|3[01])\.|192\.168\.|127\.|0\.|169\.254\.)/) dst="" + + # Determine direction + if (src == local_ip && dst != "" && dst != local_ip) { + to[dst] += len + } else if (dst == local_ip && src != "" && src != local_ip) { + from[src] += len + } else if (src != "" && src != local_ip) { + from[src] += len + } else if (dst != "" && dst != local_ip) { + to[dst] += len + } + + # Sync every 15 seconds + if (last_sync == 0) last_sync = ts + if (ts - last_sync >= 15) { + for (ip in from) { if (from[ip] > 0) print "FROM|" ip "|" from[ip] } + for (ip in to) { if (to[ip] > 0) print "TO|" ip "|" to[ip] } + print "SYNC_MARKER" + delete from; delete to; last_sync = ts; fflush() + } + }') + + # If tcpdump exits, wait and retry + sleep 5 +done +TRACKER_SCRIPT + + chmod +x "$tracker_script" +} + +# Setup tracker systemd service +setup_tracker_service() { + regenerate_tracker_script + + if command -v systemctl &>/dev/null; then + cat > /etc/systemd/system/conduit-tracker.service << EOF +[Unit] +Description=Conduit Traffic Tracker +After=network.target docker.service +Requires=docker.service + +[Service] +Type=simple +ExecStart=/bin/bash $INSTALL_DIR/conduit-tracker.sh +Restart=on-failure +RestartSec=5 + +[Install] +WantedBy=multi-user.target +EOF + systemctl daemon-reload 2>/dev/null || true + systemctl enable conduit-tracker.service 2>/dev/null || true + systemctl restart conduit-tracker.service 2>/dev/null || true + fi +} + +# Stop tracker service +stop_tracker_service() { + if command -v systemctl &>/dev/null; then + systemctl stop 
conduit-tracker.service 2>/dev/null || true + else + pkill -f "conduit-tracker.sh" 2>/dev/null || true + fi +} + +# Advanced Statistics page with 15-second soft refresh +show_advanced_stats() { + local persist_dir="$INSTALL_DIR/traffic_stats" + local exit_stats=0 + trap 'exit_stats=1' SIGINT SIGTERM + + local L="══════════════════════════════════════════════════════════════" + local D="──────────────────────────────────────────────────────────────" + + # Enter alternate screen buffer + tput smcup 2>/dev/null || true + echo -ne "\033[?25l" + printf "\033[2J\033[H" + + local cycle_start=$(date +%s) + local last_refresh=0 + + while [ "$exit_stats" -eq 0 ]; do + local now=$(date +%s) + local term_height=$(tput lines 2>/dev/null || echo 24) + + local cycle_elapsed=$(( (now - cycle_start) % 15 )) + local time_until_next=$((15 - cycle_elapsed)) + + # Build progress bar + local bar="" + for ((i=0; i/dev/null) + local container_count=0 + local total_cpu=0 total_conn=0 + local total_up_bytes=0 total_down_bytes=0 + local total_mem_mib=0 first_mem_limit="" + + echo -e "${CYAN}║${NC} ${GREEN}CONTAINER${NC} ${DIM}|${NC} ${YELLOW}NETWORK${NC} ${DIM}|${NC} ${MAGENTA}TRACKER${NC}\033[K" + for ci in $(seq 1 $CONTAINER_COUNT); do + local cname=$(get_container_name $ci) + if echo "$docker_ps_cache" | grep -q "^${cname}$"; then + container_count=$((container_count + 1)) + + local stats=$(docker stats --no-stream --format "{{.CPUPerc}}|{{.MemUsage}}" "$cname" 2>/dev/null) + local cpu=$(echo "$stats" | cut -d'|' -f1 | tr -d '%') + [[ "$cpu" =~ ^[0-9.]+$ ]] && total_cpu=$(awk -v a="$total_cpu" -v b="$cpu" 'BEGIN{printf "%.2f", a+b}') + + local cmem_str=$(echo "$stats" | cut -d'|' -f2 | awk '{print $1}') + local cmem_val=$(echo "$cmem_str" | sed 's/[^0-9.]//g') + local cmem_unit=$(echo "$cmem_str" | sed 's/[0-9.]//g') + if [[ "$cmem_val" =~ ^[0-9.]+$ ]]; then + case "$cmem_unit" in + GiB) cmem_val=$(awk -v v="$cmem_val" 'BEGIN{printf "%.2f", v*1024}') ;; + KiB) cmem_val=$(awk -v v="$cmem_val" 'BEGIN{printf "%.2f", v/1024}') ;; + esac + total_mem_mib=$(awk -v a="$total_mem_mib" -v b="$cmem_val" 'BEGIN{printf "%.2f", a+b}') + fi + [ -z "$first_mem_limit" ] && first_mem_limit=$(echo "$stats" | cut -d'|' -f2 | awk -F'/' '{print $2}' | xargs) + + local logs=$(docker logs --tail 50 "$cname" 2>&1 | grep "\[STATS\]" | tail -1) + local conn=$(echo "$logs" | sed -n 's/.*Connected:[[:space:]]*\([0-9]*\).*/\1/p') + [[ "$conn" =~ ^[0-9]+$ ]] && total_conn=$((total_conn + conn)) + + # Parse upload/download to bytes + local up_raw=$(echo "$logs" | sed -n 's/.*Up:[[:space:]]*\([^|]*\).*/\1/p' | xargs) + local down_raw=$(echo "$logs" | sed -n 's/.*Down:[[:space:]]*\([^|]*\).*/\1/p' | xargs) + if [ -n "$up_raw" ]; then + local up_val=$(echo "$up_raw" | sed 's/[^0-9.]//g') + local up_unit=$(echo "$up_raw" | sed 's/[0-9. ]//g') + if [[ "$up_val" =~ ^[0-9.]+$ ]]; then + case "$up_unit" in + GB) total_up_bytes=$(awk -v a="$total_up_bytes" -v v="$up_val" 'BEGIN{printf "%.0f", a+v*1073741824}') ;; + MB) total_up_bytes=$(awk -v a="$total_up_bytes" -v v="$up_val" 'BEGIN{printf "%.0f", a+v*1048576}') ;; + KB) total_up_bytes=$(awk -v a="$total_up_bytes" -v v="$up_val" 'BEGIN{printf "%.0f", a+v*1024}') ;; + B) total_up_bytes=$(awk -v a="$total_up_bytes" -v v="$up_val" 'BEGIN{printf "%.0f", a+v}') ;; + esac + fi + fi + if [ -n "$down_raw" ]; then + local down_val=$(echo "$down_raw" | sed 's/[^0-9.]//g') + local down_unit=$(echo "$down_raw" | sed 's/[0-9. 
]//g') + if [[ "$down_val" =~ ^[0-9.]+$ ]]; then + case "$down_unit" in + GB) total_down_bytes=$(awk -v a="$total_down_bytes" -v v="$down_val" 'BEGIN{printf "%.0f", a+v*1073741824}') ;; + MB) total_down_bytes=$(awk -v a="$total_down_bytes" -v v="$down_val" 'BEGIN{printf "%.0f", a+v*1048576}') ;; + KB) total_down_bytes=$(awk -v a="$total_down_bytes" -v v="$down_val" 'BEGIN{printf "%.0f", a+v*1024}') ;; + B) total_down_bytes=$(awk -v a="$total_down_bytes" -v v="$down_val" 'BEGIN{printf "%.0f", a+v}') ;; + esac + fi + fi fi done - # Count total unique IPs per country (cumulative) - unset total_ips_count - declare -A total_ips_count - if [ -s "$persist_dir/cumulative_ips" ]; then - while IFS='|' read -r country ip; do + if [ "$container_count" -gt 0 ]; then + local cpu_display="${total_cpu}%" + [ "$container_count" -gt 1 ] && cpu_display="${total_cpu}% (${container_count} containers)" + local mem_display="${total_mem_mib}MiB" + if [ -n "$first_mem_limit" ] && [ "$container_count" -gt 1 ]; then + mem_display="${total_mem_mib}MiB (${container_count}x ${first_mem_limit})" + elif [ -n "$first_mem_limit" ]; then + mem_display="${total_mem_mib}MiB / ${first_mem_limit}" + fi + printf "${CYAN}║${NC} CPU: ${YELLOW}%s${NC} Mem: ${YELLOW}%s${NC} Clients: ${GREEN}%d${NC}\033[K\n" "$cpu_display" "$mem_display" "$total_conn" + local up_display=$(format_bytes "$total_up_bytes") + local down_display=$(format_bytes "$total_down_bytes") + printf "${CYAN}║${NC} Upload: ${GREEN}%s${NC} Download: ${GREEN}%s${NC}\033[K\n" "$up_display" "$down_display" + else + echo -e "${CYAN}║${NC} ${RED}No Containers Running${NC}\033[K" + fi + + # Network info + local ip=$(ip route get 1.1.1.1 2>/dev/null | awk '{for(i=1;i<=NF;i++) if($i=="src") print $(i+1)}') + local iface=$(ip route get 1.1.1.1 2>/dev/null | awk '{for(i=1;i<=NF;i++) if($i=="dev") print $(i+1)}') + printf "${CYAN}║${NC} Net: ${GREEN}%s${NC} (%s)\033[K\n" "${ip:-N/A}" "${iface:-?}" + + echo -e "${CYAN}╠${D}${NC}\033[K" + + # Load tracker data + local total_active=0 total_in=0 total_out=0 + unset cips cbw_in cbw_out + declare -A cips cbw_in cbw_out + + if [ -s "$persist_dir/cumulative_data" ]; then + while IFS='|' read -r country from_bytes to_bytes; do [ -z "$country" ] && continue - total_ips_count["$country"]=$((${total_ips_count["$country"]:-0} + 1)) + cbw_in["$country"]=${from_bytes:-0} + cbw_out["$country"]=${to_bytes:-0} + total_in=$((total_in + ${from_bytes:-0})) + total_out=$((total_out + ${to_bytes:-0})) + done < "$persist_dir/cumulative_data" + fi + + if [ -s "$persist_dir/cumulative_ips" ]; then + while IFS='|' read -r country ip_addr; do + [ -z "$country" ] && continue + cips["$country"]=$((${cips["$country"]:-0} + 1)) + total_active=$((total_active + 1)) done < "$persist_dir/cumulative_ips" fi - # Count active IPs this cycle per country - unset active_ips_count - declare -A active_ips_count - for country in "${!cycle_ips[@]}"; do - # Count unique IPs in this cycle's IP list for this country - local unique_count=$(echo "${cycle_ips[$country]}" | tr ' ' '\n' | sort -u | grep -c '.') - active_ips_count["$country"]=$unique_count - done + local tstat="${RED}Off${NC}"; is_tracker_active && tstat="${GREEN}On${NC}" + printf "${CYAN}║${NC} Tracker: %b Clients: ${GREEN}%d${NC} Unique IPs: ${YELLOW}%d${NC} In: ${GREEN}%s${NC} Out: ${YELLOW}%s${NC}\033[K\n" "$tstat" "$total_conn" "$total_active" "$(format_bytes $total_in)" "$(format_bytes $total_out)" - # Generate sorted output with all metrics - # Format: 
Country|TotalFrom|TotalTo|SpeedFrom|SpeedTo|TotalIPs|ActiveIPs - > /tmp/conduit_traffic_from - > /tmp/conduit_traffic_to - for country in "${!cumul_from[@]}"; do - local total_from=${cumul_from[$country]} - local total_to=${cumul_to[$country]} - local cycle_from_val=${cycle_from["$country"]:-0} - local cycle_to_val=${cycle_to["$country"]:-0} - # Calculate speed (bytes per second) from 15-second capture - local speed_from=$((cycle_from_val / 15)) - local speed_to=$((cycle_to_val / 15)) - # Get IP counts - local total_ips=${total_ips_count["$country"]:-0} - local active_ips=${active_ips_count["$country"]:-0} - echo "${country}|${total_from}|${total_to}|${speed_from}|${speed_to}|${total_ips}|${active_ips}" >> /tmp/conduit_traffic_from - done + # TOP 5 by Unique IPs (from tracker) + echo -e "${CYAN}╠─── ${CYAN}TOP 5 BY UNIQUE IPs${NC} ${DIM}(tracked)${NC}\033[K" + local total_traffic=$((total_in + total_out)) + if [ "$total_conn" -gt 0 ] && [ "$total_active" -gt 0 ]; then + for c in "${!cips[@]}"; do echo "${cips[$c]}|$c"; done | sort -t'|' -k1 -nr | head -7 | while IFS='|' read -r active_cnt country; do + local peers=$(( (active_cnt * total_conn) / total_active )) + [ "$peers" -eq 0 ] && [ "$active_cnt" -gt 0 ] && peers=1 + local pct=$((peers * 100 / total_conn)) + local blen=$((pct / 8)); [ "$blen" -lt 1 ] && blen=1; [ "$blen" -gt 14 ] && blen=14 + local bfill=""; for ((i=0; i/dev/null || true # Exit alternate screen buffer - # Remove only temporary working files (not persistent cumulative data) - rm -f /tmp/conduit_peers_current /tmp/conduit_peers_raw - rm -f /tmp/conduit_traffic_from /tmp/conduit_traffic_to - trap - SIGINT SIGTERM # Remove signal handlers + if read -t 1 -n 1 -s key < /dev/tty 2>/dev/null; then + case "$key" in + q|Q) exit_stats=1 ;; + esac + fi + done + + echo -ne "\033[?25h" + tput rmcup 2>/dev/null || true + trap - SIGINT SIGTERM +} + +# show_peers() - Live peer traffic by country using tcpdump + GeoIP +show_peers() { + local stop_peers=0 + trap 'stop_peers=1' SIGINT SIGTERM + + local persist_dir="$INSTALL_DIR/traffic_stats" + + # Ensure tracker is running + if ! 
is_tracker_active; then + setup_tracker_service 2>/dev/null || true + fi + + tput smcup 2>/dev/null || true + echo -ne "\033[?25l" + printf "\033[2J\033[H" + + local EL="\033[K" + local cycle_start=$(date +%s) + local last_refresh=0 + + while [ $stop_peers -eq 0 ]; do + local now=$(date +%s) + local term_height=$(tput lines 2>/dev/null || echo 24) + local cycle_elapsed=$(( (now - cycle_start) % 15 )) + local time_left=$((15 - cycle_elapsed)) + + # Progress bar + local bar="" + for ((i=0; i/dev/null + declare -A cumul_from cumul_to total_ips_count + + local grand_in=0 grand_out=0 + + if [ -s "$persist_dir/cumulative_data" ]; then + while IFS='|' read -r c f t; do + [ -z "$c" ] && continue + [[ "$c" == *"can't"* || "$c" == *"error"* ]] && continue + cumul_from["$c"]=${f:-0} + cumul_to["$c"]=${t:-0} + grand_in=$((grand_in + ${f:-0})) + grand_out=$((grand_out + ${t:-0})) + done < "$persist_dir/cumulative_data" + fi + + if [ -s "$persist_dir/cumulative_ips" ]; then + while IFS='|' read -r c ip; do + [ -z "$c" ] && continue + [[ "$c" == *"can't"* || "$c" == *"error"* ]] && continue + total_ips_count["$c"]=$((${total_ips_count["$c"]:-0} + 1)) + done < "$persist_dir/cumulative_ips" + fi + + # Get actual connected clients from docker logs + local total_clients=0 + local docker_ps_cache=$(docker ps --format '{{.Names}}' 2>/dev/null) + for ci in $(seq 1 $CONTAINER_COUNT); do + local cname=$(get_container_name $ci) + if echo "$docker_ps_cache" | grep -q "^${cname}$"; then + local logs=$(docker logs --tail 50 "$cname" 2>&1 | grep "\[STATS\]" | tail -1) + local conn=$(echo "$logs" | sed -n 's/.*Connected:[[:space:]]*\([0-9]*\).*/\1/p') + [[ "$conn" =~ ^[0-9]+$ ]] && total_clients=$((total_clients + conn)) + fi + done + + echo -e "${EL}" + + # Parse snapshot for speed and country distribution + unset snap_from_bytes snap_to_bytes snap_from_ips snap_to_ips 2>/dev/null + declare -A snap_from_bytes snap_to_bytes snap_from_ips snap_to_ips + local snap_total_from_ips=0 snap_total_to_ips=0 + if [ -s "$persist_dir/tracker_snapshot" ]; then + while IFS='|' read -r dir c bytes ip; do + [ -z "$c" ] && continue + [[ "$c" == *"can't"* || "$c" == *"error"* ]] && continue + if [ "$dir" = "FROM" ]; then + snap_from_bytes["$c"]=$(( ${snap_from_bytes["$c"]:-0} + ${bytes:-0} )) + snap_from_ips["$c|$ip"]=1 + elif [ "$dir" = "TO" ]; then + snap_to_bytes["$c"]=$(( ${snap_to_bytes["$c"]:-0} + ${bytes:-0} )) + snap_to_ips["$c|$ip"]=1 + fi + done < "$persist_dir/tracker_snapshot" + fi + + # Count unique snapshot IPs per country + totals + unset snap_from_ip_cnt snap_to_ip_cnt 2>/dev/null + declare -A snap_from_ip_cnt snap_to_ip_cnt + for k in "${!snap_from_ips[@]}"; do + local sc="${k%%|*}" + snap_from_ip_cnt["$sc"]=$(( ${snap_from_ip_cnt["$sc"]:-0} + 1 )) + snap_total_from_ips=$((snap_total_from_ips + 1)) + done + for k in "${!snap_to_ips[@]}"; do + local sc="${k%%|*}" + snap_to_ip_cnt["$sc"]=$(( ${snap_to_ip_cnt["$sc"]:-0} + 1 )) + snap_total_to_ips=$((snap_total_to_ips + 1)) + done + + # TOP 10 TRAFFIC FROM (peers connecting to you) + echo -e "${GREEN}${BOLD} 📥 TOP 10 TRAFFIC FROM ${NC}${DIM}(peers connecting to you)${NC}${EL}" + echo -e "${EL}" + printf " ${BOLD}%-26s %10s %12s %-12s${NC}${EL}\n" "Country" "Total" "Speed" "IPs / Clients" + echo -e "${EL}" + if [ "$grand_in" -gt 0 ]; then + while IFS='|' read -r bytes country; do + [ -z "$country" ] && continue + local snap_b=${snap_from_bytes[$country]:-0} + local speed_val=$((snap_b / 15)) + local speed_str=$(format_bytes $speed_val) + local 
ips_all=${total_ips_count[$country]:-0} + # Estimate clients per country using snapshot distribution + local snap_cnt=${snap_from_ip_cnt[$country]:-0} + local est_clients=0 + if [ "$snap_total_from_ips" -gt 0 ] && [ "$snap_cnt" -gt 0 ]; then + est_clients=$(( (snap_cnt * total_clients) / snap_total_from_ips )) + [ "$est_clients" -eq 0 ] && [ "$snap_cnt" -gt 0 ] && est_clients=1 + fi + printf " ${GREEN}%-26.26s${NC} %10s %10s/s %5d/%d${EL}\n" "$country" "$(format_bytes $bytes)" "$speed_str" "$ips_all" "$est_clients" + done < <(for c in "${!cumul_from[@]}"; do echo "${cumul_from[$c]:-0}|$c"; done | sort -t'|' -k1 -nr | head -10) + else + echo -e " ${DIM}Waiting for data...${NC}${EL}" + fi + echo -e "${EL}" + + # TOP 10 TRAFFIC TO (data sent to peers) + echo -e "${YELLOW}${BOLD} 📤 TOP 10 TRAFFIC TO ${NC}${DIM}(data sent to peers)${NC}${EL}" + echo -e "${EL}" + printf " ${BOLD}%-26s %10s %12s %-12s${NC}${EL}\n" "Country" "Total" "Speed" "IPs / Clients" + echo -e "${EL}" + if [ "$grand_out" -gt 0 ]; then + while IFS='|' read -r bytes country; do + [ -z "$country" ] && continue + local snap_b=${snap_to_bytes[$country]:-0} + local speed_val=$((snap_b / 15)) + local speed_str=$(format_bytes $speed_val) + local ips_all=${total_ips_count[$country]:-0} + local snap_cnt=${snap_to_ip_cnt[$country]:-0} + local est_clients=0 + if [ "$snap_total_to_ips" -gt 0 ] && [ "$snap_cnt" -gt 0 ]; then + est_clients=$(( (snap_cnt * total_clients) / snap_total_to_ips )) + [ "$est_clients" -eq 0 ] && [ "$snap_cnt" -gt 0 ] && est_clients=1 + fi + printf " ${YELLOW}%-26.26s${NC} %10s %10s/s %5d/%d${EL}\n" "$country" "$(format_bytes $bytes)" "$speed_str" "$ips_all" "$est_clients" + done < <(for c in "${!cumul_to[@]}"; do echo "${cumul_to[$c]:-0}|$c"; done | sort -t'|' -k1 -nr | head -10) + else + echo -e " ${DIM}Waiting for data...${NC}${EL}" + fi + + echo -e "${EL}" + printf "\033[J" + fi + + # Progress bar at bottom + printf "\033[${term_height};1H${EL}" + printf "[${YELLOW}${bar}${NC}] Next refresh in %2ds ${DIM}[q] Back${NC}" "$time_left" + + if read -t 1 -n 1 -s key < /dev/tty 2>/dev/null; then + case "$key" in q|Q) stop_peers=1 ;; esac + fi + done + echo -ne "\033[?25h" + tput rmcup 2>/dev/null || true + rm -f /tmp/conduit_peers_sorted + trap - SIGINT SIGTERM } get_net_speed() { @@ -1567,9 +2118,95 @@ show_status() { echo "" - if docker ps 2>/dev/null | grep -q "[[:space:]]conduit$"; then - # Fetch stats once - local logs=$(docker logs --tail 1000 conduit 2>&1 | grep "STATS" | tail -1) + # Cache docker ps output once + local docker_ps_cache=$(docker ps 2>/dev/null) + + # Count running containers and cache per-container stats + local running_count=0 + declare -A _c_running _c_conn _c_cing _c_up _c_down + local total_connecting=0 + local total_connected=0 + local uptime="" + + for i in $(seq 1 $CONTAINER_COUNT); do + local cname=$(get_container_name $i) + _c_running[$i]=false + _c_conn[$i]="0" + _c_cing[$i]="0" + _c_up[$i]="" + _c_down[$i]="" + + if echo "$docker_ps_cache" | grep -q "[[:space:]]${cname}$"; then + _c_running[$i]=true + running_count=$((running_count + 1)) + local logs=$(docker logs --tail 1000 "$cname" 2>&1 | grep "STATS" | tail -1) + if [ -n "$logs" ]; then + local c_connecting=$(echo "$logs" | sed -n 's/.*Connecting:[[:space:]]*\([0-9]*\).*/\1/p') + local c_connected=$(echo "$logs" | sed -n 's/.*Connected:[[:space:]]*\([0-9]*\).*/\1/p') + _c_conn[$i]="${c_connected:-0}" + _c_cing[$i]="${c_connecting:-0}" + _c_up[$i]=$(echo "$logs" | sed -n 's/.*Up:[[:space:]]*\([^|]*\).*/\1/p' | xargs) + 
_c_down[$i]=$(echo "$logs" | sed -n 's/.*Down:[[:space:]]*\([^|]*\).*/\1/p' | xargs) + total_connecting=$((total_connecting + ${c_connecting:-0})) + total_connected=$((total_connected + ${c_connected:-0})) + if [ -z "$uptime" ]; then + uptime=$(echo "$logs" | sed -n 's/.*Uptime:[[:space:]]*\(.*\)/\1/p' | xargs) + fi + fi + fi + done + local connecting=$total_connecting + local connected=$total_connected + + # Aggregate upload/download across all containers + local upload="" + local download="" + local total_up_bytes=0 + local total_down_bytes=0 + for i in $(seq 1 $CONTAINER_COUNT); do + if [ -n "${_c_up[$i]}" ]; then + local bytes=$(echo "${_c_up[$i]}" | awk '{ + val=$1; unit=toupper($2) + if (unit ~ /^KB/) val*=1024 + else if (unit ~ /^MB/) val*=1048576 + else if (unit ~ /^GB/) val*=1073741824 + else if (unit ~ /^TB/) val*=1099511627776 + printf "%.0f", val + }') + total_up_bytes=$((total_up_bytes + ${bytes:-0})) + fi + if [ -n "${_c_down[$i]}" ]; then + local bytes=$(echo "${_c_down[$i]}" | awk '{ + val=$1; unit=toupper($2) + if (unit ~ /^KB/) val*=1024 + else if (unit ~ /^MB/) val*=1048576 + else if (unit ~ /^GB/) val*=1073741824 + else if (unit ~ /^TB/) val*=1099511627776 + printf "%.0f", val + }') + total_down_bytes=$((total_down_bytes + ${bytes:-0})) + fi + done + if [ "$total_up_bytes" -gt 0 ]; then + upload=$(awk -v b="$total_up_bytes" 'BEGIN { + if (b >= 1099511627776) printf "%.2f TB", b/1099511627776 + else if (b >= 1073741824) printf "%.2f GB", b/1073741824 + else if (b >= 1048576) printf "%.2f MB", b/1048576 + else if (b >= 1024) printf "%.2f KB", b/1024 + else printf "%d B", b + }') + fi + if [ "$total_down_bytes" -gt 0 ]; then + download=$(awk -v b="$total_down_bytes" 'BEGIN { + if (b >= 1099511627776) printf "%.2f TB", b/1099511627776 + else if (b >= 1073741824) printf "%.2f GB", b/1073741824 + else if (b >= 1048576) printf "%.2f MB", b/1048576 + else if (b >= 1024) printf "%.2f KB", b/1024 + else printf "%d B", b + }') + fi + + if [ "$running_count" -gt 0 ]; then # Get Resource Stats local stats=$(get_container_stats) @@ -1601,50 +2238,34 @@ show_status() { local sys_ram_used=$(echo "$sys_stats" | awk '{print $2}') local sys_ram_total=$(echo "$sys_stats" | awk '{print $3}') local sys_ram_pct=$(echo "$sys_stats" | awk '{print $4}') - - local sys_ram_pct=$(echo "$sys_stats" | awk '{print $4}') - + # New Metric: Network Speed (System Wide) local net_speed=$(get_net_speed) local rx_mbps=$(echo "$net_speed" | awk '{print $1}') local tx_mbps=$(echo "$net_speed" | awk '{print $2}') local net_display="↓ ${rx_mbps} Mbps ↑ ${tx_mbps} Mbps" - if [ -n "$logs" ]; then - local connecting=$(echo "$logs" | sed -n 's/.*Connecting:[[:space:]]*\([0-9]*\).*/\1/p') - local connected=$(echo "$logs" | sed -n 's/.*Connected:[[:space:]]*\([0-9]*\).*/\1/p') - local upload=$(echo "$logs" | sed -n 's/.*Up:[[:space:]]*\([^|]*\).*/\1/p' | xargs) - local download=$(echo "$logs" | sed -n 's/.*Down:[[:space:]]*\([^|]*\).*/\1/p' | xargs) - local uptime=$(echo "$logs" | sed -n 's/.*Uptime:[[:space:]]*\(.*\)/\1/p' | xargs) - - # Default to 0 if missing/empty - connecting=${connecting:-0} - connected=${connected:-0} - - echo -e "🚀 PSIPHON CONDUIT MANAGER v${VERSION}${EL}" - echo -e "${NC}${EL}" - - if [ -n "$uptime" ]; then - echo -e "${BOLD}Status:${NC} ${GREEN}Running${NC} (${uptime}) | ${BOLD}Clients:${NC} ${GREEN}${connected}${NC} connected, ${YELLOW}${connecting}${NC} connecting${EL}" - else - echo -e "${BOLD}Status:${NC} ${GREEN}Running${NC} | ${BOLD}Clients:${NC} ${GREEN}${connected}${NC} connected, 
${YELLOW}${connecting}${NC} connecting${EL}" - fi - + if [ -n "$upload" ] || [ "$connected" -gt 0 ] || [ "$connecting" -gt 0 ]; then + local status_line="${BOLD}Status:${NC} ${GREEN}Running${NC}" + [ -n "$uptime" ] && status_line="${status_line} (${uptime})" + echo -e "${status_line}${EL}" + echo -e " Containers: ${GREEN}${running_count}${NC}/${CONTAINER_COUNT} Clients: ${GREEN}${connected}${NC} connected, ${YELLOW}${connecting}${NC} connecting${EL}" + echo -e "${EL}" echo -e "${CYAN}═══ Traffic ═══${NC}${EL}" [ -n "$upload" ] && echo -e " Upload: ${CYAN}${upload}${NC}${EL}" [ -n "$download" ] && echo -e " Download: ${CYAN}${download}${NC}${EL}" - + echo -e "${EL}" echo -e "${CYAN}═══ Resource Usage ═══${NC}${EL}" printf " %-8s CPU: ${YELLOW}%-20s${NC} | RAM: ${YELLOW}%-20s${NC}${EL}\n" "App:" "$app_cpu_display" "$app_ram" printf " %-8s CPU: ${YELLOW}%-20s${NC} | RAM: ${YELLOW}%-20s${NC}${EL}\n" "System:" "$sys_cpu" "$sys_ram_used / $sys_ram_total" printf " %-8s Net: ${YELLOW}%-43s${NC}${EL}\n" "Total:" "$net_display" - + + else - echo -e "🚀 PSIPHON CONDUIT MANAGER v${VERSION}${EL}" - echo -e "${NC}${EL}" echo -e "${BOLD}Status:${NC} ${GREEN}Running${NC}${EL}" + echo -e " Containers: ${GREEN}${running_count}${NC}/${CONTAINER_COUNT}${EL}" echo -e "${EL}" echo -e "${CYAN}═══ Resource Usage ═══${NC}${EL}" printf " %-8s CPU: ${YELLOW}%-20s${NC} | RAM: ${YELLOW}%-20s${NC}${EL}\n" "App:" "$app_cpu_display" "$app_ram" @@ -1655,191 +2276,319 @@ show_status() { fi else - echo -e "🚀 PSIPHON CONDUIT MANAGER v${VERSION}${EL}" - echo -e "${NC}${EL}" echo -e "${BOLD}Status:${NC} ${RED}Stopped${NC}${EL}" fi - echo "" + echo -e "${EL}" echo -e "${CYAN}═══ SETTINGS ═══${NC}${EL}" - echo -e " Max Clients: ${MAX_CLIENTS}${EL}" - if [ "$BANDWIDTH" == "-1" ]; then - echo -e " Bandwidth: Unlimited${EL}" + # Check if any per-container overrides exist + local has_overrides=false + for i in $(seq 1 $CONTAINER_COUNT); do + local mc_var="MAX_CLIENTS_${i}" + local bw_var="BANDWIDTH_${i}" + if [ -n "${!mc_var}" ] || [ -n "${!bw_var}" ]; then + has_overrides=true + break + fi + done + if [ "$has_overrides" = true ]; then + echo -e " Containers: ${CONTAINER_COUNT}${EL}" + for i in $(seq 1 $CONTAINER_COUNT); do + local mc=$(get_container_max_clients $i) + local bw=$(get_container_bandwidth $i) + local bw_d="Unlimited" + [ "$bw" != "-1" ] && bw_d="${bw} Mbps" + printf " %-12s clients: %-5s bw: %s${EL}\n" "$(get_container_name $i)" "$mc" "$bw_d" + done else - echo -e " Bandwidth: ${BANDWIDTH} Mbps${EL}" + echo -e " Max Clients: ${MAX_CLIENTS}${EL}" + if [ "$BANDWIDTH" == "-1" ]; then + echo -e " Bandwidth: Unlimited${EL}" + else + echo -e " Bandwidth: ${BANDWIDTH} Mbps${EL}" + fi + echo -e " Containers: ${CONTAINER_COUNT}${EL}" + fi + if [ "$DATA_CAP_GB" -gt 0 ] 2>/dev/null; then + local usage=$(get_data_usage) + local used_rx=$(echo "$usage" | awk '{print $1}') + local used_tx=$(echo "$usage" | awk '{print $2}') + local total_used=$((used_rx + used_tx + ${DATA_CAP_PRIOR_USAGE:-0})) + echo -e " Data Cap: $(format_gb $total_used) / ${DATA_CAP_GB} GB${EL}" fi - echo "" - echo -e "${CYAN}═══ AUTO-START SERVICE ═══${NC}" + echo -e "${EL}" + echo -e "${CYAN}═══ AUTO-START SERVICE ═══${NC}${EL}" # Check for systemd if command -v systemctl &>/dev/null && systemctl is-enabled conduit.service 2>/dev/null | grep -q "enabled"; then - echo -e " Auto-start: ${GREEN}Enabled (systemd)${NC}" + echo -e " Auto-start: ${GREEN}Enabled (systemd)${NC}${EL}" local svc_status=$(systemctl is-active conduit.service 2>/dev/null) - echo -e " Service: 
${svc_status:-unknown}" + echo -e " Service: ${svc_status:-unknown}${EL}" # Check for OpenRC elif command -v rc-status &>/dev/null && rc-status -a 2>/dev/null | grep -q "conduit"; then - echo -e " Auto-start: ${GREEN}Enabled (OpenRC)${NC}" + echo -e " Auto-start: ${GREEN}Enabled (OpenRC)${NC}${EL}" # Check for SysVinit elif [ -f /etc/init.d/conduit ]; then - echo -e " Auto-start: ${GREEN}Enabled (SysVinit)${NC}" + echo -e " Auto-start: ${GREEN}Enabled (SysVinit)${NC}${EL}" else - echo -e " Auto-start: ${YELLOW}Not configured${NC}" - echo -e " Note: Docker restart policy handles restarts" + echo -e " Auto-start: ${YELLOW}Not configured${NC}${EL}" + echo -e " Note: Docker restart policy handles restarts${EL}" fi - echo "" + # Check Background Tracker + if is_tracker_active; then + echo -e " Tracker: ${GREEN}Active${NC}${EL}" + else + echo -e " Tracker: ${YELLOW}Inactive${NC}${EL}" + fi + echo -e "${EL}" } start_conduit() { - echo "Starting Conduit..." + # Check data cap before starting + if [ "$DATA_CAP_GB" -gt 0 ] 2>/dev/null; then + local usage=$(get_data_usage) + local used_rx=$(echo "$usage" | awk '{print $1}') + local used_tx=$(echo "$usage" | awk '{print $2}') + local total_used=$((used_rx + used_tx + ${DATA_CAP_PRIOR_USAGE:-0})) + local cap_bytes=$(awk -v gb="$DATA_CAP_GB" 'BEGIN{printf "%.0f", gb * 1073741824}') + if [ "$total_used" -ge "$cap_bytes" ] 2>/dev/null; then + echo -e "${RED}⚠ Data cap exceeded ($(format_gb $total_used) / ${DATA_CAP_GB} GB). Containers will not start.${NC}" + echo -e "${YELLOW}Reset or increase the data cap from the menu to start containers.${NC}" + return 1 + fi + fi - # Check if container exists (running or stopped) - if docker ps -a 2>/dev/null | grep -q "[[:space:]]conduit$"; then - # Check if container is already running - if docker ps 2>/dev/null | grep -q "[[:space:]]conduit$"; then - echo -e "${GREEN}✓ Conduit is already running${NC}" - return 0 + echo "Starting Conduit ($CONTAINER_COUNT container(s))..." + + for i in $(seq 1 $CONTAINER_COUNT); do + local name=$(get_container_name $i) + local vol=$(get_volume_name $i) + + # Check if container exists (running or stopped) + if docker ps -a 2>/dev/null | grep -q "[[:space:]]${name}$"; then + if docker ps 2>/dev/null | grep -q "[[:space:]]${name}$"; then + echo -e "${GREEN}✓ ${name} is already running${NC}" + continue + fi + echo "Recreating ${name}..." + docker rm "$name" 2>/dev/null || true fi - # Container exists but stopped - recreate it to ensure -v flag is included - echo "Recreating container with stats enabled..." - docker rm conduit 2>/dev/null || true - fi + docker volume create "$vol" 2>/dev/null || true + fix_volume_permissions $i + run_conduit_container $i - # Create new container - echo "Creating Conduit container..." - docker volume create conduit-data 2>/dev/null || true - - fix_volume_permissions - run_conduit_container - - if [ $? -eq 0 ]; then - echo -e "${GREEN}✓ Conduit started with stats enabled${NC}" - else - echo -e "${RED}✗ Failed to start Conduit${NC}" - return 1 - fi + if [ $? -eq 0 ]; then + echo -e "${GREEN}✓ ${name} started${NC}" + else + echo -e "${RED}✗ Failed to start ${name}${NC}" + fi + done + # Start background tracker + setup_tracker_service 2>/dev/null || true + return 0 } stop_conduit() { echo "Stopping Conduit..." 
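# The data-cap guard used by start_conduit/restart_conduit above, reduced to
# its core comparison. Sketch only — the real check feeds in get_data_usage()
# plus DATA_CAP_PRIOR_USAGE; the byte values below are illustrative.
data_cap_reached() {
    local used_bytes="$1" cap_gb="$2"
    [ "$cap_gb" -gt 0 ] 2>/dev/null || return 1          # a cap of 0 means "no cap"
    local cap_bytes
    cap_bytes=$(awk -v gb="$cap_gb" 'BEGIN{printf "%.0f", gb * 1073741824}')
    [ "$used_bytes" -ge "$cap_bytes" ]
}
# e.g. data_cap_reached 161061273600 100 && echo "cap hit"   # ~150 GB used vs a 100 GB cap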
- if docker ps 2>/dev/null | grep -q "[[:space:]]conduit$"; then - docker stop conduit 2>/dev/null - echo -e "${YELLOW}✓ Conduit stopped${NC}" - else - echo -e "${YELLOW}Conduit is not running${NC}" - fi + local stopped=0 + for i in $(seq 1 $CONTAINER_COUNT); do + local name=$(get_container_name $i) + if docker ps 2>/dev/null | grep -q "[[:space:]]${name}$"; then + docker stop "$name" 2>/dev/null + echo -e "${YELLOW}✓ ${name} stopped${NC}" + stopped=$((stopped + 1)) + fi + done + # Also stop any extra containers beyond current count (from previous scaling) + for i in $(seq $((CONTAINER_COUNT + 1)) 5); do + local name=$(get_container_name $i) + if docker ps -a 2>/dev/null | grep -q "[[:space:]]${name}$"; then + docker stop "$name" 2>/dev/null || true + docker rm "$name" 2>/dev/null || true + echo -e "${YELLOW}✓ ${name} stopped and removed (extra)${NC}" + fi + done + [ "$stopped" -eq 0 ] && echo -e "${YELLOW}No Conduit containers are running${NC}" + # Stop background tracker + stop_tracker_service 2>/dev/null || true + return 0 } restart_conduit() { - echo "Restarting Conduit..." - if docker ps -a 2>/dev/null | grep -q "[[:space:]]conduit$"; then - # Stop and remove the existing container - docker stop conduit 2>/dev/null || true - docker rm conduit 2>/dev/null || true - - fix_volume_permissions - run_conduit_container - - if [ $? -eq 0 ]; then - echo -e "${GREEN}✓ Conduit restarted with stats enabled${NC}" - else - echo -e "${RED}✗ Failed to restart Conduit${NC}" + # Check data cap before restarting + if [ "$DATA_CAP_GB" -gt 0 ] 2>/dev/null; then + local usage=$(get_data_usage) + local used_rx=$(echo "$usage" | awk '{print $1}') + local used_tx=$(echo "$usage" | awk '{print $2}') + local total_used=$((used_rx + used_tx + ${DATA_CAP_PRIOR_USAGE:-0})) + local cap_bytes=$(awk -v gb="$DATA_CAP_GB" 'BEGIN{printf "%.0f", gb * 1073741824}') + if [ "$total_used" -ge "$cap_bytes" ] 2>/dev/null; then + echo -e "${RED}⚠ Data cap exceeded ($(format_gb $total_used) / ${DATA_CAP_GB} GB). Containers will not restart.${NC}" + echo -e "${YELLOW}Reset or increase the data cap from the menu to restart containers.${NC}" return 1 fi - else - echo -e "${RED}Conduit container not found. Use 'conduit start' to create it.${NC}" - return 1 fi + + echo "Restarting Conduit ($CONTAINER_COUNT container(s))..." + local any_found=false + for i in $(seq 1 $CONTAINER_COUNT); do + local name=$(get_container_name $i) + local vol=$(get_volume_name $i) + if docker ps -a 2>/dev/null | grep -q "[[:space:]]${name}$"; then + any_found=true + docker stop "$name" 2>/dev/null || true + docker rm "$name" 2>/dev/null || true + fi + docker volume create "$vol" 2>/dev/null || true + fix_volume_permissions $i + run_conduit_container $i + if [ $? 
-eq 0 ]; then + echo -e "${GREEN}✓ ${name} restarted${NC}" + else + echo -e "${RED}✗ Failed to restart ${name}${NC}" + fi + done + # Remove extra containers beyond current count + for i in $(seq $((CONTAINER_COUNT + 1)) 5); do + local name=$(get_container_name $i) + if docker ps -a 2>/dev/null | grep -q "[[:space:]]${name}$"; then + docker stop "$name" 2>/dev/null || true + docker rm "$name" 2>/dev/null || true + echo -e "${YELLOW}✓ ${name} removed (scaled down)${NC}" + fi + done } change_settings() { echo "" - echo -e "${CYAN}Current Settings:${NC}" - echo -e " Max Clients: ${MAX_CLIENTS}" - if [ "$BANDWIDTH" == "-1" ]; then - echo -e " Bandwidth: Unlimited" - else - echo -e " Bandwidth: ${BANDWIDTH} Mbps" - fi + echo -e "${CYAN}═══ Current Settings ═══${NC}" + echo "" + printf " ${BOLD}%-12s %-12s %-12s${NC}\n" "Container" "Max Clients" "Bandwidth" + echo -e " ${CYAN}────────────────────────────────────────${NC}" + for i in $(seq 1 $CONTAINER_COUNT); do + local cname=$(get_container_name $i) + local mc=$(get_container_max_clients $i) + local bw=$(get_container_bandwidth $i) + local bw_display="Unlimited" + [ "$bw" != "-1" ] && bw_display="${bw} Mbps" + printf " %-12s %-12s %-12s\n" "$cname" "$mc" "$bw_display" + done + echo "" + echo -e " Default: Max Clients=${GREEN}${MAX_CLIENTS}${NC} Bandwidth=${GREEN}$([ "$BANDWIDTH" = "-1" ] && echo "Unlimited" || echo "${BANDWIDTH} Mbps")${NC}" echo "" - - read -p "New max-clients (1-1000) [${MAX_CLIENTS}]: " new_clients < /dev/tty || true - - # Bandwidth prompt logic for settings menu + # Select target + echo -e " ${BOLD}Apply settings to:${NC}" + echo -e " ${GREEN}a${NC}) All containers (set same values)" + for i in $(seq 1 $CONTAINER_COUNT); do + echo -e " ${GREEN}${i}${NC}) $(get_container_name $i)" + done echo "" - if [ "$BANDWIDTH" == "-1" ]; then - echo "Current bandwidth: Unlimited" + read -p " Select (a/1-${CONTAINER_COUNT}): " target < /dev/tty || true + + local targets=() + if [ "$target" = "a" ] || [ "$target" = "A" ]; then + for i in $(seq 1 $CONTAINER_COUNT); do targets+=($i); done + elif [[ "$target" =~ ^[0-9]+$ ]] && [ "$target" -ge 1 ] && [ "$target" -le "$CONTAINER_COUNT" ]; then + targets+=($target) else - echo "Current bandwidth: ${BANDWIDTH} Mbps" + echo -e " ${RED}Invalid selection.${NC}" + return fi - read -p "Set unlimited bandwidth (-1)? [y/N]: " set_unlimited < /dev/tty || true - + + # Get new values + local cur_mc=$(get_container_max_clients ${targets[0]}) + local cur_bw=$(get_container_bandwidth ${targets[0]}) + echo "" + read -p " New max-clients (1-1000) [${cur_mc}]: " new_clients < /dev/tty || true + + echo "" + local cur_bw_display="Unlimited" + [ "$cur_bw" != "-1" ] && cur_bw_display="${cur_bw} Mbps" + echo " Current bandwidth: ${cur_bw_display}" + read -p " Set unlimited bandwidth? [y/N]: " set_unlimited < /dev/tty || true + + local new_bandwidth="" if [[ "$set_unlimited" =~ ^[Yy] ]]; then new_bandwidth="-1" else - read -p "New bandwidth in Mbps (1-40) [${BANDWIDTH}]: " input_bw < /dev/tty || true - if [ -n "$input_bw" ]; then - new_bandwidth="$input_bw" - fi + read -p " New bandwidth in Mbps (1-40) [${cur_bw}]: " input_bw < /dev/tty || true + [ -n "$input_bw" ] && new_bandwidth="$input_bw" fi - + # Validate max-clients + local valid_mc="" if [ -n "$new_clients" ]; then if [[ "$new_clients" =~ ^[0-9]+$ ]] && [ "$new_clients" -ge 1 ] && [ "$new_clients" -le 1000 ]; then - MAX_CLIENTS=$new_clients + valid_mc="$new_clients" else - echo -e "${YELLOW}Invalid max-clients. 
Keeping current: ${MAX_CLIENTS}${NC}" + echo -e " ${YELLOW}Invalid max-clients. Keeping current.${NC}" fi fi - + # Validate bandwidth + local valid_bw="" if [ -n "$new_bandwidth" ]; then if [ "$new_bandwidth" = "-1" ]; then - BANDWIDTH="-1" + valid_bw="-1" elif [[ "$new_bandwidth" =~ ^[0-9]+$ ]] && [ "$new_bandwidth" -ge 1 ] && [ "$new_bandwidth" -le 40 ]; then - BANDWIDTH=$new_bandwidth + valid_bw="$new_bandwidth" elif [[ "$new_bandwidth" =~ ^[0-9]*\.[0-9]+$ ]]; then local float_ok=$(awk -v val="$new_bandwidth" 'BEGIN { print (val >= 1 && val <= 40) ? "yes" : "no" }') - if [ "$float_ok" = "yes" ]; then - BANDWIDTH=$new_bandwidth - else - echo -e "${YELLOW}Invalid bandwidth. Keeping current: ${BANDWIDTH}${NC}" - fi + [ "$float_ok" = "yes" ] && valid_bw="$new_bandwidth" || echo -e " ${YELLOW}Invalid bandwidth. Keeping current.${NC}" else - echo -e "${YELLOW}Invalid bandwidth. Keeping current: ${BANDWIDTH}${NC}" + echo -e " ${YELLOW}Invalid bandwidth. Keeping current.${NC}" fi fi - - # Save settings - cat > "$INSTALL_DIR/settings.conf" << EOF -MAX_CLIENTS=$MAX_CLIENTS -BANDWIDTH=$BANDWIDTH -EOF - echo "" - echo "Updating and recreating Conduit container with new settings..." - docker rm -f conduit 2>/dev/null || true - sleep 2 # Wait for container cleanup to complete - echo "Pulling latest image..." - docker pull $CONDUIT_IMAGE 2>/dev/null || echo -e "${YELLOW}Could not pull latest image, using cached version${NC}" - fix_volume_permissions - run_conduit_container - - if [ $? -eq 0 ]; then - echo -e "${GREEN}✓ Settings updated and Conduit restarted${NC}" - echo -e " Max Clients: ${MAX_CLIENTS}" - if [ "$BANDWIDTH" == "-1" ]; then - echo -e " Bandwidth: Unlimited" - else - echo -e " Bandwidth: ${BANDWIDTH} Mbps" - fi + # Apply to targets + if [ "$target" = "a" ] || [ "$target" = "A" ]; then + # Apply to all = update global defaults and clear per-container overrides + [ -n "$valid_mc" ] && MAX_CLIENTS="$valid_mc" + [ -n "$valid_bw" ] && BANDWIDTH="$valid_bw" + for i in $(seq 1 5); do + unset "MAX_CLIENTS_${i}" 2>/dev/null || true + unset "BANDWIDTH_${i}" 2>/dev/null || true + done else - echo -e "${RED}✗ Failed to restart Conduit${NC}" + # Apply to specific container + local idx=${targets[0]} + if [ -n "$valid_mc" ]; then + eval "MAX_CLIENTS_${idx}=${valid_mc}" + fi + if [ -n "$valid_bw" ]; then + eval "BANDWIDTH_${idx}=${valid_bw}" + fi fi + + save_settings + + # Recreate affected containers + echo "" + echo " Recreating container(s) with new settings..." + for i in "${targets[@]}"; do + local name=$(get_container_name $i) + docker rm -f "$name" 2>/dev/null || true + done + sleep 1 + for i in "${targets[@]}"; do + local name=$(get_container_name $i) + fix_volume_permissions $i + run_conduit_container $i + if [ $? -eq 0 ]; then + local mc=$(get_container_max_clients $i) + local bw=$(get_container_bandwidth $i) + local bw_d="Unlimited" + [ "$bw" != "-1" ] && bw_d="${bw} Mbps" + echo -e " ${GREEN}✓ ${name}${NC} — clients: ${mc}, bandwidth: ${bw_d}" + else + echo -e " ${RED}✗ Failed to restart ${name}${NC}" + fi + done } #═══════════════════════════════════════════════════════════════════════ @@ -1859,11 +2608,30 @@ show_logs() { return 1 fi - echo -e "${CYAN}Streaming all logs (filtered, no [STATS])... 
Press Ctrl+C to stop${NC}" + local target="conduit" + if [ "$CONTAINER_COUNT" -gt 1 ]; then + echo "" + echo -e "${CYAN}Select container to view logs:${NC}" + echo "" + for i in $(seq 1 $CONTAINER_COUNT); do + local cname=$(get_container_name $i) + local status="${RED}Stopped${NC}" + docker ps 2>/dev/null | grep -q "[[:space:]]${cname}$" && status="${GREEN}Running${NC}" + echo -e " ${i}. ${cname} [${status}]" + done + echo "" + read -p " Select (1-${CONTAINER_COUNT}): " idx < /dev/tty || true + if ! [[ "$idx" =~ ^[0-9]+$ ]] || [ "$idx" -lt 1 ] || [ "$idx" -gt "$CONTAINER_COUNT" ]; then + echo -e "${RED}Invalid selection.${NC}" + return 1 + fi + target=$(get_container_name $idx) + fi + + echo -e "${CYAN}Streaming logs from ${target} (filtered, no [STATS])... Press Ctrl+C to stop${NC}" echo "" - # Stream ALL docker logs, filtering out [STATS] lines for cleaner output - docker logs -f conduit 2>&1 | grep -v "\[STATS\]" + docker logs -f "$target" 2>&1 | grep -v "\[STATS\]" } uninstall_all() { @@ -1873,12 +2641,14 @@ uninstall_all() { echo -e "${RED}╚═══════════════════════════════════════════════════════════════════╝${NC}" echo "" echo "This will completely remove:" - echo " • Conduit Docker container" + echo " • All Conduit Docker containers (conduit, conduit-2..5)" + echo " • All Conduit data volumes" echo " • Conduit Docker image" - echo " • Conduit data volume (all stored data)" echo " • Auto-start service (systemd/OpenRC/SysVinit)" - echo " • Configuration files" - echo " • Management CLI" + echo " • Background tracker service & stats data" + echo " • Configuration files & Management CLI" + echo "" + echo -e "${YELLOW}Docker engine will NOT be removed.${NC}" echo "" echo -e "${RED}WARNING: This action cannot be undone!${NC}" echo "" @@ -1891,7 +2661,7 @@ uninstall_all() { # Check for backup keys local keep_backups=false - if [ -d "$BACKUP_DIR" ] && [ "$(ls -A $BACKUP_DIR 2>/dev/null)" ]; then + if [ -d "$BACKUP_DIR" ] && [ "$(ls -A "$BACKUP_DIR" 2>/dev/null)" ]; then echo "" echo -e "${YELLOW}═══════════════════════════════════════════════════════════════════${NC}" echo -e "${YELLOW} 📁 Backup keys found in: ${BACKUP_DIR}${NC}" @@ -1912,19 +2682,28 @@ uninstall_all() { fi echo "" - echo -e "${BLUE}[INFO]${NC} Stopping Conduit container..." - docker stop conduit 2>/dev/null || true - - echo -e "${BLUE}[INFO]${NC} Removing Conduit container..." - docker rm -f conduit 2>/dev/null || true + echo -e "${BLUE}[INFO]${NC} Stopping Conduit container(s)..." + for i in $(seq 1 5); do + local name=$(get_container_name $i) + docker stop "$name" 2>/dev/null || true + docker rm -f "$name" 2>/dev/null || true + done echo -e "${BLUE}[INFO]${NC} Removing Conduit Docker image..." docker rmi "$CONDUIT_IMAGE" 2>/dev/null || true - echo -e "${BLUE}[INFO]${NC} Removing Conduit data volume..." - docker volume rm conduit-data 2>/dev/null || true + echo -e "${BLUE}[INFO]${NC} Removing Conduit data volume(s)..." + for i in $(seq 1 5); do + local vol=$(get_volume_name $i) + docker volume rm "$vol" 2>/dev/null || true + done echo -e "${BLUE}[INFO]${NC} Removing auto-start service..." 
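# Naming convention assumed by the 1..5 cleanup loops above: container 1 keeps
# the bare name, containers 2-5 get a numeric suffix ("conduit, conduit-2..5",
# as listed in the removal summary). The multi-volume names mirror the original
# "conduit-data" volume and are an assumption here; the real helpers are
# get_container_name() and get_volume_name() defined earlier in the script.
name_for_index()   { [ "$1" -eq 1 ] && echo "conduit"      || echo "conduit-$1"; }
volume_for_index() { [ "$1" -eq 1 ] && echo "conduit-data" || echo "conduit-data-$1"; }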
+ # Tracker service + systemctl stop conduit-tracker.service 2>/dev/null || true + systemctl disable conduit-tracker.service 2>/dev/null || true + rm -f /etc/systemd/system/conduit-tracker.service + pkill -f "conduit-tracker.sh" 2>/dev/null || true # Systemd systemctl stop conduit.service 2>/dev/null || true systemctl disable conduit.service 2>/dev/null || true @@ -1945,6 +2724,8 @@ uninstall_all() { # Remove files in /opt/conduit but keep backups subdirectory rm -f /opt/conduit/config.env 2>/dev/null || true rm -f /opt/conduit/conduit 2>/dev/null || true + rm -f /opt/conduit/conduit-tracker.sh 2>/dev/null || true + rm -rf /opt/conduit/traffic_stats 2>/dev/null || true find /opt/conduit -maxdepth 1 -type f -delete 2>/dev/null || true else # Remove everything including backups @@ -1964,11 +2745,411 @@ uninstall_all() { echo " You can use these to restore your node identity after reinstalling." fi echo "" - echo "Note: Docker itself was NOT removed." + echo "Note: Docker engine was NOT removed." echo "" } -show_menu() { +manage_containers() { + local stop_manage=0 + trap 'stop_manage=1' SIGINT SIGTERM + + tput smcup 2>/dev/null || true + echo -ne "\033[?25l" + printf "\033[2J\033[H" + + local EL="\033[K" + local need_input=true + local mc_choice="" + + while [ $stop_manage -eq 0 ]; do + # Soft update: cursor home, no clear + printf "\033[H" + + echo -e "${EL}" + echo -e "${CYAN}═══════════════════════════════════════════════════════════════${NC}${EL}" + echo -e "${CYAN} MANAGE CONTAINERS${NC} ${GREEN}${CONTAINER_COUNT}${NC}/5 Host networking${EL}" + echo -e "${CYAN}═══════════════════════════════════════════════════════════════${NC}${EL}" + echo -e "${EL}" + + # Per-container stats table + local docker_ps_cache=$(docker ps --format '{{.Names}}' 2>/dev/null) + printf " ${BOLD}%-2s %-11s %-8s %-7s %-8s %-8s %-6s %-7s${NC}${EL}\n" \ + "#" "Container" "Status" "Clients" "Up" "Down" "CPU" "RAM" + echo -e " ${CYAN}─────────────────────────────────────────────────────────${NC}${EL}" + + for ci in $(seq 1 5); do + local cname=$(get_container_name $ci) + local status_text status_color + local c_clients="-" c_up="-" c_down="-" c_cpu="-" c_ram="-" + + if [ "$ci" -le "$CONTAINER_COUNT" ]; then + if echo "$docker_ps_cache" | grep -q "^${cname}$"; then + status_text="Running" + status_color="${GREEN}" + local logs=$(docker logs --tail 1000 "$cname" 2>&1 | grep "STATS" | tail -1) + if [ -n "$logs" ]; then + local conn=$(echo "$logs" | sed -n 's/.*Connected:[[:space:]]*\([0-9]*\).*/\1/p') + local cing=$(echo "$logs" | sed -n 's/.*Connecting:[[:space:]]*\([0-9]*\).*/\1/p') + c_clients="${conn:-0}/${cing:-0}" + c_up=$(echo "$logs" | sed -n 's/.*Up:[[:space:]]*\([^|]*\).*/\1/p' | xargs) + c_down=$(echo "$logs" | sed -n 's/.*Down:[[:space:]]*\([^|]*\).*/\1/p' | xargs) + [ -z "$c_up" ] && c_up="-" + [ -z "$c_down" ] && c_down="-" + fi + local dstats=$(docker stats --no-stream --format "{{.CPUPerc}} {{.MemUsage}}" "$cname" 2>/dev/null) + if [ -n "$dstats" ]; then + c_cpu=$(echo "$dstats" | awk '{print $1}') + c_ram=$(echo "$dstats" | awk '{print $2}') + fi + else + status_text="Stopped" + status_color="${RED}" + fi + else + status_text="--" + status_color="${YELLOW}" + fi + printf " %-2s %-11s %b%-8s%b %-7s %-8s %-8s %-6s %-7s${EL}\n" \ + "$ci" "$cname" "$status_color" "$status_text" "${NC}" "$c_clients" "$c_up" "$c_down" "$c_cpu" "$c_ram" + done + + echo -e "${EL}" + echo -e "${CYAN}═══════════════════════════════════════════════════════════════${NC}${EL}" + local max_add=$((5 - CONTAINER_COUNT)) + [ "$max_add" 
-gt 0 ] && echo -e " ${GREEN}[a]${NC} Add container(s) (max: ${max_add} more)${EL}" + [ "$CONTAINER_COUNT" -gt 1 ] && echo -e " ${RED}[r]${NC} Remove container(s) (min: 1 required)${EL}" + echo -e " ${GREEN}[s]${NC} Start a container${EL}" + echo -e " ${RED}[t]${NC} Stop a container${EL}" + echo -e " ${YELLOW}[x]${NC} Restart a container${EL}" + echo -e " ${CYAN}[q]${NC} QR code for container${EL}" + echo -e " [b] Back to menu${EL}" + echo -e "${EL}" + printf "\033[J" + + echo -ne "\033[?25h" + read -t 5 -p " Select option: " mc_choice < /dev/tty 2>/dev/null || { mc_choice=""; } + echo -ne "\033[?25l" + + # Empty = just refresh + [ -z "$mc_choice" ] && continue + + case "$mc_choice" in + a) + if [ "$CONTAINER_COUNT" -ge 5 ]; then + echo -e " ${RED}Already at maximum (5).${NC}" + read -n 1 -s -r -p " Press any key..." < /dev/tty || true + continue + fi + local max_add=$((5 - CONTAINER_COUNT)) + read -p " How many to add? (1-${max_add}): " add_count < /dev/tty || true + if ! [[ "$add_count" =~ ^[0-9]+$ ]] || [ "$add_count" -lt 1 ] || [ "$add_count" -gt "$max_add" ]; then + echo -e " ${RED}Invalid.${NC}" + read -n 1 -s -r -p " Press any key..." < /dev/tty || true + continue + fi + local old_count=$CONTAINER_COUNT + CONTAINER_COUNT=$((CONTAINER_COUNT + add_count)) + save_settings + for i in $(seq $((old_count + 1)) $CONTAINER_COUNT); do + local name=$(get_container_name $i) + local vol=$(get_volume_name $i) + docker volume create "$vol" 2>/dev/null || true + fix_volume_permissions $i + run_conduit_container $i + if [ $? -eq 0 ]; then + echo -e " ${GREEN}✓ ${name} started${NC}" + else + echo -e " ${RED}✗ Failed to start ${name}${NC}" + fi + done + read -n 1 -s -r -p " Press any key..." < /dev/tty || true + ;; + r) + if [ "$CONTAINER_COUNT" -le 1 ]; then + echo -e " ${RED}Must keep at least 1 container.${NC}" + read -n 1 -s -r -p " Press any key..." < /dev/tty || true + continue + fi + local max_rm=$((CONTAINER_COUNT - 1)) + read -p " How many to remove? (1-${max_rm}): " rm_count < /dev/tty || true + if ! [[ "$rm_count" =~ ^[0-9]+$ ]] || [ "$rm_count" -lt 1 ] || [ "$rm_count" -gt "$max_rm" ]; then + echo -e " ${RED}Invalid.${NC}" + read -n 1 -s -r -p " Press any key..." < /dev/tty || true + continue + fi + local old_count=$CONTAINER_COUNT + CONTAINER_COUNT=$((CONTAINER_COUNT - rm_count)) + save_settings + for i in $(seq $((CONTAINER_COUNT + 1)) $old_count); do + local name=$(get_container_name $i) + docker stop "$name" 2>/dev/null || true + docker rm "$name" 2>/dev/null || true + echo -e " ${YELLOW}✓ ${name} removed${NC}" + done + read -n 1 -s -r -p " Press any key..." < /dev/tty || true + ;; + s) + read -p " Start which container? (1-${CONTAINER_COUNT}): " sc_idx < /dev/tty || true + if [[ "$sc_idx" =~ ^[1-5]$ ]] && [ "$sc_idx" -le "$CONTAINER_COUNT" ]; then + local name=$(get_container_name $sc_idx) + local vol=$(get_volume_name $sc_idx) + docker volume create "$vol" 2>/dev/null || true + fix_volume_permissions $sc_idx + run_conduit_container $sc_idx + if [ $? -eq 0 ]; then + echo -e " ${GREEN}✓ ${name} started${NC}" + else + echo -e " ${RED}✗ Failed to start ${name}${NC}" + fi + else + echo -e " ${RED}Invalid.${NC}" + fi + read -n 1 -s -r -p " Press any key..." < /dev/tty || true + ;; + t) + read -p " Stop which container? 
(1-${CONTAINER_COUNT}): " sc_idx < /dev/tty || true + if [[ "$sc_idx" =~ ^[1-5]$ ]] && [ "$sc_idx" -le "$CONTAINER_COUNT" ]; then + local name=$(get_container_name $sc_idx) + docker stop "$name" 2>/dev/null || true + echo -e " ${YELLOW}✓ ${name} stopped${NC}" + else + echo -e " ${RED}Invalid.${NC}" + fi + read -n 1 -s -r -p " Press any key..." < /dev/tty || true + ;; + x) + read -p " Restart which container? (1-${CONTAINER_COUNT}): " sc_idx < /dev/tty || true + if [[ "$sc_idx" =~ ^[1-5]$ ]] && [ "$sc_idx" -le "$CONTAINER_COUNT" ]; then + local name=$(get_container_name $sc_idx) + docker restart "$name" 2>/dev/null || true + echo -e " ${GREEN}✓ ${name} restarted${NC}" + else + echo -e " ${RED}Invalid.${NC}" + fi + read -n 1 -s -r -p " Press any key..." < /dev/tty || true + ;; + q) + show_qr_code + ;; + b|"") + stop_manage=1 + ;; + *) + echo -e " ${RED}Invalid option.${NC}" + read -n 1 -s -r -p " Press any key..." < /dev/tty || true + ;; + esac + done + echo -ne "\033[?25h" + tput rmcup 2>/dev/null || true + trap - SIGINT SIGTERM +} + +# Get default network interface +get_default_iface() { + local iface=$(ip route get 1.1.1.1 2>/dev/null | awk '{for(i=1;i<=NF;i++) if($i=="dev") print $(i+1)}') + [ -z "$iface" ] && iface=$(ip route list default 2>/dev/null | awk '{print $5}') + echo "${iface:-eth0}" +} + +# Get current data usage since baseline (in bytes) +get_data_usage() { + local iface="${DATA_CAP_IFACE:-$(get_default_iface)}" + if [ ! -f "/sys/class/net/$iface/statistics/rx_bytes" ]; then + echo "0 0" + return + fi + local rx=$(cat /sys/class/net/$iface/statistics/rx_bytes 2>/dev/null || echo 0) + local tx=$(cat /sys/class/net/$iface/statistics/tx_bytes 2>/dev/null || echo 0) + local used_rx=$((rx - DATA_CAP_BASELINE_RX)) + local used_tx=$((tx - DATA_CAP_BASELINE_TX)) + # Handle counter reset (reboot) - re-baseline to current counters + # Prior usage is preserved in DATA_CAP_PRIOR_USAGE via check_data_cap + if [ "$used_rx" -lt 0 ] || [ "$used_tx" -lt 0 ]; then + DATA_CAP_BASELINE_RX=$rx + DATA_CAP_BASELINE_TX=$tx + save_settings + used_rx=0 + used_tx=0 + fi + echo "$used_rx $used_tx" +} + +# Check data cap and stop containers if exceeded +# Returns 1 if cap exceeded, 0 if OK or no cap set +DATA_CAP_EXCEEDED=false +_DATA_CAP_LAST_SAVED=0 +check_data_cap() { + [ "$DATA_CAP_GB" -eq 0 ] 2>/dev/null && return 0 + # Validate DATA_CAP_GB is numeric + if ! 
[[ "$DATA_CAP_GB" =~ ^[0-9]+$ ]]; then + return 0 # invalid cap value, treat as no cap + fi + local usage=$(get_data_usage) + local used_rx=$(echo "$usage" | awk '{print $1}') + local used_tx=$(echo "$usage" | awk '{print $2}') + local session_used=$((used_rx + used_tx)) + local total_used=$((session_used + ${DATA_CAP_PRIOR_USAGE:-0})) + # Periodically persist usage so it survives reboots (save every ~100MB change) + local save_threshold=104857600 + local diff=$((total_used - _DATA_CAP_LAST_SAVED)) + [ "$diff" -lt 0 ] && diff=$((-diff)) + if [ "$diff" -ge "$save_threshold" ]; then + DATA_CAP_PRIOR_USAGE=$total_used + DATA_CAP_BASELINE_RX=$(cat /sys/class/net/${DATA_CAP_IFACE:-$(get_default_iface)}/statistics/rx_bytes 2>/dev/null || echo 0) + DATA_CAP_BASELINE_TX=$(cat /sys/class/net/${DATA_CAP_IFACE:-$(get_default_iface)}/statistics/tx_bytes 2>/dev/null || echo 0) + save_settings + _DATA_CAP_LAST_SAVED=$total_used + fi + local cap_bytes=$(awk -v gb="$DATA_CAP_GB" 'BEGIN{printf "%.0f", gb * 1073741824}') + if [ "$total_used" -ge "$cap_bytes" ] 2>/dev/null; then + # Only stop containers once when cap is first exceeded + if [ "$DATA_CAP_EXCEEDED" = false ]; then + DATA_CAP_EXCEEDED=true + DATA_CAP_PRIOR_USAGE=$total_used + DATA_CAP_BASELINE_RX=$(cat /sys/class/net/${DATA_CAP_IFACE:-$(get_default_iface)}/statistics/rx_bytes 2>/dev/null || echo 0) + DATA_CAP_BASELINE_TX=$(cat /sys/class/net/${DATA_CAP_IFACE:-$(get_default_iface)}/statistics/tx_bytes 2>/dev/null || echo 0) + save_settings + _DATA_CAP_LAST_SAVED=$total_used + for i in $(seq 1 $CONTAINER_COUNT); do + local name=$(get_container_name $i) + docker stop "$name" 2>/dev/null || true + done + fi + return 1 # cap exceeded + else + DATA_CAP_EXCEEDED=false + fi + return 0 +} + +# Format bytes to GB with 2 decimal places +format_gb() { + awk -v b="$1" 'BEGIN{printf "%.2f", b / 1073741824}' +} + +set_data_cap() { + local iface=$(get_default_iface) + echo "" + echo -e "${CYAN}═══ DATA USAGE CAP ═══${NC}" + if [ "$DATA_CAP_GB" -gt 0 ] 2>/dev/null; then + local usage=$(get_data_usage) + local used_rx=$(echo "$usage" | awk '{print $1}') + local used_tx=$(echo "$usage" | awk '{print $2}') + local total_used=$((used_rx + used_tx)) + echo -e " Current cap: ${GREEN}${DATA_CAP_GB} GB${NC}" + echo -e " Used: $(format_gb $total_used) GB" + echo -e " Interface: ${DATA_CAP_IFACE:-$iface}" + else + echo -e " Current cap: ${YELLOW}None${NC}" + echo -e " Interface: $iface" + fi + echo "" + echo " Options:" + echo " 1. Set new data cap" + echo " 2. Reset usage counter" + echo " 3. Remove cap" + echo " 4. Back" + echo "" + read -p " Choice: " cap_choice < /dev/tty || true + + case "$cap_choice" in + 1) + read -p " Enter cap in GB (e.g. 
50): " new_cap < /dev/tty || true + if [[ "$new_cap" =~ ^[0-9]+$ ]] && [ "$new_cap" -gt 0 ]; then + DATA_CAP_GB=$new_cap + DATA_CAP_IFACE=$iface + DATA_CAP_PRIOR_USAGE=0 + # Snapshot current bytes as baseline + DATA_CAP_BASELINE_RX=$(cat /sys/class/net/$iface/statistics/rx_bytes 2>/dev/null || echo 0) + DATA_CAP_BASELINE_TX=$(cat /sys/class/net/$iface/statistics/tx_bytes 2>/dev/null || echo 0) + save_settings + echo -e " ${GREEN}✓ Data cap set to ${new_cap} GB on ${iface}${NC}" + else + echo -e " ${RED}Invalid value.${NC}" + fi + ;; + 2) + DATA_CAP_PRIOR_USAGE=0 + DATA_CAP_BASELINE_RX=$(cat /sys/class/net/${DATA_CAP_IFACE:-$iface}/statistics/rx_bytes 2>/dev/null || echo 0) + DATA_CAP_BASELINE_TX=$(cat /sys/class/net/${DATA_CAP_IFACE:-$iface}/statistics/tx_bytes 2>/dev/null || echo 0) + save_settings + echo -e " ${GREEN}✓ Usage counter reset${NC}" + ;; + 3) + DATA_CAP_GB=0 + DATA_CAP_BASELINE_RX=0 + DATA_CAP_BASELINE_TX=0 + DATA_CAP_PRIOR_USAGE=0 + DATA_CAP_IFACE="" + save_settings + echo -e " ${GREEN}✓ Data cap removed${NC}" + ;; + 4|"") + return + ;; + esac +} + +# Save all settings to file +save_settings() { + cat > "$INSTALL_DIR/settings.conf" << EOF +MAX_CLIENTS=$MAX_CLIENTS +BANDWIDTH=$BANDWIDTH +CONTAINER_COUNT=$CONTAINER_COUNT +DATA_CAP_GB=$DATA_CAP_GB +DATA_CAP_IFACE=$DATA_CAP_IFACE +DATA_CAP_BASELINE_RX=$DATA_CAP_BASELINE_RX +DATA_CAP_BASELINE_TX=$DATA_CAP_BASELINE_TX +DATA_CAP_PRIOR_USAGE=${DATA_CAP_PRIOR_USAGE:-0} +EOF + # Save per-container overrides + for i in $(seq 1 5); do + local mc_var="MAX_CLIENTS_${i}" + local bw_var="BANDWIDTH_${i}" + [ -n "${!mc_var}" ] && echo "${mc_var}=${!mc_var}" >> "$INSTALL_DIR/settings.conf" + [ -n "${!bw_var}" ] && echo "${bw_var}=${!bw_var}" >> "$INSTALL_DIR/settings.conf" + done + chmod 600 "$INSTALL_DIR/settings.conf" 2>/dev/null || true +} + +show_about() { + clear + echo -e "${CYAN}══════════════════════════════════════════════════════════════════${NC}" + echo -e " ${BOLD}ABOUT PSIPHON CONDUIT MANAGER${NC}" + echo -e "${CYAN}══════════════════════════════════════════════════════════════════${NC}" + echo "" + echo -e " ${BOLD}${GREEN}What is Psiphon Conduit?${NC}" + echo -e " Psiphon is a free anti-censorship tool helping millions access" + echo -e " the open internet. Conduit is their ${BOLD}P2P volunteer network${NC}." + echo -e " By running a node, you help users in censored regions connect." 
+ echo "" + echo -e " ${BOLD}${GREEN}How P2P Works${NC}" + echo -e " Unlike centralized VPNs, Conduit is ${CYAN}decentralized${NC}:" + echo -e " ${YELLOW}1.${NC} Your server registers with Psiphon's broker" + echo -e " ${YELLOW}2.${NC} Users discover your node through the P2P network" + echo -e " ${YELLOW}3.${NC} Direct encrypted WebRTC tunnels are established" + echo -e " ${YELLOW}4.${NC} Traffic: ${GREEN}User${NC} <--P2P--> ${CYAN}You${NC} <--> ${YELLOW}Internet${NC}" + echo "" + echo -e " ${BOLD}${GREEN}Technical${NC}" + echo -e " Protocol: WebRTC + DTLS (looks like video calls)" + echo -e " Ports: TCP 443 required | Turbo: UDP 16384-32768" + echo -e " Resources: ~50MB RAM per 100 clients, runs in Docker" + echo "" + echo -e " ${BOLD}${GREEN}Privacy${NC}" + echo -e " ${GREEN}✓${NC} End-to-end encrypted - you can't see user traffic" + echo -e " ${GREEN}✓${NC} No logs stored | Clean uninstall available" + echo "" + echo -e "${CYAN}──────────────────────────────────────────────────────────────────${NC}" + echo -e " ${BOLD}Made by Sam${NC}" + echo -e " GitHub: ${CYAN}https://github.com/SamNet-dev/conduit-manager${NC}" + echo -e " Psiphon: ${CYAN}https://psiphon.ca${NC}" + echo -e "${CYAN}══════════════════════════════════════════════════════════════════${NC}" + echo "" + read -n 1 -s -r -p " Press any key to return..." < /dev/tty || true +} + +show_settings_menu() { local redraw=true while true; do if [ "$redraw" = true ]; then @@ -1976,34 +3157,151 @@ show_menu() { print_header echo -e "${CYAN}─────────────────────────────────────────────────────────────────${NC}" - echo -e "${CYAN} MANAGEMENT OPTIONS${NC}" + echo -e "${CYAN} SETTINGS & TOOLS${NC}" + echo -e "${CYAN}─────────────────────────────────────────────────────────────────${NC}" + echo -e " 1. ⚙️ Change settings (max-clients, bandwidth)" + echo -e " 2. 📊 Set data usage cap" + echo "" + echo -e " 3. 💾 Backup node key" + echo -e " 4. 📥 Restore node key" + echo -e " 5. 🩺 Health check" + echo "" + echo -e " 6. 📱 Show QR Code & Conduit ID" + echo -e " 7. ℹ️ Version info" + echo -e " 8. 📖 About Conduit" + echo "" + echo -e " 9. 🔄 Reset tracker data" + echo -e " u. 🗑️ Uninstall" + echo -e " 0. ← Back to main menu" + echo -e "${CYAN}─────────────────────────────────────────────────────────────────${NC}" + echo "" + redraw=false + fi + + read -p " Enter choice: " choice < /dev/tty || { return; } + + case $choice in + 1) + change_settings + redraw=true + ;; + 2) + set_data_cap + read -n 1 -s -r -p "Press any key to return..." < /dev/tty || true + redraw=true + ;; + 3) + backup_key + read -n 1 -s -r -p "Press any key to return..." < /dev/tty || true + redraw=true + ;; + 4) + restore_key + read -n 1 -s -r -p "Press any key to return..." < /dev/tty || true + redraw=true + ;; + 5) + health_check + read -n 1 -s -r -p "Press any key to return..." < /dev/tty || true + redraw=true + ;; + 6) + show_qr_code + redraw=true + ;; + 7) + show_version + read -n 1 -s -r -p "Press any key to return..." < /dev/tty || true + redraw=true + ;; + 8) + show_about + redraw=true + ;; + 9) + echo "" + read -p "Reset tracker and delete all stats data? (y/n): " confirm < /dev/tty || true + if [[ "$confirm" =~ ^[Yy]$ ]]; then + echo "Stopping tracker service..." + stop_tracker_service 2>/dev/null || true + echo "Deleting tracker data..." + rm -rf /opt/conduit/traffic_stats 2>/dev/null || true + rm -f /opt/conduit/conduit-tracker.sh 2>/dev/null || true + echo "Restarting tracker service..." 
+ regenerate_tracker_script + setup_tracker_service + echo -e "${GREEN}Tracker data has been reset.${NC}" + else + echo "Cancelled." + fi + read -n 1 -s -r -p "Press any key to return..." < /dev/tty || true + redraw=true + ;; + u) + uninstall_all + exit 0 + ;; + 0) + return + ;; + "") + ;; + *) + echo -e "${RED}Invalid choice${NC}" + ;; + esac + done +} + +show_menu() { + # Auto-fix conduit.service if it's in failed state + if command -v systemctl &>/dev/null; then + local svc_state=$(systemctl is-active conduit.service 2>/dev/null) + if [ "$svc_state" = "failed" ]; then + systemctl reset-failed conduit.service 2>/dev/null || true + systemctl restart conduit.service 2>/dev/null || true + fi + fi + + # Auto-start tracker if not running and containers are up + if ! is_tracker_active; then + local any_running=$(docker ps --format '{{.Names}}' 2>/dev/null | grep -c "^conduit") + if [ "${any_running:-0}" -gt 0 ]; then + setup_tracker_service + fi + fi + + local redraw=true + while true; do + if [ "$redraw" = true ]; then + clear + print_header + + echo -e "${CYAN}─────────────────────────────────────────────────────────────────${NC}" + echo -e "${CYAN} MAIN MENU${NC}" echo -e "${CYAN}─────────────────────────────────────────────────────────────────${NC}" echo -e " 1. 📈 View status dashboard" echo -e " 2. 📊 Live connection stats" - echo -e " 3. 📋 View logs (filtered)" - echo -e " 4. ⚙️ Change settings (max-clients, bandwidth)" + echo -e " 3. 📋 View logs" + echo -e " 4. 🌍 Live peers by country" echo "" - echo -e " 5. 🔄 Update Conduit" - echo -e " 6. ▶️ Start Conduit" - echo -e " 7. ⏹️ Stop Conduit" - echo -e " 8. 🔁 Restart Conduit" + echo -e " 5. ▶️ Start Conduit" + echo -e " 6. ⏹️ Stop Conduit" + echo -e " 7. 🔁 Restart Conduit" + echo -e " 8. 🔄 Update Conduit" echo "" - echo -e " 9. 🌍 View live peers by country (Live Map)" - echo "" - echo -e " h. 🩺 Health check" - echo -e " b. 💾 Backup node key" - echo -e " r. 📥 Restore node key" - echo "" - echo -e " u. 🗑️ Uninstall (remove everything)" - echo -e " v. ℹ️ Version info" + echo -e " 9. ⚙️ Settings & Tools" + echo -e " c. 📦 Manage containers" + echo -e " a. 📊 Advanced stats" + echo -e " i. ℹ️ Info & Help" echo -e " 0. 🚪 Exit" echo -e "${CYAN}─────────────────────────────────────────────────────────────────${NC}" echo "" redraw=false fi - + read -p " Enter choice: " choice < /dev/tty || { echo "Input error. Exiting."; exit 1; } - + case $choice in 1) show_dashboard @@ -2018,55 +3316,43 @@ show_menu() { redraw=true ;; 4) - change_settings + show_peers redraw=true ;; 5) - update_conduit - read -n 1 -s -r -p "Press any key to return..." < /dev/tty || true - redraw=true - ;; - 6) start_conduit read -n 1 -s -r -p "Press any key to return..." < /dev/tty || true redraw=true ;; - 7) + 6) stop_conduit read -n 1 -s -r -p "Press any key to return..." < /dev/tty || true redraw=true ;; - 8) + 7) restart_conduit read -n 1 -s -r -p "Press any key to return..." < /dev/tty || true redraw=true ;; + 8) + update_conduit + read -n 1 -s -r -p "Press any key to return..." < /dev/tty || true + redraw=true + ;; 9) - show_peers + show_settings_menu redraw=true ;; - h|H) - health_check - read -n 1 -s -r -p "Press any key to return..." < /dev/tty || true + c) + manage_containers redraw=true ;; - b|B) - backup_key - read -n 1 -s -r -p "Press any key to return..." < /dev/tty || true + a) + show_advanced_stats redraw=true ;; - r|R) - restore_key - read -n 1 -s -r -p "Press any key to return..." 
< /dev/tty || true - redraw=true - ;; - u) - uninstall_all - exit 0 - ;; - v|V) - show_version - read -n 1 -s -r -p "Press any key to return..." < /dev/tty || true + i) + show_info_menu redraw=true ;; 0) @@ -2074,16 +3360,191 @@ show_menu() { exit 0 ;; "") - # Ignore empty Enter key ;; *) echo -e "${RED}Invalid choice: ${NC}${YELLOW}$choice${NC}" - echo -e "${CYAN}Choose an option from 0-9, h, b, r, u, or v.${NC}" ;; esac done } +# Info hub - sub-page menu +show_info_menu() { + local redraw=true + while true; do + if [ "$redraw" = true ]; then + clear + echo -e "${CYAN}══════════════════════════════════════════════════════════════${NC}" + echo -e "${BOLD} INFO & HELP${NC}" + echo -e "${CYAN}══════════════════════════════════════════════════════════════${NC}" + echo "" + echo -e " 1. 📡 How the Tracker Works" + echo -e " 2. 📊 Understanding the Stats Pages" + echo -e " 3. 📦 Containers & Scaling" + echo -e " 4. 🔒 Privacy & Security" + echo -e " 5. 🚀 About Psiphon Conduit" + echo "" + echo -e " [b] Back to menu" + echo -e "${CYAN}══════════════════════════════════════════════════════════════${NC}" + echo "" + redraw=true + fi + read -p " Select page: " info_choice < /dev/tty || break + case "$info_choice" in + 1) _info_tracker; redraw=true ;; + 2) _info_stats; redraw=true ;; + 3) _info_containers; redraw=true ;; + 4) _info_privacy; redraw=true ;; + 5) show_about; redraw=true ;; + b|"") break ;; + *) echo -e " ${RED}Invalid.${NC}"; sleep 1; redraw=true ;; + esac + done +} + +_info_tracker() { + clear + echo -e "${CYAN}══════════════════════════════════════════════════════════════${NC}" + echo -e "${BOLD} HOW THE TRACKER WORKS${NC}" + echo -e "${CYAN}══════════════════════════════════════════════════════════════${NC}" + echo "" + echo -e " ${BOLD}What is it?${NC}" + echo -e " A background systemd service (conduit-tracker.service) that" + echo -e " monitors network traffic on your server using tcpdump." + echo -e " It runs continuously and captures ALL TCP/UDP traffic" + echo -e " (excluding SSH port 22) to track where traffic goes." + echo "" + echo -e " ${BOLD}How it works${NC}" + echo -e " Every 15 seconds the tracker:" + echo -e " ${YELLOW}1.${NC} Captures network packets via tcpdump" + echo -e " ${YELLOW}2.${NC} Extracts source/destination IPs and byte counts" + echo -e " ${YELLOW}3.${NC} Resolves each IP to a country using GeoIP" + echo -e " ${YELLOW}4.${NC} Saves cumulative data to disk" + echo "" + echo -e " ${BOLD}Data files${NC} ${DIM}(in /opt/conduit/traffic_stats/)${NC}" + echo -e " ${CYAN}cumulative_data${NC} - Country traffic totals (bytes in/out)" + echo -e " ${CYAN}cumulative_ips${NC} - All unique IPs ever seen + country" + echo -e " ${CYAN}tracker_snapshot${NC} - Last 15-second cycle (for live views)" + echo "" + echo -e " ${BOLD}Important${NC}" + echo -e " The tracker captures ALL server traffic, not just Conduit." + echo -e " IP counts include system updates, DNS, Docker pulls, etc." + echo -e " This is why unique IP counts are higher than client counts." + echo -e " To reset all data: Settings > Reset tracker data." + echo -e "${CYAN}══════════════════════════════════════════════════════════════${NC}" + echo "" + read -n 1 -s -r -p " Press any key to return..." 
< /dev/tty || true +} + +_info_stats() { + clear + echo -e "${CYAN}══════════════════════════════════════════════════════════════${NC}" + echo -e "${BOLD} UNDERSTANDING THE STATS PAGES${NC}" + echo -e "${CYAN}══════════════════════════════════════════════════════════════${NC}" + echo "" + echo -e " ${BOLD}Unique IPs vs Clients${NC}" + echo -e " ${YELLOW}IPs${NC} = Total unique IP addresses seen in ALL network" + echo -e " traffic. Includes non-Conduit traffic (system" + echo -e " updates, DNS, Docker, etc). Always higher." + echo -e " ${GREEN}Clients${NC} = Actual Psiphon peers connected to your Conduit" + echo -e " containers. Comes from Docker logs. This is" + echo -e " the real number of people you are helping." + echo "" + echo -e " ${BOLD}Dashboard (option 1)${NC}" + echo -e " Shows status, resources, traffic totals, and two" + echo -e " side-by-side TOP 5 charts:" + echo -e " ${GREEN}Active Clients${NC} - Estimated clients per country" + echo -e " ${YELLOW}Top Upload${NC} - Countries you upload most to" + echo "" + echo -e " ${BOLD}Live Peers (option 4)${NC}" + echo -e " Full-page traffic breakdown by country. Shows:" + echo -e " Total bytes, Speed (KB/s), IPs / Clients per country" + echo -e " Client counts are estimated from the snapshot" + echo -e " distribution scaled to actual connected count." + echo "" + echo -e " ${BOLD}Advanced Stats (a)${NC}" + echo -e " Container resources (CPU, RAM, clients, bandwidth)," + echo -e " network speed, tracker status, and TOP 7 charts" + echo -e " for unique IPs, download, and upload by country." + echo -e "${CYAN}══════════════════════════════════════════════════════════════${NC}" + echo "" + read -n 1 -s -r -p " Press any key to return..." < /dev/tty || true +} + +_info_containers() { + clear + echo -e "${CYAN}══════════════════════════════════════════════════════════════${NC}" + echo -e "${BOLD} CONTAINERS & SCALING${NC}" + echo -e "${CYAN}══════════════════════════════════════════════════════════════${NC}" + echo "" + echo -e " ${BOLD}What are containers?${NC}" + echo -e " Each container is an independent Conduit node running" + echo -e " in Docker. Multiple containers let you serve more" + echo -e " clients simultaneously from the same server." + echo "" + echo -e " ${BOLD}Naming${NC}" + echo -e " Container 1: ${CYAN}conduit${NC} Volume: ${CYAN}conduit-data${NC}" + echo -e " Container 2: ${CYAN}conduit-2${NC} Volume: ${CYAN}conduit-data-2${NC}" + echo -e " Container 3: ${CYAN}conduit-3${NC} Volume: ${CYAN}conduit-data-3${NC}" + echo -e " ...up to 5 containers." + echo "" + echo -e " ${BOLD}Scaling recommendations${NC}" + echo -e " ${YELLOW}1 CPU / <1GB RAM:${NC} Stick with 1 container" + echo -e " ${YELLOW}2 CPUs / 2GB RAM:${NC} 1-2 containers" + echo -e " ${GREEN}4+ CPUs / 4GB RAM:${NC} 3-5 containers" + echo -e " Each container uses ~50MB RAM per 100 clients." + echo "" + echo -e " ${BOLD}Per-container settings${NC}" + echo -e " You can set different max-clients and bandwidth for" + echo -e " each container in Settings > Change settings. Choose" + echo -e " 'Apply to specific container' to customize individually." + echo "" + echo -e " ${BOLD}Managing${NC}" + echo -e " Use Manage Containers (c) to add/remove containers," + echo -e " start/stop individual ones, or view per-container stats." + echo -e " Each container has its own volume (identity key)." + echo -e "${CYAN}══════════════════════════════════════════════════════════════${NC}" + echo "" + read -n 1 -s -r -p " Press any key to return..." 
< /dev/tty || true +} + +_info_privacy() { + clear + echo -e "${CYAN}══════════════════════════════════════════════════════════════${NC}" + echo -e "${BOLD} PRIVACY & SECURITY${NC}" + echo -e "${CYAN}══════════════════════════════════════════════════════════════${NC}" + echo "" + echo -e " ${BOLD}Is my traffic visible?${NC}" + echo -e " ${GREEN}No.${NC} All Conduit traffic is end-to-end encrypted using" + echo -e " WebRTC + DTLS. You cannot see what users are browsing." + echo -e " The connection looks like a regular video call." + echo "" + echo -e " ${BOLD}What data is stored?${NC}" + echo -e " Conduit Manager stores:" + echo -e " ${GREEN}Node identity key${NC} - Your unique node ID (in Docker volume)" + echo -e " ${GREEN}Settings${NC} - Max clients, bandwidth, container count" + echo -e " ${GREEN}Tracker stats${NC} - Country-level traffic aggregates" + echo -e " ${RED}No${NC} user browsing data, IP logs, or personal info is stored." + echo "" + echo -e " ${BOLD}What can the tracker see?${NC}" + echo -e " The tracker only records:" + echo -e " - Which countries connect (via GeoIP lookup)" + echo -e " - How many bytes flow in/out per country" + echo -e " - Total unique IP addresses (not logged individually)" + echo -e " It cannot see URLs, content, or decrypt any traffic." + echo "" + echo -e " ${BOLD}Uninstall${NC}" + echo -e " Full uninstall (option 9 > Uninstall) removes:" + echo -e " - All containers and Docker volumes" + echo -e " - Tracker service and all stats data" + echo -e " - Settings, systemd service files" + echo -e " - The conduit command itself" + echo -e " Nothing is left behind on your system." + echo -e "${CYAN}══════════════════════════════════════════════════════════════${NC}" + echo "" + read -n 1 -s -r -p " Press any key to return..." < /dev/tty || true +} + # Command line interface show_help() { echo "Usage: conduit [command]" @@ -2098,11 +3559,13 @@ show_help() { echo " restart Restart Conduit container" echo " update Update to latest Conduit image" echo " settings Change max-clients/bandwidth" + echo " scale Scale containers (1-5)" echo " backup Backup Conduit node identity key" echo " restore Restore Conduit node identity from backup" echo " uninstall Remove everything (container, data, service)" echo " menu Open interactive menu (default)" echo " version Show version information" + echo " about About Psiphon Conduit" echo " help Show this help" } @@ -2134,90 +3597,112 @@ health_check() { all_ok=false fi - # 2. Check if container exists - echo -n "Container exists: " - if docker ps -a 2>/dev/null | grep -q "[[:space:]]conduit$"; then - echo -e "${GREEN}OK${NC}" - else - echo -e "${RED}FAILED${NC} - Container not found" - all_ok=false - fi + # 2-5. Check each container + for i in $(seq 1 $CONTAINER_COUNT); do + local cname=$(get_container_name $i) + local vname=$(get_volume_name $i) - # 3. Check if container is running - echo -n "Container running: " - if docker ps 2>/dev/null | grep -q "[[:space:]]conduit$"; then - echo -e "${GREEN}OK${NC}" - else - echo -e "${RED}FAILED${NC} - Container is stopped" - all_ok=false - fi + if [ "$CONTAINER_COUNT" -gt 1 ]; then + echo "" + echo -e "${CYAN}--- ${cname} ---${NC}" + fi - # 4. 
Check container health/restart count - echo -n "Restart count: " - local restarts=$(docker inspect --format='{{.RestartCount}}' conduit 2>/dev/null) - if [ -n "$restarts" ]; then - if [ "$restarts" -eq 0 ]; then - echo -e "${GREEN}${restarts}${NC} (healthy)" - elif [ "$restarts" -lt 5 ]; then - echo -e "${YELLOW}${restarts}${NC} (some restarts)" + echo -n "Container exists: " + if docker ps -a 2>/dev/null | grep -q "[[:space:]]${cname}$"; then + echo -e "${GREEN}OK${NC}" else - echo -e "${RED}${restarts}${NC} (excessive restarts)" + echo -e "${RED}FAILED${NC} - Container not found" all_ok=false fi - else - echo -e "${YELLOW}N/A${NC}" - fi - # 5. Check if Conduit has connected to network - echo -n "Network connection: " - local connected=$(docker logs --tail 100 conduit 2>&1 | grep -c "Connected to Psiphon" || true) - if [ "$connected" -gt 0 ]; then - echo -e "${GREEN}OK${NC} (Connected to Psiphon network)" - else - local info_lines=$(docker logs --tail 100 conduit 2>&1 | grep -c "\[INFO\]" || true) - if [ "$info_lines" -gt 0 ]; then - echo -e "${YELLOW}CONNECTING${NC} - Establishing connection..." + echo -n "Container running: " + if docker ps 2>/dev/null | grep -q "[[:space:]]${cname}$"; then + echo -e "${GREEN}OK${NC}" else - echo -e "${YELLOW}WAITING${NC} - Starting up..." + echo -e "${RED}FAILED${NC} - Container is stopped" + all_ok=false fi - fi - # 5b. Check if STATS output is enabled (requires -v flag) - echo -n "Stats output: " - local stats_count=$(docker logs --tail 100 conduit 2>&1 | grep -c "\[STATS\]" || true) - if [ "$stats_count" -gt 0 ]; then - echo -e "${GREEN}OK${NC} (${stats_count} entries)" - else - echo -e "${YELLOW}NONE${NC} - Run 'conduit restart' to enable" - fi + echo -n "Restart count: " + local restarts=$(docker inspect --format='{{.RestartCount}}' "$cname" 2>/dev/null) + if [ -n "$restarts" ]; then + if [ "$restarts" -eq 0 ]; then + echo -e "${GREEN}${restarts}${NC} (healthy)" + elif [ "$restarts" -lt 5 ]; then + echo -e "${YELLOW}${restarts}${NC} (some restarts)" + else + echo -e "${RED}${restarts}${NC} (excessive restarts)" + all_ok=false + fi + else + echo -e "${YELLOW}N/A${NC}" + fi - # 6. Check data volume - echo -n "Data volume: " - if docker volume inspect conduit-data &>/dev/null; then - echo -e "${GREEN}OK${NC}" - else - echo -e "${RED}FAILED${NC} - Volume not found" - all_ok=false - fi + echo -n "Network connection: " + local connected=$(docker logs --tail 100 "$cname" 2>&1 | grep -c "Connected to Psiphon" || true) + if [ "$connected" -gt 0 ]; then + echo -e "${GREEN}OK${NC} (Connected to Psiphon network)" + else + local info_lines=$(docker logs --tail 100 "$cname" 2>&1 | grep -c "\[INFO\]" || true) + if [ "$info_lines" -gt 0 ]; then + echo -e "${YELLOW}CONNECTING${NC} - Establishing connection..." + else + echo -e "${YELLOW}WAITING${NC} - Starting up..." + fi + fi - # 7. 
Check node key exists + echo -n "Stats output: " + local stats_count=$(docker logs --tail 100 "$cname" 2>&1 | grep -c "\[STATS\]" || true) + if [ "$stats_count" -gt 0 ]; then + echo -e "${GREEN}OK${NC} (${stats_count} entries)" + else + echo -e "${YELLOW}NONE${NC} - Run 'conduit restart' to enable" + fi + + echo -n "Data volume: " + if docker volume inspect "$vname" &>/dev/null; then + echo -e "${GREEN}OK${NC}" + else + echo -e "${RED}FAILED${NC} - Volume not found" + all_ok=false + fi + + echo -n "Network (host mode): " + local network_mode=$(docker inspect --format='{{.HostConfig.NetworkMode}}' "$cname" 2>/dev/null) + if [ "$network_mode" = "host" ]; then + echo -e "${GREEN}OK${NC}" + else + echo -e "${YELLOW}WARN${NC} - Not using host network mode" + fi + done + + # Node key check (only on first volume) + if [ "$CONTAINER_COUNT" -gt 1 ]; then + echo "" + echo -e "${CYAN}--- Shared ---${NC}" + fi echo -n "Node identity key: " local mountpoint=$(docker volume inspect conduit-data --format '{{ .Mountpoint }}' 2>/dev/null) + local key_found=false if [ -n "$mountpoint" ] && [ -f "$mountpoint/conduit_key.json" ]; then + key_found=true + else + # Snap Docker fallback: check via docker cp + local tmp_ctr="conduit-health-tmp" + docker rm -f "$tmp_ctr" 2>/dev/null || true + if docker create --name "$tmp_ctr" -v conduit-data:/data alpine true 2>/dev/null; then + if docker cp "$tmp_ctr:/data/conduit_key.json" - >/dev/null 2>&1; then + key_found=true + fi + docker rm -f "$tmp_ctr" 2>/dev/null || true + fi + fi + if [ "$key_found" = true ]; then echo -e "${GREEN}OK${NC}" else echo -e "${YELLOW}PENDING${NC} - Will be created on first run" fi - # 8. Check network connectivity (port binding) - echo -n "Network (host mode): " - local network_mode=$(docker inspect --format='{{.HostConfig.NetworkMode}}' conduit 2>/dev/null) - if [ "$network_mode" = "host" ]; then - echo -e "${GREEN}OK${NC}" - else - echo -e "${YELLOW}WARN${NC} - Not using host network mode" - fi - echo "" if [ "$all_ok" = true ]; then echo -e "${GREEN}✓ All health checks passed${NC}" @@ -2232,18 +3717,6 @@ backup_key() { echo -e "${CYAN}═══ BACKUP CONDUIT NODE KEY ═══${NC}" echo "" - local mountpoint=$(docker volume inspect conduit-data --format '{{ .Mountpoint }}' 2>/dev/null) - - if [ -z "$mountpoint" ]; then - echo -e "${RED}Error: Could not find conduit-data volume${NC}" - return 1 - fi - - if [ ! -f "$mountpoint/conduit_key.json" ]; then - echo -e "${RED}Error: No node key found. Has Conduit been started at least once?${NC}" - return 1 - fi - # Create backup directory mkdir -p "$INSTALL_DIR/backups" @@ -2251,11 +3724,30 @@ backup_key() { local timestamp=$(date '+%Y%m%d_%H%M%S') local backup_file="$INSTALL_DIR/backups/conduit_key_${timestamp}.json" - cp "$mountpoint/conduit_key.json" "$backup_file" + # Try direct mountpoint access first, fall back to docker cp (Snap Docker) + local mountpoint=$(docker volume inspect conduit-data --format '{{ .Mountpoint }}' 2>/dev/null) + + if [ -n "$mountpoint" ] && [ -f "$mountpoint/conduit_key.json" ]; then + if ! cp "$mountpoint/conduit_key.json" "$backup_file"; then + echo -e "${RED}Error: Failed to copy key file${NC}" + return 1 + fi + else + # Use docker cp fallback (works with Snap Docker) + local tmp_ctr="conduit-backup-tmp" + docker create --name "$tmp_ctr" -v conduit-data:/data alpine true 2>/dev/null || true + if ! docker cp "$tmp_ctr:/data/conduit_key.json" "$backup_file" 2>/dev/null; then + docker rm -f "$tmp_ctr" 2>/dev/null || true + echo -e "${RED}Error: No node key found. 
Has Conduit been started at least once?${NC}" + return 1 + fi + docker rm -f "$tmp_ctr" 2>/dev/null || true + fi + chmod 600 "$backup_file" # Get node ID for display - local node_id=$(cat "$mountpoint/conduit_key.json" | grep "privateKeyBase64" | awk -F'"' '{print $4}' | base64 -d 2>/dev/null | tail -c 32 | base64 | tr -d '=\n') + local node_id=$(cat "$backup_file" | grep "privateKeyBase64" | awk -F'"' '{print $4}' | base64 -d 2>/dev/null | tail -c 32 | base64 | tr -d '=\n') echo -e "${GREEN}✓ Backup created successfully${NC}" echo "" @@ -2278,7 +3770,7 @@ restore_key() { local backup_dir="$INSTALL_DIR/backups" # Check if backup directory exists and has files - if [ ! -d "$backup_dir" ] || [ -z "$(ls -A $backup_dir/*.json 2>/dev/null)" ]; then + if [ ! -d "$backup_dir" ] || [ -z "$(ls -A "$backup_dir"/*.json 2>/dev/null)" ]; then echo -e "${YELLOW}No backups found in ${backup_dir}${NC}" echo "" echo "To restore from a custom path, provide the file path:" @@ -2294,7 +3786,7 @@ restore_key() { return 1 fi - backup_file="$custom_path" + local backup_file="$custom_path" else # List available backups echo "Available backups:" @@ -2334,36 +3826,56 @@ restore_key() { return 0 fi - # Stop container + # Stop all containers echo "" echo "Stopping Conduit..." - docker stop conduit 2>/dev/null || true + stop_conduit - # Get volume mountpoint + # Try direct mountpoint access, fall back to docker cp (Snap Docker) local mountpoint=$(docker volume inspect conduit-data --format '{{ .Mountpoint }}' 2>/dev/null) + local use_docker_cp=false - if [ -z "$mountpoint" ]; then - echo -e "${RED}Error: Could not find conduit-data volume${NC}" - return 1 + if [ -z "$mountpoint" ] || [ ! -d "$mountpoint" ]; then + use_docker_cp=true fi # Backup current key if exists - if [ -f "$mountpoint/conduit_key.json" ]; then + if [ "$use_docker_cp" = "true" ]; then local timestamp=$(date '+%Y%m%d_%H%M%S') mkdir -p "$backup_dir" - cp "$mountpoint/conduit_key.json" "$backup_dir/conduit_key_pre_restore_${timestamp}.json" - echo " Current key backed up to: conduit_key_pre_restore_${timestamp}.json" + local tmp_ctr="conduit-restore-tmp" + docker create --name "$tmp_ctr" -v conduit-data:/data alpine true 2>/dev/null || true + if docker cp "$tmp_ctr:/data/conduit_key.json" "$backup_dir/conduit_key_pre_restore_${timestamp}.json" 2>/dev/null; then + echo " Current key backed up to: conduit_key_pre_restore_${timestamp}.json" + fi + # Copy new key in + if ! docker cp "$backup_file" "$tmp_ctr:/data/conduit_key.json" 2>/dev/null; then + docker rm -f "$tmp_ctr" 2>/dev/null || true + echo -e "${RED}Error: Failed to copy key into container volume${NC}" + return 1 + fi + docker rm -f "$tmp_ctr" 2>/dev/null || true + # Fix ownership + docker run --rm -v conduit-data:/data alpine chown 1000:1000 /data/conduit_key.json 2>/dev/null || true + else + if [ -f "$mountpoint/conduit_key.json" ]; then + local timestamp=$(date '+%Y%m%d_%H%M%S') + mkdir -p "$backup_dir" + cp "$mountpoint/conduit_key.json" "$backup_dir/conduit_key_pre_restore_${timestamp}.json" + echo " Current key backed up to: conduit_key_pre_restore_${timestamp}.json" + fi + if ! cp "$backup_file" "$mountpoint/conduit_key.json"; then + echo -e "${RED}Error: Failed to copy key to volume${NC}" + return 1 + fi + chmod 600 "$mountpoint/conduit_key.json" fi - # Restore the key - cp "$backup_file" "$mountpoint/conduit_key.json" - chmod 600 "$mountpoint/conduit_key.json" - - # Restart container + # Restart all containers echo "Starting Conduit..." 
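+ # start_conduit brings every configured container back up; the restored
+ # key lives in the first container's volume (conduit-data)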
- docker start conduit 2>/dev/null + start_conduit - local node_id=$(cat "$mountpoint/conduit_key.json" | grep "privateKeyBase64" | awk -F'"' '{print $4}' | base64 -d 2>/dev/null | tail -c 32 | base64 | tr -d '=\n') + local node_id=$(cat "$backup_file" | grep "privateKeyBase64" | awk -F'"' '{print $4}' | base64 -d 2>/dev/null | tail -c 32 | base64 | tr -d '=\n') echo "" echo -e "${GREEN}✓ Node key restored successfully${NC}" @@ -2379,33 +3891,30 @@ update_conduit() { # Check for updates by pulling echo "Checking for updates..." - if ! docker pull $CONDUIT_IMAGE 2>/dev/null; then + if ! docker pull "$CONDUIT_IMAGE" 2>/dev/null; then echo -e "${RED}Failed to check for updates. Check your internet connection.${NC}" return 1 fi echo "" - echo "Recreating container with updated image..." + echo "Recreating container(s) with updated image..." - # Save if container was running - local was_running=false - if docker ps 2>/dev/null | grep -q "[[:space:]]conduit$"; then - was_running=true - fi - - # Remove old container - docker rm -f conduit 2>/dev/null || true + # Remove and recreate all containers + for i in $(seq 1 $CONTAINER_COUNT); do + local name=$(get_container_name $i) + docker rm -f "$name" 2>/dev/null || true + done fix_volume_permissions - run_conduit_container - - if [ $? -eq 0 ]; then - echo -e "${GREEN}✓ Conduit updated and restarted${NC}" - else - echo -e "${RED}✗ Failed to start updated container${NC}" - return 1 - fi + for i in $(seq 1 $CONTAINER_COUNT); do + run_conduit_container $i + if [ $? -eq 0 ]; then + echo -e "${GREEN}✓ $(get_container_name $i) updated and restarted${NC}" + else + echo -e "${RED}✗ Failed to start $(get_container_name $i)${NC}" + fi + done } case "${1:-menu}" in @@ -2421,6 +3930,8 @@ case "${1:-menu}" in settings) change_settings ;; backup) backup_key ;; restore) restore_key ;; + scale) manage_containers ;; + about) show_about ;; uninstall) uninstall_all ;; version|-v|--version) show_version ;; help|-h|--help) show_help ;; @@ -2491,7 +4002,7 @@ print_summary() { uninstall() { echo "" echo -e "${CYAN}╔═══════════════════════════════════════════════════════════════════╗${NC}" - echo "║ ⚠️ UNINSTALL CONDUIT ║" + echo "║ ⚠️ UNINSTALL CONDUIT " echo "╚═══════════════════════════════════════════════════════════════════╝" echo "" echo "This will completely remove:" @@ -2512,18 +4023,19 @@ uninstall() { fi echo "" - log_info "Stopping Conduit container..." - docker stop conduit 2>/dev/null || true - - log_info "Removing Conduit container..." - docker rm -f conduit 2>/dev/null || true - + log_info "Stopping Conduit container(s)..." + for i in 1 2 3 4 5; do + local cname="conduit" + local vname="conduit-data" + [ "$i" -gt 1 ] && cname="conduit-${i}" && vname="conduit-data-${i}" + docker stop "$cname" 2>/dev/null || true + docker rm -f "$cname" 2>/dev/null || true + docker volume rm "$vname" 2>/dev/null || true + done + log_info "Removing Conduit Docker image..." docker rmi "$CONDUIT_IMAGE" 2>/dev/null || true - log_info "Removing Conduit data volume..." - docker volume rm conduit-data 2>/dev/null || true - log_info "Removing auto-start service..." # Systemd systemctl stop conduit.service 2>/dev/null || true @@ -2539,7 +4051,7 @@ uninstall() { rm -f /etc/init.d/conduit log_info "Removing configuration files..." 
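+ # Only attempt removal when INSTALL_DIR is non-empty (safety guard)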
- rm -rf "$INSTALL_DIR" + [ -n "$INSTALL_DIR" ] && rm -rf "$INSTALL_DIR" rm -f /usr/local/bin/conduit echo "" @@ -2602,7 +4114,7 @@ main() { check_dependencies # Check if already installed - if [ -f "$INSTALL_DIR/conduit" ] && [ "$FORCE_REINSTALL" != "true" ]; then + while [ -f "$INSTALL_DIR/conduit" ] && [ "$FORCE_REINSTALL" != "true" ]; do echo -e "${GREEN}Conduit is already installed!${NC}" echo "" echo "What would you like to do?" @@ -2613,7 +4125,7 @@ main() { echo " 0. 🚪 Exit" echo "" read -p " Enter choice: " choice < /dev/tty || true - + case $choice in 1) echo -e "${CYAN}Updating management script and opening menu...${NC}" @@ -2623,6 +4135,7 @@ main() { 2) echo "" log_info "Starting fresh reinstall..." + break ;; 3) uninstall @@ -2636,10 +4149,9 @@ main() { echo -e "${RED}Invalid choice: ${NC}${YELLOW}$choice${NC}" echo -e "${CYAN}Returning to installer...${NC}" sleep 1 - main "$@" ;; esac - fi + done # Interactive settings prompt (max-clients, bandwidth) prompt_settings @@ -2661,19 +4173,26 @@ main() { # Step 2: Check for and optionally restore backup keys # This preserves node identity if user had a previous installation log_info "Step 2/5: Checking for previous node identity..." - check_and_offer_backup_restore + check_and_offer_backup_restore || true echo "" # Step 3: Start Conduit container log_info "Step 3/5: Starting Conduit..." + # Clean up any existing containers from previous install/scaling + docker stop conduit 2>/dev/null || true + docker rm -f conduit 2>/dev/null || true + for i in 2 3 4 5; do + docker stop "conduit-${i}" 2>/dev/null || true + docker rm -f "conduit-${i}" 2>/dev/null || true + done run_conduit echo "" # Step 4: Save settings and configure auto-start service log_info "Step 4/5: Setting up auto-start..." - save_settings + save_settings_install setup_autostart echo "" @@ -2690,6 +4209,6 @@ main() { fi } # -# REACHED END OF SCRIPT - VERSION 1.0.2 +# REACHED END OF SCRIPT - VERSION 1.1 # ############################################################################### main "$@"