The Linux Vault: Main Terminal

Welcome to your central repository for Linux mastery. This vault contains everything you need to manage your Home Lab and master the ASIX modules.


Global Cheat Sheet

These are the commands you'll use every single day. Keep this table within arm's reach!

| Category | Goal | Command |
| --- | --- | --- |
| Files | List All | ls -la |
| Files | Change Dir | cd <path> |
| Files | Print Working Dir | pwd |
| Network | Check IP | ip addr |
| System | Check RAM | free -m |
| System | Check Disk | df -h |
| Admin | Update All | sudo apt update && sudo apt upgrade |
| Scripts | Create .sh file | nano file.sh |
| Scripts | Make .sh file executable | chmod +x file.sh |
| Scripts | Run .sh file | ./file.sh |

Navigating the Modules

Use the sidebar on the left to dive into specific expertise:

  • ISO: Process management, boot sequences, and Docker magic.
  • PAR: How to build the network "pipes" that everything runs on.
  • SRI: Setting up DNS, Web Servers, and fixing common service bugs.
  • SAD: Locking down your server and ensuring your data survives anything.

Pro-Tip

Use the browser's native search (Ctrl+F) to quickly filter through any page. It's the fastest way to find that one command you forgot.

Vault Guide

Welcome to the ASIX Vault! This guide explains how this documentation platform works, how to navigate it, and the underlying commands that power it.

Navigating the Vault

  • Sidebar: Use the left sidebar to access different modules (ISO, PAR, SRI, SAD). Click on group titles to expand sub-topics.
  • Search: Press Ctrl+F (or Cmd+F) to search through the entire vault instantly.

How It Works (Under the Hood)

The Vault is designed to be lightweight, incredibly fast, and easy to host anywhere. It doesn't require a complex database or backend framework.

The Stack

  • Frontend: Pure Vanilla HTML and CSS (with soft Neumorphic design and glassmorphism). ZERO JavaScript required!
  • Content: All knowledge is baked directly into this highly optimized static page.
  • Server: Hosted using a high-performance Nginx server running inside a Docker container.

Core Commands Used

Here is a breakdown of how the Vault is hosted and managed using Docker:

1. Hosting with Docker & Nginx

This Vault runs robustly via a Docker container named knowledge-web. The website directory is bound directly to Nginx, serving it securely on port 80:

docker run -d --name knowledge-web \
  -v /home/turu-server/knolege-web-antigravity:/usr/share/nginx/html:ro \
  -p 80:80 \
  nginx:latest

Because the directory is mapped (-v), any changes you save to the Markdown files or CSS are instantly reflected on the live site!

2. Managing the Server

You can manage the web server lifecycle with standard Docker commands:

# Stop the Vault
docker stop knowledge-web

# Start it back up
docker start knowledge-web

# Check web traffic logs
docker logs knowledge-web

Troubleshooting & Reliability

If you find the website is "down" or the launcher isn't responding, here is why that usually happens and how we've fixed it.

Common Issues

  • Terminal Closed: Previously, the manual Python server would die if you closed the terminal window. We have shifted to Docker to ensure the site stays up in the background.
  • Path Mismatches: If the folder was moved or the user changed, the desktop shortcut might break. We have updated the paths to point correctly to /home/turu-server/.
  • Port Confusion: The site is now standardized on Port 80. If you try to access it on 8080 and it's not there, the new launcher will automatically redirect you to the working Docker version on Port 80.

The "Smart Launcher" Solution

The start_vault.sh script has been upgraded with a Docker-First logic. When you run it:

  1. It checks if the knowledge-web container is already running.
  2. If it's stopped, it automatically runs docker start knowledge-web for you.
  3. It only uses the temporary Python server (port 8080) as a last resort if Docker is unavailable.
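The three steps above can be sketched as a short script. This is an illustrative reconstruction, not the actual start_vault.sh: only the container name knowledge-web and the port numbers come from this page, and the fallback branch prints the command instead of launching the server.

```shell
#!/bin/bash
# Docker-First launcher logic (sketch; the real script launches the Python server instead of printing)
if docker ps --format '{{.Names}}' 2>/dev/null | grep -qx knowledge-web; then
  MSG="Vault already running on port 80."
elif docker start knowledge-web >/dev/null 2>&1; then
  MSG="Vault container started on port 80."
else
  MSG="Docker unavailable, falling back to: python3 -m http.server 8080"
fi
echo "$MSG"
```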

Quick Fix Command

If things feel stuck, you can always force a restart from your terminal:

# Navigate to the project
cd /home/turu-server/knolege-web-antigravity

# Run the smart launcher
./start_vault.sh

ISO: Boot & Targets

From the moment you press the power button, Linux follows a strict sequence to bring the system online. Understanding this helps you troubleshoot when a server refuses to boot.


Quick Reference: Boot Management

| Goal | Command |
| --- | --- |
| Check Boot Logs | journalctl -b |
| Check Current Target | systemctl get-default |
| Change to CLI Mode | sudo systemctl set-default multi-user.target |
| Change to GUI Mode | sudo systemctl set-default graphical.target |

🧠 The Linux Boot Sequence

When you flip the switch, this is the chain reaction:

  1. BIOS/UEFI: "Is the hardware okay? Good. Where is the bootloader?"
  2. Bootloader (GRUB2): Loads the Linux Kernel into memory.
  3. Kernel: "I'm in charge now. Initializing CPU, memory, and devices."
  4. Init System (Systemd): The very first process (PID 1). It starts all other services (networking, SSH, web servers).

Systemd Targets (Runlevels)

In older Linux systems, we used "Runlevels" (0 through 6) to define the state of the machine. Modern Linux uses Targets. A target is just a group of services that should be running.

  • multi-user.target: Text-only terminal mode. This is what servers use. It saves RAM and CPU by not loading a desktop environment.
  • graphical.target: UI mode (GNOME, KDE). Used for desktop computers.
  • rescue.target: Single-user mode for fixing broken systems.
Shortcut

Switching on the Fly
You don't have to reboot to change modes! You can run sudo systemctl isolate multi-user.target to instantly kill the GUI and drop to a terminal.

Processes & Bash Scripting

Linux is a multi-tasking OS. Everything running is a "Process" with a unique Process ID (PID).


Quick Reference: Process Management

| Goal | Command |
| --- | --- |
| Who's Hogging RAM? | top or htop |
| Kill a Stuck App | kill -9 <PID> |
| List All Running | ps aux |
| Find a Process | pgrep <name> |

🧠 Managing the Chaos

Priority (Niceness)

Some processes are more important than others. In Linux, priority is called "Niceness" (from -20 to 19).

  • A negative number means "I am NOT nice, give me all the CPU."
  • A positive number means "I am nice, I will wait my turn."

nice -n 10 <command> starts a process with low priority.
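You can see niceness in action with a throwaway process. The sleep command here is just a stand-in for real work:

```shell
# Launch a low-priority background job and read its niceness back with ps
nice -n 10 sleep 30 &
PID=$!
NI=$(ps -o ni= -p "$PID" | tr -d ' ')
echo "PID $PID is running with niceness $NI"
kill "$PID"
```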


🚀 Hands-on: Bash Scripting

Why type something twice when you can script it once? Automation is the heart of system administration.

The "Save My Skin" Backup Script

Create a file called backup.sh:

#!/bin/bash
# Backup the config files
BACKUP_DIR="/backup"
DATE=$(date +%Y-%m-%d)

# Create the backup directory if it doesn't exist
mkdir -p "$BACKUP_DIR"

# Compress the /etc folder
tar -czf "$BACKUP_DIR/etc-backup-$DATE.tar.gz" /etc
echo "Backup completed successfully!"

Making it Executable

Linux won't run a file just because it ends in .sh. You must give it execute permissions:

chmod +x backup.sh
./backup.sh
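Before trusting any backup script, rehearse the same tar pattern on a scratch directory and list the archive's contents with tar -t. The /tmp/demo paths below exist only for this rehearsal:

```shell
# Rehearse the backup pattern on disposable files
mkdir -p /tmp/demo-src /tmp/demo-backup
echo "color=blue" > /tmp/demo-src/app.conf
DATE=$(date +%Y-%m-%d)
tar -czf "/tmp/demo-backup/etc-backup-$DATE.tar.gz" -C /tmp demo-src

# List what actually went into the archive
tar -tzf "/tmp/demo-backup/etc-backup-$DATE.tar.gz"
```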
Shortcut

The Shebang (#!/bin/bash)
That first line isn't a comment—it's an instruction to the OS telling it which interpreter to use to run the script. Never skip it!
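You can prove the shebang is doing the work with a two-line script: the kernel reads the #!/bin/bash line and hands the file to Bash, which is why $BASH_VERSION is set inside it. The /tmp path is just for the demo:

```shell
# Write a script whose shebang selects Bash, then execute it directly
cat > /tmp/which-shell.sh <<'EOF'
#!/bin/bash
echo "Interpreted by Bash $BASH_VERSION"
EOF
chmod +x /tmp/which-shell.sh
/tmp/which-shell.sh
```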

Static IP Configuration

In a home lab, your servers need a Static IP. If their IP changes, your services break, your DNS records point to the wrong place, and your SSH connections fail.


Quick Reference: Network Commands

| Goal | Command |
| --- | --- |
| Check My IP | ip addr show |
| View Routing Table | ip route show |
| Test Connectivity | ping -c 4 1.1.1.1 |
| Network Config UI | sudo nmtui |
| Restart Network | sudo systemctl restart NetworkManager |

🚀 Hands-on: Setting a Static IP via nmtui

While modern Linux distributions use various network managers under the hood (like Netplan on Ubuntu), NetworkManager's text UI (nmtui) is a simple, widely available visual tool that works on Fedora and most Ubuntu server deployments.

  1. Launch the UI:
    sudo nmtui
  2. Edit Connection:
    Select Edit a connection and press Enter. Choose your primary network interface (e.g., eth0 or enp3s0).
  3. Manual Mode:
    Scroll down to IPv4 CONFIGURATION and change it from <Automatic> (DHCP) to <Manual>.
  4. Punch in the Data:
    • Addresses: Add your desired static IP and subnet (e.g., 192.168.1.50/24).
    • Gateway: Your router's IP (usually 192.168.1.1).
    • DNS servers: 1.1.1.1 or 8.8.8.8.
  5. Save & Activate:
    Scroll down to the bottom and select <OK>. Exit the tool.
  6. Apply the Changes:
    You must restart the connection for the changes to take effect:
    sudo nmcli connection up <interface_name>
Shortcut

Trust but Verify
Always run ip addr after you think you're done. If the IP didn't change, the config didn't "take". If you lose internet, ping your gateway (ping 192.168.1.1) first to see if you are even connected to your local router!

Networking Theory

Understanding how data travels between machines isn't just theory—it's what keeps your services from falling into the void.


🧠 How Data Actually Moves (The OSI Rabbit Hole)

The OSI model is a theoretical 7-layer cake. You don't need to be an expert in all of them, but you should know where the trouble usually lives:

  • Layer 7 (Application): HTTP, DNS, FTP. This is the web page you see.
  • Layer 4 (Transport): TCP vs UDP.
  • Layer 3 (Network): Routers and IP addresses live here.
  • Layer 2 (Data Link): Switches and MAC addresses.

TCP vs UDP

  • TCP: Like a registered letter—it establishes a connection (3-way handshake) and makes sure data arrives in order. Great for web and files.
  • UDP: Like shouting at a crowd—it's fast, but some people might not hear you. Great for gaming and video streaming.

💎 IP Addressing & Subnetting Pro-Tips

An IPv4 address (like 192.168.1.10) is a 32-bit number. But it's just a name. The Subnet Mask (like 255.255.255.0 or /24) defines who your "neighbors" are.

  • Network Address: The start of the block (e.g., 192.168.1.0). Identifies the network itself.
  • Broadcast Address: The shout-to-everyone address (e.g., 192.168.1.255).
  • Usable Range: Everything in between (.1 to .254).
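The network and broadcast addresses above aren't magic; they fall out of bitwise AND/OR with the mask. Here is the arithmetic for 192.168.1.10/24, worked through in Bash:

```shell
#!/bin/bash
# Derive network and broadcast for 192.168.1.10/24 with bitwise arithmetic
IP="192.168.1.10"; PREFIX=24
IFS=. read -r a b c d <<< "$IP"
ip_num=$(( (a << 24) | (b << 16) | (c << 8) | d ))
mask=$(( (0xFFFFFFFF << (32 - PREFIX)) & 0xFFFFFFFF ))
net=$(( ip_num & mask ))                  # AND with the mask -> network address
bcast=$(( net | (~mask & 0xFFFFFFFF) ))   # OR with the inverted mask -> broadcast
to_dotted() { echo "$(( $1 >> 24 & 255 )).$(( $1 >> 16 & 255 )).$(( $1 >> 8 & 255 )).$(( $1 & 255 ))"; }
echo "Network:   $(to_dotted $net)"       # 192.168.1.0
echo "Broadcast: $(to_dotted $bcast)"     # 192.168.1.255
```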
Pro-Tip

The Default Gateway
If your server needs to talk to an IP that is outside of its Subnet Mask, it sends the packet to the Default Gateway (your router). Without a gateway, your server cannot reach the internet!

The Service Hub

A server without services is just a space heater. This module covers the essential services that make a network function.


Quick Reference: Service Management

| Goal | Command |
| --- | --- |
| Check Local Web | curl -I localhost |
| Check DNS | dig google.com |
| Check Active Ports | ss -tulpn |

DNS: The Phonebook

DNS turns google.com into an IP like 142.250.190.46.

  • A Record: "This name points to this IPv4."
  • CNAME: "This name is just an alias for that other name."
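A and CNAME records are easiest to see side by side in a zone-file snippet. The records below are made up for illustration:

```shell
# A tiny (hypothetical) zone-file fragment, then filter out just the A records
cat > /tmp/zone-demo.txt <<'EOF'
www     IN  A      142.250.190.46
blog    IN  CNAME  www
EOF
awk '$3 == "A" {print $1 " resolves to " $4}' /tmp/zone-demo.txt
```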

DHCP: The Doorman

DHCP gives your devices their IP addresses when they join the network using the DORA process:

  1. Discover: Client broadcasts looking for a server.
  2. Offer: Server offers an IP.
  3. Request: Client accepts the IP.
  4. Acknowledge: Server confirms the lease.

Web Servers

  • Apache: The classic heavyweight. Versatile but can be slow under heavy load.
  • Nginx: The lightweight speedster. Perfect for modern apps and reverse proxies.
Shortcut

Modern Lab Tip
While you can run DNS (BIND9) and Web Servers (Nginx) directly on your OS, modern labs use Docker. Check the next sections in the sidebar to learn how!

Docker Installation Guide

Running services directly on the OS leaves a mess. Dependencies conflict, updates break things, and uninstalling is a nightmare. Docker solves this.

Instead of installing Nginx on your OS, you run an Nginx container. It's isolated, clean, and disposable.


Quick Reference: Docker Engine

| Goal | Command |
| --- | --- |
| Start Engine | sudo systemctl start docker |
| Enable on Boot | sudo systemctl enable docker |
| Check Status | systemctl status docker |
| Verify Install | docker run hello-world |

🚀 Hands-on: Installing Docker on Ubuntu/Fedora

Don't mess around with old versions in default repositories. Use the official convenience script to get the latest version.

  1. Download the Script:
    curl -fsSL https://get.docker.com -o get-docker.sh
  2. Run the Installer:
    sudo sh get-docker.sh
  3. Start and Enable the Service:
    Make sure Docker boots automatically when your server restarts.
    sudo systemctl enable --now docker
  4. The "No Sudo" Trick (Optional):
    If you are tired of typing sudo docker, add your user to the docker group:
    sudo usermod -aG docker $USER
    # You must log out and log back in for this to take effect!
Danger Zone

Security Implication
Adding your user to the docker group is essentially giving yourself passwordless root access. In a home lab, it's fine. In production, be very careful!

Portainer.io Configuration

You have Docker installed, but managing dozens of containers via the CLI (docker ps, docker run, docker rm) gets exhausting.

Portainer CE (Community Edition) is a web-based dashboard that lets you manage your containers with a beautiful UI.


Quick Reference: Container Basics

| Goal | Command |
| --- | --- |
| List Running | docker ps |
| List All | docker ps -a |
| Stop Container | docker stop <name> |
| View Logs | docker logs -f <name> |

🚀 Hands-on: Deploying Portainer

Portainer itself runs as a Docker container! Here is the exact setup.

  1. Create Persistent Storage:
    We need a volume so Portainer doesn't lose its database when it restarts.
    docker volume create portainer_data
  2. Deploy the Container:
    Run this command to spin up Portainer. Notice we are passing the Docker socket (docker.sock) into the container—this is how Portainer talks to the host engine.
    docker run -d -p 8000:8000 -p 9443:9443 \
      --name portainer \
      --restart=always \
      -v /var/run/docker.sock:/var/run/docker.sock \
      -v portainer_data:/data \
      portainer/portainer-ce:latest
  3. Initial Configuration:
    • Open your web browser and go to https://<YOUR_SERVER_IP>:9443.
    • (Your browser will warn you about a self-signed certificate. Click "Advanced" and "Proceed").
    • Create your admin username and password.
    • Click Get Started to connect to the local environment.

The Forbidden Fix

When you use Portainer to spin up a web server (like Nginx) and map a volume from your host to serve your HTML files, you might get a 403 Forbidden error.

Why? The Nginx container user doesn't have permission to read the files on your host machine.

The Proper Fix

Change the ownership of the files on your host to match the UID that Nginx uses (usually UID 33 or 101).

sudo chown -R 33:33 /path/to/your/web/files
🛡️ Guardrail

SELinux (Fedora/RHEL)
If you are on Fedora, you must also tell SELinux to allow the container to read the host directory. In Portainer, when configuring the Volume mapping, there is usually an option to bind it with the Z or z flag, or you can do it via CLI: -v /host/path:/container/path:Z.

🔒 Architecture: Zero-JS & HTTPS

The Vault you are currently viewing has gone through significant architectural upgrades to maximize speed, simplicity, and security. Here is a breakdown of the entire process that brought the Vault to its current state.


The Zero-JavaScript Refactoring

Initially, the Vault used JavaScript (marked.js) to dynamically fetch and render Markdown files on the fly. While functional, it added unnecessary complexity.

To create the ultimate "beginner-friendly" codebase, we deleted JavaScript completely.

  • Static HTML: All Markdown content is now pre-baked directly into a single index.html file.
  • Native Menus: The collapsible sidebar now uses native HTML <details> and <summary> tags instead of JS click listeners.
  • Native Search: The custom JS search bar was removed in favor of the browser's native Ctrl+F search.

The result is a website that loads instantly and can be understood by anyone with basic HTML knowledge.
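The <details>/<summary> pattern that replaced the JS click listeners looks like this. The markup below is illustrative, not the Vault's exact HTML:

```shell
# The browser collapses/expands this menu with zero JavaScript
cat <<'EOF' > /tmp/sidebar-demo.html
<details>
  <summary>ISO</summary>
  <a href="#boot-targets">Boot &amp; Targets</a>
  <a href="#processes">Processes &amp; Scripting</a>
</details>
EOF
cat /tmp/sidebar-demo.html
```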


🛡️ Securing the Vault with HTTPS (DuckDNS & NPM)

When hosting the Vault externally using a service like DuckDNS (e.g., linuxvault.duckdns.org), the site initially loaded as "Not Secure" (HTTP) and timed out when trying to use HTTPS.

Why? DuckDNS only provides DNS resolution (translating a name to your home IP). It does not provide the SSL certificates needed for HTTPS. Furthermore, our knowledge-web container was only listening on port 80.

Here is how we solved it using Nginx Proxy Manager (NPM):

1. Fixing the Port Conflict

NPM needs to be the "Front Door" of the server, meaning it must listen on port 80 (HTTP) and port 443 (HTTPS). Because our knowledge-web container was already hogging port 80, NPM couldn't bind to it.

The Fix: We stopped the knowledge-web container and restarted it on port 8080.

docker run -d --name knowledge-web \
  -v /home/turu-server/knolege-web-antigravity:/usr/share/nginx/html:ro \
  -p 8080:80 \
  nginx:latest

2. Deploying Nginx Proxy Manager

With port 80 free, we started NPM via docker-compose, allowing it to successfully bind to ports 80, 81 (Admin UI), and 443.

3. Routing the Traffic

Inside the NPM Admin Panel (http://localhost:81), we configured a new Proxy Host:

  • Domain: linuxvault.duckdns.org
  • Forward Hostname / IP: The exact LAN IP of the server (e.g., 192.168.1.56)
  • Forward Port: 8080 (Pointing to our newly moved Vault container)
Danger Zone

IP Configuration Trap
Never type strings or combinations like domain.com / 192.168.1.56 in the Forward IP box in NPM. It must be only the raw IP address, otherwise NPM will throw a 502 Bad Gateway error!


The SSL Termination Misconception
When configuring the proxy host Details, you must keep the Scheme as http and the Port as 8080. Do NOT set it to https and 443. Nginx Proxy Manager handles all the HTTPS encryption at the "front door". Once it decrypts the traffic, it passes it to your Vault as normal unencrypted HTTP traffic. If you set it to https, NPM will try to talk to your local container securely, which will fail and result in a 502 Bad Gateway.

4. Let's Encrypt SSL

Finally, in the NPM SSL tab, we selected "Request a new SSL Certificate" and enabled "Force SSL". NPM automatically communicated with Let's Encrypt, verified our DuckDNS domain, and generated the certificates.

The Vault is now fully secure, lightning fast, and accessible from anywhere via HTTPS!

Security & High Availability

Security isn't something you "add" at the end—it's how you build from day one. This section covers the core concepts of securing data and keeping the server alive.


Cryptography 101 (The Shield)

  • Symmetric: One key to rule them all (like a safe combination). Fast, but hard to share securely. Examples: AES.
  • Asymmetric: Public key (for the world) + Private key (for you). Used for SSH and HTTPS.
  • Hashing: A one-way trip. You can't "un-hash" something. Used for integrity checks and password storage (e.g., SHA-256; for storing passwords, prefer a slow hash like bcrypt or Argon2).
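The one-way property is easy to see from the terminal: hashing is deterministic (same input, same digest), but the fixed 64-hex-character output gives you no way back to the input.

```shell
# Hash the same string twice with sha256sum: identical digests, fixed length
H1=$(printf '%s' "correct horse battery staple" | sha256sum | awk '{print $1}')
H2=$(printf '%s' "correct horse battery staple" | sha256sum | awk '{print $1}')
echo "Digest: $H1"
echo "Length: ${#H1} hex characters"
```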

High Availability & RAID

If a hard drive fails, does your server die? Not if you use RAID (Redundant Array of Independent Disks).

  • RAID 0 (Striping): Max speed, zero safety. Data is split across drives. If one dies, everything is lost.
  • RAID 1 (Mirroring): The safe bet for OS drives. Data is duplicated exactly on two drives. If one fails, the server keeps running.
  • RAID 5: Needs at least 3 drives. Uses mathematical "parity" to survive one drive failure while giving you more storage space than a mirror.
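The capacity trade-offs above are simple arithmetic, assuming equal-size drives:

```shell
# Usable capacity for n equal drives of size s (in TB): RAID0 = n*s, RAID1 = s, RAID5 = (n-1)*s
raid_usable() { # args: level, drive_count, drive_size
  case "$1" in
    0) echo $(( $2 * $3 )) ;;
    1) echo "$3" ;;
    5) echo $(( ($2 - 1) * $3 )) ;;
  esac
}
echo "RAID 0, 4x2TB: $(raid_usable 0 4 2) TB usable (no redundancy)"
echo "RAID 1, 2x2TB: $(raid_usable 1 2 2) TB usable (survives 1 failure)"
echo "RAID 5, 4x2TB: $(raid_usable 5 4 2) TB usable (survives 1 failure)"
```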
Danger Zone

RAID is NOT a Backup!
If you accidentally delete a file, RAID 1 will instantly delete it on both drives. Always keep off-site backups!

Secure SSH Keys

Stop using passwords—they're weak and easily brute-forced. Use SSH Keys instead. Here is the exact, foolproof way to set up a passwordless login from a Windows machine to your Linux server.


Quick Reference: SSH

| Goal | Command |
| --- | --- |
| Login with Key | ssh user@ip_address |
| Check SSH Status | systemctl status sshd |
| Restart SSH | sudo systemctl restart sshd |

Hands-on: The Lockdown Process

Step 1: Generate the Key (On Windows)

Open PowerShell on your local PC and run this exact command to create a modern, highly secure key:

ssh-keygen -t ed25519 -C "admin_key"

(Press Enter for all prompts to use the default settings and no passphrase).

Step 2: Copy the Public Key

Still in PowerShell, output the key so you can copy it:

cat ~/.ssh/id_ed25519.pub

(Highlight the output—it should start with ssh-ed25519...—and right-click to copy).

Step 3: Install the Key (On Linux)

SSH into your server normally (using your password). Run these commands one by one to securely save the key:

# 1. Create the SSH folder securely
mkdir -p ~/.ssh
chmod 700 ~/.ssh

# 2. Open the authorized_keys file in the micro editor
micro ~/.ssh/authorized_keys

(Paste the key you copied in Step 2. Save and exit).

# 3. Secure the file so only you can read it
chmod 600 ~/.ssh/authorized_keys

Step 4: The Kill Switch (Disable Passwords)

Now that your key works, turn off password logins so hackers can't guess them.

sudo micro /etc/ssh/sshd_config

Find the line #PasswordAuthentication yes, remove the #, and change it to no:

PasswordAuthentication no

Restart the SSH service to apply the lockdown:

sudo systemctl restart sshd
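The manual edit in Step 4 can also be done with sed. Rehearse it on a throwaway copy first so a typo can't lock you out (the sample file below is made up):

```shell
# Practice the edit on a scratch copy of sshd_config
cat > /tmp/sshd_config.test <<'EOF'
#PasswordAuthentication yes
PermitRootLogin no
EOF

# Uncomment the directive (if needed) and force it to "no"
sed -i 's/^#\?PasswordAuthentication.*/PasswordAuthentication no/' /tmp/sshd_config.test
grep '^PasswordAuthentication' /tmp/sshd_config.test
```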
Danger Zone

Don't Lock Yourself Out!
Always keep your current SSH session open while testing your new key in a second window. If the key fails and you've disabled passwords, you are locked out of your server!

Project: SSH Authentication Hardening

1. Executive Summary

The objective was to transition the server from password-based authentication to Public Key Authentication. This eliminates the risk of brute-force attacks by disabling the password prompt entirely at the SSH daemon level.

2. Implementation Steps

Step A: Key Generation and Distribution

Before disabling passwords, an identity was established for each client machine.

  • Command (Client-side): ssh-keygen -t ed25519
  • Action: The public key (id_ed25519.pub) was appended to the server's ~/.ssh/authorized_keys file.
  • Theory: This establishes a cryptographic trust between the client and the server.

Step B: Identifying Configuration Overrides

On modern Ubuntu systems, configuration is often split across multiple files.

  • Discovery Command: ls /etc/ssh/sshd_config.d/
  • Finding: A file named 50-cloud-init.conf was found to contain active overrides that took precedence over the main configuration file.

Step C: Modifying the SSH Daemon Configuration

To enforce the security policy, two specific parameters were modified in both /etc/ssh/sshd_config and the override file in /etc/ssh/sshd_config.d/.

  1. Open Configuration: sudo nano /etc/ssh/sshd_config (and the .conf file in the .d directory).
  2. Parameters Adjusted:
    • PasswordAuthentication no: Disables the password prompt.
    • ChallengeResponseAuthentication no: Disables keyboard-interactive authentication.
    • UsePAM no: (Optional) Prevents PAM's keyboard-interactive methods from bypassing the SSH settings. Use with care: on Ubuntu, PAM also handles account and session tasks, so many setups leave it as yes.

Step D: Service Restart and Verification

Changes to the SSH daemon do not take effect until the service is reloaded.

  • Syntax Check: sudo sshd -t (Ensures no typos exist that could cause a lockout).
  • Service Restart: sudo systemctl restart ssh
  • Verification Command: sudo sshd -T | grep -i passwordauthentication
  • Expected Output: passwordauthentication no

3. Troubleshooting & Safety Protocols

  • The "Golden Rule": Never close the active SSH session until a second, independent session is successfully established using a key.
  • Override Logic: Files in /etc/ssh/sshd_config.d/ are loaded alphabetically and will override settings in the main /etc/ssh/sshd_config file.

4. Final Security Status

  • Password Access: Disabled ❌
  • Public Key Access: Enabled ✅
  • Brute-Force Vulnerability: Neutralized

Firewall Configuration

A firewall filters incoming and outgoing network traffic based on rules. If a port isn't explicitly opened, the firewall blocks it.


Quick Reference: Firewalls

| Goal | Command (UFW - Ubuntu) | Command (Firewalld - Fedora) |
| --- | --- | --- |
| Check Status | sudo ufw status | sudo firewall-cmd --state |
| Open Port 80 | sudo ufw allow 80/tcp | sudo firewall-cmd --add-port=80/tcp --permanent |
| Reload Rules | sudo ufw reload | sudo firewall-cmd --reload |

🚀 Hands-on: Opening Ports for Web Services

Depending on your OS, you'll use either UFW (Uncomplicated Firewall) or Firewalld.

For Ubuntu/Debian (UFW)

UFW is extremely simple. To allow web traffic (HTTP and HTTPS):

sudo ufw allow 80/tcp
sudo ufw allow 443/tcp
sudo ufw enable

For Fedora/RHEL (Firewalld)

Firewalld uses "Zones" (like public, home, internal). The default is usually public. To open ports permanently, you must use the --permanent flag and reload.

sudo firewall-cmd --zone=public --add-service=http --permanent
sudo firewall-cmd --zone=public --add-service=https --permanent
sudo firewall-cmd --reload
Shortcut

Docker and Firewalls
By default, Docker manipulates iptables directly. This means if you map a port in Docker (-p 8080:80), Docker will automatically open that port to the world, bypassing UFW or Firewalld! Keep this in mind when deploying containers.