Technical Writing

Homelab: Fully Managed Server

PVE: NVIDIA GPU Passthrough

This guide walks you through creating a Linux VM in Proxmox with GPU passthrough and installing the NVIDIA driver on both Arch Linux and Debian (Bookworm).


1. Proxmox VM Configuration

  1. Locate and edit the VM configuration file:
/etc/pve/qemu-server/<VMID>.conf
  2. Add the following settings:
args: -cpu 'host,+kvm_pv_unhalt,+kvm_pv_eoi,hv_vendor_id=NV43FIX,kvm=off'
bios: ovmf
machine: q35
vga: none
  3. Note about VGA:
    • For the initial operating system installation, you may need to set:
    vga: std
    
    • After installation, change it to:
    vga: none
    
    and then reboot the VM.
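
    Note that the args/bios/machine settings above do not themselves attach the GPU; the VM also needs a hostpci entry for the passed-through device. A sketch, assuming the GPU sits at host PCI address 01:00 (check with lspci on the Proxmox host):

    hostpci0: 01:00,pcie=1,x-vga=1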

2. Update GRUB

2.1 For Arch Linux

  1. Edit the GRUB configuration:
sudo nano /etc/default/grub
  2. Modify the GRUB_CMDLINE_LINUX_DEFAULT line to include:
GRUB_CMDLINE_LINUX_DEFAULT="loglevel=3 quiet splash nvidia-drm.modeset=1"

(Add any other parameters you need for your setup here.)

  3. Regenerate the GRUB configuration:

sudo grub-mkconfig -o /boot/grub/grub.cfg

2.2 For Debian (Bookworm)

  1. Edit the GRUB configuration:
sudo nano /etc/default/grub
  2. Modify or add these lines:
GRUB_CMDLINE_LINUX_DEFAULT="quiet"
GRUB_CMDLINE_LINUX="nvidia-drm.modeset=1"
  3. Update GRUB:
sudo update-grub

3. Update Package Sources

3.1 For Arch Linux

Arch Linux uses rolling releases, so you typically do not need to edit package sources. Simply keep your system up to date:

sudo pacman -Syu

3.2 For Debian (Bookworm)

  1. Edit the sources list:
sudo nano /etc/apt/sources.list
  2. Ensure it contains something like:
deb http://deb.debian.org/debian/ bookworm main non-free-firmware contrib non-free
deb-src http://deb.debian.org/debian/ bookworm main non-free-firmware contrib non-free
deb http://security.debian.org/debian-security bookworm-security main non-free-firmware contrib non-free
deb-src http://security.debian.org/debian-security bookworm-security main non-free-firmware contrib non-free
deb http://deb.debian.org/debian/ bookworm-updates main non-free-firmware contrib non-free
deb-src http://deb.debian.org/debian/ bookworm-updates main non-free-firmware contrib non-free
  3. Update and upgrade your packages:
sudo apt update
sudo apt upgrade

4. Install the NVIDIA Driver

4.1 For Arch Linux

  1. Install the NVIDIA driver:
sudo pacman -S nvidia
  2. Create/edit the modprobe configuration file:
sudo nano /etc/modprobe.d/nvidia.conf
  3. Add the following lines:
options nvidia_drm modeset=1
options nvidia_drm fbdev=1
  4. Save and close the file.

4.2 For Debian (Bookworm)

  1. Install the NVIDIA driver:
sudo apt install nvidia-driver
  2. Depending on your hardware and kernel, Debian may automatically create the appropriate modprobe files. If additional configuration is needed, you can place configuration files in /etc/modprobe.d/.
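
A hedged example mirroring the Arch settings above, with a hypothetical filename:

# /etc/modprobe.d/nvidia-options.conf
options nvidia_drm modeset=1 fbdev=1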

5. Secure Boot Configuration

Before rebooting, decide how you want to handle Secure Boot. If your VM or host uses UEFI with Secure Boot enabled, you have two main options:

Option A: Disable Secure Boot (Easier)

  1. Disable validation:
mokutil --disable-validation
  2. Follow the on-screen prompts on the first reboot to complete the process.

Option B: Sign the Driver (More Secure)


6. Reboot

After completing all the steps relevant to your distribution and Secure Boot configuration:

  1. Reboot your system:
sudo reboot
  2. Verify that the NVIDIA driver is in use by running, on either distribution:
    nvidia-smi
    
    If the command shows information about your NVIDIA GPU, the driver is successfully loaded.

Final Notes

Homelab: Fully Managed Server

PVE: Custom VLANs

This guide presents two methods for setting up VLANs in Proxmox and configuring a UniFi switch to work with them.

Prerequisites

Method 1: Manual VLAN Creation Without Explicit VLAN Tagging

Proxmox Configuration

  1. Access the Proxmox host

    • SSH into your Proxmox host or access the console directly
  2. Edit the network configuration file

    • Open the network interfaces configuration file:
      nano /etc/network/interfaces
      
  3. Configure the main bridge (vmbr0) and VLAN bridge (vmbr1)

    • Add the following configuration:
      auto vmbr0
      iface vmbr0 inet static
              address 192.168.1.7/24
              gateway 192.168.1.1
              bridge-ports eno1
              bridge-stp off
              bridge-fd 0
      
      auto vmbr1
      iface vmbr1 inet static
              address 192.168.2.1/24
              bridge-ports eno1.2
              bridge-stp off
              bridge-fd 0
              bridge-vlan-aware yes
              bridge-vids 2-4094
      
      source /etc/network/interfaces.d/*
      
  4. Save and apply the configuration

    • Save the file and exit the editor
    • Restart networking or reboot the Proxmox host:
      systemctl restart networking
      
      or
      reboot
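
      To confirm the host side came up, you can inspect the VLAN subinterface and the VLAN-aware bridge (standard iproute2 tools):
      ip -d link show eno1.2
      bridge vlan show dev vmbr1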
      

UniFi Switch Configuration for Method 1

  1. Access the UniFi Network Controller

    • Log in to your UniFi Network Controller interface
  2. Navigate to the Devices section

    • Find and select the UniFi switch connected to your Proxmox host
  3. Locate the correct port

    • Identify the port number that your Proxmox host is connected to
  4. Configure the port for multiple VLANs

    • Click on the port to open its configuration settings
    • Set the "Port Profile" to "All"
    • In the "Native VLAN" field, enter the VLAN ID for your main network (usually 1)
    • In the "Tagged VLANs" field, enter "2-4094" to allow all possible VLANs
  5. Enable VLAN awareness on the switch

    • In the switch settings, ensure that "VLAN Aware" is turned on
  6. Create VLANs in UniFi Controller

    • Go to the "Settings" > "Networks" section in your UniFi Controller
    • Create a new network for each VLAN you plan to use
    • Assign appropriate VLAN IDs to these networks (matching the ones you set up in Proxmox)
  7. Configure DHCP and routing (if needed)

    • If you want the UniFi Controller to handle DHCP for your VLANs, configure DHCP servers for each VLAN network
    • Set up appropriate firewall rules to control traffic between VLANs
  8. Apply the changes

    • Save the port configuration
    • Apply the changes to the switch
  9. Verify the configuration

    • Check the UniFi Controller's insights or statistics to ensure traffic is flowing correctly on the configured VLANs

Method 2: Using VLAN Tags in Proxmox VMs and UniFi

Proxmox Configuration

  1. Access the Proxmox host

    • SSH into your Proxmox host or access the console directly
  2. Edit the network configuration file

    • Open the network interfaces configuration file:
      nano /etc/network/interfaces
      
  3. Configure the main bridge (vmbr0)

    • The main bridge typically does not need to be changed. Here's an example of a basic default configuration:
      auto lo
      iface lo inet loopback
      
      iface eno1 inet manual
      
      auto vmbr0
      iface vmbr0 inet static
              address 192.168.1.100/24
              gateway 192.168.1.1
              bridge-ports eno1
              bridge-stp off
              bridge-fd 0
      
      source /etc/network/interfaces.d/*
      
    • Adjust the address and gateway as needed for your network
  4. Save and apply the configuration

    • Save the file and exit the editor
    • Restart networking:
      systemctl restart networking
      
  5. Configure VLAN tagging for VMs

    • When creating or editing a VM in the Proxmox web interface:
      • Go to the VM's "Hardware" tab
      • Add a new network device or edit an existing one
      • Set "Bridge" to vmbr0
      • In the "VLAN Tag" field, enter the desired VLAN ID (e.g., 10, 20, 30)
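
    The resulting line in the VM's config file (/etc/pve/qemu-server/<VMID>.conf) looks roughly like this, with a placeholder MAC address:
      net0: virtio=BC:24:11:00:00:01,bridge=vmbr0,tag=10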

UniFi Switch Configuration for Method 2

  1. Access the UniFi Network Controller

    • Log in to your UniFi Network Controller interface
  2. Navigate to the Devices section

    • Find and select the UniFi switch connected to your Proxmox host
  3. Locate the correct port

    • Identify the port number that your Proxmox host is connected to
  4. Configure the port for tagged VLANs

    • Click on the port to open its configuration settings
    • Set the "Port Profile" to "All"
    • In the "Native VLAN" field, enter the VLAN ID for your main network (usually 1)
    • In the "Tagged VLANs" field, enter the VLAN IDs you plan to use in your Proxmox VMs (e.g., "10,20,30")
  5. Create VLANs in UniFi Controller

    • Go to the "Settings" > "Networks" section in your UniFi Controller
    • Create new networks for each VLAN, matching the IDs you plan to use in Proxmox VMs
  6. Configure DHCP and routing (if needed)

    • If you want the UniFi Controller to handle DHCP for your VLANs, configure DHCP servers for each VLAN network
    • Set up appropriate firewall rules to control traffic between VLANs
  7. Apply the changes

    • Save the port configuration
    • Apply the changes to the switch
  8. Verify the configuration

    • Check the UniFi Controller's insights or statistics to ensure traffic is flowing correctly on the configured VLANs

Comparison of Methods

Choose the method that best fits your network architecture and management preferences. Method 2 is often preferred for its simplicity and flexibility in managing VLANs on a per-VM basis.

Troubleshooting

Remember to adjust IP addresses, interfaces, and VLAN IDs as needed for your specific network setup.

Homelab: Fully Managed Server

PBS: Backup Strategy

3-2-1 Backup Setup

In this example, we are backing up a mirrored (RAID 1) ZFS pool consisting of two 2TB Samsung 990 EVO PRO SSDs.

The goal is to implement the 3-2-1 backup strategy: three copies of your data on two different media, with one copy offsite. This setup is designed for a homelab using consumer software, which adds some challenges due to the lack of enterprise-level scalability. Here's how to do it.

Step 1: Install Proxmox Backup Server (PBS) with Drive Passthrough

First, install PBS with my drive passthrough script. This passthrough drive will be our first backup. Note that while this setup is fine for a homelab, there is a risk in having the backup drive on the host machine that you should be aware of.

Installation Steps:

  1. Access Administration:
  2. Update and Reboot:
apt update
apt upgrade
  3. Prepare the Backup Drive:
  4. Add PBS to Proxmox:
  5. Configure Proxmox:

Now, you can run your backup.
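
For the "Add PBS to Proxmox" step, the result on the Proxmox side is an entry in /etc/pve/storage.cfg. A hedged example with hypothetical names and address (the fingerprint is shown on the PBS dashboard):

pbs: pbs-backup
        datastore backup-ds
        server 192.168.1.50
        username root@pam
        fingerprint <PBS-CERT-FINGERPRINT>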

Step 2: Set Up Samba File Share

Since we need to spin up a Windows VM (because Rclone doesn't work well with Proton Drive and Proton Drive only supports NTFS), we'll set up a Samba file share.

Installation and Configuration:

  1. Install Samba:
apt install samba
  2. Configure Samba:
nano /etc/samba/smb.conf
[SharedDrive]
path = /mnt/new-storage
browseable = yes
read only = no
guest ok = yes
force user = root
  3. Set Permissions:
chmod -R 0777 /mnt/new-storage

     This makes the directory readable and writable by all users. Adjust permissions as necessary depending on your security requirements.

  4. Restart Samba Service:
systemctl restart smbd
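
To confirm the share is exported (guest access is enabled above), you can list shares anonymously with the smbclient tool:

smbclient -L localhost -N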

Step 3: Access the Share from Windows

  1. Access the Share:
  2. Map the Network Drive Permanently (Optional):
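
For example, from a Windows command prompt (replace 192.168.1.7 with your Samba host's address):

net use Z: \\192.168.1.7\SharedDrive /persistent:yes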

Step 4: Secure Samba Share with a Password (Optional)

If you want to secure your Samba share with a password, follow these steps:

  1. Create a Samba User: Add a Linux user if it doesn't already exist:
adduser yourusername

Add the user to Samba:

smbpasswd -a yourusername
  2. Modify Samba Configuration: Edit the Samba configuration file:
nano /etc/samba/smb.conf

Modify the share definition to require authentication:

[SharedDrive]
path = /mnt/new-storage
browseable = yes
read only = no
valid users = yourusername
force user = root
  3. Restart Samba Service: Apply the changes by restarting the Samba service:
systemctl restart smbd
  4. Access the Share from Windows: When prompted, enter the username (yourusername) and the password set with smbpasswd.

By following these steps, you can share the passed-through drive on your Proxmox server with a Windows PC, allowing it to be accessed and used from both systems.

AWS

AWS

Multiple Public IPs, one EC2

Below is a step-by-step guide on how to configure your AWS EC2 instance (with Proxmox installed on Debian) so that multiple Elastic IPs can be assigned to different containers or virtual machines. This assumes:

  1. You already installed Proxmox on a Debian instance running in AWS.
  2. You have a bridge configured (e.g., vmbr1 with 172.31.14.1/24) to which you attach your containers/VMs.
  3. You know how to allocate and associate multiple Elastic IPs in the AWS console.

AWS Prerequisites

  1. Allocate & Associate EIPs

    • In the AWS console, go to EC2 → Elastic IPs and allocate the addresses you need.
    • Associate each Elastic IP to your EC2 instance's network interface (ENI) as a secondary private IP.
      For example:
      • EIP-1 → 172.31.14.10
      • EIP-2 → 172.31.14.11
      • EIP-3 → 172.31.14.12
      • EIP-4 → 172.31.14.13
  2. Disable Source/Dest Check

    • In the EC2 console, select your Proxmox instance → Actions → Networking → Change source/dest. check → set to Disable.
    • This is crucial if you plan to forward traffic (via NAT or routing) to internal guests.
  3. Security Groups

    • Make sure the Security Group on your instance allows inbound ports you need (e.g., 22 for SSH, 80 for HTTP, 443 for HTTPS, etc.).

Proxmox Host Setup

  1. Verify Secondary IPs on ens33

    • Once AWS associates the private IPs (e.g., 172.31.14.10–13), confirm they appear on the Proxmox host:
      ip addr show ens33
      
      • If they're not there, manually add them:
        sudo ip addr add 172.31.14.10/32 dev ens33
        sudo ip addr add 172.31.14.11/32 dev ens33
        sudo ip addr add 172.31.14.12/32 dev ens33
        sudo ip addr add 172.31.14.13/32 dev ens33
        
  2. Confirm the Bridge (vmbr1)

    • As noted in the prerequisites, vmbr1 is configured at 172.31.14.1/24.
    • This means the Proxmox host uses 172.31.14.1 as its IP on that bridge.
    • Any container or VM assigned to vmbr1 can then use IPs in the 172.31.14.x/24 range, with .1 as the gateway.

Container / VM Network Configuration

  1. Create a Container/VM

    • In Proxmox, create or edit a container/VM and set its network device to use:
      • Bridge: vmbr1
      • Static IP: 172.31.14.101/24 (for example)
      • Gateway: 172.31.14.1
  2. Test Internal Connectivity

    • From the host, ping 172.31.14.101.
    • Inside the container, ping 172.31.14.1.
    • Confirm that traffic flows locally on the bridge.

Making the Guest Public

AWS won't allow traditional layer-2 bridging with random MAC addresses, so we typically do NAT or routed setups:

NAT Method (Recommended)

  1. Enable IP Forwarding

    echo 1 > /proc/sys/net/ipv4/ip_forward
    

    Or set net.ipv4.ip_forward=1 in /etc/sysctl.conf.

  2. Create DNAT/SNAT Rules

    • Suppose 172.31.14.10 is associated with a public EIP and you want to forward all traffic to a container at 172.31.14.101:
    # DNAT: traffic arriving at .10 -> container .101
    iptables -t nat -A PREROUTING -d 172.31.14.10 -j DNAT --to-destination 172.31.14.101
    
    # SNAT: traffic leaving .101 -> source it from .10
    iptables -t nat -A POSTROUTING -s 172.31.14.101 -j SNAT --to-source 172.31.14.10
    
    • Repeat for other secondary IPs (.11, .12, .13 → different containers).
  3. Persistent iptables

    • Put those rules in a script (e.g., /root/iptables.sh) and call it on boot (via /etc/rc.local, systemd unit, or iptables-persistent).
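
    A minimal sketch of that approach, using a hypothetical script path and unit name:
    #!/bin/sh
    # /root/iptables.sh: reapply the NAT rules at boot
    iptables -t nat -A PREROUTING -d 172.31.14.10 -j DNAT --to-destination 172.31.14.101
    iptables -t nat -A POSTROUTING -s 172.31.14.101 -j SNAT --to-source 172.31.14.10
    
    # /etc/systemd/system/nat-rules.service
    [Unit]
    Description=Apply NAT rules for Proxmox guests
    After=network-online.target
    Wants=network-online.target
    
    [Service]
    Type=oneshot
    ExecStart=/root/iptables.sh
    
    [Install]
    WantedBy=multi-user.target
    
    Enable it with: systemctl enable nat-rules.service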

Result:

Direct / Routed Approach

  1. Disable Source/Dest Check (done).
  2. Enable proxy_arp on Proxmox:
    echo 1 > /proc/sys/net/ipv4/conf/ens33/proxy_arp
    echo 1 > /proc/sys/net/ipv4/conf/vmbr1/proxy_arp
    
  3. Assign the IP inside the Guest
    • The container itself configures 172.31.14.10/24 with gateway 172.31.14.1.
  4. Host ARP
    • The host must answer ARP for .10 on behalf of the container (that’s what proxy_arp does).

This method lets the container literally own the IP. However, NAT is typically easier and more common in AWS.


Final Checks & Tips

  1. Open Ports in AWS Security Group
    • Make sure inbound rules allow the ports you need for each EIP.
  2. Test Externally
    • From your local machine, try pinging or SSHing into the public EIP.
    • Run tcpdump on the Proxmox host (ens33) and inside the container to confirm packet flow if debugging is needed.
  3. Persist Your Config
    • If you added secondary IPs manually with ip addr add, incorporate those changes into /etc/network/interfaces or a startup script.
    • If you used iptables rules, ensure they load at boot.

Conclusion

By disabling source/dest check, attaching multiple private IPs (each mapped to an EIP), and either using NAT or a routed approach, you can give each Proxmox container (or VM) its own unique public IP address on AWS. The NAT method is simplest: each container has a private IP in the 172.31.14.x/24 range, and iptables translates the traffic to/from the Proxmox host’s secondary IPs. This way, you can host multiple external-facing services on a single AWS Proxmox instance.

Thanks for reading, and happy hosting!

AWS

Sharing S3 Buckets

This guide uses two roles—one in the Bucket Owner’s account and one in the Bucket User’s account—so that the Bucket User can manage (and delegate) who in their organization gains access to the shared S3 bucket.


Scenario Overview

  1. Bucket Owner’s Account (Account A)

    • Owns the S3 bucket.
    • Will create an IAM role that grants permissions to access the bucket, and trusts the Bucket User’s AWS account.
  2. Bucket User’s Account (Account B)

    • Needs to provide access for multiple users or roles within their own organization.
    • Will create an IAM role that trusted internal users can assume, which then “chains” into the role in Account A to access the bucket.



Part A: Bucket Owner’s Steps

Step 1: Determine Which S3 Actions to Allow

Decide which S3 permissions to grant. This example allows read-only (List/Get) access. Adjust as needed (e.g., add s3:PutObject, s3:DeleteObject):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:GetObjectAttributes",
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::YOUR_BUCKET_NAME",
        "arn:aws:s3:::YOUR_BUCKET_NAME/path/to/shared/files/*"
      ]
    }
  ]
}

Tip: If you only want to permit certain folder paths, adjust the resources (e.g., arn:aws:s3:::YOUR_BUCKET_NAME/folder-name/*).


Step 2: Create a Role in the Bucket Owner’s Account

  1. Go to the AWS Console → IAM → Roles → Create role.
  2. Select “Another AWS account” as the trusted entity.
  3. Enter the Bucket User’s AWS Account ID (Account B).
  4. Attach a custom policy (from Step 1) or create an inline policy that grants the desired S3 permissions.
  5. Name this role something like CrossAccountS3AccessRole.
  6. Create the role.

When done, open the role’s Trust relationships tab and ensure it looks roughly like:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::222222222222:root"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}

Important: Make sure the Principal is correct for the Bucket User’s account. If you plan to use an external ID or conditions, include them here.

Finally, copy the ARN for this role, for example:

arn:aws:iam::111111111111:role/CrossAccountS3AccessRole

You’ll share this with the Bucket User.


Part B: Bucket User’s Steps

Step 3: Create a Role in the Bucket User’s Account

In your AWS account (Account B), create a role that your internal users or IAM principals can assume. This “local” role will chain into the bucket owner’s role:

  1. Go to AWS Console → IAM → Roles → Create role.
  2. Select your own account (Account B) as the trusted entity (or specify the user/group that can assume it).
  3. Add a policy that allows “sts:AssumeRole” on the Bucket Owner’s role, for example:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "sts:AssumeRole",
      "Resource": "arn:aws:iam::111111111111:role/CrossAccountS3AccessRole"
    }
  ]
}
  4. Name the role something like InternalToExternalS3AccessRole.
  5. Create the role.

Once created, the Trust relationships of this role might look like:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::222222222222:root"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}

Note: You can restrict the principal further to specific users or groups within your account.


Step 4: Have Internal Users Assume Your Local Role

Now, internal developers or automation in your account can do the following:

  1. Assume the InternalToExternalS3AccessRole in your account (Account B).
  2. That role policy grants sts:AssumeRole on the bucket owner’s CrossAccountS3AccessRole (in Account A).
  3. Finally, assume that external role to gain S3 access.

Example flow in a developer’s local environment:

# 1) Assume your local role (Account B) to “bridge” into the other account
aws sts assume-role \
  --role-arn arn:aws:iam::222222222222:role/InternalToExternalS3AccessRole \
  --role-session-name MyLocalSession

# 2) Using the temporary credentials from step (1), assume the external role:
aws sts assume-role \
  --role-arn arn:aws:iam::111111111111:role/CrossAccountS3AccessRole \
  --role-session-name MyCrossAccountSession

# 3) You'll get credentials that let you access the S3 bucket in Account A.

Part C: Using Environment Variables & jq

To automate environment variables in your shell:

# 1) Assume your local role (in Account B)
eval $(
  aws sts assume-role \
  --role-arn arn:aws:iam::222222222222:role/InternalToExternalS3AccessRole \
  --role-session-name MyLocalSession \
  | jq -r '.Credentials | "export AWS_ACCESS_KEY_ID=\(.AccessKeyId)\nexport AWS_SECRET_ACCESS_KEY=\(.SecretAccessKey)\nexport AWS_SESSION_TOKEN=\(.SessionToken)"'
)
# 2) Chain-assume the external (Bucket Owner) role
eval $(
  aws sts assume-role \
  --role-arn arn:aws:iam::111111111111:role/CrossAccountS3AccessRole \
  --role-session-name MyCrossAccountSession \
  | jq -r '.Credentials | "export AWS_ACCESS_KEY_ID=\(.AccessKeyId)\nexport AWS_SECRET_ACCESS_KEY=\(.SecretAccessKey)\nexport AWS_SESSION_TOKEN=\(.SessionToken)"'
)
# 3) Test S3 operations
aws s3 ls s3://YOUR_BUCKET_NAME/path/

Note: Each set of temporary credentials typically expires after 1 hour (or a configured max session duration). You’ll need to re-run these commands as needed.
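
To confirm which identity your current credentials map to at each step of the chain, you can run:

aws sts get-caller-identity

The returned ARN should show the assumed-role session you expect.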




FAQs

  1. Why two roles instead of one?

    • Security & Flexibility: The bucket owner sets S3 permissions, but doesn’t manage which individuals in your organization assume them. You (the Bucket User) control internal access separately.
  2. What if I just want direct access from my account without an extra role?

    • That’s possible if the Bucket Owner trusts the entire external account or specific principals directly. However, it’s often cleaner to separate them for security and delegated access control.
  3. Session duration?

    • Both roles have a MaxSessionDuration. By default, it’s 1 hour. It can be extended up to 12 hours in IAM settings.
  4. Access Denied errors?

    • Check the trust policies on both roles.
    • Ensure you’re assuming the correct role(s).
    • Verify the resource ARNs and AWS account IDs are correct.

Additional Best Practice Considerations

These measures can further strengthen your cross-account setup.


Conclusion

By having two roles—one controlled by the Bucket Owner and one by the Bucket User—you achieve a clear separation of responsibilities. The Bucket Owner decides which actions are allowed in the bucket, while the Bucket User decides who in their organization has permission to assume that cross-account role.

AWS

EC2 Recovery

This guide demonstrates how to recover access to an EC2 instance when both SSH and Serial Console access are unavailable. We'll use a Proxmox instance as an example, but this method works for any Linux-based EC2 instance.

Prerequisites

Recovery Steps

1. Create a Snapshot of the Affected Volume

  1. Navigate to EC2 Dashboard
  2. Go to Volumes and select the volume of the affected instance
  3. Actions → Create Snapshot
  4. Add descriptive name like "Pre-rescue-backup-[date]"
  5. Click Create Snapshot
  6. Wait for snapshot to reach 100% completion

2. Stop the Affected Instance

  1. Navigate to EC2 Dashboard
  2. Select the affected instance
  3. Actions → Instance State → Stop
  4. Wait until instance is fully stopped

3. Detach the Root Volume

  1. Select the stopped instance
  2. Scroll to 'Storage' tab
  3. Note the volume ID of the root volume
  4. Right-click the volume → Detach Volume
  5. Confirm detach

4. Launch a Rescue Instance

  1. Launch a new EC2 instance
  2. Use Amazon Linux 2 AMI
  3. Same availability zone as affected volume
  4. Configure security group to allow SSH access

5. Attach Problem Volume to Rescue Instance

  1. Select the detached volume
  2. Actions → Attach Volume
  3. Select rescue instance
  4. Note the device name (e.g., /dev/sdb or /dev/xvdb)

6. Access and Mount the Volume

# Connect to rescue instance
ssh -i your-key.pem ec2-user@rescue-instance-ip

# List available disks to find attached volume
sudo fdisk -l
# or
lsblk

# Create mount point
sudo mkdir -p /mnt/rescue

# Mount the root partition
sudo mount /dev/xvdb1 /mnt/rescue  # Adjust device name as needed

7. Troubleshoot and Fix Issues

Common File Locations

# Network Configuration
sudo nano /mnt/rescue/etc/network/interfaces    # Debian/Ubuntu/Proxmox
sudo nano /mnt/rescue/etc/sysconfig/network-scripts/ifcfg-eth0  # RHEL/CentOS

# SSH Configuration
sudo nano /mnt/rescue/etc/ssh/sshd_config

# System Logs
sudo less /mnt/rescue/var/log/syslog    # Debian/Ubuntu
sudo less /mnt/rescue/var/log/messages  # RHEL/CentOS

Example: Fixing Proxmox Network Configuration

# View current network config
sudo cat /mnt/rescue/etc/network/interfaces

# Edit if needed
sudo nano /mnt/rescue/etc/network/interfaces

# Example of working basic config:
auto lo
iface lo inet loopback

iface eth0 inet manual

auto vmbr0
iface vmbr0 inet dhcp
        bridge-ports eth0
        bridge-stp off
        bridge-fd 0

8. Cleanup and Restore

# Unmount volume
cd ~  # Ensure you're not in mounted directory
sudo umount /mnt/rescue

After unmounting:

  1. Detach volume from rescue instance in AWS Console
  2. Reattach to original instance as root volume
  3. Start original instance
  4. Test connectivity

Common Issues and Solutions

Network Configuration Issues

Boot Issues

Permission Issues

Prevention Tips

  1. Always maintain a snapshot of working system
  2. Document working network configuration
  3. Use AWS Systems Manager Session Manager as backup access method
  4. Keep serial console access enabled
  5. Document network changes before implementing
  6. Test changes in staging environment first

Additional Resources

Remember: Always maintain current backups and document your system configuration to make recovery easier when needed.

AWS

LXQt with VNC

This guide will help you set up the lightweight LXQt desktop environment with VNC on a Debian EC2 instance, allowing for a graphical desktop interface through a secure connection.

Initial Setup

# Update system packages
sudo apt update
sudo apt upgrade -y

# Install LXQt desktop environment
sudo apt install lxqt -y

# Install TigerVNC server
sudo apt install tigervnc-standalone-server tigervnc-common -y

# Set VNC password - you will be prompted to enter a password
vncpasswd

# Create VNC config directory if it doesn't exist
mkdir -p ~/.vnc

# Create a simple xstartup file
cat > ~/.vnc/xstartup << 'EOF'
#!/bin/sh
export XDG_SESSION_TYPE=x11
export DESKTOP_SESSION=lxqt
exec startlxqt
EOF

# Make xstartup executable
chmod +x ~/.vnc/xstartup

# Kill any existing VNC server instances (optional, use if needed)
# vncserver -kill :1

# Start VNC server
vncserver :1

Connecting to Your VNC Server

From your local machine:

# Create SSH tunnel and keep it open
# Replace with your actual key file and EC2 public DNS
ssh -L 5901:localhost:5901 -i "your-key.pem" user@your-ec2-public-dns "vncserver :1; sleep infinity"

In a new terminal window on your local machine:

# Connect to the VNC server using TigerVNC viewer
xtigervncviewer localhost:1

Useful VNC Management Commands

# Check if VNC server is running
vncserver -list

# Manually kill VNC server
vncserver -kill :1

# Start VNC with specific resolution
vncserver :1 -geometry 1920x1080

# Start VNC with more parameters (depth, geometry, etc.)
vncserver :1 -depth 24 -geometry 1920x1080 -localhost no

Security Best Practices
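
At a minimum, keep the server reachable only through the SSH tunnel by binding it to localhost (the -localhost flag shown above with "no" also accepts "yes"):

# Bind the VNC server to localhost only; clients connect via the SSH tunnel
vncserver :1 -localhost yes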

Applications

Applications

GitLab: Migrate YH CE to BM EE

Overview

This guide covers migrating a GitLab instance from Yunohost to a standalone server, following the steps below.

1. Create and Transfer Backups

On the Yunohost server:

# Create GitLab backup
sudo gitlab-backup create

# Copy the three required files to new server (run these from the new server)
scp user@old-server:/home/yunohost.backup/archives/[TIMESTAMP]_gitlab_backup.tar /tmp/
scp user@old-server:/etc/gitlab/gitlab.rb /tmp/
scp user@old-server:/etc/gitlab/gitlab-secrets.json /tmp/

2. Set Up New Server

Initial Package Setup

# Update package list
sudo apt-get update

# Install required packages
sudo apt-get install -y curl openssh-server ca-certificates perl postfix git

# Add both CE and EE repositories
curl https://packages.gitlab.com/install/repositories/gitlab/gitlab-ce/script.deb.sh | sudo bash
curl https://packages.gitlab.com/install/repositories/gitlab/gitlab-ee/script.deb.sh | sudo bash

# Update package list again after adding repos
sudo apt update

# Install GitLab CE first
sudo EXTERNAL_URL="https://gitlab.example.com" apt-get install gitlab-ce

3. Restore Data to CE Instance

# Stop GitLab services
sudo gitlab-ctl stop

# Move the backup files to correct locations
sudo mv /tmp/gitlab.rb /etc/gitlab/
sudo mv /tmp/gitlab-secrets.json /etc/gitlab/
sudo mv /tmp/*_gitlab_backup.tar /var/opt/gitlab/backups/

# Set correct permissions
sudo chmod 600 /etc/gitlab/gitlab-secrets.json
sudo chown root:root /etc/gitlab/gitlab*

# Restore the backup
sudo gitlab-backup restore BACKUP=[TIMESTAMP]

# Reconfigure and restart GitLab
sudo gitlab-ctl reconfigure
sudo gitlab-ctl restart

4. Configure Authentication Methods

First, access the Rails console:

sudo gitlab-rails console -e production

Then run these Ruby commands:

# Enable password authentication and sign-in
settings = ApplicationSetting.current
settings.update_column(:password_authentication_enabled_for_web, true)
settings.update_column(:signin_enabled, true)

# Exit console
quit

5. Migrate Users from LDAP

First, access the Rails console:

sudo gitlab-rails console -e production

Then run these Ruby commands:

# Find and update each LDAP user (repeat for each user)
user = User.find_by_username('username')

# Remove LDAP identities
user.identities.where(provider: 'ldap').destroy_all

# Reset authentication settings
if user.respond_to?(:authentication_type)
  user.update_column(:authentication_type, nil)
end

# Set password expiry to far future
if user.respond_to?(:password_expires_at)
  user.update_column(:password_expires_at, Time.now + 10.years)
end

# Ensure user account is active and reset login attempts
user.update_columns(
  state: 'active',
  failed_attempts: 0
)

# Set new password
user.password = 'temporary_password'
user.password_confirmation = 'temporary_password'
user.save!

# Verify changes
puts "Active: #{user.state}"
puts "Failed attempts: #{user.failed_attempts}"
puts "Password expires at: #{user.password_expires_at}"
puts "Identities: #{user.identities.pluck(:provider)}"

# Exit console
quit

Alternative password reset method:

sudo gitlab-rake "gitlab:password:reset[username]"

6. Verify CE Installation

  1. Verify GitLab CE is accessible via web browser
  2. Test user login with the new passwords
  3. Have users change their temporary passwords
  4. Confirm repositories and data are present
  5. Create a test issue and commit to verify functionality
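
As an additional sanity check, Omnibus ships a built-in health-check rake task:

sudo gitlab-rake gitlab:check SANITIZE=true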

7. Upgrade to Enterprise Edition

Only proceed after confirming CE is working correctly:

# Upgrade to GitLab EE
sudo apt install gitlab-ee

8. Install GitLab EE License

  1. Generate and install the license
  2. Navigate to Admin Area > Settings > General
  3. Upload your license file
  4. Accept Terms of Service
  5. Click "Add License"

9. Final Configuration

# Edit GitLab configuration
sudo nano /etc/gitlab/gitlab.rb

# Add these lines:
gitlab_rails['usage_ping_enabled'] = false
gitlab_rails['gitlab_url'] = 'http://your.gitlab.url'

# Reconfigure and restart
sudo gitlab-ctl reconfigure
sudo gitlab-ctl restart

Troubleshooting

# View logs
sudo gitlab-ctl tail

# Check GitLab status
sudo gitlab-ctl status

# If PostgreSQL issues occur
sudo mv /var/opt/gitlab/postgresql/data /var/opt/gitlab/postgresql/data.bak
sudo mkdir -p /var/opt/gitlab/postgresql/data
sudo chown -R gitlab-psql:gitlab-psql /var/opt/gitlab/postgresql/data
sudo gitlab-ctl reconfigure
sudo gitlab-ctl restart postgresql

Remember to:

Applications

GitLab Pages: Cloudflared

Below is a minimal example of how to configure a self-hosted GitLab instance to serve GitLab Pages behind Cloudflared. This guide walks through:

  1. Setting the GitLab Pages config on your self-hosted instance.  
  2. Creating a Cloudflared configuration to route traffic to GitLab Pages via a secure tunnel.

1. GitLab Configuration

In your GitLab configuration file (often /etc/gitlab/gitlab.rb for Omnibus installations), you might have something like:

external_url 'https://git.example.com'
pages_external_url 'https://pages.example.com'

# Disable usage reporting
gitlab_rails['usage_ping_enabled'] = false

# Pages configuration
gitlab_pages['enable'] = true
gitlab_pages['listen_proxy'] = "127.0.0.1:8090"
gitlab_pages['auth_servers'] = ["http://127.0.0.1:8090"]

# Domains served by GitLab Pages
gitlab_pages['domains'] = ["pages.example.com"]

# Enable the built-in Pages NGINX
pages_nginx['enable'] = true

# Listen for HTTPS in GitLab’s Pages component
nginx['listen_https'] = true
gitlab_pages['external_https'] = ['127.0.0.1:8443']

Note: The exact ports (e.g. 8090, 8443) may vary depending on your setup or preferences. Adjust as needed.

After editing, run:

sudo gitlab-ctl reconfigure

2. Example Cloudflared Configuration

Create or edit your Cloudflared configuration file, commonly found at:

/etc/cloudflared/config.yml

(Adjust the file path based on your system.)

Below is an example configuration that routes Pages (and, optionally, the main GitLab UI) through the tunnel:

ingress:
  - hostname: pages.example.com
    service: https://127.0.0.1:8443
    originRequest:
      noTLSVerify: true
      httpHostHeader: pages.example.com

  # (Optional) If you want to proxy the main GitLab web UI:
  - hostname: git.example.com
    service: https://127.0.0.1:443
    originRequest:
      noTLSVerify: true
      httpHostHeader: git.example.com

  # Catch-all for any other requests
  - service: http_status:404

Important Keys

  1. hostname: Must match the domain you configured for GitLab Pages or the main GitLab instance.  
  2. service: Points to your local GitLab Pages HTTPS port (8443 in this example).  
  3. noTLSVerify: Bypasses TLS verification if your certificate is self-signed.  
  4. httpHostHeader: Ensures GitLab Pages sees the correct Host header, preventing unwanted redirects.
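
If you run a named tunnel, each public hostname also needs a DNS route pointing at the tunnel. A sketch assuming a tunnel named gitlab:

cloudflared tunnel route dns gitlab pages.example.com
cloudflared tunnel route dns gitlab git.example.com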

3. Restart Services

Apply your changes by restarting the necessary services:

sudo systemctl restart cloudflared

If you’re using Omnibus GitLab:

sudo gitlab-ctl reconfigure

4. Verify the Domain

In GitLab (under Settings → Pages for your project), ensure pages.example.com is added as the project’s custom domain. If it’s the only domain, consider marking it as “Primary” so that all traffic is served without redirects to other domains.


5. Test Your Setup

  1. Go to: https://pages.example.com
  2. Confirm your Pages site loads as expected (and is no longer redirecting to the main GitLab URL).

If you still see redirection or an error, double-check:


That’s it! With this configuration, requests to https://pages.example.com will be routed securely through Cloudflared to your self-hosted GitLab Pages service.

Applications

GitLab: Metal EE to Turnkey EE

Objective

This guide covers the process of migrating to GitLab Enterprise Edition (EE) within a container environment, specifically using TurnKey Linux containers. We'll address how to properly restore a backup from another GitLab instance and upgrade to the latest version.

Background

When running GitLab in containerized environments, you can't modify the kernel directly. Using a pre-configured TurnKey instance that runs GitLab CE and upgrading it to EE is an efficient approach that avoids kernel modification issues.

Prerequisites

Step 0: Creating a Proper Full Backup

To avoid issues with missing repositories in backups, always create a complete backup:

# On your source GitLab server
# Create a full backup including repositories using STRATEGY=copy
sudo gitlab-backup create STRATEGY=copy

# This creates a backup file like: 1745848933_2025_04_28_17.11.1-ee_gitlab_backup.tar
# that includes both database AND repositories

# You should also backup these configuration files separately
sudo cp /etc/gitlab/gitlab.rb /path/to/safe/location/gitlab.rb
sudo cp /etc/gitlab/gitlab-secrets.json /path/to/safe/location/gitlab-secrets.json

# Verify backup contains repositories
sudo tar -tf /var/opt/gitlab/backups/your_backup_filename.tar | grep -i repositories | head -20

Step 1: Install GitLab EE with the Correct Version

First, determine the version of your backup:

# Backup filename format indicates version
# Example: 1745757520_2025_04_27_17.9.1-ee_gitlab_backup.tar

Install the matching GitLab EE version:

# Add the GitLab EE repository
curl -s https://packages.gitlab.com/install/repositories/gitlab/gitlab-ee/script.deb.sh | sudo bash

# Check available versions
apt-cache madison gitlab-ee

# Install the specific version matching your backup
sudo apt-get install gitlab-ee=17.9.1-ee.0  # Replace with your version

Step 2: Prepare and Restore the Backup

# Stop GitLab services that connect to the database
sudo gitlab-ctl stop puma
sudo gitlab-ctl stop sidekiq

# Create backup directory if it doesn't exist
sudo mkdir -p /var/opt/gitlab/backups

# Copy your backup file to the correct location
sudo cp your_backup_file.tar /var/opt/gitlab/backups/

# Set correct ownership
sudo chown git:git /var/opt/gitlab/backups/your_backup_file.tar

# Restore the backup
sudo gitlab-backup restore BACKUP=your_backup_timestamp_version

Step 3: Post-Restore Configuration

# Reconfigure and restart GitLab
sudo gitlab-ctl reconfigure
sudo gitlab-ctl restart

# Verify all services are running
sudo gitlab-ctl status

Step 4: Update to the Latest GitLab EE Version

To properly update GitLab to the latest version, follow these steps to fix repository configuration issues:

1. Complete Repository Reset

Remove all conflicting and outdated GitLab repository configurations:

sudo rm -f /etc/apt/sources.list.d/gitlab*
sudo rm -f /etc/apt/preferences.d/gitlab*
sudo rm -f /usr/share/keyrings/gitlab*

2. Proper GPG Key Installation

Install the GPG key correctly using the modern method:

sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://packages.gitlab.com/gitlab/gitlab-ee/gpgkey | sudo gpg --dearmor -o /etc/apt/keyrings/gitlab-ee-archive-keyring.gpg

3. Modern Repository Configuration

Use the newer signed-by syntax which explicitly links the repository to its key:

echo "deb [signed-by=/etc/apt/keyrings/gitlab-ee-archive-keyring.gpg] https://packages.gitlab.com/gitlab/gitlab-ee/debian bookworm main" | sudo tee /etc/apt/sources.list.d/gitlab_ee.list

4. Update and Install Latest Version

With proper configuration in place, update without version pinning:

sudo apt-get update
sudo apt-get install gitlab-ee
sudo gitlab-ctl reconfigure

Troubleshooting Guide

Issue: Version Mismatch During Restore

If the restore fails with a version mismatch error, make sure to install the exact GitLab version from the backup first, then upgrade after a successful restore.
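
To see which version is currently installed before picking the package to match the backup, you can check:

sudo gitlab-rake gitlab:env:info | head
# or
head -1 /opt/gitlab/version-manifest.txt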

Issue: GitLab Not Updating to Latest Version

If GitLab remains at the old version after trying to update, the issue is typically related to:

Follow the complete repository reset procedure in Step 4 to resolve these issues.

Issue: Repository Signing Error

If you see GPG key errors when updating packages, follow the GPG key installation in Step 4 to properly configure the signing key.

Best Practices

  1. Always match the GitLab version to your backup version during restoration
  2. Perform a complete system backup before attempting any upgrade
  3. Verify successful restoration before upgrading to the latest version
  4. Set up proper repository configurations to enable seamless future updates

Why This Approach Works

Using this method ensures that your GitLab instance is properly restored with all user data, repositories, and settings intact, while avoiding kernel modification issues common in containerized environments. The proper repository setup ensures you can easily update to newer versions as they become available.

Applications

Mastodon: Change Username

This article explains how to manually “rename” a local Mastodon account by transferring its content to a newly created account in your Mastodon instance’s database. This approach does not preserve followers/followings. It is a risky, unsupported method—proceed only if you fully understand the implications and have a complete database backup.


Important Disclaimers


1. Create the Target Account

  1. Log in to your Mastodon instance’s web interface as an admin.
  2. Create a new local account using the desired/target username.
  3. Confirm you can log in as this new user to ensure it is recognized in the system.

2. Access the Rails Console

  1. SSH into your Mastodon server.
  2. Switch to your Mastodon user (commonly mastodon or mastodon-user).
  3. Navigate to your Mastodon installation directory (e.g., /home/mastodon/live).
  4. Start the Rails console in production mode:
    RAILS_ENV=production bin/rails console
    
  5. You should see a Ruby prompt (irb(main):001:0>).

3. Identify the Old and New Accounts

In the Rails console, look up both the old and the new Account objects:

old_username = 'oldusername'  # Replace with the old username
new_username = 'newusername'  # Replace with the new username

old_account = Account.find_by(username: old_username, domain: nil)
new_account = Account.find_by(username: new_username, domain: nil)

if old_account.nil?
  puts "Old account not found! Check old_username."
end

if new_account.nil?
  puts "New account not found! Check new_username."
end

puts "Old account ID: #{old_account.id}"
puts "New account ID: #{new_account.id}"

Note: domain: nil ensures you are finding local (rather than remote) accounts.


4. Transfer Content (Without Followers)

The following code reassigns items like statuses, favorites, bookmarks, and notifications from the old account to the new account. We are deliberately skipping follower/following relationships:

# Move statuses to the new account
Status.where(account_id: old_account.id)
      .update_all(account_id: new_account.id)

# Move favourites
Favourite.where(account_id: old_account.id)
         .update_all(account_id: new_account.id)

# Move bookmarks
Bookmark.where(account_id: old_account.id)
        .update_all(account_id: new_account.id)

# Move notifications
Notification.where(account_id: old_account.id)
            .update_all(account_id: new_account.id)

# Optionally move pinned statuses
Pin.where(account_id: old_account.id)
   .update_all(account_id: new_account.id)

Note: You can adapt this pattern for other tables, like Poll or MediaAttachment, if needed.

# Polls
Poll.where(account_id: old_account.id).update_all(account_id: new_account.id)

# Media attachments (if you want them reassigned from old to new account explicitly)
MediaAttachment.where(account_id: old_account.id).update_all(account_id: new_account.id)

5. Retire or Archive the Old Account

Once you confirm data has moved, you can:
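
For example, you can disable login for the old account from the command line (tootctl ships with Mastodon; run it from the installation directory):

RAILS_ENV=production bin/tootctl accounts modify oldusername --disable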


6. Verify the New Account

  1. Log in as the new user in the web UI.
  2. Check that statuses, favorites, and bookmarks have transferred.
  3. Confirm pinned statuses (if any) display properly.
  4. The old account should no longer have these items.

7. Reindex (If Using ElasticSearch)

If your instance uses ElasticSearch or advanced indexing:

RAILS_ENV=production bin/tootctl search deploy

This ensures the newly transferred posts are indexed correctly.


Final Notes

Applications

GitLab & GitLab Pages on Separate IPs


Goal: Run the core GitLab instance and the GitLab Pages service on different IP addresses while using Let’s Encrypt certificates managed outside of Omnibus. This guide documents every key gitlab.rb setting required, why it exists, and the common pitfalls that bite first‑time deployments.


1  Topology Overview

Component     | FQDN                | Listens on        | Description
GitLab (core) | git.PRIMARY_DOMAIN  | PRIMARY_IP:443/80 | Standard web UI/API, served by Omnibus NGINX
GitLab Pages  | prod.PRIMARY_DOMAIN | PAGES_IP:443/80   | Serves static pages; runs its own Go HTTP server
Internet ─▶ 443 ➜ PRIMARY_IP  ──┐
                               │  Omnibus NGINX  → GitLab Core
Internet ─▶ 443 ➜ PAGES_IP    ─┴── gitlab‑pages (direct bind)

2  gitlab.rb – Directive‑by‑Directive Explanation

external_url 'https://git.PRIMARY_DOMAIN'

Sets the canonical URL for the core GitLab instance. All internal links, OAuth callbacks, and API clients rely on this value.

letsencrypt['enable'] = false

Disables Omnibus's automatic ACME integration. You manage certificates yourself with certbot (or any other tool).

nginx['listen_addresses'] = ['PRIMARY_IP']

Tells Omnibus NGINX only to bind to the primary IP. Prevents it from stealing :443 on the Pages IP.

nginx['ssl_certificate']     = '/etc/letsencrypt/live/git.PRIMARY_DOMAIN/fullchain.pem'
nginx['ssl_certificate_key'] = '/etc/letsencrypt/live/git.PRIMARY_DOMAIN/privkey.pem'

Full‑path PEM pair for the core GitLab site. Read directly from certbot’s live directory.


GitLab Pages block

pages_external_url 'https://prod.PRIMARY_DOMAIN'

Public URL end‑users visit for Pages content. Must match the CN/SAN in the cert below.

gitlab_pages['enable'] = true

Self‑explanatory—starts the Pages service.

gitlab_pages['external_http']  = ['PAGES_IP:80']
gitlab_pages['external_https'] = ['PAGES_IP:443']

Direct binding mode. Pages listens on its own IP instead of being proxied through NGINX.

gitlab_pages['cert']     = '/etc/letsencrypt/live/prod.PRIMARY_DOMAIN/fullchain.pem'
gitlab_pages['cert_key'] = '/etc/letsencrypt/live/prod.PRIMARY_DOMAIN/privkey.pem'

PEM pair for the Pages hostname. Since inplace_chroot is disabled (see below), the service can reach the real FS path.

gitlab_pages['inplace_chroot'] = false

Disables the default chroot jail. Simplifies cert management in containerised environments where an extra security layer is less critical.

gitlab_pages['acme']['enabled'] = false

Stops Pages from requesting its own ACME certs—which would clash with certbot.

pages_nginx['enable'] = false

Omnibus can spawn an internal NGINX reverse‑proxy in front of Pages. We turn it off because Pages is binding directly.

package['modify_kernel_parameters'] = false

On some cloud images/containers, Omnibus cannot change sysctl values. This flag avoids Chef failures.


3  Certbot Shortcuts

# Issue certs (example)
sudo certbot certonly --standalone -d git.PRIMARY_DOMAIN -d prod.PRIMARY_DOMAIN -m you@example.com --agree-tos

Auto‑reload Pages after renewal

Create /etc/letsencrypt/renewal-hooks/post/gitlab-pages-reload.sh:

#!/bin/sh
# Reload Pages after certbot renews prod.PRIMARY_DOMAIN
/usr/bin/gitlab-ctl hup gitlab-pages

Make the script executable with chmod +x. Certbot’s renewal timer will then run it automatically.


4  Firewall Rules

IP         | Port 80 | Port 443
PRIMARY_IP | allow   | allow
PAGES_IP   | allow   | allow

Block all other inbound ports.


5  Troubleshooting Cheat‑Sheet

Symptom | Common Cause | Fix
address already in use :443 in Pages log | Omnibus NGINX bound to 0.0.0.0 | Set nginx['listen_addresses'] to the primary IP only
open /etc/…crt: no such file or directory | Wrong cert path / chroot mismatch | Disable chroot or copy the cert into …/gitlab-pages/etc/
gitlab-pages: runsv not running | gitlab-runsvdir service dead | systemctl start gitlab-runsvdir && systemctl enable gitlab-runsvdir
All services report runsv not running | Container rebooted without runit | Same as above

5½  Keeping the supervisor (gitlab‑runsvdir) alive

GitLab’s runit supervision tree is launched by the systemd unit gitlab-runsvdir.service. If that unit is inactive, every Omnibus component will show runsv not running and no ports will be open.

Why it dies

Make it start reliably

# one‑off recovery
sudo systemctl start gitlab-runsvdir

# persistent across reboots
sudo systemctl enable gitlab-runsvdir

Add a network dependency so the secondary IPs exist before runit starts:

# /etc/systemd/system/gitlab-runsvdir.service (snippet)
[Unit]
After=network-online.target
Wants=network-online.target

Optional watchdog timer

A tiny timer restarts the supervisor if it ever stops unexpectedly:

# /etc/systemd/system/gitlab-runsvdir-watchdog.timer
[Unit]
Description=Restart gitlab-runsvdir if it exits

[Timer]
OnBootSec=5min
OnUnitInactiveSec=1min
Unit=gitlab-runsvdir.service

[Install]
WantedBy=timers.target

Enable with systemctl enable --now gitlab-runsvdir-watchdog.timer.

When gitlab-runsvdir is healthy you will always see both listeners after boot:

ss -ltnp | grep :443
# 172.31.14.12:443 nginx
# 172.31.14.11:443 gitlab-pages

6  Security Notes

7  Full Example gitlab.rb

external_url 'https://git.PRIMARY_DOMAIN'

letsencrypt['enable'] = false

nginx['listen_addresses'] = ['PRIMARY_IP']
nginx['ssl_certificate']     = '/etc/letsencrypt/live/git.PRIMARY_DOMAIN/fullchain.pem'
nginx['ssl_certificate_key'] = '/etc/letsencrypt/live/git.PRIMARY_DOMAIN/privkey.pem'

pages_external_url 'https://prod.PRIMARY_DOMAIN'
gitlab_pages['enable'] = true

gitlab_pages['external_http']  = ['PAGES_IP:80']
gitlab_pages['external_https'] = ['PAGES_IP:443']

# direct certbot PEMs
gitlab_pages['cert']     = '/etc/letsencrypt/live/prod.PRIMARY_DOMAIN/fullchain.pem'
gitlab_pages['cert_key'] = '/etc/letsencrypt/live/prod.PRIMARY_DOMAIN/privkey.pem'

gitlab_pages['inplace_chroot'] = false
gitlab_pages['acme']['enabled'] = false

pages_nginx['enable'] = false
package['modify_kernel_parameters'] = false

Replace PRIMARY_DOMAIN, PRIMARY_IP, and PAGES_IP with your own domain and IP addresses.


8  Command Quick‑Reference

# Apply config
gitlab-ctl reconfigure

# Start / stop Pages
gitlab-ctl restart gitlab-pages
gitlab-ctl tail gitlab-pages

# Restart entire stack after system boot
systemctl start gitlab-runsvdir

Document prepared · May 2025

9  When gitlab-secrets.json (aka secrets.rb) is relocated

Omnibus keeps its encryption keys (CI JWTs, LDAP secrets, backup encryption keys, etc.) in /etc/gitlab/gitlab-secrets.json—older docs sometimes call this secrets.rb. If the file is moved outside /etc/gitlab, GitLab can no longer read the self‑signed certificate or private keys it once generated. The result is TLS mis‑configuration and, if letsencrypt['enable'] is turned on, ACME registration failures.

Fix


10  Chroot ON vs OFF—trade‑offs at a glance

Mode | Advantages | Drawbacks
Chroot ON (gitlab_pages['inplace_chroot'] = true) | Additional isolation (Pages can only see its own tree); blocks path‑traversal exploits inside user pages | Certs must be copied into /var/opt/gitlab/gitlab-pages/etc/; debugging is more complex; breaks on minimal containers lacking pivot_root
Chroot OFF (gitlab_pages['inplace_chroot'] = false) | Pages reads PEMs directly from /etc/letsencrypt/live/... with no duplication; simple certbot renewal hook (gitlab-ctl hup gitlab-pages); works on any container runtime | One less defence layer; rely on VM/container isolation and Unix perms

Rule of thumb: In single‑tenant VMs or containers, disabling the chroot is pragmatic. On a shared host or if you let untrusted users push Pages content, keep the chroot and script the PEM copy in a certbot post‑renew hook.
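
A sketch of such a post-renew hook for the chroot ON case, with hypothetical destination filenames (match them to your gitlab_pages['cert'] and ['cert_key'] paths):

#!/bin/sh
# /etc/letsencrypt/renewal-hooks/post/pages-cert-copy.sh
cp /etc/letsencrypt/live/prod.PRIMARY_DOMAIN/fullchain.pem /var/opt/gitlab/gitlab-pages/etc/pages.crt
cp /etc/letsencrypt/live/prod.PRIMARY_DOMAIN/privkey.pem /var/opt/gitlab/gitlab-pages/etc/pages.key
/usr/bin/gitlab-ctl hup gitlab-pages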