Technical Writing
Homelab: Fully Managed Server
PVE: NVIDIA GPU Passthrough
This guide walks you through creating a Linux VM in Proxmox with GPU pass-through and installing the NVIDIA driver on both Arch Linux and Debian (Bookworm).
1. Proxmox VM Configuration
- Locate and edit the VM configuration file:
/etc/pve/qemu-server/<VMID>.conf
- Add the following settings:
args: -cpu 'host,+kvm_pv_unhalt,+kvm_pv_eoi,hv_vendor_id=NV43FIX,kvm=off'
bios: ovmf
machine: q35
vga: none
- Note about VGA:
- For the initial operating system installation, you may need to set:
vga: std
- After installation, change it back to:
vga: none
and then reboot the VM.
2. Update GRUB
2.1 For Arch Linux
- Edit the GRUB configuration:
sudo nano /etc/default/grub
- Modify the GRUB_CMDLINE_LINUX_DEFAULT line to include:
GRUB_CMDLINE_LINUX_DEFAULT="loglevel=3 quiet splash nvidia-drm.modeset=1"
(Add any other parameters you need for your setup here.)
- Regenerate the GRUB configuration:
sudo grub-mkconfig -o /boot/grub/grub.cfg
2.2 For Debian (Bookworm)
- Edit the GRUB configuration:
sudo nano /etc/default/grub
- Modify or add these lines:
GRUB_CMDLINE_LINUX_DEFAULT="quiet"
GRUB_CMDLINE_LINUX="nvidia-drm.modeset=1"
- Update GRUB:
sudo update-grub
3. Update Package Sources
3.1 For Arch Linux
Arch Linux uses rolling releases, so you typically do not need to edit package sources. Simply keep your system up to date:
sudo pacman -Syu
3.2 For Debian (Bookworm)
- Edit the sources list:
sudo nano /etc/apt/sources.list
- Ensure it contains something like:
deb http://deb.debian.org/debian/ bookworm main non-free-firmware contrib non-free
deb-src http://deb.debian.org/debian/ bookworm main non-free-firmware contrib non-free
deb http://security.debian.org/debian-security bookworm-security main non-free-firmware contrib non-free
deb-src http://security.debian.org/debian-security bookworm-security main non-free-firmware contrib non-free
deb http://deb.debian.org/debian/ bookworm-updates main non-free-firmware contrib non-free
deb-src http://deb.debian.org/debian/ bookworm-updates main non-free-firmware contrib non-free
- Update and upgrade your packages:
sudo apt update
sudo apt upgrade
4. Install the NVIDIA Driver
4.1 For Arch Linux
- Install the NVIDIA driver:
sudo pacman -S nvidia
- Create/edit the modprobe configuration file:
sudo nano /etc/modprobe.d/nvidia.conf
- Add the following lines:
options nvidia_drm modeset=1
options nvidia_drm fbdev=1
- Save and close the file.
4.2 For Debian (Bookworm)
- Install the NVIDIA driver:
sudo apt install nvidia-driver
- Depending on your hardware and kernel, Debian may create the appropriate modprobe files automatically. If additional configuration is needed, place it in /etc/modprobe.d/.
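If you do need it, a minimal sketch that mirrors the Arch settings above might look like this (the file name nvidia-options.conf is an arbitrary choice, not a Debian requirement):
# Hypothetical file name; any .conf under /etc/modprobe.d/ works
sudo tee /etc/modprobe.d/nvidia-options.conf <<'EOF'
options nvidia_drm modeset=1
EOF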
5. Secure Boot Configuration
Before rebooting, decide how you want to handle Secure Boot. If your VM or host uses UEFI with Secure Boot enabled, you have two main options:
Option A: Disable Secure Boot (Easier)
- Disable validation:
sudo mokutil --disable-validation
- Follow the on-screen prompts on the first reboot to complete the process.
Option B: Sign the Driver (More Secure)
- Refer to your distribution’s documentation on how to sign kernel modules:
- This process involves creating or importing a Machine Owner Key (MOK), signing the NVIDIA kernel module, and enrolling the key with your firmware.
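As a rough sketch only (tool locations and module paths vary by distribution; treat every path below as an assumption), the flow usually looks like this:
# 1. Create a Machine Owner Key (MOK)
openssl req -new -x509 -newkey rsa:2048 -nodes -days 36500 \
  -subj "/CN=Local NVIDIA module signing/" \
  -keyout MOK.priv -outform DER -out MOK.der
# 2. Enroll it (confirm in the MOK Manager on the next reboot)
sudo mokutil --import MOK.der
# 3. Sign the NVIDIA module (sign-file ships with the kernel headers; modinfo locates the module)
sudo "/usr/src/linux-headers-$(uname -r)/scripts/sign-file" sha256 MOK.priv MOK.der "$(modinfo -n nvidia)"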
6. Reboot
After completing all the steps relevant to your distribution and Secure Boot configuration:
- Reboot your system:
sudo reboot
- Verify that the NVIDIA drivers are in use by checking:
- Arch Linux:
nvidia-smi
- Debian:
nvidia-smi
If the command shows information about your NVIDIA GPU, the driver is successfully loaded.
Final Notes
- VM Performance: If you are using this VM for GPU pass-through, make sure your Proxmox host IOMMU groups and passthrough settings are properly configured.
- Troubleshooting:
- Check kernel messages (e.g., dmesg) for signs of driver load failures.
- Ensure your VM configuration (qemu-server/<VMID>.conf) properly masks the CPU vendor (hv_vendor_id=NV43FIX) if you are trying to hide the virtualization from the driver (useful in some GPU pass-through scenarios).
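Relating to the VM Performance note above, a quick host-side check (shown for an Intel CPU; substitute amd_iommu for AMD) might look like:
# On the Proxmox host
dmesg | grep -e DMAR -e IOMMU        # IOMMU should be reported as enabled
cat /proc/cmdline                    # expect intel_iommu=on (and often iommu=pt)
lspci -nnk | grep -A 3 -i nvidia     # "Kernel driver in use" should be vfio-pci for the passed-through GPU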
PVE: Custom VLANs
This guide presents two methods for setting up VLANs in Proxmox and configuring a UniFi switch to work with them.
Prerequisites
- Proxmox VE installed
- Root access to the Proxmox host
- UniFi Network Controller access
- Network interface(s) available for configuration
Method 1: Manual VLAN Creation Without Explicit VLAN Tagging
Proxmox Configuration
-
Access the Proxmox host
- SSH into your Proxmox host or access the console directly
-
Edit the network configuration file
- Open the network interfaces configuration file:
nano /etc/network/interfaces
-
Configure the main bridge (vmbr0) and VLAN bridge (vmbr1)
- Add the following configuration:
auto vmbr0
iface vmbr0 inet static
    address 192.168.1.7/24
    gateway 192.168.1.1
    bridge-ports eno1
    bridge-stp off
    bridge-fd 0

auto vmbr1
iface vmbr1 inet static
    address 192.168.2.1/24
    bridge-ports eno1.2
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 2-4094

source /etc/network/interfaces.d/*
-
Save and apply the configuration
- Save the file and exit the editor
- Restart networking or reboot the Proxmox host:
systemctl restart networking
or
reboot
UniFi Switch Configuration for Method 1
-
Access the UniFi Network Controller
- Log in to your UniFi Network Controller interface
-
Navigate to the Devices section
- Find and select the UniFi switch connected to your Proxmox host
-
Locate the correct port
- Identify the port number that your Proxmox host is connected to
-
Configure the port for multiple VLANs
- Click on the port to open its configuration settings
- Set the "Port Profile" to "All"
- In the "Native VLAN" field, enter the VLAN ID for your main network (usually 1)
- In the "Tagged VLANs" field, enter "2-4094" to allow all possible VLANs
-
Enable VLAN awareness on the switch
- In the switch settings, ensure that "VLAN Aware" is turned on
-
Create VLANs in UniFi Controller
- Go to the "Settings" > "Networks" section in your UniFi Controller
- Create a new network for each VLAN you plan to use
- Assign appropriate VLAN IDs to these networks (matching the ones you set up in Proxmox)
-
Configure DHCP and routing (if needed)
- If you want the UniFi Controller to handle DHCP for your VLANs, configure DHCP servers for each VLAN network
- Set up appropriate firewall rules to control traffic between VLANs
-
Apply the changes
- Save the port configuration
- Apply the changes to the switch
-
Verify the configuration
- Check the UniFi Controller's insights or statistics to ensure traffic is flowing correctly on the configured VLANs
Method 2: Using VLAN Tags in Proxmox VMs and UniFi
Proxmox Configuration
-
Access the Proxmox host
- SSH into your Proxmox host or access the console directly
-
Edit the network configuration file
- Open the network interfaces configuration file:
nano /etc/network/interfaces
-
Configure the main bridge (vmbr0)
- The main bridge typically does not need to be changed. Here's an example of a basic default configuration:
auto lo
iface lo inet loopback

iface eno1 inet manual

auto vmbr0
iface vmbr0 inet static
    address 192.168.1.100/24
    gateway 192.168.1.1
    bridge-ports eno1
    bridge-stp off
    bridge-fd 0

source /etc/network/interfaces.d/*
- Adjust the address and gateway as needed for your network
-
Save and apply the configuration
- Save the file and exit the editor
- Restart networking:
systemctl restart networking
-
Configure VLAN tagging for VMs
- When creating or editing a VM in the Proxmox web interface:
- Go to the VM's "Hardware" tab
- Add a new network device or edit an existing one
- Set "Bridge" to vmbr0
- In the "VLAN Tag" field, enter the desired VLAN ID (e.g., 10, 20, 30)
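If you prefer the command line, the same VLAN tag can be applied with qm; the VM ID, bridge, and tag below are placeholders:
# Tag VM 100's first NIC with VLAN 10 on vmbr0
qm set 100 --net0 virtio,bridge=vmbr0,tag=10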
UniFi Switch Configuration for Method 2
-
Access the UniFi Network Controller
- Log in to your UniFi Network Controller interface
-
Navigate to the Devices section
- Find and select the UniFi switch connected to your Proxmox host
-
Locate the correct port
- Identify the port number that your Proxmox host is connected to
-
Configure the port for tagged VLANs
- Click on the port to open its configuration settings
- Set the "Port Profile" to "All"
- In the "Native VLAN" field, enter the VLAN ID for your main network (usually 1)
- In the "Tagged VLANs" field, enter the VLAN IDs you plan to use in your Proxmox VMs (e.g., "10,20,30")
-
Create VLANs in UniFi Controller
- Go to the "Settings" > "Networks" section in your UniFi Controller
- Create new networks for each VLAN, matching the IDs you plan to use in Proxmox VMs
-
Configure DHCP and routing (if needed)
- If you want the UniFi Controller to handle DHCP for your VLANs, configure DHCP servers for each VLAN network
- Set up appropriate firewall rules to control traffic between VLANs
-
Apply the changes
- Save the port configuration
- Apply the changes to the switch
-
Verify the configuration
- Check the UniFi Controller's insights or statistics to ensure traffic is flowing correctly on the configured VLANs
Comparison of Methods
- Method 1 uses a VLAN-aware bridge in Proxmox, which can be more flexible for the host system but may be more complex to set up initially.
- Method 2 keeps the Proxmox network configuration simple and uses VLAN tagging at the VM level. This method is more straightforward and aligns directly with how most network equipment handles VLANs.
Choose the method that best fits your network architecture and management preferences. Method 2 is often preferred for its simplicity and flexibility in managing VLANs on a per-VM basis.
Troubleshooting
- Verify VLAN IDs match between Proxmox (either in the host configuration for Method 1 or VM settings for Method 2) and UniFi configurations
- Check UniFi firewall rules for inter-VLAN traffic
- Use UniFi Controller's built-in tools to test connectivity between VLANs
- In Proxmox, use these commands to verify VLAN configurations:
ip a
bridge vlan show
- For Method 2, ensure the VLAN tag is correctly set in each VM's network device settings
- If using Method 1, check that the VLAN-aware bridge (vmbr1) is correctly configured and up
- Test connectivity from within VMs to ensure they can reach their intended networks
Remember to adjust IP addresses, interfaces, and VLAN IDs as needed for your specific network setup.
PBS: Backup Strategy
3-2-1 Backup Setup
In this example, we are backing up a ZFS pool RAID 1 consisting of two 2TB Samsung 990 EVO PRO SSDs.
The goal is to implement the 3-2-1 backup strategy: three copies of your data on two different media, with one copy offsite. This setup is designed for a homelab using consumer software, which adds some challenges due to the lack of enterprise-level scalability. Here's how to do it.
Step 1: Install Proxmox Backup Server (PBS) with Drive Passthrough
First, install PBS with my drive passthrough script. This passthrough drive will be our first backup. Note that while this setup is fine for a homelab, there is a risk in having the backup drive on the host machine that you should be aware of.
Installation Steps:
- Access Administration:
- Go to the browser and navigate to Administration > Repositories.
- Add or remove appropriate free and enterprise repositories as necessary.
- Update and Reboot:
- Open the shell and run:
apt update
apt upgrade
- Reboot the system if a kernel update is applied.
- Prepare the Backup Drive:
- Go to Administration > Storage > Disks and wipe the disk you intend to use.
- Then, either create a directory or a ZFS filesystem on this drive.
- Add PBS to Proxmox:
- Go back to the dashboard and copy the fingerprint of the PBS instance.
- Ensure the "Add as datastore" option is checked and note the datastore name you select.
- Configure Proxmox:
- In Proxmox, go to Datacenter > Storage > Add > Proxmox Backup Server.
- Set the following values:
- ID: Choose a unique identifier.
- Server: IP address of the PBS instance.
- Username: root@pam or the appropriate user.
- Password: Password set in PBS.
- Datastore: Name you noted earlier.
- Fingerprint: Paste the fingerprint you copied.
Now, you can run your backup.
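For example, a one-off backup of a single VM to the newly added PBS storage could be started from the Proxmox shell like this (the VM ID and storage ID are placeholders):
# Back up VM 100 to the PBS storage added above
vzdump 100 --storage pbs-datastore --mode snapshot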
Step 2: Set Up Samba File Share
Since Rclone doesn't work well with Proton Drive and Proton Drive only supports NTFS, we need to spin up a Windows VM; to get the backup data to it, we'll set up a Samba file share.
Installation and Configuration:
- Install Samba:
apt install samba
- Configure Samba:
- Edit the Samba configuration file:
nano /etc/samba/smb.conf
[SharedDrive]
path = /mnt/new-storage
browseable = yes
read only = no
guest ok = yes
force user = root
- path: Directory you want to share.
- browseable: Allows the share to be visible when browsing network shares.
- read only: Set to no to allow writing to the share.
- guest ok: Set to yes to allow access without a password (optional).
- force user: Ensures files are accessible to all users via Samba.
- Set Permissions:
- Ensure that the directory you're sharing has the correct permissions:
chmod -R 0777 /mnt/new-storage
This makes the directory readable and writable by all users. Adjust permissions as necessary depending on your security requirements.
- Restart Samba Service:
- Apply the changes by restarting the Samba service:
systemctl restart smbd
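To confirm the configuration parses cleanly and the share is actually exported, these checks may help:
testparm -s                  # parse smb.conf and print the effective configuration
smbclient -L localhost -N    # list exported shares without a password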
Step 3: Access the Share from Windows
- Open File Explorer on your Windows PC.
- In the address bar, type \\Proxmox-IP-Address\SharedDrive and press Enter.
- Replace Proxmox-IP-Address with the IP address of your Proxmox server, and SharedDrive with the name of your share as defined in the smb.conf file.
- Authenticate if prompted. If guest access is allowed, it might not ask for credentials; otherwise, use a valid username and password from the Proxmox server.
- Map the Network Drive Permanently (Optional):
- Right-click on "This PC" in File Explorer and select "Map network drive...".
- Choose a drive letter and enter the path to your Proxmox share (e.g., \\Proxmox-IP-Address\SharedDrive).
- Check "Reconnect at sign-in" and click "Finish".
Step 4: Secure Samba Share with a Password (Optional)
If you want to secure your Samba share with a password, follow these steps:
- Create a Samba User: Add a Linux user if it doesn't already exist:
adduser yourusername
Add the user to Samba:
smbpasswd -a yourusername
- Modify Samba Configuration: Edit the Samba configuration file:
nano /etc/samba/smb.conf
[SharedDrive]
path = /mnt/new-storage
browseable = yes
read only = no
valid users = yourusername
force user = root
- Restart Samba Service: Apply the changes by restarting the Samba service:
systemctl restart smbd
By following these steps, you can share the passed-through drive on your Proxmox server with a Windows PC, allowing it to be accessed and used from both systems.
AWS
Multiple Public IPs, one EC2
Below is a step-by-step guide on how to configure your AWS EC2 instance (with Proxmox installed on Debian) so that multiple Elastic IPs can be assigned to different containers or virtual machines. This assumes:
- You already installed Proxmox on a Debian instance running in AWS.
- You have a bridge configured (e.g., vmbr1 with 172.31.14.1/24) to which you attach your containers/VMs.
- You know how to allocate and associate multiple Elastic IPs in the AWS console.
AWS Prerequisites
-
Allocate & Associate EIPs
- In the AWS console, go to EC2 → Elastic IPs and allocate the addresses you need.
- Associate each Elastic IP to your EC2 instance's network interface (ENI) as a secondary private IP. For example:
EIP-1 ↔ 172.31.14.10
EIP-2 ↔ 172.31.14.11
EIP-3 ↔ 172.31.14.12
EIP-4 ↔ 172.31.14.13
-
Disable Source/Dest Check
- In the EC2 console, select your Proxmox instance → Actions → Networking → Change source/dest. check → set to Disable.
- This is crucial if you plan to forward traffic (via NAT or routing) to internal guests.
-
Security Groups
- Make sure the Security Group on your instance allows inbound ports you need (e.g., 22 for SSH, 80 for HTTP, 443 for HTTPS, etc.).
Proxmox Host Setup
-
Verify Secondary IPs on ens33
- Once AWS associates the private IPs (e.g., 172.31.14.10–13), confirm they appear on the Proxmox host:
ip addr show ens33
- If they're not there, manually add them:
sudo ip addr add 172.31.14.10/32 dev ens33
sudo ip addr add 172.31.14.11/32 dev ens33
sudo ip addr add 172.31.14.12/32 dev ens33
sudo ip addr add 172.31.14.13/32 dev ens33
-
Confirm the Bridge (vmbr1)
- You mentioned you have vmbr1 at 172.31.14.1/24.
- This means the Proxmox host uses 172.31.14.1 as its IP on that bridge.
- Any container or VM assigned to vmbr1 can then use IPs in the 172.31.14.x/24 range, with .1 as the gateway.
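If you added the secondary IPs by hand as shown above, a small boot-time script is one way to keep them across reboots (a sketch; the interface name and IPs come from the example and may differ on your host):
#!/bin/sh
# Re-add the secondary private IPs at boot (call from /etc/rc.local or a systemd unit)
for ip in 172.31.14.10 172.31.14.11 172.31.14.12 172.31.14.13; do
    ip addr add "${ip}/32" dev ens33 2>/dev/null || true
done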
Container / VM Network Configuration
-
Create a Container/VM
- In Proxmox, create or edit a container/VM and set its network device to use:
- Bridge: vmbr1
- Static IP: 172.31.14.101/24 (for example)
- Gateway: 172.31.14.1
-
Test Internal Connectivity
- From the host, ping 172.31.14.101.
- Inside the container, ping 172.31.14.1.
- Confirm that traffic flows locally on the bridge.
Making the Guest Public
AWS won't allow traditional layer-2 bridging with random MAC addresses, so we typically do NAT or routed setups:
NAT Method (Recommended)
-
Enable IP Forwarding
echo 1 > /proc/sys/net/ipv4/ip_forward
Or set
net.ipv4.ip_forward=1
in/etc/sysctl.conf
. -
Create DNAT/SNAT Rules
- Suppose
172.31.14.10
is associated with a public EIP and you want to forward all traffic to a container at172.31.14.101
:
# DNAT: traffic arriving at .10 -> container .101
iptables -t nat -A PREROUTING -d 172.31.14.10 -j DNAT --to-destination 172.31.14.101
# SNAT: traffic leaving .101 -> source it from .10
iptables -t nat -A POSTROUTING -s 172.31.14.101 -j SNAT --to-source 172.31.14.10
- Repeat for other secondary IPs (.11, .12, .13 → different containers).
- Suppose
-
Persistent iptables
- Put those rules in a script (e.g.,
/root/iptables.sh
) and call it on boot (via/etc/rc.local
,systemd
unit, oriptables-persistent
).
- Put those rules in a script (e.g.,
Result:
- Users connect to the public EIP, AWS maps that to
172.31.14.10
, and your Proxmox host DNATs inbound to172.31.14.101
. - Outbound packets from
172.31.14.101
get SNATed to172.31.14.10
, so replies return.
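To keep the NAT rules across reboots (one of the options mentioned under Persistent iptables), the iptables-persistent package can save the live rule set:
apt install iptables-persistent
netfilter-persistent save     # writes the current rules to /etc/iptables/rules.v4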
Direct / Routed Approach
- Disable Source/Dest Check (done).
- Enable
proxy_arp
on Proxmox:
echo 1 > /proc/sys/net/ipv4/conf/ens33/proxy_arp
echo 1 > /proc/sys/net/ipv4/conf/vmbr1/proxy_arp
- Assign the IP inside the Guest
- The container itself configures
172.31.14.10/24
with gateway172.31.14.1
.
- The container itself configures
- Host ARP
- The host must answer ARP for
.10
on behalf of the container (that’s what proxy_arp does).
- The host must answer ARP for
This method lets the container literally own the IP. However, NAT is typically easier and more common in AWS.
Final Checks & Tips
- Open Ports in AWS Security Group
- Make sure inbound rules allow the ports you need for each EIP.
- Test Externally
- From your local machine, try pinging or SSHing into the public EIP.
- Run
tcpdump
on the Proxmox host (ens33
) and inside the container to confirm packet flow if debugging is needed.
- Persist Your Config
- If you added secondary IPs manually with
ip addr add
, incorporate those changes into/etc/network/interfaces
or a startup script. - If you used iptables rules, ensure they load at boot.
- If you added secondary IPs manually with
Conclusion
By disabling source/dest check, attaching multiple private IPs (each mapped to an EIP), and either using NAT or a routed approach, you can give each Proxmox container (or VM) its own unique public IP address on AWS. The NAT method is simplest: each container has a private IP in the 172.31.14.x/24
range, and iptables translates the traffic to/from the Proxmox host’s secondary IPs. This way, you can host multiple external-facing services on a single AWS Proxmox instance.
Thanks for reading, and happy hosting!
Sharing S3 Buckets
This guide uses two roles—one in the Bucket Owner’s account and one in the Bucket User’s account—so that the Bucket User can manage (and delegate) who in their organization gains access to the shared S3 bucket.
Scenario Overview
-
Bucket Owner’s Account (Account A)
- Owns the S3 bucket.
- Will create an IAM role that grants permissions to access the bucket, and trusts the Bucket User’s AWS account.
-
Bucket User’s Account (Account B)
- Needs to provide access for multiple users or roles within their own organization.
- Will create an IAM role that trusted internal users can assume, which then “chains” into the role in Account A to access the bucket.
Part A: Bucket Owner’s Steps
Step 1: Determine Which S3 Actions to Allow
Decide which S3 permissions to grant. This example allows read-only (List/Get) access. Adjust as needed (e.g., add s3:PutObject, s3:DeleteObject):
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:GetObject",
"s3:GetObjectAttributes",
"s3:ListBucket"
],
"Resource": [
"arn:aws:s3:::YOUR_BUCKET_NAME",
"arn:aws:s3:::YOUR_BUCKET_NAME/path/to/shared/files/*"
]
}
]
}
Tip: If you only want to permit certain folder paths, adjust the resources (e.g.,
arn:aws:s3:::YOUR_BUCKET_NAME/folder-name/*
).
Step 2: Create a Role in the Bucket Owner’s Account
- Go to the AWS Console → IAM → Roles → Create role.
- Select “Another AWS account” as the trusted entity.
- Enter the Bucket User’s AWS Account ID (Account B).
- Attach a custom policy (from Step 1) or create an inline policy that grants the desired S3 permissions.
- Name this role something like CrossAccountS3AccessRole.
- Create the role.
When done, open the role’s Trust relationships tab and ensure it looks roughly like:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::222222222222:root"
},
"Action": "sts:AssumeRole"
}
]
}
Important: Make sure the Principal is correct for the Bucket User’s account. If you plan to use an external ID or conditions, include them here.
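If you do add an external ID condition to the trust policy, the Bucket User then passes it at assume time; for example (the external ID value is a placeholder):
aws sts assume-role \
  --role-arn arn:aws:iam::111111111111:role/CrossAccountS3AccessRole \
  --role-session-name MyCrossAccountSession \
  --external-id my-shared-external-id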
Finally, copy the ARN for this role, for example:
arn:aws:iam::111111111111:role/CrossAccountS3AccessRole
Part B: Bucket User’s Steps
Step 3: Create a Role in the Bucket User’s Account
In your AWS account (Account B), create a role that your internal users or IAM principals can assume. This “local” role will chain into the bucket owner’s role:
- Go to AWS Console → IAM → Roles → Create role.
- Select your own account (Account B) as the trusted entity (or specify the user/group that can assume it).
- Add a policy that allows “sts:AssumeRole” on the Bucket Owner’s role, for example:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": "sts:AssumeRole",
"Resource": "arn:aws:iam::111111111111:role/CrossAccountS3AccessRole"
}
]
}
- Name the role something like InternalToExternalS3AccessRole.
- Create the role.
Once created, the Trust relationships of this role might look like:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::222222222222:root"
},
"Action": "sts:AssumeRole"
}
]
}
Note: You can restrict the principal further to specific users or groups within your account.
Step 4: Have Internal Users Assume Your Local Role
Now, internal developers or automation in your account can do the following:
- Assume the InternalToExternalS3AccessRole in your account (Account B).
- That role policy grants sts:AssumeRole on the bucket owner's CrossAccountS3AccessRole (in Account A).
- Finally, assume that external role to gain S3 access.
Example flow in a developer’s local environment:
# 1) Assume your local role (Account B) to “bridge” into the other account
aws sts assume-role \
--role-arn arn:aws:iam::222222222222:role/InternalToExternalS3AccessRole \
--role-session-name MyLocalSession
# 2) Using the temporary credentials from step (1), assume the external role:
aws sts assume-role \
--role-arn arn:aws:iam::111111111111:role/CrossAccountS3AccessRole \
--role-session-name MyCrossAccountSession
# 3) You'll get credentials that let you access the S3 bucket in Account A.
Part C: Using Environment Variables & jq
To automate environment variables in your shell:
# 1) Assume your local role (in Account B)
eval $(
aws sts assume-role \
--role-arn arn:aws:iam::222222222222:role/InternalToExternalS3AccessRole \
--role-session-name MyLocalSession \
| jq -r '.Credentials | "export AWS_ACCESS_KEY_ID=\(.AccessKeyId)\nexport AWS_SECRET_ACCESS_KEY=\(.SecretAccessKey)\nexport AWS_SESSION_TOKEN=\(.SessionToken)"'
)
# 2) Chain-assume the external (Bucket Owner) role
eval $(
aws sts assume-role \
--role-arn arn:aws:iam::111111111111:role/CrossAccountS3AccessRole \
--role-session-name MyCrossAccountSession \
| jq -r '.Credentials | "export AWS_ACCESS_KEY_ID=\(.AccessKeyId)\nexport AWS_SECRET_ACCESS_KEY=\(.SecretAccessKey)\nexport AWS_SESSION_TOKEN=\(.SessionToken)"'
)
# 3) Test S3 operations
aws s3 ls s3://YOUR_BUCKET_NAME/path/
Note: Each set of temporary credentials typically expires after 1 hour (or a configured max session duration). You’ll need to re-run these commands as needed.
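To double-check which principal your current credentials resolve to before testing S3, you can run:
aws sts get-caller-identity    # should show the assumed-role ARN in Account A (111111111111)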
Summary Diagram
-
Account A (Bucket Owner)
- Role:
CrossAccountS3AccessRole
- Trusts: Account B
- Policy: Read/Write to S3 (as required)
- Role:
-
Account B (Bucket User)
- Role:
InternalToExternalS3AccessRole
- Trusts: Principals in Account B
- Policy:
sts:AssumeRole
→CrossAccountS3AccessRole
in Account A
- Role:
-
Internal Principals in Account B assume InternalToExternalS3AccessRole, which then assumes the CrossAccountS3AccessRole, granting access to the S3 bucket in Account A.
FAQs
-
Why two roles instead of one?
- Security & Flexibility: The bucket owner sets S3 permissions, but doesn’t manage which individuals in your organization assume them. You (the Bucket User) control internal access separately.
-
What if I just want direct access from my account without an extra role?
- That’s possible if the Bucket Owner trusts the entire external account or specific principals directly. However, it’s often cleaner to separate them for security and delegated access control.
-
Session duration?
- Both roles have a MaxSessionDuration. By default, it's 1 hour; it can be extended up to 12 hours in IAM settings.
-
Access Denied errors?
- Check the trust policies on both roles.
- Ensure you’re assuming the correct role(s).
- Verify the resource ARNs and AWS account IDs are correct.
Additional Best Practice Considerations
- Condition Keys / External ID: For tighter security or third-party scenarios, consider using condition keys or an external ID in the trust policy.
- Logging & Monitoring: Enable CloudTrail to audit STS usage and S3 access logs (or Server Access Logging) to track object-level events.
- Encryption: Consider S3 default encryption or KMS keys for sensitive data, plus aws:SecureTransport conditions to force HTTPS.
- MFA: You can require multi-factor authentication to assume critical roles.
These measures can further strengthen your cross-account setup.
Conclusion
By having two roles—one controlled by the Bucket Owner and one by the Bucket User—you achieve a clear separation of responsibilities. The Bucket Owner decides which actions are allowed in the bucket, while the Bucket User decides who in their organization has permission to assume that cross-account role.
EC2 Recovery
This guide demonstrates how to recover access to an EC2 instance when both SSH and Serial Console access are unavailable. We'll use a Proxmox instance as an example, but this method works for any Linux-based EC2 instance.
Prerequisites
- AWS Console access
- Basic understanding of Linux commands
- A working EC2 instance to use as rescue system (Amazon Linux 2 recommended)
Recovery Steps
1. Create a Snapshot of the Affected Volume
2. Stop the Affected Instance
3. Detach the Root Volume
- Select the stopped instance
- Scroll to 'Storage' tab
- Note the volume ID of the root volume
- Right-click the volume → Detach Volume
- Confirm detach
4. Launch a Rescue Instance
- Launch a new EC2 instance
- Use Amazon Linux 2 AMI
- Same availability zone as affected volume
- Configure security group to allow SSH access
5. Attach Problem Volume to Rescue Instance
- Select the detached volume
- Actions → Attach Volume
- Select rescue instance
- Note the device name (e.g., /dev/sdb or /dev/xvdb)
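If you prefer the AWS CLI over the console, steps 1-5 roughly correspond to the following (all IDs and the device name are placeholders):
aws ec2 create-snapshot --volume-id vol-xxxxxxxx --description "pre-recovery snapshot"
aws ec2 stop-instances --instance-ids i-affected
aws ec2 detach-volume --volume-id vol-xxxxxxxx
aws ec2 attach-volume --volume-id vol-xxxxxxxx --instance-id i-rescue --device /dev/sdf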
6. Access and Mount the Volume
# Connect to rescue instance
ssh -i your-key.pem ec2-user@rescue-instance-ip
# List available disks to find attached volume
sudo fdisk -l
# or
lsblk
# Create mount point
sudo mkdir -p /mnt/rescue
# Mount the root partition
sudo mount /dev/xvdb1 /mnt/rescue # Adjust device name as needed
7. Troubleshoot and Fix Issues
Common File Locations
# Network Configuration
sudo nano /mnt/rescue/etc/network/interfaces # Debian/Ubuntu/Proxmox
sudo nano /mnt/rescue/etc/sysconfig/network-scripts/ifcfg-eth0 # RHEL/CentOS
# SSH Configuration
sudo nano /mnt/rescue/etc/ssh/sshd_config
# System Logs
sudo less /mnt/rescue/var/log/syslog # Debian/Ubuntu
sudo less /mnt/rescue/var/log/messages # RHEL/CentOS
Example: Fixing Proxmox Network Configuration
# View current network config
sudo cat /mnt/rescue/etc/network/interfaces
# Edit if needed
sudo nano /mnt/rescue/etc/network/interfaces
# Example of working basic config:
auto lo
iface lo inet loopback
iface eth0 inet manual
auto vmbr0
iface vmbr0 inet dhcp
bridge-ports eth0
bridge-stp off
bridge-fd 0
8. Cleanup and Restore
# Unmount volume
cd ~ # Ensure you're not in mounted directory
sudo umount /mnt/rescue
After unmounting:
- Detach volume from rescue instance in AWS Console
- Reattach to original instance as root volume
- Start original instance
- Test connectivity
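The same cleanup can be done from the CLI; note that the device name must match the instance's original root device (often /dev/xvda or /dev/sda1, an assumption you should verify in the console):
aws ec2 detach-volume --volume-id vol-xxxxxxxx
aws ec2 attach-volume --volume-id vol-xxxxxxxx --instance-id i-affected --device /dev/xvda
aws ec2 start-instances --instance-ids i-affected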
Common Issues and Solutions
Network Configuration Issues
- Check for correct interface names (eth0, ens5, etc.)
- Verify gateway configuration
- Ensure no conflicting network bridges
- Check for valid IP addressing
Boot Issues
- Check /boot partition isn't full
- Verify fstab entries are correct
- Check for kernel issues in grub configuration
Permission Issues
- Verify SSH key permissions
- Check SELinux/AppArmor settings
- Validate root access configuration
Prevention Tips
- Always maintain a snapshot of working system
- Document working network configuration
- Use AWS Systems Manager Session Manager as backup access method
- Keep serial console access enabled
- Document network changes before implementing
- Test changes in staging environment first
Remember: Always maintain current backups and document your system configuration to make recovery easier when needed.
LXQt with VNC
This guide will help you set up the lightweight LXQt desktop environment with VNC on a Debian EC2 instance, allowing for a graphical desktop interface through a secure connection.
Initial Setup
# Update system packages
sudo apt update
sudo apt upgrade -y
# Install LXQt desktop environment
sudo apt install lxqt -y
# Install TigerVNC server
sudo apt install tigervnc-standalone-server tigervnc-common -y
# Set VNC password - you will be prompted to enter a password
vncpasswd
# Create VNC config directory if it doesn't exist
mkdir -p ~/.vnc
# Create a simple xstartup file
cat > ~/.vnc/xstartup << 'EOF'
#!/bin/sh
export XDG_SESSION_TYPE=x11
export DESKTOP_SESSION=lxqt
exec startlxqt
EOF
# Make xstartup executable
chmod +x ~/.vnc/xstartup
# Kill any existing VNC server instances (optional, use if needed)
# vncserver -kill :1
# Start VNC server
vncserver :1
Connecting to Your VNC Server
From your local machine:
# Create SSH tunnel and keep it open
# Replace with your actual key file and EC2 public DNS
ssh -L 5901:localhost:5901 -i "your-key.pem" user@your-ec2-public-dns "vncserver :1; sleep infinity"
In a new terminal window on your local machine:
# Connect to the VNC server using TigerVNC viewer
xtigervncviewer localhost:1
Useful VNC Management Commands
# Check if VNC server is running
vncserver -list
# Manually kill VNC server
vncserver -kill :1
# Start VNC with specific resolution
vncserver :1 -geometry 1920x1080
# Start VNC with more parameters (depth, geometry, etc.)
vncserver :1 -depth 24 -geometry 1920x1080 -localhost no
Security Best Practices
- Always use SSH tunneling when connecting over the internet
- Keep your EC2 security group restrictive (don't open VNC port 5901 publicly)
- Use a strong VNC password
- Consider using the -localhost yes parameter when starting VNC to only allow connections via the SSH tunnel
Applications
GitLab: Migrate YH CE to BM EE
Overview
This guide covers migrating a GitLab instance from Yunohost to a standalone server, including:
- Data migration from Yunohost
- User migration from LDAP to local authentication
- Upgrade from Community Edition (CE) to Enterprise Edition (EE)
1. Create and Transfer Backups
On the Yunohost server:
# Create GitLab backup
sudo gitlab-backup create
# Copy the three required files from the old server (run these on the new server)
scp user@old-server:/home/yunohost.backup/archives/[TIMESTAMP]_gitlab_backup.tar /tmp/
scp user@old-server:/etc/gitlab/gitlab.rb /tmp/
scp user@old-server:/etc/gitlab/gitlab-secrets.json /tmp/
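To make sure the backup archive wasn't corrupted in transit, comparing checksums on both servers is a cheap safeguard:
# Run against the archive on each server and compare the output
sha256sum /path/to/[TIMESTAMP]_gitlab_backup.tar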
2. Set Up New Server
Initial Package Setup
# Update package list
sudo apt-get update
# Install required packages
sudo apt-get install -y curl openssh-server ca-certificates perl postfix git
# Add both CE and EE repositories
curl https://packages.gitlab.com/install/repositories/gitlab/gitlab-ce/script.deb.sh | sudo bash
curl https://packages.gitlab.com/install/repositories/gitlab/gitlab-ee/script.deb.sh | sudo bash
# Update package list again after adding repos
sudo apt update
# Install GitLab CE first
sudo EXTERNAL_URL="https://gitlab.example.com" apt-get install gitlab-ce
3. Restore Data to CE Instance
# Stop GitLab services
sudo gitlab-ctl stop
# Move the backup files to correct locations
sudo mv /tmp/gitlab.rb /etc/gitlab/
sudo mv /tmp/gitlab-secrets.json /etc/gitlab/
sudo mv /tmp/*_gitlab_backup.tar /var/opt/gitlab/backups/
# Set correct permissions
sudo chmod 600 /etc/gitlab/gitlab-secrets.json
sudo chown root:root /etc/gitlab/gitlab*
# Restore the backup
sudo gitlab-backup restore BACKUP=[TIMESTAMP]
# Reconfigure and restart GitLab
sudo gitlab-ctl reconfigure
sudo gitlab-ctl restart
4. Configure Authentication Methods
First, access the Rails console:
sudo gitlab-rails console -e production
Then run these Ruby commands:
# Enable password authentication and sign-in
settings = ApplicationSetting.current
settings.update_column(:password_authentication_enabled_for_web, true)
settings.update_column(:signin_enabled, true)
# Exit console
quit
5. Migrate Users from LDAP
First, access the Rails console:
sudo gitlab-rails console -e production
Then run these Ruby commands:
# Find and update each LDAP user (repeat for each user)
user = User.find_by_username('username')
# Remove LDAP identities
user.identities.where(provider: 'ldap').destroy_all
# Reset authentication settings
if user.respond_to?(:authentication_type)
user.update_column(:authentication_type, nil)
end
# Set password expiry to far future
if user.respond_to?(:password_expires_at)
user.update_column(:password_expires_at, Time.now + 10.years)
end
# Ensure user account is active and reset login attempts
user.update_columns(
state: 'active',
failed_attempts: 0
)
# Set new password
user.password = 'temporary_password'
user.password_confirmation = 'temporary_password'
user.save!
# Verify changes
puts "Active: #{user.state}"
puts "Failed attempts: #{user.failed_attempts}"
puts "Password expires at: #{user.password_expires_at}"
puts "Identities: #{user.identities.pluck(:provider)}"
# Exit console
quit
Alternative password reset method:
sudo gitlab-rake "gitlab:password:reset[username]"
6. Verify CE Installation
- Verify GitLab CE is accessible via web browser
- Test user login with the new passwords
- Have users change their temporary passwords
- Confirm repositories and data are present
- Create a test issue and commit to verify functionality
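GitLab's built-in checks are a quick way to back up this manual verification:
sudo gitlab-rake gitlab:check SANITIZE=true   # environment and application checks
sudo gitlab-rake gitlab:doctor:secrets        # verifies encrypted values can be decrypted with gitlab-secrets.json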
7. Upgrade to Enterprise Edition
Only proceed after confirming CE is working correctly:
# Upgrade to GitLab EE
sudo apt install gitlab-ee
8. Install GitLab EE License
- Generate and install the license
- Navigate to Admin Area > Settings > General
- Upload your license file
- Accept Terms of Service
- Click "Add License"
9. Final Configuration
# Edit GitLab configuration
sudo nano /etc/gitlab/gitlab.rb
# Add these lines:
gitlab_rails['usage_ping_enabled'] = false
gitlab_rails['gitlab_url'] = 'http://your.gitlab.url'
# Reconfigure and restart
sudo gitlab-ctl reconfigure
sudo gitlab-ctl restart
Troubleshooting
# View logs
sudo gitlab-ctl tail
# Check GitLab status
sudo gitlab-ctl status
# If PostgreSQL issues occur
sudo mv /var/opt/gitlab/postgresql/data /var/opt/gitlab/postgresql/data.bak
sudo mkdir -p /var/opt/gitlab/postgresql/data
sudo chown -R gitlab-psql:gitlab-psql /var/opt/gitlab/postgresql/data
sudo gitlab-ctl reconfigure
sudo gitlab-ctl restart postgresql
Remember to:
- Replace placeholders with your actual values
- Document temporary passwords
- Keep track of migrated users
- Keep old server running until migration is confirmed
- Backup before each major step
- Test thoroughly after each configuration change
GitLab Pages: Cloudflared
Below is a minimal example of how to configure a self-hosted GitLab instance to serve GitLab Pages behind Cloudflared. This guide walks through:
- Setting the GitLab Pages config on your self-hosted instance.
- Creating a Cloudflared configuration to route traffic to GitLab Pages via a secure tunnel.
1. GitLab Configuration
In your GitLab configuration file (often /etc/gitlab/gitlab.rb
for Omnibus installations), you might have something like:
external_url 'https://git.example.com'
pages_external_url 'https://pages.example.com'
# Disable usage reporting
gitlab_rails['usage_ping_enabled'] = false
# Pages configuration
gitlab_pages['enable'] = true
gitlab_pages['listen_proxy'] = "127.0.0.1:8090"
gitlab_pages['auth_servers'] = ["http://127.0.0.1:8090"]
# Domains served by GitLab Pages
gitlab_pages['domains'] = ["pages.example.com"]
# Enable the built-in Pages NGINX
pages_nginx['enable'] = true
# Listen for HTTPS in GitLab’s Pages component
nginx['listen_https'] = true
gitlab_pages['external_https'] = ['127.0.0.1:8443']
Note: The exact ports (e.g. 8090, 8443) may vary depending on your setup or preferences. Adjust as needed.
After editing, run:
sudo gitlab-ctl reconfigure
2. Example Cloudflared Configuration
Create or edit your Cloudflared configuration file, commonly found at:
/etc/cloudflared/config.yml
(Adjust the file path based on your system.)
Below is an example that:
- Proxies pages.example.com to the GitLab Pages HTTPS listener on 127.0.0.1:8443.
- Disables TLS verification (useful if your internal certificate is self-signed or otherwise untrusted).
- Ensures the Host header remains pages.example.com so GitLab Pages recognizes the incoming request.
ingress:
- hostname: pages.example.com
service: https://127.0.0.1:8443
originRequest:
noTLSVerify: true
httpHostHeader: pages.example.com
# (Optional) If you want to proxy the main GitLab web UI:
- hostname: git.example.com
service: https://127.0.0.1:443
originRequest:
noTLSVerify: true
httpHostHeader: git.example.com
# Catch-all for any other requests
- service: http_status:404
Important Keys
- hostname: Must match the domain you configured for GitLab Pages or the main GitLab instance.
- service: Points to your local GitLab Pages HTTPS port (8443 in this example).
- noTLSVerify: Bypasses TLS verification if your certificate is self-signed.
- httpHostHeader: Ensures GitLab Pages sees the correct Host header, preventing unwanted redirects.
3. Restart Services
Apply your changes by restarting the necessary services:
sudo systemctl restart cloudflared
If you’re using Omnibus GitLab:
sudo gitlab-ctl reconfigure
4. Verify the Domain
In GitLab (under Settings → Pages for your project), ensure pages.example.com
is added as the project’s custom domain. If it’s the only domain, consider marking it as “Primary” so that all traffic is served without redirects to other domains.
5. Test Your Setup
- Go to:
https://pages.example.com
- Confirm your Pages site loads as expected (and is no longer redirecting to the main GitLab URL).
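If the hostname doesn't resolve at all, make sure the tunnel actually has a DNS route for it; with the cloudflared CLI that can be created like this (the tunnel name is a placeholder):
cloudflared tunnel route dns my-tunnel pages.example.com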
If you still see redirection or an error, double-check:
- Cloudflare DNS: Ensure
CNAME
orA
records forpages.example.com
point to your Cloudflare Tunnel domain (often<tunnel-id>.cfargotunnel.com
). - GitLab Configuration: Confirm
pages_external_url
matchespages.example.com
exactly and that the Pages domain is properly configured under Settings → Pages. - Host Header: Make sure
httpHostHeader
is set incloudflared
to the Pages domain.
That’s it! With this configuration, requests to https://pages.example.com
will be routed securely through Cloudflared to your self-hosted GitLab Pages service.
GitLab: Metal EE to Turnkey EE
Objective
This guide covers the process of migrating to GitLab Enterprise Edition (EE) within a container environment, specifically using TurnKey Linux containers. We'll address how to properly restore a backup from another GitLab instance and upgrade to the latest version.
Background
When running GitLab in containerized environments, you can't modify the kernel directly. Using a pre-configured TurnKey instance that runs GitLab CE and upgrading it to EE is an efficient approach that avoids kernel modification issues.
Prerequisites
- TurnKey GitLab container deployed in your virtualization platform
- GitLab EE backup file from your source system
- GitLab EE license
Step 0: Creating a Proper Full Backup
To avoid issues with missing repositories in backups, always create a complete backup:
# On your source GitLab server
# Create a full backup including repositories using STRATEGY=copy
sudo gitlab-backup create STRATEGY=copy
# This creates a backup file like: 1745848933_2025_04_28_17.11.1-ee_gitlab_backup.tar
# that includes both database AND repositories
# You should also backup these configuration files separately
sudo cp /etc/gitlab/gitlab.rb /path/to/safe/location/gitlab.rb
sudo cp /etc/gitlab/gitlab-secrets.json /path/to/safe/location/gitlab-secrets.json
# Verify backup contains repositories
sudo tar -tf /var/opt/gitlab/backups/your_backup_filename.tar | grep -i repositories | head -20
Step 1: Install GitLab EE with the Correct Version
First, determine the version of your backup:
# Backup filename format indicates version
# Example: 1745757520_2025_04_27_17.9.1-ee_gitlab_backup.tar
Install the matching GitLab EE version:
# Add the GitLab EE repository
curl -s https://packages.gitlab.com/install/repositories/gitlab/gitlab-ee/script.deb.sh | sudo bash
# Check available versions
apt-cache madison gitlab-ee
# Install the specific version matching your backup
sudo apt-get install gitlab-ee=17.9.1-ee.0 # Replace with your version
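Optionally, you can pin the package so apt doesn't jump past the backup's version before the restore is done (remember to unhold it before Step 4):
sudo apt-mark hold gitlab-ee
# later, once the restore has been verified:
sudo apt-mark unhold gitlab-ee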
Step 2: Prepare and Restore the Backup
# Stop GitLab services that connect to the database
sudo gitlab-ctl stop puma
sudo gitlab-ctl stop sidekiq
# Create backup directory if it doesn't exist
sudo mkdir -p /var/opt/gitlab/backups
# Copy your backup file to the correct location
sudo cp your_backup_file.tar /var/opt/gitlab/backups/
# Set correct ownership
sudo chown git:git /var/opt/gitlab/backups/your_backup_file.tar
# Restore the backup
sudo gitlab-backup restore BACKUP=your_backup_timestamp_version
Step 3: Post-Restore Configuration
# Reconfigure and restart GitLab
sudo gitlab-ctl reconfigure
sudo gitlab-ctl restart
# Verify all services are running
sudo gitlab-ctl status
Step 4: Update to the Latest GitLab EE Version
To properly update GitLab to the latest version, follow these steps to fix repository configuration issues:
1. Complete Repository Reset
Remove all conflicting and outdated GitLab repository configurations:
sudo rm -f /etc/apt/sources.list.d/gitlab*
sudo rm -f /etc/apt/preferences.d/gitlab*
sudo rm -f /usr/share/keyrings/gitlab*
2. Proper GPG Key Installation
Install the GPG key correctly using the modern method:
sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://packages.gitlab.com/gitlab/gitlab-ee/gpgkey | sudo gpg --dearmor -o /etc/apt/keyrings/gitlab-ee-archive-keyring.gpg
3. Modern Repository Configuration
Use the newer signed-by
syntax which explicitly links the repository to its key:
echo "deb [signed-by=/etc/apt/keyrings/gitlab-ee-archive-keyring.gpg] https://packages.gitlab.com/gitlab/gitlab-ee/debian bookworm main" | sudo tee /etc/apt/sources.list.d/gitlab_ee.list
4. Update and Install Latest Version
With proper configuration in place, update without version pinning:
sudo apt-get update
sudo apt-get install gitlab-ee
sudo gitlab-ctl reconfigure
Troubleshooting Guide
Issue: Version Mismatch During Restore
If the restore fails with a version mismatch error, make sure to install the exact GitLab version from the backup first, then upgrade after a successful restore.
Issue: GitLab Not Updating to Latest Version
If GitLab remains at the old version after trying to update, the issue is typically related to:
- Conflicting repository configurations
- Authentication problems with the repository
- Hidden preferences pinning to the old version
Follow the complete repository reset procedure in Step 4 to resolve these issues.
Issue: Repository Signing Error
If you see GPG key errors when updating packages, follow the GPG key installation in Step 4 to properly configure the signing key.
Best Practices
- Always match the GitLab version to your backup version during restoration
- Perform a complete system backup before attempting any upgrade
- Verify successful restoration before upgrading to the latest version
- Set up proper repository configurations to enable seamless future updates
Why This Approach Works
Using this method ensures that your GitLab instance is properly restored with all user data, repositories, and settings intact, while avoiding kernel modification issues common in containerized environments. The proper repository setup ensures you can easily update to newer versions as they become available.
Mastodon: Change Username
This article explains how to manually “rename” a local Mastodon account by transferring its content to a newly created account in your Mastodon instance’s database. This approach does not preserve followers/followings. It is a risky, unsupported method—proceed only if you fully understand the implications and have a complete database backup.
Important Disclaimers
- Mastodon does not officially support direct username changes via database edits.
- In some versions,
tootctl accounts rename
may not exist, leaving manual DB manipulation as your only option. - These steps involve low-level data changes that can break your instance if done incorrectly.
- The instructions below focus on transferring statuses, favorites, bookmarks, notifications, etc., without copying followers.
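Before going further, take a database dump. A minimal sketch, assuming the default database name and role from a standard Mastodon setup (adjust both, and the output path, to your installation):
pg_dump -U mastodon -d mastodon_production -F c -f /var/backups/mastodon_$(date +%F).dump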
1. Create the Target Account
- Log in to your Mastodon instance’s web interface as an admin.
- Create a new local account using the desired/target username.
- Confirm you can log in as this new user to ensure it is recognized in the system.
2. Access the Rails Console
- SSH into your Mastodon server.
- Switch to your Mastodon user (commonly mastodon or mastodon-user).
- Navigate to your Mastodon installation directory (e.g., /home/mastodon/live).
- Start the Rails console in production mode:
RAILS_ENV=production bin/rails console
- You should see a Ruby prompt (irb(main):001:0>).
3. Identify the Old and New Accounts
In the Rails console, look up both the old and the new Account
objects:
old_username = 'oldusername' # Replace with the old username
new_username = 'newusername' # Replace with the new username
old_account = Account.find_by(username: old_username, domain: nil)
new_account = Account.find_by(username: new_username, domain: nil)
if old_account.nil?
puts "Old account not found! Check old_username."
end
if new_account.nil?
puts "New account not found! Check new_username."
end
puts "Old account ID: #{old_account.id}"
puts "New account ID: #{new_account.id}"
Note: domain: nil ensures you are finding local (rather than remote) accounts.
4. Transfer Content (Without Followers)
The following code reassigns items like statuses, favorites, bookmarks, and notifications from the old account to the new account. We are deliberately skipping follower/following relationships:
# Move statuses to the new account
Status.where(account_id: old_account.id)
.update_all(account_id: new_account.id)
# Move favourites
Favourite.where(account_id: old_account.id)
.update_all(account_id: new_account.id)
# Move bookmarks
Bookmark.where(account_id: old_account.id)
.update_all(account_id: new_account.id)
# Move notifications
Notification.where(account_id: old_account.id)
.update_all(account_id: new_account.id)
# Optionally move pinned statuses
Pin.where(account_id: old_account.id)
.update_all(account_id: new_account.id)
Note: You can adapt this pattern for other tables, like Poll or MediaAttachment, if needed.
# Polls
Poll.where(account_id: old_account.id).update_all(account_id: new_account.id)
# Media attachments (if you want them reassigned from old to new account explicitly)
MediaAttachment.where(account_id: old_account.id).update_all(account_id: new_account.id)
5. Retire or Archive the Old Account
Once you confirm data has moved, you can:
- Rename the old account to avoid conflicts:
old_account.update!(username: "oldusername-archived")
- Suspend the old account (prevent further use):
old_account.update!(suspended_at: Time.now)
- Disable the old user record:
old_account.user.update!(disabled: true)
6. Verify the New Account
- Log in as the new user in the web UI.
- Check that statuses, favorites, and bookmarks have transferred.
- Confirm pinned statuses (if any) display properly.
- The old account should no longer have these items.
7. Reindex (If Using ElasticSearch)
If your instance uses ElasticSearch or advanced indexing:
RAILS_ENV=production bin/tootctl search deploy
This ensures the newly transferred posts are indexed correctly.
Final Notes
- This is not an official method to rename Mastodon accounts. The procedure effectively merges old account data into a new account.
- We deliberately did not transfer follower relationships—those remain with the old account.
- Remote servers may still reference the old username for a time, due to caching on the Fediverse.
- Always keep backups of your database. Minor mistakes in database manipulation can cause significant data loss.
GitLab & GitLab Pages on Separate IPs
Self‑Hosted GitLab & GitLab Pages on Separate IPs
Goal Run the core GitLab instance and the GitLab Pages service on different IP addresses while using Let’s Encrypt certificates managed outside of Omnibus. This guide documents every key
gitlab.rb
setting required, why it exists, and the common pitfalls that bite first‑time deployments.
1 Topology Overview
Component | FQDN | Listens on | Description |
---|---|---|---|
GitLab (core) | git.PRIMARY_DOMAIN | PRIMARY_IP:443/80 | Standard web UI/API, served by Omnibus NGINX |
GitLab Pages | prod.PRIMARY_DOMAIN | PAGES_IP:443/80 | Serves static pages; runs its own Go HTTP server |
Internet ─▶ 443 ➜ PRIMARY_IP ──┐
│ Omnibus NGINX → GitLab Core
Internet ─▶ 443 ➜ PAGES_IP ─┴── gitlab‑pages (direct bind)
- Distinct IPs prevent port clashes and simplify TLS.
- Let’s Encrypt via certbot is used for both hostnames; GitLab’s internal ACME is disabled.
2 gitlab.rb
– Directive‑by‑Directive Explanation
external_url 'https://git.PRIMARY_DOMAIN'
Sets the canonical URL for the core GitLab instance. All internal links, OAuth callbacks, and API clients rely on this value.
letsencrypt['enable'] = false
Disables Omnibusʼ automatic ACME integration. You manage certificates yourself with certbot (or any other tool).
nginx['listen_addresses'] = ['PRIMARY_IP']
Tells Omnibus NGINX only to bind to the primary IP. Prevents it from stealing :443
on the Pages IP.
nginx['ssl_certificate'] = '/etc/letsencrypt/live/git.PRIMARY_DOMAIN/fullchain.pem'
nginx['ssl_certificate_key'] = '/etc/letsencrypt/live/git.PRIMARY_DOMAIN/privkey.pem'
Full‑path PEM pair for the core GitLab site. Read directly from certbot’s live directory.
GitLab Pages block
pages_external_url 'https://prod.PRIMARY_DOMAIN'
Public URL end‑users visit for Pages content. Must match the CN/SAN in the cert below.
gitlab_pages['enable'] = true
Self‑explanatory—starts the Pages service.
gitlab_pages['external_http'] = ['PAGES_IP:80']
gitlab_pages['external_https'] = ['PAGES_IP:443']
Direct binding mode. Pages listens on its own IP instead of being proxied through NGINX.
gitlab_pages['cert'] = '/etc/letsencrypt/live/prod.PRIMARY_DOMAIN/fullchain.pem'
gitlab_pages['cert_key'] = '/etc/letsencrypt/live/prod.PRIMARY_DOMAIN/privkey.pem'
PEM pair for the Pages hostname. Since inplace_chroot
is disabled (see below), the service can reach the real FS path.
gitlab_pages['inplace_chroot'] = false
Disables the default chroot jail. Simplifies cert management in containerised environments where an extra security layer is less critical.
gitlab_pages['acme']['enabled'] = false
Stops Pages from requesting its own ACME certs—which would clash with certbot.
pages_nginx['enable'] = false
Omnibus can spawn an internal NGINX reverse‑proxy in front of Pages. We turn it off because Pages is binding directly.
package['modify_kernel_parameters'] = false
On some cloud images/containers, Omnibus cannot change sysctl values. This flag avoids Chef failures.
3 Certbot Shortcuts
# Issue certs (example)
sudo certbot certonly --standalone -d git.PRIMARY_DOMAIN -d prod.PRIMARY_DOMAIN -m you@example.com --agree-tos
Auto‑reload Pages after renewal
Create /etc/letsencrypt/renewal-hooks/post/gitlab-pages-reload.sh
:
#!/bin/sh
# Reload Pages after certbot renews prod.PRIMARY_DOMAIN
/usr/bin/gitlab-ctl hup gitlab-pages
Make the hook executable (chmod +x); Certbot's renewal timer will then run it automatically.
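Using the path created above:
chmod +x /etc/letsencrypt/renewal-hooks/post/gitlab-pages-reload.sh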
4 Firewall Rules
IP | 80 | 443 |
---|---|---|
PRIMARY_IP | ✅ | ✅ |
PAGES_IP | ✅ | ✅ |
Block all other inbound ports.
5 Troubleshooting Cheat‑Sheet
Symptom | Common Cause | Fix |
---|---|---|
address already in use :443 in Pages log | Omnibus NGINX bound to 0.0.0.0 | Set nginx['listen_addresses'] to the primary IP only |
open /etc/…crt: no such file or directory | Wrong cert path / chroot mismatch | Disable chroot or copy the cert into …/gitlab-pages/etc/ |
gitlab-pages: runsv not running | gitlab-runsvdir service dead | systemctl start gitlab-runsvdir && systemctl enable gitlab-runsvdir |
All services runsv not running | Container rebooted without runit | Same as above |
5½ Keeping the supervisor (gitlab‑runsvdir) alive
GitLab’s runit supervision tree is launched by the systemd unit gitlab-runsvdir.service
. If that unit is inactive every Omnibus component will show runsv not running
and no ports will be open.
Why it dies
- The VM/container reboots and systemd starts services before networking is up; Pages fails to bind to its IP and runit exits.
- Manual systemctl stop or a runaway OOM-killer event.
Make it start reliably
# one‑off recovery
sudo systemctl start gitlab-runsvdir
# persistent across reboots
sudo systemctl enable gitlab-runsvdir
Add a network dependency so the secondary IPs exist before runit starts:
# /etc/systemd/system/gitlab-runsvdir.service (snippet)
[Unit]
After=network-online.target
Wants=network-online.target
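If you'd rather not edit the packaged unit in place, the same lines can go into a drop-in override; a quick sketch:
sudo systemctl edit gitlab-runsvdir.service   # opens an override file; paste the [Unit] lines above
sudo systemctl daemon-reload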
Optional watchdog timer
A tiny timer restarts the supervisor if it ever stops unexpectedly:
# /etc/systemd/system/gitlab-runsvdir-watchdog.timer
[Unit]
Description=Restart gitlab-runsvdir if it exits
[Timer]
OnBootSec=5min
OnUnitInactiveSec=1min
Unit=gitlab-runsvdir.service
[Install]
WantedBy=timers.target
Enable with systemctl enable --now gitlab-runsvdir-watchdog.timer
.
When gitlab-runsvdir
is healthy you will always see both listeners after boot:
ss -ltnp | grep :443
# 172.31.14.12:443 nginx
# 172.31.14.11:443 gitlab-pages
6 Security Notes
-
Direct binding (no Pages NGINX) means the Go Pages server terminates TLS itself.
-
Disabling chroot removes one sandbox layer. On single‑tenant VMs or Docker containers this is usually acceptable; on multi‑tenant hosts you might prefer to keep the chroot and copy the PEMs into the jail instead.
-
gitlab-secrets.json
relocated If you move or mount‑inject the secrets file, Omnibus can no longer create its fallback self‑signed certs. In this guide we disable all Omnibus ACME features (letsencrypt['enable'] = false
,gitlab_pages['acme']['enabled'] = false
) and provide Let’s Encrypt PEMs manually, so the missing secrets file is harmless—just ensure certbot renewal is working. -
Automatic renewal Remember to reload services after certbot renews. A one‑line renewal hook can do this:
#!/bin/sh
/usr/bin/gitlab-ctl hup gitlab-pages
/usr/bin/gitlab-ctl hup nginx
7 Full Example gitlab.rb
Full Example gitlab.rb
external_url 'https://git.PRIMARY_DOMAIN'
letsencrypt['enable'] = false
nginx['listen_addresses'] = ['PRIMARY_IP']
nginx['ssl_certificate'] = '/etc/letsencrypt/live/git.PRIMARY_DOMAIN/fullchain.pem'
nginx['ssl_certificate_key'] = '/etc/letsencrypt/live/git.PRIMARY_DOMAIN/privkey.pem'
pages_external_url 'https://prod.PRIMARY_DOMAIN'
gitlab_pages['enable'] = true
gitlab_pages['external_http'] = ['PAGES_IP:80']
gitlab_pages['external_https'] = ['PAGES_IP:443']
# direct certbot PEMs
gitlab_pages['cert'] = '/etc/letsencrypt/live/prod.PRIMARY_DOMAIN/fullchain.pem'
gitlab_pages['cert_key'] = '/etc/letsencrypt/live/prod.PRIMARY_DOMAIN/privkey.pem'
gitlab_pages['inplace_chroot'] = false
gitlab_pages['acme']['enabled'] = false
pages_nginx['enable'] = false
package['modify_kernel_parameters'] = false
Replace:
PRIMARY_DOMAIN
→ your apex domain (e.g. jack.water.house)PRIMARY_IP
→ IP mapped to git.PRIMARY_DOMAINPAGES_IP
→ IP mapped to prod.PRIMARY_DOMAIN
8 Command Quick‑Reference
# Apply config
gitlab-ctl reconfigure
# Start / stop Pages
gitlab-ctl restart gitlab-pages
gitlab-ctl tail gitlab-pages
# Restart entire stack after system boot
systemctl start gitlab-runsvdir
Document prepared · May 2025
9 When gitlab-secrets.json
(aka secrets.rb) is relocated
Omnibus keeps its encryption keys (CI JWTs, LDAP secrets, backup encryption keys, etc.) in /etc/gitlab/gitlab-secrets.json
—older docs sometimes call this secrets.rb. If the file is moved outside /etc/gitlab, GitLab can no longer read the self‑signed certificate or private keys it once generated. The result is TLS mis‑configuration and, if letsencrypt['enable']
is turned on, ACME registration failures.
Fix
- Keep
letsencrypt['enable'] = false
(use certbot externally). - Do not delete the secrets file—back it up and keep it under
/etc/gitlab
.
10 Chroot ON vs OFF—trade‑offs at a glance
Mode | Advantages | Drawbacks |
---|---|---|
Chroot ON (gitlab_pages['inplace_chroot'] = true) | • Additional isolation (Pages can only see its own tree). • Blocks path-traversal exploits inside user pages. | • Certs must be copied into /var/opt/gitlab/gitlab-pages/etc/. • Debugging more complex. • Breaks on minimal containers lacking pivot_root. |
Chroot OFF (gitlab_pages['inplace_chroot'] = false) | • Pages reads PEMs directly from /etc/letsencrypt/live/..., no duplication. • Simple certbot renewal hook (gitlab-ctl hup gitlab-pages). • Works on any container runtime. | • One less defence layer; rely on VM/container isolation and Unix perms. |
Rule of thumb: In single‑tenant VMs or containers, disabling the chroot is pragmatic. On a shared host or if you let untrusted users push Pages content, keep the chroot and script the PEM copy in a certbot post‑renew hook.