Migrating unknown unmanaged infrastructure into a managed Ansible-based IaC

I had a VPS on a popular developer-centric cloud platform. It started out life as Ubuntu 14 several years ago, and predates my learning about IaC, Terraform, and Ansible. After starting work at a hosting company, I thought I may as well migrate my stuff over. And so began the headache of how to do it without forgetting anything.

I use it for my personal projects and there's a lot of old stuff on there. Old source code, active source code, old databases, active databases, random backups.

The problem is I can't exactly remember what is on there and what's still active, so I thought I'd document the process.

sudo netstat -tlnp
tcp        0      0*               LISTEN      24744/mysqld    
tcp        0      0    *               LISTEN      1320/sshd       
tcp        0      0    *               LISTEN      4970/nginx: worker 
tcp        0      0    *               LISTEN      1299/vsftpd     
tcp        0      0    *               LISTEN      14839/master    
tcp        0      0   *               LISTEN      4970/nginx: worker 
tcp        0      0*               LISTEN      1354/monit      
tcp        0      0 *               LISTEN      1290/zabbix_agentd
tcp        0      0*               LISTEN      7567/node       
tcp6       0      0 :::22                   :::*                    LISTEN      1320/sshd       
tcp6       0      0 :::80                   :::*                    LISTEN      4970/nginx: worker 
tcp6       0      0 :::443                  :::*                    LISTEN      4970/nginx: worker 
tcp6       0      0 :::10050                :::*                    LISTEN      1290/zabbix_agentd

Starting with a netstat to see what TCP ports are listening. We've got a local Node instance for Ghost, plus MySQL, SSH, vsftpd, Postfix, Monit, Zabbix, and Nginx on both 80 and 443, as you would expect. We can do the same for UDP (netstat -ulnp) and confirm there's nothing out of the ordinary.

Let's go ahead and dump all the MySQL databases. We can always drop stuff we don't need locally or on the new production server later.

mysqldump -u root -p --all-databases > all_databases.sql
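An alternative I considered sketching: dump each database to its own file, so dead ones can be discarded individually rather than pruned out of one monolithic dump. The `skip_system` helper is my own name for a filter that drops MySQL's internal schemas; the demo feeds it a sample list since we're not on the server here.

```shell
# Filter MySQL's internal schemas out of a SHOW DATABASES listing.
skip_system() { grep -Ev '^(information_schema|performance_schema|mysql|sys)$'; }

# On the server it would look something like this (prompts for the
# password on each command):
#   mysql -u root -p -N -e 'SHOW DATABASES' | skip_system | \
#     while read -r db; do mysqldump -u root -p "$db" > "$db.sql"; done

# Demo of the filter on a sample list:
kept=$(printf 'information_schema\nghost_prod\nmysql\nprojects\n' | skip_system)
echo "$kept"
```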

Then pull the dump down to your local machine:

scp projects:~/all_databases.sql .

As we identified Nginx rather than Apache earlier, we can investigate the config.

ls -lha /etc/nginx/sites-enabled/

I had this set up with symlinks, so anything present in there is an active site. This can be verified by checking the server names against DNS records.
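The DNS check can be scripted: pull the server_name entries out of each vhost, then resolve each one and compare against the server's IP. A sketch, demoed on a made-up sample vhost since the awk is the interesting part:

```shell
# Extract server_name entries from an Nginx vhost, one per line.
names=$(awk '$1 == "server_name" { for (i = 2; i <= NF; i++) { gsub(/;/, "", $i); print $i } }' <<'EOF'
server {
    listen 80;
    server_name example.org www.example.org;
}
EOF
)
echo "$names"
# then, per name: dig +short "$name" and compare with the server's IP
```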

Next we'll use ripgrep (or plain old grep) to find where the projects live on the system. Note the -L flag, which makes ripgrep follow the symlinks in sites-enabled.

rg -L root /etc/nginx/sites-enabled/*
6:  root /var/www/bencromwell/system/nginx-root;

The output from this shows us which files to back up, extract, or put into Git if they're not already there. The plan is to deploy with Ansible from source in GitHub, but there are likely config files that'll need templating with Jinja2.
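Building on that rg output, the document roots across all the vhosts can be collected into a single, de-duplicated backup list. Another small sketch of mine, demoed on inline sample lines:

```shell
# Collect every "root" directive into a unique list of directories.
roots=$(awk '$1 == "root" { gsub(/;/, "", $2); print $2 }' <<'EOF' | sort -u
    root /var/www/bencromwell/system/nginx-root;
    root /var/www/projects;
    root /var/www/projects;
EOF
)
echo "$roots"
```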

It'll be a manual process: pull each active site down, upload it to GitHub if it's missing from there, extract the config, and start putting together Ansible roles to set up Nginx, basic access, the firewall, PHP, Ghost, and Let's Encrypt.
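"Pulling a site down" amounts to archiving its document root so it can be inspected locally. A hedged sketch, demoed against a scratch directory rather than the real /var/www path:

```shell
# Archive a document root for local inspection. The scratch directory
# stands in for a real site root like the one from the rg output.
docroot=$(mktemp -d)
mkdir -p "$docroot/system/nginx-root"
echo '<!doctype html>' > "$docroot/system/nginx-root/index.html"

tar -C "$docroot" -czf "$docroot.tar.gz" .
# then from local: scp projects:/path/to/site.tar.gz .
contents=$(tar -tzf "$docroot.tar.gz")
echo "$contents"
```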

It's been a few weeks now and perfection is the enemy of done.

Ansible roles saved me a ton of time, but I ended up migrating some stuff by hand.

The SSL certificates were recreated manually on the new server with certbot certonly -d example.org.

I installed multiple PHP versions manually from the ondrej/php PPA.

Some of the config was done manually but pulled into an Ansible playbook.

- role: nginxinc.nginx
  tags: web
- role: nginxinc.nginx_config
  tags: web
- role: geerlingguy.certbot
  tags: [web, ssl]
- role: geerlingguy.php
  tags: php
- role: geerlingguy.composer
  tags: php
- role: geerlingguy.mysql
  tags: db
- role: geerlingguy.nodejs
  tags: blog

- role: n.bencromwell.com
  tags: code

- role: projects-nginx
  tags: http-config
Some of the Ansible roles
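My assumption here (not stated above) is that the third-party roles come from Ansible Galaxy, in which case a requirements file keeps them reinstallable in one command. A sketch writing one that matches the list:

```shell
# Write a Galaxy requirements file covering the third-party roles above.
reqs=$(mktemp)
cat > "$reqs" <<'EOF'
roles:
  - name: nginxinc.nginx
  - name: nginxinc.nginx_config
  - name: geerlingguy.certbot
  - name: geerlingguy.php
  - name: geerlingguy.composer
  - name: geerlingguy.mysql
  - name: geerlingguy.nodejs
EOF
# then: ansible-galaxy install -r "$reqs"
```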