Setting Up A Cloud Server For Multiple Sites

Once upon a time, the correct answer for “how do I run a hobby site” was “put it on Heroku.” They offered free hobby servers that were completely configured for you, so that deployment was as simple as git push. Salesforce bought them, reliability went downhill, and they got rid of the free tier. Now there isn’t anything in its league to replace it, so let’s talk about building your own.

Cloud virtual machines (VMs) are getting cheaper and more powerful, and even the smallest VM can run several low-traffic sites. This is a guide to spinning up a *nix VM on one of the cloud providers and setting it up as a host for multiple independent sites by using Nginx as a reverse proxy. What this means is that you can own titlereader.com, fencingdatabase.com, and sdubinsky.com and have all three of them live on the same VM.

Before you start, you should have a site running on your own computer. This article explains how to set up a VM to host that site and others like it. It's aimed at developers who need to know just enough DevOps to deploy personal sites, and it assumes basic familiarity with the command line.

Setting Up the Server

When you buy your VM, make sure it has a static external IP address and that your SSH key is installed. The original user created by your cloud provider will be your admin user, capable of using sudo (AKA running commands as root). All commands listed below that begin with sudo should be run as the admin user.

In addition, each site should have its own dedicated user, on the principle of least access. SSH into the VM as your admin user and run sudo adduser --disabled-login $sitename. $sitename will become the name of the user you run the site as. Switch to that user with sudo su - $sitename, create a file in its home directory named ~/.ssh/authorized_keys, and add your SSH public key so you can deploy as that user.
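
Put together, that looks something like the sketch below (the key on the echo line is a placeholder; use your own public key):

# as the admin user: create a dedicated user for the site (password login disabled)
sudo adduser --disabled-login $sitename

# switch to the site user and authorize your SSH key so you can deploy as them
sudo su - $sitename
mkdir -p ~/.ssh && chmod 700 ~/.ssh
echo "ssh-ed25519 AAAA... you@your-machine" >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys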

Each site should also have a dedicated deploy directory, where all the site's code will live. Originally, sites were static pages that lived in /var/www and were served from there directly. Now that sites are programs, you can make the directory either under /var/www/$sitename/ or at the root of the filesystem as /$sitename/. Either way, as the admin user, change ownership of the directory to your site user by running: sudo chown -R $sitename:$sitename /path/to/deploy/directory.
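
For example, if you go with the /var/www layout (either path from above works the same way):

# as the admin user: create the deploy directory and hand it over to the site user
sudo mkdir -p /var/www/$sitename
sudo chown -R $sitename:$sitename /var/www/$sitename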

Systemd

If you haven’t yet, this is the point at which you need to deploy your site. Install dependencies, upload your code, and make sure it runs just as it does on your machine. The rest of this section will be about setting it up to run automatically.

Systemd is a service manager. That means it will start your web site and, if the machine reboots, bring it back up as well. If your machine doesn't have Systemd, skip this section and set up an alternative process manager instead.

Each process Systemd manages is defined in a service file. As an example, here is the service file for this site:

[Unit]
Description=sdubinsky.com
After=network-online.target
[Service]
Type=simple
Environment="PATH=/home/deus-ex/.asdf/shims:/home/deus-ex/.asdf/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin"
User=sdubinsky
WorkingDirectory=/scottdubinskysite/current
ExecStart=/home/sdubinsky/.asdf/shims/bundle exec ruby app.rb -e production -p 4568
Restart=always

[Install]
WantedBy=multi-user.target

The important lines here are:

  • After=network-online.target: This tells Systemd when to start this service - in this case, only once the network is actually online. (An earlier version of this post used After=network.target, which doesn't wait for connectivity.)
  • Type=simple: This is a simple program, which means it runs in the foreground. Other options include forking, for programs that daemonize themselves, and oneshot.
  • Environment: Environment variables your program will run in. I use asdf-vm to install Ruby, which is why its shims and bin directories are on the PATH.
  • User: The user your site will run as. This needs to be the same user that owns the deploy directory.
  • WorkingDirectory: The directory your site lives in. This is the deploy directory from above.
  • ExecStart: The command Systemd will run to start your site. Note the fully-qualified path (it starts with a /). Also note that the site isn't running on port 80, the default HTTP port; we'll handle that next, in the Nginx section. When running multiple sites, each needs its own port.
  • Restart: When to restart your site if it goes down.

Save this file as /etc/systemd/system/$mysite.service. Tell systemd that the service exists with sudo systemctl daemon-reload, enable it with sudo systemctl enable $mysite, and start it with sudo systemctl start $mysite.
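
All together, as the admin user:

# register the new unit with systemd, enable it at boot, and start it now
sudo systemctl daemon-reload
sudo systemctl enable $mysite
sudo systemctl start $mysite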

At this point your site should be up and running, but not accessible from the global Internet. You can check that it’s operating normally by running curl localhost:$port or sudo systemctl status $mysite.

Nginx

Nginx is the tool (technically it's a web server) that we're going to use to direct requests to the correct site. It may already be installed on your VM; if it isn't, install it with your package manager (on Debian/Ubuntu, sudo apt install nginx). We're going to configure it as a reverse proxy: it sits on the VM listening on port 80 (the normal HTTP port) and connects each incoming request to the correct site, based on the site name and the port that specific site is listening on. This is the nginx conf for this site:

server {
    listen 80;
    listen [::]:80;
    root /scottdubinskysite/current/public;
    server_name sdubinsky.com;

    location / {
        proxy_pass              http://localhost:4568;
        proxy_set_header Host   $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}

The four important lines here are:

  1. listen: This tells nginx what ports to listen on for this server. All servers should listen on port 80.
  2. server_name: This is the important one. This is how nginx differentiates between the sites you have running. Requests addressed to this server_name go to this site; requests addressed to a different server_name go to that site (there's a sketch of a second site's config after this list).
  3. location /: This opens a location block, a set of config lines that only apply to this location. All requests to this server addressed to this location are handled by this block. Since this is the only location block (and because the location is /), it handles all requests.
  4. proxy_pass: This is the line that tells nginx to proxy requests and pass them to the address listed. That’s the address your site is listening on.
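
To see how this scales, here's a sketch of what a second site's config might look like - the domain is from the earlier example, and the port and paths are made up for illustration. Only server_name, root, and proxy_pass change:

server {
    listen 80;
    listen [::]:80;
    root /fencingdatabase/current/public;
    server_name fencingdatabase.com;

    location / {
        proxy_pass              http://localhost:4569;
        proxy_set_header Host   $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}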

As the admin user, save this file as /etc/nginx/sites-available/$mysite.conf, and symlink it into /etc/nginx/sites-enabled/ by running sudo ln -s /etc/nginx/sites-available/$mysite.conf /etc/nginx/sites-enabled. While you're there, remove the default config from the sites-enabled directory (don't skip this step). Finally, still as your admin user, restart Nginx with sudo systemctl restart nginx. Your site should now be accessible from the global Internet, although because Nginx relies on the site name to direct requests, you can't see it until you set up DNS.
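
The whole sequence, as the admin user (the default config may be named default or default.conf depending on your setup, and sudo nginx -t checks the config for syntax errors before you restart):

# enable the new site, drop the default one, test the config, and restart
sudo ln -s /etc/nginx/sites-available/$mysite.conf /etc/nginx/sites-enabled
sudo rm /etc/nginx/sites-enabled/default
sudo nginx -t
sudo systemctl restart nginx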

DNS

The simplest DNS setup is a single A record associating your domain name with your VM’s static IP address. This should be straightforward to set up wherever you handle DNS, whether it’s through your registrar directly or via a proxy like Cloudflare. DNS, being a distributed system, will take some time to propagate. Frantically refreshing until you see your change is a time-honored tradition.
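
Once the record is in place, you can check propagation with dig (usually in the dnsutils package on Debian/Ubuntu) instead of refreshing the browser:

# ask a public resolver directly whether the A record has propagated yet
dig +short sdubinsky.com @1.1.1.1
# once it has, this prints your VM's static IP address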

Conclusion

There are a few different parts to it, but setting up a basic VM for even one site isn't very complex, and doing it for multiple sites is just a matter of repeating the same steps. There are a lot of configuration options and tweaks that are outside the scope of this article. If something doesn't work exactly how you want it to, keep googling and asking until you get it right. I hope this demystifies the process for you. Happy hacking!
