How the blog was built part 3 - update


So in part 2 we backed up the Ghost files. Now we are going to generate the static files and then upload them to S3.

Hosting the website

I'm using an AWS S3 bucket to host the website, with Cloudflare routing in front, and this article was instrumental in setting it up, so I highly recommend a read. I won't repeat it all here, but I will summarise what you do:

Assuming you have a domain of mywebsite.biz and a subdomain of www, you will create 2 separate buckets:

  • A subdomain bucket named www.mywebsite.biz, configured for static website hosting and with public access to the bucket enabled.
  • A domain bucket named mywebsite.biz. Requests to this domain bucket will need redirecting to the subdomain bucket.
  • Bucket policies will need configuring to allow only the Cloudflare IP ranges, ensuring the buckets respond solely to requests coming through the Cloudflare proxy (see the sketch after this list).
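
As a rough sketch of that bucket setup with the AWS CLI, assuming the eu-west-1 region, the example bucket names above and credentials already configured locally. The Cloudflare ranges shown are only a sample; the current list is published at https://www.cloudflare.com/ips/.

#!/bin/bash
# Sketch only: create the two buckets, enable website hosting and the redirect,
# and restrict reads to Cloudflare's published IP ranges.
set -e

REGION=eu-west-1
SUBDOMAIN_BUCKET=www.mywebsite.biz
DOMAIN_BUCKET=mywebsite.biz

# Subdomain bucket serves the static site
aws s3api create-bucket --bucket "$SUBDOMAIN_BUCKET" --region "$REGION" \
  --create-bucket-configuration LocationConstraint="$REGION"
aws s3 website "s3://$SUBDOMAIN_BUCKET/" --index-document index.html

# Domain bucket only redirects requests to the subdomain bucket
aws s3api create-bucket --bucket "$DOMAIN_BUCKET" --region "$REGION" \
  --create-bucket-configuration LocationConstraint="$REGION"
aws s3api put-bucket-website --bucket "$DOMAIN_BUCKET" \
  --website-configuration '{"RedirectAllRequestsTo":{"HostName":"www.mywebsite.biz"}}'

# Allow the bucket policy to grant public reads, then limit them to Cloudflare IPs
aws s3api delete-public-access-block --bucket "$SUBDOMAIN_BUCKET"
cat > /tmp/cloudflare-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [{
    "Sid": "AllowCloudflareOnly",
    "Effect": "Allow",
    "Principal": "*",
    "Action": "s3:GetObject",
    "Resource": "arn:aws:s3:::www.mywebsite.biz/*",
    "Condition": {"IpAddress": {"aws:SourceIp": ["173.245.48.0/20", "103.21.244.0/22"]}}
  }]
}
EOF
aws s3api put-bucket-policy --bucket "$SUBDOMAIN_BUCKET" --policy file:///tmp/cloudflare-policy.json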

Then you need to set up your site on Cloudflare. It provides caching and some DDoS protection, but not the end-to-end encryption you might want. As this is just a static website and I don't make requests to anything else, I'm not overly worried about that, even though I'd like to be 100% secure as a matter of principle.

  • You will create 2 CNAME records, one for your root domain and one for the www subdomain. Each record has the domain as its name and the relevant S3 bucket endpoint as its value.
  • Change your domain nameservers to Cloudflare.
  • To use HTTPS for the traffic between visitors and Cloudflare, this article describes what to do: on your Cloudflare dashboard, set the encryption mode to Flexible on the SSL/TLS Overview tab and enable Always Use HTTPS on the Edge Certificates tab. These settings can also be scripted, as sketched after this list.
  • Use this online tester to verify your setup.
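
If you prefer to script those Cloudflare changes, here is a rough sketch against the Cloudflare v4 API. The API token, zone ID and S3 website endpoints are placeholders for your own values, and the endpoint format assumes eu-west-1 buckets.

#!/bin/bash
# Sketch only: create the proxied CNAME records and set the SSL options via the Cloudflare v4 API.
set -e

CF_API_TOKEN="your-api-token"   # placeholder
ZONE_ID="your-zone-id"          # placeholder

# CNAME for the www subdomain, pointing at the subdomain bucket website endpoint
curl -s -X POST "https://api.cloudflare.com/client/v4/zones/$ZONE_ID/dns_records" \
  -H "Authorization: Bearer $CF_API_TOKEN" -H "Content-Type: application/json" \
  --data '{"type":"CNAME","name":"www","content":"www.mywebsite.biz.s3-website-eu-west-1.amazonaws.com","proxied":true}'

# CNAME for the root domain, pointing at the domain (redirect) bucket website endpoint
curl -s -X POST "https://api.cloudflare.com/client/v4/zones/$ZONE_ID/dns_records" \
  -H "Authorization: Bearer $CF_API_TOKEN" -H "Content-Type: application/json" \
  --data '{"type":"CNAME","name":"mywebsite.biz","content":"mywebsite.biz.s3-website-eu-west-1.amazonaws.com","proxied":true}'

# Encryption mode Flexible and Always Use HTTPS on
curl -s -X PATCH "https://api.cloudflare.com/client/v4/zones/$ZONE_ID/settings/ssl" \
  -H "Authorization: Bearer $CF_API_TOKEN" -H "Content-Type: application/json" \
  --data '{"value":"flexible"}'
curl -s -X PATCH "https://api.cloudflare.com/client/v4/zones/$ZONE_ID/settings/always_use_https" \
  -H "Authorization: Bearer $CF_API_TOKEN" -H "Content-Type: application/json" \
  --data '{"value":"on"}'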

Now visitors will be able to visit the website using the subdomain or your root domain once you have uploaded the static files.
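
As an extra check once the files are up, you can confirm requests are actually being served through Cloudflare by looking at the response headers; the server header should report cloudflare.

# Sketch only: the server header reads "cloudflare" when the proxy is in front
curl -sI https://www.mywebsite.biz | grep -i '^server:'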

Static files

The only differences from the original GitHub file generate_static_content.sh are that I run the gssg tool with sudo and I no longer upload the static files to the cloud from this script. Note that the gssg tool does have prerequisites that need installing first; a rough install sketch follows below.
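
For reference, this is roughly what installing those prerequisites looks like, assuming gssg is the ghost-static-site-generator npm package (which I believe relies on wget) and a Debian-based container image; package names may differ on your setup.

# Sketch only: install the assumed prerequisites for the gssg tool
sudo apt-get update && sudo apt-get install -y wget nodejs npm
sudo npm install -g ghost-static-site-generator   # provides the gssg command
gssg --help                                       # quick check it is on the PATH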

It works by extracting the files and swapping https://binarydreams.biz for your configured domain.

#!/bin/bash

set -e 
cd /static
rm -rf *
mkdir content

echo "Running gssg..." 
sudo gssg --domain "https://binarydreams.biz" --dest . --url "https://$GHOST_DOMAIN" #--quiet

echo "Download all sizes of images"
cd /static/content/images
sizes=( "w600" "w1000" "w1600" "w2000" )

function getImage() {
  file=$1
  # Download each configured size of the image, skipping any file already on disk
  for size in "${sizes[@]}"; do
    targetFile="/static/content/images/size/$size/$file"
    path=$(dirname "$targetFile")
    mkdir -p "$path"
    if [ ! -f "$targetFile" ]; then
      echo "Getting:  $targetFile"
      curl -f --silent -o "$targetFile" "https://binarydreams.biz/content/images/size/$size/$file"
    else
      echo "Skipping: $targetFile"
    fi
  done
}

echo "Downloading images that have already been sized"
cd /static/content/images
for file in $(find size -type f -o -name "*.png"); do
  source=$(echo $file | sed 's,^[^/]*/,,' | sed 's,^[^/]*/,,')
  getImage $source
done

echo "Downloading images that have not already been sized"
for file in $(find . -path ./size -prune -type f -o -name "*.png"); do
  source=$(echo $file | sed 's,^[^/]*/,,')
  getImage $source
done

echo "Static content generated!"
generate_static_content.sh

To create the static files, run docker-compose exec -T app /usr/local/bin/generate_static_content.sh.

Let's upload

So rather than have the file upload in the static generation script, I wanted it in its own brand new file - this helped with testing it on its own too.

  • This script takes an argument for the AWS region, defaulting to eu-west-1.
  • It takes the generated files in the /static folder, uploads them to the S3 domain bucket, deletes any objects no longer present locally and makes the uploads publicly readable, all using the configured ghost-blogger AWS profile.
#!/bin/bash

# AWS region comes from the first argument, defaulting to eu-west-1
AWS_REGION=${1:-'eu-west-1'}

echo .
echo "Uploading to AWS..."
# Sync the static files to the bucket, removing objects that no longer exist
# locally and making the uploaded files publicly readable
python3 -m awscliv2 s3 sync /static s3://$GHOST_DOMAIN --acl public-read --delete --region $AWS_REGION --profile ghost-blogger
upload_static_content.sh

To run this script, execute docker-compose exec -T app /usr/local/bin/upload_static_content.sh.
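
For a quick sanity check after the upload, you can list the bucket contents through the same awscliv2 wrapper the script uses; this assumes the ghost-blogger profile and GHOST_DOMAIN variable are available inside the app container.

# Sketch only: list the uploaded objects to confirm the sync worked
docker-compose exec -T app sh -c 'python3 -m awscliv2 s3 ls "s3://$GHOST_DOMAIN/" --profile ghost-blogger'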

All together now

Now let's bring it all together with the script executions in a batch file.

The ghost-blogger AWS profile needs checking so that the upload command can run, before the actual script starts; a quick check is sketched below.
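
A minimal way to do that check, assuming the profile's credentials are available inside the app container, is to call sts get-caller-identity through the same awscliv2 wrapper the upload script uses.

# Sketch only: confirm the ghost-blogger profile resolves to valid credentials
docker-compose exec -T app python3 -m awscliv2 sts get-caller-identity --profile ghost-blogger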

@echo off

REM This file will run on Windows only of course
ECHO Generate static files
docker-compose exec -T app /usr/local/bin/generate_static_content.sh

ECHO Sync code to S3
docker-compose exec -T app /usr/local/bin/upload_static_content.sh
update_website.bat

When I run that file it creates the static files and uploads them to S3, and the result is what you see before you now!

Costs

Looking at my costs: my domain (£10 per year), the S3 bucket (currently a few pence per month) and Cloudflare (free as an individual user). Awesome! The free Cloudflare plan also:

  • Easily gives me SSL certificates
  • Provides CDN caching to reduce requests to the bucket

Hopefully this has been of some help - any feedback is always welcome!