<![CDATA[binarydreams]]>https://binarydreams.biz/https://binarydreams.biz/favicon.pngbinarydreamshttps://binarydreams.biz/Ghost 4.12Fri, 31 Mar 2023 22:52:26 GMT60<![CDATA[How the blog was built part 5 - import a backup]]>https://binarydreams.biz/how-the-blog-was-built-part-5-import-a-backup/63a0e83203359c00167a69deSun, 08 Jan 2023 01:29:26 GMT

Now I can back up and update the blog, but a few times I have had to manually re-import the blog data. As you saw in part 1, the backup scripts export the blog content as a JSON file and the content folder (which includes all the images used) as a ZIP file. The blog is manually imported with these steps:

  • Import the blog content JSON file via the Settings Labs Import button.
  • Uncompress the image content from the ZIP file into the blog folder.
  • Re-enter the profile details, text and pictures, because the profile is reset whenever the blog is reset.

I've done this enough times, and I need to update the Ghost blog version and fix any Snyk-reported security issues anyway, so I will automate all of this and show the scripts that will import a backup automatically. These scripts are a work in progress because I haven't needed to execute them yet. The code is my educated guess so far and I will update it once I have confirmed it works.

Importing the blog content

The import is split into two executing scripts, one to download the backup and the other to do the import.

The download_website.sh script will:

  • Get the list of backups from AWS S3 created after a set datetime, the point from which backups were packaged correctly.
  • Convert the JSON array to a bash shell array.
  • Display the list of backup datetimes and wait for user input.
  • Once selected, create the download folder.
  • Get each file of the requested backup from AWS S3 and save it to the download folder.
  • From Docker, extract the blog content images from the archive into the blog folder. NOTE: This doesn't currently work because I haven't restarted the blog with the new import folder reference in the Dockerfile.

This saves me logging into the AWS console, listing the bucket contents and downloading the archive or using the AWS CLI myself.

#!/bin/bash

GHOST_DOMAIN=binarydreams.biz
AWS_REGION=eu-west-1

# Only list backups after this datetime, as they are packaged correctly from then on
DATETIME="2023-03-03-00-00-00"

echo "\nGet list of backups from S3 ..."

# Filter after datetime AND by tar.gz file because there is only ever one in a backup rather than by .json
# to get a definitive list of backups
declare -a DATETIMES_BASHED
declare -a DATETIMES
DATETIMES=$(aws s3api list-objects-v2 --bucket $GHOST_DOMAIN-backup --region $AWS_REGION --output json --profile ghost-blogger --query 'Contents[?LastModified > `'"$DATETIME"'` && ends_with(Key, `.tar.gz`)].Key | sort(@)')

echo "Backups found.\n"

# I didn't want to install more extensions etc but I just wanted
# a working solution.
# installed xcode-select --install
# brew install jq
# https://jqplay.org/
# jr -r output raw string
# @sh converts input string to space separated strings
# and also removes [] and ,
# tr to remove single quotes from string output
DATETIMES_BASHED=($(jq -r '@sh' <<< $DATETIMES | tr -d \'\"))

# Show datetimes after $DATETIME and add the extracted string
# to a new array
declare -a EXTRACTED_DATETIMES

for (( i=0; i<${#DATETIMES_BASHED[@]}; i++ ));
do
    # Extract first 19 characters to get datetime
    backup=${DATETIMES_BASHED[$i]}
    #echo $backup
    backup=${backup:0:19}
    EXTRACTED_DATETIMES+=($backup)
    menuitem=$(($i + 1))
    echo "[${menuitem}] ${backup}"
done
echo "[x] Any other key to exit\n"

read -p "Choose backup number> " RESULT

# Check if not a number
if [ -z "${RESULT##*[!0-9]*}" ]
then
    exit 1
fi

# Reduce array index to get correct menu item
RESULT=$(($RESULT - 1))
SELECTED_DATE=${EXTRACTED_DATETIMES[$RESULT]}

echo "\nDownloading backup $SELECTED_DATE\n"

IMPORT_LOCATION="data/import"
DOWNLOAD_FOLDER="$IMPORT_LOCATION/$SELECTED_DATE"

# Create backup download folder if required
if [ ! -d "$DOWNLOAD_FOLDER" ] 
then
    mkdir -p "$DOWNLOAD_FOLDER"

    if [ $? -ne 0 ]; then
        exit 1
    fi

    echo "Created required $DOWNLOAD_FOLDER folder"
else
    # Empty any previous download of this backup
    # (the glob must sit outside the quotes or it will not expand)
    rm -rf "$DOWNLOAD_FOLDER"/*

    if [ $? -ne 0 ]; then
        exit 1
    fi
fi

function get_file {
    FILENAME=$1
    FILE_KEY="$SELECTED_DATE/$FILENAME"
    OUTPUT_FILE="$DOWNLOAD_FOLDER/$FILENAME"
    OUTPUT=$(aws s3api get-object --bucket $GHOST_DOMAIN-backup --region $AWS_REGION --profile ghost-blogger --key $FILE_KEY $OUTPUT_FILE)
    echo "$FILENAME downloaded."
}

get_file "ghost-content-$SELECTED_DATE.tar.gz"
get_file "content.ghost.$SELECTED_DATE.json"
get_file "profile.ghost.$SELECTED_DATE.json"

echo "Download complete.\n"

echo "Extract content folder from archive"
docker compose exec -T app /usr/local/bin/extract_content.sh $SELECTED_DATE
download_website.sh

Extract the blog data

The first part of the extraction is to uncompress the blog content into the import location and then move it to the Ghost install location.

#!/bin/bash

NOW=$1

GHOST_INSTALL=/var/www/ghost/
GHOST_ARCHIVE=ghost-content-$NOW.tar.gz
IMPORT_LOCATION=import/$NOW

echo "Unarchiving Ghost content"
cd /$IMPORT_LOCATION

# x - extract files, v - show verbose progress,
# f - archive file name, z - read gzip compressed archive
tar -xvzf $GHOST_ARCHIVE -C /$IMPORT_LOCATION

if [ $? -ne 0 ]; then
    exit 1
fi

#echo "Moving archive to $IMPORT_LOCATION"
#cp -Rv $GHOST_INSTALL$GHOST_ARCHIVE /$IMPORT_LOCATION
#rm -f $GHOST_INSTALL$GHOST_ARCHIVE
extract_content.sh

Import the blog data

The second script, import_website.sh, will:

  • Get the list of downloaded imports found in the import folder.
  • Wait for user to select the datetime of the import.
  • Execute the Cypress tests with the selected datetime.
#!/bin/bash

echo "\nGet list of imports from import folder ..."

declare -a FOLDERS
FOLDERS=($(ls -d data/import/*))

# Exit early if there are no downloaded imports
if [ ${#FOLDERS[@]} -eq 0 ]
then
    echo "No imports found."
    exit 1
fi

# Menu is 1-based to match the index adjustment further down
for (( i=0; i<${#FOLDERS[@]}; i++ ));
do
    folder=${FOLDERS[$i]}
    menuitem=$(($i + 1))
    echo "[${menuitem}] $folder";
done

echo "[x] Any other key to exit\n"

read -p "Choose import number> " RESULT

# Check if not a number
if [ -z "${RESULT##*[!0-9]*}" ]
then
    exit 1
fi

# Reduce array index to get correct menu item
RESULT=$(($RESULT - 1))
# Strip the folder path so only the backup datetime remains
SELECTED_DATE=$(basename "${FOLDERS[$RESULT]}")

echo $SELECTED_DATE

# TODO:
# User choose whether to reset blog content by deleting existing blog content
# Check if first page in test is logging in OR blog setup.

# FOR NOW:
# Will need to manually setup blog, delete default blog posts and content files
# The UI tests should do the rest

#echo "Run the UI test to import the blog from JSON files and return to this process"
#npx as-a binarydreams-blog cypress run --spec "cypress/e2e/ghost_import.cy.js" --env timestamp=$SELECTED_DATE

The Cypress test will:

  • log into Ghost
  • check the blog content JSON file exists
  • run the test to import the blog with the import datetime passed in as an argument

Then the profile is imported with these steps:

  • log into Ghost
  • Read the profile JSON file from the expected location
  • Browse to the profile page
  • Upload the cover picture
  • Upload the profile picture
  • Enter the profile details
/// <reference types="cypress" />

// Command to use to pass secret to cypress
// as-a local cypress open/run

describe('Import', () => {

  beforeEach(() => {
    // Log into ghost
    const username = Cypress.env('username')
    const password = Cypress.env('password')
    
    cy.visit('/#/signin')

    // it is ok for the username to be visible in the Command Log
    expect(username, 'username was set').to.be.a('string').and.not.be.empty
    // but the password value should not be shown
    if (typeof password !== 'string' || !password) {
      throw new Error('Missing password value, set using CYPRESS_password=...')
    }

    cy.get('#ember7').type(username).should('have.value', username)
    cy.get('#ember9').type(password, { log: false }).should(el$ => {
      if (el$.val() !== password) {
        throw new Error('Different value of typed password')
      }
    })

    // Click Log in button
    cy.get('#ember11 > span').click()
  })

  it('Content from JSON', () => {

    let timestamp = Cypress.env("timestamp")
    let inputFile = `/import/${timestamp}/content.ghost.${timestamp}.json`
    cy.readFile(inputFile)

    // Click Settings icon
    cy.get('.gh-nav-bottom-tabicon', { timeout: 10000 }).should('be.visible').click()

    // The Labs link is generated so go via the link
    cy.visit('/#/settings/labs')

    // Click browse and select the file
    cy.get('.gh-input > span').selectFile(inputFile)

    // Click Import button
    cy.get(':nth-child(1) > .gh-expandable-header > #startupload > span').click()
  })

  it('Profile from JSON', () => {

    let timestamp = Cypress.env("timestamp")
    let inputFile = `/import/${timestamp}/profile.ghost.${timestamp}.json`
    // cy.readFile() yields the parsed JSON asynchronously,
    // so work with the profile inside .then()
    cy.readFile(inputFile).then((profile) => {

    // Click Settings icon
    cy.get('.gh-nav-bottom-tabicon', { timeout: 10000 }).should('be.visible').click()

    // The profile link is easier to go via the link
    cy.visit('/#/staff/jp')

    // Cover picture
    cy.get('.gh-btn.gh-btn-default.user-cover-edit', { timeout: 10000 })
      .should('be.visible').click()

    cy.get('.gh-btn.gh-btn-white').click().selectFile(profile.coverpicture)

    // Save the picture
    cy.get('.gh-btn.gh-btn-black.right.gh-btn-icon.ember-view').click()

    // Profile picture
    cy.get('.edit-user-image', { timeout: 10000 })
      .should('be.visible').click()

    cy.get('.gh-btn.gh-btn-white').click().selectFile(profile.profilepicture)

    // Save the picture
    cy.get('.gh-btn.gh-btn-black.right.gh-btn-icon.ember-view').click()

    // Import text from profile file
    cy.get('#user-name')
      .type(profile.username)

    cy.get('#user-slug')
      .type(profile.userslug)

    cy.get('#user-email')
      .type(profile.email)

    cy.get('#user-location')
      .type(profile.location)

    cy.get('#user-website')
      .type(profile.website)

    cy.get('#user-facebook')
      .type(profile.facebookprofile)

    cy.get('#user-twitter')
      .type(profile.twitterprofile)

    cy.get('#user-bio')
      .type(profile.bio)

    })
  })
})
ghost_import.cy.js

Once this has been fully tested then I will update this article.

]]>
<![CDATA[How the blog was built part 4 - migration]]>https://binarydreams.biz/how-the-blog-was-built-part-4-migration/636c3fd962c9a1001f75c264Thu, 17 Nov 2022 22:54:54 GMT

I finally did it. I finally went and bought a MacBook, and the M1 Air at that. The first three parts in this series document how I built the static Ghost blog in a Windows environment and now I have macOS to contend with. In this part I will show how the code has changed and the issues I encountered rebuilding the blog on a MacBook.

Let's look at the tech installed first and then the changes to the blog code required.

Git

A new platform needs Git, so I first had to install Homebrew (https://brew.sh/) and then install Git for macOS, https://git-scm.com/download/mac.

Docker

I needed Docker for Mac for my local Ghost blog. I thought I would build the Docker tutorial image, docker build -t docker101tutorial, to test it.

I hit this Jinja package version issue, AttributeError: module 'jinja2' has no attribute 'contextfilter', and to fix it you pin the Jinja version to 3.0.3 in the requirements.txt file.

jinja2==3.0.3
requirements.txt

Running the Docker voting sample

There was also an issue when running the Docker example voting app. Something like Ports are not available: listen tcp 0.0.0.0:5000: bind: address already in use.

To solve this I followed this advice to find out what was using port 5000, lsof -i:5000.

Then to be honest I don't recall what I disabled. I wish I had noted that - Gah! If you do have this issue please let me know your solution so I can record it here.
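For what it's worth, this is roughly how I would track it down now, with the caveat that I can't confirm this is what I disabled; on recent macOS versions the process holding port 5000 is often ControlCenter, which is the AirPlay Receiver.

# Show the process listening on port 5000
lsof -i :5000

# If the COMMAND column says ControlCenter, it is likely the macOS
# AirPlay Receiver - turn it off in the Sharing preferences.
# Otherwise stop the offending process by its PID:
# kill <PID>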

GitHub token

I use VSCode for my blog code and needed to connect it to my GitHub account. To do this you must set up a GitHub token; just follow the steps there.

Now for the changes required when running the code.

The Dockerfile

Issue Service 'app' failed to build when docker composing

I ran docker compose up and received this error:

=> ERROR [ 2/24] RUN yum -y -q install which curl wget gettext patch gcc 
0.8s
------                                                                   

> [ 2/24] RUN yum -y -q install which curl wget gettext patch gcc-c++ 
make git-core bzip2 unzip gcc python3-devel python3-setuptools redhat-
rpm-config sudo  &&     yum -y -q clean all:
#5 0.575 Error: Failed to download metadata for repo 'appstream': Cannot 
prepare internal mirrorlist: No URLs in mirrorlist
------
executor failed running [/bin/sh -c yum -y -q install which curl wget 
gettext patch gcc-c++ make git-core bzip2 unzip gcc python3-devel 
python3-setuptools redhat-rpm-config sudo  &&     yum -y -q clean all]: 
exit code: 1
ERROR: Service 'app' failed to build : Build failed

This was because CentOS 8 was deprecated, so I added the following snippet to the Dockerfile. The changes repoint the package installs to a new location until I can move to a different Linux distro.

RUN cd /etc/yum.repos.d/
RUN sed -i 's/mirrorlist/#mirrorlist/g' /etc/yum.repos.d/CentOS-*
RUN sed -i 's|#baseurl=http://mirror.centos.org|baseurl=http://vault.centos.org|g' /etc/yum.repos.d/CentOS-*

Python install issue

The Dockerfile installed Python 3.6, which is now out of date. This meant I had to upgrade to Python 3.8 in the package install.

RUN yum -y -q install which curl ... python38-devel ...

docker-compose.yml

Issue platform required on MacOS

I don't recall the exact issue, but as I'm running on a MacBook M1 I had to specify the platform; the composition should still work on a Windows machine.

services:
  app:
    platform: linux/x86_64 
    image: ...
docker-compose.yml

Issue config could not be found

The backup script needs to sync files to S3 and I was getting the message The config [ghost-blogger] could not be found. This meant I had to add an entry to the volumes list for where the AWS folder lives on macOS.

Warning: composing down and then up will reset your local Ghost installation and you will need to restore your last backup.
	volumes:
      ...
      - ~/.aws:/root/.aws:ro
      ...
docker-compose.yml

backup_website.sh

As I was now using MacOS I converted the Windows-only backup_website.bat file to a shell script that could be used on any platform.

The notable changes were getting the datetime, the Cypress UI test and copying the JSON files. I also changed the name of the profile in the .as-a.ini file (which holds the Cypress password) to binarydreams-blog.

Not related to using macOS but still, I separated the content and profile UI tests into their own files because otherwise the Export content download would not happen.

Then those exported data files needed renaming with the same datetime before copying to the backup folder.

#!/bin/bash

GHOST_DOMAIN=binarydreams.biz
AWS_REGION=eu-west-1

# Get the current date time as GMT - No daylight savings time.
DATETIME=`date +"%Y-%m-%d-%H-%M-%S"`

echo "${DATETIME}"

echo "Back up content folder first"
docker compose exec -T app /usr/local/bin/backup.sh ${DATETIME}

echo "Run the UI test to export the content as a JSON file and return to 
this process"
npx as-a binarydreams-blog cypress run --spec "cypress/e2e/ghost_export_content.cy.js,cypress/e2e/ghost_export_profile.cy.js" --env timestamp=${DATETIME}

echo "Rename the exported JSON files with a timestamp"
find $GHOST_OUTPUT -iname 'binarydreams.ghost*' -exec mv {} ${GHOST_OUTPUT}content.ghost.$DATETIME.json \;
find $GHOST_OUTPUT -iname 'profile.ghost*' -exec mv {} ${GHOST_OUTPUT}profile.ghost.$DATETIME.json \;

echo "Copy the JSON file to the backup folder"
cp ./cypress/downloads/*.json ./data/backup/$DATETIME/

echo "Sync back up files to S3"
aws s3 sync ./data/backup/ s3://$GHOST_DOMAIN-backup --region $AWS_REGION --profile ghost-blogger
backup_website.sh

Then to run it on a terminal you enter sh ./backup_website.sh.

While testing the Cypress script I also found a major version of Cypress had been released, and this changed quite a few things.

Issue testing the backup script Cannot find file with env setting

This message appeared because I didn't have the .as-a.ini file saved with the full-stop prefix, even though this had worked on Windows. It needs to be at one of:

/Users/jon-paul.flood/.as-a.ini or /Users/jon-paul.flood/.as-a/.as-a.ini.

Issue running Cypress script spawn cypress ENOENT

I ran the backup script and received this error:

Error: spawn cypress ENOENT
    at ChildProcess._handle.onexit (node:internal/child_process:283:19)
    at onErrorNT (node:internal/child_process:476:16)
    at process.processTicksAndRejections (node:internal/process
/task_queues:82:21) {
  errno: -2,
  code: 'ENOENT',
  syscall: 'spawn cypress',
  path: 'cypress',
  spawnargs: [ 'run' ]
}
cypress exit code -2
Error: spawn cypress ENOENT
    at ChildProcess._handle.onexit (node:internal/child_process:283:19)
    at onErrorNT (node:internal/child_process:476:16)
    at process.processTicksAndRejections (node:internal/process
/task_queues:82:21) {
  errno: -2,
  code: 'ENOENT',
  syscall: 'spawn cypress',
  path: 'cypress',
  spawnargs: [ 'run' ]
}

The cause was that the profile in the .as-a.ini was set to . (a full stop) and NOT the actual profile name of binarydreams-blog.
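So the profile section in the .as-a.ini needed to be the profile name itself, the same format as shown in part 2:

[binarydreams-blog]
CYPRESS_password=yourpasswordgoeshere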

update_website.sh

This also replaced a Windows-only batch file, update_website.bat but the only differences are the bash reference and echo replacing rem.

#!/bin/bash

echo "Generate static files"
docker-compose exec -T app /usr/local/bin/generate_static_content.sh

echo "Sync code to S3"
docker-compose exec -T app /usr/local/bin/upload_static_content.sh
update_website.sh

Also run in a terminal as sh ./update_website.sh.

Issue testing the update script permission denied

Running the update script, I received this error:

OCI runtime exec failed: exec failed: unable to start container process: exec: "/usr/local/bin/upload_static_content.sh": permission denied: unknown

I had to run chmod -R +x * in the bin folder to make all the shell script files executable, as found on this website.

Setup a Ghost backup

During all these changes I had to set up a past Ghost backup, particularly after composing Docker down and up again, but this time with the volume location of the AWS profile, ghost-blogger. And thank goodness I did have my backups!

I would like to automate more of this but these are the manual steps currently.

  1. Run docker compose up.
  2. Download the last backup file from S3 and uncompress the ZIP.
  3. Replace the content folder with the backup.
  4. Browse to the site, start the blog as new.
  5. Setup the basic user (again!) with all the text and the profile picture.
  6. Go to settings and import the backed up JSON file.
  7. Suspend the Ghost user account as not used.
  8. The default Ghost articles with the Ghost author should be unpublished or deleted.

When I think about it, the automation could be: steps 2-3 as a shell script (sketched below) and steps 5-8 done by Cypress.
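As a rough sketch, steps 2-3 could look something like this as a shell script (untested, and assuming the same backup bucket and archive naming used by the backup scripts):

#!/bin/bash

GHOST_DOMAIN=binarydreams.biz
AWS_REGION=eu-west-1
DATETIME=$1   # a backup datetime, e.g. 2021-09-09-23-02-58

# Step 2 - download the chosen backup from S3 and uncompress the archive
aws s3 sync s3://$GHOST_DOMAIN-backup/$DATETIME ./data/import/$DATETIME --region $AWS_REGION --profile ghost-blogger
tar -xvzf ./data/import/$DATETIME/ghost-content-$DATETIME.tar.gz -C ./data/import/$DATETIME

# Step 3 - replace the local content folder with the backup
rm -rf ./data/content
cp -R ./data/import/$DATETIME/content ./data/content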

Issue importing the backed up JSON file

I received these warnings while importing the JSON backup file.

Import successful with warnings

User: Entry was not imported and ignored. Detected duplicated entry.
{
   "id":"62572eeb179d0a0018299745",
   "name":"Jon-Paul",
   "slug":"jp",
   "password":"*************************************************",
   "email":"*******@binarydreams.biz",
   "profile_image":"https://binarydreams.biz/content/images/2021/09/ProfilePicture.png",
   "cover_image":"https://binarydreams.biz/content/images/2021/09/background.jpg",
   "bio":"AWS cloud developer",
   "website":null,
   "location":null,
   "facebook":null,
   "twitter":null,
   "accessibility":"{\"navigation\":{\"expanded\":{\"posts\":false}},\"launchComplete\":true}",
   "status":"active",
   "locale":null,
   "visibility":"public",
   "meta_title":null,
   "meta_description":null,
   "tour":null,
   "last_seen":"2021-10-30 17:20:35",
   "created_at":"2021-09-06 21:44:13",
   "updated_at":"2021-10-30 17:20:35",
   "roles":[
      "Administrator"
   ]
}

User: Entry was not imported and ignored. Detected duplicated entry.
{
   "id":"62572eeb179d0a0018299746",
   "name":"Ghost",
   "slug":"ghost",
   "password":"**************************************************",
   "email":"[email protected]",
   "profile_image":"https://static.ghost.org/v4.0.0/images/ghost-user.png",
   "cover_image":null,
   "bio":"You can delete this user to remove all the welcome posts",
   "website":"https://ghost.org",
   "location":"The Internet",
   "facebook":"ghost",
   "twitter":"@ghost",
   "accessibility":null,
   "status":"inactive",
   "locale":null,
   "visibility":"public",
   "meta_title":null,
   "meta_description":null,
   "tour":null,
   "last_seen":null,
   "created_at":"2021-09-06 21:44:14",
   "updated_at":"2021-09-06 21:53:50",
   "roles":[
      "Contributor"
   ]
}

Settings: Theme not imported, please upload in Settings - Design
{
   "id":"61368bb4b3a69e001a3618e6",
   "group":"theme",
   "key":"active_theme",
   "value":"casper",
   "type":"string",
   "flags":"RO",
   "created_at":"2021-09-06 21:44:21",
   "updated_at":"2021-09-06 21:44:21"
}

For the first User warning, I probably should have created a completely different user to avoid it.
The second warning was for the Ghost default user so I’m not bothered about that.
The Theme warning though? Hmm.

The import results were that:

  • The Ghost original pages
    - Contribute page was created but my import didn’t replace it. I manually deleted it.
    - Privacy page was created but my import didn’t replace it. I manually deleted it.
    - Contact page was created but my import didn’t replace it. I manually deleted it.
    - The About this site page should have been unpublished or deleted but is published on the new site.
  • My cover photo was missing.
  • My staff details (cover, name, bio, slug) were not set as I had configured them.

This all means I need a backup import script. On the to-do list!

Transferring my domain

This wasn't required for my migration to macOS but I might as well note the transition here. My original domain host's (EUKHost) costs were about £12 and I knew Cloudflare was cheaper, basically at cost. I already use Cloudflare for the domain routing so it made sense to have all my domain management in one place.

This YouTube video was a help to do this even though I had a different domain host.

]]>
<![CDATA[How the blog was built part 3 - update]]>https://binarydreams.biz/how-the-blog-was-built-part-3-update/63605741234078001ffb328fFri, 29 Oct 2021 17:29:57 GMT

So in part 2 we backed the Ghost files up. Now we are going to create the static files and then upload them to S3.

Hosting the website

I'm using an AWS S3 bucket to host the website with CloudFlare routing and this article was instrumental in setting it up so I highly recommend a read. I won't repeat it all here but I will summarise what you do:

Assuming you have a domain of mywebsite.biz and subdomain of www you will create 2 separate buckets:

  • Subdomain bucket named www.mywebsite.biz, configured for static website hosting and with public access to the bucket enabled.
  • Domain bucket named mywebsite.biz. Requests will need redirecting from this domain bucket to the subdomain bucket.
  • Bucket policies will need configuring to allow only the Cloudflare IP addresses, ensuring only requests coming through the Cloudflare proxy are responded to (see the sketch just after this list).
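For illustration, the subdomain bucket policy ends up looking something like this (a sketch only - the two IP ranges below are just examples and the full, current list should be taken from https://www.cloudflare.com/ips/):

# Only allow GETs that come through the Cloudflare proxy
cat > cloudflare-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowCloudflareIPsOnly",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::www.mywebsite.biz/*",
      "Condition": {
        "IpAddress": {
          "aws:SourceIp": ["173.245.48.0/20", "103.21.244.0/22"]
        }
      }
    }
  ]
}
EOF

aws s3api put-bucket-policy --bucket www.mywebsite.biz --policy file://cloudflare-policy.json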

Then you need to set up your site on Cloudflare. It provides caching and some DDoS protection but not the end-to-end encryption you might want. As this is just a static website and I don't make requests to anything else, I'm not overly worried about that, even though I'd like to be 100% as a principle.

  • You will create 2 CNAME records for your root domain and subdomain. Each record will have the name as the domain and the value as the relevant S3 bucket endpoint.
  • Change your domain nameservers to Cloudflare.
  • To use HTTPS for the traffic between visitors and Cloudflare, this article describes how. On your Cloudflare dashboard, the encryption mode on the SSL/TLS Overview tab should be Flexible, and Always Use HTTPS should be enabled on the Edge Certificates tab.
  • Use this online tester to verify your setup.

Now visitors will be able to visit the website using the subdomain or your root domain once you have uploaded the static files.

Static files

The only differences from the original GitHub file generate_static_content.sh are that I added sudo for the gssg tool and removed the upload of the static files to the cloud. Note the gssg tool does have prerequisites that need installing first.

It works by  extracting the files and swapping the https://binarydreams.biz for your configured domain.

#!/bin/bash

set -e 
cd /static
rm -rf *
mkdir content

echo "Running gssg..." 
sudo gssg --domain "https://binarydreams.biz" --dest . --url "https://$GHOST_DOMAIN" #--quiet

echo "Download all sizes of images"
cd /static/content/images
sizes=( "w600" "w1000" "w1600" "w2000" )

function getImage() {
  file=$1
  for size in "${sizes[@]}"; do
    targetFile="/static/content/images/size/$size/$file"
    path=$(dirname $targetFile)
    mkdir -p $path
    if [ ! -f $targetFile ]; then
      echo "Getting:  $targetFile"
      curl -f --silent -o $targetFile https://binarydreams.biz/content/images/size/$size/$file
    else 
      echo "Skipping: $targetFile"
    fi
  done
}

echo "Downloading images that have already been sized"
cd /static/content/images
for file in $(find size -type f -o -name "*.png"); do
  source=$(echo $file | sed 's,^[^/]*/,,' | sed 's,^[^/]*/,,')
  getImage $source
done

echo "Downloading images that have not already been sized"
for file in $(find . -path ./size -prune -type f -o -name "*.png"); do
  source=$(echo $file | sed 's,^[^/]*/,,')
  getImage $source
done

echo "Static content generated!"
generate_static_content.sh

To run the script execute docker-compose exec -T app /usr/local/bin/generate_static_content.sh and create the static files.

Let's upload

So rather than have the file upload in the static generation script, I wanted it in its own brand new file - this helped with testing it on its own too.

  • This script takes an argument for the AWS region defaulted to eu-west-1.
  • It takes the generated files in the /static folder, deletes the existing files, uploads to the S3 domain bucket and makes them publicly readable, all using the configured ghost-blogger AWS profile.
#!/bin/bash

AWS_REGION=${1:-'eu-west-1'}

echo .
echo "Uploading to AWS..."
python3 -m awscliv2 s3 sync /static s3://$GHOST_DOMAIN --acl public-read --delete --region $AWS_REGION --profile ghost-blogger
upload_static_content.sh

To run this script you execute this docker-compose exec -T app /usr/local/bin/upload_static_content.sh.

All together now

Now lets bring it all together with the script executions in a batch file.

The AWS profile needs to be checked so this command can run, before the actual script starts.
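A quick way to check the profile works beforehand (a sketch, using the ghost-blogger profile from the backup article):

# Should print the account and user ARN if the profile is usable
aws sts get-caller-identity --profile ghost-blogger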

@echo off

REM This file will run on Windows only of course
ECHO Generate static files
docker-compose exec -T app /usr/local/bin/generate_static_content.sh

ECHO Sync code to S3
docker-compose exec -T app /usr/local/bin/upload_static_content.sh
update_website.bat

When I run that file it will now create the static files and upload to S3 which you see before you now!

Costs

Looking at my costs - my domain (£10 per year), the S3 bucket which is currently a few pence per month and Cloudflare for free as an individual user. Awesome!

  • Easily gives me SSL certification
  • CDN Caching to reduce requests to the bucket

Hopefully this has been of some help, any feedback always welcome!

]]>
<![CDATA[How the blog was built part 2 - backup]]>https://binarydreams.biz/how-the-blog-was-built-part-2-backup/63605741234078001ffb328eWed, 27 Oct 2021 21:05:02 GMT

At this point I have customised the original GitHub scripts and made them more configurable than before, but having so many different commands to run meant I really wanted an easier way to back up the existing Ghost content and files.

I had run into some issues, and when I re-ran docker-compose up it reset the Ghost blog and lost all the existing files, BUT luckily I had already uploaded my data and the blog was live. I just copied back the posts I had created and manually re-created my profile and the other additional steps.

Latest Ghost version

One of those issues was the Ghost blog version, so I hardcoded that in the docker-compose.yml file as an argument. I want to update when I want to update.

OCI runtime exec failed: exec failed: container_linux.go:380: starting container process caused: no such file or directory: unknown

Running these commands on Windows I sometimes faced the above issue. All I had to do was set the line endings from CRLF to LF in VSCode and save the file.

Creating the backup script

My backup script would need to do the following things:

  • Backup the Ghost blog content, profile data and images.
  • Collect the backup files together.
  • Upload the backup folder to S3.

Backup the content folder

Firstly, Ghost does not provide a CLI command to back up your files, so we need to copy the data/content folder itself. This folder contains the database, settings and images used in your blog, not the actual blog post content - I will get that in a separate step.

This script will copy the content folder to a backup location - the additional volume referenced in the docker-compose.yml file.

It takes the datetime now, creates the backup folder with the datetime value, e.g. backup/2021-09-09-23-02-58, archives the Ghost content folder and moves the archive, e.g. ghost-content-2021-09-09-23-02-58.tar.gz, to the backup location.

#!/bin/bash

echo "Backing up Ghost"

NOW=$1
GHOST_INSTALL=/var/www/ghost/
GHOST_ARCHIVE=ghost-content-$NOW.tar.gz
BACKUP_LOCATION=backup

if [ ! -d "/$BACKUP_LOCATION/$NOW" ] 
then
    cd /
    mkdir $BACKUP_LOCATION/$NOW
    echo "Created required /$BACKUP_LOCATION/$NOW folder"
fi

cd $GHOST_INSTALL

# c - creates new file, v - show verbose progress, 
# f - file name type, z - create compressed gzip archive
echo "Archiving Ghost content"
tar cvzf $GHOST_ARCHIVE content

echo "Moving archive to $BACKUP_LOCATION/$NOW"
cp -Rv $GHOST_INSTALL$GHOST_ARCHIVE /$BACKUP_LOCATION/$NOW
rm -f $GHOST_INSTALL$GHOST_ARCHIVE

#To recover your blog, just replace the content folder and import your JSON file in the Labs.
backup.sh

To run the script you pass the datetime value docker-compose exec -T app /usr/local/bin/backup.sh %datetime%.

Export the JSON content

The other issue with Ghost is that, while I can export the blog content, you can only do it through the UI. This is where I use Cypress ;)

In this test I have a reusable login sequence before each test, though there's only one test at this time anyway. It gets the Cypress config values from the cypress.json file, where I have set the base URL and username, and left the password blank.

A very important property, trashAssetsBeforeRuns, is set to true because I want the test to start fresh each time with an empty cypress/downloads folder; when I copy the JSON file later I only want the one just created, as I won't know the datetime value in the filename.

{
    "baseUrl": "https://binarydreams.biz/ghost",
    "env": {
        "username": "[email protected]",
        "password": ""
    },
    "trashAssetsBeforeRuns": true
}
cypress.json

This is the test file.

  • Get the environment variables, username and password.
  • If login fails then we do not show the password - that would be bad!
  • If login succeeds, then begin the test.
  • Click each UI link/button until the JSON is exported. This saves to the cypress/downloads folder.

Note the timeout value is set to 10 seconds because the first run can sometimes take a while and just needs extra time. If it fails I just run it again. I would like to make this more resilient so I don't have to do that.

A very important note: the references to element names like #ember63 will only work for the Ghost installation you had at the time. If you ever reinstall Ghost OR delete the container and re-run the image then these ember IDs will likely have changed and the test has to be updated. Be warned! Again, I'd like this to be more resilient to change.


// Command to use to pass secret to cypress
// as-a local cypress open/run

describe('Backup', () => {

  beforeEach(() => {
    // Log into ghost
    const username = Cypress.env('username')
    const password = Cypress.env('password')
    
    cy.visit('/#/signin')

    // it is ok for the username to be visible in the Command Log
    expect(username, 'username was set').to.be.a('string').and.not.be.empty
    // but the password value should not be shown
    if (typeof password !== 'string' || !password) {
      throw new Error('Missing password value, set using CYPRESS_password=...')
    }

    cy.get('#ember7').type(username).should('have.value', username)
    cy.get('#ember9').type(password, { log: false }).should(el$ => {
      if (el$.val() !== password) {
        throw new Error('Different value of typed password')
      }
    })

    // Click Log in button
    cy.get('#ember11 > span').click()

  })

  it('Exports content as JSON', () => {

    // Click Settings icon
    cy.get('.gh-nav-bottom-tabicon', { timeout: 10000 }).should('be.visible').click()

    // Click Labs
    cy.get('#ember63 > .pink > svg').click()

    // Click Export button
    cy.get(':nth-child(2) > .gh-expandable-header > .gh-btn > span').click()

  })

  // I have no members so no need to export that data
  // Plus no need to download the theme as I'm using the default - Casper.
})
content_export_spec.js

To run the Cypress test we store the Ghost Admin password locally in a .as-a.ini config file. Install the as-a NPM package and then have this file in your project root alongside the executing command or script file. No need to ever commit it to your repository.

[local]
CYPRESS_password=yourpasswordgoeshere

Then we run it headless with CALL npx as-a local cypress run, which expects the .as-a.ini file to be there with a local profile, return to the main process and then copy the exported JSON file to the backup location with xcopy /Y .\cypress\downloads\*.json .\data\backup\%datetime%\.

Upload to S3

Now we can finally upload the backup to S3. This needs a new AWS profile, e.g. ghost-blogger, that has enough S3 access to upload files to the bucket.

This is where I use AWS CLI to get the files from the backup folder and upload to the domain backup bucket in my region - aws s3 sync ./data/backup/ s3://%GHOST_DOMAIN%-backup --region %AWS_REGION% --profile ghost-blogger.
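For reference, setting that profile up amounts to storing the user's keys locally and giving the IAM user enough S3 access. A sketch of how that could look - the user name, policy name and exact actions here are my illustration, not a definitive setup:

# Store the access keys locally as the ghost-blogger profile
aws configure --profile ghost-blogger

# Give the backing IAM user access to the backup bucket only
cat > ghost-blogger-s3.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket", "s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
      "Resource": [
        "arn:aws:s3:::binarydreams.biz-backup",
        "arn:aws:s3:::binarydreams.biz-backup/*"
      ]
    }
  ]
}
EOF

aws iam put-user-policy --user-name ghost-blogger --policy-name s3-backup-access --policy-document file://ghost-blogger-s3.json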

The final script

This is the resulting batch script to back up everything and then upload to S3. I intend to rewrite this in Python. You will note I have hardcoded the domain and region here, but I want to configure these elsewhere eventually to make it reusable.

@echo off
REM This file will run on Windows only of course

set GHOST_DOMAIN=binarydreams.biz
set AWS_REGION=eu-west-1

REM Get the current date time
for /f %%a in ('powershell -Command "Get-Date -format yyyy-MM-dd-HH-mm-ss"') do set datetime=%%a

ECHO Back up content folder first
docker-compose exec -T app /usr/local/bin/backup.sh %datetime%

ECHO Run the UI test to export the content as a JSON file and return to this process
CALL npx as-a local cypress run

ECHO Copy the JSON file to the backup folder
xcopy /Y .\cypress\downloads\*.json .\data\backup\%datetime%\

ECHO Sync back up files to S3
aws s3 sync ./data/backup/ s3://%GHOST_DOMAIN%-backup --region %AWS_REGION% --profile ghost-blogger
backup_website.bat

Not quite the end

When I do the import I should run a script that will gather the files, unzip them and re-import the data. This script should be generated and located with that backup so very few commands are needed.

Do bear in mind I have only tested the JSON content import and not everything else yet, but that time will come when I upgrade my laptop.

Next on the agenda is to upload the generated static files in part 3.

]]>
<![CDATA[How the blog was built part 1 - the static files]]>https://binarydreams.biz/how-the-blog-was-built-part-1-the-static-files/63605741234078001ffb328aTue, 26 Oct 2021 16:44:42 GMT

My previous blog was built with BlogEngine.NET and blog post content saved in files on the Azure web server because I didn't want the hassle and the costs of a database.

I decided to leave Azure for AWS because Microsoft changed the terms on the developer subscription with free credits that I had had for so many years; I would have needed a new subscription and then to pay after 12 months. I didn't want to do that, so I thought I might as well go all in on AWS as most of my cloud experience is there anyway.

I had found a bug upgrading BlogEngine, I didn't want to modify any source code, and the people maintaining it weren't going to fix the issue. I wanted better support and also to be able to keep costs dead low or zero.

This meant my requirements for a new blog would be:

  • easy to use and well supported blog app
  • can create a static website
  • easy to backup the content
  • easy to update the static files and upload them
  • hosting costs low or zero

After plenty of googling I decided on Ghost and there were existing tools to use to create static files for hosting. Easy right? Well as it turned out there were quite a few things that didn't go right and at least one rabbit hole entered and escaped to get to this point.

I know I could have incurred some costs with a monthly subscription with Ghost managing it for me but where's the fun in that! I wanted to learn something new and I like having something to write about and share.

Creating the static files

I found this blog article on creating and hosting the static files from your Ghost blog in Google Cloud, but I'm using AWS. This section assumes you have read the article and/or seen the existing code.

The author first created a Docker version of the Ghost blog and used an existing tool to create the static files. This is what I have changed:

  • Copied the files from GitHub.
  • Removed the gcloud.repo file as I'm using AWS.
  • Updated the docker-compose.yml with more configured values.
    - Arguments added for the author e-mail and Disqus site name and version numbers updated. I don't want to use the latest versions because I want predictability in my setup.
    - Environment Ghost domain updated.
    - Additional volumes for the static and backup files to be saved to.
version: '3.4'

services:
  app:
    image: stono/ghost:latest
    sysctls:
      - net.ipv6.conf.all.disable_ipv6=1
    build: 
      context: .
      args:
        AUTHOR_EMAIL: "[email protected]"
        AWS_VERSION: 2.2.18
        GHOST_VERSION: 4.12.1
        GHOST_CLI_VERSION: 1.17.3
        SITEMAP_GENERATOR_VERSION: 1.0.1
        DISQUS_SITE: 'binarydreams'
    restart: always
    environment:
      NODE_ENV: production
      GHOST_DOMAIN: binarydreams.biz
    volumes:
      - ./data/content/data:/var/www/ghost/current/content/data
      - ./data/content/images:/var/www/ghost/current/content/images
      - ./data/content/settings:/var/www/ghost/current/content/settings
      - ./bin:/usr/local/bin
      - ./data/static:/static
      - ./data/backup:/backup
#      - ~/.config:/root/.config
    ports:
      - 80:80
docker-compose.yml
  • Next, the Dockerfile had many changes.
    - Replaced the MAINTAINER with the LABEL that uses the AUTHOR_EMAIL argument.
    - Added sudo when getting the dependencies.
    - Replaced the Google Cloud installation with AWS client.
    - A specific version of Ghost will be installed as per the argument.
    - A custom script will replace the variables with the argument values before the Disqus patch is applied to the blog.
FROM centos:8

# Build arguments must be declared before they can be used
ARG AUTHOR_EMAIL
LABEL org.opencontainers.image.authors=$AUTHOR_EMAIL

# This file runs when the docker image is built NOT when 'up' is used

# Get dependencies
RUN yum -y -q install which curl wget gettext patch gcc-c++ make git-core bzip2 unzip gcc python3-devel python3-setuptools redhat-rpm-config sudo  && \
    yum -y -q clean all

# Install crcmod
RUN easy_install-3 -U pip && \
    pip install -U crcmod

# Get nodejs repos
RUN curl --silent --location https://rpm.nodesource.com/setup_14.x | bash -

RUN yum -y -q install nodejs-14.* && \
    yum -y -q clean all

# Setup www-data user
RUN groupadd www-data && \
    useradd -r -g www-data www-data

RUN mkdir -p /var/www && \
    mkdir -p /home/www-data && \
    chown -R www-data:www-data /var/www && \
    chown -R www-data:www-data /home/www-data

EXPOSE 2368

# Configuration
ENV GHOST_HOME /var/www/ghost

# Install packages
RUN yum -y -q update && \
    yum -y -q clean all

# Install AWS utilities
RUN export PATH=/root/.local/bin:$PATH
RUN python3 -m pip install --user awscliv2
RUN /root/.local/bin/awscliv2 -i

CMD ["/usr/local/bin/start_ghost.sh"]

# Install Ghost
WORKDIR $GHOST_HOME
ARG GHOST_CLI_VERSION
RUN npm install -g ghost-cli@$GHOST_CLI_VERSION
RUN chown -R www-data:www-data $GHOST_HOME
ARG GHOST_VERSION
RUN su -c 'ghost install $GHOST_VERSION --local --no-setup --db sqlite3' www-data
RUN su -c 'npm install sqlite3 --save' www-data

# Add static content generator
ARG SITEMAP_GENERATOR_VERSION
RUN npm install -g ghost-static-site-generator@$SITEMAP_GENERATOR_VERSION
RUN mkdir /static 

# Patch ghost
RUN mkdir -p /usr/local/etc/ghost/patches
COPY patches/ /usr/local/etc/ghost/patches/
COPY bin/* /usr/local/bin/

ARG DISQUS_SITE
RUN /usr/local/bin/replace_disqus_patch_text.sh $GHOST_VERSION $DISQUS_SITE
RUN /usr/local/bin/apply_patches.sh

# Copy ghost config
COPY data/config.json /var/www/ghost/current/config.production.json
Dockerfile
  • As you can see from the above Dockerfile, it passes the Ghost version and Disqus site to the following script, which then updates the disqus.patch file with those values. This is my improvement over the original GitHub repo so I do not need to update the version numbers every time I update Ghost.
#!/bin/bash

# get args
GHOST_VERSION=$1
DISQUS_SITE=$2

echo ghost version $GHOST_VERSION
echo site domain $DISQUS_SITE

echo Replace text in disqus.patch
PATCH="/usr/local/etc/ghost/patches/disqus.patch"
sed -i "s/GHOST_VERSION/$GHOST_VERSION/g" $PATCH
sed -i "s/DISQUS_SITE/$DISQUS_SITE/g" $PATCH
replace_disqus_patch_text.sh
  • Then the following patch is applied when the Dockerfile runs, using the configured values GHOST_VERSION and DISQUS_SITE. This is essential for the patch to be applied to the installed Ghost blog. Note here I am using the default Casper theme.
--- versions/GHOST_VERSION/content/themes/casper/post.hbs
+++ versions/GHOST_VERSION/content/themes/casper/post.hbs
@@ -69,11 +69,19 @@
         {{content}}
     </section>
 
-    {{!--
-    <section class="article-comments gh-canvas">
-        If you want to embed comments, this is a good place to paste your code!
-    </section>
-    --}}
+    <section class="article-comments gh-canvas">
+    <div id="disqus_thread"></div>
+    <script>
+
+    (function() { // DON'T EDIT BELOW THIS LINE
+    var d = document, s = d.createElement('script');
+    s.src = 'https://DISQUS_SITE.disqus.com/embed.js';
+    s.setAttribute('data-timestamp', +new Date());
+    (d.head || d.body).appendChild(s);
+    })();
+    </script>
+    <noscript>Please enable JavaScript to view the <a href="https://disqus.com/?ref_noscript">comments powered by Disqus.</a></noscript>
+    </section>
 
 </article>
 
@@ -114,4 +120,4 @@
     </div>
 </aside>
 
-{{/post}}
\ No newline at end of file
+{{/post}}
disqus.patch

So I have all that code customised for my blog and I need to create the static files.

  • If this is the first time setup then from the root project run docker-compose up.
  • Go to https://binarydreams.biz and edit your brand new Ghost blog or import your backup content.
  • I would have generated the static files using docker-compose exec app /usr/local/bin/generate_static_content.sh but that comes later in my setup when I upload to S3.

In what you have seen so far there is still more configuration I could do and I will come back to this article if I make other changes. In the next few articles I will show what I did to easily back up and update the static website with two separate scripts.

]]>
<![CDATA[Upgrading my BlogEngine.NET website from 3.1.1.0 to 3.3.80]]>https://binarydreams.biz/upgrading-my-blogengine-website/63605741234078001ffb328dMon, 04 Oct 2021 15:46:50 GMT

Last year I was having trouble upgrading my original BlogEngine.NET blog website, prior to moving to the Ghost platform. This article is a copy of the original post and is about what I did to fix the issue. Even though I no longer use BlogEngine this information may be useful to somebody else.

To upgrade your BlogEngine.NET website, you must log in to the Admin page "http://www.website.com/admin/" and you will get a message on the Home page if you can upgrade or nothing at all.

Upgrading my BlogEngine.NET website from 3.1.1.0 to 3.3.80

You can see what I did to get upgraded below and more simply here in my reported issue with the BlogEngine.NET team. Unfortunately they closed it with the suggestion to use a different product.

Now, whenever I go on the admin home page it's really slow when getting the gallery theme list and logged messages, so this picture took some patience to get.

If you get your browser developer tools (F12) open on this page you will notice that it will call this URL http://www.website.com/api/setup?version=3.1.1.0 to get the latest version to upgrade to. In my case "3.3.8.0".

You would then click on the Upgrade button and it will take you to this page "http://www.website.com/setup/upgrade":

"Looks like you already running latest version!"

Upgrading my BlogEngine.NET website from 3.1.1.0 to 3.3.80

Of course we know this isn't true. We know there is a new version available.

To track down the issue, I looked for the upgrade page and tried to understand what it was doing.
Ah, the http://www.website.com/setup/upgrade/index.cshtml page has three script tags, one of which references Updater.js.

<script src="~/setup/upgrade/jquery-2.0.3.min.js"></script>
<script src="~/setup/upgrade/bootstrap.min.js"></script>
<script src="~/setup/upgrade/Updater.js"></script>

When the page is ready it will check the version

$(document).ready(function () {
    Check();
});

var newVersion = "";

function Check() {
    // CurrentVersionCheckVersion();
    if (!newVersion) { newVersion = ""; }

    if (newVersion.length > 0) {
        $("#spin1").hide();
        $("#spin2").hide();
        $("#spin3").hide();
        $("#spin4").hide();
        $("#spin5").hide();
        $("#spin9").hide();
        $("#step9").hide();
        $('#msg-success').hide();
        $('#spnNewVersion').html(newVersion);
    }
    else {
        $("#frm").hide();
        $("#btnRun").hide();
        $("h2").html("Looks like you already running latest version!");
    }
}

function CheckVersion() {
    $("#spin1").show();
    $.ajax({
        url: AppRoot + "setup/upgrade/Updater.asmx/Check",
        data: "{ version: '" + CurrentVersion + "' }",
        type: "POST",
        contentType: "application/json; charset=utf-8",
        dataType: "json",
        async: false,
        success: function (result) {
            newVersion = result.d; // e.g. "3.2.0.0";
        }
    });
}

Note, the CheckVersion function calls "http://www.website.com/setup/upgrade/Updater.asmx/Check" with the current website version value in the POST request.
I needed to see what the newVersion variable is populated with so I added a console.log() call to the success function.

success: function (result) {
	console.log(result);
	newVersion = result.d; // "3.2.0.0";
},
error: function(err){
	console.log(err);
}

But nothing was getting logged so I added an error function.

error: function(err){
	console.log(err);
}

That returned a large responseText value with an HTML page titled "binary dreams | Error".

Ok, there is something wrong happening in the ASMX file. But, what?

This is the summarised class (the ... (dots) replace code unused by the Check method):

[ScriptService]
public class Updater  : WebService {

    private StringCollection _ignoreDirs;
    private List<InstalledLog> _installed;
    private string _root;
    private string _newZip;
    private string _oldZip;
    private static string _upgradeReleases = BlogConfig.GalleryFeedUrl.Replace("nuget", "/Releases/");
    //private static string _upgradeReleases = "http://dnbe.net/v01/Releases/";
    ...
    private string _versionsTxt = _upgradeReleases + "versions.txt";
    ...

    public Updater()
    {
        _root = HostingEnvironment.MapPath("~/");
        if (_root.EndsWith("\\")) _root = _root.Substring(0, _root.Length - 1);

        _newZip = _root + "\\setup\\upgrade\\backup\\new.zip";
        _oldZip = _root + "\\setup\\upgrade\\backup\\old.zip";

        _ignoreDirs = new StringCollection();
        _ignoreDirs.Add(_root + "\\Custom");
        _ignoreDirs.Add(_root + "\\setup\\upgrade");

        _installed = new List<InstalledLog>();
    }

    [WebMethod]
    public string Check(string version)
    {
        try
        {
            WebClient client = new WebClient();
            Stream stream = client.OpenRead(_versionsTxt);
            StreamReader reader = new StreamReader(stream);
            string line = "";

            while (reader.Peek() >= 0)
            {
                line = reader.ReadLine();

                if (!string.IsNullOrEmpty(version) && line.Contains("|"))
                {
                    var iCurrent = int.Parse(version.Replace(".", ""));
                    var iFrom = int.Parse(line.Substring(0, line.IndexOf("|")).Replace(".", ""));
                    var iTo = int.Parse(line.Substring(line.LastIndexOf("|") + 1).Replace(".", ""));

                    if (iCurrent >= iFrom  && iCurrent < iTo)
                    {
                        return line.Substring(line.LastIndexOf("|") + 1);
                    }
                }
            }
            return "";
        }
        catch (Exception)
        {
            return "";
        }
    }

    ...
}

So I started ruling things out. I tried to log the exception messages in the Check() try-catch but nothing was logged, let alone returned.
Then I set the return values to "test" + a number so I would know the failing path, and reloaded the upgrade page.

try {
    WebClient client = new WebClient();
    Stream stream = client.OpenRead(_versionsTxt);
    StreamReader reader = new StreamReader(stream);
    string line = "test1";
    
    while (reader.Peek() >= 0)
    {
    	line = reader.ReadLine();
			
        if (!string.IsNullOrEmpty(version) && line.Contains("|"))
        {
            var iCurrent = int.Parse(version.Replace(".", ""));
            var iFrom = int.Parse(line.Substring(0, line.IndexOf("|")).Replace(".", ""));
            var iTo = int.Parse(line.Substring(line.LastIndexOf("|") + 1).Replace(".", ""));

            if (iCurrent >= iFrom  && iCurrent < iTo)
            {
            	return "test2";//line.Substring(line.LastIndexOf("|") + 1);
            }
        }
    }
    return "test3";
}
catch (Exception)
{
    return "test4";
}

Still the same error page was returned in the Check response. I then commented out everything in the Updater() constructor, as the Check method never used any of it anyway, and still I was getting the error page response.

Then I had a hunch, it had to be in the class initialisation and this line was the likely culprit:

private static string _upgradeReleases = BlogConfig.GalleryFeedUrl.Replace("nuget", "/Releases/");

So I commented that line out and uncommented the line below it:

private static string _upgradeReleases = "http://dnbe.net/v01/Releases/";

Now I get this object logged in the AJAX success function.

Object { d: "test2" }

This means the happy path was executed and I can undo my previous debugging changes.

Lo and behold, another refresh and I get the expected page to say I can upgrade!

Upgrading my BlogEngine.NET website from 3.1.1.0 to 3.3.80
]]>
<![CDATA[My highlights of DDD East Midlands 2021]]>https://binarydreams.biz/my-highlights-of-ddd-east-midlands-2021/63605741234078001ffb328cSat, 02 Oct 2021 23:08:48 GMT
https://binarydreams.biz/my-highlights-of-ddd-east-midlands-2021/63605741234078001ffb328cSat, 02 Oct 2021 23:08:48 GMT

Hey all, I spent the day at DDD East Midlands 2021 and these were the most interesting talks.

Design for Developers by Lex Lofthouse. I wanted to see this talk because I've done some front end projects here and there and thought it would be helpful with those projects in the future. She went through a list of design principles and gave examples of what to do. I found it very insightful. She will be releasing the slides soon.

Senior by Default by Stephen Jackson. Interesting short talk about how he found himself suddenly promoted and his experiences learning to be a lead developer on the job.

Why do we need a Black Valley by Leke Sholuade. He demonstrated perfectly why with a picture that showed two comparisons - one titled "Equality" where 3 generations of people are standing on one box each but the youngest can't see over the fence to see the football game, and then "Equity" where the tallest no longer has the box but the youngest is now standing on two boxes and can see over the fence. Sometimes we need to take action to accelerate and improve chances so everyone can get in the game. You can find out more on Black Valley mentoring and job listings here.

How to ruin a kid's games with machine learning by Jennifer Mackown. A lesson in how sometimes you can't beat kids at board games using machine learning.

3D printed Bionic Hand a little IOT and an Xamarin Mobile App by Clifford Agius. This replaced an original talk that couldn't take place but ended up being my favourite talk of the day! Fascinating to hear about how a 3D printed hand was made for a child at such a reduced price and he could even use it to play XBox and customise it as he grows and muscles improve with use. I found a 9 month old YouTube video of the same talk here.

SOLID Principles in 5 Nightmares by Simon Painter. An interesting talk through SOLID using Star Wars as a way to explain the principles. In a completely separate example, this chap replied to a question on how to explain the principles to a child.

I look forward to next year!

]]>
<![CDATA[Automating scripts for AWS Sandbox setup - Create a new user]]>https://binarydreams.biz/automating-scripts-for-aws-sandbox-setup-create-new-user/63605741234078001ffb328bTue, 28 Sep 2021 15:25:25 GMT

I decided to go back and complete an old AWS Lambda & Serverless Architecture Bootcamp Udemy course I started in 2019. I didn't finish it at the time because I only learnt what I needed to know for the company project I was working on, and now that I have a bit of time on my hands I want to complete it.

This article series is about how I overcame the issues I faced with an out of date course, AWS playground limitations and what I did to automate as much as possible. All the code is in GitHub.

A few things have changed since that time including AWS services and Serverless Framework but with one additional difference - this time I am using the A Cloud Guru AWS Playground as my practice account.

The two main issues with the ACG AWS playground are that:

  • It limits the regions to us-east-1 and us-west-2 only. They are a bit further away from the UK, but I can work with it.
  • There is a set 4-hour time limit, which meant restarting the environment setup from scratch each time it ran out. I only have a certain amount of time to give to the course before I have something else I need to do, so I really needed some automation scripts.

I could have used my personal AWS account, but I wanted to:

  • save money
  • not complicate my own account
  • make my mistakes safe in isolation
  • do some learning
  • get more practice with technologies I haven't used that much

Create profile to deploy Serverless Framework

You need a profile to deploy the Serverless Framework as instructed in the Udemy course. First, I logged on to the A Cloud Guru website and created a new AWS sandbox.


The ACG cloud_user will already be set up in the account, so all you need to do is save its credentials to the ~/.aws/credentials file. Because I was repeatedly setting up the cloud_user, I found it easiest to keep the file open in Notepad++ and add or update the entry directly. You could still use the AWS CLI, but there is no need for me to go into that here.
It might be possible to automate this step using Cypress, but that means logging into ACG with your own password. Not a priority for me, but something that could be looked at later.
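
For reference, a cloud_user entry in ~/.aws/credentials would look something like this, assuming the profile is named cloudguru (the default the script later in this post expects) and using placeholder key values:

[cloudguru]
aws_access_key_id = AKIAXXXXXXXXXXXXXXXX
aws_secret_access_key = XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
Example cloud_user profile entry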

When attempting the deployment I found a couple of issues.

User not authorised to perform on resource with explicit deny

> serverless deploy
Serverless: Packaging service...

 Serverless Error ----------------------------------------
 
 User: arn:aws:iam::123456789012:user/cloud_user is not authorized to perform: cloudformation:DescribeStacks on resource: arn:aws:cloudformation:us-east-1:123456789012:stack/sls-notes-backend-prod/* with an explicit deny
Serverless Error

Although the cloud_user is restricted in what permissions you can add, Serverless Framework recommends creating a custom user for deployment anyway, with the AdministratorAccess managed policy attached. Yes, I know we should practise least-privilege access, but I am using a lab that is thrown away after the timeout, and for this Udemy course there is no need for me to go down that rabbit hole. Runtime access, on the other hand, should absolutely be least privilege. This article has a list of good practices around this sort of thing.

The security token included in the request is invalid

Serverless Error ----------------------------------------

The security token included in the request is invalid.
Serverless Error

The next error was easier to resolve. It was late in the day, so by the time I realised, the sandbox time had expired and the profile was no longer valid.
Also be aware of copying the wrong Access Key Id, because that would come back with the same error.

How the profile needs to be created

Once I got into repeatedly recreating the sandbox setup, I knew I was going to have to do this time and again, so I started thinking about what to use.
To create each user, the following steps need to be taken:

  • Create the user
  • Attach the AdministratorAccess managed policy
  • Create the secret access key
  • Save the AWS profile to the credentials file

The AWS CDK is a great tool, but it needs CloudFormation permissions and the required profile would have to be created first: a chicken-and-egg scenario.
I then thought I would use the AWS CLI in a batch file, but that gets more complicated because extracting values from the responses in batch script didn't look straightforward; plus it is an old technology and I need to move on.
I gave PowerShell some thought, but even its AWS SDK didn't support attaching a managed policy, so the AWS CLI would still be required for that step (see the example after this paragraph).
I then decided on Python. I've had some prior experience with it and, as it turned out, it was the right decision.
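
For reference, a minimal sketch of what that AWS CLI step would look like, assuming the same sls-user and cloudguru profile names used later in this post:

aws iam attach-user-policy \
    --user-name sls-user \
    --policy-arn arn:aws:iam::aws:policy/AdministratorAccess \
    --profile cloudguru
Attach the managed policy with the AWS CLI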

Creating the new user script

Pre-requisites:

  • You need Python installed, preferably the latest version
  • The AWS SDK for Python, Boto3

First we add the imports.

import boto3       # AWS SDK for Python
import subprocess  # to call the AWS CLI for the credentials import
import csv         # to write the temporary credentials file
import os          # to delete the temporary file afterwards
import argparse    # to parse the script arguments
Imports

The script needs parameters because I want the option to use a different profile to execute the script and a different user name to create. This is where you use the argparse ArgumentParser, with each argument defaulting to a set value.

# Set the arguments
parser = argparse.ArgumentParser(description='Create an AWS CloudFormation user.')
parser.add_argument('-e', '--executing-profile', dest='executingprofile',
                    default='cloudguru', help='The AWS profile to use to create the new user')
parser.add_argument('-n', '--new-user', dest='newuser', default='sls-user',
                    help='The new AWS user to be created')
args = parser.parse_args()
Parse the script arguments

To execute the script with the executing profile, we need to create a session. Note that I am using the args.executingprofile value.

# Use the AWS profile for this session
session = boto3.Session(profile_name=args.executingprofile)
iam = session.client('iam')
Use the profile for this session

Then you create the user.

print("Create the new user")
userCreated = iam.create_user(UserName=args.newuser)
Create the new user

Then attach the AdministratorAccess policy to the new user.

print("Attach the policy to the new user")
iam.attach_user_policy(
    UserName=args.newuser, 
    PolicyArn='arn:aws:iam::aws:policy/AdministratorAccess')
Attach the policy to the new user

Now we can get the Access Key Id and Secret Access Key.

print("Create the users' secret access key")
accessKey = iam.create_access_key(
    UserName=args.newuser,
)

# Save the values to use later
newUserId = accessKey['AccessKey']['AccessKeyId']
secretAccessKey = accessKey['AccessKey']['SecretAccessKey']
Create the user's secret access key

With those values we can save the new credentials to a temporary CSV file. This is a much more effective and trouble-free way of getting the new credentials into the AWS credentials file than editing it by hand.

print("Save the AWS credentials to a CSV file")
credentials_file = "credentials.csv"

with open(credentials_file, mode='w', newline='') as csv_file:
    fieldnames = ['User name', 'Access key ID', 'Secret access key']
    writer = csv.DictWriter(csv_file, fieldnames=fieldnames)

    writer.writeheader()
    writer.writerow({'User name': args.newuser, 'Access key ID': newUserId, 'Secret access key': secretAccessKey})
Save the credentials to a temporary file

Without newline='' the csv writer can produce a blank line between each row (particularly on Windows), so the file is opened with newline='' as shown above.

The AWS SDK does not provide a way to get the credentials into the credentials file, so the easiest way is to execute aws configure import through the AWS CLI.

subprocess.run(['aws', 'configure', 'import', '--csv',
                f'file://{credentials_file}'])
Import the credentials file

Finally, delete the temporary credentials file.

os.remove(credentials_file)
Delete the credentials file

You now have a quick and easy way to create an admin user:

> python .\create_aws_user.py
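
Or override the defaults by passing the arguments explicitly; for example, with a hypothetical executing profile and user name just for illustration:

> python .\create_aws_user.py --executing-profile my-sandbox --new-user deploy-user
Create the user with explicit arguments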

The full script is located in GitHub here.

In part 2, I go over the next issue, where the Udemy course required building a pipeline.

]]>
<![CDATA[Welcome to my new blog]]>I finally have my new blog set up using Ghost as a static website, hosted in AWS S3 and routed via CloudFlare. I will document how it was all created in a future blog post.

I've got loads of ideas and plans for this blog and will add

]]>
https://binarydreams.biz/welcome-to-my-new-blog/63605741234078001ffb3289Mon, 06 Sep 2021 23:09:46 GMT

I finally have my new blog set up using Ghost as a static website, hosted in AWS S3 and routed via CloudFlare. I will document how it was all created in a future blog post.

I've got loads of ideas and plans for this blog and will add some old posts from my old BlogEngine.Net blog over the coming months.

]]>