Now I can back up and update the blog, but a few times I have had to manually re-import the blog data. You saw in part 1 that the backup scripts export the blog content as a JSON file and the content (which includes all the images used) as a ZIP file. The blog is manually imported with these steps:
The blog content JSON file is imported from the Settings > Labs > Import button.
The images are uncompressed from the ZIP file into the blog content folder.
The profile is reset whenever the blog is reset, so the details, the profile text and pictures, have to be updated manually again.
I've done this enough times, and I need to update the Ghost blog version and fix any Snyk-reported security issues, so I will automate all this and show the scripts that will import a backup automatically. These scripts are a work-in-progress because I haven't needed to execute them yet. This code is my educated guess so far, and I will update it once I have confirmed it works.
Importing the blog content
The import is split into two scripts: one to download the backup and the other to do the import.
The download_website.sh script will:
Get the list of backups from AWS S3, limited to those after the set datetime when backups started being packaged correctly.
Convert the JSON array to a bash shell array.
Display the list of backup datetimes and wait for user input.
Once selected, create the download folder.
Get each file in the requested backup from AWS S3 and save it to the download folder.
From Docker, extract the blog content images from the archive into the blog folder. NOTE: This doesn't currently work because I haven't restarted the blog with the new import folder reference in the Dockerfile.
This saves me logging into the AWS console, listing the bucket contents and downloading the archive, or using the AWS CLI myself.
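Since I haven't run this end-to-end yet, here is a minimal sketch of what download_website.sh could look like. The bucket name, the cutoff date and the folder layout are assumptions for illustration, not confirmed values:

#!/bin/bash
# Sketch of download_website.sh; bucket name, cutoff date and paths are placeholders.
BUCKET="binarydreams-blog-backups"  # assumed bucket name
CUTOFF="2023-01-01"                 # assumed datetime after which backups are packaged correctly

echo -e "\nGet list of backups from S3 ..."
# Query S3 for backup keys newer than the cutoff, returned as a JSON array.
KEYS_JSON=$(aws s3api list-objects-v2 --bucket "$BUCKET" \
    --query "Contents[?LastModified>='${CUTOFF}'].Key" --output json)

# Convert the JSON array to a bash shell array of backup folder names.
declare -a BACKUPS
BACKUPS=($(echo "$KEYS_JSON" | jq -r '.[]' | xargs -n1 dirname | sort -u))

# Display the list of backup datetimes and wait for user input.
for (( i=0; i<${#BACKUPS[@]}; i++ )); do
    echo "[$((i + 1))] ${BACKUPS[$i]}"
done
read -p "Choose backup number> " RESULT
SELECTED=${BACKUPS[$(($RESULT - 1))]}

# Create the download folder and fetch every file in the chosen backup.
mkdir -p "data/import/${SELECTED}"
aws s3 cp "s3://${BUCKET}/${SELECTED}" "data/import/${SELECTED}" --recursive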
Extract the blog data
The first part of the extraction is to uncompress the blog content into the import location and then move it to the Ghost install location.
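As a sketch only, assuming the blog runs in a container named ghost, the archive is called content.zip and Ghost keeps its content in the default /var/lib/ghost/content location (none of these names are confirmed yet), that step could look like this:

# Uncompress the blog content into the import location on the host ...
unzip -o "data/import/${SELECTED_DATE}/content.zip" -d "data/import/${SELECTED_DATE}/content"
# ... then move it into the Ghost install location inside the container.
docker cp "data/import/${SELECTED_DATE}/content/." ghost:/var/lib/ghost/content/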
Import the blog data
The second script, import_website.sh, will:
Get the list of downloaded imports found in the import folder.
Wait for the user to select the datetime of the import.
Execute the Cypress tests with the selected datetime.
#!/bin/bash
echo -e "\nGet list of imports from import folder ..."
declare -a FOLDERS
FOLDERS=($(ls -d data/import/* 2>/dev/null))

# Exit early if no imports were found.
if [ ${#FOLDERS[@]} -eq 0 ]
then
    echo "No imports found."
    exit 1
fi

# Display a 1-based menu of the available imports.
for (( i=0; i<${#FOLDERS[@]}; i++ ));
do
    folder=${FOLDERS[$i]}
    echo "[$((i + 1))] $folder";
done
echo -e "[x] Any other key to exit\n"

read -p "Choose import number> " RESULT
# Exit if the input is not a number.
if [ -z "${RESULT##*[!0-9]*}" ]
then
    exit 1
fi
# Exit if the number is outside the menu range.
if [ "$RESULT" -lt 1 ] || [ "$RESULT" -gt ${#FOLDERS[@]} ]
then
    exit 1
fi

# Reduce the 1-based menu choice to the 0-based array index.
RESULT=$(($RESULT - 1))
SELECTED_DATE=${FOLDERS[$RESULT]}
echo "$SELECTED_DATE"

# TODO:
# User chooses whether to reset the blog content by deleting the existing blog content.
# Check if the first page in the test is logging in OR blog setup.
# FOR NOW:
# Will need to manually set up the blog, delete default blog posts and content files.
# The UI tests should do the rest.
#echo "Run the UI test to import the blog from JSON files and return to this process"
#npx as-a binarydreams-blog cypress run --spec "cypress/e2e/ghost_import.cy.js" --env timestamp=$SELECTED_DATE
The Cypress test will:
Log into Ghost.
Check the blog content JSON file exists.
Import the blog content, using the import datetime passed in as an argument.
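I haven't written or verified this spec end-to-end yet, so the following is only a sketch of what ghost_import.cy.js could look like; the selectors, the backup file name and the credential env vars are assumptions:

// cypress/e2e/ghost_import.cy.js - a sketch, not the confirmed spec
describe('Ghost import', () => {
  it('imports the blog content JSON', () => {
    const timestamp = Cypress.env('timestamp');
    const backupFile = `data/import/${timestamp}/blog-content.json`; // assumed file name

    // Log into Ghost.
    cy.visit('/ghost/#/signin');
    cy.get('input[name="identification"]').type(Cypress.env('GHOST_USER'));
    cy.get('input[name="password"]').type(Cypress.env('GHOST_PASSWORD'), { log: false });
    cy.contains('button', 'Sign in').click();

    // Check the blog content JSON file exists (fails the test if missing).
    cy.readFile(backupFile);

    // Import the blog content from the Labs page.
    cy.visit('/ghost/#/settings/labs');
    cy.get('input[type="file"]').selectFile(backupFile, { force: true });
    cy.contains('button', 'Import').click();
  });
});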
Then the profile is imported with these steps:
Log into Ghost.
Read the profile JSON file from the expected location.
Browse to the profile page.
Upload the cover picture.
Upload the profile picture.
Enter the profile details.
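Again as an unverified sketch, assuming a profile.json with name, bio, coverImage and profileImage fields and the default Ghost admin staff page (the real selectors and JSON shape may differ), the profile steps could look like this:

// Sketch of the profile import; the JSON shape and selectors are assumptions.
it('restores the profile', () => {
  // Log into Ghost first (same sign-in steps as the import test above).

  // Read the profile JSON file from the expected location.
  cy.readFile('data/import/profile.json').then((profile) => {
    // Browse to the profile page for the logged-in user.
    cy.visit('/ghost/#/settings/staff');
    cy.contains(profile.name).click();

    // Upload the cover picture, then the profile picture.
    cy.get('input[type="file"]').first().selectFile(profile.coverImage, { force: true });
    cy.get('input[type="file"]').last().selectFile(profile.profileImage, { force: true });

    // Enter the profile details.
    cy.get('input[name="name"]').clear().type(profile.name);
    cy.get('textarea[name="bio"]').clear().type(profile.bio);
  });
});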
Once this has been fully tested, I will update this article.