I've recently been migrating my infrastructure to GCP, and one of the most critical services I run is Ghost. Previously, I ran Ghost with:
- a tiny cloud VPS
- Docker Compose, which runs a MariaDB instance as well
- a local volume to store the Ghost content (images, themes, and so on)
- a custom theme that I uploaded manually every time I made a change
Obviously this was one of the worst setups I could have had: not scalable, not fault-tolerant, and troublesome even just to customise the theme files. So I came up with a new plan:
- making the theme files built-in to the Ghost container images
- using Backblaze B2 for the image storage
- serving those images through Cloudflare Workers, which saves a lot of money
That means no more persistent storage volumes. Money saved again. Here's how!
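The Cloudflare Workers part works because Backblaze and Cloudflare are both in the Bandwidth Alliance, so egress from B2 to Cloudflare is free, and a tiny Worker can proxy image requests to the bucket. Here is a minimal sketch; the bucket name and region are hypothetical placeholders:

```javascript
// Sketch of a Cloudflare Worker that proxies Ghost's image URLs to a B2
// bucket. The bucket name and region below are hypothetical placeholders.
const B2_ORIGIN = "https://s3.us-west-001.backblazeb2.com/my-ghost-images";

// Map an incoming request URL to the corresponding B2 object URL.
function upstreamUrl(requestUrl) {
  return B2_ORIGIN + new URL(requestUrl).pathname;
}

// In the Workers runtime, answer each request by fetching the object from
// B2; Cloudflare then caches the response at its edge.
if (typeof addEventListener === "function") {
  addEventListener("fetch", (event) => {
    event.respondWith(fetch(upstreamUrl(event.request.url)));
  });
}
```

With the storage adapter's asset host pointed at the Worker's hostname, Ghost renders image URLs that hit Cloudflare's cache instead of B2 directly.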
First of all, the Ghost Dockerfile:
```dockerfile
# from https://gist.github.com/22phuber/76c282d18ec0db166aa1aa3812217e1e
FROM ghost:3-alpine as ghost-storage-adapter-s3
WORKDIR $GHOST_INSTALL/current
RUN yarn add "ghost-storage-adapter-s3"

FROM ghost:3-alpine
COPY --from=ghost-storage-adapter-s3 $GHOST_INSTALL/current/node_modules $GHOST_INSTALL/current/node_modules
COPY --from=ghost-storage-adapter-s3 $GHOST_INSTALL/current/node_modules/ghost-storage-adapter-s3 $GHOST_INSTALL/current/core/server/adapters/storage/s3
ADD src ./content.orig/themes/default
```
This grabs the official ghost:3-alpine as the base image, installs the Amazon S3 storage adapter for Ghost, and copies ./src, the theme files, to ./content.orig/themes/default, from which Ghost's entrypoint automatically copies them into the content directory on startup.
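One caveat: Ghost expects every theme directory to ship a package.json with at least a name and a version, or the theme will fail validation. A minimal one for the bundled theme might look like this (the name, version, and ghost-api value are placeholders for your own theme):

```json
{
  "name": "default",
  "version": "1.0.0",
  "engines": {
    "ghost-api": "v3"
  }
}
```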
The relevant parts of the Kubernetes manifest (with most values omitted) look like this:

```yaml
- protocol: TCP
- name: ghost
- name: web
- name: url
- name: database__client
- name: database__connection__user
- name: database__connection__password
- name: database__connection__database
- name: mail__transport
- name: mail__from
  value: "'Birkhoff's Blog' <firstname.lastname@example.org>"
- name: mail__options__service
- name: mail__options__port
- name: mail__options__auth__user
- name: mail__options__auth__pass
- name: storage__active
- name: storage__s3__accessKeyId
  value: "AWS_ACCESS_KEY_ID" # B2 keyID
- name: storage__s3__secretAccessKey
  value: "AWS_SECRET_ACCESS_KEY" # B2 applicationKey
- name: storage__s3__region
  value: "AWS_DEFAULT_REGION" # e.g. us-west-001 for s3.us-west-001.backblazeb2.com
- name: storage__s3__bucket
  value: "GHOST_STORAGE_ADAPTER_S3_PATH_BUCKET" # B2 bucket name
- name: storage__s3__endpoint
  value: "GHOST_STORAGE_ADAPTER_S3_ENDPOINT" # e.g. s3.us-west-001.backblazeb2.com
- name: storage__s3__assetHost
  value: "GHOST_STORAGE_ADAPTER_S3_ASSET_HOST" # the image URL host on the website
```
Obviously you need a way to manage these secrets, which I will not cover here. For the S3-compatible API docs, check out https://help.backblaze.com/hc/en-us/articles/360047425453. Moreover, I use GitLab CI to automate the build and deployment, but that is purely a matter of personal preference.
Before deploying, export your old site's data from the Ghost admin panel. You will also need to run MySQL or another supported relational database for your K8s cluster. I tried Google Cloud SQL, but somehow it charged me ~30 USD for under 200 hours, so I decided to manage a MySQL instance on my own.
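For reference, self-managing MySQL can be as simple as a single-replica StatefulSet with a persistent volume. This is a minimal sketch, not a production setup; the names, image tag, and storage size are placeholders, and the root password is pulled from a hypothetical Secret:

```yaml
# Minimal single-replica MySQL for the cluster (placeholder names/sizes).
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mysql
spec:
  serviceName: mysql
  replicas: 1
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - name: mysql
        image: mysql:5.7
        env:
        - name: MYSQL_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysql-credentials # hypothetical Secret
              key: root-password
        ports:
        - containerPort: 3306
          name: mysql
        volumeMounts:
        - name: data
          mountPath: /var/lib/mysql
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 10Gi
```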