I've recently been migrating my infrastructure to GCP, and one of the most critical services I run is Ghost. Previously, I ran Ghost with:
- a tiny cloud VPS
- Docker Compose, which runs a MariaDB instance as well
- a local volume mounted at /var/lib/ghost/content
- a custom theme that I uploaded manually every time I made a change
Obviously, this was one of the worst setups I could have had: not scalable, not fault-tolerant, and troublesome even just to customise the theme files. So I came up with a new plan:
- baking the theme files into the Ghost container image
- using Backblaze B2 for image storage
- and serving those images through Cloudflare Workers, which saves a lot of money (B2-to-Cloudflare traffic is free thanks to the Bandwidth Alliance)
That means no more persistent storage volume for the Ghost content. Money saved again. Here's how!
First of all, the Ghost Dockerfile:
# from https://gist.github.com/22phuber/76c282d18ec0db166aa1aa3812217e1e
# stage: install the S3 storage adapter
FROM ghost:3-alpine as ghost-storage-adapter-s3
WORKDIR $GHOST_INSTALL/current
RUN yarn add "ghost-storage-adapter-s3"
# build: copy the adapter into the final image, along with the theme
FROM ghost:3-alpine
COPY --from=ghost-storage-adapter-s3 $GHOST_INSTALL/current/node_modules $GHOST_INSTALL/current/node_modules
COPY --from=ghost-storage-adapter-s3 $GHOST_INSTALL/current/node_modules/ghost-storage-adapter-s3 $GHOST_INSTALL/current/core/server/adapters/storage/s3
ADD src ./content.orig/themes/default
This uses the official ghost:3-alpine as the base image, installs the Amazon S3 storage adapter for Ghost, and copies ./src, the theme files, to ./content.orig/themes/default, which will later be copied automatically to ./content/themes/default.
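The image then needs to be built and pushed to a registry your cluster can pull from. I automate this with GitLab CI (more on that later); as a rough sketch, assuming GitLab's built-in container registry (the $CI_REGISTRY_* variables are predefined by GitLab, the job name and tag are arbitrary), such a job could look like:
# requires a runner that supports Docker-in-Docker
build-image:
  image: docker:20.10
  services:
    - docker:20.10-dind
  script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    - docker build -t "$CI_REGISTRY_IMAGE:latest" .
    - docker push "$CI_REGISTRY_IMAGE:latest"
Any other registry and CI system works just as well, as long as the resulting image URL ends up in the Deployment below.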
The Kubernetes manifest file will look like this:
apiVersion: v1
kind: Service
metadata:
  name: ghost
spec:
  ports:
    - protocol: TCP
      name: web
      port: 2368
  selector:
    app: ghost
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: ghost
  labels:
    app: ghost
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ghost
  template:
    metadata:
      labels:
        app: ghost
    spec:
      containers:
        - name: ghost
          image: YOUR_DOCKER_IMAGE_URL_AND_TAG
          ports:
            - name: web
              containerPort: 2368
          resources:
            limits:
              memory: "256Mi"
              cpu: "250m"
          env:
            - name: url
              value: "https://blog.birkhoff.me"
            - name: database__client
              value: "mysql"
            - name: database__connection__host
              value: "mysql" # hostname of the MySQL Service running in your cluster
            - name: database__connection__user
              value: "ghost"
            - name: database__connection__password
              value: "ghost"
            - name: database__connection__database
              value: "ghost"
            - name: mail__transport
              value: "SMTP"
            - name: mail__from
              value: "'Birkhoff's Blog' <no-reply@blog.birkhoff.me>"
            - name: mail__options__service
              value: "Mailgun"
            - name: mail__options__port
              value: "2525"
            - name: mail__options__auth__user
              value: "smtp_user@mailgun.com"
            - name: mail__options__auth__pass
              value: "some_password_here"
            - name: storage__active
              value: "s3"
            - name: storage__s3__accessKeyId
              value: "AWS_ACCESS_KEY_ID" # B2 keyID
            - name: storage__s3__secretAccessKey
              value: "AWS_SECRET_ACCESS_KEY" # B2 applicationKey
            - name: storage__s3__region
              value: "AWS_DEFAULT_REGION" # e.g. us-west-001 for s3.us-west-001.backblazeb2.com
            - name: storage__s3__bucket
              value: "GHOST_STORAGE_ADAPTER_S3_PATH_BUCKET" # B2 bucket name
            - name: storage__s3__endpoint
              value: "GHOST_STORAGE_ADAPTER_S3_ENDPOINT" # e.g. s3.us-west-001.backblazeb2.com
            - name: storage__s3__assetHost
              value: "GHOST_STORAGE_ADAPTER_S3_ASSET_HOST" # the image URL host on the website
Obviously you need a way to manage the secrets; I won't go into detail here, but there is a rough sketch below. For Backblaze's S3-compatible API docs, check out https://help.backblaze.com/hc/en-us/articles/360047425453. Moreover, I use GitLab CI to automate the whole process (as sketched earlier), but that is purely personal preference.
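Just to give a rough idea (the Secret name, keys and values here are placeholders, not my actual setup), the usual pattern is a Kubernetes Secret plus secretKeyRef:
apiVersion: v1
kind: Secret
metadata:
  name: ghost-secrets
type: Opaque
stringData:
  database-password: "ghost"
  b2-application-key: "AWS_SECRET_ACCESS_KEY"
Then, in the Deployment's env list, a literal value becomes a reference:
- name: database__connection__password
  valueFrom:
    secretKeyRef:
      name: ghost-secrets
      key: database-password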
Before deploying, export your old site's data from the Ghost admin panel. You will also need to run MySQL (or some other relational database Ghost supports) on your K8s cluster. I tried Google Cloud SQL, but somehow they charged me ~30 USD for under 200 hours, so I decided to manage a MySQL instance on my own.
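For reference, a bare-bones single-replica MySQL setup could look roughly like the following; every name, password and the PVC here is a placeholder, and in practice the passwords belong in a Secret:
apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  ports:
    - protocol: TCP
      port: 3306
  selector:
    app: mysql
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - name: mysql
          image: mysql:5.7
          ports:
            - containerPort: 3306
          env:
            - name: MYSQL_ROOT_PASSWORD
              value: "some_root_password"
            - name: MYSQL_DATABASE
              value: "ghost"
            - name: MYSQL_USER
              value: "ghost"
            - name: MYSQL_PASSWORD
              value: "ghost"
          volumeMounts:
            - name: mysql-data
              mountPath: /var/lib/mysql
      volumes:
        - name: mysql-data
          persistentVolumeClaim:
            claimName: mysql-data # a PersistentVolumeClaim you create separately
Point database__connection__host in the Ghost Deployment at this Service (mysql). Yes, this does keep one persistent volume around, but it only holds the database, not the uploaded images.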