Moving Ghost Blog to Google Kubernetes Engine

I've recently been migrating my infrastructure to Google Cloud Platform. Previously, I ran this Ghost blog on a tiny cloud VPS, which was neither scalable nor fault-tolerant. Since I've been using Kubernetes for a while, I decided to move the blog to Google Kubernetes Engine, the managed Kubernetes service on Google Cloud Platform.

In the following, I'll cover how you can use Backblaze B2 as the image storage for Ghost, which is a lot cheaper than Amazon S3. You can even pair it with Cloudflare Workers to further reduce egress traffic costs.

Also, to avoid an additional persistent storage volume, I decided to use a custom Ghost Docker image with the theme files built in.

# Stage 1: install the S3 storage adapter with Yarn
FROM ghost:3-alpine AS ghost-storage-adapter-s3

WORKDIR $GHOST_INSTALL/current
RUN yarn add "ghost-storage-adapter-s3"

# Stage 2: copy the installed dependencies and register the adapter
FROM ghost:3-alpine
COPY --chown=node:node --from=ghost-storage-adapter-s3 $GHOST_INSTALL/current/node_modules $GHOST_INSTALL/current/node_modules
COPY --chown=node:node --from=ghost-storage-adapter-s3 $GHOST_INSTALL/current/node_modules/ghost-storage-adapter-s3 $GHOST_INSTALL/current/core/server/adapters/storage/s3

# Bundle the theme; Ghost copies content.orig into the content directory on startup
COPY --chown=node:node src ./content.orig/themes/default

This uses the official ghost:3-alpine as the base image, installs the Amazon S3 storage adapter for Ghost in a build stage, and copies ./src, the theme files, to ./content.orig/themes/default, from where Ghost copies them to ./content/themes/default on startup.
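With the Dockerfile above, building the image and pushing it to Google Container Registry might look like the following sketch. The project ID, image name, and tag are placeholders, so substitute your own:

```shell
# Placeholder names: replace my-gcp-project and your chosen tag.
docker build -t gcr.io/my-gcp-project/ghost:1.0.0 .

# Let Docker authenticate against Google Container Registry via gcloud.
gcloud auth configure-docker

# Push the tagged image so the cluster can pull it.
docker push gcr.io/my-gcp-project/ghost:1.0.0
```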

After you've prepared the Docker image, you can push it to Google Container Registry. I use Google Container Registry because I'm already using Google Kubernetes Engine. Once you have a valid image tag, you can start working on the Kubernetes manifests. Here's an example:

apiVersion: v1
kind: Service
metadata:
  name: ghost

spec:
  ports:
    - protocol: TCP
      name: web
      port: 2368
  selector:
    app: ghost
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: ghost
  labels:
    app: ghost

spec:
  replicas: 1
  selector:
    matchLabels:
      app: ghost
  template:
    metadata:
      labels:
        app: ghost
    spec:
      containers:
        - name: ghost
          image: YOUR_DOCKER_IMAGE_URL_AND_TAG
          ports:
            - name: web
              containerPort: 2368
          resources:
            limits:
              memory: "256Mi"
              cpu: "250m"
          env:
            - name: url
              value: "https://blog.birkhoff.me"
            - name: database__client
              value: "mysql"
            - name: database__connection__user
              value: "ghost"
            - name: database__connection__password
              value: "ghost"
            - name: database__connection__database
              value: "ghost"
            - name: mail__transport
              value: "SMTP"
            - name: mail__from
              value: "'Birkhoff's Blog' <no-reply@blog.birkhoff.me>"
            - name: mail__options__service
              value: "Mailgun"
            - name: mail__options__port
              value: "2525"
            - name: mail__options__auth__user
              value: "smtp_user@mailgun.com"
            - name: mail__options__auth__pass
              value: "some_password_here"
            - name: storage__active
              value: "s3"
            - name: storage__s3__accessKeyId
              value: "AWS_ACCESS_KEY_ID" # B2 keyID
            - name: storage__s3__secretAccessKey
              value: "AWS_SECRET_ACCESS_KEY" # B2 applicationKey
            - name: storage__s3__region
              value: "AWS_DEFAULT_REGION" # e.g.: us-west-001 for s3.us-west-001.backblazeb2.com
            - name: storage__s3__bucket
              value: "GHOST_STORAGE_ADAPTER_S3_PATH_BUCKET" # B2 bucket name
            - name: storage__s3__endpoint
              value: "GHOST_STORAGE_ADAPTER_S3_ENDPOINT" # s3.us-west-001.backblazeb2.com
            - name: storage__s3__assetHost
              value: "GHOST_STORAGE_ADAPTER_S3_ASSET_HOST" # the image URL host on the website
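Assuming the manifests above are saved as ghost.yaml (the filename is arbitrary), deploying and verifying the rollout could look like this:

```shell
# Apply the Service and Deployment defined above.
kubectl apply -f ghost.yaml

# Wait for the Deployment to finish rolling out.
kubectl rollout status deployment/ghost

# Confirm the pod is running.
kubectl get pods -l app=ghost
```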

To further secure the infrastructure, you'll want a proper way to manage secrets instead of hard-coding credentials in the manifest, e.g. Google Cloud Secret Manager or HashiCorp Vault.
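As an interim step before adopting a full secret manager, you could at least move the sensitive values into a Kubernetes Secret and reference it from the Deployment. The secret and key names below are placeholders of my choosing:

```shell
# Create a Secret holding the sensitive values (placeholder names/values).
kubectl create secret generic ghost-secrets \
  --from-literal=database-password='ghost' \
  --from-literal=smtp-password='some_password_here' \
  --from-literal=s3-secret-access-key='AWS_SECRET_ACCESS_KEY'

# Then, in the Deployment, replace the plain "value:" entries with, e.g.:
#   - name: database__connection__password
#     valueFrom:
#       secretKeyRef:
#         name: ghost-secrets
#         key: database-password
```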

Before deploying, you should export your old site's data from the Ghost admin panel. For the database, you could choose Google Cloud SQL; it didn't work out for me because of its pricing, so I ran another MySQL instance inside the Kubernetes cluster to reduce costs.
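A minimal sketch of such an in-cluster MySQL, assuming the mysql:5.7 image and placeholder credentials. Note that this has no PersistentVolumeClaim, so the data vanishes with the pod; add a volume before trusting it with real content:

```shell
# Run MySQL in the cluster (placeholder password; add a PVC for durability).
kubectl create deployment mysql --image=mysql:5.7
kubectl set env deployment/mysql \
  MYSQL_ROOT_PASSWORD=change-me \
  MYSQL_DATABASE=ghost \
  MYSQL_USER=ghost \
  MYSQL_PASSWORD=ghost

# Expose it inside the cluster; the Service name "mysql" can then be used
# as database__connection__host in the Ghost Deployment's env.
kubectl expose deployment mysql --port=3306
```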