The evolution of my solution to blogging

I have been writing blog posts (mostly technical ones) since elementary school,
starting about eight years ago. At that time, I had very little knowledge of the
web. I started with WordPress, mostly because I was more comfortable with PHP,
since it was the very first programming language I learned.

I hosted that WordPress site on GoDaddy. At some point, GoDaddy raised a payment
issue on my account, and I was not able to renew it because of credit card
issues (I wasn't even 15, so billing was always a big problem for me). It turned
out that they had immediately deleted every single byte of the site's files,
without any backups. I lost everything related to the blog and decided to move
it somewhere else.

I then began blogging on Ghost(Pro), the hosted Ghost service. After two or
three years, I became aware that this solution was costing me a lot of money
annually, so I decided to self-host the Ghost blog. I tried GCP (even GKE) and
AWS, but they were, frankly, still too expensive for me. I ultimately went with
Vultr, and I have been hosting all my infrastructure there since.

A couple of years later, it came to my attention that I did not really need a
backend for blogging. My content doesn't change frequently, so it was
unnecessary to maintain a Node.js backend and a database solely for a blogging
site. I decided to publish the blog with a static site generator. One
significant downside of this approach is that when the content gets updated,
the change might not take effect immediately due to CDN edge cache TTLs. But
that is perfectly fine for me.

Initially I started with Hexo, but later I moved to Hugo. There's no particular
reason for this; I just thought Hugo was a more mature solution for my needs.

Now that my site is generated as static files, whenever the content gets
updated I need to upload it somewhere reliable. GitHub Pages was an OK-ish
service for this, but GitHub hasn't been reliable lately.

Fortunately, I came across Cloudflare Workers KV, and Cloudflare has made it
surprisingly easy to host a static website in a serverless environment with
Wrangler. It worked out pretty well, since requests are handled directly by
Cloudflare's edge servers.
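
In case you're wondering what that setup looks like: with Workers Sites,
Wrangler uploads the generated files to Workers KV and serves them from a small
Worker at the edge. Roughly, the wrangler.toml looked like the sketch below
(Wrangler 1.x era; the project name and account ID are placeholders, not my
actual values):

name       = "blog"
type       = "webpack"
account_id = "0000000000000000000000000000000000"

[site]
# Hugo writes the generated site to ./public; Wrangler pushes it to Workers KV
bucket      = "./public"
entry-point = "workers-site"

A single "wrangler publish" then builds and deploys everything.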

But I felt even more adventurous. I decided to try out S3 and CloudFront for
hosting my Hugo site, and I also moved my nameservers to Route 53. All of this
infrastructure is managed with Terraform, and setting everything up takes only
a few Terraform resources. Here is some of my code for your reference:

data "template_file" "bucket_policy" {
template = file("bucket-policy.json")

vars = {
bucket_name = var.bucket_name
deployment_user_arn = module.s3_user.user_arn
}
}

resource "aws_s3_bucket" "hugo" {
bucket = var.bucket_name
acl = "public-read"
policy = data.template_file.bucket_policy.rendered
force_destroy = true

website {
index_document = "index.html"
error_document = "404.html"

routing_rules = <<EOF
[{
"Condition": {
"KeyPrefixEquals": "/"
},
"Redirect": {
"ReplaceKeyWith": "index.html"
}
}]
EOF

}

cors_rule {
allowed_headers = []
allowed_methods = ["GET"]
allowed_origins = ["https://s3.amazonaws.com"]
expose_headers = []
max_age_seconds = 3000
}
}

resource "aws_acm_certificate" "blog_certificate" {
provider = aws.us_east
domain_name = "birkhoff.me"
validation_method = "DNS"

# tags: {
Environment = "prod"
}

lifecycle {
create_before_destroy = true
}
}

resource "aws_route53_record" "acm_validation" {
for_each = {
for dvo in aws_acm_certificate.blog_certificate.domain_validation_options : dvo.domain_name => {
name = dvo.resource_record_name
record = dvo.resource_record_value
type = dvo.resource_record_type
}
}

allow_overwrite = true
name = each.value.name
records = [each.value.record]
ttl = 60
type = each.value.type
zone_id = data.terraform_remote_state.dns.outputs.route53_zone_id
}
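
# Note (a sketch, not part of my original config): the validation records
# above don't make Terraform wait for the certificate to actually be issued.
# An aws_acm_certificate_validation resource can tie the two together:
resource "aws_acm_certificate_validation" "blog_certificate" {
  provider                = aws.us_east
  certificate_arn         = aws_acm_certificate.blog_certificate.arn
  validation_record_fqdns = [for record in aws_route53_record.acm_validation : record.fqdn]
}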

resource "aws_cloudfront_distribution" "hugo" {
count = 1
depends_on = [aws_s3_bucket.hugo, aws_acm_certificate.blog_certificate]

origin {
custom_origin_config {
http_port = 80
https_port = 443
origin_protocol_policy = "http-only"
origin_ssl_protocols = ["TLSv1.2"]
}

domain_name = "${var.bucket_name}.s3-website-${var.aws_region}.amazonaws.com"

origin_id = local.s3_origin_id
}

custom_error_response {
error_caching_min_ttl = 3600
error_code = 404
response_code = 404
response_page_path = "/404.html"
}

enabled = true
is_ipv6_enabled = true
default_root_object = "index.html"

aliases = ["birkhoff.me"]

default_cache_behavior {
allowed_methods = ["GET", "HEAD", "OPTIONS"]
cached_methods = ["GET", "HEAD"]
target_origin_id = local.s3_origin_id

forwarded_values {
query_string = false

cookies {
forward = "none"
}
}

viewer_protocol_policy = "redirect-to-https"

default_ttl = 86400
min_ttl = 0
max_ttl = 31536000
}

price_class = "PriceClass_200"

viewer_certificate {
acm_certificate_arn = aws_acm_certificate.blog_certificate.arn
ssl_support_method = "sni-only"
minimum_protocol_version = "TLSv1.2_2019"
}

restrictions {
geo_restriction {
restriction_type = "none"
}
}
}

Hugo itself offers a deployment tool for S3 in its CLI:
https://gohugo.io/hosting-and-deployment/hugo-deploy/#configure-the-deployment.
I only needed to create a dedicated IAM user for Hugo. I also bundled the AWS
CLI and the Hugo CLI in a Docker image, which is automatically built by Docker
Hub.
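
For reference, hugo deploy reads its target from the site configuration. Below
is a minimal sketch of that section in config.toml; the bucket, region, and
distribution ID are placeholders, not my actual values:

[deployment]

[[deployment.targets]]
name = "aws"
# Placeholder bucket and region
URL = "s3://my-hugo-bucket?region=us-east-1"
# Setting this lets hugo deploy invalidate the CloudFront cache after uploading
cloudFrontDistributionID = "E2EXAMPLE123456"

The dedicated IAM user only needs read/write access to the bucket plus the
cloudfront:CreateInvalidation permission.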

The .gitlab-ci.yml is as simple as the following, after setting the AWS access
credentials on the CI/CD environment variables configuration page:

stages:
  - build-and-deploy

build-and-deploy:
  stage: build-and-deploy
  only:
    - master
  image: birkhofflee/awscli-hugo:latest
  script:
    - hugo --minify
    - hugo deploy

Now whenever the source code is pushed to GitLab, CI automatically builds the
files, deploys them to S3, and invalidates the CloudFront cache if needed.

At the time of writing, this site is hosted on both Cloudflare Workers and
S3 + CloudFront, using weighted DNS records on Route 53.

$ zsh -c 'for i in `seq 1 1000`; dig @ns-993.awsdns-60.net +short A birkhoff.me | head -n 1' | uniq | nali
198.41.214.162 [CloudFlare Edge]
13.226.123.106 [Amazon Hong Kong PoP]

My Terraform config for this, for your reference:

resource "aws_route53_record" "blog-cloudfront-a" {
zone_id = data.terraform_remote_state.dns.outputs.route53_zone_id
name = "birkhoff.me"
type = "A"

set_identifier = "blog-cloudfront-a"

weighted_routing_policy {
weight = 50
}

alias {
name = aws_cloudfront_distribution.hugo[0].domain_name
zone_id = aws_cloudfront_distribution.hugo[0].hosted_zone_id
evaluate_target_health = true
}
}

resource "aws_route53_record" "blog-cloudfront-aaaa" {
zone_id = data.terraform_remote_state.dns.outputs.route53_zone_id
name = "birkhoff.me"
type = "AAAA"

set_identifier = "blog-cloudfront-aaaa"

weighted_routing_policy {
weight = 50
}

alias {
name = aws_cloudfront_distribution.hugo[0].domain_name
zone_id = aws_cloudfront_distribution.hugo[0].hosted_zone_id
evaluate_target_health = true
}
}

# Cloudflare Workers
resource "aws_route53_record" "blog-workers-a" {
zone_id = data.terraform_remote_state.dns.outputs.route53_zone_id
name = "birkhoff.me"
type = "A"
ttl = "3600"
records = ["198.41.214.162", "104.16.132.229"]

set_identifier = "blog-workers-a"

weighted_routing_policy {
weight = 50
}
}

# Cloudflare Workers
resource "aws_route53_record" "blog-workers-aaaa" {
zone_id = data.terraform_remote_state.dns.outputs.route53_zone_id
name = "birkhoff.me"
type = "AAAA"
ttl = "3600"
records = ["2606:4700::6811:d109", "2606:4700::6810:85e5"]

set_identifier = "blog-workers-aaaa"

weighted_routing_policy {
weight = 50
}
}

That's it for now. I hope this post inspires you in some way! If you have any
thoughts on it, please do not hesitate to comment below.