As part of my ongoing nearshoring project I found Bunny.net to best fit the hosting needs for my static websites. It’s good, it’s fast and it’s cheap - a rare contradiction to RFC 1925!
In this post I will cover automated site deployments with the following stack:
- Bunny Storage & CDN
- GitHub Actions
- Terraform
Bunny.net infrastructure
To serve our website with Bunny we need two things:
- A storage zone to store our website
- A pull zone to serve our website through
Storage zone
A storage zone is an object storage bucket: a “container” for your objects/files/blobs with its own set of properties and settings. In Bunny, the most important settings for our storage zone are tier selection and replication.
The edge tier is stored on SSDs, while the standard tier is stored on spinning rust. The price difference is barely measurable for our purposes, while the performance difference is huge. Replication settings mainly dictate the distance to the origin on cache misses, so I have enabled all replication regions.
My monthly spend on storage for this site is about 1¢, despite turning all the knobs towards performance rather than cost.
Our storage zone configuration looks like this:
```terraform
resource "random_string" "randstring" {
  length  = 4
  special = false
  upper   = false
}

resource "bunnynet_storage_zone" "site" {
  name      = "sz-${var.sitename}-${random_string.randstring.result}"
  region    = "DE"
  zone_tier = "Edge"
  replication_regions = ["BR", "CZ", "ES", "HK", "JH", "JP", "LA",
    "MI", "NY", "SE", "SG", "SYD", "UK", "WA"]
}
```
Storage zone names must be unique and cannot be reused, hence the random suffix in the name. Without it, you will find yourself manually updating the name whenever you accidentally delete your storage zone.
Pull zone
Our pull zone holds the settings for serving our site through the Bunny CDN.
When you create your pull zone you get a default hostname for your site in the format {pullzone name}.b-cdn.net. This is what you point your ALIAS/ANAME record at. Before moving your DNS records, you can also use this hostname to verify that everything works.
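As an illustration, for a pull zone named pz-example the apex record could look like this in zone-file-style notation (ALIAS/ANAME is a provider-specific pseudo-record type, so the exact syntax depends on your DNS provider; the names here are hypothetical):

```
example.com.  3600  IN  ALIAS  pz-example.b-cdn.net.
```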
The main settings that affect cost are tier selection and pricing zones. The standard tier makes the most sense for our purposes, and you likely want to enable all zones. Unless your website is significantly more popular than mine, the cost should only be a few euros per month. Just in case this site goes viral, I have also configured a bandwidth limit of 40 GB. This gives me a max cost of about €22 per month.
I have left the caching settings at their default values and see a 50% cache hit rate after a few days. The default max age for content served from our storage zone is about 300 days. This is fine for my use as the page rarely changes, but you might want to change it. Alternatively, you can add a step to your workflow that purges the cache if your content changes frequently. Note that smart caching does not cache text/html, so you likely won’t want to enable it for your static site.
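A purge step could be sketched like this, using the API’s POST /pullzone/{id}/purgeCache endpoint. Note this is an illustration rather than the exact workflow from the repo: it assumes the numeric pull zone ID is exposed as a hypothetical job output named bunny-pz-id, and it reuses the BUNNY_KEY secret described later in this post.

```yaml
- name: Purge cache
  run: |
    curl --fail -X POST \
         --header 'AccessKey: ${{ secrets.BUNNY_KEY }}' \
         https://api.bunny.net/pullzone/${{ needs.deploy-infra.outputs.bunny-pz-id }}/purgeCache
```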
The pull zone configuration looks like this:
```terraform
resource "bunnynet_pullzone" "site" {
  name          = "pz-${var.sitename}"
  cache_enabled = false # Misleading name: this toggles smart caching

  origin {
    type        = "StorageZone"
    storagezone = bunnynet_storage_zone.site.id
  }

  routing {
    tier  = "Standard"
    zones = ["AF", "ASIA", "EU", "SA", "US"]
  }

  limit_bandwidth = 40 * 1000000000
}

resource "bunnynet_pullzone_hostname" "domain" {
  pullzone    = bunnynet_pullzone.site.id
  name        = var.domain
  tls_enabled = true
  force_ssl   = true
}
```
Bonus shenanigans
Bunny makes it easy to block requests from certain countries, but it is much more fun to redirect them. This redirect rule will rick-roll the Swedes, for example:
```terraform
resource "bunnynet_pullzone_edgerule" "redirect" {
  enabled     = true
  pullzone    = bunnynet_pullzone.site.id
  description = "Redirect"

  actions = [
    {
      type       = "Redirect"
      parameter1 = "https://www.youtube.com/watch?v=dQw4w9WgXcQ"
      parameter2 = "301"
      parameter3 = null
    }
  ]

  match_type = "MatchAny"

  triggers = [
    {
      type       = "CountryCode"
      match_type = "MatchAny"
      patterns   = ["SE"]
      parameter1 = null
      parameter2 = null
    }
  ]
}
```
GitHub Actions & Terraform
Before we can start deploying to Bunny we need to prepare our GitHub Actions environment. We need a secret and a couple of variables:
- Secret: BUNNY_KEY, your bunny.net API token
- Variable: FQDN, fully qualified domain name for your site
- Variable: STATE_SZ_ID, the ID of your state storage zone (more on this later)
As I want this to be as self-contained as possible, I chose to store the Terraform state in a Bunny storage zone and simply handle upload/download as part of the workflow. You will have to create this storage zone manually; you can leave all settings at their defaults. You can find the ID of the storage zone in the URL of its administration page.
Handling the Terraform state like this is not ideal, but it does the job reliably enough for my single-user, low-deployment-frequency use case. In most other scenarios you should use a proper Terraform backend instead. Here are the parts of the workflow that handle the state file:
```yaml
- name: Fetch credentials
  run: |
    curl --header 'accept: application/json' \
         --header 'AccessKey: ${{ secrets.BUNNY_KEY }}' \
         https://api.bunny.net/storagezone/${{ vars.STATE_SZ_ID }} \
      | tee >( echo "STATE_SZ_PASSWORD=$(jq -r .Password)" >> $GITHUB_ENV ) \
      | tee >( echo "STATE_SZ_HOSTNAME=$(jq -r .StorageHostname)" >> $GITHUB_ENV ) \
      | echo "STATE_SZ_NAME=$(jq -r .Name)" >> $GITHUB_ENV

- name: Fetch state
  continue-on-error: true
  run: curl -u $STATE_SZ_NAME:$STATE_SZ_PASSWORD ftp://$STATE_SZ_HOSTNAME/terraform.tfstate -o terraform.tfstate
```

~~~ SNIP ~~~

```yaml
- name: Upload state
  run: curl -u $STATE_SZ_NAME:$STATE_SZ_PASSWORD ftp://$STATE_SZ_HOSTNAME/ -T terraform.tfstate
```
Another less-than-ideal aspect is that the state file is not encrypted at rest. This is, however, quite easily fixed with some time and motivation.
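One way this could be done, as a minimal sketch: add an extra repository secret (here called STATE_KEY, a name I made up) and wrap the state upload/download steps with symmetric encryption via openssl.

```shell
# Hypothetical: in the workflow, STATE_KEY would come from a GitHub Actions secret
export STATE_KEY='correct-horse-battery-staple'

# Stand-in state file for demonstration
echo '{"version": 4}' > terraform.tfstate

# Encrypt before uploading to the storage zone...
openssl enc -aes-256-cbc -pbkdf2 -pass env:STATE_KEY \
  -in terraform.tfstate -out terraform.tfstate.enc

# ...and decrypt after downloading it again
openssl enc -d -aes-256-cbc -pbkdf2 -pass env:STATE_KEY \
  -in terraform.tfstate.enc -out roundtrip.tfstate

# Verify the round trip
cmp -s terraform.tfstate roundtrip.tfstate && echo "round trip OK"
```

In the workflow you would run the encrypt step just before “Upload state” and the decrypt step just after “Fetch state”, uploading only the .enc file.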
I build and upload the site in a separate job running a Docker image by hugomods. This works flawlessly, but it requires fetching some Terraform outputs and passing them between the jobs. Again, these should ideally be encrypted or passed through a secret store, but for my use case this is good enough.
outputs.tf:
```terraform
output "bunny_sz_name" {
  value = bunnynet_storage_zone.site.name
}

output "bunny_sz_password" {
  value     = bunnynet_storage_zone.site.password
  sensitive = true
}

output "bunny_sz_hostname" {
  value = bunnynet_storage_zone.site.hostname
}
```
Relevant part of the workflow file:
```yaml
- name: Get terraform outputs
  id: get-outvars
  run: |
    for var in "bunny_sz_name" "bunny_sz_hostname" "bunny_sz_password"
    do echo "${var}=$(terraform output -raw $var)" >> $GITHUB_OUTPUT
    done
```

~~~ SNIP ~~~

```yaml
deploy-site:
  name: 🎨 Deploy site
  runs-on: ubuntu-latest
  needs: deploy-infra
  container:
    image: hugomods/hugo:go-git
  steps:
    - name: Check out code
      uses: actions/checkout@v4
    - name: Build site
      run: hugo build
    - name: Upload bundle
      uses: SamKirkland/FTP-Deploy-Action@v4.3.5
      with:
        local-dir: ./public/
        server-dir: /
        server: ${{ needs.deploy-infra.outputs.bunny-sz-hostname }}
        username: ${{ needs.deploy-infra.outputs.bunny-sz-name }}
        password: ${{ needs.deploy-infra.outputs.bunny-sz-password }}
```
Summary
I was completely green to bunny.net when I started this and have found it an absolute joy to work with. It performs well, has enough nerd-knobs to keep me happy, and the tooling works great. It will be a new mainstay in my projects going forward. Now I just need to figure out a good use case for the Bunny AI feature…
The code/configuration in this post, along with instructions, can be found in the following repo: github.com/torbbang/hugo-on-bunny-net
UPDATE: Added automated cache purges after site upload
Got feedback or a question?
Feel free to contact me at hello@torbjorn.dev