Learnings from fixing three security issues

I recently audited a website I’ve been working on and found three old security issues that had been sitting there for years. The fixes were simple once I knew where to look.

Here is what I found and how I fixed each one.

1. Clickjacking via missing security headers

The site was missing X-Frame-Options and a proper Content-Security-Policy with frame-ancestors set.

Clickjacking is when an attacker loads your site in a hidden iframe and tricks a user into clicking your real page while they think they are clicking something else.

Example: a user thinks they are clicking a harmless “Play video” button on a shady page, but the click lands on your site’s hidden “Authorize app” or “Confirm payment” button underneath.

The fix is a header:

X-Frame-Options: SAMEORIGIN
Content-Security-Policy: frame-ancestors 'self'

The modern approach is frame-ancestors in CSP. X-Frame-Options is the legacy equivalent, and you want both for older browser coverage. SAMEORIGIN allows the page to be framed only by pages on the same origin, blocking third-party framing entirely.

This went in as a code change: added to the server’s response headers, committed, deployed.
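A quick way to verify the change after a deploy is to inspect the live response headers. Here is a sketch of the check with the header dump simulated; in production you would pipe the output of curl -sI against the real site instead:

```shell
# Simulated output of: curl -sI https://example.com (placeholder domain)
headers='HTTP/2 200
content-type: text/html
x-frame-options: SAMEORIGIN
content-security-policy: frame-ancestors '\''self'\'''

# Both anti-clickjacking headers should appear in the response
printf '%s\n' "$headers" | grep -iE '^(x-frame-options|content-security-policy)'
```

If grep prints nothing (and exits non-zero), the headers did not make it to that route, which is worth checking per route rather than once per site.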

What I learned

Security work is never finished, because the software changes with every update. A lot of it is finding the things that have been quietly wrong for a long time, ideally before anyone exploits them.

2. Path traversal on a Ghost blog

This one was more interesting.

The blog ran on Ghost, self-hosted on a VPS, and the subdomain had a path traversal vulnerability where a crafted URL like:

/assets/built%2F..%2F..%2Fpackage.json

…returned the actual package.json from the Ghost installation. That file exposes the exact Ghost version and its dependency list - enough to fingerprint the install and target known vulnerabilities.

The underlying issue: Ghost serves its /assets/ directory, and with URL encoding (%2F for /), you could traverse out of that directory to the application root.
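The mechanics are easy to reproduce in a sandbox. In the sketch below the directory layout is a stand-in for the Ghost install: once the server decodes %2F to a real slash, a naive path join resolves outside the assets directory, and the fix is to canonicalize first and then compare against the allowed root.

```shell
# Stand-in for the app layout: an app root with an assets/built subdirectory
root=$(mktemp -d)
mkdir -p "$root/assets/built"
echo '{"name":"ghost-stand-in"}' > "$root/package.json"

# What the request path looks like after the server decodes %2F to /
decoded='built/../../package.json'

# Naive join resolves outside assets/, all the way up to the app root
resolved=$(realpath "$root/assets/$decoded")
echo "$resolved"

# Safe check: only serve paths that stay under assets/ after resolution
case "$resolved" in
  "$root/assets/"*) echo "serve" ;;
  *)                echo "blocked" ;;
esac
```

The case statement is the shell version of what the application should do internally: resolve the path fully, then reject anything that escapes the allowed prefix.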

There were two layers to fix:

Immediate containment: Cloudflare WAF rule

The subdomain was behind Cloudflare, but the proxy was disabled (grey cloud / DNS-only), which meant Cloudflare’s WAF was not in the path at all.

First step: enable the proxy. Then add a firewall rule:

(http.host eq "blog.example.com" and (
  lower(http.request.uri) contains "%2e%2e" or
  lower(http.request.uri) contains "../" or
  lower(http.request.uri) contains "package.json" or
  lower(http.request.uri) contains ".env" or
  lower(http.request.uri) contains ".git"
))
→ Block

Tested with curl immediately after:

curl -i https://blog.example.com/assets/built%2F..%2F..%2Fpackage.json
# HTTP 403

Permanent fix: close the origin bypass

With the proxy enabled, the origin IP was now hidden behind Cloudflare. But that IP had been DNS-only and publicly resolvable for years, which means it was almost certainly in historical DNS records (SecurityTrails, etc.). An attacker could still hit it directly and bypass the WAF.

The clean solution: restrict the VPS to only accept HTTP/HTTPS traffic from Cloudflare’s IP ranges.

This is a ufw rule set:

# Allow SSH first - safety net
sudo ufw allow 22/tcp

# Allow Cloudflare IPs on 80/443
# (if the host also has an IPv6 address, repeat with https://www.cloudflare.com/ips-v6)
for ip in $(curl -s https://www.cloudflare.com/ips-v4); do
  sudo ufw allow from "$ip" to any port 80 proto tcp
  sudo ufw allow from "$ip" to any port 443 proto tcp
done

sudo ufw deny 80/tcp
sudo ufw deny 443/tcp
sudo ufw --force enable

Now the origin is unreachable from anything that is not Cloudflare. The WAF rule is the edge layer; the firewall is the origin layer.
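To confirm the lockdown, hit the origin IP directly from somewhere outside Cloudflare and make sure the connection no longer succeeds. A sketch; 203.0.113.10 is a documentation-range placeholder for the real origin address:

```shell
# Try the origin directly, bypassing Cloudflare; expect a timeout or refusal
origin=203.0.113.10   # placeholder - substitute the VPS address
if curl -m 5 -so /dev/null "https://$origin/"; then
  echo "origin still reachable - firewall not working"
else
  echo "origin blocked"
fi
```

The same request through the proxied hostname should still succeed, which proves traffic is flowing edge-first.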

What I learned

The grey cloud vs orange cloud distinction on a subdomain is easy to overlook. Most people set up Cloudflare on the root domain and assume subdomains are covered. They are not unless the DNS record is proxied.

A subdomain sitting in DNS-only mode is essentially unprotected, regardless of what you have configured in Cloudflare.

3. DMARC at p=none - and a duplicate SPF problem hiding underneath

DMARC was configured, but it was in monitoring mode (p=none), which collects aggregate reports and enforces nothing. The plan was probably to tighten it later. Nobody came back to do that.

Before moving to p=quarantine, I audited the SPF records and found two of them on the root domain (see my other post on this). Enforcing DMARC with a broken SPF configuration would have quarantined legitimate outbound mail.

So the order of operations matters:

  1. Fix SPF first - merge the two records into one.
  2. Move DMARC to p=quarantine; pct=10 - 10% of failing messages quarantined while you monitor.
  3. Ramp up over 2–4 weeks - pct=100, then eventually p=reject.

The pct tag is underused. It lets you do a staged rollout: enforce on a fraction of failing mail first, watch aggregate reports for false positives, then increase.

Going straight to p=reject on a domain with any sending complexity is how you accidentally break your transactional email.

v=DMARC1; p=quarantine; pct=10; rua=mailto:reports@yourdmarc.provider
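The record can be sanity-checked from the shell at each ramp step. In production you would fetch it with dig +short TXT _dmarc.yourdomain.com; here it is inlined as a stand-in:

```shell
# Stand-in for: dig +short TXT _dmarc.example.com
record='v=DMARC1; p=quarantine; pct=10; rua=mailto:reports@example.com'

# Split the record on semicolons and pull out policy and rollout percentage
policy=$(printf '%s' "$record" | tr -d ' ' | tr ';' '\n' | awk -F= '$1=="p"{print $2}')
pct=$(printf '%s' "$record" | tr -d ' ' | tr ';' '\n' | awk -F= '$1=="pct"{print $2}')

echo "policy=$policy pct=$pct"   # -> policy=quarantine pct=10
```

A check like this is easy to drop into a cron job or CI step so the record does not silently drift, or silently stay at p=none for years.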

What I learned

p=none is better than no DMARC, but barely. I have seen domains sit at p=none for three or four years because no one feels the urgency to tighten it.

The urgency is that p=none tells receivers to deliver mail that fails authentication as usual, so spoofing your domain - sending email that appears to come from it - meets no DMARC resistance at all.

p=reject is the destination. p=quarantine; pct=10 is a safe way to start the journey.

None of these were complicated: DMARC added but never enforced, Cloudflare configured but the proxy left off on a subdomain, and security headers simply never set.

All three turned up in a single audit pass. Issues like these are fairly easy to catch - as long as somebody finishes the job.