Varnish is an HTTP accelerator that sits in front of web servers as a reverse proxy. It stores responses in RAM and serves them to visitors at very high speed.
Why Varnish?
Advantages
- Extremely fast (served from RAM)
- Drastically reduces backend load
- Flexible configuration (VCL)
- Highly scalable
- Edge Side Includes (ESI)

Typical Results
Without Varnish: 200-500 ms per request
With Varnish: 1-10 ms per request (cache hit)
Cache hit rate: 80-95% achievable

Installation
Debian/Ubuntu
# Add the repository
apt install debian-archive-keyring curl gnupg apt-transport-https
curl -s https://packagecloud.io/install/repositories/varnishcache/varnish75/script.deb.sh | bash
# Install Varnish
apt install varnish
# Check the version
varnishd -V

CentOS/RHEL
curl -s https://packagecloud.io/install/repositories/varnishcache/varnish75/script.rpm.sh | bash
dnf install varnish

Basic Configuration
Varnish Service
# /etc/systemd/system/varnish.service.d/override.conf
[Service]
ExecStart=
ExecStart=/usr/sbin/varnishd \
    -a :80 \
    -a localhost:8443,PROXY \
    -p feature=+http2 \
    -f /etc/varnish/default.vcl \
    -s malloc,256m

systemctl daemon-reload
systemctl restart varnish

Configure the Backend
# /etc/varnish/default.vcl
vcl 4.1;
backend default {
    .host = "127.0.0.1";
    .port = "8080";
    .connect_timeout = 5s;
    .first_byte_timeout = 60s;
    .between_bytes_timeout = 60s;
}

Nginx as Backend
# /etc/nginx/sites-available/default
server {
    listen 8080;
    server_name example.com www.example.com;
    root /var/www/html;
    index index.php index.html;

    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_pass unix:/var/run/php/php8.2-fpm.sock;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    }
}

VCL Basics
Request Flow
Client → vcl_recv → vcl_hash → [cache lookup]
                                    ↓
[HIT]  → vcl_hit  → vcl_deliver → Client
[MISS] → vcl_miss → vcl_backend_fetch → Backend
                  → vcl_backend_response
                  → vcl_deliver → Client

Basic VCL
# /etc/varnish/default.vcl
vcl 4.1;
import std;
backend default {
    .host = "127.0.0.1";
    .port = "8080";
}
# Incoming requests
sub vcl_recv {
    # Never cache the admin area
    if (req.url ~ "^/admin" || req.url ~ "^/wp-admin") {
        return (pass);
    }
    # Never cache POST requests
    if (req.method == "POST") {
        return (pass);
    }
    # Strip cookies from static files
    if (req.url ~ "\.(css|js|png|jpg|jpeg|gif|ico|svg|woff2?)$") {
        unset req.http.Cookie;
        return (hash);
    }
    # Never cache logged-in users
    if (req.http.Cookie ~ "wordpress_logged_in" ||
        req.http.Cookie ~ "PHPSESSID") {
        return (pass);
    }
    return (hash);
}
# Process the backend response
sub vcl_backend_response {
    # Cache static files for a long time
    if (bereq.url ~ "\.(css|js|png|jpg|jpeg|gif|ico|svg|woff2?)$") {
        set beresp.ttl = 7d;
        unset beresp.http.Set-Cookie;
    }
    # Cache HTML pages briefly
    if (beresp.http.Content-Type ~ "text/html") {
        set beresp.ttl = 10m;
    }
    # No caching when the backend sends Set-Cookie
    if (beresp.http.Set-Cookie) {
        set beresp.uncacheable = true;
        return (deliver);
    }
    return (deliver);
}
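Note that the TTL branches above apply unconditionally, overriding whatever TTL Varnish derived from the backend's Cache-Control headers. If an explicit backend Cache-Control should win, the HTML branch can be guarded; a hedged sketch for the vcl_backend_response above:

```vcl
# Only force a 10-minute TTL when the backend sent no Cache-Control at all
if (!beresp.http.Cache-Control && beresp.http.Content-Type ~ "text/html") {
    set beresp.ttl = 10m;
}
```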
# Response to the client
sub vcl_deliver {
    # Add debug headers
    if (obj.hits > 0) {
        set resp.http.X-Cache = "HIT";
        set resp.http.X-Cache-Hits = obj.hits;
    } else {
        set resp.http.X-Cache = "MISS";
    }
    return (deliver);
}

WordPress Configuration
WordPress VCL
vcl 4.1;
backend default {
    .host = "127.0.0.1";
    .port = "8080";
}
acl purge {
    "localhost";
    "127.0.0.1";
}
sub vcl_recv {
    # Allow purging
    if (req.method == "PURGE") {
        if (!client.ip ~ purge) {
            return (synth(405, "Not allowed"));
        }
        return (purge);
    }
    # Never cache admin, login, or cron
    if (req.url ~ "wp-(admin|login|cron)" ||
        req.url ~ "preview=true" ||
        req.url ~ "xmlrpc.php") {
        return (pass);
    }
    # Never cache anything but GET/HEAD
    if (req.method != "GET" && req.method != "HEAD") {
        return (pass);
    }
    # Check WordPress cookies
    if (req.http.Cookie ~ "wordpress_logged_in|wp-postpass|comment_author") {
        return (pass);
    }
    # Strip cookies for anonymous visitors
    set req.http.Cookie = regsuball(req.http.Cookie, "has_js=[^;]+(; )?", "");
    set req.http.Cookie = regsuball(req.http.Cookie, "__utm.=[^;]+(; )?", "");
    set req.http.Cookie = regsuball(req.http.Cookie, "_ga=[^;]+(; )?", "");
    set req.http.Cookie = regsuball(req.http.Cookie, "_gid=[^;]+(; )?", "");
    if (req.http.Cookie == "") {
        unset req.http.Cookie;
    }
    return (hash);
}
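Cookie cleanup alone may not be enough: campaign URLs (utm_source and friends) make otherwise identical pages hash to different cache objects. A hedged fragment that could be added inside the vcl_recv above, before the final return (hash); the parameter list is illustrative:

```vcl
# Strip common tracking parameters so identical pages share one cache object
if (req.url ~ "(\?|&)(utm_[a-z]+|fbclid|gclid)=") {
    set req.url = regsuball(req.url, "(utm_[a-z]+|fbclid|gclid)=[^&]*&?", "");
    set req.url = regsub(req.url, "[?&]$", "");
}
```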
sub vcl_backend_response {
    # No caching for admin pages
    if (bereq.url ~ "wp-(admin|login)") {
        set beresp.uncacheable = true;
        set beresp.ttl = 0s;
        return (deliver);
    }
    # Static files
    if (bereq.url ~ "\.(css|js|png|jpg|jpeg|gif|ico|svg|woff2?|ttf|eot)$") {
        set beresp.ttl = 30d;
        unset beresp.http.Set-Cookie;
    }
    # HTML pages
    if (beresp.http.Content-Type ~ "text/html") {
        set beresp.ttl = 1h;
        set beresp.grace = 24h;
    }
    return (deliver);
}
sub vcl_deliver {
    unset resp.http.X-Powered-By;
    unset resp.http.Server;
    if (obj.hits > 0) {
        set resp.http.X-Cache = "HIT";
    } else {
        set resp.http.X-Cache = "MISS";
    }
}

WordPress Plugins
Recommended plugins for Varnish integration:
- Proxy Cache Purge
- Varnish HTTP Purge
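These plugins send PURGE requests from the web server itself, so the purge ACL in the VCL above must cover every address PHP connects from; with a separate application server, its IP belongs in the list too. A sketch (the non-local address is illustrative):

```vcl
acl purge {
    "localhost";
    "127.0.0.1";
    "::1";
    "192.168.1.10";  # illustrative: a separate application server
}
```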
SSL with Hitch/Nginx
Hitch (SSL Termination)
apt install hitch

# /etc/hitch/hitch.conf
frontend = "[*]:443"
backend = "[127.0.0.1]:8443"
pem-file = "/etc/letsencrypt/live/example.com/combined.pem"
ciphers = "ECDHE-RSA-AES128-GCM-SHA256:..."
alpn-protos = "h2, http/1.1"

Creating the PEM File
cat /etc/letsencrypt/live/example.com/fullchain.pem \
/etc/letsencrypt/live/example.com/privkey.pem > \
/etc/letsencrypt/live/example.com/combined.pem

Nginx as SSL Proxy (Alternative)
# /etc/nginx/sites-available/ssl-proxy
server {
    listen 443 ssl http2;
    server_name example.com;
    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

    location / {
        proxy_pass http://127.0.0.1:80;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
    }
}

Cache Invalidation
PURGE Request
curl -X PURGE http://example.com/page-to-purge

Ban (Pattern-Based)
# Ban all pages
varnishadm "ban req.url ~ ."
# Ban a specific pattern
varnishadm "ban req.url ~ ^/blog"
# Ban by host
varnishadm "ban req.http.host == example.com"

In VCL
sub vcl_recv {
    if (req.method == "BAN") {
        if (!client.ip ~ purge) {
            return (synth(405, "Not allowed"));
        }
        ban("req.http.host == " + req.http.host + " && req.url ~ " + req.url);
        return (synth(200, "Banned"));
    }
}

Monitoring
varnishstat
# Live statistics
varnishstat
# Key metrics
varnishstat -1 | grep -E "cache_hit|cache_miss|client_req"

varnishlog
# Log all requests
varnishlog
# Cache misses only
varnishlog -q "VCL_call eq MISS"
# A specific URL
varnishlog -q 'ReqURL ~ "/api/"'

varnishhist
# Response-time histogram
varnishhist

Prometheus Export
# Install varnish_exporter
wget https://github.com/jonnenauha/prometheus_varnish_exporter/releases/download/1.6.1/prometheus_varnish_exporter-1.6.1.linux-amd64.tar.gz
tar -xzf prometheus_varnish_exporter-1.6.1.linux-amd64.tar.gz
./prometheus_varnish_exporter

Grace Mode
Serving Stale Content
sub vcl_backend_response {
    # Keep stale content for 24 hours
    set beresp.grace = 24h;
    # Bridge backend errors
    if (beresp.status >= 500) {
        if (bereq.is_bgfetch) {
            return (abandon);
        }
        set beresp.uncacheable = true;
        set beresp.ttl = 30s;
    }
}
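With beresp.grace set as above, any object may be served up to 24 hours past its TTL, even while the backend is perfectly healthy. A common refinement is to cap staleness on the request side and fall back to the long grace window only when the backend is down. A hedged sketch using the std VMOD's std.healthy():

```vcl
import std;

sub vcl_recv {
    if (std.healthy(req.backend_hint)) {
        # Backend is up: accept at most 10s of staleness
        set req.grace = 10s;
    }
    # Backend down: req.grace stays unlimited, so the full beresp.grace applies
}
```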
sub vcl_hit {
    # Use grace mode when the backend is slow or down
    if (obj.ttl >= 0s) {
        return (deliver);
    }
    if (obj.ttl + obj.grace > 0s) {
        return (deliver);
    }
    return (restart);
}

Health Checks
Backend Probe
backend default {
    .host = "127.0.0.1";
    .port = "8080";
    .probe = {
        .url = "/health";
        .timeout = 2s;
        .interval = 5s;
        .window = 5;
        .threshold = 3;
    }
}

Multiple Backends
import directors;
backend web1 {
    .host = "192.168.1.10";
    .port = "80";
    .probe = { .url = "/health"; }
}
backend web2 {
    .host = "192.168.1.11";
    .port = "80";
    .probe = { .url = "/health"; }
}
sub vcl_init {
    new cluster = directors.round_robin();
    cluster.add_backend(web1);
    cluster.add_backend(web2);
}
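Round robin spreads load evenly across both backends. When web2 should act purely as a standby, the directors VMOD also offers a fallback director; a hedged alternative to the vcl_init above:

```vcl
sub vcl_init {
    # Always prefer web1; web2 is used only while web1's probe reports it sick
    new failover = directors.fallback();
    failover.add_backend(web1);
    failover.add_backend(web2);
}
```

vcl_recv would then call failover.backend() instead of cluster.backend().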
sub vcl_recv {
    set req.backend_hint = cluster.backend();
}

ESI (Edge Side Includes)
Enabling ESI
sub vcl_backend_response {
    if (beresp.http.Content-Type ~ "text/html") {
        set beresp.do_esi = true;
    }
}

HTML with ESI
<html>
  <body>
    <!-- Static content (cached) -->
    <header>...</header>
    <!-- Dynamic content -->
    <esi:include src="/user-menu.php" />
    <main>...</main>
    <!-- Another dynamic part -->
    <esi:include src="/shopping-cart.php" />
  </body>
</html>

Troubleshooting
Cache Not Being Used
# Check the headers
curl -I http://example.com
# X-Cache header
X-Cache: MISS   # not cached
X-Cache: HIT    # served from the cache
# Why wasn't it cached?
varnishlog -q "VCL_call eq PASS"

Backend Errors
# Backend status
varnishadm backend.list
# Check the logs
varnishlog -q "BerespStatus >= 500"

Memory Full
# Increase cache memory
# /etc/systemd/system/varnish.service.d/override.conf
-s malloc,1G   # 1 GB

Summary
| Tool | Function |
|------|----------|
| varnishstat | Live statistics |
| varnishlog | Request logs |
| varnishadm | Administration |
| varnishhist | Latency histogram |
| VCL Function | Use |
|--------------|-----|
| vcl_recv | Incoming requests |
| vcl_backend_response | Backend responses |
| vcl_deliver | Delivery to the client |
| vcl_hit/vcl_miss | Cache hit/miss |
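The hit rate worth monitoring is simply cache_hit / (cache_hit + cache_miss), taken from varnishstat. A minimal sketch; the two sample lines are canned stand-ins for real varnishstat output, with invented numbers:

```shell
# Canned stand-in for: varnishstat -1 -f MAIN.cache_hit -f MAIN.cache_miss
sample='MAIN.cache_hit       9000    0.00 Cache hits
MAIN.cache_miss      1000    0.00 Cache misses'

# Pick the counter values (field 2) and compute the percentage
rate=$(printf '%s\n' "$sample" | awk '
    /MAIN\.cache_hit /  { hit  = $2 }
    /MAIN\.cache_miss / { miss = $2 }
    END { printf "hit rate: %.1f%%", hit / (hit + miss) * 100 }')
echo "$rate"   # hit rate: 90.0%
```

Piping live varnishstat output into the same awk expression gives the current hit rate of a running instance.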
Conclusion
Varnish Cache can improve website performance dramatically. VCL configuration takes some learning, but it offers maximum flexibility. Proven configurations exist for WordPress and other CMSs. Combine Varnish with an SSL terminator such as Nginx or Hitch for HTTPS. Monitor the cache hit rate and tune the VCL accordingly.