Quickstart guide
Prerequisites
We assume that you're already familiar with the core concepts and that you have followed the integration instructions for your environment.
Going further
To demonstrate the use of BunkerWeb, we will deploy a dummy "Hello World" web application as an example. See the examples folder of the repository to get real-world examples.
Protect HTTP applications
Protecting existing web applications already accessible over HTTP(S) is the main goal of BunkerWeb : it acts as a classic reverse proxy with extra security features.
The following settings can be used :
- USE_REVERSE_PROXY : enable/disable reverse proxy mode
- REVERSE_PROXY_URL : the public path prefix
- REVERSE_PROXY_HOST : (internal) address of the proxied web application
You will find more settings about reverse proxy in the settings section of the documentation.
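For instance, here is a minimal sketch of these settings expressed as environment variables, reusing the example values from this guide (www.example.com as the server name and myapp as the proxied container) :
SERVER_NAME=www.example.com
USE_REVERSE_PROXY=yes
REVERSE_PROXY_URL=/
REVERSE_PROXY_HOST=http://myapp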
Single application
When using Docker integration, the easiest way of protecting an existing application is to create a network so BunkerWeb can send requests using the container name.
Create the Docker network if it's not already created :
docker network create bw-net
Then, instantiate your app :
docker run -d \
--name myapp \
--network bw-net \
nginxdemos/hello:plain-text
Create the BunkerWeb volume if it's not already created :
docker volume create bw-data
You can now run BunkerWeb and configure it for your app :
docker run -d \
--name mybunker \
--network bw-net \
-p 80:8080 \
-p 443:8443 \
-v bw-data:/data \
-e SERVER_NAME=www.example.com \
-e USE_REVERSE_PROXY=yes \
-e REVERSE_PROXY_URL=/ \
-e REVERSE_PROXY_HOST=http://myapp \
bunkerity/bunkerweb:1.4.8
Here is the docker-compose equivalent :
version: '3'
services:
mybunker:
image: bunkerity/bunkerweb:1.4.8
ports:
- 80:8080
- 443:8443
volumes:
- bw-data:/data
environment:
- USE_REVERSE_PROXY=yes
- REVERSE_PROXY_URL=/
- REVERSE_PROXY_HOST=http://myapp
networks:
- bw-net
myapp:
image: nginxdemos/hello:plain-text
networks:
- bw-net
volumes:
bw-data:
networks:
bw-net:
name: bw-net
We will assume that you already have the Docker autoconf integration stack running on your machine and connected to a network called bw-services.
You can instantiate your container and pass the settings as labels :
docker run -d \
--name myapp \
--network bw-services \
-l bunkerweb.SERVER_NAME=www.example.com \
-l bunkerweb.USE_REVERSE_PROXY=yes \
-l bunkerweb.REVERSE_PROXY_URL=/ \
-l bunkerweb.REVERSE_PROXY_HOST=http://myapp \
nginxdemos/hello:plain-text
Here is the docker-compose equivalent :
version: '3'
services:
myapp:
image: nginxdemos/hello:plain-text
networks:
bw-services:
aliases:
- myapp
labels:
- "bunkerweb.SERVER_NAME=www.example.com"
- "bunkerweb.USE_REVERSE_PROXY=yes"
- "bunkerweb.REVERSE_PROXY_URL=/"
- "bunkerweb.REVERSE_PROXY_HOST=http://myapp"
networks:
bw-services:
external:
name: bw-services
We will assume that you already have the Swarm integration stack running on your cluster.
You can instantiate your service and pass the settings as labels :
docker service \
create \
--name myapp \
--network bw-services \
-l bunkerweb.SERVER_NAME=www.example.com \
-l bunkerweb.USE_REVERSE_PROXY=yes \
-l bunkerweb.REVERSE_PROXY_HOST=http://myapp \
-l bunkerweb.REVERSE_PROXY_URL=/ \
nginxdemos/hello:plain-text
Here is the docker-compose equivalent (using docker stack deploy) :
version: "3"
services:
myapp:
image: nginxdemos/hello:plain-text
networks:
bw-services:
aliases:
- myapp
deploy:
placement:
constraints:
- "node.role==worker"
labels:
- "bunkerweb.SERVER_NAME=www.example.com"
- "bunkerweb.USE_REVERSE_PROXY=yes"
- "bunkerweb.REVERSE_PROXY_URL=/"
- "bunkerweb.REVERSE_PROXY_HOST=http://myapp"
networks:
bw-services:
external:
name: bw-services
We will assume that you already have the Kubernetes integration stack running on your cluster.
Let's assume that you have a typical Deployment with a Service to access the web application from within the cluster :
apiVersion: apps/v1
kind: Deployment
metadata:
name: app
labels:
app: app
spec:
replicas: 1
selector:
matchLabels:
app: app
template:
metadata:
labels:
app: app
spec:
containers:
- name: app
image: nginxdemos/hello:plain-text
ports:
- containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
name: svc-app
spec:
selector:
app: app
ports:
- protocol: TCP
port: 80
targetPort: 80
Here is the corresponding Ingress definition to serve and protect the web application :
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: ingress
annotations:
bunkerweb.io/AUTOCONF: "yes"
spec:
rules:
- host: www.example.com
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: svc-app
port:
number: 80
We will assume that you already have the Linux integration stack running on your machine.
The following command will run a basic HTTP server on the port 8000 and deliver the files in the current directory :
python3 -m http.server -b 127.0.0.1
Configuration of BunkerWeb is done by editing the /opt/bunkerweb/variables.env file :
SERVER_NAME=www.example.com
HTTP_PORT=80
HTTPS_PORT=443
DNS_RESOLVERS=8.8.8.8 8.8.4.4
USE_REVERSE_PROXY=yes
REVERSE_PROXY_URL=/
REVERSE_PROXY_HOST=http://127.0.0.1:8000
Let's check the status of BunkerWeb :
systemctl status bunkerweb
If it's already running, we can restart it :
systemctl restart bunkerweb
Otherwise, we will need to start it :
systemctl start bunkerweb
We will assume that you already have a service running and you want to use BunkerWeb as a reverse-proxy.
The following command will run a basic HTTP server on the port 8000 and deliver the files in the current directory :
python3 -m http.server -b 127.0.0.1
Content of the my_variables.env configuration file :
SERVER_NAME=www.example.com
HTTP_PORT=80
HTTPS_PORT=443
DNS_RESOLVERS=8.8.8.8 8.8.4.4
USE_REVERSE_PROXY=yes
REVERSE_PROXY_URL=/
REVERSE_PROXY_HOST=http://127.0.0.1:8000
In your Ansible inventory, you can use the variables_env variable to set the path of the configuration file :
[mybunkers]
192.168.0.42 variables_env="{{ playbook_dir }}/my_variables.env"
Or alternatively, in your playbook file :
- hosts: all
become: true
vars:
- variables_env: "{{ playbook_dir }}/my_variables.env"
roles:
- bunkerity.bunkerweb
You can now run the playbook :
ansible-playbook -i inventory.yml playbook.yml
Multiple applications
Testing
To perform quick tests when multisite mode is enabled (and if you don't have the proper DNS entries set up for the domains) you can use curl with the HTTP Host header of your choice :
curl -H "Host: app1.example.com" http://ip-or-fqdn-of-server
If you are using HTTPS, you will need to play with SNI :
curl -H "Host: app1.example.com" --resolve example.com:443:ip-of-server https://example.com
When using Docker integration, the easiest way of protecting multiple existing applications is to create a network so BunkerWeb can send requests using the container names.
Create the Docker network if it's not already created :
docker network create bw-net
Then instantiate your apps :
docker run -d \
--name myapp1 \
--network bw-net \
nginxdemos/hello:plain-text
docker run -d \
--name myapp2 \
--network bw-net \
nginxdemos/hello:plain-text
docker run -d \
--name myapp3 \
--network bw-net \
nginxdemos/hello:plain-text
Create the BunkerWeb volume if it's not already created :
docker volume create bw-data
You can now run BunkerWeb and configure it for your apps :
docker run -d \
--name mybunker \
--network bw-net \
-p 80:8080 \
-p 443:8443 \
-v bw-data:/data \
-e MULTISITE=yes \
-e "SERVER_NAME=app1.example.com app2.example.com app3.example.com" \
-e USE_REVERSE_PROXY=yes \
-e REVERSE_PROXY_URL=/ \
-e app1.example.com_REVERSE_PROXY_HOST=http://myapp1 \
-e app2.example.com_REVERSE_PROXY_HOST=http://myapp2 \
-e app3.example.com_REVERSE_PROXY_HOST=http://myapp3 \
bunkerity/bunkerweb:1.4.8
Here is the docker-compose equivalent :
version: '3'
services:
mybunker:
image: bunkerity/bunkerweb:1.4.8
ports:
- 80:8080
- 443:8443
volumes:
- bw-data:/data
environment:
- MULTISITE=yes
- SERVER_NAME=app1.example.com app2.example.com app3.example.com
- USE_REVERSE_PROXY=yes
- REVERSE_PROXY_URL=/
- app1.example.com_REVERSE_PROXY_HOST=http://myapp1
- app2.example.com_REVERSE_PROXY_HOST=http://myapp2
- app3.example.com_REVERSE_PROXY_HOST=http://myapp3
networks:
- bw-net
myapp1:
image: nginxdemos/hello:plain-text
networks:
- bw-net
myapp2:
image: nginxdemos/hello:plain-text
networks:
- bw-net
myapp3:
image: nginxdemos/hello:plain-text
networks:
- bw-net
volumes:
bw-data:
networks:
bw-net:
name: bw-net
We will assume that you already have the Docker autoconf integration stack running on your machine and connected to a network called bw-services.
You can instantiate your containers and pass the settings as labels :
docker run -d \
--name myapp1 \
--network bw-services \
-l bunkerweb.SERVER_NAME=app1.example.com \
-l bunkerweb.USE_REVERSE_PROXY=yes \
-l bunkerweb.REVERSE_PROXY_URL=/ \
-l bunkerweb.REVERSE_PROXY_HOST=http://myapp1 \
nginxdemos/hello:plain-text
Here is the docker-compose equivalent :
version: '3'
services:
myapp1:
image: nginxdemos/hello:plain-text
networks:
bw-services:
aliases:
- myapp1
labels:
- "bunkerweb.SERVER_NAME=app1.example.com"
- "bunkerweb.USE_REVERSE_PROXY=yes"
- "bunkerweb.REVERSE_PROXY_URL=/"
- "bunkerweb.REVERSE_PROXY_HOST=http://myapp1"
networks:
bw-services:
external:
name: bw-services
docker run -d \
--name myapp2 \
--network bw-services \
-l bunkerweb.SERVER_NAME=app2.example.com \
-l bunkerweb.USE_REVERSE_PROXY=yes \
-l bunkerweb.REVERSE_PROXY_URL=/ \
-l bunkerweb.REVERSE_PROXY_HOST=http://myapp2 \
nginxdemos/hello:plain-text
Here is the docker-compose equivalent :
version: '3'
services:
myapp2:
image: nginxdemos/hello:plain-text
networks:
bw-services:
aliases:
- myapp2
labels:
- "bunkerweb.SERVER_NAME=app2.example.com"
- "bunkerweb.USE_REVERSE_PROXY=yes"
- "bunkerweb.REVERSE_PROXY_URL=/"
- "bunkerweb.REVERSE_PROXY_HOST=http://myapp2"
networks:
bw-services:
external:
name: bw-services
docker run -d \
--name myapp3 \
--network bw-services \
-l bunkerweb.SERVER_NAME=app3.example.com \
-l bunkerweb.USE_REVERSE_PROXY=yes \
-l bunkerweb.REVERSE_PROXY_URL=/ \
-l bunkerweb.REVERSE_PROXY_HOST=http://myapp3 \
nginxdemos/hello:plain-text
Here is the docker-compose equivalent :
version: '3'
services:
myapp3:
image: nginxdemos/hello:plain-text
networks:
bw-services:
aliases:
- myapp3
labels:
- "bunkerweb.SERVER_NAME=app3.example.com"
- "bunkerweb.USE_REVERSE_PROXY=yes"
- "bunkerweb.REVERSE_PROXY_URL=/"
- "bunkerweb.REVERSE_PROXY_HOST=http://myapp3"
networks:
bw-services:
external:
name: bw-services
We will assume that you already have the Swarm integration stack running on your cluster.
You can instantiate your services and pass the settings as labels :
docker service \
create \
--name myapp1 \
--network bw-services \
-l bunkerweb.SERVER_NAME=app1.example.com \
-l bunkerweb.USE_REVERSE_PROXY=yes \
-l bunkerweb.REVERSE_PROXY_HOST=http://myapp1 \
-l bunkerweb.REVERSE_PROXY_URL=/ \
nginxdemos/hello:plain-text
Here is the docker-compose equivalent (using docker stack deploy) :
version: "3"
services:
myapp1:
image: nginxdemos/hello:plain-text
networks:
bw-services:
aliases:
- myapp1
deploy:
placement:
constraints:
- "node.role==worker"
labels:
- "bunkerweb.SERVER_NAME=app1.example.com"
- "bunkerweb.USE_REVERSE_PROXY=yes"
- "bunkerweb.REVERSE_PROXY_URL=/"
- "bunkerweb.REVERSE_PROXY_HOST=http://myapp1"
networks:
bw-services:
external:
name: bw-services
docker service \
create \
--name myapp2 \
--network bw-services \
-l bunkerweb.SERVER_NAME=app2.example.com \
-l bunkerweb.USE_REVERSE_PROXY=yes \
-l bunkerweb.REVERSE_PROXY_HOST=http://myapp2 \
-l bunkerweb.REVERSE_PROXY_URL=/ \
nginxdemos/hello:plain-text
Here is the docker-compose equivalent (using docker stack deploy) :
version: "3"
services:
myapp2:
image: nginxdemos/hello:plain-text
networks:
bw-services:
aliases:
- myapp2
deploy:
placement:
constraints:
- "node.role==worker"
labels:
- "bunkerweb.SERVER_NAME=app2.example.com"
- "bunkerweb.USE_REVERSE_PROXY=yes"
- "bunkerweb.REVERSE_PROXY_URL=/"
- "bunkerweb.REVERSE_PROXY_HOST=http://myapp2"
networks:
bw-services:
external:
name: bw-services
docker service \
create \
--name myapp3 \
--network bw-services \
-l bunkerweb.SERVER_NAME=app3.example.com \
-l bunkerweb.USE_REVERSE_PROXY=yes \
-l bunkerweb.REVERSE_PROXY_HOST=http://myapp3 \
-l bunkerweb.REVERSE_PROXY_URL=/ \
nginxdemos/hello:plain-text
Here is the docker-compose equivalent (using docker stack deploy) :
version: "3"
services:
myapp3:
image: nginxdemos/hello:plain-text
networks:
bw-services:
aliases:
- myapp3
deploy:
placement:
constraints:
- "node.role==worker"
labels:
- "bunkerweb.SERVER_NAME=app3.example.com"
- "bunkerweb.USE_REVERSE_PROXY=yes"
- "bunkerweb.REVERSE_PROXY_URL=/"
- "bunkerweb.REVERSE_PROXY_HOST=http://myapp3"
networks:
bw-services:
external:
name: bw-services
We will assume that you already have the Kubernetes integration stack running on your cluster.
Let's also assume that you have some typical Deployments with Services to access the web applications from within the cluster :
apiVersion: apps/v1
kind: Deployment
metadata:
name: app1
labels:
app: app1
spec:
replicas: 1
selector:
matchLabels:
app: app1
template:
metadata:
labels:
app: app1
spec:
containers:
- name: app1
image: nginxdemos/hello:plain-text
ports:
- containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
name: svc-app1
spec:
selector:
app: app1
ports:
- protocol: TCP
port: 80
targetPort: 80
apiVersion: apps/v1
kind: Deployment
metadata:
name: app2
labels:
app: app2
spec:
replicas: 1
selector:
matchLabels:
app: app2
template:
metadata:
labels:
app: app2
spec:
containers:
- name: app2
image: nginxdemos/hello:plain-text
ports:
- containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
name: svc-app2
spec:
selector:
app: app2
ports:
- protocol: TCP
port: 80
targetPort: 80
apiVersion: apps/v1
kind: Deployment
metadata:
name: app3
labels:
app: app3
spec:
replicas: 1
selector:
matchLabels:
app: app3
template:
metadata:
labels:
app: app3
spec:
containers:
- name: app3
image: nginxdemos/hello:plain-text
ports:
- containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
name: svc-app3
spec:
selector:
app: app3
ports:
- protocol: TCP
port: 80
targetPort: 80
Here is the corresponding Ingress definition to serve and protect the web applications :
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: ingress
annotations:
bunkerweb.io/AUTOCONF: "yes"
spec:
rules:
- host: app1.example.com
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: svc-app1
port:
number: 80
- host: app2.example.com
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: svc-app2
port:
number: 80
- host: app3.example.com
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: svc-app3
port:
number: 80
We will assume that you already have the Linux integration stack running on your machine.
Let's assume that you have some web applications running on the same machine as BunkerWeb :
The following command will run a basic HTTP server on the port 8001 and deliver the files in the current directory :
python3 -m http.server -b 127.0.0.1 8001
The following command will run a basic HTTP server on the port 8002 and deliver the files in the current directory :
python3 -m http.server -b 127.0.0.1 8002
The following command will run a basic HTTP server on the port 8003 and deliver the files in the current directory :
python3 -m http.server -b 127.0.0.1 8003
Configuration of BunkerWeb is done by editing the /opt/bunkerweb/variables.env file :
SERVER_NAME=app1.example.com app2.example.com app3.example.com
HTTP_PORT=80
HTTPS_PORT=443
MULTISITE=yes
DNS_RESOLVERS=8.8.8.8 8.8.4.4
USE_REVERSE_PROXY=yes
REVERSE_PROXY_URL=/
app1.example.com_REVERSE_PROXY_HOST=http://127.0.0.1:8001
app2.example.com_REVERSE_PROXY_HOST=http://127.0.0.1:8002
app3.example.com_REVERSE_PROXY_HOST=http://127.0.0.1:8003
Let's check the status of BunkerWeb :
systemctl status bunkerweb
If it's already running, we can restart it :
systemctl restart bunkerweb
Otherwise, we will need to start it :
systemctl start bunkerweb
Let's assume that you have some web applications running on the same machine as BunkerWeb :
The following command will run a basic HTTP server on the port 8001 and deliver the files in the current directory :
python3 -m http.server -b 127.0.0.1 8001
The following command will run a basic HTTP server on the port 8002 and deliver the files in the current directory :
python3 -m http.server -b 127.0.0.1 8002
The following command will run a basic HTTP server on the port 8003 and deliver the files in the current directory :
python3 -m http.server -b 127.0.0.1 8003
Content of the my_variables.env configuration file :
SERVER_NAME=app1.example.com app2.example.com app3.example.com
HTTP_PORT=80
HTTPS_PORT=443
MULTISITE=yes
DNS_RESOLVERS=8.8.8.8 8.8.4.4
USE_REVERSE_PROXY=yes
REVERSE_PROXY_URL=/
app1.example.com_REVERSE_PROXY_HOST=http://127.0.0.1:8001
app2.example.com_REVERSE_PROXY_HOST=http://127.0.0.1:8002
app3.example.com_REVERSE_PROXY_HOST=http://127.0.0.1:8003
In your Ansible inventory, you can use the variables_env variable to set the path of the configuration file :
[mybunkers]
192.168.0.42 variables_env="{{ playbook_dir }}/my_variables.env"
Or alternatively, in your playbook file :
- hosts: all
become: true
vars:
- variables_env: "{{ playbook_dir }}/my_variables.env"
roles:
- bunkerity.bunkerweb
You can now run the playbook :
ansible-playbook -i inventory.yml playbook.yml
Behind load balancer or reverse proxy
When BunkerWeb is itself behind a load balancer or a reverse proxy, you need to configure it so it can get the real IP address of the clients. If you don't, the security features will block the IP address of the load balancer or reverse proxy instead of the client's one.
BunkerWeb actually supports two methods to retrieve the real IP address of the client :
- Using the PROXY protocol
- Using a HTTP header like X-Forwarded-For
The following settings can be used :
- USE_REAL_IP : enable/disable real IP retrieval
- USE_PROXY_PROTOCOL : enable/disable PROXY protocol support
- REAL_IP_FROM : list of trusted IP/network addresses allowed to send us the "real IP"
- REAL_IP_HEADER : the HTTP header containing the real IP, or the special value "proxy_protocol" when using the PROXY protocol
You will find more settings about real IP in the settings section of the documentation.
HTTP header
We will assume the following regarding the load balancers or reverse proxies (you will need to update the settings depending on your configuration) :
- They use the X-Forwarded-For header to set the real IP
- They have IPs in the 1.2.3.0/24 and 100.64.0.0/16 networks
The following settings need to be set :
USE_REAL_IP=yes
REAL_IP_FROM=1.2.3.0/24 100.64.0.0/16
REAL_IP_HEADER=X-Forwarded-For
When starting the BunkerWeb container, you will need to add the settings :
docker run \
...
-e USE_REAL_IP=yes \
-e "REAL_IP_FROM=1.2.3.0/24 100.64.0.0/16" \
-e REAL_IP_HEADER=X-Forwarded-For \
...
bunkerity/bunkerweb:1.4.8
Here is the docker-compose equivalent :
mybunker:
image: bunkerity/bunkerweb:1.4.8
...
environment:
- USE_REAL_IP=yes
- REAL_IP_FROM=1.2.3.0/24 100.64.0.0/16
- REAL_IP_HEADER=X-Forwarded-For
...
Before running the Docker autoconf integration stack, you will need to add the settings for the BunkerWeb container :
docker run \
...
-e USE_REAL_IP=yes \
-e "REAL_IP_FROM=1.2.3.0/24 100.64.0.0/16" \
-e REAL_IP_HEADER=X-Forwarded-For \
...
bunkerity/bunkerweb:1.4.8
Here is the docker-compose equivalent :
mybunker:
image: bunkerity/bunkerweb:1.4.8
...
environment:
- USE_REAL_IP=yes
- REAL_IP_FROM=1.2.3.0/24 100.64.0.0/16
- REAL_IP_HEADER=X-Forwarded-For
...
Before running the Swarm integration stack, you will need to add the settings for the BunkerWeb service :
docker service create \
...
-e USE_REAL_IP=yes \
-e "REAL_IP_FROM=1.2.3.0/24 100.64.0.0/16" \
-e REAL_IP_HEADER=X-Forwarded-For \
...
bunkerity/bunkerweb:1.4.8
Here is the docker-compose equivalent (using docker stack deploy) :
mybunker:
image: bunkerity/bunkerweb:1.4.8
...
environment:
- USE_REAL_IP=yes
- REAL_IP_FROM=1.2.3.0/24 100.64.0.0/16
- REAL_IP_HEADER=X-Forwarded-For
...
You will need to add the settings to the environment variables of the BunkerWeb containers (setting them via the Ingress is not supported because it would cause issues with features like Let's Encrypt) :
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: bunkerweb
spec:
selector:
matchLabels:
app: bunkerweb
template:
spec:
containers:
- name: bunkerweb
image: bunkerity/bunkerweb:1.4.8
...
env:
- name: USE_REAL_IP
value: "yes"
- name: REAL_IP_HEADER
value: "X-Forwarded-For"
- name: REAL_IP_FROM
value: "1.2.3.0/24 100.64.0.0/16"
...
You will need to add the settings to the /opt/bunkerweb/variables.env file :
...
USE_REAL_IP=yes
REAL_IP_FROM=1.2.3.0/24 100.64.0.0/16
REAL_IP_HEADER=X-Forwarded-For
...
Don't forget to restart the BunkerWeb service once it's done.
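For example, with the systemd service provided by the Linux integration :
systemctl restart bunkerweb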
You will need to add the settings to your my_variables.env configuration file :
...
USE_REAL_IP=yes
REAL_IP_FROM=1.2.3.0/24 100.64.0.0/16
REAL_IP_HEADER=X-Forwarded-For
...
In your Ansible inventory, you can use the variables_env variable to set the path of the configuration file :
[mybunkers]
192.168.0.42 variables_env="{{ playbook_dir }}/my_variables.env"
Or alternatively, in your playbook file :
- hosts: all
become: true
vars:
- variables_env: "{{ playbook_dir }}/my_variables.env"
roles:
- bunkerity.bunkerweb
Run the playbook :
ansible-playbook -i inventory.yml playbook.yml
Proxy protocol
We will assume the following regarding the load balancers or reverse proxies (you will need to update the settings depending on your configuration) :
- They use the PROXY protocol v1 or v2 to set the real IP
- They have IPs in the 1.2.3.0/24 and 100.64.0.0/16 networks
The following settings need to be set :
USE_REAL_IP=yes
USE_PROXY_PROTOCOL=yes
REAL_IP_FROM=1.2.3.0/24 100.64.0.0/16
REAL_IP_HEADER=proxy_protocol
When starting the BunkerWeb container, you will need to add the settings :
docker run \
...
-e USE_REAL_IP=yes \
-e USE_PROXY_PROTOCOL=yes \
-e "REAL_IP_FROM=1.2.3.0/24 100.64.0.0/16" \
-e REAL_IP_HEADER=proxy_protocol \
...
bunkerity/bunkerweb:1.4.8
Here is the docker-compose equivalent :
mybunker:
image: bunkerity/bunkerweb:1.4.8
...
environment:
- USE_REAL_IP=yes
- USE_PROXY_PROTOCOL=yes
- REAL_IP_FROM=1.2.3.0/24 100.64.0.0/16
- REAL_IP_HEADER=proxy_protocol
...
Before running the Docker autoconf integration stack, you will need to add the settings for the BunkerWeb container :
docker run \
...
-e USE_REAL_IP=yes \
-e USE_PROXY_PROTOCOL=yes \
-e "REAL_IP_FROM=1.2.3.0/24 100.64.0.0/16" \
-e REAL_IP_HEADER=proxy_protocol \
...
bunkerity/bunkerweb:1.4.8
Here is the docker-compose equivalent :
mybunker:
image: bunkerity/bunkerweb:1.4.8
...
environment:
- USE_REAL_IP=yes
- USE_PROXY_PROTOCOL=yes
- REAL_IP_FROM=1.2.3.0/24 100.64.0.0/16
- REAL_IP_HEADER=proxy_protocol
...
Before running the Swarm integration stack, you will need to add the settings for the BunkerWeb service :
docker service create \
...
-e USE_REAL_IP=yes \
-e USE_PROXY_PROTOCOL=yes \
-e "REAL_IP_FROM=1.2.3.0/24 100.64.0.0/16" \
-e REAL_IP_HEADER=proxy_protocol \
...
bunkerity/bunkerweb:1.4.8
Here is the docker-compose equivalent (using docker stack deploy) :
mybunker:
image: bunkerity/bunkerweb:1.4.8
...
environment:
- USE_REAL_IP=yes
- USE_PROXY_PROTOCOL=yes
- REAL_IP_FROM=1.2.3.0/24 100.64.0.0/16
- REAL_IP_HEADER=proxy_protocol
...
You will need to add the settings to the environment variables of the BunkerWeb containers (setting them via the Ingress is not supported because it would cause issues with features like Let's Encrypt) :
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: bunkerweb
spec:
selector:
matchLabels:
app: bunkerweb
template:
spec:
containers:
- name: bunkerweb
image: bunkerity/bunkerweb:1.4.8
...
env:
- name: USE_REAL_IP
value: "yes"
- name: USE_PROXY_PROTOCOL
value: "yes"
- name: REAL_IP_HEADER
value: "proxy_protocol"
- name: REAL_IP_FROM
value: "1.2.3.0/24 100.64.0.0/16"
...
You will need to add the settings to the /opt/bunkerweb/variables.env file :
...
USE_REAL_IP=yes
USE_PROXY_PROTOCOL=yes
REAL_IP_FROM=1.2.3.0/24 100.64.0.0/16
REAL_IP_HEADER=proxy_protocol
...
Don't forget to restart the BunkerWeb service once it's done.
You will need to add the settings to your my_variables.env configuration file :
...
USE_REAL_IP=yes
USE_PROXY_PROTOCOL=yes
REAL_IP_FROM=1.2.3.0/24 100.64.0.0/16
REAL_IP_HEADER=proxy_protocol
...
In your Ansible inventory, you can use the variables_env variable to set the path of the configuration file :
[mybunkers]
192.168.0.42 variables_env="{{ playbook_dir }}/my_variables.env"
Or alternatively, in your playbook file :
- hosts: all
become: true
vars:
- variables_env: "{{ playbook_dir }}/my_variables.env"
roles:
- bunkerity.bunkerweb
Run the playbook :
ansible-playbook -i inventory.yml playbook.yml
Custom configurations
Because BunkerWeb is based on the NGINX web server, you can add custom NGINX configurations in different NGINX contexts. You can also apply custom configurations for the ModSecurity WAF which is a core component of BunkerWeb (more info here). Here is the list of custom configurations types :
- http : http level of NGINX
- server-http : server level of NGINX
- default-server-http : server level of NGINX (only applies to the "default server" when the name supplied by the client doesn't match any server name in SERVER_NAME)
- modsec-crs : before the OWASP Core Rule Set is loaded
- modsec : after the OWASP Core Rule Set is loaded (also used if the CRS is not loaded)
Custom configurations can be applied globally or only for a specific server when applicable and if the multisite mode is enabled.
The how-to depends on the integration used but, under the hood, applying custom configurations is done by adding files with the .conf suffix to specific folders. To apply a custom configuration to a specific server, the file is written to a subfolder named after the primary server name.
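As an illustration (assuming the Docker integration with multisite mode enabled and www.example.com as the primary server name ; base folders differ per integration, e.g. /opt/bunkerweb/configs for Linux), a server-http configuration applied only to that service could be created like this :
mkdir -p ./bw-data/configs/server-http/www.example.com
echo "# custom NGINX directives for www.example.com only" > ./bw-data/configs/server-http/www.example.com/my-config.conf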
Some integrations offer a more convenient way of applying configurations such as using Configs with Swarm or ConfigMap with Kubernetes.
When using the Docker integration, you have two choices for the addition of custom configurations :
- Using specific *_CUSTOM_CONF_* settings as environment variables (easiest)
- Writing .conf files to the volume mounted on /data
Using settings
The settings to use must follow the pattern <SITE>_CUSTOM_CONF_<TYPE>_<NAME> :
- <SITE> : optional primary server name, if multisite mode is enabled and the config must be applied to a specific service
- <TYPE> : the type of config, accepted values are HTTP, DEFAULT_SERVER_HTTP, SERVER_HTTP, MODSEC and MODSEC_CRS
- <NAME> : the name of the config without the .conf suffix
Here is a dummy example using a docker-compose file :
mybunker:
image: bunkerity/bunkerweb:1.4.8
environment:
- |
CUSTOM_CONF_SERVER_HTTP_hello-world=
location /hello {
default_type 'text/plain';
content_by_lua_block {
ngx.say('world')
}
}
...
Using files
The first thing to do is to create the folders :
mkdir -p ./bw-data/configs/server-http
You can now write your configurations :
echo "location /hello {
default_type 'text/plain';
content_by_lua_block {
ngx.say('world')
}
}" > ./bw-data/configs/server-http/hello-world.conf
Because BunkerWeb runs as an unprivileged user with UID and GID 101, you will need to edit the permissions :
chown -R root:101 bw-data && \
chmod -R 770 bw-data
When starting the BunkerWeb container, you will need to mount the folder on /data :
docker run \
...
-v "${PWD}/bw-data:/data" \
...
bunkerity/bunkerweb:1.4.8
Here is the docker-compose equivalent :
mybunker:
image: bunkerity/bunkerweb:1.4.8
volumes:
- ./bw-data:/data
...
When using the Docker autoconf integration, you have two choices for adding custom configurations :
- Using specific *_CUSTOM_CONF_* settings as labels (easiest)
- Writing .conf files to the volume mounted on /data
Using labels
Limitations using labels
When using labels with the Docker autoconf integration, you can only apply custom configurations for the corresponding web service. Applying http, default-server-http or any global configurations (like server-http for all services) is not possible : you will need to mount files for that purpose.
The labels to use must follow the pattern bunkerweb.CUSTOM_CONF_<TYPE>_<NAME> :
- <TYPE> : the type of config, accepted values are SERVER_HTTP, MODSEC and MODSEC_CRS
- <NAME> : the name of the config without the .conf suffix
Here is a dummy example using a docker-compose file :
myapp:
image: nginxdemos/hello:plain-text
labels:
- |
bunkerweb.CUSTOM_CONF_SERVER_HTTP_hello-world=
location /hello {
default_type 'text/plain';
content_by_lua_block {
ngx.say('world')
}
...
Using files
The first thing to do is to create the folders :
mkdir -p ./bw-data/configs/server-http
You can now write your configurations :
echo "location /hello {
default_type 'text/plain';
content_by_lua_block {
ngx.say('world')
}
}" > ./bw-data/configs/server-http/hello-world.conf
When starting the BunkerWeb autoconf container, you will need to mount the folder on /data :
docker run \
...
-v "${PWD}/bw-data:/data" \
...
bunkerity/bunkerweb-autoconf:1.4.8
Here is the docker-compose equivalent :
myautoconf:
image: bunkerity/bunkerweb-autoconf:1.4.8
volumes:
- ./bw-data:/data
...
When using the Swarm integration, custom configurations are managed using Docker Configs.
To keep it simple, you don't even need to attach the Config to a service : the autoconf service is listening for Config events and will update the custom configurations when needed.
When creating a Config, you will need to add special labels :
- bunkerweb.CONFIG_TYPE : must be set to a valid custom configuration type (http, server-http, default-server-http, modsec or modsec-crs)
- bunkerweb.CONFIG_SITE : set to a server name to apply configuration to that specific server (optional, will be applied globally if unset)
Here is the example :
echo "location /hello {
default_type 'text/plain';
content_by_lua_block {
ngx.say('world')
}
}" | docker config create -l bunkerweb.CONFIG_TYPE=server-http my-config -
There is no update mechanism : the alternative is to remove an existing config using docker config rm and then recreate it.
When using the Kubernetes integration, custom configurations are managed using ConfigMap.
To keep it simple, you don't even need to use the ConfigMap with a Pod (e.g. as environment variable or volume) : the autoconf Pod is listening for ConfigMap events and will update the custom configurations when needed.
When creating a ConfigMap, you will need to add special annotations :
- bunkerweb.io/CONFIG_TYPE : must be set to a valid custom configuration type (http, server-http, default-server-http, modsec or modsec-crs)
- bunkerweb.io/CONFIG_SITE : set to a server name to apply configuration to that specific server (optional, will be applied globally if unset)
Here is the example :
apiVersion: v1
kind: ConfigMap
metadata:
name: cfg-bunkerweb-all-server-http
annotations:
bunkerweb.io/CONFIG_TYPE: "server-http"
data:
myconf: |
location /hello {
default_type 'text/plain';
content_by_lua_block {
ngx.say('world')
}
}
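Similarly, to apply a configuration only to a specific server, you can add the bunkerweb.io/CONFIG_SITE annotation with the primary server name (www.example.com is only an example value here) :
apiVersion: v1
kind: ConfigMap
metadata:
  name: cfg-bunkerweb-www-example-com-server-http
  annotations:
    bunkerweb.io/CONFIG_TYPE: "server-http"
    bunkerweb.io/CONFIG_SITE: "www.example.com"
data:
  myconf: |
    location /hello {
      default_type 'text/plain';
      content_by_lua_block {
        ngx.say('world')
      }
    }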
When using the Linux integration, custom configurations must be written to the /opt/bunkerweb/configs folder.
Here is an example for server-http/hello-world.conf :
location /hello {
default_type 'text/plain';
content_by_lua_block {
ngx.say('world')
}
}
Because BunkerWeb runs as an unprivileged user (nginx:nginx), you will need to edit the permissions :
chown -R root:nginx /opt/bunkerweb/configs && \
chmod -R 770 /opt/bunkerweb/configs
Don't forget to restart the BunkerWeb service once it's done.
The custom_configs_path[] variable is a dictionary with configuration types (http, server-http, modsec, modsec-crs) as keys, and the corresponding values are the paths containing the configuration files.
Here is an example for server-http/hello-world.conf :
location /hello {
default_type 'text/plain';
content_by_lua_block {
ngx.say('world')
}
}
And the corresponding custom_configs_path[server-http]
variable used in your inventory :
[mybunkers]
192.168.0.42 custom_configs_path={"server-http": "{{ playbook_dir }}/server-http"}
Or alternatively, in your playbook file :
- hosts: all
become: true
vars:
- custom_configs_path: {
server-http: "{{ playbook_dir }}/server-http"
}
roles:
- bunkerity.bunkerweb
Run the playbook :
ansible-playbook -i inventory.yml playbook.yml
PHP
Support is in beta
At the moment, PHP support in BunkerWeb is still in beta and we recommend using a reverse proxy architecture if you can. Also note that PHP is not supported at all for some integrations, such as Kubernetes.
BunkerWeb supports PHP using external or remote PHP-FPM instances. We will assume that you are already familiar with managing that kind of service.
The following settings can be used :
- REMOTE_PHP : hostname of the remote PHP-FPM instance
- REMOTE_PHP_PATH : root folder containing files in the remote PHP-FPM instance
- LOCAL_PHP : path to the local socket file of the PHP-FPM instance
- LOCAL_PHP_PATH : root folder containing files in the local PHP-FPM instance
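For instance, reusing values that appear later in this guide, a remote PHP-FPM container reachable as myphp and serving files from /app would use :
REMOTE_PHP=myphp
REMOTE_PHP_PATH=/app
While a local PHP-FPM instance listening on a UNIX socket (Linux integration) would use :
LOCAL_PHP=/run/php/php-fpm.sock
LOCAL_PHP_PATH=/opt/bunkerweb/www/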
Single application
When using the Docker integration, to support PHP applications you will need to :
- Copy your application files into the www subfolder of the bw-data volume of BunkerWeb
- Set up a PHP-FPM container for your application and mount the bw-data/www folder
- Use the specific settings REMOTE_PHP and REMOTE_PHP_PATH as environment variables when starting BunkerWeb
Create the bw-data/www folder :
mkdir -p bw-data/www
You can create a Docker network if it's not already created :
docker network create bw-net
Now you can copy your application files to the bw-data/www folder. Please note that you will need to fix the permissions so BunkerWeb (UID/GID 101) can at least read files and list folders and PHP-FPM (UID/GID 33) is the owner of the files and folders :
chown -R 101:101 ./bw-data && \
chown -R 33:101 ./bw-data/www && \
find ./bw-data/www -type f -exec chmod 0640 {} \; && \
find ./bw-data/www -type d -exec chmod 0750 {} \;
Let's create the PHP-FPM container, give it a name, connect it to the network and mount the application files :
docker run -d \
--name myphp \
--network bw-net \
-v "${PWD}/bw-data/www:/app" \
php:fpm
You can now run BunkerWeb and configure it for your PHP application :
docker run -d \
--name mybunker \
--network bw-net \
-p 80:8080 \
-p 443:8443 \
-v "${PWD}/bw-data:/data" \
-e SERVER_NAME=www.example.com \
-e AUTO_LETS_ENCRYPT=yes \
-e REMOTE_PHP=myphp \
-e REMOTE_PHP_PATH=/app \
bunkerity/bunkerweb:1.4.8
Here is the docker-compose equivalent :
version: '3'
services:
mybunker:
image: bunkerity/bunkerweb:1.4.8
ports:
- 80:8080
- 443:8443
volumes:
- ./bw-data:/data
environment:
- SERVER_NAME=www.example.com
- AUTO_LETS_ENCRYPT=yes
- REMOTE_PHP=myphp
- REMOTE_PHP_PATH=/app
networks:
- bw-net
myphp:
image: php:fpm
volumes:
- ./bw-data/www:/app
networks:
- bw-net
networks:
bw-net:
When using the Docker autoconf integration, your PHP files must not be mounted into the bw-data/www folder. Instead, you will need to create a specific folder containing your PHP application and mount it both on the BunkerWeb container (outside the /data endpoint) and your PHP-FPM container.
First of all, create the application folder (e.g. myapp), copy your files and fix the permissions so BunkerWeb (UID/GID 101) can at least read files and list folders and PHP-FPM (UID/GID 33) is the owner of the files and folders :
chown -R 33:101 ./myapp && \
find ./myapp -type f -exec chmod 0640 {} \; && \
find ./myapp -type d -exec chmod 0750 {} \;
When you create the BunkerWeb container, simply mount the folder containing your PHP application to a specific endpoint like /app :
docker run -d \
...
-v "${PWD}/myapp:/app" \
...
bunkerity/bunkerweb:1.4.8
Once BunkerWeb and autoconf are ready, you will be able to create the PHP-FPM container, mount the application folder inside the container and configure it using specific labels :
docker run -d \
--name myphp \
--network bw-services \
-v "${PWD}/myapp:/app" \
-l bunkerweb.SERVER_NAME=www.example.com \
-l bunkerweb.AUTO_LETS_ENCRYPT=yes \
-l bunkerweb.ROOT_FOLDER=/app \
-l bunkerweb.REMOTE_PHP=myphp \
-l bunkerweb.REMOTE_PHP_PATH=/app \
php:fpm
Here is the docker-compose equivalent :
version: '3'
services:
myphp:
image: php:fpm
volumes:
- ./myapp:/app
networks:
bw-services:
aliases:
- myphp
labels:
- bunkerweb.SERVER_NAME=www.example.com
- bunkerweb.AUTO_LETS_ENCRYPT=yes
- bunkerweb.ROOT_FOLDER=/app
- bunkerweb.REMOTE_PHP=myphp
- bunkerweb.REMOTE_PHP_PATH=/app
networks:
bw-services:
external:
name: bw-services
Shared volume
Using PHP with the Docker Swarm integration needs a shared volume between all BunkerWeb and PHP-FPM instances.
When using the Docker Swarm integration, your PHP files must not be mounted into the bw-data/www folder. Instead, you will need to create a specific folder containing your PHP application and mount it both on the BunkerWeb container (outside the /data endpoint) and your PHP-FPM container. As an example, we will consider that you have a shared folder mounted on your worker nodes on the /shared endpoint.
First of all, create the application folder (e.g. /shared/myapp), copy your files and fix the permissions so BunkerWeb (UID/GID 101) can at least read files and list folders and PHP-FPM (UID/GID 33) is the owner of the files and folders :
chown -R 33:101 /shared/myapp && \
find /shared/myapp -type f -exec chmod 0640 {} \; && \
find /shared/myapp -type d -exec chmod 0750 {} \;
When you create the BunkerWeb service, simply mount the folder containing your PHP application to a specific endpoint like /app :
docker service create \
...
-v "/shared/myapp:/app" \
...
bunkerity/bunkerweb:1.4.8
Once BunkerWeb and autoconf are ready, you will be able to create the PHP-FPM service, mount the application folder inside the container and configure it using specific labels :
docker service create \
--name myphp \
--network bw-services \
-v "/shared/myapp:/app" \
-l bunkerweb.SERVER_NAME=www.example.com \
-l bunkerweb.AUTO_LETS_ENCRYPT=yes \
-l bunkerweb.ROOT_FOLDER=/app \
-l bunkerweb.REMOTE_PHP=myphp \
-l bunkerweb.REMOTE_PHP_PATH=/app \
php:fpm
Here is the docker-compose equivalent (using docker stack deploy) :
version: '3'
services:
myphp:
image: php:fpm
volumes:
- ./myapp:/app
networks:
- bw-services
deploy:
placement:
constraints:
- "node.role==worker"
labels:
- bunkerweb.SERVER_NAME=www.example.com
- bunkerweb.AUTO_LETS_ENCRYPT=yes
- bunkerweb.ROOT_FOLDER=/app
- bunkerweb.REMOTE_PHP=myphp
- bunkerweb.REMOTE_PHP_PATH=/app
networks:
bw-services:
external:
name: bw-services
PHP is not supported for Kubernetes
Kubernetes integration allows configuration through Ingress and the BunkerWeb controller only supports HTTP applications at the moment.
We will assume that you already have the Linux integration stack running on your machine.
By default, BunkerWeb will search for web files inside the /opt/bunkerweb/www folder. You can use it to store your PHP application. Please note that you will need to configure your PHP-FPM service to check or set the user/group of the running processes and the UNIX socket file used to communicate with BunkerWeb.
First of all, you will need to make sure that your PHP-FPM instance can access the files inside the /opt/bunkerweb/www folder and also that BunkerWeb can access the UNIX socket file in order to communicate with PHP-FPM. We recommend setting a different user like www-data for the PHP-FPM service and giving the nginx group access to the UNIX socket file. Here is the corresponding PHP-FPM configuration :
...
[www]
user = www-data
group = www-data
listen = /run/php/php-fpm.sock
listen.owner = www-data
listen.group = nginx
listen.mode = 0660
...
Don't forget to restart your PHP-FPM service :
systemctl restart php-fpm
Once your application is copied to the /opt/bunkerweb/www folder, you will need to fix the permissions so BunkerWeb (user/group nginx) can at least read files and list folders and PHP-FPM (user/group www-data) is the owner of the files and folders :
chown -R www-data:nginx /opt/bunkerweb/www && \
find /opt/bunkerweb/www -type f -exec chmod 0640 {} \; && \
find /opt/bunkerweb/www -type d -exec chmod 0750 {} \;
You can now edit the /opt/bunkerweb/variables.env file :
HTTP_PORT=80
HTTPS_PORT=443
DNS_RESOLVERS=8.8.8.8 8.8.4.4
SERVER_NAME=www.example.com
AUTO_LETS_ENCRYPT=yes
LOCAL_PHP=/run/php/php-fpm.sock
LOCAL_PHP_PATH=/opt/bunkerweb/www/
Let's check the status of BunkerWeb :
systemctl status bunkerweb
If it's already running, we can restart it :
systemctl restart bunkerweb
Otherwise, we will need to start it :
systemctl start bunkerweb
By default, BunkerWeb will search for web files inside the /opt/bunkerweb/www folder. You can use it to store your PHP application. Please note that you will need to configure your PHP-FPM service to check or set the user/group of the running processes and the UNIX socket file used to communicate with BunkerWeb.
First of all, you will need to make sure that your PHP-FPM instance can access the files inside the /opt/bunkerweb/www folder and also that BunkerWeb can access the UNIX socket file in order to communicate with PHP-FPM. We recommend setting a different user like www-data for the PHP-FPM service and giving the nginx group access to the UNIX socket file. Here is the corresponding PHP-FPM configuration :
...
[www]
user = www-data
group = www-data
listen = /run/php/php-fpm.sock
listen.owner = www-data
listen.group = nginx
listen.mode = 0660
...
PHP-FPM with Ansible
The PHP-FPM configuration part using Ansible is out of the scope of this documentation.
Content of the my_variables.env configuration file :
HTTP_PORT=80
HTTPS_PORT=443
DNS_RESOLVERS=8.8.8.8 8.8.4.4
SERVER_NAME=www.example.com
AUTO_LETS_ENCRYPT=yes
LOCAL_PHP=/run/php/php-fpm.sock
LOCAL_PHP_PATH=/opt/bunkerweb/www/
The custom_www variable can be used to specify a directory containing your application files (e.g. my_app) that will be copied to /opt/bunkerweb/www, and the custom_www_owner variable contains the owner that should be set for the files and folders. Here is an example using the Ansible inventory :
[mybunkers]
192.168.0.42 variables_env="{{ playbook_dir }}/my_variables.env" custom_www="{{ playbook_dir }}/my_app" custom_www_owner="www-data"
Or alternatively, in your playbook file :
- hosts: all
become: true
vars:
- variables_env: "{{ playbook_dir }}/my_variables.env"
- custom_www: "{{ playbook_dir }}/my_app"
- custom_www_owner: "www-data"
roles:
- bunkerity.bunkerweb
You can now run the playbook :
ansible-playbook -i inventory.yml playbook.yml
Multiple applications
Testing
To perform quick tests when multisite mode is enabled (and if you don't have the proper DNS entries set up for the domains) you can use curl with the HTTP Host header of your choice :
curl -H "Host: app1.example.com" http://ip-or-fqdn-of-server
If you are using HTTPS, you will need to play with SNI :
curl -H "Host: app1.example.com" --resolve example.com:443:ip-of-server https://example.com
When using the Docker integration, to support PHP applications you will need to :
- Copy your application files into the www subfolder of the bw-data volume of BunkerWeb (each application will be in its own subfolder named the same as the primary server name)
- Set up a PHP-FPM container for each application and mount its bw-data/www subfolder
- Use the specific settings REMOTE_PHP and REMOTE_PHP_PATH as environment variables when starting BunkerWeb
Create the bw-data/www subfolders :
mkdir -p bw-data/www/{app1.example.com,app2.example.com,app3.example.com}
You can create a Docker network if it's not already created :
docker network create bw-net
Now you can copy your application files to the bw-data/www subfolders. Please note that you will need to fix the permissions so BunkerWeb (UID/GID 101) can at least read files and list folders and PHP-FPM (UID/GID 33) is the owner of the files and folders :
chown -R 101:101 ./bw-data && \
chown -R 33:101 ./bw-data/www && \
find ./bw-data/www -type f -exec chmod 0640 {} \; && \
find ./bw-data/www -type d -exec chmod 0750 {} \;
Let's create the PHP-FPM containers, give them a name, connect them to the network and mount the application files :
docker run -d \
--name myphp1 \
--network bw-net \
-v "${PWD}/bw-data/www/app1.example.com:/app" \
php:fpm
docker run -d \
--name myphp2 \
--network bw-net \
-v "${PWD}/bw-data/www/app2.example.com:/app" \
php:fpm
docker run -d \
--name myphp3 \
--network bw-net \
-v "${PWD}/bw-data/www/app3.example.com:/app" \
php:fpm
You can now run BunkerWeb and configure it for your PHP applications :
docker run -d \
--name mybunker \
--network bw-net \
-p 80:8080 \
-p 443:8443 \
-v "${PWD}/bw-data:/data" \
-e MULTISITE=yes \
-e "SERVER_NAME=app1.example.com app2.example.com app3.example.com" \
-e AUTO_LETS_ENCRYPT=yes \
-e app1.example.com_REMOTE_PHP=myphp1 \
-e app1.example.com_REMOTE_PHP_PATH=/app \
-e app2.example.com_REMOTE_PHP=myphp2 \
-e app2.example.com_REMOTE_PHP_PATH=/app \
-e app3.example.com_REMOTE_PHP=myphp3 \
-e app3.example.com_REMOTE_PHP_PATH=/app \
bunkerity/bunkerweb:1.4.8
Here is the docker-compose equivalent :
version: '3'
services:
mybunker:
image: bunkerity/bunkerweb:1.4.8
ports:
- 80:8080
- 443:8443
volumes:
- ./bw-data:/data
environment:
- SERVER_NAME=app1.example.com app2.example.com app3.example.com
- MULTISITE=yes
- AUTO_LETS_ENCRYPT=yes
- app1.example.com_REMOTE_PHP=myphp1
- app1.example.com_REMOTE_PHP_PATH=/app
- app2.example.com_REMOTE_PHP=myphp2
- app2.example.com_REMOTE_PHP_PATH=/app
- app3.example.com_REMOTE_PHP=myphp3
- app3.example.com_REMOTE_PHP_PATH=/app
networks:
- bw-net
myphp1:
image: php:fpm
volumes:
- ./bw-data/www/app1.example.com:/app
networks:
- bw-net
myphp2:
image: php:fpm
volumes:
- ./bw-data/www/app2.example.com:/app
networks:
- bw-net
myphp3:
image: php:fpm
volumes:
- ./bw-data/www/app3.example.com:/app
networks:
- bw-net
networks:
bw-net:
When using the Docker autoconf integration, your PHP files must not be mounted into the bw-data/www folder. Instead, you will need to create a specific folder containing your PHP applications and mount it both on the BunkerWeb container (outside the /data endpoint) and your PHP-FPM containers.
First of all, create the applications folder (e.g. myapps), the subfolders for each application (e.g. app1, app2 and app3), copy your web files and fix the permissions so BunkerWeb (UID/GID 101) can at least read files and list folders and PHP-FPM (UID/GID 33) is the owner of the files and folders :
chown -R 33:101 ./myapps && \
find ./myapps -type f -exec chmod 0640 {} \; && \
find ./myapps -type d -exec chmod 0750 {} \;
When you create the BunkerWeb container, simply mount the folder containing your PHP applications to a specific endpoint like /apps :
docker run -d \
...
-v "${PWD}/myapps:/apps" \
...
bunkerity/bunkerweb:1.4.8
Once BunkerWeb and autoconf are ready, you will be able to create the PHP-FPM containers, mount the right application folder inside each container and configure them using specific labels :
docker run -d \
--name myphp1 \
--network bw-services \
-v "${PWD}/myapps/app1:/app" \
-l bunkerweb.SERVER_NAME=app1.example.com \
-l bunkerweb.AUTO_LETS_ENCRYPT=yes \
-l bunkerweb.REMOTE_PHP=myphp1 \
-l bunkerweb.REMOTE_PHP_PATH=/app \
-l bunkerweb.ROOT_FOLDER=/apps/app1 \
php:fpm
docker run -d \
--name myphp2 \
--network bw-services \
-v "${PWD}/myapps/app2:/app" \
-l bunkerweb.SERVER_NAME=app2.example.com \
-l bunkerweb.AUTO_LETS_ENCRYPT=yes \
-l bunkerweb.REMOTE_PHP=myphp2 \
-l bunkerweb.REMOTE_PHP_PATH=/app \
-l bunkerweb.ROOT_FOLDER=/apps/app2 \
php:fpm
docker run -d \
--name myphp3 \
--network bw-services \
-v "${PWD}/myapps/app3:/app" \
-l bunkerweb.SERVER_NAME=app3.example.com \
-l bunkerweb.AUTO_LETS_ENCRYPT=yes \
-l bunkerweb.REMOTE_PHP=myphp3 \
-l bunkerweb.REMOTE_PHP_PATH=/app \
-l bunkerweb.ROOT_FOLDER=/apps/app3 \
php:fpm
Here is the docker-compose equivalent :
version: '3'
services:
myphp1:
image: php:fpm
volumes:
- ./myapps/app1:/app
networks:
bw-services:
aliases:
- myphp1
labels:
- bunkerweb.SERVER_NAME=app1.example.com
- bunkerweb.AUTO_LETS_ENCRYPT=yes
- bunkerweb.REMOTE_PHP=myphp1
- bunkerweb.REMOTE_PHP_PATH=/app
- bunkerweb.ROOT_FOLDER=/apps/app1
myphp2:
image: php:fpm
volumes:
- ./myapps/app2:/app
networks:
bw-services:
aliases:
- myphp2
labels:
- bunkerweb.SERVER_NAME=app2.example.com
- bunkerweb.AUTO_LETS_ENCRYPT=yes
- bunkerweb.REMOTE_PHP=myphp2
- bunkerweb.REMOTE_PHP_PATH=/app
- bunkerweb.ROOT_FOLDER=/apps/app2
myphp3:
image: php:fpm
volumes:
- ./myapps/app3:/app
networks:
bw-services:
aliases:
- myphp3
labels:
- bunkerweb.SERVER_NAME=app3.example.com
- bunkerweb.AUTO_LETS_ENCRYPT=yes
- bunkerweb.REMOTE_PHP=myphp3
- bunkerweb.REMOTE_PHP_PATH=/app
- bunkerweb.ROOT_FOLDER=/apps/app3
networks:
bw-services:
external:
name: bw-services
Shared volume
Using PHP with the Docker Swarm integration needs a shared volume between all BunkerWeb and PHP-FPM instances.
When using the Docker Swarm integration, your PHP files must not be mounted into the bw-data/www folder. Instead, you will need to create a specific folder containing your PHP applications and mount it both on the BunkerWeb container (outside the /data endpoint) and your PHP-FPM containers. As an example, we will consider that you have a shared folder mounted on your worker nodes on the /shared endpoint.
First of all, create the applications folder (e.g. /shared/myapps), the subfolders for each application (e.g. app1, app2 and app3), copy your files and fix the permissions so BunkerWeb (UID/GID 101) can at least read files and list folders and PHP-FPM (UID/GID 33) is the owner of the files and folders :
chown -R 33:101 /shared/myapps && \
find /shared/myapps -type f -exec chmod 0640 {} \; && \
find /shared/myapps -type d -exec chmod 0750 {} \;
When you create the BunkerWeb service, simply mount the folder containing your PHP applications to a specific endpoint like /apps :
docker service create \
...
-v "/shared/myapps:/apps" \
...
bunkerity/bunkerweb:1.4.8
Once BunkerWeb and autoconf are ready, you will be able to create the PHP-FPM service, mount the application folder inside the container and configure it using specific labels :
docker service create \
--name myphp1 \
--network bw-services \
-v /shared/myapps/app1:/app \
-l bunkerweb.SERVER_NAME=app1.example.com \
-l bunkerweb.AUTO_LETS_ENCRYPT=yes \
-l bunkerweb.REMOTE_PHP=myphp1 \
-l bunkerweb.REMOTE_PHP_PATH=/app \
-l bunkerweb.ROOT_FOLDER=/apps/app1 \
php:fpm
docker service create \
--name myphp2 \
--network bw-services \
-v /shared/myapps/app2:/app \
-l bunkerweb.SERVER_NAME=app2.example.com \
-l bunkerweb.AUTO_LETS_ENCRYPT=yes \
-l bunkerweb.REMOTE_PHP=myphp2 \
-l bunkerweb.REMOTE_PHP_PATH=/app \
-l bunkerweb.ROOT_FOLDER=/apps/app2 \
php:fpm
docker service create \
--name myphp3 \
--network bw-services \
-v /shared/myapps/app3:/app \
-l bunkerweb.SERVER_NAME=app3.example.com \
-l bunkerweb.AUTO_LETS_ENCRYPT=yes \
-l bunkerweb.REMOTE_PHP=myphp3 \
-l bunkerweb.REMOTE_PHP_PATH=/app \
-l bunkerweb.ROOT_FOLDER=/apps/app3 \
php:fpm
Here is the docker-compose equivalent (using docker stack deploy) :
version: '3'
services:
myphp1:
image: php:fpm
volumes:
- /shared/myapps/app1:/app
networks:
- bw-services
deploy:
placement:
constraints:
- "node.role==worker"
labels:
- bunkerweb.SERVER_NAME=app1.example.com
- bunkerweb.AUTO_LETS_ENCRYPT=yes
- bunkerweb.REMOTE_PHP=myphp1
- bunkerweb.REMOTE_PHP_PATH=/app
- bunkerweb.ROOT_FOLDER=/apps/app1
myphp2:
image: php:fpm
volumes:
- /shared/myapps/app2:/app
networks:
- bw-services
deploy:
placement:
constraints:
- "node.role==worker"
labels:
- bunkerweb.SERVER_NAME=app2.example.com
- bunkerweb.AUTO_LETS_ENCRYPT=yes
- bunkerweb.REMOTE_PHP=myphp2
- bunkerweb.REMOTE_PHP_PATH=/app
- bunkerweb.ROOT_FOLDER=/apps/app2
myphp3:
image: php:fpm
volumes:
- /shared/myapps/app3:/app
networks:
- bw-services
deploy:
placement:
constraints:
- "node.role==worker"
labels:
- bunkerweb.SERVER_NAME=app3.example.com
- bunkerweb.AUTO_LETS_ENCRYPT=yes
- bunkerweb.REMOTE_PHP=myphp3
- bunkerweb.REMOTE_PHP_PATH=/app
- bunkerweb.ROOT_FOLDER=/apps/app3
networks:
bw-services:
external:
name: bw-services
PHP is not supported for Kubernetes
Kubernetes integration allows configuration through Ingress and the BunkerWeb controller only supports HTTP applications at the moment.
We will assume that you already have the Linux integration stack running on your machine.
By default, BunkerWeb will search for web files inside the /opt/bunkerweb/www folder. You can use it to store your PHP applications : each application will be in its own subfolder named the same as the primary server name. Please note that you will need to configure your PHP-FPM service to check or set the user/group of the running processes and the UNIX socket file used to communicate with BunkerWeb.
First of all, you will need to make sure that your PHP-FPM instance can access the files inside the /opt/bunkerweb/www folder and also that BunkerWeb can access the UNIX socket file in order to communicate with PHP-FPM. We recommend setting a different user like www-data for the PHP-FPM service and giving the nginx group access to the UNIX socket file. Here is the corresponding PHP-FPM configuration :
...
[www]
user = www-data
group = www-data
listen = /run/php/php-fpm.sock
listen.owner = www-data
listen.group = nginx
listen.mode = 0660
...
Don't forget to restart your PHP-FPM service :
systemctl restart php-fpm
Once your applications are copied to the /opt/bunkerweb/www folder, you will need to fix the permissions so BunkerWeb (user/group nginx) can at least read files and list folders and PHP-FPM (user/group www-data) is the owner of the files and folders :
chown -R www-data:nginx /opt/bunkerweb/www && \
find /opt/bunkerweb/www -type f -exec chmod 0640 {} \; && \
find /opt/bunkerweb/www -type d -exec chmod 0750 {} \;
You can now edit the /opt/bunkerweb/variables.env file :
HTTP_PORT=80
HTTPS_PORT=443
DNS_RESOLVERS=8.8.8.8 8.8.4.4
SERVER_NAME=app1.example.com app2.example.com app3.example.com
MULTISITE=yes
AUTO_LETS_ENCRYPT=yes
app1.example.com_LOCAL_PHP=/run/php/php-fpm.sock
app1.example.com_LOCAL_PHP_PATH=/opt/bunkerweb/www/app1.example.com
app2.example.com_LOCAL_PHP=/run/php/php-fpm.sock
app2.example.com_LOCAL_PHP_PATH=/opt/bunkerweb/www/app2.example.com
app3.example.com_LOCAL_PHP=/run/php/php-fpm.sock
app3.example.com_LOCAL_PHP_PATH=/opt/bunkerweb/www/app3.example.com
Let's check the status of BunkerWeb :
systemctl status bunkerweb
If it's already running, we can restart it :
systemctl restart bunkerweb
Otherwise, we will need to start it :
systemctl start bunkerweb
By default, BunkerWeb will search for web files inside the /opt/bunkerweb/www folder. You can use it to store your PHP applications : each application will be in its own subfolder named the same as the primary server name. Please note that you will need to configure your PHP-FPM service to check or set the user/group of the running processes and the UNIX socket file used to communicate with BunkerWeb.
First of all, you will need to make sure that your PHP-FPM instance can access the files inside the /opt/bunkerweb/www folder and also that BunkerWeb can access the UNIX socket file in order to communicate with PHP-FPM. We recommend setting a different user like www-data for the PHP-FPM service and giving the nginx group access to the UNIX socket file. Here is the corresponding PHP-FPM configuration :
...
[www]
user = www-data
group = www-data
listen = /run/php/php-fpm.sock
listen.owner = www-data
listen.group = nginx
listen.mode = 0660
...
PHP-FPM with Ansible
The PHP-FPM configuration part using Ansible is out of the scope of this documentation.
Content of the my_variables.env configuration file :
HTTP_PORT=80
HTTPS_PORT=443
DNS_RESOLVERS=8.8.8.8 8.8.4.4
SERVER_NAME=app1.example.com app2.example.com app3.example.com
MULTISITE=yes
AUTO_LETS_ENCRYPT=yes
app1.example.com_LOCAL_PHP=/run/php/php-fpm.sock
app1.example.com_LOCAL_PHP_PATH=/opt/bunkerweb/www/app1.example.com
app2.example.com_LOCAL_PHP=/run/php/php-fpm.sock
app2.example.com_LOCAL_PHP_PATH=/opt/bunkerweb/www/app2.example.com
app3.example.com_LOCAL_PHP=/run/php/php-fpm.sock
app3.example.com_LOCAL_PHP_PATH=/opt/bunkerweb/www/app3.example.com
The custom_www variable can be used to specify a directory containing your application files (e.g. my_app) that will be copied to /opt/bunkerweb/www, and the custom_www_owner variable contains the owner that should be set for the files and folders. Here is an example using the Ansible inventory :
[mybunkers]
192.168.0.42 variables_env="{{ playbook_dir }}/my_variables.env" custom_www="{{ playbook_dir }}/my_app" custom_www_owner="www-data"
Or alternatively, in your playbook file :
- hosts: all
become: true
vars:
- variables_env: "{{ playbook_dir }}/my_variables.env"
- custom_www: "{{ playbook_dir }}/my_app"
- custom_www_owner: "www-data"
roles:
- bunkerity.bunkerweb
You can now run the playbook :
ansible-playbook -i inventory.yml playbook.yml