Step by Step Guide
On this page we will walk through the steps required to generate a Seccomp profile for the Nginx Docker image. This guide has been prepared for use in the Software Security Summer School (SSSS20).
NOTE: It is assumed that the installation steps have already been completed. Reading the User Guide before performing these steps is advised.
After you connect to the provided AWS instance, open a terminal and run the following commands.
In this section we will run a command to validate that you are running the correct kernel version required for completing the hands-on exercise.
uname -a
This should print a line which starts as follows:
Linux [hostname] 4.15.0-1054-aws #56-Ubuntu SMP
It is critical that you see the correct Linux kernel version, which should be 4.15.0-1054-aws. If the version does NOT match, please use the raise-your-hand feature of Cisco Webex to notify one of the panelists.
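If you prefer, this check can be scripted. The following is a minimal sketch (the only assumption is the expected version string shown above); it prints a warning when the running kernel does not match:
# Compare the running kernel release against the version expected for this exercise
expected="4.15.0-1054-aws"
if [ "$(uname -r)" != "$expected" ]; then
  echo "WARNING: kernel is $(uname -r), expected $expected"
fi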
Our system generates Seccomp profiles that can be used when launching Docker containers. In this section we will learn the basics of running and killing containers. All Docker commands must be run with root privileges, so we will switch to the root user for the rest of the tutorial using the following command:
sudo -s
- View list of containers:
sudo docker container ls -a
The output will show a list of containers both running and killed. The format is as follows:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
- Launch a container with the default Seccomp policy:
sudo docker run --name [any-name] -td [docker-image-name]
example:
sudo docker run --name test1 -td nginx
- Now let's view the list of containers again:
sudo docker container ls -a
The output will show a list of containers both running and killed. The format is as follows:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
[long hash] nginx "/docker-entrypoint.…" 2 seconds ago Up 1 second 80/tcp test1
- If we try to launch another container with the same name, Docker will return an error, since each container must have a unique name. Run the following command:
sudo docker run --name test1 -td nginx
You should get an error with the following format:
docker: Error response from daemon: Conflict. The container name
"/test1" is already in use by container
"[long hash]". You have to remove (or rename) that container to be able
to reuse that name.
- Now we will kill the previously launched container.
sudo docker kill test1
- Let's view the list of containers again.
sudo docker container ls -a
The output should be as follows:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
[long hash] nginx "/docker-entrypoint.…" [x] minutes ago Exited(137) [x] seconds ago 80/tcp test1
- Now we will delete the previously launched container.
sudo docker rm test1
- Change your current working directory to the root of the repository (/home/ubuntu/confine).
cd /home/ubuntu/confine
- Open a new file and name it as you like; we will use myimages.json. If you choose another name, you need to change the rest of the commands accordingly. You can use your favorite text editor (vim, nano, emacs).
vim myimages.json
- Copy the following (the text in the box) into the file you opened.
{
"nginx": {
"enable": "true",
"image-name": "nginx",
"image-url": "nginx",
"dependencies": {}
}
}
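For reference, the same file can describe more than one image; each top-level key is one container image using the same fields shown above. The second entry below (redis) is purely hypothetical and only illustrates the structure; for this exercise the nginx entry alone is sufficient.
{
    "nginx": {
        "enable": "true",
        "image-name": "nginx",
        "image-url": "nginx",
        "dependencies": {}
    },
    "redis": {
        "enable": "true",
        "image-name": "redis",
        "image-url": "redis",
        "dependencies": {}
    }
}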
- Now we are ready to run Confine using the following command and generate the Seccomp profile for Nginx. Note: You must run the following command as root.
sudo python3.7 confine.py -l libc-callgraphs/glibc.callgraph -m libc-callgraphs/musllibc.callgraph -i myimages.json -o output/ -p default.seccomp.json -r results/ -g go.syscalls/
The script will now start analyzing the Nginx Docker image. In this section we will go through each step the script performs and what it outputs to the console:
a) The script prints the following lines, showing that it has started its analysis.
------------------------------------------------------------------------
////////////////////////////////////////////////////////////////////////
----->Starting analysis for image: nginx<-----
////////////////////////////////////////////////////////////////////////
b) Then it will start the monitoring phase, where it launches sysdig, the third-party tool we use to identify the binaries executed inside the container during the initial 60 seconds. (For more details on why we do this, please refer to the About page.) If this is the first time we are hardening this Docker image and we have not previously extracted the list of binaries and libraries, it will first print:
Cache doesn't exist, must extract binaries and libraries
Then it monitors the executed binaries by running sysdig, which prints the following lines:
--->Starting MONITOR phase:
Running sysdig multiple times. Run count: 1 from total: 3
Ran container sleeping for 60 seconds to generate logs and extract execve
system calls
len(psList) from sysdig: 39
Container: nginx extracted psList with 52 elements
Running sysdig multiple times. Run count: 2 from total: 3
Ran container sleeping for 60 seconds to generate logs and extract execve
system calls
len(psList) from sysdig: 48
Container: nginx extracted psList with 62 elements
Running sysdig multiple times. Run count: 3 from total: 3
Ran container sleeping for 60 seconds to generate logs and extract execve
system calls
len(psList) from sysdig: 45
Container: nginx extracted psList with 63 elements
Container: nginx PS List: {'env', '/usr/sbin/sh', '/usr/bin/basename', 'find',
'containerd-shim', '/docker-entrypoint.d/20-envsubst-on-templates.sh',
'/usr/bin/sort', '/usr/bin/sh', '/bin/grep', '/usr/local/sbin/nginx',
'md5sum', 'dumpe2fs', '/usr/local/sbin/dumpe2fs', '/usr/sbin/runc',
'/docker-entrypoint.sh', 'cut', '/lib/x86_64-linux-gnu/libc-2.28.so',
'/usr/bin/dumpe2fs', 'libnetwork-setkey', '/usr/sbin/nginx', '[vdso]',
'/usr/bin/find', '/usr/bin/dpkg-query', '/lib/x86_64-linux-gnu/libdl-2.28.so',
'/sbin/sh', '/sbin/dumpe2fs', '/bin/sed', '/usr/bin/env', '/usr/bin/touch',
'basename', '/usr/local/bin/nginx', '/bin/sh', '/usr/sbin/dumpe2fs',
'set-ipv6', '/usr/local/sbin/sh', 'touch',
'/lib/x86_64-linux-gnu/libz.so.1.2.11', 'sort', '[vsyscall]', 'runc',
'/lib/systemd/systemd-sysctl', 'sed', 'ifquery', '/usr/local/bin/dumpe2fs',
'/docker-entrypoint.d/10-listen-on-ipv6-by-default.sh',
'/usr/lib/x86_64-linux-gnu/libssl.so.1.1',
'/lib/x86_64-linux-gnu/libnss_files-2.28.so',
'/lib/x86_64-linux-gnu/libcrypt-2.28.so', '/lib/x86_64-linux-gnu/ld-2.28.so',
'nginx', '/lib/udev/bridge-network-interface', '/usr/bin/md5sum',
'/lib/x86_64-linux-gnu/libpcre.so.3.13.3',
'/usr/lib/x86_64-linux-gnu/libcrypto.so.1.1', '/usr/bin/cut', 'dpkg-query',
'grep', '/usr/bin/containerd-shim',
'/lib/x86_64-linux-gnu/libpthread-2.28.so', 'sh',
'/lib/udev/ifupdown-hotplug', '/sbin/ifquery', '/usr/local/bin/sh'}
Starting to copy identified binaries and libraries (This can take some
time...)
Finished copying identified binaries and libraries
<---Finished MONITOR phase
If we have previously run the dynamic analysis phase and already extracted all the binaries and libraries, it will only run once. We still need this run to generate the logs for the Docker image, which serve as our baseline for validating the correctness of the generated Seccomp profile.
c) The execution of the script can differ at this step, depending on whether the binaries need to be extracted. If the dynamic analysis has already extracted the set of binaries and libraries from the container, it does not need to copy them again and will skip this step. Otherwise, it first generates the list of binaries used in the container and then starts copying them.
Starting to copy identified binaries and libraries (This can take some
time...)
Finished copying identified binaries and libraries
<---Finished MONITOR phase
d) Next, the script extracts direct system calls using objdump. It goes over all the files copied from the container into the temporary output folder and identifies direct system call invocations, printing the start and finish of this phase.
--->Starting Direct Syscall Extraction
Extracting direct system call invocations
<---Finished Direct Syscall Extraction
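To see what this phase looks for, you can disassemble a binary yourself. On x86-64, a direct system call appears as a syscall instruction, usually preceded by an instruction that loads the system call number into %eax. The path below is the host's own libc on a standard Ubuntu install (an assumption, not a file produced by Confine) and only illustrates the idea:
# Show syscall instructions together with the two preceding instructions
objdump -d /lib/x86_64-linux-gnu/libc.so.6 | grep -B2 -w 'syscall' | head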
e) Then it extracts the list of functions imported by each binary and library.
--->Starting ANALYZE phase
Extracting imported functions and storing in libs.out
<---Finished ANALYZE phase
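As a rough illustration of what "imported functions" means here, the dynamic symbol table of a binary lists the functions it expects to resolve from shared libraries such as libc, and objdump marks them as undefined (*UND*). For example, for /bin/ls on the host (again just an illustration, not part of Confine's output):
# List the first few functions /bin/ls imports from shared libraries
objdump -T /bin/ls | grep UND | head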
f) After it has extracted all the direct system calls and combined the imported libc functions with the set of system calls required by those libc functions, it generates the set of prohibited system calls and prints the following lines:
--->Starting INTEGRATE phase, extracting the list required system calls
Traversing libc call graph to identify required system calls
Generating final system call filter list
************************************************************************************
Container Name: nginx Num of filtered syscalls (original): 149
************************************************************************************
<---Finished INTEGRATE phase
g) Now that the unnecessary system calls have been identified, we generate the corresponding Seccomp profile and validate that it works correctly by launching the container with the generated Seccomp profile.
--->Validating generated Seccomp profile: results//nginx.seccomp.json
************************************************************************************
Finished validation. Container for image: nginx was hardened SUCCESSFULLY!
************************************************************************************
If you see the "Container for image: $$$ was hardened SUCCESSFULLY!" message, it means that the Seccomp profile passed our validation steps.
IMPORTANT: If you did not see the message above please ask for help from one of the panelists.
h) Finally, the analysis of the Nginx Docker image finishes and Confine prints the following lines:
///////////////////////////////////////////////////////////////////////////////////////
----->Finished extracting system calls for nginx, sleeping for 5 seconds<-----
///////////////////////////////////////////////////////////////////////////////////////
---------------------------------------------------------------------------------------
- Now that we have generated the Seccomp profile, we can look into the results. Please check the output/nginx folder to see the binaries and libraries identified for nginx.
ls -lh ./output/nginx
Also look into results/ and view nginx.seccomp.json, which stores the generated Seccomp profile.
cat ./results/nginx.seccomp.json
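The file follows Docker's Seccomp profile format. As a hedged sketch of the structure only (the profile Confine actually generates contains far more entries and may differ in detail), a blacklist-style profile that allows everything except the listed system calls looks roughly like this:
{
    "defaultAction": "SCMP_ACT_ALLOW",
    "architectures": ["SCMP_ARCH_X86_64"],
    "syscalls": [
        { "name": "add_key", "action": "SCMP_ACT_ERRNO", "args": [] },
        { "name": "keyctl", "action": "SCMP_ACT_ERRNO", "args": [] }
    ]
}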
In this part of the hands-on exercise, we would like to work with the hardened container to see if any operation has become restricted. To do so:
- Launch the container using the generated Seccomp profile:
sudo docker run --name container-hardened --security-opt seccomp=results/nginx.seccomp.json -td nginx
- Extract the IP address of the running container:
sudo docker inspect --format='{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' container-hardened
- Fetch the default index page of Nginx from the host. Does it work?
wget http://[IP-Address]
- Try connecting to the hardened container and run some commands.
sudo docker exec -it container-hardened /bin/bash
As you can see, this does not work. That's because /bin/bash was not identified as necessary for running the container. Now let's test with the sh shell.
sudo docker exec -it container-hardened /bin/sh
- Now that we are inside the container let's see which commands work and which don't.
Should work:
ls
cp
Should not work:
apt-get update
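For comparison, you can start a second, unhardened container and confirm that the same command succeeds there (kill and remove it afterwards as shown earlier):
sudo docker run --name container-default -td nginx
sudo docker exec -it container-default /bin/bash
apt-get update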
In this part we would like to explore the security benefits of applying a Seccomp filter like the one generated for Nginx.
- How many system calls can be filtered?
cat results/nginx.seccomp.json | grep name | wc -l
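If you also want to see which system calls are filtered rather than only how many, the following one-liner extracts and sorts the names (assuming the profile is pretty-printed with one "name" field per line, which is what the count above relies on):
grep '"name"' results/nginx.seccomp.json | sed 's/.*: *"\(.*\)".*/\1/' | sort | head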
- Map the filtered system calls to kernel CVEs mitigated.
You can use the filterProfileToCve.py script to map the generated Seccomp profile to the mitigated CVEs.
python3.7 filterProfileToCve.py -c cve.files/cveToStartNodes.csv.validated -f results/profile.report.details.csv -o results -v cve.files/cveToFile.json.type.csv --manualcvefile cve.files/cve.to.syscall.manual --manualtypefile cve.files/cve.to.vulntype.manual
-c: Path to the file containing a map between each CVE and all the starting nodes which can reach the vulnerable point in the kernel call graph
-f: This file is generated after you run Confine for a set of containers. It can be found in the results path in the root of the repository.
-o: Prefix of the file in which to store the results.
-v: A CSV file containing the mapping of CVEs to their vulnerability type.
--manualcvefile: Some CVEs have been gathered manually which can be specified using this option.
--manualtypefile: A file containing the mapping of CVEs identified manually to their respective vulnerability type.
-d: Enable debug mode, which prints many more log messages.
Note: The scripts required to generate the mapping between the kernel functions and their CVEs are in a separate repository. You do not need to recreate those results.
After you run the script above, a single file will be created, named [results].container.csv; you can change the prefix through the -o option. Each line corresponds to a CVE mitigated in at least one of the Docker images listed in the profile.report.details.csv file.
Line format:
cveid;system call names(can be more than one);cve-type(can be more than
one);was-it-mitigated-by-the-default-seccomp-policies;number-of-docker-images-affected;
name of docker images
CVE-2015-8539;add_key, keyctl;Denial Of Service, Gain privileges;True;1;nginx
- Open the generated file (results.container.csv) and see how many CVEs can be mitigated. Also check whether CVE-2017-5123 is mitigated by applying the Nginx Seccomp profile.
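Both checks can also be done from the command line; since each line of the file corresponds to one mitigated CVE, counting lines gives the number of CVEs, and grep shows whether a particular CVE appears:
wc -l results.container.csv
grep 'CVE-2017-5123' results.container.csv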