Mirror of https://github.com/telekom-security/tpotce.git, synced 2025-07-02 01:27:27 -04:00

Compare commits (7 commits):

- 198305fc57
- c50d147926
- 0405a20402
- 775ea76b98
- b98d9d352e
- c277f313dc
- b12cf0e418
CHANGELOG.md (21)

@@ -1,26 +1,5 @@
# Changelog

## 20200316
- **Move from Sid to Stable**
  - Debian Stable now has all the packages and versions we need for T-Pot. As a consequence we can now move to the `stable` branch.

## 20200310
- **Add 2FA to Cockpit**
  - Just run `2fa.sh` to enable two factor authentication in Cockpit.
- **Find fastest mirror with netselect-apt**
  - Netselect-apt will find the fastest mirror close to you (outgoing ICMP required).

## 20200309
- **Bump Nextgen to 20.06**
  - All NextGen images have been rebuilt to their latest master.
  - ElasticStack bumped to 7.6.1 (Elasticsearch now needs at least 2048 MB of RAM, T-Pot at least 8 GB) and tweaked to accommodate the changes of 7.x.
  - Fixed errors in Tanner / Snare which will now handle downloads of malware via SSL and store them correctly (thanks to @afeena).
  - Fixed errors in Heralding which will now improve on RDP connections (thanks to @johnnykv, @realsdx).
  - Fixed error in honeytrap which will now build in Debian/Buster (thanks to @tillmannw).
  - Mailoney is now logging in JSON format (thanks to @monsherko).
  - Base T-Pot landing page on Heimdall.
  - Tweaking of tools and some minor bug fixing.

## 20200116
- **Bump ELK to latest 6.8.6**
- **Update ISO image to fix upstream bug of missing kernel modules**

README.md (78)

@@ -1,6 +1,6 @@

T-Pot 19.03 runs on Debian (Stable), is based heavily on
T-Pot 19.03 runs on Debian (Sid), is based heavily on
[docker](https://www.docker.com/), [docker-compose](https://docs.docker.com/compose/)

@@ -43,6 +43,7 @@ Furthermore we use the following tools
# Table of Contents
- [Changelog](#changelog)
- [Technical Concept](#concept)
- [System Requirements](#requirements)
- [Installation](#installation)

@@ -73,11 +74,66 @@ Furthermore we use the following tools
- [Credits](#credits)
- [Stay tuned](#staytuned)
- [Testimonial](#testimonial)
- [Fun Fact](#funfact)

<a name="changelog"></a>
# Release Notes
- **Move from Ubuntu 18.04 to Debian (Sid)**
  - For almost 5 years Ubuntu LTS versions were our distributions of choice. Last year we made a design choice for T-Pot to be closer to a rolling release model, allowing us to issue smaller changes and releases in a more timely manner. The distribution of choice is Debian (Sid / unstable), which provides us with the latest advancements in a Debian based distribution.
- **Include HoneyPy honeypot**
  - *HoneyPy* is now included in the NEXTGEN installation type.
- **Include Suricata 4.1.3**
  - Building *Suricata 4.1.3* from scratch to enable JA3 and overall better protocol support.
- **Update tools to the latest versions**
  - ELK Stack 6.6.2
  - CyberChef 8.27.0
  - SpiderFoot v3.0
  - Cockpit 188
  - NGINX is now built to enforce TLS 1.3 on the T-Pot WebUI
- **Update honeypots**
  - Where possible / feasible the honeypots have been updated to their latest versions.
  - *Cowrie* now supports *HASSH* generated hashes, which allows for easier identification of an attacker across IP addresses.
  - *Heralding* now supports *SOCKS5* emulation.
- **Update Dashboards & Visualizations**
  - *Offset Dashboard* added to easily spot changes in attacks on a single dashboard in a 24h time window.
  - *Cowrie Dashboard* modified to integrate *HASSH* support / visualizations.
  - *HoneyPy Dashboard* added to support the latest honeypot addition.
  - *Suricata Dashboard* modified to integrate *JA3* support / visualizations.
- **Debian mirror selection**
  - During base install you now have to manually select a mirror.
  - Upon T-Pot install the mirror closest to you will be determined automatically; `netselect-apt` requires you to allow outbound ICMP.
  - This solves peering problems for most users, speeding up installation and updates.
- **Bugs**
  - Fixed issue #298 where the import and export of objects on the shell did not work.
  - Fixed issue #313 where Spiderfoot raised a KeyError, which had previously been fixed upstream.
  - Fixed error in Suricata where the path for reference.config changed.
- **Release Cycle**
  - As far as possible we will now integrate changes faster into the master branch, eliminating the need for monolithic releases. The update feature will be continuously improved to that end. However, this might not account for all feature changes.
- **HPFEEDS Opt-In**
  - If you want to share your T-Pot data with a 3rd party HPFEEDS broker such as [SISSDEN](https://sissden.eu) you can do so by creating an account at the SISSDEN portal and running `hpfeeds_optin.sh` on T-Pot.
- **Update Feature**
  - For those who like to live on the bleeding edge of T-Pot development there is now an update script available in `/opt/tpot/update.sh`.
  - This feature is beta and is mostly intended to provide you with the latest development advances without the need of reinstalling T-Pot.
- **Deprecated tools**
  - *ctop* will no longer be part of T-Pot.
- **Fix #332**
  - If T-Pot, contrary to the requirements, does not have full internet access, netselect-apt fails to determine the fastest mirror, as it needs outgoing ICMP and UDP. Should netselect-apt fail, the default mirrors will be used.
- **Improve install speed with apt-fast**
  - Migrating from a stable base install to Debian (Sid) requires downloading lots of packages. Depending on your geo location the download speed was already improved by introducing netselect-apt to determine the fastest mirror. With apt-fast the downloads will be even faster by downloading packages not only in parallel but also with multiple connections per package.
- **HPFEEDS Opt-In commandline option**
  - Pass an hpfeeds config file as a commandline argument.
  - The hpfeeds config is saved in `/data/ews/conf/hpfeeds.cfg`.
  - The update script restores the hpfeeds config.
- **Ansible T-Pot Deployment**
  - Transitioned from a bash script to all Ansible.
  - Reusable Ansible playbook for OpenStack clouds.
  - Example showcase with our Open Telekom Cloud.
  - Adaptable for other cloud providers.

<a name="concept"></a>
# Technical Concept

T-Pot is based on the network installer Debian (Stable).
T-Pot is based on the network installer Debian (Stretch). During installation the whole system will be updated to Debian (Sid).
The honeypot daemons, as well as the other support components in use, have been containerized using [docker](http://docker.io).
This allows us to run multiple honeypot daemons on the same network interface while maintaining a small footprint and constraining each honeypot within its own environment.

@@ -246,7 +302,7 @@ In some cases it is necessary to install Debian 9.7 (Stretch) on your own:
- Within your company you have to set up special policies, software etc.
- You just like to stay on top of things.

The T-Pot Universal Installer will upgrade the system and install all required T-Pot dependencies.
The T-Pot Universal Installer will upgrade the system to Debian (Sid) and install all required T-Pot dependencies.

Just follow these steps:

@@ -331,7 +387,7 @@ In case you need external Admin UI access, forward TCP port 64294 to T-Pot, see
In case you need external SSH access, forward TCP port 64295 to T-Pot, see below.
In case you need external Web UI access, forward TCP port 64297 to T-Pot, see below.

T-Pot requires outgoing git, http, https connections for updates (Debian, Docker, GitHub, PyPi) and attack submission (ewsposter, hpfeeds). Ports and availability may vary based on your geographical location. Also, during first install, outgoing ICMP is additionally required to find the closest and fastest mirror to you.
T-Pot requires outgoing git, http, https connections for updates (Debian, Docker, GitHub, PyPi) and attack submission (ewsposter, hpfeeds). Ports and availability may vary based on your geographical location.

<a name="updates"></a>
# Updates
@@ -340,7 +396,7 @@ For the ones of you who want to live on the bleeding edge of T-Pot development w

The update script will:
- **mercilessly** overwrite local changes to be in sync with the T-Pot master branch
- upgrade the system to the packages available in Debian (Stable)
- upgrade the system to the packages available in Debian (Sid)
- update all resources to be in sync with the T-Pot master branch
- ensure all T-Pot relevant system files will be patched / copied into the original T-Pot state
- restore your custom ews.cfg and HPFEEDS settings from `/data/ews/conf`
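
As a minimal sketch, the update feature can be invoked from the default install path `/opt/tpot/update.sh` referenced in the release notes; running it as root and the exact invocation are assumptions here, so check the script and any command line options on your installation first.

```bash
# Sketch only: run the self-update from the default T-Pot install path.
# Assumes root privileges and an unmodified /opt/tpot layout.
sudo su -
cd /opt/tpot/
./update.sh
```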

@@ -368,8 +424,6 @@ If you do not have a SSH client at hand and still want to access the machine via
- user: **[tsec or user]** *you chose during one of the post install methods*
- pass: **[password]** *you chose during the installation*

You can also add two factor authentication to Cockpit just by running `2fa.sh` on the command line.



<a name="kibana"></a>
@@ -433,8 +487,9 @@ We encourage you not to disable the data submission as it is the main purpose of

<a name="hpfeeds-optin"></a>
## Opt-In HPFEEDS Data Submission
As an Opt-In it is now possible to also share T-Pot data with 3rd party HPFEEDS brokers.
If you want to share your T-Pot data you simply have to register an account with a 3rd party broker with its own benefits towards the community. You simply run `hpfeeds_optin.sh`, which will ask for your credentials. It will automatically update `/opt/tpot/etc/tpot.yml` to deliver events to your desired broker.
As an Opt-In it is now possible to also share T-Pot data with 3rd party HPFEEDS brokers, such as [SISSDEN](https://sissden.eu).
If you want to share your T-Pot data you simply have to register an account with a 3rd party broker with its own benefits towards the community. Once registered you will receive your credentials to share events with the broker. In T-Pot you simply run `hpfeeds_optin.sh`, which will ask for your credentials; in the case of SISSDEN this is just `Ident` and `Secret`, everything else is pre-configured.
It will automatically update `/opt/tpot/etc/tpot.yml` to deliver events to your desired broker.

The script can also accept a config file as an argument, e.g. `./hpfeeds_optin.sh --conf=hpfeeds.cfg`.
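
As a hedged sketch, such a config might look like the following. The key names mirror the variables set by the `fuSISSDEN` / `fuGENERIC` functions of `hpfeeds_optin.sh` (shown further down in this compare), and the host, port, channel and certificate values are the SISSDEN defaults from that script; the exact on-disk format of `hpfeeds.cfg` is an assumption, so verify it against the script before relying on it.

```bash
# Hypothetical hpfeeds.cfg sketch: key names follow the variables in hpfeeds_optin.sh,
# SISSDEN values are the defaults from the script's fuSISSDEN function.
# Replace the placeholders with the credentials you received from the broker.
cat > hpfeeds.cfg <<'EOF'
myENABLE=true
myHOST=hpfeeds.sissden.eu
myPORT=10000
myCHANNEL=t-pot.events
myCERT=/opt/ewsposter/sissden.pem
myIDENT=<your-ident>
mySECRET=<your-secret>
myFORMAT=json
EOF

# Apply it; the script updates /opt/tpot/etc/tpot.yml to deliver events to the broker.
./hpfeeds_optin.sh --conf=hpfeeds.cfg
```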
@@ -531,3 +586,8 @@ We will be releasing a new version of T-Pot about every 6-12 months.

# Testimonial
One of the greatest pieces of feedback we have gotten so far is from one of the Conpot developers:<br>
***"[...] I highly recommend T-Pot which is ... it's not exactly a swiss army knife .. it's more like a swiss army soldier, equipped with a swiss army knife. Inside a tank. A swiss tank. [...]"***

<a name="funfact"></a>
# Fun Fact

In an effort to save the environment we are now brewing our own Mate Ice Tea, and have consumed 73 liters so far during T-Pot 19.03 development 😇

bin/2fa.sh (deleted file, 77 lines)

@@ -1,77 +0,0 @@
#!/bin/bash

# Make sure script is started as non-root.
myWHOAMI=$(whoami)
if [ "$myWHOAMI" = "root" ]
  then
    echo "Need to run as non-root ..."
    echo ""
    exit
fi

# set vars, check deps
myPAM_COCKPIT_FILE="/etc/pam.d/cockpit"
if ! [ -s "$myPAM_COCKPIT_FILE" ];
  then
    echo "### Cockpit PAM module config does not exist. Something went wrong."
    echo ""
    exit 1
fi
myPAM_COCKPIT_GA="

# google authenticator for two-factor
auth required pam_google_authenticator.so
"
myAUTHENTICATOR=$(which google-authenticator)
if [ "$myAUTHENTICATOR" == "" ];
  then
    echo "### Could not locate google-authenticator, trying to install (if asked provide root password)."
    echo ""
    sudo apt-get update
    sudo apt-get install -y libpam-google-authenticator
    exec "$1" "$2"
    exit 1
fi


# write PAM changes
function fuWRITE_PAM_CHANGES {
  myCHECK=$(cat $myPAM_COCKPIT_FILE | grep -c "google")
  if ! [ "$myCHECK" == "0" ];
    then
      echo "### PAM config already enabled. Skipped."
      echo ""
    else
      echo "### Updating PAM config for Cockpit (if asked provide root password)."
      echo "$myPAM_COCKPIT_GA" | sudo tee -a $myPAM_COCKPIT_FILE
      sudo systemctl restart cockpit
  fi
}

# create 2fa
function fuGEN_TOKEN {
  echo "### Now generating token for Google Authenticator."
  echo ""
  google-authenticator -t -d -r 3 -R 30 -w 17
}


# main
echo "### This script will enable Two Factor Authentication for Cockpit."
echo ""
echo "### Please download one of the many authenticator apps from the appstore of your choice."
echo ""
while true;
  do
    read -p "### Ready to start (y/n)? " myANSWER
    case $myANSWER in
      [Yy]* ) echo "### OK. Starting ..."; break;;
      [Nn]* ) echo "### Exiting."; exit;;
    esac
done

fuWRITE_PAM_CHANGES
fuGEN_TOKEN

echo "Done. Re-run this script by every user who needs Cockpit access."
echo ""

@@ -32,7 +32,7 @@ trap fuCLEANUP EXIT
# Export index patterns
mkdir -p patterns
echo $myCOL1"### Now exporting"$myCOL0 $myINDEXCOUNT $myCOL1"index pattern fields." $myCOL0
curl -s -XGET ''$myKIBANA'api/saved_objects/index-pattern/'$myINDEXID'' | jq '. | {attributes, references}' > patterns/$myINDEXID.json &
curl -s -XGET ''$myKIBANA'api/saved_objects/index-pattern/'$myINDEXID'' | jq '. | {attributes}' > patterns/$myINDEXID.json &
echo

# Export dashboards
@@ -41,7 +41,7 @@ echo $myCOL1"### Now exporting"$myCOL0 $(echo $myDASHBOARDS | wc -w) $myCOL1"das
for i in $myDASHBOARDS;
do
echo $myCOL1"###### "$i $myCOL0
curl -s -XGET ''$myKIBANA'api/saved_objects/dashboard/'$i'' | jq '. | {attributes, references}' > dashboards/$i.json &
curl -s -XGET ''$myKIBANA'api/saved_objects/dashboard/'$i'' | jq '. | {attributes}' > dashboards/$i.json &
done;
echo

@@ -51,7 +51,7 @@ echo $myCOL1"### Now exporting"$myCOL0 $(echo $myVISUALIZATIONS | wc -w) $myCOL1
for i in $myVISUALIZATIONS;
do
echo $myCOL1"###### "$i $myCOL0
curl -s -XGET ''$myKIBANA'api/saved_objects/visualization/'$i'' | jq '. | {attributes, references}' > visualizations/$i.json &
curl -s -XGET ''$myKIBANA'api/saved_objects/visualization/'$i'' | jq '. | {attributes}' > visualizations/$i.json &
done;
echo

@@ -61,7 +61,7 @@ echo $myCOL1"### Now exporting"$myCOL0 $(echo $mySEARCHES | wc -w) $myCOL1"searc
for i in $mySEARCHES;
do
echo $myCOL1"###### "$i $myCOL0
curl -s -XGET ''$myKIBANA'api/saved_objects/search/'$i'' | jq '. | {attributes, references}' > searches/$i.json &
curl -s -XGET ''$myKIBANA'api/saved_objects/search/'$i'' | jq '. | {attributes}' > searches/$i.json &
done;
echo

@ -10,6 +10,20 @@ fi
|
||||
|
||||
myTPOTYMLFILE="/opt/tpot/etc/tpot.yml"
|
||||
|
||||
function fuSISSDEN () {
|
||||
echo
|
||||
echo "You chose SISSDEN, you just need to provide ident and secret"
|
||||
echo
|
||||
myENABLE="true"
|
||||
myHOST="hpfeeds.sissden.eu"
|
||||
myPORT="10000"
|
||||
myCHANNEL="t-pot.events"
|
||||
myCERT="/opt/ewsposter/sissden.pem"
|
||||
read -p "Ident: " myIDENT
|
||||
read -p "Secret: " mySECRET
|
||||
myFORMAT="json"
|
||||
}
|
||||
|
||||
function fuGENERIC () {
|
||||
echo
|
||||
echo "You chose generic, please provide all the details of the broker"
|
||||
@ -105,7 +119,8 @@ echo
|
||||
echo
|
||||
echo "Please choose your broker"
|
||||
echo "---------------------------"
|
||||
echo "[1] - Generic (enter details manually)"
|
||||
echo "[1] - SISSDEN"
|
||||
echo "[2] - Generic (enter details manually)"
|
||||
echo "[0] - Opt out of HPFEEDS"
|
||||
echo "[q] - Do not agree end exit"
|
||||
echo
|
||||
@ -115,6 +130,10 @@ while [ 1 != 2 ]
|
||||
echo $mySELECT
|
||||
case "$mySELECT" in
|
||||
[1])
|
||||
fuSISSDEN
|
||||
break
|
||||
;;
|
||||
[2])
|
||||
fuGENERIC
|
||||
break
|
||||
;;
|
||||
|
@ -1,27 +0,0 @@
|
||||
#!/bin/bash
|
||||
# Make sure ES is available
|
||||
myES="http://127.0.0.1:64298/"
|
||||
myESSTATUS=$(curl -s -XGET ''$myES'_cluster/health' | jq '.' | grep -c green)
|
||||
if ! [ "$myESSTATUS" = "1" ]
|
||||
then
|
||||
echo "### Elasticsearch is not available, try starting via 'systemctl start elk'."
|
||||
exit 1
|
||||
else
|
||||
echo "### Elasticsearch is available, now continuing."
|
||||
echo
|
||||
fi
|
||||
|
||||
function fuMYTOPIPS {
|
||||
curl -s -XGET $myES"_search" -H 'Content-Type: application/json' -d'
|
||||
{
|
||||
"aggs": {
|
||||
"ips": {
|
||||
"terms": { "field": "src_ip.keyword", "size": 100 }
|
||||
}
|
||||
},
|
||||
"size" : 0
|
||||
}'
|
||||
}
|
||||
|
||||
echo "### Aggregating top 100 source IPs in ES"
|
||||
fuMYTOPIPS | jq '.aggregations.ips.buckets[].key' | tr -d '"'
|
@ -1,11 +1,10 @@
|
||||
FROM alpine:latest
|
||||
FROM alpine
|
||||
#
|
||||
# Include dist
|
||||
ADD dist/ /root/dist/
|
||||
#
|
||||
# Install packages
|
||||
RUN sed -i 's/dl-cdn/dl-2/g' /etc/apk/repositories && \
|
||||
apk -U add \
|
||||
RUN apk -U add \
|
||||
git \
|
||||
libcap \
|
||||
python3 \
|
||||
@ -21,7 +20,7 @@ RUN sed -i 's/dl-cdn/dl-2/g' /etc/apk/repositories && \
|
||||
addgroup -g 2000 adbhoney && \
|
||||
adduser -S -H -s /bin/ash -u 2000 -D -g 2000 adbhoney && \
|
||||
chown -R adbhoney:adbhoney /opt/adbhoney && \
|
||||
setcap cap_net_bind_service=+ep /usr/bin/python3.8 && \
|
||||
setcap cap_net_bind_service=+ep /usr/bin/python3.7 && \
|
||||
#
|
||||
# Clean up
|
||||
apk del --purge git \
|
||||
|
@ -14,7 +14,7 @@ services:
|
||||
- adbhoney_local
|
||||
ports:
|
||||
- "5555:5555"
|
||||
image: "dtagdevsec/adbhoney:2006"
|
||||
image: "dtagdevsec/adbhoney:1903"
|
||||
read_only: true
|
||||
volumes:
|
||||
- /data/adbhoney/log:/opt/adbhoney/log
|
||||
|
@ -1,11 +1,10 @@
|
||||
FROM alpine:latest
|
||||
FROM alpine
|
||||
#
|
||||
# Include dist
|
||||
ADD dist/ /root/dist/
|
||||
#
|
||||
# Setup env and apt
|
||||
RUN sed -i 's/dl-cdn/dl-2/g' /etc/apk/repositories && \
|
||||
apk -U upgrade && \
|
||||
RUN apk -U upgrade && \
|
||||
apk add build-base \
|
||||
git \
|
||||
libffi \
|
||||
@ -24,6 +23,7 @@ RUN sed -i 's/dl-cdn/dl-2/g' /etc/apk/repositories && \
|
||||
cd /opt/ && \
|
||||
git clone --depth=1 https://github.com/cymmetria/ciscoasa_honeypot && \
|
||||
cd ciscoasa_honeypot && \
|
||||
pip3 install --no-cache-dir --upgrade pip && \
|
||||
pip3 install --no-cache-dir -r requirements.txt && \
|
||||
cp /root/dist/asa_server.py /opt/ciscoasa_honeypot && \
|
||||
chown -R ciscoasa:ciscoasa /opt/ciscoasa_honeypot && \
|
||||
|
@ -13,7 +13,7 @@ services:
|
||||
ports:
|
||||
- "5000:5000/udp"
|
||||
- "8443:8443"
|
||||
image: "dtagdevsec/ciscoasa:2006"
|
||||
image: "dtagdevsec/ciscoasa:1903"
|
||||
read_only: true
|
||||
volumes:
|
||||
- /data/ciscoasa/log:/var/log/ciscoasa
|
||||
|
@ -1,8 +1,7 @@
|
||||
FROM alpine:latest
|
||||
FROM alpine
|
||||
#
|
||||
# Install packages
|
||||
RUN sed -i 's/dl-cdn/dl-2/g' /etc/apk/repositories && \
|
||||
apk -U add \
|
||||
RUN apk -U add \
|
||||
git \
|
||||
libcap \
|
||||
openssl \
|
||||
|
@ -14,7 +14,7 @@ services:
|
||||
- citrixhoneypot_local
|
||||
ports:
|
||||
- "443:443"
|
||||
image: "dtagdevsec/citrixhoneypot:2006"
|
||||
image: "dtagdevsec/citrixhoneypot:1903"
|
||||
read_only: true
|
||||
volumes:
|
||||
- /data/citrixhoneypot/logs:/opt/citrixhoneypot/logs
|
||||
|
@ -35,7 +35,7 @@ services:
|
||||
- "2121:21"
|
||||
- "44818:44818"
|
||||
- "47808:47808"
|
||||
image: "dtagdevsec/conpot:2006"
|
||||
image: "dtagdevsec/conpot:1903"
|
||||
read_only: true
|
||||
volumes:
|
||||
- /data/conpot/log:/var/log/conpot
|
||||
@ -58,7 +58,7 @@ services:
|
||||
ports:
|
||||
# - "161:161"
|
||||
- "2404:2404"
|
||||
image: "dtagdevsec/conpot:2006"
|
||||
image: "dtagdevsec/conpot:1903"
|
||||
read_only: true
|
||||
volumes:
|
||||
- /data/conpot/log:/var/log/conpot
|
||||
@ -80,7 +80,7 @@ services:
|
||||
- conpot_local_guardian_ast
|
||||
ports:
|
||||
- "10001:10001"
|
||||
image: "dtagdevsec/conpot:2006"
|
||||
image: "dtagdevsec/conpot:1903"
|
||||
read_only: true
|
||||
volumes:
|
||||
- /data/conpot/log:/var/log/conpot
|
||||
@ -102,7 +102,7 @@ services:
|
||||
- conpot_local_ipmi
|
||||
ports:
|
||||
- "623:623"
|
||||
image: "dtagdevsec/conpot:2006"
|
||||
image: "dtagdevsec/conpot:1903"
|
||||
read_only: true
|
||||
volumes:
|
||||
- /data/conpot/log:/var/log/conpot
|
||||
@ -125,7 +125,7 @@ services:
|
||||
ports:
|
||||
- "1025:1025"
|
||||
- "50100:50100"
|
||||
image: "dtagdevsec/conpot:2006"
|
||||
image: "dtagdevsec/conpot:1903"
|
||||
read_only: true
|
||||
volumes:
|
||||
- /data/conpot/log:/var/log/conpot
|
||||
|
@ -4,8 +4,7 @@ FROM alpine
|
||||
ADD dist/ /root/dist/
|
||||
#
|
||||
# Get and install dependencies & packages
|
||||
RUN sed -i 's/dl-cdn/dl-2/g' /etc/apk/repositories && \
|
||||
apk -U add \
|
||||
RUN apk -U add \
|
||||
bash \
|
||||
build-base \
|
||||
git \
|
||||
@ -30,7 +29,7 @@ RUN sed -i 's/dl-cdn/dl-2/g' /etc/apk/repositories && \
|
||||
# Install cowrie
|
||||
mkdir -p /home/cowrie && \
|
||||
cd /home/cowrie && \
|
||||
git clone --depth=1 https://github.com/micheloosterhof/cowrie -b v2.0.2 && \
|
||||
git clone --depth=1 https://github.com/micheloosterhof/cowrie -b v2.0.0 && \
|
||||
cd cowrie && \
|
||||
mkdir -p log && \
|
||||
pip3 install --upgrade pip && \
|
||||
|
docker/cowrie/Dockerfile.old (new file, 70 lines)

@@ -0,0 +1,70 @@
|
||||
FROM alpine
|
||||
|
||||
# Include dist
|
||||
ADD dist/ /root/dist/
|
||||
|
||||
# Get and install dependencies & packages
|
||||
RUN apk -U --no-cache add \
|
||||
bash \
|
||||
build-base \
|
||||
git \
|
||||
gmp-dev \
|
||||
libcap \
|
||||
libffi-dev \
|
||||
mpc1-dev \
|
||||
mpfr-dev \
|
||||
openssl \
|
||||
openssl-dev \
|
||||
python \
|
||||
python-dev \
|
||||
py-bcrypt \
|
||||
py-mysqldb \
|
||||
py-pip \
|
||||
py-requests \
|
||||
py-setuptools && \
|
||||
|
||||
# Setup user
|
||||
addgroup -g 2000 cowrie && \
|
||||
adduser -S -s /bin/ash -u 2000 -D -g 2000 cowrie && \
|
||||
|
||||
# Install cowrie
|
||||
mkdir -p /home/cowrie && \
|
||||
cd /home/cowrie && \
|
||||
git clone --depth=1 https://github.com/micheloosterhof/cowrie -b 1.5.3 && \
|
||||
cd cowrie && \
|
||||
mkdir -p log && \
|
||||
pip install --upgrade pip && \
|
||||
pip install --upgrade -r requirements.txt && \
|
||||
|
||||
# Setup configs
|
||||
setcap cap_net_bind_service=+ep /usr/bin/python2.7 && \
|
||||
cp /root/dist/cowrie.cfg /home/cowrie/cowrie/cowrie.cfg && \
|
||||
chown cowrie:cowrie -R /home/cowrie/* /usr/lib/python2.7/site-packages/twisted/plugins && \
|
||||
|
||||
# Start Cowrie once to prevent dropin.cache errors upon container start caused by read-only filesystem
|
||||
su - cowrie -c "export PYTHONPATH=/home/cowrie/cowrie:/home/cowrie/cowrie/src && \
|
||||
cd /home/cowrie/cowrie && \
|
||||
/usr/bin/twistd --uid=2000 --gid=2000 -y cowrie.tac --pidfile cowrie.pid cowrie &" && \
|
||||
sleep 10 && \
|
||||
|
||||
# Clean up
|
||||
apk del --purge build-base \
|
||||
git \
|
||||
gmp-dev \
|
||||
libcap \
|
||||
libffi-dev \
|
||||
mpc1-dev \
|
||||
mpfr-dev \
|
||||
openssl-dev \
|
||||
python-dev \
|
||||
py-mysqldb \
|
||||
py-pip && \
|
||||
rm -rf /root/* && \
|
||||
rm -rf /var/cache/apk/* && \
|
||||
rm -rf /home/cowrie/cowrie/cowrie.pid
|
||||
|
||||
# Start cowrie
|
||||
ENV PYTHONPATH /home/cowrie/cowrie:/home/cowrie/cowrie/src
|
||||
WORKDIR /home/cowrie/cowrie
|
||||
USER cowrie:cowrie
|
||||
CMD ["/usr/bin/twistd", "--nodaemon", "-y", "cowrie.tac", "--pidfile", "/tmp/cowrie/cowrie.pid", "cowrie"]
|
@ -18,7 +18,7 @@ services:
|
||||
ports:
|
||||
- "22:22"
|
||||
- "23:23"
|
||||
image: "dtagdevsec/cowrie:2006"
|
||||
image: "dtagdevsec/cowrie:1903"
|
||||
read_only: true
|
||||
volumes:
|
||||
- /data/cowrie/downloads:/home/cowrie/cowrie/dl
|
||||
|
@ -14,5 +14,5 @@ services:
|
||||
- cyberchef_local
|
||||
ports:
|
||||
- "127.0.0.1:64299:8000"
|
||||
image: "dtagdevsec/cyberchef:2006"
|
||||
image: "dtagdevsec/cyberchef:1903"
|
||||
read_only: true
|
||||
|
@ -27,7 +27,7 @@ services:
|
||||
- "5060:5060/udp"
|
||||
- "5061:5061"
|
||||
- "27017:27017"
|
||||
image: "dtagdevsec/dionaea:2006"
|
||||
image: "dtagdevsec/dionaea:1903"
|
||||
read_only: true
|
||||
volumes:
|
||||
- /data/dionaea/roots/ftp:/opt/dionaea/var/dionaea/roots/ftp
|
||||
|
@ -1,160 +0,0 @@
|
||||
# T-Pot Image Builder (use only for building docker images)
|
||||
version: '2.3'
|
||||
|
||||
services:
|
||||
|
||||
##################
|
||||
#### Honeypots
|
||||
##################
|
||||
|
||||
# Adbhoney service
|
||||
adbhoney:
|
||||
build: adbhoney/.
|
||||
image: "dtagdevsec/adbhoney:2006"
|
||||
|
||||
# Ciscoasa service
|
||||
ciscoasa:
|
||||
build: ciscoasa/.
|
||||
image: "dtagdevsec/ciscoasa:2006"
|
||||
|
||||
# CitrixHoneypot service
|
||||
citrixhoneypot:
|
||||
build: citrixhoneypot/.
|
||||
image: "dtagdevsec/citrixhoneypot:2006"
|
||||
|
||||
# Conpot IEC104 service
|
||||
conpot_IEC104:
|
||||
build: conpot/.
|
||||
image: "dtagdevsec/conpot:2006"
|
||||
|
||||
# Cowrie service
|
||||
cowrie:
|
||||
build: cowrie/.
|
||||
image: "dtagdevsec/cowrie:2006"
|
||||
|
||||
# Dionaea service
|
||||
dionaea:
|
||||
build: dionaea/.
|
||||
image: "dtagdevsec/dionaea:2006"
|
||||
|
||||
# Glutton service
|
||||
glutton:
|
||||
build: glutton/.
|
||||
image: "dtagdevsec/glutton:2006"
|
||||
|
||||
# Heralding service
|
||||
heralding:
|
||||
build: heralding/.
|
||||
image: "dtagdevsec/heralding:2006"
|
||||
|
||||
# HoneyPy service
|
||||
honeypy:
|
||||
build: honeypy/.
|
||||
image: "dtagdevsec/honeypy:2006"
|
||||
|
||||
# Honeytrap service
|
||||
honeytrap:
|
||||
build: honeytrap/.
|
||||
image: "dtagdevsec/honeytrap:2006"
|
||||
|
||||
# Mailoney service
|
||||
mailoney:
|
||||
build: mailoney/.
|
||||
image: "dtagdevsec/mailoney:2006"
|
||||
|
||||
# Medpot service
|
||||
medpot:
|
||||
build: medpot/.
|
||||
image: "dtagdevsec/medpot:2006"
|
||||
|
||||
# Rdpy service
|
||||
rdpy:
|
||||
build: rdpy/.
|
||||
image: "dtagdevsec/rdpy:2006"
|
||||
|
||||
#### Snare / Tanner
|
||||
## Tanner Redis Service
|
||||
tanner_redis:
|
||||
build: tanner/redis/.
|
||||
image: "dtagdevsec/redis:2006"
|
||||
|
||||
## PHP Sandbox service
|
||||
tanner_phpox:
|
||||
build: tanner/phpox/.
|
||||
image: "dtagdevsec/phpox:2006"
|
||||
|
||||
## Tanner API Service
|
||||
tanner_api:
|
||||
build: tanner/tanner/.
|
||||
image: "dtagdevsec/tanner:2006"
|
||||
|
||||
## Snare Service
|
||||
snare:
|
||||
build: tanner/snare/.
|
||||
image: "dtagdevsec/snare:2006"
|
||||
|
||||
|
||||
##################
|
||||
#### NSM
|
||||
##################
|
||||
|
||||
# Fatt service
|
||||
fatt:
|
||||
build: fatt/.
|
||||
image: "dtagdevsec/fatt:2006"
|
||||
|
||||
# P0f service
|
||||
p0f:
|
||||
build: p0f/.
|
||||
image: "dtagdevsec/p0f:2006"
|
||||
|
||||
# Suricata service
|
||||
suricata:
|
||||
build: suricata/.
|
||||
image: "dtagdevsec/suricata:2006"
|
||||
|
||||
|
||||
##################
|
||||
#### Tools
|
||||
##################
|
||||
|
||||
# Cyberchef service
|
||||
cyberchef:
|
||||
build: cyberchef/.
|
||||
image: "dtagdevsec/cyberchef:2006"
|
||||
|
||||
#### ELK
|
||||
## Elasticsearch service
|
||||
elasticsearch:
|
||||
build: elk/elasticsearch/.
|
||||
image: "dtagdevsec/elasticsearch:2006"
|
||||
|
||||
## Kibana service
|
||||
kibana:
|
||||
build: elk/kibana/.
|
||||
image: "dtagdevsec/kibana:2006"
|
||||
|
||||
## Logstash service
|
||||
logstash:
|
||||
build: elk/logstash/.
|
||||
image: "dtagdevsec/logstash:2006"
|
||||
|
||||
## Elasticsearch-head service
|
||||
head:
|
||||
build: elk/head/.
|
||||
image: "dtagdevsec/head:2006"
|
||||
|
||||
# Ewsposter service
|
||||
ewsposter:
|
||||
build: ews/.
|
||||
image: "dtagdevsec/ewsposter:2006"
|
||||
|
||||
# Nginx service
|
||||
nginx:
|
||||
build: heimdall/.
|
||||
image: "dtagdevsec/nginx:2006"
|
||||
|
||||
# Spiderfoot service
|
||||
spiderfoot:
|
||||
build: spiderfoot/.
|
||||
image: "dtagdevsec/spiderfoot:2006"
|
@ -1,4 +1,4 @@
|
||||
FROM alpine:latest
|
||||
FROM alpine
|
||||
#
|
||||
# Include dist
|
||||
ADD dist/ /root/dist/
|
||||
|
@ -14,7 +14,7 @@ services:
|
||||
- elasticpot_local
|
||||
ports:
|
||||
- "9200:9200"
|
||||
image: "dtagdevsec/elasticpot:2006"
|
||||
image: "dtagdevsec/elasticpot:1903"
|
||||
read_only: true
|
||||
volumes:
|
||||
- /data/elasticpot/log:/opt/ElasticpotPY/log
|
||||
|
@ -24,7 +24,7 @@ services:
|
||||
mem_limit: 4g
|
||||
ports:
|
||||
- "127.0.0.1:64298:9200"
|
||||
image: "dtagdevsec/elasticsearch:2006"
|
||||
image: "dtagdevsec/elasticsearch:1903"
|
||||
volumes:
|
||||
- /data:/data
|
||||
|
||||
@ -39,7 +39,7 @@ services:
|
||||
condition: service_healthy
|
||||
ports:
|
||||
- "127.0.0.1:64296:5601"
|
||||
image: "dtagdevsec/kibana:2006"
|
||||
image: "dtagdevsec/kibana:1903"
|
||||
|
||||
## Logstash service
|
||||
logstash:
|
||||
@ -51,10 +51,10 @@ services:
|
||||
condition: service_healthy
|
||||
env_file:
|
||||
- /opt/tpot/etc/compose/elk_environment
|
||||
image: "dtagdevsec/logstash:2006"
|
||||
image: "dtagdevsec/logstash:1903"
|
||||
volumes:
|
||||
- /data:/data
|
||||
# - /root/tpotce/docker/elk/logstash/dist/logstash.conf:/etc/logstash/conf.d/logstash.conf
|
||||
- /root/tpotce/docker/elk/logstash/dist/logstash.conf:/etc/logstash/conf.d/logstash.conf
|
||||
|
||||
## Elasticsearch-head service
|
||||
head:
|
||||
@ -66,5 +66,5 @@ services:
|
||||
condition: service_healthy
|
||||
ports:
|
||||
- "127.0.0.1:64302:9100"
|
||||
image: "dtagdevsec/head:2006"
|
||||
image: "dtagdevsec/head:1903"
|
||||
read_only: true
|
||||
|
@ -1,8 +1,5 @@
|
||||
FROM alpine
|
||||
#
|
||||
# VARS
|
||||
ENV ES_VER=7.6.1 \
|
||||
JAVA_HOME=/usr/lib/jvm/java-11-openjdk
|
||||
# Include dist
|
||||
ADD dist/ /root/dist/
|
||||
#
|
||||
@ -13,13 +10,13 @@ RUN sed -i 's/dl-cdn/dl-2/g' /etc/apk/repositories && \
|
||||
bash \
|
||||
curl \
|
||||
nss \
|
||||
openjdk11-jre && \
|
||||
openjdk8-jre && \
|
||||
#
|
||||
# Get and install packages
|
||||
cd /root/dist/ && \
|
||||
mkdir -p /usr/share/elasticsearch/ && \
|
||||
aria2c -s 16 -x 16 https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-$ES_VER-linux-x86_64.tar.gz && \
|
||||
tar xvfz elasticsearch-$ES_VER-linux-x86_64.tar.gz --strip-components=1 -C /usr/share/elasticsearch/ && \
|
||||
aria2c -s 16 -x 16 https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-6.8.6.tar.gz && \
|
||||
tar xvfz elasticsearch-6.8.6.tar.gz --strip-components=1 -C /usr/share/elasticsearch/ && \
|
||||
#
|
||||
# Add and move files
|
||||
cd /root/dist/ && \
|
||||
@ -43,4 +40,5 @@ HEALTHCHECK --retries=10 CMD curl -s -XGET 'http://127.0.0.1:9200/_cat/health'
|
||||
#
|
||||
# Start ELK
|
||||
USER elasticsearch:elasticsearch
|
||||
ENV JAVA_HOME=/usr/lib/jvm/java-1.8-openjdk
|
||||
CMD ["/usr/share/elasticsearch/bin/elasticsearch"]
|
||||
|
@ -1,16 +1,11 @@
|
||||
cluster.name: tpotcluster
|
||||
node.name: "tpotcluster-node-01"
|
||||
xpack.ml.enabled: false
|
||||
xpack.security.enabled: false
|
||||
xpack.ilm.enabled: false
|
||||
path:
|
||||
logs: /data/elk/log
|
||||
data: /data/elk/data
|
||||
http.host: 0.0.0.0
|
||||
http.cors.enabled: true
|
||||
http.cors.allow-origin: "*"
|
||||
indices.query.bool.max_clause_count: 2000
|
||||
cluster.initial_master_nodes:
|
||||
- "tpotcluster-node-01"
|
||||
discovery.zen.ping.unicast.hosts:
|
||||
- localhost
|
||||
- localhost
|
||||
|
@ -24,6 +24,6 @@ services:
|
||||
mem_limit: 2g
|
||||
ports:
|
||||
- "127.0.0.1:64298:9200"
|
||||
image: "dtagdevsec/elasticsearch:2006"
|
||||
image: "dtagdevsec/elasticsearch:1903"
|
||||
volumes:
|
||||
- /data:/data
|
||||
|
@ -12,5 +12,5 @@ services:
|
||||
# condition: service_healthy
|
||||
ports:
|
||||
- "127.0.0.1:64302:9100"
|
||||
image: "dtagdevsec/head:2006"
|
||||
image: "dtagdevsec/head:1903"
|
||||
read_only: true
|
||||
|
@ -1,7 +1,4 @@
|
||||
FROM node:10.19.0-alpine
|
||||
#
|
||||
# VARS
|
||||
ENV KB_VER=7.6.1
|
||||
FROM node:10.15.2-alpine
|
||||
#
|
||||
# Include dist
|
||||
ADD dist/ /root/dist/
|
||||
@ -15,20 +12,20 @@ RUN sed -i 's/dl-cdn/dl-2/g' /etc/apk/repositories && \
|
||||
# Get and install packages
|
||||
cd /root/dist/ && \
|
||||
mkdir -p /usr/share/kibana/ && \
|
||||
aria2c -s 16 -x 16 https://artifacts.elastic.co/downloads/kibana/kibana-$KB_VER-linux-x86_64.tar.gz && \
|
||||
tar xvfz kibana-$KB_VER-linux-x86_64.tar.gz --strip-components=1 -C /usr/share/kibana/ && \
|
||||
aria2c -s 16 -x 16 https://artifacts.elastic.co/downloads/kibana/kibana-6.8.6-linux-x86_64.tar.gz && \
|
||||
tar xvfz kibana-6.8.6-linux-x86_64.tar.gz --strip-components=1 -C /usr/share/kibana/ && \
|
||||
#
|
||||
# Kibana's bundled node does not work in alpine
|
||||
rm /usr/share/kibana/node/bin/node && \
|
||||
ln -s /usr/local/bin/node /usr/share/kibana/node/bin/node && \
|
||||
ln -s /usr/bin/node /usr/share/kibana/node/bin/node && \
|
||||
#
|
||||
# Add and move files
|
||||
cd /root/dist/ && \
|
||||
# cp kibana.svg /usr/share/kibana/src/ui/public/images/kibana.svg && \
|
||||
# cp kibana.svg /usr/share/kibana/src/ui/public/icons/kibana.svg && \
|
||||
# cp elk.ico /usr/share/kibana/src/ui/public/assets/favicons/favicon.ico && \
|
||||
# cp elk.ico /usr/share/kibana/src/ui/public/assets/favicons/favicon-16x16.png && \
|
||||
# cp elk.ico /usr/share/kibana/src/ui/public/assets/favicons/favicon-32x32.png && \
|
||||
cp kibana.svg /usr/share/kibana/src/ui/public/images/kibana.svg && \
|
||||
cp kibana.svg /usr/share/kibana/src/ui/public/icons/kibana.svg && \
|
||||
cp elk.ico /usr/share/kibana/src/ui/public/assets/favicons/favicon.ico && \
|
||||
cp elk.ico /usr/share/kibana/src/ui/public/assets/favicons/favicon-16x16.png && \
|
||||
cp elk.ico /usr/share/kibana/src/ui/public/assets/favicons/favicon-32x32.png && \
|
||||
#
|
||||
# Setup user, groups and configs
|
||||
sed -i 's/#server.basePath: ""/server.basePath: "\/kibana"/' /usr/share/kibana/config/kibana.yml && \
|
||||
@ -36,21 +33,17 @@ RUN sed -i 's/dl-cdn/dl-2/g' /etc/apk/repositories && \
|
||||
sed -i 's/#server.host: "localhost"/server.host: "0.0.0.0"/' /usr/share/kibana/config/kibana.yml && \
|
||||
sed -i 's/#elasticsearch.hosts: \["http:\/\/localhost:9200"\]/elasticsearch.hosts: \["http:\/\/elasticsearch:9200"\]/' /usr/share/kibana/config/kibana.yml && \
|
||||
sed -i 's/#server.rewriteBasePath: false/server.rewriteBasePath: false/' /usr/share/kibana/config/kibana.yml && \
|
||||
# sed -i "s/#005571/#e20074/g" /usr/share/kibana/built_assets/css/plugins/kibana/index.css && \
|
||||
# sed -i "s/#007ba4/#9e0051/g" /usr/share/kibana/built_assets/css/plugins/kibana/index.css && \
|
||||
# sed -i "s/#00465d/#4f0028/g" /usr/share/kibana/built_assets/css/plugins/kibana/index.css && \
|
||||
sed -i "s/#005571/#e20074/g" /usr/share/kibana/built_assets/css/plugins/kibana/index.css && \
|
||||
sed -i "s/#007ba4/#9e0051/g" /usr/share/kibana/built_assets/css/plugins/kibana/index.css && \
|
||||
sed -i "s/#00465d/#4f0028/g" /usr/share/kibana/built_assets/css/plugins/kibana/index.css && \
|
||||
echo "xpack.infra.enabled: false" >> /usr/share/kibana/config/kibana.yml && \
|
||||
echo "xpack.logstash.enabled: false" >> /usr/share/kibana/config/kibana.yml && \
|
||||
echo "xpack.canvas.enabled: false" >> /usr/share/kibana/config/kibana.yml && \
|
||||
echo "xpack.spaces.enabled: false" >> /usr/share/kibana/config/kibana.yml && \
|
||||
echo "xpack.apm.enabled: false" >> /usr/share/kibana/config/kibana.yml && \
|
||||
echo "xpack.security.enabled: false" >> /usr/share/kibana/config/kibana.yml && \
|
||||
echo "xpack.uptime.enabled: false" >> /usr/share/kibana/config/kibana.yml && \
|
||||
echo "xpack.siem.enabled: false" >> /usr/share/kibana/config/kibana.yml && \
|
||||
echo "elasticsearch.requestTimeout: 60000" >> /usr/share/kibana/config/kibana.yml && \
|
||||
echo "elasticsearch.shardTimeout: 60000" >> /usr/share/kibana/config/kibana.yml && \
|
||||
rm -rf /usr/share/kibana/optimize/bundles/* && \
|
||||
/usr/share/kibana/bin/kibana --optimize --allow-root && \
|
||||
/usr/share/kibana/bin/kibana --optimize && \
|
||||
addgroup -g 2000 kibana && \
|
||||
adduser -S -H -s /bin/ash -u 2000 -D -g 2000 kibana && \
|
||||
chown -R kibana:kibana /usr/share/kibana/ && \
|
||||
|
@ -12,4 +12,4 @@ services:
|
||||
# condition: service_healthy
|
||||
ports:
|
||||
- "127.0.0.1:64296:5601"
|
||||
image: "dtagdevsec/kibana:2006"
|
||||
image: "dtagdevsec/kibana:1903"
|
||||
|
@ -1,7 +1,5 @@
|
||||
FROM alpine
|
||||
#
|
||||
# VARS
|
||||
ENV LS_VER=7.6.1
|
||||
# Include dist
|
||||
ADD dist/ /root/dist/
|
||||
#
|
||||
@ -15,18 +13,18 @@ RUN sed -i 's/dl-cdn/dl-2/g' /etc/apk/repositories && \
|
||||
libc6-compat \
|
||||
libzmq \
|
||||
nss \
|
||||
openjdk11-jre && \
|
||||
openjdk8-jre && \
|
||||
#
|
||||
# Get and install packages
|
||||
mkdir -p /etc/listbot && \
|
||||
cd /etc/listbot && \
|
||||
aria2c -s16 -x 16 https://raw.githubusercontent.com/dtag-dev-sec/listbot/master/cve.yaml.bz2 && \
|
||||
aria2c -s16 -x 16 https://raw.githubusercontent.com/dtag-dev-sec/listbot/master/iprep.yaml.bz2 && \
|
||||
aria2c -s16 -x 16 https://listbot.sicherheitstacho.eu/cve.yaml.bz2 && \
|
||||
aria2c -s16 -x 16 https://listbot.sicherheitstacho.eu/iprep.yaml.bz2 && \
|
||||
bunzip2 *.bz2 && \
|
||||
cd /root/dist/ && \
|
||||
mkdir -p /usr/share/logstash/ && \
|
||||
aria2c -s 16 -x 16 https://artifacts.elastic.co/downloads/logstash/logstash-$LS_VER.tar.gz && \
|
||||
tar xvfz logstash-$LS_VER.tar.gz --strip-components=1 -C /usr/share/logstash/ && \
|
||||
aria2c -s 16 -x 16 https://artifacts.elastic.co/downloads/logstash/logstash-6.8.6.tar.gz && \
|
||||
tar xvfz logstash-6.8.6.tar.gz --strip-components=1 -C /usr/share/logstash/ && \
|
||||
/usr/share/logstash/bin/logstash-plugin install logstash-filter-translate && \
|
||||
/usr/share/logstash/bin/logstash-plugin install logstash-output-syslog && \
|
||||
#
|
||||
@ -36,7 +34,7 @@ RUN sed -i 's/dl-cdn/dl-2/g' /etc/apk/repositories && \
|
||||
chmod u+x /usr/bin/update.sh && \
|
||||
mkdir -p /etc/logstash/conf.d && \
|
||||
cp logstash.conf /etc/logstash/conf.d/ && \
|
||||
cp elasticsearch-template-es7x.json /usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-10.3.1-java/lib/logstash/outputs/elasticsearch/ && \
|
||||
cp elasticsearch-template-es6x.json /usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-9.4.0-java/lib/logstash/outputs/elasticsearch/ && \
|
||||
#
|
||||
# Setup user, groups and configs
|
||||
addgroup -g 2000 logstash && \
|
||||
|
docker/elk/logstash/dist/elasticsearch-template-es5x.json (vendored, new file, 53 lines)

@@ -0,0 +1,53 @@
|
||||
{
|
||||
"template" : "logstash-*",
|
||||
"version" : 50001,
|
||||
"settings" : {
|
||||
"index.refresh_interval" : "5s",
|
||||
"index.number_of_shards" : "1",
|
||||
"index.number_of_replicas" : "0",
|
||||
"mapping" : {
|
||||
"total_fields" : {
|
||||
"limit" : "2000"
|
||||
}
|
||||
}
|
||||
},
|
||||
"mappings" : {
|
||||
"_default_" : {
|
||||
"_all" : {"enabled" : true, "norms" : false},
|
||||
"dynamic_templates" : [ {
|
||||
"message_field" : {
|
||||
"path_match" : "message",
|
||||
"match_mapping_type" : "string",
|
||||
"mapping" : {
|
||||
"type" : "text",
|
||||
"norms" : false
|
||||
}
|
||||
}
|
||||
}, {
|
||||
"string_fields" : {
|
||||
"match" : "*",
|
||||
"match_mapping_type" : "string",
|
||||
"mapping" : {
|
||||
"type" : "text", "norms" : false,
|
||||
"fields" : {
|
||||
"keyword" : { "type": "keyword", "ignore_above": 256 }
|
||||
}
|
||||
}
|
||||
}
|
||||
} ],
|
||||
"properties" : {
|
||||
"@timestamp": { "type": "date", "include_in_all": false },
|
||||
"@version": { "type": "keyword", "include_in_all": false },
|
||||
"geoip" : {
|
||||
"dynamic": true,
|
||||
"properties" : {
|
||||
"ip": { "type": "ip" },
|
||||
"location" : { "type" : "geo_point" },
|
||||
"latitude" : { "type" : "half_float" },
|
||||
"longitude" : { "type" : "half_float" }
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
docker/elk/logstash/dist/elasticsearch-template-es6x.json (vendored, new file, 48 lines)

@@ -0,0 +1,48 @@
|
||||
{
|
||||
"template" : "logstash-*",
|
||||
"version" : 60001,
|
||||
"settings" : {
|
||||
"index.refresh_interval" : "5s",
|
||||
"index.number_of_shards" : "1",
|
||||
"index.number_of_replicas" : "0",
|
||||
"index.mapping.total_fields.limit": "2000"
|
||||
},
|
||||
"mappings" : {
|
||||
"_default_" : {
|
||||
"dynamic_templates" : [ {
|
||||
"message_field" : {
|
||||
"path_match" : "message",
|
||||
"match_mapping_type" : "string",
|
||||
"mapping" : {
|
||||
"type" : "text",
|
||||
"norms" : false
|
||||
}
|
||||
}
|
||||
}, {
|
||||
"string_fields" : {
|
||||
"match" : "*",
|
||||
"match_mapping_type" : "string",
|
||||
"mapping" : {
|
||||
"type" : "text", "norms" : false,
|
||||
"fields" : {
|
||||
"keyword" : { "type": "keyword", "ignore_above": 256 }
|
||||
}
|
||||
}
|
||||
}
|
||||
} ],
|
||||
"properties" : {
|
||||
"@timestamp": { "type": "date"},
|
||||
"@version": { "type": "keyword"},
|
||||
"geoip" : {
|
||||
"dynamic": true,
|
||||
"properties" : {
|
||||
"ip": { "type": "ip" },
|
||||
"location" : { "type" : "geo_point" },
|
||||
"latitude" : { "type" : "half_float" },
|
||||
"longitude" : { "type" : "half_float" }
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
@ -1,49 +0,0 @@
|
||||
{
|
||||
"index_patterns" : "logstash-*",
|
||||
"version" : 60001,
|
||||
"settings" : {
|
||||
"index.refresh_interval" : "5s",
|
||||
"number_of_shards" : 1,
|
||||
"index.number_of_replicas" : "0",
|
||||
"index.mapping.total_fields.limit" : "2000",
|
||||
"index.query": {
|
||||
"default_field": "fields.*"
|
||||
}
|
||||
},
|
||||
"mappings" : {
|
||||
"dynamic_templates" : [ {
|
||||
"message_field" : {
|
||||
"path_match" : "message",
|
||||
"match_mapping_type" : "string",
|
||||
"mapping" : {
|
||||
"type" : "text",
|
||||
"norms" : false
|
||||
}
|
||||
}
|
||||
}, {
|
||||
"string_fields" : {
|
||||
"match" : "*",
|
||||
"match_mapping_type" : "string",
|
||||
"mapping" : {
|
||||
"type" : "text", "norms" : false,
|
||||
"fields" : {
|
||||
"keyword" : { "type": "keyword", "ignore_above": 256 }
|
||||
}
|
||||
}
|
||||
}
|
||||
} ],
|
||||
"properties" : {
|
||||
"@timestamp": { "type": "date"},
|
||||
"@version": { "type": "keyword"},
|
||||
"geoip" : {
|
||||
"dynamic": true,
|
||||
"properties" : {
|
||||
"ip": { "type": "ip" },
|
||||
"location" : { "type" : "geo_point" },
|
||||
"latitude" : { "type" : "half_float" },
|
||||
"longitude" : { "type" : "half_float" }
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
docker/elk/logstash/dist/logstash.conf (vendored, 4)

@@ -413,12 +413,12 @@ if "_grokparsefailure" in [tags] { drop {} }
|
||||
geoip {
|
||||
cache_size => 10000
|
||||
source => "src_ip"
|
||||
database => "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-filter-geoip-6.0.3-java/vendor/GeoLite2-City.mmdb"
|
||||
database => "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-filter-geoip-5.0.3-java/vendor/GeoLite2-City.mmdb"
|
||||
}
|
||||
geoip {
|
||||
cache_size => 10000
|
||||
source => "src_ip"
|
||||
database => "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-filter-geoip-6.0.3-java/vendor/GeoLite2-ASN.mmdb"
|
||||
database => "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-filter-geoip-5.0.3-java/vendor/GeoLite2-ASN.mmdb"
|
||||
}
|
||||
translate {
|
||||
refresh_interval => 86400
|
||||
|
docker/elk/logstash/dist/update.sh (vendored, 18)

@@ -22,23 +22,15 @@ for i in $mySITES;
|
||||
}
|
||||
|
||||
# Check for connectivity and download latest translation maps
|
||||
myCHECK=$(fuCHECKINET "raw.githubusercontent.com")
|
||||
myCHECK=$(fuCHECKINET "listbot.sicherheitstacho.eu")
|
||||
if [ "$myCHECK" == "0" ];
|
||||
then
|
||||
echo "Connection to Github looks good, now downloading latest translation maps."
|
||||
echo "Connection to Listbot looks good, now downloading latest translation maps."
|
||||
cd /etc/listbot
|
||||
aria2c -s16 -x 16 https://raw.githubusercontent.com/dtag-dev-sec/listbot/master/cve.yaml.bz2 && \
|
||||
aria2c -s16 -x 16 https://raw.githubusercontent.com/dtag-dev-sec/listbot/master/iprep.yaml.bz2 && \
|
||||
aria2c -s16 -x 16 https://listbot.sicherheitstacho.eu/cve.yaml.bz2 && \
|
||||
aria2c -s16 -x 16 https://listbot.sicherheitstacho.eu/iprep.yaml.bz2 && \
|
||||
bunzip2 -f *.bz2
|
||||
cd /
|
||||
else
|
||||
echo "Cannot reach Github, starting Logstash without latest translation maps."
|
||||
echo "Cannot reach Listbot, starting Logstash without latest translation maps."
|
||||
fi
|
||||
|
||||
# Make sure logstash can put latest logstash template by deleting the old one first
|
||||
echo "Removing logstash template."
|
||||
curl -XDELETE http://elasticsearch:9200/_template/logstash
|
||||
echo
|
||||
echo "Checking if empty."
|
||||
curl -XGET http://elasticsearch:9200/_template/logstash
|
||||
echo
|
||||
|
@ -12,7 +12,7 @@ services:
|
||||
# condition: service_healthy
|
||||
env_file:
|
||||
- /opt/tpot/etc/compose/elk_environment
|
||||
image: "dtagdevsec/logstash:2006"
|
||||
image: "dtagdevsec/logstash:1903"
|
||||
volumes:
|
||||
- /data:/data
|
||||
- /root/tpotce/docker/elk/logstash/dist/logstash.conf:/etc/logstash/conf.d/logstash.conf
|
||||
|
@ -1,11 +1,10 @@
|
||||
FROM alpine:latest
|
||||
FROM alpine
|
||||
#
|
||||
# Include dist
|
||||
ADD dist/ /root/dist/
|
||||
#
|
||||
# Install packages
|
||||
RUN sed -i 's/dl-cdn/dl-2/g' /etc/apk/repositories && \
|
||||
apk -U --no-cache add \
|
||||
RUN apk -U --no-cache add \
|
||||
build-base \
|
||||
git \
|
||||
libffi-dev \
|
||||
@ -33,7 +32,7 @@ RUN sed -i 's/dl-cdn/dl-2/g' /etc/apk/repositories && \
|
||||
#
|
||||
# Supply configs
|
||||
mv /root/dist/ews.cfg /opt/ewsposter/ && \
|
||||
# mv /root/dist/*.pem /opt/ewsposter/ && \
|
||||
mv /root/dist/*.pem /opt/ewsposter/ && \
|
||||
#
|
||||
# Clean up
|
||||
apk del build-base \
|
||||
|
docker/ews/dist/sissden.pem (vendored, new file, 70 lines)

@@ -0,0 +1,70 @@
|
||||
-----BEGIN CERTIFICATE-----
|
||||
MIIGBDCCA+ygAwIBAgIBATANBgkqhkiG9w0BAQsFADCBnTEYMBYGA1UEAwwPU0lT
|
||||
U0RFTiBSb290IENBMQswCQYDVQQGEwJQTDERMA8GA1UEBwwIV2Fyc3phd2ExLjAs
|
||||
BgNVBAoMJU5hdWtvd2EgaSBBa2FkZW1pY2thIFNpZWMgS29tcHV0ZXJvd2ExEDAO
|
||||
BgNVBAsMB1NJU1NERU4xHzAdBgkqhkiG9w0BCQEWEGFkbWluQHNpc3NkZW4uZXUw
|
||||
HhcNMTcwNDExMTMxNDE2WhcNMjcwNDA5MTMxNDE2WjCBjTEbMBkGA1UEAwwSU0lT
|
||||
U0RFTiBTZXJ2aWNlIENBMQswCQYDVQQGEwJQTDEfMB0GCSqGSIb3DQEJARYQYWRt
|
||||
aW5Ac2lzc2Rlbi5ldTEuMCwGA1UECgwlTmF1a293YSBpIEFrYWRlbWlja2EgU2ll
|
||||
YyBLb21wdXRlcm93YTEQMA4GA1UECwwHU0lTU0RFTjCCAiIwDQYJKoZIhvcNAQEB
|
||||
BQADggIPADCCAgoCggIBAPFLjU6cLQoGz1s73QMPiRxYISCMUh3CXFe52Uim9a60
|
||||
nkBDLfjMFW87MNhFCcE2xmxwdPPTz4+f5+DsEV3eZf0y63NxWx+RFV+UpODuEW5n
|
||||
tWPFUDxmgKx6iAR/tyeLVNqmgtCnWzSthE0cg71dlil6onWvkMc+Wn5Kv6aXoz4e
|
||||
5YVVhNsymhhrR0BntospY8EvtPm70hHAzOty957/zixOQ/MM+4SHRsWXTlKqv0K2
|
||||
udWpkUy1Ihs3bpea2KAvn9bBWejFwy7K4q3LyhSyqwpVCYjNi+s+9z4ipSMfvAlT
|
||||
FvHrMrODv/Iz/TQOfypYSlpX2gBP9WKLgOQj3wulJnMDQlvG1XNgOAqKfEF52YGF
|
||||
eUu21UraRgDAguIIhWxRwgXenmRo8ngWjfk9Q8734PzzXt8cwzbxJWiJLMew1SiW
|
||||
I+Kg8uYNGNT4mdBeUMo92S17ZNMXVnkt1TYfxT0A0ZlTCrhXPiWITtsVZXAdqFtl
|
||||
j5hASmEcRYNgXEUQHBn13O9IinEmks2PEcqbbbKbs2Je0DS/JvxBkqES51UdsaVQ
|
||||
zITKw3deCk0pISG8WDWZ97LEeDCvAKA5l/ooKjDwfS5vWw11mTUCOdhCoF0m8Lao
|
||||
TwE1fzzNbSaqMsT6JF/n0ACabfuvF2aqCmWsZC/Hpw8LQQS62zOouCLdcqizL9+z
|
||||
AgMBAAGjXTBbMAwGA1UdEwQFMAMBAf8wCwYDVR0PBAQDAgHuMB0GA1UdDgQWBBQ4
|
||||
nurxBppBA5PTNvFFU/vhDr/NFzAfBgNVHSMEGDAWgBSDpRyQSgaBD5XvyFOA8YHH
|
||||
tbUAbzANBgkqhkiG9w0BAQsFAAOCAgEAIvA2gkYsIVH7FGuoIo9RIxgwy7G/SHNC
|
||||
Xllz6hyTx10UwbttJ+o4gdNt8WPuGnkmywFgsjL1//bFw2+fUO5IRvWKSmXzwx9N
|
||||
faRJAjQT4JNx2uOW0ctw4USngPrLjXr3UrIQQlJFtZnEyT9u5VJXX8zkhfNJudyJ
|
||||
N88YVrPEf6Gh1Q0P+yCX0rDEb3PlP2jsYyXZtcYA5kDQ6Qq7jpLT/zrjJdaPTmzh
|
||||
2NUe7jJOBfZxPCoeev7meafY2vVOgqRqMz1+DZRoOgwq+ysczzRaXmd5a2p9Tabc
|
||||
L1w5FXKNJQ4apszA0cEScI+4mBIIQ7VFT3GO098GOcYsC2MelRkgONAIyamm66AP
|
||||
tvLQAKoiK/xz3sEHN4zaZvN/YVHaSYZEXUP0QHdyL62P62a92aCNyrHpzKURhEDA
|
||||
n8cs6icxKrS4xuVa517m53zun0brjrfeltfbO7z1A2TstFYu9BHKzRuhwV9cGRHP
|
||||
EDcb7PkfA/08sDHsyfsWtzIysNo3hwCmQ6gtOW5xlrGplFfwSsXmPG4SR3ByW379
|
||||
RA5h3zzrO0g7iCvbLclqHoqLTJTMS+6U43qXjnQ7DJ+mcbhRGcMHcZVKqO3QmLm+
|
||||
mmkDNzNYfTgY52D5mXJqUK50750mQ8dwMSkD2TufSAPmAPUp90LdQ8u9CIv6gQ+x
|
||||
A08hDHJ1cdY=
|
||||
-----END CERTIFICATE-----
|
||||
-----BEGIN CERTIFICATE-----
|
||||
MIIGHDCCBASgAwIBAgIJAPZqsOOroxaHMA0GCSqGSIb3DQEBCwUAMIGdMRgwFgYD
|
||||
VQQDDA9TSVNTREVOIFJvb3QgQ0ExCzAJBgNVBAYTAlBMMREwDwYDVQQHDAhXYXJz
|
||||
emF3YTEuMCwGA1UECgwlTmF1a293YSBpIEFrYWRlbWlja2EgU2llYyBLb21wdXRl
|
||||
cm93YTEQMA4GA1UECwwHU0lTU0RFTjEfMB0GCSqGSIb3DQEJARYQYWRtaW5Ac2lz
|
||||
c2Rlbi5ldTAeFw0xNzA0MTExMzA3NTZaFw0yNzA0MDkxMzA3NTZaMIGdMRgwFgYD
|
||||
VQQDDA9TSVNTREVOIFJvb3QgQ0ExCzAJBgNVBAYTAlBMMREwDwYDVQQHDAhXYXJz
|
||||
emF3YTEuMCwGA1UECgwlTmF1a293YSBpIEFrYWRlbWlja2EgU2llYyBLb21wdXRl
|
||||
cm93YTEQMA4GA1UECwwHU0lTU0RFTjEfMB0GCSqGSIb3DQEJARYQYWRtaW5Ac2lz
|
||||
c2Rlbi5ldTCCAiIwDQYJKoZIhvcNAQEBBQADggIPADCCAgoCggIBANKT77EYYEhV
|
||||
tJUnfnvQtGttfgqIzKIV2W6nPK9aDsKRTX5BVDHF6P5ZAF1u/52ATwdyTK7+LD66
|
||||
Q/nCzyyA2kqTgdruX6VGucpD2DVVSVF6nZhV9PcISNaMXytoG2HHlqrim53E/rVa
|
||||
rskColfs7oCxama6lPKZ/rqrJlVjA1Pl5ZtxR0IORjpOyZjSbSzKQwLp/JxHPMCU
|
||||
2cVirS7aEu5UGj+Q7Ibg0AEyoAu5tnHBKun4hmIoo7LtKWNEe1TdboxOSboGJ5wd
|
||||
UTEmNH+7izZ5FAogTUINjubkf2zZ65xEnN7DT/zFS30vYU1EclqCTp96EKPANogV
|
||||
ZeBKntEN6M5azM6Q6+nFI56TV5DWHTIXm85zzeDj5JM7TQlIGTh8A5APHpr0YyUP
|
||||
AiIUrixV2lqSDrjewey5qQcWV6WbjMS72OFKh/x7+UJICJhoUw+KwnPmWSq1WAlt
|
||||
n7C+W0raSQzt7puI30LUkInKL6iEQebMoYg0eDRI5vsRIpbo+PzflIuk/Vea/D1Y
|
||||
twgRc8ujoKI9GpPJyP4yO4nY7BkShLqKJ251lEJZnxq8LiFVi8aN6ZHt//OGEtVs
|
||||
6L97cPzqFx7qx8vnyLBFk23lb8pilHK1G0nqxCCjakTruT/JgkLXnZcLu/IDSqd3
|
||||
QLjJL0rmU9q6+RTH8A782pcBUNzeLKnlAgMBAAGjXTBbMAwGA1UdEwQFMAMBAf8w
|
||||
CwYDVR0PBAQDAgHuMB0GA1UdDgQWBBSDpRyQSgaBD5XvyFOA8YHHtbUAbzAfBgNV
|
||||
HSMEGDAWgBSDpRyQSgaBD5XvyFOA8YHHtbUAbzANBgkqhkiG9w0BAQsFAAOCAgEA
|
||||
IA0U6znfPykr5PoQlXb/Wr4L5mY/ZtNAJsvJ8jwNMsj3ZlqLOJfnHHoG5LHkb2b/
|
||||
xfM1Ee2ojmYBt4VDARqrHLLbup38Ivqt0aEco3Qx/WqbIR4IlvZBF+/qKF/wIUuc
|
||||
CuBYNIy12PcLzafT+SJosj1BJ+XiUCj/RsVXIT5CxsdXIABWC+5b3T3/PrAtKk+C
|
||||
sVjA/ck1KAHDd+3VUyRjLAAekYWA9C/hek3YwWQ3OvmyHos5gxifqMMDj6bx5qgv
|
||||
AuIs4mYJlBlHE19GxRmo2TDwE0eZiUoUdavdRBbl9v7dex+AF2GegmnC1ouYc9kv
|
||||
9moNBcuPFXuJMCOCU44aTpgEKRm3QTZTvVcUza251T+4kgT2wlFyzPqQ8hcpih4t
|
||||
knlqHhNc9ibL3/qzWr093AgC9uNaNRqmqu1WAu3vs9g3DVb/RSMrUG/V0YS1GgPq
|
||||
E+nVJ1AIJoee8YaxHztRfjPsmu1R3pp633lfcRPUKCkz52dZDFRPuQP36DuJzl2M
|
||||
itTra0MtDUuRCsuJfVGe1op2wFprswLI0qy7O9N21D4Ab8g0ik+lhmpOf5DpYxmx
|
||||
C2Xpe4d/5Xlg3wIYhEs5MnfeEy4lSMA4cxwJs11gVYHba62L7/5lqzpPmHdRYHu3
|
||||
Vf0pM/6zniQpy58Pf9+9CNU15I3iWF5K3zmevFArd6s=
|
||||
-----END CERTIFICATE-----
|
@ -23,7 +23,7 @@ services:
|
||||
- EWS_HPFEEDS_FORMAT=json
|
||||
env_file:
|
||||
- /opt/tpot/etc/compose/elk_environment
|
||||
image: "dtagdevsec/ewsposter:2006"
|
||||
image: "dtagdevsec/ewsposter:1903"
|
||||
volumes:
|
||||
- /data:/data
|
||||
- /data/ews/conf/ews.ip:/opt/ewsposter/ews.ip
|
||||
|
@ -1,4 +1,4 @@
|
||||
FROM alpine:latest
|
||||
FROM alpine
|
||||
#
|
||||
# Include dist
|
||||
#ADD dist/ /root/dist/
|
||||
|
@ -12,6 +12,6 @@ services:
|
||||
- NET_ADMIN
|
||||
- SYS_NICE
|
||||
- NET_RAW
|
||||
image: "dtagdevsec/fatt:2006"
|
||||
image: "dtagdevsec/fatt:1903"
|
||||
volumes:
|
||||
- /data/fatt/log:/opt/fatt/log
|
||||
|
(binary image file changed; 793 KiB before and after)
@ -1,11 +1,10 @@
|
||||
FROM alpine:latest
|
||||
FROM alpine
|
||||
#
|
||||
# Include dist
|
||||
ADD dist/ /root/dist/
|
||||
#
|
||||
# Setup apk
|
||||
RUN sed -i 's/dl-cdn/dl-2/g' /etc/apk/repositories && \
|
||||
apk -U --no-cache add \
|
||||
RUN apk -U --no-cache add \
|
||||
build-base \
|
||||
git \
|
||||
go \
|
||||
|
docker/glutton/Dockerfile.old (new file, 54 lines)

@@ -0,0 +1,54 @@
|
||||
FROM alpine
|
||||
#
|
||||
# Include dist
|
||||
ADD dist/ /root/dist/
|
||||
#
|
||||
# Setup apk
|
||||
RUN apk -U --no-cache add \
|
||||
build-base \
|
||||
git \
|
||||
go \
|
||||
g++ \
|
||||
iptables-dev \
|
||||
libnetfilter_queue-dev \
|
||||
libcap \
|
||||
libpcap-dev && \
|
||||
#
|
||||
# Setup go, glutton
|
||||
export GOPATH=/opt/go/ && \
|
||||
go get -d github.com/mushorg/glutton && \
|
||||
cd /opt/go/src/github.com/satori/ && \
|
||||
rm -rf go.uuid && \
|
||||
git clone https://github.com/satori/go.uuid && \
|
||||
cd go.uuid && \
|
||||
git checkout v1.2.0 && \
|
||||
mv /root/dist/system.go /opt/go/src/github.com/mushorg/glutton/ && \
|
||||
cd /opt/go/src/github.com/mushorg/glutton/ && \
|
||||
make build && \
|
||||
cd / && \
|
||||
mkdir -p /opt/glutton && \
|
||||
mv /opt/go/src/github.com/mushorg/glutton/bin /opt/glutton/ && \
|
||||
mv /opt/go/src/github.com/mushorg/glutton/config /opt/glutton/ && \
|
||||
mv /opt/go/src/github.com/mushorg/glutton/rules /opt/glutton/ && \
|
||||
setcap cap_net_admin,cap_net_raw=+ep /opt/glutton/bin/server && \
|
||||
setcap cap_net_admin,cap_net_raw=+ep /sbin/xtables-multi && \
|
||||
#
|
||||
# Setup user, groups and configs
|
||||
addgroup -g 2000 glutton && \
|
||||
adduser -S -s /bin/ash -u 2000 -D -g 2000 glutton && \
|
||||
mkdir -p /var/log/glutton && \
|
||||
mv /root/dist/rules.yaml /opt/glutton/rules/ && \
|
||||
#
|
||||
# Clean up
|
||||
apk del --purge build-base \
|
||||
git \
|
||||
go \
|
||||
g++ && \
|
||||
rm -rf /var/cache/apk/* \
|
||||
/opt/go \
|
||||
/root/dist
|
||||
#
|
||||
# Start glutton
|
||||
WORKDIR /opt/glutton
|
||||
USER glutton:glutton
|
||||
CMD exec bin/server -i $(/sbin/ip address | grep '^2: ' | awk '{ print $2 }' | tr -d [:punct:]) -l /var/log/glutton/glutton.log > /dev/null 2>&1
|
@ -13,7 +13,7 @@ services:
|
||||
network_mode: "host"
|
||||
cap_add:
|
||||
- NET_ADMIN
|
||||
image: "dtagdevsec/glutton:2006"
|
||||
image: "dtagdevsec/glutton:1903"
|
||||
read_only: true
|
||||
volumes:
|
||||
- /data/glutton/log:/var/log/glutton
|
||||
|
@ -26,7 +26,7 @@ services:
|
||||
ports:
|
||||
- "64297:64297"
|
||||
- "127.0.0.1:64304:64304"
|
||||
image: "dtagdevsec/nginx:2006"
|
||||
image: "dtagdevsec/nginx:1903"
|
||||
read_only: true
|
||||
volumes:
|
||||
- /data/nginx/cert/:/etc/nginx/cert/:ro
|
||||
|
@@ -1,4 +1,4 @@
FROM alpine:latest
FROM alpine:3.10
#
# Include dist
ADD dist/ /root/dist/
@@ -23,6 +23,7 @@ RUN sed -i 's/dl-cdn/dl-2/g' /etc/apk/repositories && \
cd /opt/ && \
git clone --depth=1 https://github.com/johnnykv/heralding && \
cd heralding && \
sed -i 's/asyncssh/asyncssh==1.18.0/' requirements.txt && \
pip3 install --no-cache-dir -r requirements.txt && \
pip3 install --no-cache-dir . && \
#
@@ -31,7 +32,7 @@ RUN sed -i 's/dl-cdn/dl-2/g' /etc/apk/repositories && \
adduser -S -H -s /bin/ash -u 2000 -D -g 2000 heralding && \
mkdir -p /var/log/heralding/ /etc/heralding && \
mv /root/dist/heralding.yml /etc/heralding/ && \
setcap cap_net_bind_service=+ep /usr/bin/python3.8 && \
setcap cap_net_bind_service=+ep /usr/bin/python3.7 && \
chown -R heralding:heralding /var/log/heralding && \
#
# Clean up
docker/heralding/Dockerfile.old (new file, 54 lines)
@@ -0,0 +1,54 @@
FROM alpine

# Include dist
ADD dist/ /root/dist/

# Install packages
RUN apk -U --no-cache add \
build-base \
git \
libcap \
libffi-dev \
openssl-dev \
libzmq \
postgresql-dev \
python3 \
python3-dev \
py-virtualenv && \
pip3 install --no-cache-dir --upgrade pip && \

# Setup heralding
mkdir -p /opt && \
cd /opt/ && \
git clone --depth=1 https://github.com/johnnykv/heralding && \
cd heralding && \
pip3 install --no-cache-dir -r requirements.txt && \
pip3 install --no-cache-dir . && \

# Setup user, groups and configs
addgroup -g 2000 heralding && \
adduser -S -H -s /bin/ash -u 2000 -D -g 2000 heralding && \
mkdir -p /var/log/heralding/ /etc/heralding && \
mv /root/dist/heralding.yml /etc/heralding/ && \
setcap cap_net_bind_service=+ep /usr/bin/python3.6 && \
chown -R heralding:heralding /var/log/heralding && \

# Clean up
apk del --purge \
build-base \
git \
libcap \
libffi-dev \
libressl-dev \
postgresql-dev \
python3-dev \
py-virtualenv && \
rm -rf /root/* \
/var/cache/apk/* \
/opt/heralding

# Start elasticpot
STOPSIGNAL SIGINT
WORKDIR /tmp/heralding/
USER heralding:heralding
CMD exec heralding -c /etc/heralding/heralding.yml -l /var/log/heralding/heralding.log
@@ -30,7 +30,7 @@ services:
- "3389:3389"
- "5432:5432"
- "5900:5900"
image: "dtagdevsec/heralding:2006"
image: "dtagdevsec/heralding:1903"
read_only: true
volumes:
- /data/heralding/log:/var/log/heralding
@@ -1,4 +1,4 @@
FROM alpine:latest
FROM alpine
#
# Include dist
ADD dist/ /root/dist/
@@ -28,7 +28,6 @@ RUN sed -i 's/dl-cdn/dl-2/g' /etc/apk/repositories && \
sed -i 's/bytes/size/g' /opt/honeypy/loggers/file/honeypy_file.py && \
sed -i 's/date_time/timestamp/g' /opt/honeypy/loggers/file/honeypy_file.py && \
sed -i 's/data,/data.decode("hex"),/g' /opt/honeypy/loggers/file/honeypy_file.py && \
sed -i 's/urllib3/urllib3 == 1.21.1/g' /opt/honeypy/requirements.txt && \
virtualenv env && \
cp /root/dist/services.cfg /opt/honeypy/etc && \
cp /root/dist/honeypy.cfg /opt/honeypy/etc && \
@@ -38,7 +37,7 @@ RUN sed -i 's/dl-cdn/dl-2/g' /etc/apk/repositories && \
addgroup -g 2000 honeypy && \
adduser -S -H -s /bin/ash -u 2000 -D -g 2000 honeypy && \
chown -R honeypy:honeypy /opt/honeypy && \
setcap cap_net_bind_service=+ep /opt/honeypy/env/bin/python && \
setcap cap_net_bind_service=+ep /opt/honeypy/env/bin/python2 && \
#
# Clean up
apk del --purge build-base \
@@ -20,7 +20,7 @@ services:
- "2324:2324"
- "4096:4096"
- "9200:9200"
image: "dtagdevsec/honeypy:2006"
image: "dtagdevsec/honeypy:1903"
read_only: true
volumes:
- /data/honeypy/log:/opt/honeypy/log
@@ -1,39 +0,0 @@
FROM alpine:latest
#
# Include dist
ADD dist/ /root/dist/
#
# Install packages
RUN sed -i 's/dl-cdn/dl-2/g' /etc/apk/repositories && \
apk -U --no-cache add \
build-base \
git \
libcap \
python2 \
python2-dev \
py2-pip \
tcpdump && \
#
# Clone honeysap from git
git clone --depth=1 https://github.com/SecureAuthCorp/HoneySAP /opt/honeysap && \
cd /opt/honeysap && \
mkdir conf && \
cp /root/dist/* conf/ && \
python setup.py install && \
pip install -r requirements-optional.txt && \
#
# Setup user, groups and configs
addgroup -g 2000 honeysap && \
adduser -S -s /bin/ash -u 2000 -D -g 2000 honeysap && \
chown -R honeysap:honeysap /opt/honeysap && \
# setcap cap_net_bind_service=+ep /opt/honeypy/env/bin/python && \
#
# Clean up
apk del --purge git && \
rm -rf /root/* \
/var/cache/apk/*
#
# Set workdir and start honeysap
USER honeysap:honeysap
WORKDIR /opt/honeysap
CMD ["/opt/honeysap/bin/honeysap", "--config-file", "/opt/honeysap/conf/honeysap.yml"]
@@ -1,6 +0,0 @@
# HoneSAP default external profile route table
# ============================================
#

# Allow any protocols to 10.0.0.100 port 3200
- allow,any,10.0.0.100,3200,
docker/honeysap/dist/honeysap.yml (vendored, 103 lines)
@@ -1,103 +0,0 @@
# HoneSAP default external profile configuration
# ==============================================

# Console logging configuration
# -----------------------------

# Level of console logging
verbose: 2

# Use colored output
colored_console: false


# Miscellaneous configuration
# ---------------------------

# Enable reloading after a change in one of the configuration files
reload: true

# Address to listen for all services
listener_address: 0.0.0.0


# SAP instance configuration
# --------------------------

# Release version
release: "720"


# Services configuration
# ----------------------

services:
-
# SAP Router configuration
# ------------------------
service: SAPRouterService
alias: ExternalSAPRouter
enabled: yes
listener_port: 3299

# Router version number
router_version: 40

# Router patch version
router_version_patch: 4

# Password for information requests. If present it will be required
info_password:

# Wether the external administration would be enabled on this SAP Router
external_admin: false

# Route table file
route_table: !include external_route_table.yml

# Hostname for the SAP Router
hostname: saprouter

-
# SAP Dispatcher configuration
# ----------------------------
service: SAPDispatcherService
alias: InternalDispatcherService
enabled: yes
virtual: yes
listener_port: 3200
listener_address: 10.0.0.100

# Name of the instance
instance: NSP

# Client number
client_no: "001"

# SID
sid: PRD

# Hostname
hostname: uscasf-sap01


# Feeds configuration
# -------------------

feeds:
-
feed: LogFeed
log_filename: log/honeysap-external.log
enabled: yes
-
feed: ConsoleFeed
enabled: yes
-
feed: HPFeed
channels:
- honeysap.events
feed_host: 10.250.250.20
feed_port: 20000
feed_ident: honeysap
feed_secret: password
enabled: no
@@ -1,20 +0,0 @@
version: '2.3'

networks:
honeysap_local:

services:

# HoneySAP service
honeysap:
build: .
container_name: honeysap
restart: always
networks:
- honeysap_local
ports:
- "3299:3299"
- "8001:8001"
image: "dtagdevsec/honeysap:2006"
volumes:
- /data/honeysap/log:/opt/honeysap/log
@@ -1,4 +1,4 @@
FROM debian:buster-slim
FROM debian:stretch-slim
ENV DEBIAN_FRONTEND noninteractive
#
# Include dist
@@ -26,8 +26,8 @@ RUN apt-get update -y && \
wget && \
#
# Install honeytrap from source
git clone https://github.com/armedpot/honeytrap /root/honeytrap && \
# git clone https://github.com/t3chn0m4g3/honeytrap /root/honeytrap && \
cd /root/ && \
git clone https://github.com/armedpot/honeytrap && \
cd /root/honeytrap/ && \
autoreconf -vfi && \
./configure \
@@ -12,7 +12,7 @@ services:
network_mode: "host"
cap_add:
- NET_ADMIN
image: "dtagdevsec/honeytrap:2006"
image: "dtagdevsec/honeytrap:1903"
read_only: true
volumes:
- /data/honeytrap/attacks:/opt/honeytrap/var/attacks
@@ -1,11 +1,10 @@
### This is only for testing purposes, do NOT use for production
FROM alpine:latest
#
FROM alpine

ADD dist/ /root/dist/
#

# Install packages
RUN sed -i 's/dl-cdn/dl-2/g' /etc/apk/repositories && \
apk -U --no-cache add \
RUN apk -U --no-cache add \
build-base \
coreutils \
git \
@@ -16,7 +15,7 @@ RUN sed -i 's/dl-cdn/dl-2/g' /etc/apk/repositories && \
python \
python-dev \
sqlite && \
#

# Install php sandbox from git
git clone --depth=1 https://github.com/rep/hpfeeds /opt/hpfeeds && \
cd /opt/hpfeeds/broker && \
@@ -24,10 +23,10 @@ RUN sed -i 's/dl-cdn/dl-2/g' /etc/apk/repositories && \
cp /root/dist/adduser.sql . && \
cd /opt/hpfeeds/broker && timeout 5 python broker.py || : && \
sqlite3 db.sqlite3 < adduser.sql && \
#

#python setup.py build && \
#python setup.py install && \
#

# Clean up
apk del --purge autoconf \
build-base \
@@ -36,7 +35,7 @@ RUN sed -i 's/dl-cdn/dl-2/g' /etc/apk/repositories && \
python-dev && \
rm -rf /root/* && \
rm -rf /var/cache/apk/*
#

# Set workdir and start glastopf
WORKDIR /opt/hpfeeds/broker
CMD python broker.py
@@ -1,4 +1,4 @@
FROM alpine:latest
FROM alpine
#
# Install packages
RUN apk -U --no-cache add \
docker/mailoney/Dockerfile.old (new file, 52 lines)
@@ -0,0 +1,52 @@
FROM alpine
#
# Install packages
RUN apk -U --no-cache add \
autoconf \
automake \
build-base \
git \
libcap \
libtool \
py-pip \
python \
python-dev && \
#
# Install libemu
git clone --depth=1 https://github.com/buffer/libemu /root/libemu/ && \
cd /root/libemu/ && \
autoreconf -vi && \
./configure && \
make && \
make install && \
#
# Install libemu python wrapper
pip install --no-cache-dir --upgrade pip && \
pip install --no-cache-dir \
hpfeeds \
pylibemu && \
#
# Install mailoney from git
git clone --depth=1 https://github.com/awhitehatter/mailoney /opt/mailoney && \
#
# Setup user, groups and configs
addgroup -g 2000 mailoney && \
adduser -S -H -s /bin/ash -u 2000 -D -g 2000 mailoney && \
chown -R mailoney:mailoney /opt/mailoney && \
setcap cap_net_bind_service=+ep /usr/bin/python2.7 && \
#
# Clean up
apk del --purge autoconf \
automake \
build-base \
git \
py-pip \
python-dev && \
rm -rf /root/* && \
rm -rf /var/cache/apk/*
#
# Set workdir and start mailoney
STOPSIGNAL SIGINT
USER mailoney:mailoney
WORKDIR /opt/mailoney/
CMD ["/usr/bin/python","mailoney.py","-i","0.0.0.0","-p","25","-s","mailrelay.local","-t","schizo_open_relay"]
@@ -20,7 +20,7 @@ services:
- mailoney_local
ports:
- "25:25"
image: "dtagdevsec/mailoney:2006"
image: "dtagdevsec/mailoney:1903"
read_only: true
volumes:
- /data/mailoney/log:/opt/mailoney/logs
@@ -1,4 +1,4 @@
FROM alpine:latest
FROM alpine
#
# Setup apk
RUN apk -U --no-cache add \
@@ -14,7 +14,7 @@ services:
- medpot_local
ports:
- "2575:2575"
image: "dtagdevsec/medpot:2006"
image: "dtagdevsec/medpot:1903"
read_only: true
volumes:
- /data/medpot/log/:/var/log/medpot
@@ -1,4 +1,4 @@
FROM alpine:latest
FROM alpine
#
# Include dist
ADD dist/ /root/dist/
@@ -1,4 +1,4 @@
FROM alpine:latest
FROM alpine
#
# Add source
ADD . /opt/p0f
@@ -8,7 +8,7 @@ services:
container_name: p0f
restart: always
network_mode: "host"
image: "dtagdevsec/p0f:2006"
image: "dtagdevsec/p0f:1903"
read_only: true
volumes:
- /data/p0f/log:/var/log/p0f
@@ -1,4 +1,4 @@
FROM alpine:latest
FROM alpine
#
# Include dist
ADD dist/ /root/dist/
@@ -22,7 +22,7 @@ services:
- rdpy_local
ports:
- "3389:3389"
image: "dtagdevsec/rdpy:2006"
image: "dtagdevsec/rdpy:1903"
read_only: true
volumes:
- /data/rdpy/log:/var/log/rdpy
@@ -1,4 +1,4 @@
FROM alpine:latest
FROM alpine:3.10
#
# Get and install dependencies & packages
RUN sed -i 's/dl-cdn/dl-2/g' /etc/apk/repositories && \
@@ -6,61 +6,51 @@ RUN sed -i 's/dl-cdn/dl-2/g' /etc/apk/repositories && \
build-base \
curl \
git \
jpeg-dev \
libffi-dev \
libxml2 \
libxml2-dev \
libxslt \
libxslt-dev \
musl \
musl-dev \
openjpeg-dev \
openssl \
openssl-dev \
python3 \
python3-dev \
python \
python-dev \
py-cffi \
py-pillow \
py-future \
py3-pip \
swig \
tinyxml \
tinyxml-dev \
zlib-dev && \
py-pip \
swig && \
#
# Setup user
addgroup -g 2000 spiderfoot && \
adduser -S -s /bin/ash -u 2000 -D -g 2000 spiderfoot && \
#
# Install spiderfoot
# git clone --depth=1 https://github.com/smicallef/spiderfoot -b v2.12.0-final /home/spiderfoot && \
git clone --depth=1 https://github.com/smicallef/spiderfoot /home/spiderfoot && \
cd /home/spiderfoot && \
pip3 install --no-cache-dir wheel && \
pip3 install --no-cache-dir -r requirements.txt && \
pip install --no-cache-dir openxmllib wheel && \
pip install --no-cache-dir -r requirements.txt && \
chown -R spiderfoot:spiderfoot /home/spiderfoot && \
sed -i "s#'__docroot': ''#'__docroot': '\/spiderfoot'#" /home/spiderfoot/sf.py && \
sed -i 's#raise cherrypy.HTTPRedirect("\/")#raise cherrypy.HTTPRedirect("\/spiderfoot")#' /home/spiderfoot/sfwebui.py && \
#
# Clean up
apk del --purge build-base \
gcc \
git \
libffi-dev \
libxml2-dev \
libxslt-dev \
musl-dev \
openssl-dev \
python3-dev \
py3-pip \
swig \
tinyxml-dev && \
python-dev \
py-pip \
py-setuptools && \
rm -rf /var/cache/apk/*
#
# Healthcheck
#HEALTHCHECK --retries=10 CMD curl -s -XGET 'http://127.0.0.1:8080'
HEALTHCHECK --retries=10 CMD curl -s -XGET 'http://127.0.0.1:8080/spiderfoot/'
HEALTHCHECK --retries=10 CMD curl -s -XGET 'http://127.0.0.1:8080'
#
# Set user, workdir and start spiderfoot
USER spiderfoot:spiderfoot
WORKDIR /home/spiderfoot
CMD ["/usr/bin/python3.8", "sf.py","-l", "0.0.0.0:8080"]
CMD ["/usr/bin/python", "sf.py", "0.0.0.0:8080"]
@@ -14,6 +14,6 @@ services:
- spiderfoot_local
ports:
- "127.0.0.1:64303:8080"
image: "dtagdevsec/spiderfoot:2006"
image: "dtagdevsec/spiderfoot:1903"
volumes:
- /data/spiderfoot/spiderfoot.db:/home/spiderfoot/spiderfoot.db
@@ -1,17 +1,90 @@
FROM alpine:latest
FROM alpine
#
# Include dist
ADD dist/ /root/dist/
#
# Install packages
RUN apk -U --no-cache add \
#RUN sed -i 's/dl-cdn/dl-2/g' /etc/apk/repositories && \
RUN apk -U add \
ca-certificates \
curl \
file \
libcap \
wget && \
apk -U add --repository http://dl-cdn.alpinelinux.org/alpine/edge/community \
suricata && \
geoip \
hiredis \
jansson \
libcap-ng \
libmagic \
libmaxminddb \
libnet \
libnetfilter_queue \
libnfnetlink \
libpcap \
luajit \
lz4-libs \
musl \
nspr \
nss \
pcre \
yaml \
wget \
automake \
autoconf \
build-base \
cargo \
file-dev \
geoip-dev \
hiredis-dev \
jansson-dev \
libtool \
libcap-ng-dev \
luajit-dev \
libmaxminddb-dev \
libpcap-dev \
libnet-dev \
libnetfilter_queue-dev \
libnfnetlink-dev \
lz4-dev \
nss-dev \
nspr-dev \
pcre-dev \
python3 \
rust \
yaml-dev && \
#
# We need latest libhtp[-dev] which is only available in community
apk -U add --repository=http://dl-cdn.alpinelinux.org/alpine/edge/community \
libhtp \
libhtp-dev && \
#
# Upgrade pip, install suricata-update to meet deps, however we will not be using it
# to reduce image (no python needed) and use the update script.
pip3 install --no-cache-dir --upgrade pip && \
pip3 install --no-cache-dir suricata-update && \
#
# Get and build Suricata
mkdir -p /opt/builder/ && \
wget https://www.openinfosecfoundation.org/download/suricata-5.0.0.tar.gz && \
tar xvfz suricata-5.0.0.tar.gz --strip-components=1 -C /opt/builder/ && \
rm suricata-5.0.0.tar.gz && \
cd /opt/builder && \
./configure \
--prefix=/usr \
--sysconfdir=/etc \
--mandir=/usr/share/man \
--localstatedir=/var \
--enable-non-bundled-htp \
--enable-nfqueue \
--enable-rust \
--disable-gccmarch-native \
--enable-hiredis \
--enable-geoip \
--enable-gccprotect \
--enable-pie \
--enable-luajit && \
make && \
make check && \
make install && \
make install-full && \
#
# Setup user, groups and configs
addgroup -g 2000 suri && \
@@ -19,6 +92,8 @@ RUN apk -U --no-cache add \
chmod 644 /etc/suricata/*.config && \
cp /root/dist/suricata.yaml /etc/suricata/suricata.yaml && \
cp /root/dist/*.bpf /etc/suricata/ && \
mkdir -p /etc/suricata/rules && \
cp /opt/builder/rules/* /etc/suricata/rules/ && \
#
# Download the latest EmergingThreats ruleset, replace rulebase and enable all rules
cp /root/dist/update.sh /usr/bin/ && \
@@ -26,6 +101,32 @@ RUN apk -U --no-cache add \
update.sh OPEN && \
#
# Clean up
apk del --purge \
automake \
autoconf \
build-base \
cargo \
file-dev \
geoip-dev \
hiredis-dev \
jansson-dev \
libtool \
libhtp-dev \
libcap-ng-dev \
luajit-dev \
libpcap-dev \
libmaxminddb-dev \
libnet-dev \
libnetfilter_queue-dev \
libnfnetlink-dev \
lz4-dev \
nss-dev \
nspr-dev \
pcre-dev \
python3 \
rust \
yaml-dev && \
rm -rf /opt/builder && \
rm -rf /root/* && \
rm -rf /tmp/* && \
rm -rf /var/cache/apk/*
@@ -1,139 +0,0 @@
FROM alpine
#
# VARS
ENV VER=5.0.2
#
# Include dist
ADD dist/ /root/dist/
#
# Install packages
#RUN sed -i 's/dl-cdn/dl-2/g' /etc/apk/repositories && \
RUN apk -U add \
ca-certificates \
curl \
file \
geoip \
hiredis \
jansson \
libcap-ng \
libmagic \
libmaxminddb \
libnet \
libnetfilter_queue \
libnfnetlink \
libpcap \
luajit \
lz4-libs \
musl \
nspr \
nss \
pcre \
yaml \
wget \
automake \
autoconf \
build-base \
cargo \
file-dev \
geoip-dev \
hiredis-dev \
jansson-dev \
libtool \
libcap-ng-dev \
luajit-dev \
libmaxminddb-dev \
libpcap-dev \
libnet-dev \
libnetfilter_queue-dev \
libnfnetlink-dev \
lz4-dev \
nss-dev \
nspr-dev \
pcre-dev \
python3 \
rust \
yaml-dev && \
#
# We need latest libhtp[-dev] which is only available in community
apk -U add --repository=http://dl-cdn.alpinelinux.org/alpine/edge/community \
libhtp \
libhtp-dev && \
#
# Upgrade pip, install suricata-update to meet deps, however we will not be using it
# to reduce image (no python needed) and use the update script.
pip3 install --no-cache-dir --upgrade pip && \
pip3 install --no-cache-dir suricata-update && \
#
# Get and build Suricata
mkdir -p /opt/builder/ && \
wget https://www.openinfosecfoundation.org/download/suricata-$VER.tar.gz && \
tar xvfz suricata-$VER.tar.gz --strip-components=1 -C /opt/builder/ && \
rm suricata-$VER.tar.gz && \
cd /opt/builder && \
./configure \
--prefix=/usr \
--sysconfdir=/etc \
--mandir=/usr/share/man \
--localstatedir=/var \
--enable-non-bundled-htp \
--enable-nfqueue \
--enable-rust \
--disable-gccmarch-native \
--enable-hiredis \
--enable-geoip \
--enable-gccprotect \
--enable-pie \
--enable-luajit && \
make && \
make check && \
make install && \
make install-full && \
#
# Setup user, groups and configs
addgroup -g 2000 suri && \
adduser -S -H -u 2000 -D -g 2000 suri && \
chmod 644 /etc/suricata/*.config && \
cp /root/dist/suricata.yaml /etc/suricata/suricata.yaml && \
cp /root/dist/*.bpf /etc/suricata/ && \
mkdir -p /etc/suricata/rules && \
cp /opt/builder/rules/* /etc/suricata/rules/ && \
#
# Download the latest EmergingThreats ruleset, replace rulebase and enable all rules
cp /root/dist/update.sh /usr/bin/ && \
chmod 755 /usr/bin/update.sh && \
update.sh OPEN && \
#
# Clean up
apk del --purge \
automake \
autoconf \
build-base \
cargo \
file-dev \
geoip-dev \
hiredis-dev \
jansson-dev \
libtool \
libhtp-dev \
libcap-ng-dev \
luajit-dev \
libpcap-dev \
libmaxminddb-dev \
libnet-dev \
libnetfilter_queue-dev \
libnfnetlink-dev \
lz4-dev \
nss-dev \
nspr-dev \
pcre-dev \
python3 \
rust \
yaml-dev && \
rm -rf /opt/builder && \
rm -rf /root/* && \
rm -rf /tmp/* && \
rm -rf /var/cache/apk/*
#
# Start suricata
STOPSIGNAL SIGINT
CMD SURICATA_CAPTURE_FILTER=$(update.sh $OINKCODE) && exec suricata -v -F $SURICATA_CAPTURE_FILTER -i $(/sbin/ip address | grep '^2: ' | awk '{ print $2 }' | tr -d [:punct:])
docker/suricata/dist/capture-filter.bpf (vendored, 2 changed lines)
@@ -1,3 +1,3 @@
not (host sicherheitstacho.eu or community.sicherheitstacho.eu) and
not (host sicherheitstacho.eu or community.sicherheitstacho.eu or listbot.sicherheitstacho.eu) and
not (host deb.debian.org) and
not (host index.docker.io or docker.io)
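Note: this BPF file is handed to Suricata at container start via its -F flag (see the CMD at the end of the Suricata Dockerfile above). A minimal sketch of an equivalent manual invocation, assuming eth0 as the capture interface:

    suricata -v -F /etc/suricata/capture-filter.bpf -i eth0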
@@ -15,6 +15,6 @@ services:
- NET_ADMIN
- SYS_NICE
- NET_RAW
image: "dtagdevsec/suricata:2006"
image: "dtagdevsec/suricata:1903"
volumes:
- /data/suricata/log:/var/log/suricata
@@ -14,7 +14,7 @@ services:
tty: true
networks:
- tanner_local
image: "dtagdevsec/redis:2006"
image: "dtagdevsec/redis:1903"
read_only: true

# PHP Sandbox service
@@ -23,12 +23,10 @@ services:
container_name: tanner_phpox
restart: always
stop_signal: SIGKILL
tmpfs:
- /tmp:uid=2000,gid=2000
tty: true
networks:
- tanner_local
image: "dtagdevsec/phpox:2006"
image: "dtagdevsec/phpox:1903"
read_only: true

# Tanner API Service
@@ -42,7 +40,7 @@ services:
tty: true
networks:
- tanner_local
image: "dtagdevsec/tanner:2006"
image: "dtagdevsec/tanner:1903"
read_only: true
volumes:
- /data/tanner/log:/var/log/tanner
@@ -61,9 +59,7 @@ services:
tty: true
networks:
- tanner_local
# ports:
#   - "127.0.0.1:8091:8091"
image: "dtagdevsec/tanner:2006"
image: "dtagdevsec/tanner:1903"
command: tannerweb
read_only: true
volumes:
@@ -82,7 +78,7 @@ services:
tty: true
networks:
- tanner_local
image: "dtagdevsec/tanner:2006"
image: "dtagdevsec/tanner:1903"
command: tanner
read_only: true
volumes:
@@ -104,6 +100,6 @@ services:
- tanner_local
ports:
- "80:80"
image: "dtagdevsec/snare:2006"
image: "dtagdevsec/snare:1903"
depends_on:
- tanner
@@ -1,5 +1,8 @@
FROM alpine:3.10
#
# Include dist
ADD dist/ /root/dist/
#
# Install packages
RUN sed -i 's/dl-cdn/dl-2/g' /etc/apk/repositories && \
apk -U --no-cache add \
@@ -29,6 +32,7 @@ RUN sed -i 's/dl-cdn/dl-2/g' /etc/apk/repositories && \
# Install PHP Sandbox
git clone --depth=1 https://github.com/mushorg/phpox /opt/phpox && \
cd /opt/phpox && \
cp /root/dist/sandbox.py . && \
pip3 install -r requirements.txt && \
make && \
#
docker/tanner/phpox/dist/sandbox.py (vendored, new file, 125 lines)
@@ -0,0 +1,125 @@
#!/usr/bin/env python3

# Copyright (C) 2016 Lukas Rist
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.

import os
import tempfile
import json
import asyncio
import hashlib
import argparse

from aiohttp import web
from asyncio.subprocess import PIPE

from pprint import pprint


class PHPSandbox(object):
    @classmethod
    def php_tag_check(cls, script):
        with open(script, "r+") as check_file:
            file_content = check_file.read()
            if "<?" not in file_content:
                file_content = "<?php" + file_content
            if "?>" not in file_content:
                file_content += "?>"
            check_file.write(file_content)
        return script

    @asyncio.coroutine
    def read_process(self):
        while True:
            line = yield from self.proc.stdout.readline()
            if not line:
                break
            else:
                self.stdout_value += line + b'\n'

    @asyncio.coroutine
    def sandbox(self, script, phpbin="php7.0"):
        if not os.path.isfile(script):
            raise Exception("Sample not found: {0}".format(script))

        try:
            cmd = [phpbin, "sandbox.php", script]
            self.proc = yield from asyncio.create_subprocess_exec(*cmd, stdout=PIPE)
            self.stdout_value = b''
            yield from asyncio.wait_for(self.read_process(), timeout=3)
        except Exception as e:
            try:
                self.proc.kill()
            except Exception:
                pass
            print("Error executing the sandbox: {}".format(e))
            # raise e
        return {'stdout': self.stdout_value.decode('utf-8')}


class EchoServer(asyncio.Protocol):
    def connection_made(self, transport):
        # peername = transport.get_extra_info('peername')
        # print('connection from {}'.format(peername))
        self.transport = transport

    def data_received(self, data):
        # print('data received: {}'.format(data.decode()))
        self.transport.write(data)


@asyncio.coroutine
def api(request):
    data = yield from request.read()
    file_md5 = hashlib.md5(data).hexdigest()
    with tempfile.NamedTemporaryFile(suffix='.php') as f:
        f.write(data)
        f.seek(0)
        sb = PHPSandbox()
        try:
            server = yield from loop.create_server(EchoServer, '127.0.0.1', 1234)
            ret = yield from asyncio.wait_for(sb.sandbox(f.name, phpbin), timeout=10)
            server.close()
        except KeyboardInterrupt:
            pass
        ret['file_md5'] = file_md5
    return web.Response(body=json.dumps(ret, sort_keys=True, indent=4).encode('utf-8'))


if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    parser.add_argument("--phpbin", help="PHP binary, ex: php7.0", default="php7.0")
    args = parser.parse_args()
    phpbin = args.phpbin

    app = web.Application()
    app.router.add_route('POST', '/', api)

    loop = asyncio.get_event_loop()
    handler = app.make_handler()
    f = loop.create_server(handler, '0.0.0.0', 8088)
    srv = loop.run_until_complete(f)
    print('serving on', srv.sockets[0].getsockname())
    try:
        loop.run_forever()
    except KeyboardInterrupt:
        pass
    finally:
        loop.run_until_complete(handler.finish_connections(1.0))
        srv.close()
        loop.run_until_complete(srv.wait_closed())
        loop.run_until_complete(app.finish())
        loop.close()
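Note: sandbox.py exposes the PHP sandbox as an HTTP API on port 8088 and returns the sandbox output plus the sample's MD5 as JSON. A minimal usage sketch, assuming the service is reachable locally and sample.php is a file you provide:

    curl -s -XPOST --data-binary @sample.php http://127.0.0.1:8088/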
@@ -13,7 +13,7 @@ RUN sed -i 's/dl-cdn/dl-2/g' /etc/apk/repositories && \
rm -rf /tmp/* /var/tmp/* && \
rm -rf /var/cache/apk/*
#
# Start redis
# Start conpot
STOPSIGNAL SIGKILL
USER nobody:nobody
CMD redis-server /etc/redis.conf
@@ -18,10 +18,8 @@ RUN sed -i 's/dl-cdn/dl-2/g' /etc/apk/repositories && \
#
# Setup Tanner
git clone --depth=1 https://github.com/mushorg/tanner /opt/tanner && \
cd /opt/tanner/ && \
# git fetch origin pull/364/head:test && \
# git checkout test && \
cp /root/dist/config.py /opt/tanner/tanner/ && \
cd /opt/tanner/ && \
pip3 install --no-cache-dir setuptools && \
pip3 install --no-cache-dir -r requirements.txt && \
python3 setup.py install && \
@@ -58,7 +56,7 @@ RUN sed -i 's/dl-cdn/dl-2/g' /etc/apk/repositories && \
rm -rf /tmp/* /var/tmp/* && \
rm -rf /var/cache/apk/*
#
# Start tanner
# Start conpot
STOPSIGNAL SIGKILL
USER tanner:tanner
WORKDIR /opt/tanner
docker/tanner/tanner/dist/config.py (vendored, 12 changed lines)
@@ -13,10 +13,10 @@ config_template = {'DATA': {'db_config': '/opt/tanner/db/db_config.json',
'tornado': '/opt/tanner/data/tornado.py',
'mako': '/opt/tanner/data/mako.py'
},
'TANNER': {'host': 'tanner', 'port': 8090},
'WEB': {'host': 'tanner_web', 'port': 8091},
'API': {'host': 'tanner_api', 'port': 8092, 'auth': False, 'auth_signature': 'tanner_api_auth'},
'PHPOX': {'host': 'tanner_phpox', 'port': 8088},
'TANNER': {'host': '0.0.0.0', 'port': 8090},
'WEB': {'host': '0.0.0.0', 'port': 8091},
'API': {'host': '0.0.0.0', 'port': 8092},
'PHPOX': {'host': '0.0.0.0', 'port': 8088},
'REDIS': {'host': 'tanner_redis', 'port': 6379, 'poolsize': 80, 'timeout': 1},
'EMULATORS': {'root_dir': '/opt/tanner'},
'EMULATOR_ENABLED': {'sqli': True, 'rfi': True, 'lfi': False, 'xss': True, 'cmd_exec': False,
@@ -25,7 +25,6 @@ config_template = {'DATA': {'db_config': '/opt/tanner/db/db_config.json',
'SQLI': {'type': 'SQLITE', 'db_name': 'tanner_db', 'host': 'localhost', 'user': 'root',
'password': 'user_pass'},
'XXE_INJECTION': {'OUT_OF_BAND': False},
'RFI': {"allow_insecure": True},
'DOCKER': {'host_image': 'busybox:latest'},
'LOGGER': {'log_debug': '/tmp/tanner/tanner.log', 'log_err': '/tmp/tanner/tanner.err'},
'MONGO': {'enabled': False, 'URI': 'mongodb://localhost'},
@@ -34,8 +33,7 @@ config_template = {'DATA': {'db_config': '/opt/tanner/db/db_config.json',
'LOCALLOG': {'enabled': True, 'PATH': '/var/log/tanner/tanner_report.json'},
'CLEANLOG': {'enabled': False},
'REMOTE_DOCKERFILE': {'GITHUB': "https://raw.githubusercontent.com/mushorg/tanner/master/docker/"
"tanner/template_injection/Dockerfile"},
'SESSIONS': {"delete_timeout": 300}
"tanner/template_injection/Dockerfile"}
}
@@ -34,7 +34,7 @@ services:
- adbhoney_local
ports:
- "5555:5555"
image: "dtagdevsec/adbhoney:2006"
image: "dtagdevsec/adbhoney:1903"
read_only: true
volumes:
- /data/adbhoney/log:/opt/adbhoney/log
@@ -50,7 +50,7 @@ services:
ports:
- "5000:5000/udp"
- "8443:8443"
image: "dtagdevsec/ciscoasa:2006"
image: "dtagdevsec/ciscoasa:1903"
read_only: true
volumes:
- /data/ciscoasa/log:/var/log/ciscoasa
@@ -63,7 +63,7 @@ services:
- citrixhoneypot_local
ports:
- "443:443"
image: "dtagdevsec/citrixhoneypot:2006"
image: "dtagdevsec/citrixhoneypot:1903"
read_only: true
volumes:
- /data/citrixhoneypot/logs:/opt/citrixhoneypot/logs
@@ -85,7 +85,7 @@ services:
ports:
- "161:161"
- "2404:2404"
image: "dtagdevsec/conpot:2006"
image: "dtagdevsec/conpot:1903"
read_only: true
volumes:
- /data/conpot/log:/var/log/conpot
@@ -106,7 +106,7 @@ services:
- conpot_local_guardian_ast
ports:
- "10001:10001"
image: "dtagdevsec/conpot:2006"
image: "dtagdevsec/conpot:1903"
read_only: true
volumes:
- /data/conpot/log:/var/log/conpot
@@ -127,7 +127,7 @@ services:
- conpot_local_ipmi
ports:
- "623:623"
image: "dtagdevsec/conpot:2006"
image: "dtagdevsec/conpot:1903"
read_only: true
volumes:
- /data/conpot/log:/var/log/conpot
@@ -149,7 +149,7 @@ services:
ports:
- "1025:1025"
- "50100:50100"
image: "dtagdevsec/conpot:2006"
image: "dtagdevsec/conpot:1903"
read_only: true
volumes:
- /data/conpot/log:/var/log/conpot
@@ -166,7 +166,7 @@ services:
ports:
- "22:22"
- "23:23"
image: "dtagdevsec/cowrie:2006"
image: "dtagdevsec/cowrie:1903"
read_only: true
volumes:
- /data/cowrie/downloads:/home/cowrie/cowrie/dl
@@ -198,7 +198,7 @@ services:
- "5060:5060/udp"
- "5061:5061"
- "27017:27017"
image: "dtagdevsec/dionaea:2006"
image: "dtagdevsec/dionaea:1903"
read_only: true
volumes:
- /data/dionaea/roots/ftp:/opt/dionaea/var/dionaea/roots/ftp
@@ -220,7 +220,7 @@ services:
network_mode: "host"
cap_add:
- NET_ADMIN
image: "dtagdevsec/glutton:2006"
image: "dtagdevsec/glutton:1903"
read_only: true
volumes:
- /data/glutton/log:/var/log/glutton
@@ -250,7 +250,7 @@ services:
- "1080:1080"
- "5432:5432"
- "5900:5900"
image: "dtagdevsec/heralding:2006"
image: "dtagdevsec/heralding:1903"
read_only: true
volumes:
- /data/heralding/log:/var/log/heralding
@@ -269,7 +269,7 @@ services:
- "2324:2324"
- "4096:4096"
- "9200:9200"
image: "dtagdevsec/honeypy:2006"
image: "dtagdevsec/honeypy:1903"
read_only: true
volumes:
- /data/honeypy/log:/opt/honeypy/log
@@ -301,7 +301,7 @@ services:
- medpot_local
ports:
- "2575:2575"
image: "dtagdevsec/medpot:2006"
image: "dtagdevsec/medpot:1903"
read_only: true
volumes:
- /data/medpot/log/:/var/log/medpot
@@ -322,7 +322,7 @@ services:
- rdpy_local
ports:
- "3389:3389"
image: "dtagdevsec/rdpy:2006"
image: "dtagdevsec/rdpy:1903"
read_only: true
volumes:
- /data/rdpy/log:/var/log/rdpy
@@ -335,7 +335,7 @@ services:
tty: true
networks:
- tanner_local
image: "dtagdevsec/redis:2006"
image: "dtagdevsec/redis:1903"
read_only: true

## PHP Sandbox service
@@ -345,7 +345,7 @@ services:
tty: true
networks:
- tanner_local
image: "dtagdevsec/phpox:2006"
image: "dtagdevsec/phpox:1903"
read_only: true

## Tanner API Service
@@ -357,7 +357,7 @@ services:
tty: true
networks:
- tanner_local
image: "dtagdevsec/tanner:2006"
image: "dtagdevsec/tanner:1903"
read_only: true
volumes:
- /data/tanner/log:/var/log/tanner
@@ -366,21 +366,21 @@ services:
- tanner_redis

## Tanner WEB Service
# tanner_web:
# container_name: tanner_web
# restart: always
# tmpfs:
# - /tmp/tanner:uid=2000,gid=2000
# tty: true
# networks:
# - tanner_local
# image: "dtagdevsec/tanner:2006"
# command: tannerweb
# read_only: true
# volumes:
# - /data/tanner/log:/var/log/tanner
# depends_on:
# - tanner_redis
tanner_web:
container_name: tanner_web
restart: always
tmpfs:
- /tmp/tanner:uid=2000,gid=2000
tty: true
networks:
- tanner_local
image: "dtagdevsec/tanner:1903"
command: tannerweb
read_only: true
volumes:
- /data/tanner/log:/var/log/tanner
depends_on:
- tanner_redis

## Tanner Service
tanner:
@@ -391,7 +391,7 @@ services:
tty: true
networks:
- tanner_local
image: "dtagdevsec/tanner:2006"
image: "dtagdevsec/tanner:1903"
command: tanner
read_only: true
volumes:
@@ -399,7 +399,7 @@ services:
- /data/tanner/files:/opt/tanner/files
depends_on:
- tanner_api
# - tanner_web
- tanner_web
- tanner_phpox

## Snare Service
@@ -411,7 +411,7 @@ services:
- tanner_local
ports:
- "80:80"
image: "dtagdevsec/snare:2006"
image: "dtagdevsec/snare:1903"
depends_on:
- tanner

@@ -429,7 +429,7 @@ services:
- NET_ADMIN
- SYS_NICE
- NET_RAW
image: "dtagdevsec/fatt:2006"
image: "dtagdevsec/fatt:1903"
volumes:
- /data/fatt/log:/opt/fatt/log

@@ -438,7 +438,7 @@ services:
container_name: p0f
restart: always
network_mode: "host"
image: "dtagdevsec/p0f:2006"
image: "dtagdevsec/p0f:1903"
read_only: true
volumes:
- /data/p0f/log:/var/log/p0f
@@ -455,7 +455,7 @@ services:
- NET_ADMIN
- SYS_NICE
- NET_RAW
image: "dtagdevsec/suricata:2006"
image: "dtagdevsec/suricata:1903"
volumes:
- /data/suricata/log:/var/log/suricata

@@ -472,7 +472,7 @@ services:
- cyberchef_local
ports:
- "127.0.0.1:64299:8000"
image: "dtagdevsec/cyberchef:2006"
image: "dtagdevsec/cyberchef:1903"
read_only: true

#### ELK
@@ -482,7 +482,7 @@ services:
restart: always
environment:
- bootstrap.memory_lock=true
- ES_JAVA_OPTS=-Xms2048m -Xmx2048m
- ES_JAVA_OPTS=-Xms1024m -Xmx1024m
- ES_TMPDIR=/tmp
cap_add:
- IPC_LOCK
@@ -553,7 +553,7 @@ services:
- EWS_HPFEEDS_FORMAT=json
env_file:
- /opt/tpot/etc/compose/elk_environment
image: "dtagdevsec/ewsposter:2006"
image: "dtagdevsec/ewsposter:1903"
volumes:
- /data:/data
- /data/ews/conf/ews.ip:/opt/ewsposter/ews.ip
@@ -599,6 +599,6 @@ services:
- spiderfoot_local
ports:
- "127.0.0.1:64303:8080"
image: "dtagdevsec/spiderfoot:2006"
image: "dtagdevsec/spiderfoot:1903"
volumes:
- /data/spiderfoot/spiderfoot.db:/home/spiderfoot/spiderfoot.db
@@ -16,11 +16,11 @@ actions:
disable_action: False
filters:
- filtertype: pattern
kind: timestring
value: '%Y.%m.%d'
kind: prefix
value: logstash-
- filtertype: age
source: name
direction: older
timestring: '%Y.%m.%d'
unit: days
unit_count: 60
unit_count: 90
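Note: this action file is consumed by elasticsearch-curator; with the new filters it deletes logstash- indices older than 90 days instead of 60. A minimal invocation sketch, with illustrative paths for the client config and the action file:

    curator --config /etc/curator/curator.yml /etc/curator/actions.yml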
@@ -11,21 +11,22 @@ myPROGRESSBOXCONF=" --backtitle "$myBACKTITLE" --progressbox 24 80"
mySITES="https://hub.docker.com https://github.com https://pypi.python.org https://debian.org"
myTPOTCOMPOSE="/opt/tpot/etc/tpot.yml"
myLSB_STABLE_SUPPORTED="stretch buster"
myLSB_TESTING_SUPPORTED="stable"
myLSB_TESTING_SUPPORTED="sid"
myREMOTESITES="https://hub.docker.com https://github.com https://pypi.python.org https://debian.org"
myPREINSTALLPACKAGES="aria2 apache2-utils cracklib-runtime curl dialog figlet fuse grc libcrack2 libpq-dev lsb-release netselect-apt net-tools software-properties-common toilet"
myINSTALLPACKAGES="aria2 apache2-utils apparmor apt-transport-https aufs-tools bash-completion build-essential ca-certificates cgroupfs-mount cockpit console-setup console-setup-linux cracklib-runtime curl debconf-utils dialog dnsutils docker.io docker-compose elasticsearch-curator ethtool fail2ban figlet genisoimage git glances grc haveged html2text htop iptables iw jq kbd libcrack2 libltdl7 libpam-google-authenticator man mosh multitail netselect-apt net-tools npm ntp openssh-server openssl pass pigz prips software-properties-common syslinux psmisc pv python3-pip toilet unattended-upgrades unzip vim wget wireless-tools wpasupplicant"
myPREINSTALLPACKAGES="aria2 apache2-utils curl cracklib-runtime dialog figlet fuse grc libcrack2 libpq-dev lsb-release netselect-apt net-tools software-properties-common toilet"
myINSTALLPACKAGES="aria2 apache2-utils apparmor apt-transport-https aufs-tools bash-completion build-essential ca-certificates cgroupfs-mount cockpit cockpit-docker console-setup console-setup-linux curl debconf-utils dialog dnsutils docker.io docker-compose ethtool fail2ban figlet genisoimage git glances grc haveged html2text htop iptables iw jq kbd libcrack2 libltdl7 man mosh multitail netselect-apt net-tools npm ntp openssh-server openssl pass pigz prips software-properties-common syslinux psmisc pv python3-pip toilet unattended-upgrades unzip vim wget wireless-tools wpasupplicant"
myINFO="\
###########################################
### T-Pot Installer for Debian (Stable) ###
###########################################
########################################
### T-Pot Installer for Debian (Sid) ###
########################################

Disclaimer:
This script will install T-Pot on this system.
By running the script you know what you are doing:
1. SSH will be reconfigured to tcp/64295.
2. Please ensure other means of access to this system in case something goes wrong.
3. At best this script will be executed on the console instead through a SSH session.
2. Your Debian installation will be upgraded to Sid / unstable.
3. Please ensure other means of access to this system in case something goes wrong.
4. At best this script will be executed on the console instead through a SSH session.

########################################

@@ -278,21 +279,21 @@ function fuCHECKNET {
# Install T-Pot dependencies
function fuGET_DEPS {
export DEBIAN_FRONTEND=noninteractive
# Determine fastest mirror
echo
echo "### Determine fastest mirror for your location."
echo
netselect-apt -n -a amd64 stable && cp sources.list /etc/apt/
mySOURCESCHECK=$(cat /etc/apt/sources.list | grep -c stable)
if [ "$mySOURCESCHECK" == "0" ]
then
echo "### Automatic mirror selection failed, using main mirror."
# Point to Debian (stable)
# # Determine fastest mirror
# echo
# echo "### Determine fastest mirror for your location."
# echo
# netselect-apt -n -a amd64 unstable && cp sources.list /etc/apt/
# mySOURCESCHECK=$(cat /etc/apt/sources.list | grep -c unstable)
# if [ "$mySOURCESCHECK" == "0" ]
# then
# echo "### Automatic mirror selection failed, using main mirror."
# Point to Debian (Sid, unstable)
tee /etc/apt/sources.list <<EOF
deb http://deb.debian.org/debian stable main contrib non-free
deb-src http://deb.debian.org/debian stable main contrib non-free
deb http://deb.debian.org/debian unstable main contrib non-free
deb-src http://deb.debian.org/debian unstable main contrib non-free
EOF
fi
# fi
echo
echo "### Getting update information."
echo
@@ -402,7 +403,7 @@ for i in "$@"
echo " A configuration example is available in \"tpotce/iso/installer/tpot.conf.dist\"."
echo
echo "--type=<[user, auto, iso]>"
echo " user, use this if you want to manually install a T-Pot on a Debian (Stable) machine."
echo " user, use this if you want to manually install a T-Pot on a Debian (testing) machine."
echo " auto, implied if a configuration file is passed as an argument for automatic deployment."
echo " iso, use this if you are a T-Pot developer and want to install a T-Pot from a pre-compiled iso."
echo
@@ -683,8 +684,8 @@ echo "UseRoaming no" | tee -a /etc/ssh/ssh_config

# Installing elasticdump, yq
fuBANNER "Installing pkgs"
npm install elasticdump -g
pip3 install yq
npm install https://github.com/taskrabbit/elasticsearch-dump -g
pip3 install elasticsearch-curator yq
hash -r

# Cloning T-Pot from GitHub
Some files were not shown because too many files have changed in this diff.