62 Commits

SHA1 Message Date
e588e62815 Update README.md 2020-03-16 16:38:39 +01:00
20cdb4f454 Update CHANGELOG.md 2020-03-16 16:29:39 +01:00
9d7b37b126 Merge pull request #585 from dtag-dev-sec/dev
Prepare release 19.03.3
2020-03-16 16:18:23 +01:00
62aae45dd6 prepare for release 19.03.3 2020-03-16 15:01:18 +00:00
21d48ca2bb remove honeysap for testing 2020-03-15 21:55:10 +00:00
80ee3cc5dd update elasticdump install location 2020-03-15 21:24:01 +00:00
67e70780bf tweaking for testing 2020-03-15 21:10:28 +00:00
5bbebd6fc4 Merge pull request #583 from dtag-dev-sec/t3chn0m4g3-patch-1
t3chn0m4g3 patch 1
2020-03-15 21:32:35 +01:00
cc70144c41 Update version 2020-03-15 21:29:10 +01:00
140a3d22ac Update update.sh 2020-03-15 21:28:46 +01:00
6a1f4f9aea Update update.sh 2020-03-15 21:27:33 +01:00
4409d9cdac Update tpot.seed 2020-03-15 21:25:44 +01:00
1452ca4e4c Update install.sh 2020-03-15 21:24:42 +01:00
313df2f644 Merge pull request #582 from dtag-dev-sec/master
sync
2020-03-15 21:20:57 +01:00
f6503cce3c Update update.sh 2020-03-15 21:13:07 +01:00
5badf352be deal with changes in sid
move to testing
cockpit-docker removed upstream, remove here
2020-03-15 21:11:26 +01:00
2201e072f6 testing honeysap 2020-03-12 16:02:43 +00:00
5192ce1dc7 Merge pull request #578 from dtag-dev-sec/dev
get top 100 src_ip's
2020-03-11 14:56:37 +01:00
5319c548ad get top 100 src_ip's 2020-03-11 13:51:49 +00:00
c32a150c51 typo 2020-03-10 16:49:41 +01:00
e77d24db08 Merge pull request #576 from dtag-dev-sec/dev
Dev
2020-03-10 16:47:31 +01:00
857190ec20 add 2fa, update reamde and changelog 2020-03-10 15:39:16 +00:00
809d598076 reactivate netselect-apt
automatic mirror detection needs ICMP
2020-03-10 10:12:50 +00:00
9a64c88aba Merge pull request #574 from dtag-dev-sec/dev
Update CHANGELOG.md
2020-03-09 15:15:23 +01:00
af3242e8d5 Update CHANGELOG.md 2020-03-09 15:14:46 +01:00
5ddf1fdd07 Merge pull request #573 from dtag-dev-sec/dev
bump version
2020-03-09 13:12:40 +01:00
020d4e9738 bump version 2020-03-09 12:11:13 +00:00
7081bafb6e Merge pull request #572 from dtag-dev-sec/dev
Bump NextGen to 20.06
2020-03-09 13:00:24 +01:00
fb06c46793 Merge branch 'dev' of https://github.com/dtag-dev-sec/tpotce into dev 2020-03-09 10:44:36 +00:00
f76d8ab161 update delivery window 2020-03-09 10:43:52 +00:00
a256ecedc8 Merge branch 'master' into dev 2020-03-09 11:20:39 +01:00
fb3777141b tanner, prepare merger w/ master 2020-03-09 09:44:26 +00:00
a18304dfdc tanner, prepare merger w/ master 2020-03-09 09:35:19 +00:00
6a703544c6 tweaking 2020-03-05 23:58:27 +00:00
941a0e1587 tweaking 2020-03-05 23:22:03 +00:00
692a21ddb1 tanner tweaking and testing
include unsecure, fix name bug
2020-03-05 23:12:49 +00:00
df22adb45d bump elk stack to 7.6.1 2020-03-05 21:20:11 +00:00
07c68c85bb tweaking 2020-03-04 14:36:03 +00:00
a4227e6a9f tweaking 2020-03-04 12:12:12 +00:00
3b8c959c66 tweaking 2020-03-03 12:30:57 +00:00
5d7a6f3270 tweaking 2020-03-02 15:23:05 +00:00
ee1342ce2a remove tanner_web from nextgen 2020-02-27 11:29:42 +00:00
53e9470d58 cleanup 2020-02-27 10:35:50 +00:00
21c68f75e2 tweaking 2020-02-26 14:43:02 +00:00
bf7d1299ca tweaking 2020-02-26 14:22:48 +00:00
70dca02ce4 tweaking 2020-02-25 16:59:22 +00:00
6bfcf8b1c4 tweaking 2020-02-24 16:43:34 +00:00
bd0e6936eb bump heralding to latest master
fixed by https://github.com/johnnykv/heralding/issues/129#event-3058184614
2020-02-21 11:38:29 +00:00
545209dce6 fix for honeytrap 2020-02-15 15:40:47 +00:00
153f7be9dc cleanup 2020-02-14 17:26:53 +00:00
faa5667246 bump adbhoney, cowrie, honeytrap to 20.06 2020-02-14 17:22:30 +00:00
aa4a93684d bump more images to 20.06 2020-02-14 15:30:55 +00:00
f11ad6b523 tweaking
ELK 7.6.0 is not ready for production, however it works if APM is enabled (disabled in config, so image won't build as a precaution)
Remove SISSDEN from ewsposter, suricata
Bump suricata to 5.0.1
Alpine now supports suricata incl. enabled JA3 support, move back to Alpine install
2020-02-14 15:28:06 +00:00
a49d560809 up java mem limit 2020-02-05 15:24:32 +00:00
ad861200de update mailoney 2020-02-03 14:46:43 +00:00
5ce5911ec1 cleanup 2020-02-03 12:59:21 +00:00
b9da9f04af adjust default field 2020-02-03 12:18:43 +00:00
92c0543c55 Merge branch 'dev' of https://github.com/dtag-dev-sec/tpotce into dev 2020-02-01 14:09:33 +00:00
984ba958fb logstash template not upgraded
with daily index enabled logstash will not be able to put new events into ES
simple solution: just delete the logstash template upon logstash start and leave it to logstash to upload the latest template
2020-02-01 14:08:23 +00:00
2d249ac6b1 tweak export script for new references 2020-01-31 17:43:04 +00:00
64729f5064 remove ilm support, breaks existing index at upgrade 2020-01-31 15:50:34 +00:00
5a4724bcba elk 7.x dev test 2020-01-31 14:21:55 +00:00
104 changed files with 912 additions and 932 deletions


@@ -1,5 +1,26 @@
 # Changelog
+## 20200316
+- **Move from Sid to Stable**
+  - Debian Stable now ships all the packages and versions we need for T-Pot. As a consequence we can now move to the `stable` branch.
+## 20200310
+- **Add 2FA to Cockpit**
+  - Just run `2fa.sh` to enable two-factor authentication in Cockpit.
+- **Find fastest mirror with netselect-apt**
+  - Netselect-apt will find the fastest mirror close to you (outgoing ICMP required).
+## 20200309
+- **Bump NextGen to 20.06**
+  - All NextGen images have been rebuilt from their latest master.
+  - Elastic Stack bumped to 7.6.1 (Elasticsearch now needs at least 2048MB of RAM, T-Pot at least 8GB) and tweaked to accommodate the changes of 7.x.
+  - Fixed errors in Tanner / Snare which will now handle downloads of malware via SSL and store them correctly (thanks to @afeena).
+  - Fixed errors in Heralding which will now improve on RDP connections (thanks to @johnnykv, @realsdx).
+  - Fixed error in honeytrap which will now build on Debian/Buster (thanks to @tillmannw).
+  - Mailoney now logs in JSON format (thanks to @monsherko).
+  - Base the T-Pot landing page on Heimdall.
+  - Tweaking of tools and some minor bug fixes.
 ## 20200116
 - **Bump ELK to latest 6.8.6**
 - **Update ISO image to fix upstream bug of missing kernel modules**


@@ -1,6 +1,6 @@
 ![T-Pot](doc/tpotsocial.png)

-T-Pot 19.03 runs on Debian (Sid), is based heavily on
+T-Pot 19.03 runs on Debian (Stable), is based heavily on
 [docker](https://www.docker.com/), [docker-compose](https://docs.docker.com/compose/)
@@ -43,7 +43,6 @@ Furthermore we use the following tools
 # Table of Contents
-- [Changelog](#changelog)
 - [Technical Concept](#concept)
 - [System Requirements](#requirements)
 - [Installation](#installation)
@@ -74,66 +73,11 @@ Furthermore we use the following tools
 - [Credits](#credits)
 - [Stay tuned](#staytuned)
 - [Testimonial](#testimonial)
-- [Fun Fact](#funfact)
-
-<a name="changelog"></a>
-# Release Notes
-- **Move from Ubuntu 18.04 to Debian (Sid)**
-  - For almost 5 years Ubuntu LTS versions were our distributions of choice. Last year we made a design choice for T-Pot to be closer to a rolling release model and thus allowing us to issue smaller changes and releases in a more timely manner. The distribution of choice is Debian (Sid / unstable) which will provide us with the latest advancements in a Debian based distribution.
-- **Include HoneyPy honeypot**
-  - *HoneyPy* is now included in the NEXTGEN installation type
-- **Include Suricata 4.1.3**
-  - Building *Suricata 4.1.3* from scratch to enable JA3 and overall better protocol support.
-- **Update tools to the latest versions**
-  - ELK Stack 6.6.2
-  - CyberChef 8.27.0
-  - SpiderFoot v3.0
-  - Cockpit 188
-  - NGINX is now built to enforce TLS 1.3 on the T-Pot WebUI
-- **Update honeypots**
-  - Where possible / feasible the honeypots have been updated to their latest versions.
-  - *Cowrie* now supports *HASSH* generated hashes which allows for an easier identification of an attacker accross IP adresses.
-  - *Heralding* now supports *SOCKS5* emulation.
-- **Update Dashboards & Visualizations**
-  - *Offset Dashboard* added to easily spot changes in attacks on a single dashboard in 24h time window.
-  - *Cowrie Dashboard* modified to integrate *HASSH* support / visualizations.
-  - *HoneyPy Dashboard* added to support latest honeypot addition.
-  - *Suricata Dashboard* modified to integrate *JA3* support / visualizations.
-- **Debian mirror selection**
-  - During base install you now have to manually select a mirror.
-  - Upon T-Pot install the mirror closest to you will be determined automatically, `netselect-apt` requires you to allow ICMP outbound.
-  - This solves peering problems for most of the users speeding up installation and updates.
-- **Bugs**
-  - Fixed issue #298 where the import and export of objects on the shell did not work.
-  - Fixed issue #313 where Spiderfoot raised a KeyError, which was previously fixed in upstream.
-  - Fixed error in Suricata where path for reference.config changed.
-- **Release Cycle**
-  - As far as possible we will integrate changes now faster into the master branch, eliminating the need for monolithic releases. The update feature will be continuously improved on that behalf. However this might not account for all feature changes.
-- **HPFEEDS Opt-In**
-  - If you want to share your T-Pot data with a 3rd party HPFEEDS broker such as [SISSDEN](https://sissden.eu) you can do so by creating an account at the SISSDEN portal and run `hpfeeds_optin.sh` on T-Pot.
-- **Update Feature**
-  - For the ones who like to live on the bleeding edge of T-Pot development there is now an update script available in `/opt/tpot/update.sh`.
-  - This feature is beta and is mostly intended to provide you with the latest development advances without the need of reinstalling T-Pot.
-- **Deprecated tools**
-  - *ctop* will no longer be part of T-Pot.
-- **Fix #332**
-  - If T-Pot, opposed to the requirements, does not have full internet access netselect-apt fails to determine the fastest mirror as it needs ICMP and UDP outgoing. Should netselect-apt fail the default mirrors will be used.
-- **Improve install speed with apt-fast**
-  - Migrating from a stable base install to Debian (Sid) requires downloading lots of packages. Depending on your geo location the download speed was already improved by introducing netselect-apt to determine the fastest mirror. With apt-fast the downloads will be even faster by downloading packages not only in parallel but also with multiple connections per package.
-- **HPFEEDS Opt-In commandline option**
-  - Pass a hpfeeds config file as a commandline argument
-  - hpfeeds config is saved in `/data/ews/conf/hpfeeds.cfg`
-  - Update script restores hpfeeds config
-- **Ansible T-Pot Deployment**
-  - Transitioned from bash script to all Ansible
-  - Reusable Ansible Playbook for OpenStack clouds
-  - Example Showcase with our Open Telekom Cloud
-  - Adaptable for other cloud providers
-
 <a name="concept"></a>
 # Technical Concept
-T-Pot is based on the network installer Debian (Stretch). During installation the whole system will be updated to Debian (Sid).
+T-Pot is based on the network installer Debian (Stable).
 The honeypot daemons as well as other support components being used have been containerized using [docker](http://docker.io).
 This allows us to run multiple honeypot daemons on the same network interface while maintaining a small footprint and constrain each honeypot within its own environment.
@@ -302,7 +246,7 @@ In some cases it is necessary to install Debian 9.7 (Stretch) on your own:
 - Within your company you have to setup special policies, software etc.
 - You just like to stay on top of things.

-The T-Pot Universal Installer will upgrade the system to Debian (Sid) and install all required T-Pot dependencies.
+The T-Pot Universal Installer will upgrade the system and install all required T-Pot dependencies.

 Just follow these steps:
@@ -387,7 +331,7 @@ In case you need external Admin UI access, forward TCP port 64294 to T-Pot, see
 In case you need external SSH access, forward TCP port 64295 to T-Pot, see below.
 In case you need external Web UI access, forward TCP port 64297 to T-Pot, see below.

-T-Pot requires outgoing git, http, https connections for updates (Debian, Docker, GitHub, PyPi) and attack submission (ewsposter, hpfeeds). Ports and availability may vary based on your geographical location.
+T-Pot requires outgoing git, http, https connections for updates (Debian, Docker, GitHub, PyPi) and attack submission (ewsposter, hpfeeds). Ports and availability may vary based on your geographical location. Also, during first install outgoing ICMP is additionally required to find the closest and fastest mirror to you.

 <a name="updates"></a>
 # Updates
@@ -396,7 +340,7 @@ For the ones of you who want to live on the bleeding edge of T-Pot development w
 The Update script will:
 - **mercilessly** overwrite local changes to be in sync with the T-Pot master branch
-- upgrade the system to the packages available in Debian (Sid)
+- upgrade the system to the packages available in Debian (Stable)
 - update all resources to be in-sync with the T-Pot master branch
 - ensure all T-Pot relevant system files will be patched / copied into the original T-Pot state
 - restore your custom ews.cfg and HPFEED settings from `/data/ews/conf`
@@ -424,6 +368,8 @@ If you do not have a SSH client at hand and still want to access the machine via
 - user: **[tsec or user]** *you chose during one of the post install methods*
 - pass: **[password]** *you chose during the installation*
+
+You can also add two factor authentication to Cockpit just by running `2fa.sh` on the command line.

 ![Cockpit Terminal](doc/cockpit3.png)

 <a name="kibana"></a>
@@ -487,9 +433,8 @@ We encourage you not to disable the data submission as it is the main purpose of
 <a name="hpfeeds-optin"></a>
 ## Opt-In HPFEEDS Data Submission
-As an Opt-In it is now possible to also share T-Pot data with 3rd party HPFEEDS brokers, such as [SISSDEN](https://sissden.eu).
-If you want to share your T-Pot data you simply have to register an account with a 3rd party broker with its own benefits towards the community. Once registered you will receive your credentials to share events with the broker. In T-Pot you simply run `hpfeeds_optin.sh` which will ask for your credentials, in case of SISSDEN this is just `Ident` and `Secret`, everything else is pre-configured.
-It will automatically update `/opt/tpot/etc/tpot.yml` to deliver events to your desired broker.
+As an Opt-In it is now possible to also share T-Pot data with 3rd party HPFEEDS brokers.
+If you want to share your T-Pot data you simply have to register an account with a 3rd party broker with its own benefits towards the community. You simply run `hpfeeds_optin.sh` which will ask for your credentials. It will automatically update `/opt/tpot/etc/tpot.yml` to deliver events to your desired broker.

 The script can accept a config file as an argument, e.g. `./hpfeeds_optin.sh --conf=hpfeeds.cfg`
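The layout of that config file is not shown on this page; a minimal sketch, assuming it is a shell-sourceable file using the same variable names the opt-in script assigns interactively (the host, ident and secret below are placeholders, not a real broker):

```shell
# hypothetical hpfeeds.cfg - variable names mirror those assigned by hpfeeds_optin.sh,
# every value below is a placeholder
myENABLE="true"
myHOST="hpfeeds.example.com"
myPORT="10000"
myCHANNEL="t-pot.events"
myCERT="false"
myIDENT="your-ident"
mySECRET="your-secret"
myFORMAT="json"
```

Passed via `--conf=`, such a file would let the script fill in the broker details non-interactively.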
@@ -586,8 +531,3 @@ We will be releasing a new version of T-Pot about every 6-12 months.
 # Testimonial
 One of the greatest feedback we have gotten so far is by one of the Conpot developers:<br>
 ***"[...] I highly recommend T-Pot which is ... it's not exactly a swiss army knife .. it's more like a swiss army soldier, equipped with a swiss army knife. Inside a tank. A swiss tank. [...]"***
-
-<a name="funfact"></a>
-# Fun Fact
-In an effort of saving the environment we are now brewing our own Mate Ice Tea and consumed 73 liters so far for the T-Pot 19.03 development 😇

bin/2fa.sh Executable file

@@ -0,0 +1,77 @@
#!/bin/bash
# Make sure script is started as non-root.
myWHOAMI=$(whoami)
if [ "$myWHOAMI" = "root" ]
  then
    echo "Need to run as non-root ..."
    echo ""
    exit
fi

# set vars, check deps
myPAM_COCKPIT_FILE="/etc/pam.d/cockpit"
if ! [ -s "$myPAM_COCKPIT_FILE" ];
  then
    echo "### Cockpit PAM module config does not exist. Something went wrong."
    echo ""
    exit 1
fi
myPAM_COCKPIT_GA="
# google authenticator for two-factor
auth required pam_google_authenticator.so
"
myAUTHENTICATOR=$(which google-authenticator)
if [ "$myAUTHENTICATOR" == "" ];
  then
    echo "### Could not locate google-authenticator, trying to install (if asked provide root password)."
    echo ""
    sudo apt-get update
    sudo apt-get install -y libpam-google-authenticator
    # re-run this script once the dependency is installed
    exec "$0" "$@"
    exit 1
fi

# write PAM changes
function fuWRITE_PAM_CHANGES {
  myCHECK=$(grep -c "google" "$myPAM_COCKPIT_FILE")
  if ! [ "$myCHECK" == "0" ];
    then
      echo "### PAM config already enabled. Skipped."
      echo ""
    else
      echo "### Updating PAM config for Cockpit (if asked provide root password)."
      echo "$myPAM_COCKPIT_GA" | sudo tee -a "$myPAM_COCKPIT_FILE"
      sudo systemctl restart cockpit
  fi
}

# create 2fa token
function fuGEN_TOKEN {
  echo "### Now generating token for Google Authenticator."
  echo ""
  google-authenticator -t -d -r 3 -R 30 -w 17
}

# main
echo "### This script will enable Two Factor Authentication for Cockpit."
echo ""
echo "### Please download one of the many authenticator apps from the appstore of your choice."
echo ""
while true;
  do
    read -p "### Ready to start (y/n)? " myANSWER
    case $myANSWER in
      [Yy]* ) echo "### OK. Starting ..."; break;;
      [Nn]* ) echo "### Exiting."; exit;;
    esac
done
fuWRITE_PAM_CHANGES
fuGEN_TOKEN
echo "Done. Re-run this script for every user who needs Cockpit access."
echo ""


@@ -32,7 +32,7 @@ trap fuCLEANUP EXIT
 # Export index patterns
 mkdir -p patterns
 echo $myCOL1"### Now exporting"$myCOL0 $myINDEXCOUNT $myCOL1"index pattern fields." $myCOL0
-curl -s -XGET ''$myKIBANA'api/saved_objects/index-pattern/'$myINDEXID'' | jq '. | {attributes}' > patterns/$myINDEXID.json &
+curl -s -XGET ''$myKIBANA'api/saved_objects/index-pattern/'$myINDEXID'' | jq '. | {attributes, references}' > patterns/$myINDEXID.json &
 echo

@@ -41,7 +41,7 @@ echo $myCOL1"### Now exporting"$myCOL0 $(echo $myDASHBOARDS | wc -w) $myCOL1"das
 for i in $myDASHBOARDS;
   do
     echo $myCOL1"###### "$i $myCOL0
-    curl -s -XGET ''$myKIBANA'api/saved_objects/dashboard/'$i'' | jq '. | {attributes}' > dashboards/$i.json &
+    curl -s -XGET ''$myKIBANA'api/saved_objects/dashboard/'$i'' | jq '. | {attributes, references}' > dashboards/$i.json &
   done;
 echo

@@ -51,7 +51,7 @@ echo $myCOL1"### Now exporting"$myCOL0 $(echo $myVISUALIZATIONS | wc -w) $myCOL1
 for i in $myVISUALIZATIONS;
   do
     echo $myCOL1"###### "$i $myCOL0
-    curl -s -XGET ''$myKIBANA'api/saved_objects/visualization/'$i'' | jq '. | {attributes}' > visualizations/$i.json &
+    curl -s -XGET ''$myKIBANA'api/saved_objects/visualization/'$i'' | jq '. | {attributes, references}' > visualizations/$i.json &
   done;
 echo

@@ -61,7 +61,7 @@ echo $myCOL1"### Now exporting"$myCOL0 $(echo $mySEARCHES | wc -w) $myCOL1"searc
 for i in $mySEARCHES;
   do
     echo $myCOL1"###### "$i $myCOL0
-    curl -s -XGET ''$myKIBANA'api/saved_objects/search/'$i'' | jq '. | {attributes}' > searches/$i.json &
+    curl -s -XGET ''$myKIBANA'api/saved_objects/search/'$i'' | jq '. | {attributes, references}' > searches/$i.json &
   done;
 echo
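The switch from `{attributes}` to `{attributes, references}` keeps the `references` array that newer Kibana saved objects use to link dashboards to their visualizations and index patterns. An offline sketch of the difference, run against a made-up minimal saved object (the JSON below is illustrative, not real API output):

```shell
# made-up minimal saved object in the shape the Kibana saved_objects API returns
myOBJECT='{"id":"abc","attributes":{"title":"Example Dashboard"},"references":[{"id":"def","name":"panel_0","type":"visualization"}]}'

# old filter: keeps only the attributes key, dropping the references array
echo "$myOBJECT" | jq '. | {attributes}'

# new filter: keeps both attributes and references
echo "$myOBJECT" | jq '. | {attributes, references}'
```

Without `references`, a re-imported dashboard would lose the links to its panels.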


@@ -10,20 +10,6 @@ fi
 myTPOTYMLFILE="/opt/tpot/etc/tpot.yml"

-function fuSISSDEN () {
-  echo
-  echo "You chose SISSDEN, you just need to provide ident and secret"
-  echo
-  myENABLE="true"
-  myHOST="hpfeeds.sissden.eu"
-  myPORT="10000"
-  myCHANNEL="t-pot.events"
-  myCERT="/opt/ewsposter/sissden.pem"
-  read -p "Ident: " myIDENT
-  read -p "Secret: " mySECRET
-  myFORMAT="json"
-}
-
 function fuGENERIC () {
   echo
   echo "You chose generic, please provide all the details of the broker"

@@ -119,8 +105,7 @@ echo
 echo
 echo "Please choose your broker"
 echo "---------------------------"
-echo "[1] - SISSDEN"
-echo "[2] - Generic (enter details manually)"
+echo "[1] - Generic (enter details manually)"
 echo "[0] - Opt out of HPFEEDS"
 echo "[q] - Do not agree and exit"
 echo

@@ -130,10 +115,6 @@ while [ 1 != 2 ]
   echo $mySELECT
   case "$mySELECT" in
     [1])
-      fuSISSDEN
-      break
-      ;;
-    [2])
       fuGENERIC
       break
       ;;

bin/mytopips.sh Executable file

@@ -0,0 +1,27 @@
#!/bin/bash
# Make sure ES is available
myES="http://127.0.0.1:64298/"
myESSTATUS=$(curl -s -XGET ''$myES'_cluster/health' | jq '.' | grep -c green)
if ! [ "$myESSTATUS" = "1" ]
  then
    echo "### Elasticsearch is not available, try starting via 'systemctl start elk'."
    exit 1
  else
    echo "### Elasticsearch is available, now continuing."
    echo
fi

function fuMYTOPIPS {
  curl -s -XGET $myES"_search" -H 'Content-Type: application/json' -d'
  {
    "aggs": {
      "ips": {
        "terms": { "field": "src_ip.keyword", "size": 100 }
      }
    },
    "size" : 0
  }'
}

echo "### Aggregating top 100 source IPs in ES"
fuMYTOPIPS | jq '.aggregations.ips.buckets[].key' | tr -d '"'
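The final jq pipeline only extracts the bucket keys from the terms aggregation. Against a canned, made-up response (truncated to the fields the pipeline touches) it behaves like this:

```shell
# made-up, truncated Elasticsearch terms-aggregation response
myRESPONSE='{"took":3,"aggregations":{"ips":{"buckets":[{"key":"203.0.113.5","doc_count":42},{"key":"198.51.100.7","doc_count":17}]}}}'

# same extraction as applied to the fuMYTOPIPS output above: one bare IP per line
echo "$myRESPONSE" | jq '.aggregations.ips.buckets[].key' | tr -d '"'
# prints:
# 203.0.113.5
# 198.51.100.7
```

The buckets arrive sorted by `doc_count`, so the output is already the top-N list.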


@@ -1,10 +1,11 @@
-FROM alpine
+FROM alpine:latest
 #
 # Include dist
 ADD dist/ /root/dist/
 #
 # Install packages
-RUN apk -U add \
+RUN sed -i 's/dl-cdn/dl-2/g' /etc/apk/repositories && \
+    apk -U add \
     git \
     libcap \
     python3 \
@@ -20,7 +21,7 @@ RUN apk -U add \
     addgroup -g 2000 adbhoney && \
     adduser -S -H -s /bin/ash -u 2000 -D -g 2000 adbhoney && \
     chown -R adbhoney:adbhoney /opt/adbhoney && \
-    setcap cap_net_bind_service=+ep /usr/bin/python3.7 && \
+    setcap cap_net_bind_service=+ep /usr/bin/python3.8 && \
 #
 # Clean up
     apk del --purge git \
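The `sed -i 's/dl-cdn/dl-2/g' /etc/apk/repositories` line that now precedes the `apk` calls in these Dockerfiles swaps the CDN host for the dl-2 mirror. An offline illustration of the substitution on sample repositories content (the file contents below are just an example):

```shell
# sample /etc/apk/repositories contents (example only)
myREPOS='http://dl-cdn.alpinelinux.org/alpine/v3.11/main
http://dl-cdn.alpinelinux.org/alpine/v3.11/community'

# same substitution the Dockerfiles apply in place with sed -i
echo "$myREPOS" | sed 's/dl-cdn/dl-2/g'
# prints:
# http://dl-2.alpinelinux.org/alpine/v3.11/main
# http://dl-2.alpinelinux.org/alpine/v3.11/community
```

Since the swap happens before the first `apk` invocation, all package downloads during the build use the alternate mirror.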


@@ -14,7 +14,7 @@ services:
      - adbhoney_local
    ports:
      - "5555:5555"
-    image: "dtagdevsec/adbhoney:1903"
+    image: "dtagdevsec/adbhoney:2006"
    read_only: true
    volumes:
      - /data/adbhoney/log:/opt/adbhoney/log


@@ -1,10 +1,11 @@
-FROM alpine
+FROM alpine:latest
 #
 # Include dist
 ADD dist/ /root/dist/
 #
 # Setup env and apt
-RUN apk -U upgrade && \
+RUN sed -i 's/dl-cdn/dl-2/g' /etc/apk/repositories && \
+    apk -U upgrade && \
     apk add build-base \
     git \
     libffi \
@@ -23,7 +24,6 @@ RUN apk -U upgrade && \
     cd /opt/ && \
     git clone --depth=1 https://github.com/cymmetria/ciscoasa_honeypot && \
     cd ciscoasa_honeypot && \
-    pip3 install --no-cache-dir --upgrade pip && \
     pip3 install --no-cache-dir -r requirements.txt && \
     cp /root/dist/asa_server.py /opt/ciscoasa_honeypot && \
     chown -R ciscoasa:ciscoasa /opt/ciscoasa_honeypot && \


@@ -13,7 +13,7 @@ services:
    ports:
      - "5000:5000/udp"
      - "8443:8443"
-    image: "dtagdevsec/ciscoasa:1903"
+    image: "dtagdevsec/ciscoasa:2006"
    read_only: true
    volumes:
      - /data/ciscoasa/log:/var/log/ciscoasa


@@ -1,7 +1,8 @@
-FROM alpine
+FROM alpine:latest
 #
 # Install packages
-RUN apk -U add \
+RUN sed -i 's/dl-cdn/dl-2/g' /etc/apk/repositories && \
+    apk -U add \
     git \
     libcap \
     openssl \


@@ -14,7 +14,7 @@ services:
      - citrixhoneypot_local
    ports:
      - "443:443"
-    image: "dtagdevsec/citrixhoneypot:1903"
+    image: "dtagdevsec/citrixhoneypot:2006"
    read_only: true
    volumes:
      - /data/citrixhoneypot/logs:/opt/citrixhoneypot/logs


@@ -35,7 +35,7 @@ services:
      - "2121:21"
      - "44818:44818"
      - "47808:47808"
-    image: "dtagdevsec/conpot:1903"
+    image: "dtagdevsec/conpot:2006"
    read_only: true
    volumes:
      - /data/conpot/log:/var/log/conpot
@@ -58,7 +58,7 @@ services:
    ports:
     # - "161:161"
      - "2404:2404"
-    image: "dtagdevsec/conpot:1903"
+    image: "dtagdevsec/conpot:2006"
    read_only: true
    volumes:
      - /data/conpot/log:/var/log/conpot
@@ -80,7 +80,7 @@ services:
      - conpot_local_guardian_ast
    ports:
      - "10001:10001"
-    image: "dtagdevsec/conpot:1903"
+    image: "dtagdevsec/conpot:2006"
    read_only: true
    volumes:
      - /data/conpot/log:/var/log/conpot
@@ -102,7 +102,7 @@ services:
      - conpot_local_ipmi
    ports:
      - "623:623"
-    image: "dtagdevsec/conpot:1903"
+    image: "dtagdevsec/conpot:2006"
    read_only: true
    volumes:
      - /data/conpot/log:/var/log/conpot
@@ -125,7 +125,7 @@ services:
    ports:
      - "1025:1025"
      - "50100:50100"
-    image: "dtagdevsec/conpot:1903"
+    image: "dtagdevsec/conpot:2006"
    read_only: true
    volumes:
      - /data/conpot/log:/var/log/conpot


@@ -4,7 +4,8 @@ FROM alpine
 ADD dist/ /root/dist/
 #
 # Get and install dependencies & packages
-RUN apk -U add \
+RUN sed -i 's/dl-cdn/dl-2/g' /etc/apk/repositories && \
+    apk -U add \
     bash \
     build-base \
     git \
@@ -29,7 +30,7 @@ RUN apk -U add \
 # Install cowrie
     mkdir -p /home/cowrie && \
     cd /home/cowrie && \
-    git clone --depth=1 https://github.com/micheloosterhof/cowrie -b v2.0.0 && \
+    git clone --depth=1 https://github.com/micheloosterhof/cowrie -b v2.0.2 && \
     cd cowrie && \
     mkdir -p log && \
     pip3 install --upgrade pip && \


@@ -1,70 +0,0 @@
FROM alpine
# Include dist
ADD dist/ /root/dist/
# Get and install dependencies & packages
RUN apk -U --no-cache add \
    bash \
    build-base \
    git \
    gmp-dev \
    libcap \
    libffi-dev \
    mpc1-dev \
    mpfr-dev \
    openssl \
    openssl-dev \
    python \
    python-dev \
    py-bcrypt \
    py-mysqldb \
    py-pip \
    py-requests \
    py-setuptools && \
# Setup user
    addgroup -g 2000 cowrie && \
    adduser -S -s /bin/ash -u 2000 -D -g 2000 cowrie && \
# Install cowrie
    mkdir -p /home/cowrie && \
    cd /home/cowrie && \
    git clone --depth=1 https://github.com/micheloosterhof/cowrie -b 1.5.3 && \
    cd cowrie && \
    mkdir -p log && \
    pip install --upgrade pip && \
    pip install --upgrade -r requirements.txt && \
# Setup configs
    setcap cap_net_bind_service=+ep /usr/bin/python2.7 && \
    cp /root/dist/cowrie.cfg /home/cowrie/cowrie/cowrie.cfg && \
    chown cowrie:cowrie -R /home/cowrie/* /usr/lib/python2.7/site-packages/twisted/plugins && \
# Start Cowrie once to prevent dropin.cache errors upon container start caused by read-only filesystem
    su - cowrie -c "export PYTHONPATH=/home/cowrie/cowrie:/home/cowrie/cowrie/src && \
                    cd /home/cowrie/cowrie && \
                    /usr/bin/twistd --uid=2000 --gid=2000 -y cowrie.tac --pidfile cowrie.pid cowrie &" && \
    sleep 10 && \
# Clean up
    apk del --purge build-base \
        git \
        gmp-dev \
        libcap \
        libffi-dev \
        mpc1-dev \
        mpfr-dev \
        openssl-dev \
        python-dev \
        py-mysqldb \
        py-pip && \
    rm -rf /root/* && \
    rm -rf /var/cache/apk/* && \
    rm -rf /home/cowrie/cowrie/cowrie.pid
# Start cowrie
ENV PYTHONPATH /home/cowrie/cowrie:/home/cowrie/cowrie/src
WORKDIR /home/cowrie/cowrie
USER cowrie:cowrie
CMD ["/usr/bin/twistd", "--nodaemon", "-y", "cowrie.tac", "--pidfile", "/tmp/cowrie/cowrie.pid", "cowrie"]


@@ -18,7 +18,7 @@ services:
    ports:
      - "22:22"
      - "23:23"
-    image: "dtagdevsec/cowrie:1903"
+    image: "dtagdevsec/cowrie:2006"
    read_only: true
    volumes:
      - /data/cowrie/downloads:/home/cowrie/cowrie/dl


@@ -14,5 +14,5 @@ services:
      - cyberchef_local
    ports:
      - "127.0.0.1:64299:8000"
-    image: "dtagdevsec/cyberchef:1903"
+    image: "dtagdevsec/cyberchef:2006"
    read_only: true


Binary image file changed (Size: 793 KiB before and after)


@@ -1,10 +1,11 @@
 ### This is only for testing purposes, do NOT use for production
-FROM alpine
+FROM alpine:latest
+#
 ADD dist/ /root/dist/
+#
 # Install packages
-RUN apk -U --no-cache add \
+RUN sed -i 's/dl-cdn/dl-2/g' /etc/apk/repositories && \
+    apk -U --no-cache add \
     build-base \
     coreutils \
     git \
@@ -15,7 +16,7 @@ RUN apk -U --no-cache add \
     python \
     python-dev \
     sqlite && \
+#
 # Install php sandbox from git
     git clone --depth=1 https://github.com/rep/hpfeeds /opt/hpfeeds && \
     cd /opt/hpfeeds/broker && \
@@ -23,10 +24,10 @@ RUN apk -U --no-cache add \
     cp /root/dist/adduser.sql . && \
     cd /opt/hpfeeds/broker && timeout 5 python broker.py || : && \
     sqlite3 db.sqlite3 < adduser.sql && \
+#
     #python setup.py build && \
     #python setup.py install && \
+#
 # Clean up
     apk del --purge autoconf \
     build-base \
@@ -35,7 +36,7 @@ RUN apk -U --no-cache add \
     python-dev && \
     rm -rf /root/* && \
     rm -rf /var/cache/apk/*
+#
 # Set workdir and start glastopf
 WORKDIR /opt/hpfeeds/broker
 CMD python broker.py

@@ -1,4 +1,4 @@
-FROM alpine
+FROM alpine:latest
 #
 # Include dist
 ADD dist/ /root/dist/

(binary image changed: 16 KiB before, 16 KiB after)

@@ -27,7 +27,7 @@ services:
       - "5060:5060/udp"
       - "5061:5061"
       - "27017:27017"
-    image: "dtagdevsec/dionaea:1903"
+    image: "dtagdevsec/dionaea:2006"
     read_only: true
     volumes:
       - /data/dionaea/roots/ftp:/opt/dionaea/var/dionaea/roots/ftp

docker/docker-compose.yml (new file, 160 lines)

@@ -0,0 +1,160 @@
# T-Pot Image Builder (use only for building docker images)
version: '2.3'
services:
##################
#### Honeypots
##################
# Adbhoney service
  adbhoney:
    build: adbhoney/.
    image: "dtagdevsec/adbhoney:2006"
# Ciscoasa service
  ciscoasa:
    build: ciscoasa/.
    image: "dtagdevsec/ciscoasa:2006"
# CitrixHoneypot service
  citrixhoneypot:
    build: citrixhoneypot/.
    image: "dtagdevsec/citrixhoneypot:2006"
# Conpot IEC104 service
  conpot_IEC104:
    build: conpot/.
    image: "dtagdevsec/conpot:2006"
# Cowrie service
  cowrie:
    build: cowrie/.
    image: "dtagdevsec/cowrie:2006"
# Dionaea service
  dionaea:
    build: dionaea/.
    image: "dtagdevsec/dionaea:2006"
# Glutton service
  glutton:
    build: glutton/.
    image: "dtagdevsec/glutton:2006"
# Heralding service
  heralding:
    build: heralding/.
    image: "dtagdevsec/heralding:2006"
# HoneyPy service
  honeypy:
    build: honeypy/.
    image: "dtagdevsec/honeypy:2006"
# Honeytrap service
  honeytrap:
    build: honeytrap/.
    image: "dtagdevsec/honeytrap:2006"
# Mailoney service
  mailoney:
    build: mailoney/.
    image: "dtagdevsec/mailoney:2006"
# Medpot service
  medpot:
    build: medpot/.
    image: "dtagdevsec/medpot:2006"
# Rdpy service
  rdpy:
    build: rdpy/.
    image: "dtagdevsec/rdpy:2006"
#### Snare / Tanner
## Tanner Redis Service
  tanner_redis:
    build: tanner/redis/.
    image: "dtagdevsec/redis:2006"
## PHP Sandbox service
  tanner_phpox:
    build: tanner/phpox/.
    image: "dtagdevsec/phpox:2006"
## Tanner API Service
  tanner_api:
    build: tanner/tanner/.
    image: "dtagdevsec/tanner:2006"
## Snare Service
  snare:
    build: tanner/snare/.
    image: "dtagdevsec/snare:2006"
##################
#### NSM
##################
# Fatt service
  fatt:
    build: fatt/.
    image: "dtagdevsec/fatt:2006"
# P0f service
  p0f:
    build: p0f/.
    image: "dtagdevsec/p0f:2006"
# Suricata service
  suricata:
    build: suricata/.
    image: "dtagdevsec/suricata:2006"
##################
#### Tools
##################
# Cyberchef service
  cyberchef:
    build: cyberchef/.
    image: "dtagdevsec/cyberchef:2006"
#### ELK
## Elasticsearch service
  elasticsearch:
    build: elk/elasticsearch/.
    image: "dtagdevsec/elasticsearch:2006"
## Kibana service
  kibana:
    build: elk/kibana/.
    image: "dtagdevsec/kibana:2006"
## Logstash service
  logstash:
    build: elk/logstash/.
    image: "dtagdevsec/logstash:2006"
## Elasticsearch-head service
  head:
    build: elk/head/.
    image: "dtagdevsec/head:2006"
# Ewsposter service
  ewsposter:
    build: ews/.
    image: "dtagdevsec/ewsposter:2006"
# Nginx service
  nginx:
    build: heimdall/.
    image: "dtagdevsec/nginx:2006"
# Spiderfoot service
  spiderfoot:
    build: spiderfoot/.
    image: "dtagdevsec/spiderfoot:2006"

@@ -1,4 +1,4 @@
-FROM alpine
+FROM alpine:latest
 #
 # Include dist
 ADD dist/ /root/dist/

@@ -14,7 +14,7 @@ services:
       - elasticpot_local
     ports:
       - "9200:9200"
-    image: "dtagdevsec/elasticpot:1903"
+    image: "dtagdevsec/elasticpot:2006"
     read_only: true
     volumes:
       - /data/elasticpot/log:/opt/ElasticpotPY/log

@@ -24,7 +24,7 @@ services:
     mem_limit: 4g
     ports:
       - "127.0.0.1:64298:9200"
-    image: "dtagdevsec/elasticsearch:1903"
+    image: "dtagdevsec/elasticsearch:2006"
     volumes:
       - /data:/data
@@ -39,7 +39,7 @@ services:
         condition: service_healthy
     ports:
       - "127.0.0.1:64296:5601"
-    image: "dtagdevsec/kibana:1903"
+    image: "dtagdevsec/kibana:2006"
 ## Logstash service
   logstash:
@@ -51,10 +51,10 @@ services:
         condition: service_healthy
     env_file:
       - /opt/tpot/etc/compose/elk_environment
-    image: "dtagdevsec/logstash:1903"
+    image: "dtagdevsec/logstash:2006"
     volumes:
       - /data:/data
-      - /root/tpotce/docker/elk/logstash/dist/logstash.conf:/etc/logstash/conf.d/logstash.conf
+#      - /root/tpotce/docker/elk/logstash/dist/logstash.conf:/etc/logstash/conf.d/logstash.conf
 ## Elasticsearch-head service
   head:
@@ -66,5 +66,5 @@ services:
         condition: service_healthy
     ports:
       - "127.0.0.1:64302:9100"
-    image: "dtagdevsec/head:1903"
+    image: "dtagdevsec/head:2006"
     read_only: true

@@ -1,5 +1,8 @@
 FROM alpine
 #
+# VARS
+ENV ES_VER=7.6.1 \
+    JAVA_HOME=/usr/lib/jvm/java-11-openjdk
 # Include dist
 ADD dist/ /root/dist/
 #
@@ -10,13 +13,13 @@ RUN sed -i 's/dl-cdn/dl-2/g' /etc/apk/repositories && \
     bash \
     curl \
     nss \
-    openjdk8-jre && \
+    openjdk11-jre && \
 #
 # Get and install packages
     cd /root/dist/ && \
     mkdir -p /usr/share/elasticsearch/ && \
-    aria2c -s 16 -x 16 https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-6.8.6.tar.gz && \
-    tar xvfz elasticsearch-6.8.6.tar.gz --strip-components=1 -C /usr/share/elasticsearch/ && \
+    aria2c -s 16 -x 16 https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-$ES_VER-linux-x86_64.tar.gz && \
+    tar xvfz elasticsearch-$ES_VER-linux-x86_64.tar.gz --strip-components=1 -C /usr/share/elasticsearch/ && \
 #
 # Add and move files
     cd /root/dist/ && \
@@ -40,5 +43,4 @@ HEALTHCHECK --retries=10 CMD curl -s -XGET 'http://127.0.0.1:9200/_cat/health'
 #
 # Start ELK
 USER elasticsearch:elasticsearch
-ENV JAVA_HOME=/usr/lib/jvm/java-1.8-openjdk
 CMD ["/usr/share/elasticsearch/bin/elasticsearch"]

@@ -1,11 +1,16 @@
 cluster.name: tpotcluster
 node.name: "tpotcluster-node-01"
 xpack.ml.enabled: false
+xpack.security.enabled: false
+xpack.ilm.enabled: false
 path:
   logs: /data/elk/log
   data: /data/elk/data
 http.host: 0.0.0.0
 http.cors.enabled: true
 http.cors.allow-origin: "*"
+indices.query.bool.max_clause_count: 2000
+cluster.initial_master_nodes:
+  - "tpotcluster-node-01"
 discovery.zen.ping.unicast.hosts:
   - localhost

@@ -24,6 +24,6 @@ services:
     mem_limit: 2g
     ports:
       - "127.0.0.1:64298:9200"
-    image: "dtagdevsec/elasticsearch:1903"
+    image: "dtagdevsec/elasticsearch:2006"
     volumes:
       - /data:/data

@@ -12,5 +12,5 @@ services:
 #     condition: service_healthy
     ports:
       - "127.0.0.1:64302:9100"
-    image: "dtagdevsec/head:1903"
+    image: "dtagdevsec/head:2006"
     read_only: true

@@ -1,5 +1,8 @@
-FROM node:10.15.2-alpine
+FROM node:10.19.0-alpine
 #
+# VARS
+ENV KB_VER=7.6.1
+#
 # Include dist
 ADD dist/ /root/dist/
 #
@@ -12,20 +15,20 @@ RUN sed -i 's/dl-cdn/dl-2/g' /etc/apk/repositories && \
 # Get and install packages
     cd /root/dist/ && \
     mkdir -p /usr/share/kibana/ && \
-    aria2c -s 16 -x 16 https://artifacts.elastic.co/downloads/kibana/kibana-6.8.6-linux-x86_64.tar.gz && \
-    tar xvfz kibana-6.8.6-linux-x86_64.tar.gz --strip-components=1 -C /usr/share/kibana/ && \
+    aria2c -s 16 -x 16 https://artifacts.elastic.co/downloads/kibana/kibana-$KB_VER-linux-x86_64.tar.gz && \
+    tar xvfz kibana-$KB_VER-linux-x86_64.tar.gz --strip-components=1 -C /usr/share/kibana/ && \
 #
 # Kibana's bundled node does not work in alpine
     rm /usr/share/kibana/node/bin/node && \
-    ln -s /usr/bin/node /usr/share/kibana/node/bin/node && \
+    ln -s /usr/local/bin/node /usr/share/kibana/node/bin/node && \
 #
 # Add and move files
     cd /root/dist/ && \
-    cp kibana.svg /usr/share/kibana/src/ui/public/images/kibana.svg && \
-    cp kibana.svg /usr/share/kibana/src/ui/public/icons/kibana.svg && \
-    cp elk.ico /usr/share/kibana/src/ui/public/assets/favicons/favicon.ico && \
-    cp elk.ico /usr/share/kibana/src/ui/public/assets/favicons/favicon-16x16.png && \
-    cp elk.ico /usr/share/kibana/src/ui/public/assets/favicons/favicon-32x32.png && \
+#    cp kibana.svg /usr/share/kibana/src/ui/public/images/kibana.svg && \
+#    cp kibana.svg /usr/share/kibana/src/ui/public/icons/kibana.svg && \
+#    cp elk.ico /usr/share/kibana/src/ui/public/assets/favicons/favicon.ico && \
+#    cp elk.ico /usr/share/kibana/src/ui/public/assets/favicons/favicon-16x16.png && \
+#    cp elk.ico /usr/share/kibana/src/ui/public/assets/favicons/favicon-32x32.png && \
 #
 # Setup user, groups and configs
     sed -i 's/#server.basePath: ""/server.basePath: "\/kibana"/' /usr/share/kibana/config/kibana.yml && \
@@ -33,17 +36,21 @@ RUN sed -i 's/dl-cdn/dl-2/g' /etc/apk/repositories && \
     sed -i 's/#server.host: "localhost"/server.host: "0.0.0.0"/' /usr/share/kibana/config/kibana.yml && \
     sed -i 's/#elasticsearch.hosts: \["http:\/\/localhost:9200"\]/elasticsearch.hosts: \["http:\/\/elasticsearch:9200"\]/' /usr/share/kibana/config/kibana.yml && \
     sed -i 's/#server.rewriteBasePath: false/server.rewriteBasePath: false/' /usr/share/kibana/config/kibana.yml && \
-    sed -i "s/#005571/#e20074/g" /usr/share/kibana/built_assets/css/plugins/kibana/index.css && \
-    sed -i "s/#007ba4/#9e0051/g" /usr/share/kibana/built_assets/css/plugins/kibana/index.css && \
-    sed -i "s/#00465d/#4f0028/g" /usr/share/kibana/built_assets/css/plugins/kibana/index.css && \
+#    sed -i "s/#005571/#e20074/g" /usr/share/kibana/built_assets/css/plugins/kibana/index.css && \
+#    sed -i "s/#007ba4/#9e0051/g" /usr/share/kibana/built_assets/css/plugins/kibana/index.css && \
+#    sed -i "s/#00465d/#4f0028/g" /usr/share/kibana/built_assets/css/plugins/kibana/index.css && \
     echo "xpack.infra.enabled: false" >> /usr/share/kibana/config/kibana.yml && \
     echo "xpack.logstash.enabled: false" >> /usr/share/kibana/config/kibana.yml && \
     echo "xpack.canvas.enabled: false" >> /usr/share/kibana/config/kibana.yml && \
     echo "xpack.spaces.enabled: false" >> /usr/share/kibana/config/kibana.yml && \
     echo "xpack.apm.enabled: false" >> /usr/share/kibana/config/kibana.yml && \
+    echo "xpack.security.enabled: false" >> /usr/share/kibana/config/kibana.yml && \
     echo "xpack.uptime.enabled: false" >> /usr/share/kibana/config/kibana.yml && \
+    echo "xpack.siem.enabled: false" >> /usr/share/kibana/config/kibana.yml && \
+    echo "elasticsearch.requestTimeout: 60000" >> /usr/share/kibana/config/kibana.yml && \
+    echo "elasticsearch.shardTimeout: 60000" >> /usr/share/kibana/config/kibana.yml && \
     rm -rf /usr/share/kibana/optimize/bundles/* && \
-    /usr/share/kibana/bin/kibana --optimize && \
+    /usr/share/kibana/bin/kibana --optimize --allow-root && \
     addgroup -g 2000 kibana && \
     adduser -S -H -s /bin/ash -u 2000 -D -g 2000 kibana && \
     chown -R kibana:kibana /usr/share/kibana/ && \

@@ -12,4 +12,4 @@ services:
 #     condition: service_healthy
     ports:
       - "127.0.0.1:64296:5601"
-    image: "dtagdevsec/kibana:1903"
+    image: "dtagdevsec/kibana:2006"

@@ -1,5 +1,7 @@
 FROM alpine
 #
+# VARS
+ENV LS_VER=7.6.1
 # Include dist
 ADD dist/ /root/dist/
 #
@@ -13,7 +15,7 @@ RUN sed -i 's/dl-cdn/dl-2/g' /etc/apk/repositories && \
     libc6-compat \
     libzmq \
     nss \
-    openjdk8-jre && \
+    openjdk11-jre && \
 #
 # Get and install packages
     mkdir -p /etc/listbot && \
@@ -23,8 +25,8 @@ RUN sed -i 's/dl-cdn/dl-2/g' /etc/apk/repositories && \
     bunzip2 *.bz2 && \
     cd /root/dist/ && \
     mkdir -p /usr/share/logstash/ && \
-    aria2c -s 16 -x 16 https://artifacts.elastic.co/downloads/logstash/logstash-6.8.6.tar.gz && \
-    tar xvfz logstash-6.8.6.tar.gz --strip-components=1 -C /usr/share/logstash/ && \
+    aria2c -s 16 -x 16 https://artifacts.elastic.co/downloads/logstash/logstash-$LS_VER.tar.gz && \
+    tar xvfz logstash-$LS_VER.tar.gz --strip-components=1 -C /usr/share/logstash/ && \
     /usr/share/logstash/bin/logstash-plugin install logstash-filter-translate && \
     /usr/share/logstash/bin/logstash-plugin install logstash-output-syslog && \
 #
@@ -34,7 +36,7 @@ RUN sed -i 's/dl-cdn/dl-2/g' /etc/apk/repositories && \
     chmod u+x /usr/bin/update.sh && \
     mkdir -p /etc/logstash/conf.d && \
     cp logstash.conf /etc/logstash/conf.d/ && \
-    cp elasticsearch-template-es6x.json /usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-9.4.0-java/lib/logstash/outputs/elasticsearch/ && \
+    cp elasticsearch-template-es7x.json /usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-10.3.1-java/lib/logstash/outputs/elasticsearch/ && \
 #
 # Setup user, groups and configs
     addgroup -g 2000 logstash && \

@@ -1,53 +0,0 @@
{
  "template" : "logstash-*",
  "version" : 50001,
  "settings" : {
    "index.refresh_interval" : "5s",
    "index.number_of_shards" : "1",
    "index.number_of_replicas" : "0",
    "mapping" : {
      "total_fields" : {
        "limit" : "2000"
      }
    }
  },
  "mappings" : {
    "_default_" : {
      "_all" : {"enabled" : true, "norms" : false},
      "dynamic_templates" : [ {
        "message_field" : {
          "path_match" : "message",
          "match_mapping_type" : "string",
          "mapping" : {
            "type" : "text",
            "norms" : false
          }
        }
      }, {
        "string_fields" : {
          "match" : "*",
          "match_mapping_type" : "string",
          "mapping" : {
            "type" : "text", "norms" : false,
            "fields" : {
              "keyword" : { "type": "keyword", "ignore_above": 256 }
            }
          }
        }
      } ],
      "properties" : {
        "@timestamp": { "type": "date", "include_in_all": false },
        "@version": { "type": "keyword", "include_in_all": false },
        "geoip" : {
          "dynamic": true,
          "properties" : {
            "ip": { "type": "ip" },
            "location" : { "type" : "geo_point" },
            "latitude" : { "type" : "half_float" },
            "longitude" : { "type" : "half_float" }
          }
        }
      }
    }
  }
}

@@ -1,48 +0,0 @@
{
  "template" : "logstash-*",
  "version" : 60001,
  "settings" : {
    "index.refresh_interval" : "5s",
    "index.number_of_shards" : "1",
    "index.number_of_replicas" : "0",
    "index.mapping.total_fields.limit": "2000"
  },
  "mappings" : {
    "_default_" : {
      "dynamic_templates" : [ {
        "message_field" : {
          "path_match" : "message",
          "match_mapping_type" : "string",
          "mapping" : {
            "type" : "text",
            "norms" : false
          }
        }
      }, {
        "string_fields" : {
          "match" : "*",
          "match_mapping_type" : "string",
          "mapping" : {
            "type" : "text", "norms" : false,
            "fields" : {
              "keyword" : { "type": "keyword", "ignore_above": 256 }
            }
          }
        }
      } ],
      "properties" : {
        "@timestamp": { "type": "date"},
        "@version": { "type": "keyword"},
        "geoip" : {
          "dynamic": true,
          "properties" : {
            "ip": { "type": "ip" },
            "location" : { "type" : "geo_point" },
            "latitude" : { "type" : "half_float" },
            "longitude" : { "type" : "half_float" }
          }
        }
      }
    }
  }
}

@@ -0,0 +1,49 @@
{
  "index_patterns" : "logstash-*",
  "version" : 60001,
  "settings" : {
    "index.refresh_interval" : "5s",
    "number_of_shards" : 1,
    "index.number_of_replicas" : "0",
    "index.mapping.total_fields.limit" : "2000",
    "index.query": {
      "default_field": "fields.*"
    }
  },
  "mappings" : {
    "dynamic_templates" : [ {
      "message_field" : {
        "path_match" : "message",
        "match_mapping_type" : "string",
        "mapping" : {
          "type" : "text",
          "norms" : false
        }
      }
    }, {
      "string_fields" : {
        "match" : "*",
        "match_mapping_type" : "string",
        "mapping" : {
          "type" : "text", "norms" : false,
          "fields" : {
            "keyword" : { "type": "keyword", "ignore_above": 256 }
          }
        }
      }
    } ],
    "properties" : {
      "@timestamp": { "type": "date"},
      "@version": { "type": "keyword"},
      "geoip" : {
        "dynamic": true,
        "properties" : {
          "ip": { "type": "ip" },
          "location" : { "type" : "geo_point" },
          "latitude" : { "type" : "half_float" },
          "longitude" : { "type" : "half_float" }
        }
      }
    }
  }
}

@@ -413,12 +413,12 @@ if "_grokparsefailure" in [tags] { drop {} }
   geoip {
     cache_size => 10000
     source => "src_ip"
-    database => "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-filter-geoip-5.0.3-java/vendor/GeoLite2-City.mmdb"
+    database => "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-filter-geoip-6.0.3-java/vendor/GeoLite2-City.mmdb"
   }
   geoip {
     cache_size => 10000
     source => "src_ip"
-    database => "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-filter-geoip-5.0.3-java/vendor/GeoLite2-ASN.mmdb"
+    database => "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-filter-geoip-6.0.3-java/vendor/GeoLite2-ASN.mmdb"
   }
   translate {
     refresh_interval => 86400

@@ -34,3 +34,11 @@ if [ "$myCHECK" == "0" ];
   else
     echo "Cannot reach Github, starting Logstash without latest translation maps."
 fi
+# Make sure logstash can put latest logstash template by deleting the old one first
+echo "Removing logstash template."
+curl -XDELETE http://elasticsearch:9200/_template/logstash
+echo
+echo "Checking if empty."
+curl -XGET http://elasticsearch:9200/_template/logstash
+echo

@@ -12,7 +12,7 @@ services:
 #     condition: service_healthy
     env_file:
       - /opt/tpot/etc/compose/elk_environment
-    image: "dtagdevsec/logstash:1903"
+    image: "dtagdevsec/logstash:2006"
     volumes:
       - /data:/data
       - /root/tpotce/docker/elk/logstash/dist/logstash.conf:/etc/logstash/conf.d/logstash.conf

@@ -1,10 +1,11 @@
-FROM alpine
+FROM alpine:latest
 #
 # Include dist
 ADD dist/ /root/dist/
 #
 # Install packages
-RUN apk -U --no-cache add \
+RUN sed -i 's/dl-cdn/dl-2/g' /etc/apk/repositories && \
+    apk -U --no-cache add \
     build-base \
     git \
     libffi-dev \
@@ -32,7 +33,7 @@ RUN apk -U --no-cache add \
 #
 # Supply configs
     mv /root/dist/ews.cfg /opt/ewsposter/ && \
-    mv /root/dist/*.pem /opt/ewsposter/ && \
+#    mv /root/dist/*.pem /opt/ewsposter/ && \
 #
 # Clean up
     apk del build-base \

@@ -1,70 +0,0 @@
-----BEGIN CERTIFICATE-----
MIIGBDCCA+ygAwIBAgIBATANBgkqhkiG9w0BAQsFADCBnTEYMBYGA1UEAwwPU0lT
U0RFTiBSb290IENBMQswCQYDVQQGEwJQTDERMA8GA1UEBwwIV2Fyc3phd2ExLjAs
BgNVBAoMJU5hdWtvd2EgaSBBa2FkZW1pY2thIFNpZWMgS29tcHV0ZXJvd2ExEDAO
BgNVBAsMB1NJU1NERU4xHzAdBgkqhkiG9w0BCQEWEGFkbWluQHNpc3NkZW4uZXUw
HhcNMTcwNDExMTMxNDE2WhcNMjcwNDA5MTMxNDE2WjCBjTEbMBkGA1UEAwwSU0lT
U0RFTiBTZXJ2aWNlIENBMQswCQYDVQQGEwJQTDEfMB0GCSqGSIb3DQEJARYQYWRt
aW5Ac2lzc2Rlbi5ldTEuMCwGA1UECgwlTmF1a293YSBpIEFrYWRlbWlja2EgU2ll
YyBLb21wdXRlcm93YTEQMA4GA1UECwwHU0lTU0RFTjCCAiIwDQYJKoZIhvcNAQEB
BQADggIPADCCAgoCggIBAPFLjU6cLQoGz1s73QMPiRxYISCMUh3CXFe52Uim9a60
nkBDLfjMFW87MNhFCcE2xmxwdPPTz4+f5+DsEV3eZf0y63NxWx+RFV+UpODuEW5n
tWPFUDxmgKx6iAR/tyeLVNqmgtCnWzSthE0cg71dlil6onWvkMc+Wn5Kv6aXoz4e
5YVVhNsymhhrR0BntospY8EvtPm70hHAzOty957/zixOQ/MM+4SHRsWXTlKqv0K2
udWpkUy1Ihs3bpea2KAvn9bBWejFwy7K4q3LyhSyqwpVCYjNi+s+9z4ipSMfvAlT
FvHrMrODv/Iz/TQOfypYSlpX2gBP9WKLgOQj3wulJnMDQlvG1XNgOAqKfEF52YGF
eUu21UraRgDAguIIhWxRwgXenmRo8ngWjfk9Q8734PzzXt8cwzbxJWiJLMew1SiW
I+Kg8uYNGNT4mdBeUMo92S17ZNMXVnkt1TYfxT0A0ZlTCrhXPiWITtsVZXAdqFtl
j5hASmEcRYNgXEUQHBn13O9IinEmks2PEcqbbbKbs2Je0DS/JvxBkqES51UdsaVQ
zITKw3deCk0pISG8WDWZ97LEeDCvAKA5l/ooKjDwfS5vWw11mTUCOdhCoF0m8Lao
TwE1fzzNbSaqMsT6JF/n0ACabfuvF2aqCmWsZC/Hpw8LQQS62zOouCLdcqizL9+z
AgMBAAGjXTBbMAwGA1UdEwQFMAMBAf8wCwYDVR0PBAQDAgHuMB0GA1UdDgQWBBQ4
nurxBppBA5PTNvFFU/vhDr/NFzAfBgNVHSMEGDAWgBSDpRyQSgaBD5XvyFOA8YHH
tbUAbzANBgkqhkiG9w0BAQsFAAOCAgEAIvA2gkYsIVH7FGuoIo9RIxgwy7G/SHNC
Xllz6hyTx10UwbttJ+o4gdNt8WPuGnkmywFgsjL1//bFw2+fUO5IRvWKSmXzwx9N
faRJAjQT4JNx2uOW0ctw4USngPrLjXr3UrIQQlJFtZnEyT9u5VJXX8zkhfNJudyJ
N88YVrPEf6Gh1Q0P+yCX0rDEb3PlP2jsYyXZtcYA5kDQ6Qq7jpLT/zrjJdaPTmzh
2NUe7jJOBfZxPCoeev7meafY2vVOgqRqMz1+DZRoOgwq+ysczzRaXmd5a2p9Tabc
L1w5FXKNJQ4apszA0cEScI+4mBIIQ7VFT3GO098GOcYsC2MelRkgONAIyamm66AP
tvLQAKoiK/xz3sEHN4zaZvN/YVHaSYZEXUP0QHdyL62P62a92aCNyrHpzKURhEDA
n8cs6icxKrS4xuVa517m53zun0brjrfeltfbO7z1A2TstFYu9BHKzRuhwV9cGRHP
EDcb7PkfA/08sDHsyfsWtzIysNo3hwCmQ6gtOW5xlrGplFfwSsXmPG4SR3ByW379
RA5h3zzrO0g7iCvbLclqHoqLTJTMS+6U43qXjnQ7DJ+mcbhRGcMHcZVKqO3QmLm+
mmkDNzNYfTgY52D5mXJqUK50750mQ8dwMSkD2TufSAPmAPUp90LdQ8u9CIv6gQ+x
A08hDHJ1cdY=
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
MIIGHDCCBASgAwIBAgIJAPZqsOOroxaHMA0GCSqGSIb3DQEBCwUAMIGdMRgwFgYD
VQQDDA9TSVNTREVOIFJvb3QgQ0ExCzAJBgNVBAYTAlBMMREwDwYDVQQHDAhXYXJz
emF3YTEuMCwGA1UECgwlTmF1a293YSBpIEFrYWRlbWlja2EgU2llYyBLb21wdXRl
cm93YTEQMA4GA1UECwwHU0lTU0RFTjEfMB0GCSqGSIb3DQEJARYQYWRtaW5Ac2lz
c2Rlbi5ldTAeFw0xNzA0MTExMzA3NTZaFw0yNzA0MDkxMzA3NTZaMIGdMRgwFgYD
VQQDDA9TSVNTREVOIFJvb3QgQ0ExCzAJBgNVBAYTAlBMMREwDwYDVQQHDAhXYXJz
emF3YTEuMCwGA1UECgwlTmF1a293YSBpIEFrYWRlbWlja2EgU2llYyBLb21wdXRl
cm93YTEQMA4GA1UECwwHU0lTU0RFTjEfMB0GCSqGSIb3DQEJARYQYWRtaW5Ac2lz
c2Rlbi5ldTCCAiIwDQYJKoZIhvcNAQEBBQADggIPADCCAgoCggIBANKT77EYYEhV
tJUnfnvQtGttfgqIzKIV2W6nPK9aDsKRTX5BVDHF6P5ZAF1u/52ATwdyTK7+LD66
Q/nCzyyA2kqTgdruX6VGucpD2DVVSVF6nZhV9PcISNaMXytoG2HHlqrim53E/rVa
rskColfs7oCxama6lPKZ/rqrJlVjA1Pl5ZtxR0IORjpOyZjSbSzKQwLp/JxHPMCU
2cVirS7aEu5UGj+Q7Ibg0AEyoAu5tnHBKun4hmIoo7LtKWNEe1TdboxOSboGJ5wd
UTEmNH+7izZ5FAogTUINjubkf2zZ65xEnN7DT/zFS30vYU1EclqCTp96EKPANogV
ZeBKntEN6M5azM6Q6+nFI56TV5DWHTIXm85zzeDj5JM7TQlIGTh8A5APHpr0YyUP
AiIUrixV2lqSDrjewey5qQcWV6WbjMS72OFKh/x7+UJICJhoUw+KwnPmWSq1WAlt
n7C+W0raSQzt7puI30LUkInKL6iEQebMoYg0eDRI5vsRIpbo+PzflIuk/Vea/D1Y
twgRc8ujoKI9GpPJyP4yO4nY7BkShLqKJ251lEJZnxq8LiFVi8aN6ZHt//OGEtVs
6L97cPzqFx7qx8vnyLBFk23lb8pilHK1G0nqxCCjakTruT/JgkLXnZcLu/IDSqd3
QLjJL0rmU9q6+RTH8A782pcBUNzeLKnlAgMBAAGjXTBbMAwGA1UdEwQFMAMBAf8w
CwYDVR0PBAQDAgHuMB0GA1UdDgQWBBSDpRyQSgaBD5XvyFOA8YHHtbUAbzAfBgNV
HSMEGDAWgBSDpRyQSgaBD5XvyFOA8YHHtbUAbzANBgkqhkiG9w0BAQsFAAOCAgEA
IA0U6znfPykr5PoQlXb/Wr4L5mY/ZtNAJsvJ8jwNMsj3ZlqLOJfnHHoG5LHkb2b/
xfM1Ee2ojmYBt4VDARqrHLLbup38Ivqt0aEco3Qx/WqbIR4IlvZBF+/qKF/wIUuc
CuBYNIy12PcLzafT+SJosj1BJ+XiUCj/RsVXIT5CxsdXIABWC+5b3T3/PrAtKk+C
sVjA/ck1KAHDd+3VUyRjLAAekYWA9C/hek3YwWQ3OvmyHos5gxifqMMDj6bx5qgv
AuIs4mYJlBlHE19GxRmo2TDwE0eZiUoUdavdRBbl9v7dex+AF2GegmnC1ouYc9kv
9moNBcuPFXuJMCOCU44aTpgEKRm3QTZTvVcUza251T+4kgT2wlFyzPqQ8hcpih4t
knlqHhNc9ibL3/qzWr093AgC9uNaNRqmqu1WAu3vs9g3DVb/RSMrUG/V0YS1GgPq
E+nVJ1AIJoee8YaxHztRfjPsmu1R3pp633lfcRPUKCkz52dZDFRPuQP36DuJzl2M
itTra0MtDUuRCsuJfVGe1op2wFprswLI0qy7O9N21D4Ab8g0ik+lhmpOf5DpYxmx
C2Xpe4d/5Xlg3wIYhEs5MnfeEy4lSMA4cxwJs11gVYHba62L7/5lqzpPmHdRYHu3
Vf0pM/6zniQpy58Pf9+9CNU15I3iWF5K3zmevFArd6s=
-----END CERTIFICATE-----

@@ -23,7 +23,7 @@ services:
       - EWS_HPFEEDS_FORMAT=json
     env_file:
       - /opt/tpot/etc/compose/elk_environment
-    image: "dtagdevsec/ewsposter:1903"
+    image: "dtagdevsec/ewsposter:2006"
     volumes:
       - /data:/data
       - /data/ews/conf/ews.ip:/opt/ewsposter/ews.ip

@@ -1,4 +1,4 @@
-FROM alpine
+FROM alpine:latest
 #
 # Include dist
 #ADD dist/ /root/dist/

@@ -12,6 +12,6 @@ services:
       - NET_ADMIN
       - SYS_NICE
       - NET_RAW
-    image: "dtagdevsec/fatt:1903"
+    image: "dtagdevsec/fatt:2006"
     volumes:
       - /data/fatt/log:/opt/fatt/log

@@ -1,10 +1,11 @@
-FROM alpine
+FROM alpine:latest
 #
 # Include dist
 ADD dist/ /root/dist/
 #
 # Setup apk
-RUN apk -U --no-cache add \
+RUN sed -i 's/dl-cdn/dl-2/g' /etc/apk/repositories && \
+    apk -U --no-cache add \
     build-base \
     git \
     go \

@@ -1,54 +0,0 @@
FROM alpine
#
# Include dist
ADD dist/ /root/dist/
#
# Setup apk
RUN apk -U --no-cache add \
    build-base \
    git \
    go \
    g++ \
    iptables-dev \
    libnetfilter_queue-dev \
    libcap \
    libpcap-dev && \
#
# Setup go, glutton
    export GOPATH=/opt/go/ && \
    go get -d github.com/mushorg/glutton && \
    cd /opt/go/src/github.com/satori/ && \
    rm -rf go.uuid && \
    git clone https://github.com/satori/go.uuid && \
    cd go.uuid && \
    git checkout v1.2.0 && \
    mv /root/dist/system.go /opt/go/src/github.com/mushorg/glutton/ && \
    cd /opt/go/src/github.com/mushorg/glutton/ && \
    make build && \
    cd / && \
    mkdir -p /opt/glutton && \
    mv /opt/go/src/github.com/mushorg/glutton/bin /opt/glutton/ && \
    mv /opt/go/src/github.com/mushorg/glutton/config /opt/glutton/ && \
    mv /opt/go/src/github.com/mushorg/glutton/rules /opt/glutton/ && \
    setcap cap_net_admin,cap_net_raw=+ep /opt/glutton/bin/server && \
    setcap cap_net_admin,cap_net_raw=+ep /sbin/xtables-multi && \
#
# Setup user, groups and configs
    addgroup -g 2000 glutton && \
    adduser -S -s /bin/ash -u 2000 -D -g 2000 glutton && \
    mkdir -p /var/log/glutton && \
    mv /root/dist/rules.yaml /opt/glutton/rules/ && \
#
# Clean up
    apk del --purge build-base \
        git \
        go \
        g++ && \
    rm -rf /var/cache/apk/* \
        /opt/go \
        /root/dist
#
# Start glutton
WORKDIR /opt/glutton
USER glutton:glutton
CMD exec bin/server -i $(/sbin/ip address | grep '^2: ' | awk '{ print $2 }' | tr -d [:punct:]) -l /var/log/glutton/glutton.log > /dev/null 2>&1

@@ -13,7 +13,7 @@ services:
     network_mode: "host"
     cap_add:
       - NET_ADMIN
-    image: "dtagdevsec/glutton:1903"
+    image: "dtagdevsec/glutton:2006"
     read_only: true
     volumes:
       - /data/glutton/log:/var/log/glutton

@@ -26,7 +26,7 @@ services:
     ports:
       - "64297:64297"
       - "127.0.0.1:64304:64304"
-    image: "dtagdevsec/nginx:1903"
+    image: "dtagdevsec/nginx:2006"
     read_only: true
     volumes:
       - /data/nginx/cert/:/etc/nginx/cert/:ro

@@ -1,4 +1,4 @@
-FROM alpine:3.10
+FROM alpine:latest
 #
 # Include dist
 ADD dist/ /root/dist/
@@ -23,7 +23,6 @@ RUN sed -i 's/dl-cdn/dl-2/g' /etc/apk/repositories && \
     cd /opt/ && \
     git clone --depth=1 https://github.com/johnnykv/heralding && \
     cd heralding && \
-    sed -i 's/asyncssh/asyncssh==1.18.0/' requirements.txt && \
     pip3 install --no-cache-dir -r requirements.txt && \
     pip3 install --no-cache-dir . && \
 #
@@ -32,7 +31,7 @@ RUN sed -i 's/dl-cdn/dl-2/g' /etc/apk/repositories && \
     adduser -S -H -s /bin/ash -u 2000 -D -g 2000 heralding && \
     mkdir -p /var/log/heralding/ /etc/heralding && \
     mv /root/dist/heralding.yml /etc/heralding/ && \
-    setcap cap_net_bind_service=+ep /usr/bin/python3.7 && \
+    setcap cap_net_bind_service=+ep /usr/bin/python3.8 && \
     chown -R heralding:heralding /var/log/heralding && \
 #
 # Clean up

@@ -1,54 +0,0 @@
FROM alpine
# Include dist
ADD dist/ /root/dist/
# Install packages
RUN apk -U --no-cache add \
    build-base \
    git \
    libcap \
    libffi-dev \
    openssl-dev \
    libzmq \
    postgresql-dev \
    python3 \
    python3-dev \
    py-virtualenv && \
    pip3 install --no-cache-dir --upgrade pip && \
# Setup heralding
    mkdir -p /opt && \
    cd /opt/ && \
    git clone --depth=1 https://github.com/johnnykv/heralding && \
    cd heralding && \
    pip3 install --no-cache-dir -r requirements.txt && \
    pip3 install --no-cache-dir . && \
# Setup user, groups and configs
    addgroup -g 2000 heralding && \
    adduser -S -H -s /bin/ash -u 2000 -D -g 2000 heralding && \
    mkdir -p /var/log/heralding/ /etc/heralding && \
    mv /root/dist/heralding.yml /etc/heralding/ && \
    setcap cap_net_bind_service=+ep /usr/bin/python3.6 && \
    chown -R heralding:heralding /var/log/heralding && \
# Clean up
    apk del --purge \
        build-base \
        git \
        libcap \
        libffi-dev \
        libressl-dev \
        postgresql-dev \
        python3-dev \
        py-virtualenv && \
    rm -rf /root/* \
        /var/cache/apk/* \
        /opt/heralding
# Start elasticpot
STOPSIGNAL SIGINT
WORKDIR /tmp/heralding/
USER heralding:heralding
CMD exec heralding -c /etc/heralding/heralding.yml -l /var/log/heralding/heralding.log

@@ -30,7 +30,7 @@ services:
       - "3389:3389"
       - "5432:5432"
       - "5900:5900"
-    image: "dtagdevsec/heralding:1903"
+    image: "dtagdevsec/heralding:2006"
     read_only: true
     volumes:
       - /data/heralding/log:/var/log/heralding

@@ -1,4 +1,4 @@
-FROM alpine
+FROM alpine:latest
 #
 # Include dist
 ADD dist/ /root/dist/
@@ -28,6 +28,7 @@ RUN sed -i 's/dl-cdn/dl-2/g' /etc/apk/repositories && \
     sed -i 's/bytes/size/g' /opt/honeypy/loggers/file/honeypy_file.py && \
     sed -i 's/date_time/timestamp/g' /opt/honeypy/loggers/file/honeypy_file.py && \
     sed -i 's/data,/data.decode("hex"),/g' /opt/honeypy/loggers/file/honeypy_file.py && \
+    sed -i 's/urllib3/urllib3 == 1.21.1/g' /opt/honeypy/requirements.txt && \
     virtualenv env && \
     cp /root/dist/services.cfg /opt/honeypy/etc && \
     cp /root/dist/honeypy.cfg /opt/honeypy/etc && \
@@ -37,7 +38,7 @@ RUN sed -i 's/dl-cdn/dl-2/g' /etc/apk/repositories && \
     addgroup -g 2000 honeypy && \
     adduser -S -H -s /bin/ash -u 2000 -D -g 2000 honeypy && \
     chown -R honeypy:honeypy /opt/honeypy && \
-    setcap cap_net_bind_service=+ep /opt/honeypy/env/bin/python2 && \
+    setcap cap_net_bind_service=+ep /opt/honeypy/env/bin/python && \
 #
 # Clean up
     apk del --purge build-base \


@ -20,7 +20,7 @@ services:
- "2324:2324"
- "4096:4096"
- "9200:9200"
-image: "dtagdevsec/honeypy:1903"
+image: "dtagdevsec/honeypy:2006"
read_only: true
volumes:
- /data/honeypy/log:/opt/honeypy/log


@ -0,0 +1,39 @@
FROM alpine:latest
#
# Include dist
ADD dist/ /root/dist/
#
# Install packages
RUN sed -i 's/dl-cdn/dl-2/g' /etc/apk/repositories && \
apk -U --no-cache add \
build-base \
git \
libcap \
python2 \
python2-dev \
py2-pip \
tcpdump && \
#
# Clone honeysap from git
git clone --depth=1 https://github.com/SecureAuthCorp/HoneySAP /opt/honeysap && \
cd /opt/honeysap && \
mkdir conf && \
cp /root/dist/* conf/ && \
python setup.py install && \
pip install -r requirements-optional.txt && \
#
# Setup user, groups and configs
addgroup -g 2000 honeysap && \
adduser -S -s /bin/ash -u 2000 -D -g 2000 honeysap && \
chown -R honeysap:honeysap /opt/honeysap && \
# setcap cap_net_bind_service=+ep /opt/honeypy/env/bin/python && \
#
# Clean up
apk del --purge git && \
rm -rf /root/* \
/var/cache/apk/*
#
# Set workdir and start honeysap
USER honeysap:honeysap
WORKDIR /opt/honeysap
CMD ["/opt/honeysap/bin/honeysap", "--config-file", "/opt/honeysap/conf/honeysap.yml"]


@ -0,0 +1,6 @@
# HoneySAP default external profile route table
# ============================================
#
# Allow any protocol to 10.0.0.100 port 3200
- allow,any,10.0.0.100,3200,
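The route-table entry above is a comma-separated tuple with a trailing empty field; a minimal sketch of parsing one entry (the field names are assumptions from context, not taken from HoneySAP's documentation):

```python
# Hedged sketch: each route-table line appears to be action,protocol,host,port
# followed by an empty trailing field; the names below are illustrative.
entry = "allow,any,10.0.0.100,3200,"
action, protocol, host, port, _extra = entry.split(",")
print(action, protocol, host, port)  # allow any 10.0.0.100 3200
```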

docker/honeysap/dist/honeysap.yml

@ -0,0 +1,103 @@
# HoneySAP default external profile configuration
# ==============================================
# Console logging configuration
# -----------------------------
# Level of console logging
verbose: 2
# Use colored output
colored_console: false
# Miscellaneous configuration
# ---------------------------
# Enable reloading after a change in one of the configuration files
reload: true
# Address to listen for all services
listener_address: 0.0.0.0
# SAP instance configuration
# --------------------------
# Release version
release: "720"
# Services configuration
# ----------------------
services:
-
# SAP Router configuration
# ------------------------
service: SAPRouterService
alias: ExternalSAPRouter
enabled: yes
listener_port: 3299
# Router version number
router_version: 40
# Router patch version
router_version_patch: 4
# Password for information requests. If present it will be required
info_password:
# Whether external administration is enabled on this SAP Router
external_admin: false
# Route table file
route_table: !include external_route_table.yml
# Hostname for the SAP Router
hostname: saprouter
-
# SAP Dispatcher configuration
# ----------------------------
service: SAPDispatcherService
alias: InternalDispatcherService
enabled: yes
virtual: yes
listener_port: 3200
listener_address: 10.0.0.100
# Name of the instance
instance: NSP
# Client number
client_no: "001"
# SID
sid: PRD
# Hostname
hostname: uscasf-sap01
# Feeds configuration
# -------------------
feeds:
-
feed: LogFeed
log_filename: log/honeysap-external.log
enabled: yes
-
feed: ConsoleFeed
enabled: yes
-
feed: HPFeed
channels:
- honeysap.events
feed_host: 10.250.250.20
feed_port: 20000
feed_ident: honeysap
feed_secret: password
enabled: no


@ -0,0 +1,20 @@
version: '2.3'
networks:
honeysap_local:
services:
# HoneySAP service
honeysap:
build: .
container_name: honeysap
restart: always
networks:
- honeysap_local
ports:
- "3299:3299"
- "8001:8001"
image: "dtagdevsec/honeysap:2006"
volumes:
- /data/honeysap/log:/opt/honeysap/log


@ -1,4 +1,4 @@
-FROM debian:stretch-slim
+FROM debian:buster-slim
ENV DEBIAN_FRONTEND noninteractive
#
# Include dist
@ -26,8 +26,8 @@ RUN apt-get update -y && \
wget && \
#
# Install honeytrap from source
-cd /root/ && \
-git clone https://github.com/armedpot/honeytrap && \
+git clone https://github.com/armedpot/honeytrap /root/honeytrap && \
+# git clone https://github.com/t3chn0m4g3/honeytrap /root/honeytrap && \
cd /root/honeytrap/ && \
autoreconf -vfi && \
./configure \


@ -12,7 +12,7 @@ services:
network_mode: "host"
cap_add:
- NET_ADMIN
-image: "dtagdevsec/honeytrap:1903"
+image: "dtagdevsec/honeytrap:2006"
read_only: true
volumes:
- /data/honeytrap/attacks:/opt/honeytrap/var/attacks


@ -1,4 +1,4 @@
-FROM alpine
+FROM alpine:latest
#
# Install packages
RUN apk -U --no-cache add \


@ -1,52 +0,0 @@
FROM alpine
#
# Install packages
RUN apk -U --no-cache add \
autoconf \
automake \
build-base \
git \
libcap \
libtool \
py-pip \
python \
python-dev && \
#
# Install libemu
git clone --depth=1 https://github.com/buffer/libemu /root/libemu/ && \
cd /root/libemu/ && \
autoreconf -vi && \
./configure && \
make && \
make install && \
#
# Install libemu python wrapper
pip install --no-cache-dir --upgrade pip && \
pip install --no-cache-dir \
hpfeeds \
pylibemu && \
#
# Install mailoney from git
git clone --depth=1 https://github.com/awhitehatter/mailoney /opt/mailoney && \
#
# Setup user, groups and configs
addgroup -g 2000 mailoney && \
adduser -S -H -s /bin/ash -u 2000 -D -g 2000 mailoney && \
chown -R mailoney:mailoney /opt/mailoney && \
setcap cap_net_bind_service=+ep /usr/bin/python2.7 && \
#
# Clean up
apk del --purge autoconf \
automake \
build-base \
git \
py-pip \
python-dev && \
rm -rf /root/* && \
rm -rf /var/cache/apk/*
#
# Set workdir and start mailoney
STOPSIGNAL SIGINT
USER mailoney:mailoney
WORKDIR /opt/mailoney/
CMD ["/usr/bin/python","mailoney.py","-i","0.0.0.0","-p","25","-s","mailrelay.local","-t","schizo_open_relay"]


@ -20,7 +20,7 @@ services:
- mailoney_local
ports:
- "25:25"
-image: "dtagdevsec/mailoney:1903"
+image: "dtagdevsec/mailoney:2006"
read_only: true
volumes:
- /data/mailoney/log:/opt/mailoney/logs


@ -1,4 +1,4 @@
-FROM alpine
+FROM alpine:latest
#
# Setup apk
RUN apk -U --no-cache add \


@ -14,7 +14,7 @@ services:
- medpot_local
ports:
- "2575:2575"
-image: "dtagdevsec/medpot:1903"
+image: "dtagdevsec/medpot:2006"
read_only: true
volumes:
- /data/medpot/log/:/var/log/medpot


@ -1,4 +1,4 @@
-FROM alpine
+FROM alpine:latest
#
# Add source
ADD . /opt/p0f


@ -8,7 +8,7 @@ services:
container_name: p0f
restart: always
network_mode: "host"
-image: "dtagdevsec/p0f:1903"
+image: "dtagdevsec/p0f:2006"
read_only: true
volumes:
- /data/p0f/log:/var/log/p0f


@ -1,4 +1,4 @@
-FROM alpine
+FROM alpine:latest
#
# Include dist
ADD dist/ /root/dist/


@ -22,7 +22,7 @@ services:
- rdpy_local
ports:
- "3389:3389"
-image: "dtagdevsec/rdpy:1903"
+image: "dtagdevsec/rdpy:2006"
read_only: true
volumes:
- /data/rdpy/log:/var/log/rdpy


@ -1,4 +1,4 @@
-FROM alpine:3.10
+FROM alpine:latest
#
# Get and install dependencies & packages
RUN sed -i 's/dl-cdn/dl-2/g' /etc/apk/repositories && \
@ -6,51 +6,61 @@ RUN sed -i 's/dl-cdn/dl-2/g' /etc/apk/repositories && \
build-base \
curl \
git \
+jpeg-dev \
libffi-dev \
libxml2 \
libxml2-dev \
libxslt \
libxslt-dev \
+musl \
+musl-dev \
+openjpeg-dev \
openssl \
openssl-dev \
-python \
-python-dev \
+python3 \
+python3-dev \
py-cffi \
py-pillow \
py-future \
-py-pip \
+py3-pip \
-swig && \
+swig \
+tinyxml \
+tinyxml-dev \
+zlib-dev && \
#
# Setup user
addgroup -g 2000 spiderfoot && \
adduser -S -s /bin/ash -u 2000 -D -g 2000 spiderfoot && \
#
# Install spiderfoot
+# git clone --depth=1 https://github.com/smicallef/spiderfoot -b v2.12.0-final /home/spiderfoot && \
git clone --depth=1 https://github.com/smicallef/spiderfoot /home/spiderfoot && \
cd /home/spiderfoot && \
-pip install --no-cache-dir openxmllib wheel && \
-pip install --no-cache-dir -r requirements.txt && \
+pip3 install --no-cache-dir wheel && \
+pip3 install --no-cache-dir -r requirements.txt && \
chown -R spiderfoot:spiderfoot /home/spiderfoot && \
sed -i "s#'__docroot': ''#'__docroot': '\/spiderfoot'#" /home/spiderfoot/sf.py && \
sed -i 's#raise cherrypy.HTTPRedirect("\/")#raise cherrypy.HTTPRedirect("\/spiderfoot")#' /home/spiderfoot/sfwebui.py && \
#
# Clean up
apk del --purge build-base \
+gcc \
git \
libffi-dev \
libxml2-dev \
libxslt-dev \
+musl-dev \
openssl-dev \
-python-dev \
-py-pip \
-py-setuptools && \
+python3-dev \
+py3-pip \
+swig \
+tinyxml-dev && \
rm -rf /var/cache/apk/*
#
# Healthcheck
-HEALTHCHECK --retries=10 CMD curl -s -XGET 'http://127.0.0.1:8080'
+#HEALTHCHECK --retries=10 CMD curl -s -XGET 'http://127.0.0.1:8080'
+HEALTHCHECK --retries=10 CMD curl -s -XGET 'http://127.0.0.1:8080/spiderfoot/'
#
# Set user, workdir and start spiderfoot
USER spiderfoot:spiderfoot
WORKDIR /home/spiderfoot
-CMD ["/usr/bin/python", "sf.py", "0.0.0.0:8080"]
+CMD ["/usr/bin/python3.8", "sf.py", "-l", "0.0.0.0:8080"]


@ -14,6 +14,6 @@ services:
- spiderfoot_local
ports:
- "127.0.0.1:64303:8080"
-image: "dtagdevsec/spiderfoot:1903"
+image: "dtagdevsec/spiderfoot:2006"
volumes:
- /data/spiderfoot/spiderfoot.db:/home/spiderfoot/spiderfoot.db


@ -1,90 +1,17 @@
-FROM alpine
+FROM alpine:latest
#
# Include dist
ADD dist/ /root/dist/
#
# Install packages
-#RUN sed -i 's/dl-cdn/dl-2/g' /etc/apk/repositories && \
-RUN apk -U add \
+RUN apk -U --no-cache add \
ca-certificates \
curl \
file \
-geoip \
-hiredis \
-jansson \
-libcap-ng \
-libmagic \
-libmaxminddb \
-libnet \
-libnetfilter_queue \
-libnfnetlink \
-libpcap \
-luajit \
-lz4-libs \
-musl \
-nspr \
-nss \
-pcre \
-yaml \
-wget \
-automake \
-autoconf \
-build-base \
-cargo \
-file-dev \
-geoip-dev \
-hiredis-dev \
-jansson-dev \
-libtool \
-libcap-ng-dev \
-luajit-dev \
-libmaxminddb-dev \
-libpcap-dev \
-libnet-dev \
-libnetfilter_queue-dev \
-libnfnetlink-dev \
-lz4-dev \
-nss-dev \
-nspr-dev \
-pcre-dev \
-python3 \
-rust \
-yaml-dev && \
-#
-# We need latest libhtp[-dev] which is only available in community
-apk -U add --repository=http://dl-cdn.alpinelinux.org/alpine/edge/community \
-libhtp \
-libhtp-dev && \
-#
-# Upgrade pip, install suricata-update to meet deps, however we will not be using it
-# to reduce image (no python needed) and use the update script.
-pip3 install --no-cache-dir --upgrade pip && \
-pip3 install --no-cache-dir suricata-update && \
-#
-# Get and build Suricata
-mkdir -p /opt/builder/ && \
-wget https://www.openinfosecfoundation.org/download/suricata-5.0.0.tar.gz && \
-tar xvfz suricata-5.0.0.tar.gz --strip-components=1 -C /opt/builder/ && \
-rm suricata-5.0.0.tar.gz && \
-cd /opt/builder && \
-./configure \
---prefix=/usr \
---sysconfdir=/etc \
---mandir=/usr/share/man \
---localstatedir=/var \
---enable-non-bundled-htp \
---enable-nfqueue \
---enable-rust \
---disable-gccmarch-native \
---enable-hiredis \
---enable-geoip \
---enable-gccprotect \
---enable-pie \
---enable-luajit && \
-make && \
-make check && \
-make install && \
-make install-full && \
+libcap \
+wget && \
+apk -U add --repository http://dl-cdn.alpinelinux.org/alpine/edge/community \
+suricata && \
#
# Setup user, groups and configs
addgroup -g 2000 suri && \
@ -92,8 +19,6 @@ RUN apk -U add \
chmod 644 /etc/suricata/*.config && \
cp /root/dist/suricata.yaml /etc/suricata/suricata.yaml && \
cp /root/dist/*.bpf /etc/suricata/ && \
-mkdir -p /etc/suricata/rules && \
-cp /opt/builder/rules/* /etc/suricata/rules/ && \
#
# Download the latest EmergingThreats ruleset, replace rulebase and enable all rules
cp /root/dist/update.sh /usr/bin/ && \
@ -101,32 +26,6 @@ RUN apk -U add \
update.sh OPEN && \
#
# Clean up
-apk del --purge \
-automake \
-autoconf \
-build-base \
-cargo \
-file-dev \
-geoip-dev \
-hiredis-dev \
-jansson-dev \
-libtool \
-libhtp-dev \
-libcap-ng-dev \
-luajit-dev \
-libpcap-dev \
-libmaxminddb-dev \
-libnet-dev \
-libnetfilter_queue-dev \
-libnfnetlink-dev \
-lz4-dev \
-nss-dev \
-nspr-dev \
-pcre-dev \
-python3 \
-rust \
-yaml-dev && \
-rm -rf /opt/builder && \
rm -rf /root/* && \
rm -rf /tmp/* && \
rm -rf /var/cache/apk/*


@ -0,0 +1,139 @@
FROM alpine
#
# VARS
ENV VER=5.0.2
#
# Include dist
ADD dist/ /root/dist/
#
# Install packages
#RUN sed -i 's/dl-cdn/dl-2/g' /etc/apk/repositories && \
RUN apk -U add \
ca-certificates \
curl \
file \
geoip \
hiredis \
jansson \
libcap-ng \
libmagic \
libmaxminddb \
libnet \
libnetfilter_queue \
libnfnetlink \
libpcap \
luajit \
lz4-libs \
musl \
nspr \
nss \
pcre \
yaml \
wget \
automake \
autoconf \
build-base \
cargo \
file-dev \
geoip-dev \
hiredis-dev \
jansson-dev \
libtool \
libcap-ng-dev \
luajit-dev \
libmaxminddb-dev \
libpcap-dev \
libnet-dev \
libnetfilter_queue-dev \
libnfnetlink-dev \
lz4-dev \
nss-dev \
nspr-dev \
pcre-dev \
python3 \
rust \
yaml-dev && \
#
# We need latest libhtp[-dev] which is only available in community
apk -U add --repository=http://dl-cdn.alpinelinux.org/alpine/edge/community \
libhtp \
libhtp-dev && \
#
# Upgrade pip, install suricata-update to meet deps, however we will not be using it
# to reduce image (no python needed) and use the update script.
pip3 install --no-cache-dir --upgrade pip && \
pip3 install --no-cache-dir suricata-update && \
#
# Get and build Suricata
mkdir -p /opt/builder/ && \
wget https://www.openinfosecfoundation.org/download/suricata-$VER.tar.gz && \
tar xvfz suricata-$VER.tar.gz --strip-components=1 -C /opt/builder/ && \
rm suricata-$VER.tar.gz && \
cd /opt/builder && \
./configure \
--prefix=/usr \
--sysconfdir=/etc \
--mandir=/usr/share/man \
--localstatedir=/var \
--enable-non-bundled-htp \
--enable-nfqueue \
--enable-rust \
--disable-gccmarch-native \
--enable-hiredis \
--enable-geoip \
--enable-gccprotect \
--enable-pie \
--enable-luajit && \
make && \
make check && \
make install && \
make install-full && \
#
# Setup user, groups and configs
addgroup -g 2000 suri && \
adduser -S -H -u 2000 -D -g 2000 suri && \
chmod 644 /etc/suricata/*.config && \
cp /root/dist/suricata.yaml /etc/suricata/suricata.yaml && \
cp /root/dist/*.bpf /etc/suricata/ && \
mkdir -p /etc/suricata/rules && \
cp /opt/builder/rules/* /etc/suricata/rules/ && \
#
# Download the latest EmergingThreats ruleset, replace rulebase and enable all rules
cp /root/dist/update.sh /usr/bin/ && \
chmod 755 /usr/bin/update.sh && \
update.sh OPEN && \
#
# Clean up
apk del --purge \
automake \
autoconf \
build-base \
cargo \
file-dev \
geoip-dev \
hiredis-dev \
jansson-dev \
libtool \
libhtp-dev \
libcap-ng-dev \
luajit-dev \
libpcap-dev \
libmaxminddb-dev \
libnet-dev \
libnetfilter_queue-dev \
libnfnetlink-dev \
lz4-dev \
nss-dev \
nspr-dev \
pcre-dev \
python3 \
rust \
yaml-dev && \
rm -rf /opt/builder && \
rm -rf /root/* && \
rm -rf /tmp/* && \
rm -rf /var/cache/apk/*
#
# Start suricata
STOPSIGNAL SIGINT
CMD SURICATA_CAPTURE_FILTER=$(update.sh $OINKCODE) && exec suricata -v -F $SURICATA_CAPTURE_FILTER -i $(/sbin/ip address | grep '^2: ' | awk '{ print $2 }' | tr -d '[:punct:]')
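The CMD above auto-detects the capture interface by parsing `ip address` output. As a sketch of what that shell pipeline does, the same parsing can be replicated against canned output (the sample text below is illustrative, not from a real host):

```python
import re

# Replicates: /sbin/ip address | grep '^2: ' | awk '{ print $2 }' | tr -d '[:punct:]'
sample = (
    "1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536\n"
    "2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500\n"
)
line = next(l for l in sample.splitlines() if l.startswith("2: "))
# awk prints the second field ("eth0:"); tr strips the trailing punctuation
iface = re.sub(r"[^\w\s]", "", line.split()[1])
print(iface)  # eth0
```

Note the pipeline assumes the capture interface is always the second entry (`^2: `) in the `ip address` listing.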


@ -1,4 +1,3 @@
not (host sicherheitstacho.eu or community.sicherheitstacho.eu) and
not (host deb.debian.org) and
-not (host index.docker.io or docker.io) and
+not (host index.docker.io or docker.io)
-not (host hpfeeds.sissden.eu)


@ -15,6 +15,6 @@ services:
- NET_ADMIN
- SYS_NICE
- NET_RAW
-image: "dtagdevsec/suricata:1903"
+image: "dtagdevsec/suricata:2006"
volumes:
- /data/suricata/log:/var/log/suricata


@ -14,7 +14,7 @@ services:
tty: true
networks:
- tanner_local
-image: "dtagdevsec/redis:1903"
+image: "dtagdevsec/redis:2006"
read_only: true
# PHP Sandbox service
@ -23,10 +23,12 @@ services:
container_name: tanner_phpox
restart: always
stop_signal: SIGKILL
+tmpfs:
+- /tmp:uid=2000,gid=2000
tty: true
networks:
- tanner_local
-image: "dtagdevsec/phpox:1903"
+image: "dtagdevsec/phpox:2006"
read_only: true
# Tanner API Service
@ -40,7 +42,7 @@ services:
tty: true
networks:
- tanner_local
-image: "dtagdevsec/tanner:1903"
+image: "dtagdevsec/tanner:2006"
read_only: true
volumes:
- /data/tanner/log:/var/log/tanner
@ -59,7 +61,9 @@ services:
tty: true
networks:
- tanner_local
-image: "dtagdevsec/tanner:1903"
+# ports:
+#   - "127.0.0.1:8091:8091"
+image: "dtagdevsec/tanner:2006"
command: tannerweb
read_only: true
volumes:
@ -78,7 +82,7 @@ services:
tty: true
networks:
- tanner_local
-image: "dtagdevsec/tanner:1903"
+image: "dtagdevsec/tanner:2006"
command: tanner
read_only: true
volumes:
@ -100,6 +104,6 @@ services:
- tanner_local
ports:
- "80:80"
-image: "dtagdevsec/snare:1903"
+image: "dtagdevsec/snare:2006"
depends_on:
- tanner


@ -1,8 +1,5 @@
FROM alpine:3.10
#
-# Include dist
-ADD dist/ /root/dist/
-#
# Install packages
RUN sed -i 's/dl-cdn/dl-2/g' /etc/apk/repositories && \
apk -U --no-cache add \
@ -32,7 +29,6 @@ RUN sed -i 's/dl-cdn/dl-2/g' /etc/apk/repositories && \
# Install PHP Sandbox
git clone --depth=1 https://github.com/mushorg/phpox /opt/phpox && \
cd /opt/phpox && \
-cp /root/dist/sandbox.py . && \
pip3 install -r requirements.txt && \
make && \
#


@ -1,125 +0,0 @@
#!/usr/bin/env python3
# Copyright (C) 2016 Lukas Rist
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
import os
import tempfile
import json
import asyncio
import hashlib
import argparse
from aiohttp import web
from asyncio.subprocess import PIPE
from pprint import pprint
class PHPSandbox(object):
@classmethod
def php_tag_check(cls, script):
with open(script, "r+") as check_file:
file_content = check_file.read()
if "<?" not in file_content:
file_content = "<?php" + file_content
if "?>" not in file_content:
file_content += "?>"
check_file.write(file_content)
return script
@asyncio.coroutine
def read_process(self):
while True:
line = yield from self.proc.stdout.readline()
if not line:
break
else:
self.stdout_value += line + b'\n'
@asyncio.coroutine
def sandbox(self, script, phpbin="php7.0"):
if not os.path.isfile(script):
raise Exception("Sample not found: {0}".format(script))
try:
cmd = [phpbin, "sandbox.php", script]
self.proc = yield from asyncio.create_subprocess_exec(*cmd, stdout=PIPE)
self.stdout_value = b''
yield from asyncio.wait_for(self.read_process(), timeout=3)
except Exception as e:
try:
self.proc.kill()
except Exception:
pass
print("Error executing the sandbox: {}".format(e))
# raise e
return {'stdout': self.stdout_value.decode('utf-8')}
class EchoServer(asyncio.Protocol):
def connection_made(self, transport):
# peername = transport.get_extra_info('peername')
# print('connection from {}'.format(peername))
self.transport = transport
def data_received(self, data):
# print('data received: {}'.format(data.decode()))
self.transport.write(data)
@asyncio.coroutine
def api(request):
data = yield from request.read()
file_md5 = hashlib.md5(data).hexdigest()
with tempfile.NamedTemporaryFile(suffix='.php') as f:
f.write(data)
f.seek(0)
sb = PHPSandbox()
try:
server = yield from loop.create_server(EchoServer, '127.0.0.1', 1234)
ret = yield from asyncio.wait_for(sb.sandbox(f.name, phpbin), timeout=10)
server.close()
except KeyboardInterrupt:
pass
ret['file_md5'] = file_md5
return web.Response(body=json.dumps(ret, sort_keys=True, indent=4).encode('utf-8'))
if __name__ == '__main__':
parser = argparse.ArgumentParser()
parser.add_argument("--phpbin", help="PHP binary, ex: php7.0", default="php7.0")
args = parser.parse_args()
phpbin = args.phpbin
app = web.Application()
app.router.add_route('POST', '/', api)
loop = asyncio.get_event_loop()
handler = app.make_handler()
f = loop.create_server(handler, '0.0.0.0', 8088)
srv = loop.run_until_complete(f)
print('serving on', srv.sockets[0].getsockname())
try:
loop.run_forever()
except KeyboardInterrupt:
pass
finally:
loop.run_until_complete(handler.finish_connections(1.0))
srv.close()
loop.run_until_complete(srv.wait_closed())
loop.run_until_complete(app.finish())
loop.close()


@ -13,7 +13,7 @@ RUN sed -i 's/dl-cdn/dl-2/g' /etc/apk/repositories && \
rm -rf /tmp/* /var/tmp/* && \
rm -rf /var/cache/apk/*
#
-# Start conpot
+# Start redis
STOPSIGNAL SIGKILL
USER nobody:nobody
CMD redis-server /etc/redis.conf


@ -18,8 +18,10 @@ RUN sed -i 's/dl-cdn/dl-2/g' /etc/apk/repositories && \
#
# Setup Tanner
git clone --depth=1 https://github.com/mushorg/tanner /opt/tanner && \
-cp /root/dist/config.py /opt/tanner/tanner/ && \
cd /opt/tanner/ && \
+# git fetch origin pull/364/head:test && \
+# git checkout test && \
+cp /root/dist/config.py /opt/tanner/tanner/ && \
pip3 install --no-cache-dir setuptools && \
pip3 install --no-cache-dir -r requirements.txt && \
python3 setup.py install && \
@ -56,7 +58,7 @@ RUN sed -i 's/dl-cdn/dl-2/g' /etc/apk/repositories && \
rm -rf /tmp/* /var/tmp/* && \
rm -rf /var/cache/apk/*
#
-# Start conpot
+# Start tanner
STOPSIGNAL SIGKILL
USER tanner:tanner
WORKDIR /opt/tanner


@ -13,10 +13,10 @@ config_template = {'DATA': {'db_config': '/opt/tanner/db/db_config.json',
'tornado': '/opt/tanner/data/tornado.py',
'mako': '/opt/tanner/data/mako.py'
},
-'TANNER': {'host': '0.0.0.0', 'port': 8090},
+'TANNER': {'host': 'tanner', 'port': 8090},
-'WEB': {'host': '0.0.0.0', 'port': 8091},
+'WEB': {'host': 'tanner_web', 'port': 8091},
-'API': {'host': '0.0.0.0', 'port': 8092},
+'API': {'host': 'tanner_api', 'port': 8092, 'auth': False, 'auth_signature': 'tanner_api_auth'},
-'PHPOX': {'host': '0.0.0.0', 'port': 8088},
+'PHPOX': {'host': 'tanner_phpox', 'port': 8088},
'REDIS': {'host': 'tanner_redis', 'port': 6379, 'poolsize': 80, 'timeout': 1},
'EMULATORS': {'root_dir': '/opt/tanner'},
'EMULATOR_ENABLED': {'sqli': True, 'rfi': True, 'lfi': False, 'xss': True, 'cmd_exec': False,
@ -25,6 +25,7 @@ config_template = {'DATA': {'db_config': '/opt/tanner/db/db_config.json',
'SQLI': {'type': 'SQLITE', 'db_name': 'tanner_db', 'host': 'localhost', 'user': 'root',
'password': 'user_pass'},
'XXE_INJECTION': {'OUT_OF_BAND': False},
+'RFI': {"allow_insecure": True},
'DOCKER': {'host_image': 'busybox:latest'},
'LOGGER': {'log_debug': '/tmp/tanner/tanner.log', 'log_err': '/tmp/tanner/tanner.err'},
'MONGO': {'enabled': False, 'URI': 'mongodb://localhost'},
@ -33,7 +34,8 @@ config_template = {'DATA': {'db_config': '/opt/tanner/db/db_config.json',
'LOCALLOG': {'enabled': True, 'PATH': '/var/log/tanner/tanner_report.json'},
'CLEANLOG': {'enabled': False},
'REMOTE_DOCKERFILE': {'GITHUB': "https://raw.githubusercontent.com/mushorg/tanner/master/docker/"
-"tanner/template_injection/Dockerfile"}
+"tanner/template_injection/Dockerfile"},
+'SESSIONS': {"delete_timeout": 300}
}
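The config above is a plain Python dict keyed by service, with hosts now pointing at compose service names instead of `0.0.0.0`. As a sketch of how a consumer resolves an endpoint from it (the `endpoint` helper below is illustrative, not part of Tanner's API):

```python
# Illustrative only: `endpoint` is not a Tanner function; it just shows how the
# host/port pairs in config_template resolve inside the compose network.
config_template = {
    'TANNER': {'host': 'tanner', 'port': 8090},
    'PHPOX': {'host': 'tanner_phpox', 'port': 8088},
    'REDIS': {'host': 'tanner_redis', 'port': 6379, 'poolsize': 80, 'timeout': 1},
}

def endpoint(section):
    cfg = config_template[section]
    return "{}:{}".format(cfg['host'], cfg['port'])

print(endpoint('TANNER'))  # tanner:8090
print(endpoint('REDIS'))   # tanner_redis:6379
```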


@ -34,7 +34,7 @@ services:
- adbhoney_local
ports:
- "5555:5555"
-image: "dtagdevsec/adbhoney:1903"
+image: "dtagdevsec/adbhoney:2006"
read_only: true
volumes:
- /data/adbhoney/log:/opt/adbhoney/log
@ -50,7 +50,7 @@ services:
ports:
- "5000:5000/udp"
- "8443:8443"
-image: "dtagdevsec/ciscoasa:1903"
+image: "dtagdevsec/ciscoasa:2006"
read_only: true
volumes:
- /data/ciscoasa/log:/var/log/ciscoasa
@ -63,7 +63,7 @@ services:
- citrixhoneypot_local
ports:
- "443:443"
-image: "dtagdevsec/citrixhoneypot:1903"
+image: "dtagdevsec/citrixhoneypot:2006"
read_only: true
volumes:
- /data/citrixhoneypot/logs:/opt/citrixhoneypot/logs
@ -85,7 +85,7 @@ services:
ports:
- "161:161"
- "2404:2404"
-image: "dtagdevsec/conpot:1903"
+image: "dtagdevsec/conpot:2006"
read_only: true
volumes:
- /data/conpot/log:/var/log/conpot
@ -106,7 +106,7 @@ services:
- conpot_local_guardian_ast
ports:
- "10001:10001"
-image: "dtagdevsec/conpot:1903"
+image: "dtagdevsec/conpot:2006"
read_only: true
volumes:
- /data/conpot/log:/var/log/conpot
@ -127,7 +127,7 @@ services:
- conpot_local_ipmi
ports:
- "623:623"
-image: "dtagdevsec/conpot:1903"
+image: "dtagdevsec/conpot:2006"
read_only: true
volumes:
- /data/conpot/log:/var/log/conpot
@ -149,7 +149,7 @@ services:
ports:
- "1025:1025"
- "50100:50100"
-image: "dtagdevsec/conpot:1903"
+image: "dtagdevsec/conpot:2006"
read_only: true
volumes:
- /data/conpot/log:/var/log/conpot
@ -166,7 +166,7 @@ services:
ports:
- "22:22"
- "23:23"
-image: "dtagdevsec/cowrie:1903"
+image: "dtagdevsec/cowrie:2006"
read_only: true
volumes:
- /data/cowrie/downloads:/home/cowrie/cowrie/dl
@ -198,7 +198,7 @@ services:
- "5060:5060/udp"
- "5061:5061"
- "27017:27017"
-image: "dtagdevsec/dionaea:1903"
+image: "dtagdevsec/dionaea:2006"
read_only: true
volumes:
- /data/dionaea/roots/ftp:/opt/dionaea/var/dionaea/roots/ftp
@ -220,7 +220,7 @@ services:
network_mode: "host"
cap_add:
- NET_ADMIN
-image: "dtagdevsec/glutton:1903"
+image: "dtagdevsec/glutton:2006"
read_only: true
volumes:
- /data/glutton/log:/var/log/glutton
@ -250,7 +250,7 @@ services:
- "1080:1080"
- "5432:5432"
- "5900:5900"
-image: "dtagdevsec/heralding:1903"
+image: "dtagdevsec/heralding:2006"
read_only: true
volumes:
- /data/heralding/log:/var/log/heralding
@ -269,7 +269,7 @@ services:
- "2324:2324"
- "4096:4096"
- "9200:9200"
-image: "dtagdevsec/honeypy:1903"
+image: "dtagdevsec/honeypy:2006"
read_only: true
volumes:
- /data/honeypy/log:/opt/honeypy/log
@ -301,7 +301,7 @@ services:
- medpot_local
ports:
- "2575:2575"
-image: "dtagdevsec/medpot:1903"
+image: "dtagdevsec/medpot:2006"
read_only: true
volumes:
- /data/medpot/log/:/var/log/medpot
@ -322,7 +322,7 @@ services:
- rdpy_local
ports:
- "3389:3389"
-image: "dtagdevsec/rdpy:1903"
+image: "dtagdevsec/rdpy:2006"
read_only: true
volumes:
- /data/rdpy/log:/var/log/rdpy
@ -335,7 +335,7 @@ services:
tty: true
networks:
- tanner_local
-image: "dtagdevsec/redis:1903"
+image: "dtagdevsec/redis:2006"
read_only: true
## PHP Sandbox service
@ -345,7 +345,7 @@ services:
tty: true
networks:
- tanner_local
-image: "dtagdevsec/phpox:1903"
+image: "dtagdevsec/phpox:2006"
read_only: true
## Tanner API Service
@ -357,7 +357,7 @@ services:
tty: true
networks:
- tanner_local
-image: "dtagdevsec/tanner:1903"
+image: "dtagdevsec/tanner:2006"
read_only: true
volumes:
- /data/tanner/log:/var/log/tanner
@ -366,21 +366,21 @@ services:
- tanner_redis
## Tanner WEB Service
-tanner_web:
+# tanner_web:
-container_name: tanner_web
+# container_name: tanner_web
-restart: always
+# restart: always
-tmpfs:
+# tmpfs:
-- /tmp/tanner:uid=2000,gid=2000
+# - /tmp/tanner:uid=2000,gid=2000
-tty: true
+# tty: true
-networks:
+# networks:
-- tanner_local
+# - tanner_local
-image: "dtagdevsec/tanner:1903"
+# image: "dtagdevsec/tanner:2006"
-command: tannerweb
+# command: tannerweb
-read_only: true
+# read_only: true
-volumes:
+# volumes:
-- /data/tanner/log:/var/log/tanner
+# - /data/tanner/log:/var/log/tanner
-depends_on:
+# depends_on:
-- tanner_redis
+# - tanner_redis
## Tanner Service
tanner: tanner:
@ -391,7 +391,7 @@ services:
tty: true tty: true
networks: networks:
- tanner_local - tanner_local
image: "dtagdevsec/tanner:1903" image: "dtagdevsec/tanner:2006"
command: tanner command: tanner
read_only: true read_only: true
volumes: volumes:
@ -399,7 +399,7 @@ services:
- /data/tanner/files:/opt/tanner/files - /data/tanner/files:/opt/tanner/files
depends_on: depends_on:
- tanner_api - tanner_api
- tanner_web # - tanner_web
- tanner_phpox - tanner_phpox
## Snare Service ## Snare Service
@ -411,7 +411,7 @@ services:
- tanner_local - tanner_local
ports: ports:
- "80:80" - "80:80"
image: "dtagdevsec/snare:1903" image: "dtagdevsec/snare:2006"
depends_on: depends_on:
- tanner - tanner
@ -429,7 +429,7 @@ services:
- NET_ADMIN - NET_ADMIN
- SYS_NICE - SYS_NICE
- NET_RAW - NET_RAW
image: "dtagdevsec/fatt:1903" image: "dtagdevsec/fatt:2006"
volumes: volumes:
- /data/fatt/log:/opt/fatt/log - /data/fatt/log:/opt/fatt/log
@ -438,7 +438,7 @@ services:
container_name: p0f container_name: p0f
restart: always restart: always
network_mode: "host" network_mode: "host"
image: "dtagdevsec/p0f:1903" image: "dtagdevsec/p0f:2006"
read_only: true read_only: true
volumes: volumes:
- /data/p0f/log:/var/log/p0f - /data/p0f/log:/var/log/p0f
@ -455,7 +455,7 @@ services:
- NET_ADMIN - NET_ADMIN
- SYS_NICE - SYS_NICE
- NET_RAW - NET_RAW
image: "dtagdevsec/suricata:1903" image: "dtagdevsec/suricata:2006"
volumes: volumes:
- /data/suricata/log:/var/log/suricata - /data/suricata/log:/var/log/suricata
@ -472,7 +472,7 @@ services:
- cyberchef_local - cyberchef_local
ports: ports:
- "127.0.0.1:64299:8000" - "127.0.0.1:64299:8000"
image: "dtagdevsec/cyberchef:1903" image: "dtagdevsec/cyberchef:2006"
read_only: true read_only: true
#### ELK #### ELK
@ -482,7 +482,7 @@ services:
restart: always restart: always
environment: environment:
- bootstrap.memory_lock=true - bootstrap.memory_lock=true
- ES_JAVA_OPTS=-Xms1024m -Xmx1024m - ES_JAVA_OPTS=-Xms2048m -Xmx2048m
- ES_TMPDIR=/tmp - ES_TMPDIR=/tmp
cap_add: cap_add:
- IPC_LOCK - IPC_LOCK
@ -553,7 +553,7 @@ services:
- EWS_HPFEEDS_FORMAT=json - EWS_HPFEEDS_FORMAT=json
env_file: env_file:
- /opt/tpot/etc/compose/elk_environment - /opt/tpot/etc/compose/elk_environment
image: "dtagdevsec/ewsposter:1903" image: "dtagdevsec/ewsposter:2006"
volumes: volumes:
- /data:/data - /data:/data
- /data/ews/conf/ews.ip:/opt/ewsposter/ews.ip - /data/ews/conf/ews.ip:/opt/ewsposter/ews.ip
@ -599,6 +599,6 @@ services:
- spiderfoot_local - spiderfoot_local
ports: ports:
- "127.0.0.1:64303:8080" - "127.0.0.1:64303:8080"
image: "dtagdevsec/spiderfoot:1903" image: "dtagdevsec/spiderfoot:2006"
volumes: volumes:
- /data/spiderfoot/spiderfoot.db:/home/spiderfoot/spiderfoot.db - /data/spiderfoot/spiderfoot.db:/home/spiderfoot/spiderfoot.db


@@ -16,11 +16,11 @@ actions:
       disable_action: False
     filters:
     - filtertype: pattern
-      kind: prefix
-      value: logstash-
+      kind: timestring
+      value: '%Y.%m.%d'
    - filtertype: age
       source: name
       direction: older
       timestring: '%Y.%m.%d'
       unit: days
-      unit_count: 90
+      unit_count: 60
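Put together, the updated curator action above selects indices by their date-stamped name rather than by the `logstash-` prefix, and shortens retention from 90 to 60 days. A sketch of the resulting action file, assuming the standard Curator 5.x `delete_indices` schema (the action number, description, and `ignore_empty_list` option are illustrative, not taken from this diff):

```yaml
actions:
  1:
    action: delete_indices
    description: >-
      Delete indices whose date-stamped names are older than 60 days
      (illustrative wording).
    options:
      ignore_empty_list: True    # assumed option, not shown in the diff
      disable_action: False
    filters:
    - filtertype: pattern
      kind: timestring
      value: '%Y.%m.%d'
    - filtertype: age
      source: name
      direction: older
      timestring: '%Y.%m.%d'
      unit: days
      unit_count: 60
```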


@@ -11,22 +11,21 @@ myPROGRESSBOXCONF=" --backtitle "$myBACKTITLE" --progressbox 24 80"
 mySITES="https://hub.docker.com https://github.com https://pypi.python.org https://debian.org"
 myTPOTCOMPOSE="/opt/tpot/etc/tpot.yml"
 myLSB_STABLE_SUPPORTED="stretch buster"
-myLSB_TESTING_SUPPORTED="sid"
+myLSB_TESTING_SUPPORTED="stable"
 myREMOTESITES="https://hub.docker.com https://github.com https://pypi.python.org https://debian.org"
-myPREINSTALLPACKAGES="aria2 apache2-utils curl cracklib-runtime dialog figlet fuse grc libcrack2 libpq-dev lsb-release netselect-apt net-tools software-properties-common toilet"
-myINSTALLPACKAGES="aria2 apache2-utils apparmor apt-transport-https aufs-tools bash-completion build-essential ca-certificates cgroupfs-mount cockpit cockpit-docker console-setup console-setup-linux curl debconf-utils dialog dnsutils docker.io docker-compose ethtool fail2ban figlet genisoimage git glances grc haveged html2text htop iptables iw jq kbd libcrack2 libltdl7 man mosh multitail netselect-apt net-tools npm ntp openssh-server openssl pass pigz prips software-properties-common syslinux psmisc pv python3-pip toilet unattended-upgrades unzip vim wget wireless-tools wpasupplicant"
+myPREINSTALLPACKAGES="aria2 apache2-utils cracklib-runtime curl dialog figlet fuse grc libcrack2 libpq-dev lsb-release netselect-apt net-tools software-properties-common toilet"
+myINSTALLPACKAGES="aria2 apache2-utils apparmor apt-transport-https aufs-tools bash-completion build-essential ca-certificates cgroupfs-mount cockpit console-setup console-setup-linux cracklib-runtime curl debconf-utils dialog dnsutils docker.io docker-compose elasticsearch-curator ethtool fail2ban figlet genisoimage git glances grc haveged html2text htop iptables iw jq kbd libcrack2 libltdl7 libpam-google-authenticator man mosh multitail netselect-apt net-tools npm ntp openssh-server openssl pass pigz prips software-properties-common syslinux psmisc pv python3-pip toilet unattended-upgrades unzip vim wget wireless-tools wpasupplicant"
 myINFO="\
-########################################
-### T-Pot Installer for Debian (Sid) ###
-########################################
+###########################################
+### T-Pot Installer for Debian (Stable) ###
+###########################################
 Disclaimer:
 This script will install T-Pot on this system.
 By running the script you know what you are doing:
 1. SSH will be reconfigured to tcp/64295.
-2. Your Debian installation will be upgraded to Sid / unstable.
-3. Please ensure other means of access to this system in case something goes wrong.
-4. At best this script will be executed on the console instead through a SSH session.
+2. Please ensure other means of access to this system in case something goes wrong.
+3. At best this script will be executed on the console instead through a SSH session.
 ########################################
@@ -279,21 +278,21 @@ function fuCHECKNET {
 # Install T-Pot dependencies
 function fuGET_DEPS {
 export DEBIAN_FRONTEND=noninteractive
-# # Determine fastest mirror
-# echo
-# echo "### Determine fastest mirror for your location."
-# echo
-# netselect-apt -n -a amd64 unstable && cp sources.list /etc/apt/
-# mySOURCESCHECK=$(cat /etc/apt/sources.list | grep -c unstable)
-# if [ "$mySOURCESCHECK" == "0" ]
-# then
-# echo "### Automatic mirror selection failed, using main mirror."
-# Point to Debian (Sid, unstable)
+# Determine fastest mirror
+echo
+echo "### Determine fastest mirror for your location."
+echo
+netselect-apt -n -a amd64 stable && cp sources.list /etc/apt/
+mySOURCESCHECK=$(cat /etc/apt/sources.list | grep -c stable)
+if [ "$mySOURCESCHECK" == "0" ]
+then
+echo "### Automatic mirror selection failed, using main mirror."
+# Point to Debian (stable)
 tee /etc/apt/sources.list <<EOF
-deb http://deb.debian.org/debian unstable main contrib non-free
-deb-src http://deb.debian.org/debian unstable main contrib non-free
+deb http://deb.debian.org/debian stable main contrib non-free
+deb-src http://deb.debian.org/debian stable main contrib non-free
 EOF
-# fi
+fi
 echo
 echo "### Getting update information."
 echo
@@ -403,7 +402,7 @@ for i in "$@"
 echo " A configuration example is available in \"tpotce/iso/installer/tpot.conf.dist\"."
 echo
 echo "--type=<[user, auto, iso]>"
-echo " user, use this if you want to manually install a T-Pot on a Debian (testing) machine."
+echo " user, use this if you want to manually install a T-Pot on a Debian (Stable) machine."
 echo " auto, implied if a configuration file is passed as an argument for automatic deployment."
 echo " iso, use this if you are a T-Pot developer and want to install a T-Pot from a pre-compiled iso."
 echo
@@ -684,8 +683,8 @@ echo "UseRoaming no" | tee -a /etc/ssh/ssh_config
 # Installing elasticdump, yq
 fuBANNER "Installing pkgs"
-npm install https://github.com/taskrabbit/elasticsearch-dump -g
-pip3 install elasticsearch-curator yq
+npm install elasticdump -g
+pip3 install yq
 hash -r
 # Cloning T-Pot from GitHub
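The re-enabled mirror-selection fallback in fuGET_DEPS hinges on `grep -c stable`: any nonzero match count means `netselect-apt` produced a usable sources.list, and the hand-written main-mirror fallback is skipped. A standalone sketch of that check (using a scratch file under `/tmp` instead of `/etc/apt/sources.list`, so it can run without root):

```shell
# Write a sources.list as the fallback in the installer would
# (scratch path for illustration; the real script targets /etc/apt/).
cat > /tmp/sources.list <<EOF
deb http://deb.debian.org/debian stable main contrib non-free
deb-src http://deb.debian.org/debian stable main contrib non-free
EOF

# Count lines mentioning "stable"; 0 would trigger the fallback branch.
mySOURCESCHECK=$(grep -c stable /tmp/sources.list)
if [ "$mySOURCESCHECK" == "0" ]
then
  echo "### Automatic mirror selection failed, using main mirror."
else
  echo "### Mirror list looks usable."
fi
```

Note the check is purely lexical: it counts occurrences of the string "stable", so a commented-out mirror line would also satisfy it.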

Some files were not shown because too many files have changed in this diff.