139 Commits

Author SHA1 Message Date
198305fc57 listbot integration 2020-05-12 10:41:32 +00:00
c50d147926 Update capture-filter.bpf 2020-04-22 17:58:36 +02:00
0405a20402 Update update.sh 2020-04-22 17:47:57 +02:00
775ea76b98 Update Dockerfile 2020-04-22 17:46:55 +02:00
b98d9d352e Load listbot data from OTC 2020-04-22 17:11:54 +02:00
c277f313dc Update Dockerfile 2020-04-02 17:21:00 +02:00
b12cf0e418 offload listbot data from netlify CDN 2020-04-02 17:15:35 +02:00
b7b6e9fa0e Merge pull request #553 from skoops/skoops-patch-1
Update install.sh
2020-02-24 13:31:26 +01:00
d889651d63 Update install.sh
fix password check by providing cracklib-check for later usage
2020-02-24 13:22:00 +01:00
f2abb1d1bd release mailoney, elk 7.x into NextGen 19.03.x 2020-02-03 17:46:11 +01:00
b31225b97c Merge pull request #524 from pisces-period/pisces-period-cowrie-patch
make Dockerfile compatible with any Python version
2020-02-03 17:17:25 +01:00
64907a2eba random loop timer ewsposter 2020-01-30 11:07:28 +00:00
fa0fdbb579 prepare for ELK migration to 7.x 2020-01-29 14:21:40 +00:00
1e47497c30 fixes for update.sh 2020-01-28 17:52:44 +00:00
a3e0c51493 switch to new nginx, heimdall, landing page in nextgen 2020-01-28 16:11:05 +00:00
33222a92b6 finish heimdall integration 2020-01-27 17:03:44 +00:00
1167231560 fix error log path 2020-01-27 08:51:34 +00:00
62b519999e tweaking 2020-01-24 15:38:00 +00:00
8b19228d99 tweaking heimdall, read only for now 2020-01-24 15:16:25 +00:00
2d16a9c9f6 tweaking new landing page 2020-01-24 14:14:09 +00:00
95a075e764 start working on new landing page 2020-01-24 02:21:33 +00:00
dc75b5567a make Dockerfile compatible with any Python version
adding a temporary variable to store the current (updated) version of Python, thus fixing the situation where the version is != 3.7 (e.g. Alpine python package at version 3.8.1-r1), causing lines 39-41 to break in the original code (install path is hard-coded at 3.7).
2020-01-23 17:42:48 +01:00
d643ca7a01 logrotate all mailoney log files 2020-01-22 12:23:21 +00:00
f110eb08b0 prepare for mailoney json logging 2020-01-22 12:17:30 +00:00
a470a7b12f Update CHANGELOG.md 2020-01-16 22:10:03 +01:00
c7eed86bd7 update changelog 2020-01-16 20:05:45 +00:00
20d6c6ab7f include citrixhoneypot dashboards
for fresh installs of NextGen
2020-01-16 19:56:05 +00:00
b033d515c6 dashboard files with citrixhoneypot support
for manual kibana import
2020-01-16 20:49:32 +01:00
1d0aad3b34 tweak logstash.conf for citrixhoneypot 2020-01-16 18:04:29 +00:00
a6ed6613a5 prepare citrixhoneypot for ELK integration 2020-01-16 15:13:58 +00:00
a953542f8f rebase citrixhoneypot 2020-01-16 10:29:58 +00:00
be3e998a92 prepare citrixhoneypot for JSON logging 2020-01-15 13:59:11 +00:00
1bc514a067 Update update.sh 2020-01-15 14:19:38 +01:00
9ad83fae51 Update CHANGELOG.md 2020-01-15 13:41:45 +01:00
e803d188c9 prepare for citrixhoneypot 2020-01-15 12:33:41 +00:00
8a844e6dd3 prepare for CitrixHoneypot 2020-01-15 12:14:23 +00:00
0ef2b083fc Merge branch 'master' of https://github.com/dtag-dev-sec/tpotce 2020-01-15 10:39:48 +00:00
755cbb77db prepare for citrixhoneypot 2020-01-15 10:37:48 +00:00
3498f3e635 fix typo 2020-01-13 22:44:14 +01:00
2ed0f939d1 rebuild, tweak spiderfoot 2020-01-03 17:04:18 +00:00
af3ef271d4 rebuild cyberchef 2020-01-03 16:25:33 +00:00
3713139fc6 rebuild snare, tanner 2020-01-03 14:06:29 +00:00
0928e37326 rebuild Dionaea, Heralding 2020-01-02 17:37:08 +00:00
f7a6a30c90 update.sh should be executed as root only
Fixes #508
2020-01-02 10:16:55 +01:00
ec46dc9ab0 Fix typo, Fixes #504 2020-01-02 09:40:55 +01:00
7c5fc000c0 rebuild fatt 2019-12-27 20:52:23 +00:00
64628c1293 rebuild rdpy 2019-12-27 20:09:15 +00:00
29d223865f tweaking, rebuild honeypy 2019-12-27 19:58:22 +00:00
0ed60329b8 tweak installer
fixes #389
2019-12-27 19:45:38 +00:00
1442a257e5 conpot tweaking 2019-12-27 18:34:13 +00:00
a1d903db01 bump conpot to latest master 2019-12-27 16:21:12 +00:00
756215519c add sAN to selfsigned cert
fixes #478
2019-12-27 14:53:07 +00:00
659831cf99 Update CHANGELOG.md 2019-12-24 12:14:44 +01:00
a370e2b414 introduce pigz to logrotate
pigz will now handle compression of t-pot logfiles
logrotate will only rotate archives instead of packing them again
should improve #501 #494 #489 #482 and others with regard to a volume of logs
2019-12-24 10:55:39 +00:00
f4a078c443 introduce pigz for clean.sh
See #501 and thanks to @workandresearchgithub
2019-12-24 10:31:54 +00:00
02bdc8194a bump adbhoney to latest master with py3 support 2019-11-21 13:56:38 +00:00
878538e3df Update README.md
fixes #485
2019-11-20 10:23:03 +01:00
ca01bfd82f Merge pull request #484 from shaderecker/debian10
Switch to Debian 10 image for Open Telekom Cloud
2019-11-13 19:55:11 +01:00
71dc3227c4 Update README.md 2019-11-13 17:17:14 +01:00
fd39b3a94d Switch to Debian 10 image for Open Telekom Cloud 2019-11-13 14:50:56 +01:00
3b43c55c04 Merge pull request #480 from shaderecker/ansible-updates
Ansible updates
2019-11-04 09:20:18 +01:00
d15005195d Increase ServerAliveInterval 2019-11-03 22:15:52 +00:00
c5ddfd0a72 Add SSH ServerAliveInterval
Fixes occasional hangup of long running tasks
2019-11-03 19:58:32 +00:00
e9520eefb5 Final touches for #477 2019-10-28 17:01:44 +01:00
72709bc186 Test #477 2019-10-28 16:40:46 +01:00
59757f87f0 test for #477 2019-10-28 15:39:10 +01:00
60ef4eeeea Test for #477 2019-10-28 15:37:10 +01:00
68a10a2f1f Fire and forget: Move reboot task to background
Execute the reboot command asynchronously, so Ansible doesn't report an error.
2019-10-28 11:59:39 +00:00
170439d977 Tweak hpfeeds setup
- Fix owner and file permissions for proper comparison
- Only execute the hpfeeds script when the config file has changed
2019-10-28 11:49:57 +00:00
9c7c6ac4a3 Update README.md 2019-10-28 10:23:03 +00:00
6224146cde Update README:md: Agent Forwarding 2019-10-28 10:22:51 +00:00
8314a7d34a Fix wrong order of variables
- Align with all example configs
- This is important for Ansible to check wether the file has changed
2019-10-28 10:22:20 +00:00
145856960c Use copy module 2019-10-28 10:22:03 +00:00
71523cf7ef I love double quotes 2019-10-28 10:21:49 +00:00
cbb2b66a72 Hide secrets from log output 2019-10-28 10:21:40 +00:00
2076cea40f Shorten task name 2019-10-28 10:21:30 +00:00
34f335c7e6 Don't print user password in taskname 2019-10-28 10:21:13 +00:00
602ebfc952 Remove waiting delay 2019-10-28 10:19:50 +00:00
78f9a83b04 Remove unneeded become declarations 2019-10-28 10:19:19 +00:00
4c9ff2c006 Simplify and consolidate tasks 2019-10-28 10:15:32 +00:00
7d56264a8d removing cockpit, pcp for now since these overflow swap for some reason 2019-10-26 10:40:09 +00:00
78135df9e7 Bump Suricata to 5.0.0 2019-10-22 15:20:23 +00:00
3d85ca94f1 bump cowrie to v2.0.0 2019-10-21 20:59:36 +00:00
4d7ee46cd5 update changelog 2019-10-16 15:01:04 +00:00
6921857573 bump heralding to latest master 2019-10-16 14:46:58 +00:00
5ee19e3e30 move installer to pip3 2019-10-16 11:02:59 +00:00
4fa66a2747 move to pip3 2019-10-16 10:50:13 +00:00
a1e81b57c9 Update CHANGELOG.md 2019-10-16 12:32:47 +02:00
1813b78ff0 update changelog 2019-10-16 10:30:27 +00:00
6cff8e390d tweaking cockpit, pcp 2019-10-16 10:01:41 +00:00
5079b57f94 add option to unlock ES for r/w 2019-10-15 15:41:21 +00:00
42c19e4d81 bump glutton, tune down noisy log 2019-10-15 14:50:39 +00:00
b9fb3d4695 tune down noisy log 2019-10-15 07:49:30 +00:00
544def9481 Merge pull request #461 from piffey/455
Fix AWS Terraform Deploy by switching to Debian Buster pre-release AMIs.
2019-10-04 17:15:42 +02:00
dca06918c0 Merge pull request #454 from Oogy/shell-enhancement
small change to handle non-interactive shells
2019-10-04 17:12:33 +02:00
9137440d3c Fix AWS Terraform Deploy by switching to Debian Buster pre-release AMIs. 2019-10-02 12:34:47 -07:00
d75a612416 testing change in user login 2019-09-24 10:00:31 -04:00
487ce4bed5 bump ewsposter to latest master 2019-09-21 12:09:17 +00:00
ba8564b348 small change to handle non-interactive shells 2019-09-19 15:32:15 -04:00
e914643882 Some wallpaper tweaking 2019-09-07 19:52:43 +02:00
1c8d3451ef Some logo tweaking 2019-09-07 19:50:09 +02:00
e7fe917738 Add T-Pot QR Code 2019-09-07 19:44:18 +02:00
0ed394db6a Delete t-pot_qr.png 2019-09-07 19:43:53 +02:00
99cc91d671 Add T-Pot QR Code 2019-09-07 19:42:30 +02:00
357f40d573 Update CHANGELOG.md 2019-08-29 10:17:13 +02:00
24ac6d203f bump medpot to latest master 2019-08-28 14:52:25 +00:00
08ff1377fd prep mailoney rebuild 2019-08-28 14:41:35 +00:00
42c57636b9 prep honeytrap rebuild 2019-08-28 14:34:20 +00:00
c86d6f15af prep rebuild for elasticpot 2019-08-28 14:12:52 +00:00
670dddfea0 bump nginx to 1.16.1 2019-08-28 14:09:16 +00:00
2132f80988 prep rebuild for ciscoasa 2019-08-28 13:59:41 +00:00
cae95ebe20 bump adbhoney to latest master 2019-08-28 12:46:19 +00:00
221f75be33 bump elk stack to 6.8.2 2019-08-28 13:53:43 +02:00
66bb9443f9 bump elk stack to 6.8.2 2019-08-28 11:49:03 +00:00
29c6be5571 wallpaper res 1920 1080 2019-08-27 20:02:45 +02:00
16868a7532 just some swag ... t-pot 4k wallpaper 2019-08-24 20:49:31 +02:00
4620666d4e add logo 2019-08-24 20:31:17 +02:00
9a5dd587b3 Add files via upload 2019-08-24 20:29:25 +02:00
cca1d0f727 Workaround for #442 2019-08-23 19:12:31 +02:00
bc6e94d329 spiderfoot, head bump to latest master 2019-08-16 17:29:41 +00:00
78d9d1f7c7 bump cyberchef to latest master 2019-08-16 17:14:58 +00:00
f1275e5b07 fix 2019-08-16 16:55:36 +00:00
4164b75bea Fixed
DockerHub already uses 3.7
2019-08-16 17:59:05 +02:00
c2afdc0f1f Fix for DockerHub
Works just fine on local build.
2019-08-16 17:46:17 +02:00
e0427cfc21 bump tanner to latest master 2019-08-16 14:43:10 +00:00
786ab5c082 adjust dionaea, fixes #435 2019-08-16 12:18:28 +00:00
a59fc19133 bump elastic stack to 6.7.2 2019-08-15 17:40:01 +02:00
bf39c0f5b2 bump elastic stack to 6.7.2 2019-08-15 15:38:12 +00:00
364831ae58 fix cd 2019-08-15 08:32:04 +00:00
31d7707d19 download instead of git pull
download translation maps rather than running a git pull
translation maps will now be bzip2 compressed to reduce traffic to a minimum
fixes #432
2019-08-14 14:43:47 +00:00
a053be50f3 Merge pull request #436 from TheHADILP/native-os
Create Security Group / network / subnet / router with Ansible
2019-08-13 15:11:38 +02:00
ade81e2dc2 Update documentation 2019-08-13 12:59:05 +00:00
3f15373e7b Create Network/Subnet/Router with Ansible 2019-08-13 12:00:19 +00:00
3186b88641 Update readme: remove security group from example 2019-08-13 10:42:08 +00:00
fc4c4e8675 Update readme 2019-08-13 10:40:24 +00:00
f80e693d8b Add rules to security group and adapt server creation 2019-08-13 10:31:46 +00:00
bf9a14081d Create Security Group with Ansible 2019-08-13 09:16:02 +00:00
a906633cfd Merge pull request #433 from TheHADILP/ansible-updates
Update Ansible README: System updates
2019-08-13 10:43:53 +02:00
7fcf406781 Update README: System updates 2019-08-08 05:48:40 +00:00
104 changed files with 1676 additions and 638 deletions

View File

@ -1,5 +1,107 @@
# Changelog
## 20200116
- **Bump ELK to latest 6.8.6**
- **Update ISO image to fix upstream bug of missing kernel modules**
- **Include dashboards for CitrixHoneypot**
- Please run `/opt/tpot/update.sh` for the necessary modifications, omit the reboot and run `/opt/tpot/bin/tped.sh` to (re-)select the NextGen installation type.
- This update requires the latest Kibana objects as well. Download the latest from https://raw.githubusercontent.com/dtag-dev-sec/tpotce/master/etc/objects/kibana_export.json.zip, unzip and import the objects within Kibana WebUI > Management > Saved Objects > Export / Import. All objects will be overwritten upon import, so make sure to run an export first.
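A condensed shell sketch of those steps (run as root on the T-Pot host; the Kibana import itself still happens in the WebUI):
```
/opt/tpot/update.sh                 # omit the reboot when asked
/opt/tpot/bin/tped.sh               # (re-)select the NextGen installation type
wget https://raw.githubusercontent.com/dtag-dev-sec/tpotce/master/etc/objects/kibana_export.json.zip
unzip kibana_export.json.zip        # then import via Kibana WebUI > Management > Saved Objects
```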
## 20200115
- **Prepare integration of CitrixHoneypot**
- Prepare integration of [CitrixHoneypot](https://github.com/MalwareTech/CitrixHoneypot) by MalwareTech
- Integration into ELK is still open
- Please run `/opt/tpot/update.sh` for the necessary modifications, omit the reboot and run `/opt/tpot/bin/tped.sh` to (re-)select the NextGen installation type.
## 20191224
- **Use pigz, optimize logrotate.conf**
- Use `pigz` for faster archiving, especially with regard to high volumes of logs - Thanks to @workandresearchgithub!
- Optimize `logrotate.conf` to improve archiving speed and get rid of multiple compression, also introduce `pigz`.
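One of the related script changes further down in this diff hands archiving to pigz via `tar -I`; isolated here as a sketch (the example paths are illustrative, the script itself uses variables for them):
```
# compress a data folder with pigz (parallel gzip) instead of tar's built-in gzip
myPIGZ=$(which pigz)
tar -I $myPIGZ -cvf /data/adbhoney/downloads.tgz /data/adbhoney/downloads
```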
## 20191121
- **Bump ADBHoney to latest master**
- Use latest version of ADBHoney, which now fully supports Python 3.x - Thanks to @huuck!
## 20191113, 20191104, 20191103, 20191028
- **Switch to Debian 10 on OTC, Ansible Improvements**
- OTC now supporting Debian 10 - Thanks to @shaderecker!
## 20191028
- **Fix an issue with pip3, yq**
- `yq` needs rehashing.
## 20191026
- **Remove cockpit-pcp**
- `cockpit-pcp` floods swap for some reason - removing for now.
## 20191022
- **Bump Suricata to 5.0.0**
## 20191021
- **Bump Cowrie to 2.0.0**
## 20191016
- **Tweak installer, pip3, Heralding**
- Install `cockpit-pcp` right from the start for machine monitoring in cockpit.
- Move installer and update script to use pip3.
- Bump heralding to latest master (1.0.6) - Thanks @johnnykv!
## 20191015
- **Tweaking, Bump glutton, unlock ES script**
- Add `unlock.sh` to unlock ES indices in case of lockdown after disk quota has been reached.
- Prevent too much terminal logging from p0f and glutton since `daemon.log` was filled up.
- Bump glutton to latest master now supporting payload_hex. Thanks to @glaslos.
## 20191002
- **Merge**
- Support Debian Buster images for AWS #454
- Thank you @piffey
## 20190924
- **Bump EWSPoster**
- Supports Python 3.x
- Thank you @Trixam
## 20190919
- **Merge**
- Handle non-interactive shells #454
- Thank you @Oogy
## 20190907
- **Logo tweaking**
- Add QR logo
## 20190828
- **Upgrades and rebuilds**
- Bump Medpot, Nginx and Adbhoney to latest master
- Bump ELK stack to 6.8.2
- Rebuild Mailoney, Honeytrap, Elasticpot and Ciscoasa
- Add 1080p T-Pot wallpaper for download
## 20190824
- **Add some logo work**
- Adjusted social preview, thanks to @thehadilps's suggestion
- Added 4k T-Pot wallpaper for download
## 20190823
- **Fix for broken Fuse package**
- Fuse package in upstream is broken
- Adjust installer as workaround, fixes #442
## 20190816
- **Upgrades and rebuilds**
- Adjust Dionaea to avoid nmap detection, fixes #435 (thanks @iukea1)
- Bump Tanner, Cyberchef, Spiderfoot and ES Head to latest master
## 20190815
- **Bump ELK stack to 6.7.2**
- Transition to 7.x must iterate slowly through previous versions to prevent changes breaking T-Pots
## 20190814
- **Logstash Translation Maps improvement**
- Download translation maps rather than running a git pull
- Translation maps will now be bzip2 compressed to reduce traffic to a minimum
- Fixes #432
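A hypothetical sketch of the download-and-unpack pattern described above (the URL and file name are placeholders, not the repo's actual locations):
```
# replace the former `git pull` with a download of the bzip2-compressed map
curl -sSL https://example.com/listbot/translation-map.yaml.bz2 | bunzip2 > translation-map.yaml
```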
## 20190802
- **Add support for Buster as base image**

View File

@ -1,4 +1,4 @@
# T-Pot 19.03 ![T-Pot](doc/tpotsocial.png)
T-Pot 19.03 runs on Debian (Sid), is based heavily on
@ -8,6 +8,7 @@ and includes dockerized versions of the following honeypots
* [adbhoney](https://github.com/huuck/ADBHoney),
* [ciscoasa](https://github.com/Cymmetria/ciscoasa_honeypot),
* [citrixhoneypot](https://github.com/MalwareTech/CitrixHoneypot),
* [conpot](http://conpot.org/),
* [cowrie](https://github.com/cowrie/cowrie),
* [dionaea](https://github.com/DinoTools/dionaea),
@ -139,10 +140,11 @@ This allows us to run multiple honeypot daemons on the same network interface wh
In T-Pot we combine the dockerized honeypots ...
* [adbhoney](https://github.com/huuck/ADBHoney),
* [ciscoasa](https://github.com/Cymmetria/ciscoasa_honeypot),
* [citrixhoneypot](https://github.com/MalwareTech/CitrixHoneypot),
* [conpot](http://conpot.org/),
* [cowrie](http://www.micheloosterhof.com/cowrie/),
* [dionaea](https://github.com/DinoTools/dionaea),
* [elasticpot](https://github.com/schmalle/ElasticPot), * [elasticpot](https://github.com/schmalle/ElasticpotPY),
* [glutton](https://github.com/mushorg/glutton),
* [heralding](https://github.com/johnnykv/heralding),
* [honeypy](https://github.com/foospidy/HoneyPy),
@ -221,7 +223,7 @@ Depending on your installation type, whether you install on [real hardware](#har
- A working, non-proxied, internet connection
##### NextGen Installation (Glutton replacing Honeytrap, HoneyPy replacing Elasticpot)
- Honeypots: adbhoney, ciscoasa, conpot, cowrie, dionaea, glutton, heralding, honeypy, mailoney, rdpy, snare & tanner - Honeypots: adbhoney, ciscoasa, citrixhoneypot, conpot, cowrie, dionaea, glutton, heralding, honeypy, mailoney, rdpy, snare & tanner
- Tools: cockpit, cyberchef, ELK, elasticsearch head, ewsposter, fatt, NGINX, spiderfoot, p0f and suricata
- 6-8 GB RAM (less RAM is possible but might introduce swapping)
@ -338,7 +340,7 @@ If you would like to contribute, you can add other cloud deployments like Chef o
You can find an [Ansible](https://www.ansible.com/) based T-Pot deployment in the [`cloud/ansible`](cloud/ansible) folder.
The Playbook in the [`cloud/ansible/openstack`](cloud/ansible/openstack) folder is reusable for all OpenStack clouds out of the box.
It first creates a new server and then installs and configures T-Pot. It first creates all resources (security group, network, subnet, router), deploys a new server and then installs and configures T-Pot.
You can have a look at the Playbook and easily adapt the deploy role for other [cloud providers](https://docs.ansible.com/ansible/latest/modules/list_of_cloud_modules.html).
@ -526,10 +528,10 @@ We hope you understand that we cannot provide support on an individual basis. We
# Licenses
The software that T-Pot is built on uses the following licenses.
<br>GPLv2: [conpot](https://github.com/mushorg/conpot/blob/master/LICENSE.txt), [dionaea](https://github.com/DinoTools/dionaea/blob/master/LICENSE), [honeypy](https://github.com/foospidy/HoneyPy/blob/master/LICENSE), [honeytrap](https://github.com/armedpot/honeytrap/blob/master/LICENSE), [suricata](http://suricata-ids.org/about/open-source/)
<br>GPLv3: [adbhoney](https://github.com/huuck/ADBHoney), [elasticpot](https://github.com/schmalle/ElasticPot), [ewsposter](https://github.com/dtag-dev-sec/ews/), [fatt](https://github.com/0x4D31/fatt/blob/master/LICENSE), [rdpy](https://github.com/citronneur/rdpy/blob/master/LICENSE), [heralding](https://github.com/johnnykv/heralding/blob/master/LICENSE.txt), [snare](https://github.com/mushorg/snare/blob/master/LICENSE), [tanner](https://github.com/mushorg/snare/blob/master/LICENSE) <br>GPLv3: [adbhoney](https://github.com/huuck/ADBHoney), [elasticpot](https://github.com/schmalle/ElasticpotPY), [ewsposter](https://github.com/dtag-dev-sec/ews/), [fatt](https://github.com/0x4D31/fatt/blob/master/LICENSE), [rdpy](https://github.com/citronneur/rdpy/blob/master/LICENSE), [heralding](https://github.com/johnnykv/heralding/blob/master/LICENSE.txt), [snare](https://github.com/mushorg/snare/blob/master/LICENSE), [tanner](https://github.com/mushorg/snare/blob/master/LICENSE)
<br>Apache 2 License: [cyberchef](https://github.com/gchq/CyberChef/blob/master/LICENSE), [elasticsearch](https://github.com/elasticsearch/elasticsearch/blob/master/LICENSE.txt), [logstash](https://github.com/elasticsearch/logstash/blob/master/LICENSE), [kibana](https://github.com/elasticsearch/kibana/blob/master/LICENSE.md), [docker](https://github.com/docker/docker/blob/master/LICENSE), [elasticsearch-head](https://github.com/mobz/elasticsearch-head/blob/master/LICENCE)
<br>MIT license: [ciscoasa](https://github.com/Cymmetria/ciscoasa_honeypot/blob/master/LICENSE), [glutton](https://github.com/mushorg/glutton/blob/master/LICENSE)
<br> Other: [cowrie](https://github.com/micheloosterhof/cowrie/blob/master/LICENSE.md), [mailoney](https://github.com/awhitehatter/mailoney), [Debian licensing](https://www.debian.org/legal/licenses/) <br> Other: [citrixhoneypot](https://github.com/MalwareTech/CitrixHoneypot#licencing-agreement-malwaretech-public-licence), [cowrie](https://github.com/micheloosterhof/cowrie/blob/master/LICENSE.md), [mailoney](https://github.com/awhitehatter/mailoney), [Debian licensing](https://www.debian.org/legal/licenses/)
<a name="credits"></a>
# Credits
@ -540,6 +542,7 @@ Without open source and the fruitful development community (we are proud to be a
* [adbhoney](https://github.com/huuck/ADBHoney/graphs/contributors)
* [apt-fast](https://github.com/ilikenwf/apt-fast/graphs/contributors)
* [ciscoasa](https://github.com/Cymmetria/ciscoasa_honeypot/graphs/contributors)
* [citrixhoneypot](https://github.com/MalwareTech/CitrixHoneypot/graphs/contributors)
* [cockpit](https://github.com/cockpit-project/cockpit/graphs/contributors)
* [conpot](https://github.com/mushorg/conpot/graphs/contributors)
* [cowrie](https://github.com/micheloosterhof/cowrie/graphs/contributors)

View File

@ -5,6 +5,9 @@ myRED=""
myGREEN="" myGREEN=""
myWHITE="" myWHITE=""
# Set pigz
myPIGZ=$(which pigz)
# Set persistence # Set persistence
myPERSISTENCE=$1 myPERSISTENCE=$1
@ -46,14 +49,14 @@ chmod 644 /data/nginx/cert -R
logrotate -f -s $mySTATUS $myCONF logrotate -f -s $mySTATUS $myCONF
# Compressing some folders first and rotate them later # Compressing some folders first and rotate them later
if [ "$(fuEMPTY $myADBHONEYDL)" != "0" ]; then tar cvfz $myADBHONEYTGZ $myADBHONEYDL; fi if [ "$(fuEMPTY $myADBHONEYDL)" != "0" ]; then tar -I $myPIGZ -cvf $myADBHONEYTGZ $myADBHONEYDL; fi
if [ "$(fuEMPTY $myCOWRIETTYLOGS)" != "0" ]; then tar cvfz $myCOWRIETTYTGZ $myCOWRIETTYLOGS; fi if [ "$(fuEMPTY $myCOWRIETTYLOGS)" != "0" ]; then tar -I $myPIGZ -cvf $myCOWRIETTYTGZ $myCOWRIETTYLOGS; fi
if [ "$(fuEMPTY $myCOWRIEDL)" != "0" ]; then tar cvfz $myCOWRIEDLTGZ $myCOWRIEDL; fi if [ "$(fuEMPTY $myCOWRIEDL)" != "0" ]; then tar -I $myPIGZ -cvf $myCOWRIEDLTGZ $myCOWRIEDL; fi
if [ "$(fuEMPTY $myDIONAEABI)" != "0" ]; then tar cvfz $myDIONAEABITGZ $myDIONAEABI; fi if [ "$(fuEMPTY $myDIONAEABI)" != "0" ]; then tar -I $myPIGZ -cvf $myDIONAEABITGZ $myDIONAEABI; fi
if [ "$(fuEMPTY $myDIONAEABIN)" != "0" ]; then tar cvfz $myDIONAEABINTGZ $myDIONAEABIN; fi if [ "$(fuEMPTY $myDIONAEABIN)" != "0" ]; then tar -I $myPIGZ -cvf $myDIONAEABINTGZ $myDIONAEABIN; fi
if [ "$(fuEMPTY $myHONEYTRAPATTACKS)" != "0" ]; then tar cvfz $myHONEYTRAPATTACKSTGZ $myHONEYTRAPATTACKS; fi if [ "$(fuEMPTY $myHONEYTRAPATTACKS)" != "0" ]; then tar -I $myPIGZ -cvf $myHONEYTRAPATTACKSTGZ $myHONEYTRAPATTACKS; fi
if [ "$(fuEMPTY $myHONEYTRAPDL)" != "0" ]; then tar cvfz $myHONEYTRAPDLTGZ $myHONEYTRAPDL; fi if [ "$(fuEMPTY $myHONEYTRAPDL)" != "0" ]; then tar -I $myPIGZ -cvf $myHONEYTRAPDLTGZ $myHONEYTRAPDL; fi
if [ "$(fuEMPTY $myTANNERF)" != "0" ]; then tar cvfz $myTANNERFTGZ $myTANNERF; fi if [ "$(fuEMPTY $myTANNERF)" != "0" ]; then tar -I $myPIGZ -cvf $myTANNERFTGZ $myTANNERF; fi
# Ensure correct permissions and ownership for previously created archives # Ensure correct permissions and ownership for previously created archives
chmod 770 $myADBHONEYTGZ $myCOWRIETTYTGZ $myCOWRIEDLTGZ $myDIONAEABITGZ $myDIONAEABINTGZ $myHONEYTRAPATTACKSTGZ $myHONEYTRAPDLTGZ $myTANNERFTGZ chmod 770 $myADBHONEYTGZ $myCOWRIETTYTGZ $myCOWRIEDLTGZ $myDIONAEABITGZ $myDIONAEABINTGZ $myHONEYTRAPATTACKSTGZ $myHONEYTRAPDLTGZ $myTANNERFTGZ
@ -87,6 +90,14 @@ fuCISCOASA () {
chown tpot:tpot /data/ciscoasa -R chown tpot:tpot /data/ciscoasa -R
} }
# Let's create a function to clean up and prepare citrixhoneypot data
fuCITRIXHONEYPOT () {
if [ "$myPERSISTENCE" != "on" ]; then rm -rf /data/citrixhoneypot/*; fi
mkdir -p /data/citrixhoneypot/logs/
chmod 770 /data/citrixhoneypot/ -R
chown tpot:tpot /data/citrixhoneypot/ -R
}
# Let's create a function to clean up and prepare conpot data # Let's create a function to clean up and prepare conpot data
fuCONPOT () { fuCONPOT () {
if [ "$myPERSISTENCE" != "on" ]; then rm -rf /data/conpot/*; fi if [ "$myPERSISTENCE" != "on" ]; then rm -rf /data/conpot/*; fi
@ -257,6 +268,7 @@ if [ "$myPERSISTENCE" = "on" ];
echo "Cleaning up and preparing data folders." echo "Cleaning up and preparing data folders."
fuADBHONEY fuADBHONEY
fuCISCOASA fuCISCOASA
fuCITRIXHONEYPOT
fuCONPOT fuCONPOT
fuCOWRIE fuCOWRIE
fuDIONAEA fuDIONAEA

View File

@ -1,4 +1,4 @@
#/bin/bash #!/bin/bash
# Run as root only.
myWHOAMI=$(whoami)

View File

@ -78,9 +78,9 @@ myENABLE=$myENABLE
myHOST=$myHOST
myPORT=$myPORT
myCHANNEL=$myCHANNEL
myCERT=$myCERT
myIDENT=$myIDENT
mySECRET=$mySECRET
myCERT=$myCERT
myFORMAT=$myFORMAT
EOF
} }

bin/unlock_es.sh Executable file (19 lines added)
View File

@ -0,0 +1,19 @@
#!/bin/bash
# Unlock all ES indices for read / write mode
# Useful in cases where ES locked all indices after disk quota has been reached
# Make sure ES is available
myES="http://127.0.0.1:64298/"
myESSTATUS=$(curl -s -XGET ''$myES'_cluster/health' | jq '.' | grep -c "green\|yellow")
if ! [ "$myESSTATUS" = "1" ]
then
echo "### Elasticsearch is not available, try starting via 'systemctl start tpot'."
exit
else
echo "### Elasticsearch is available, now continuing."
echo
fi
echo "### Trying to unlock all ES indices for read / write operation: "
curl -XPUT -H "Content-Type: application/json" ''$myES'_all/_settings' -d '{"index.blocks.read_only_allow_delete": null}'
echo
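A quick follow-up check, assuming the same ES endpoint as above, to confirm the read-only block is gone (the `read_only_allow_delete` block should show up as null or be absent):
```
curl -s -XGET 'http://127.0.0.1:64298/_all/_settings' | jq '.[].settings.index.blocks'
```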

View File

@ -4,7 +4,7 @@ Here you can find a ready-to-use solution for your automated T-Pot deployment us
It consists of an Ansible Playbook with multiple roles, which is reusable for all [OpenStack](https://www.openstack.org/) based clouds (e.g. Open Telekom Cloud, Orange Cloud, Telefonica Open Cloud, OVH) out of the box.
Apart from that you can easily adapt the deploy role to use other [cloud providers](https://docs.ansible.com/ansible/latest/modules/list_of_cloud_modules.html) (e.g. AWS, Azure, Digital Ocean, Google).
The Playbook first creates a new server and then installs and configures T-Pot. The Playbook first creates all resources (security group, network, subnet, router), deploys a new server and then installs and configures T-Pot.
This example showcases the deployment on our own OpenStack based Public Cloud Offering [Open Telekom Cloud](https://open-telekom-cloud.com/en).
@ -16,7 +16,6 @@ This example showcases the deployment on our own OpenStack based Public Cloud Of
- [Create new project](#project)
- [Create API user](#api-user)
- [Import Key Pair](#key-pair)
- [Create VPC, Subnet and Security Group](#vpc-subnet-securitygroup)
- [Clone Git Repository](#clone-git)
- [Settings and recommended values](#settings)
- [OpenStack authentication variables](#os-auth)
@ -38,7 +37,12 @@ Ansible works over the SSH Port, so you don't have to add any special rules to y
<a name="ansible"></a> <a name="ansible"></a>
## Ansible Installation ## Ansible Installation
Example for Ubuntu 18.04: Example for Ubuntu 18.04:
At first we need to add the repository and install Ansible:
At first we update the system:
`sudo apt update`
`sudo apt dist-upgrade`
Then we need to add the repository and install Ansible:
`sudo apt-add-repository --yes --update ppa:ansible/ansible` `sudo apt-add-repository --yes --update ppa:ansible/ansible`
`sudo apt install ansible` `sudo apt install ansible`
@ -46,26 +50,20 @@ For other OSes and Distros have a look at the official [Ansible Documentation](h
<a name="agent-forwarding"></a> <a name="agent-forwarding"></a>
## Agent Forwarding ## Agent Forwarding
Agent Forwarding must be enabled in order to let Ansible do its work. If you run the Ansible Playbook remotely on your Ansible Master Server, Agent Forwarding must be enabled in order to let Ansible connect to newly created machines.
- On Linux or macOS: - On Linux or macOS:
- Create or edit `~/.ssh/config` - Create or edit `~/.ssh/config`
- If you run the Ansible Playbook remotely on your Ansible Master Server:
``` ```
Host ANSIBLE_MASTER_IP Host ANSIBLE_MASTER_IP
ForwardAgent yes ForwardAgent yes
``` ```
- If you run the Ansible Playbook locally, enable it for all hosts, as this includes newly generated T-Pots: - On Windows using Putty:
```
Host *
ForwardAgent yes
```
- On Windows using Putty for connecting to your Ansible Master Server:
![Putty Agent Forwarding](doc/putty_agent_forwarding.png) ![Putty Agent Forwarding](doc/putty_agent_forwarding.png)
<a name="preparation"></a> <a name="preparation"></a>
# Preparations in Open Telekom Cloud Console # Preparations in Open Telekom Cloud Console
(You can skip this if you have already set up an API account, VPC, Subnet and Security Group) (You can skip this if you have already set up a project and an API account with key pair)
(Just make sure you know the naming for everything, as you will need it to configure the Ansible variables.) (Just make sure you know the naming for everything, as you need to configure the Ansible variables.)
Before we can start deploying, we have to prepare the Open Telekom Cloud tenant. Before we can start deploying, we have to prepare the Open Telekom Cloud tenant.
For that, go to the [Web Console](https://auth.otc.t-systems.com/authui/login) and log in with an admin user. For that, go to the [Web Console](https://auth.otc.t-systems.com/authui/login) and log in with an admin user.
@ -90,22 +88,10 @@ This ensures that the API access is limited to that project.
![Login as API user](doc/otc_3_login.gif) ![Login as API user](doc/otc_3_login.gif)
Import your SSH public key. Import your SSH public key.
![Import SSH Public Key](doc/otc_4_import_key.gif) ![Import SSH Public Key](doc/otc_4_import_key.gif)
<a name="vpc-subnet-securitygroup"></a>
## Create VPC, Subnet and Security Group
- VPC (Virtual Private Cloud) and Subnet:
![Create VPC and Subnet](doc/otc_5_vpc_subnet.gif)
- Security Group:
The configured Security Group should allow all incoming TCP / UDP traffic.
If you want to secure the management interfaces, you can limit the incoming "allow all" traffic to the port range of 1-64000 and allow access to ports > 64000 only from your trusted IPs.
![Create Security Group](doc/otc_6_sec_group.gif)
<a name="clone-git"></a> <a name="clone-git"></a>
# Clone Git Repository # Clone Git Repository
@ -142,24 +128,20 @@ Located at [`openstack/roles/deploy/vars/main.yaml`](openstack/roles/deploy/vars
Here you can customize your virtual machine specifications: Here you can customize your virtual machine specifications:
- Specify the region name - Specify the region name
- Choose an availability zone. For Open Telekom Cloud reference see [here](https://docs.otc.t-systems.com/en-us/endpoint/index.html). - Choose an availability zone. For Open Telekom Cloud reference see [here](https://docs.otc.t-systems.com/en-us/endpoint/index.html).
- Change the OS image (For T-Pot we need Debian 9) - Change the OS image (For T-Pot we need Debian)
- (Optional) Change the volume size - (Optional) Change the volume size
- Specify your key pair - Specify your key pair (:warning: Mandatory)
- (Optional) Change the instance type (flavor) - (Optional) Change the instance type (flavor)
`s2.medium.8` corresponds to 1 vCPU and 8GB of RAM and is the minimum required flavor. `s2.medium.8` corresponds to 1 vCPU and 8GB of RAM and is the minimum required flavor.
A full list of Open telekom Cloud flavors can be found [here](https://docs.otc.t-systems.com/en-us/usermanual/ecs/en-us_topic_0035470096.html). A full list of Open telekom Cloud flavors can be found [here](https://docs.otc.t-systems.com/en-us/usermanual/ecs/en-us_topic_0035470096.html).
- Specify the security group
- Specify the network ID (For Open Telekom Cloud you can find the ID in the Web Console under `Virtual Private Cloud --> your-vpc --> your-subnet --> Network ID`; In general for OpenStack clouds you can use the `python-openstackclient` to retrieve information about your resources)
``` ```
region_name: eu-de region_name: eu-de
availability_zone: eu-de-03 availability_zone: eu-de-03
image: Standard_Debian_9_latest image: Standard_Debian_10_latest
volume_size: 128 volume_size: 128
key_name: your-KeyPair key_name: your-KeyPair
flavor: s2.medium.8 flavor: s2.medium.8
security_groups: your-sg
network: your-network-id
``` ```
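If you are unsure which values to pick, the `python-openstackclient` mentioned above can list the available options (a sketch, assuming the CLI is installed and your OpenStack credentials are exported in the environment):
```
openstack availability zone list
openstack image list | grep -i debian
openstack flavor list | grep s2.medium.8
```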
<a name="user-password"></a> <a name="user-password"></a>

Binary file not shown (before: 172 KiB).

Binary file not shown (before: 337 KiB).

View File

@ -3,3 +3,4 @@ host_key_checking = false
[ssh_connection]
scp_if_ssh = true
ssh_args = -o ServerAliveInterval=60

View File

@ -1,8 +1,6 @@
- name: Check host prerequisites - name: Check host prerequisites
hosts: localhost hosts: localhost
become: yes become: yes
become_user: root
become_method: sudo
roles: roles:
- check - check
@ -15,8 +13,6 @@
hosts: TPOT hosts: TPOT
remote_user: linux remote_user: linux
become: yes become: yes
become_user: root
become_method: sudo
gather_facts: no gather_facts: no
roles: roles:
- install - install

View File

@ -1,28 +1,17 @@
- name: Install pwgen - name: Install dependencies
package: package:
name: pwgen name:
state: present - pwgen
- python-setuptools
- name: Install setuptools - python-pip
package:
name: python-setuptools
state: present
- name: Install pip
package:
name: python-pip
state: present state: present
- name: Install openstacksdk - name: Install openstacksdk
pip: pip:
name: openstacksdk name: openstacksdk
- name: Set fact for agent forwarding
set_fact:
agent_forwarding: "{{ lookup('env','SSH_AUTH_SOCK') }}"
- name: Check if agent forwarding is enabled - name: Check if agent forwarding is enabled
fail: fail:
msg: Please enable agent forwarding to allow Ansible to connect to the remote host! msg: Please enable agent forwarding to allow Ansible to connect to the remote host!
ignore_errors: yes ignore_errors: yes
when: agent_forwarding == "" when: lookup('env','SSH_AUTH_SOCK') == ""

View File

@ -9,5 +9,5 @@
- name: Patching tpot.yml with custom ews configuration file - name: Patching tpot.yml with custom ews configuration file
lineinfile: lineinfile:
path: /opt/tpot/etc/tpot.yml path: /opt/tpot/etc/tpot.yml
insertafter: '/opt/ewsposter/ews.ip' insertafter: "/opt/ewsposter/ews.ip"
line: ' - /data/ews/conf/ews.cfg:/opt/ewsposter/ews.cfg' line: " - /data/ews/conf/ews.cfg:/opt/ewsposter/ews.cfg"

View File

@ -1,10 +1,12 @@
- name: Copy hpfeeds configuration file - name: Copy hpfeeds configuration file
template: copy:
src: ../templates/hpfeeds.cfg src: ../files/hpfeeds.cfg
dest: /data/ews/conf dest: /data/ews/conf
owner: root owner: tpot
group: root group: tpot
mode: 0644 mode: 0770
register: config
- name: Applying hpfeeds settings - name: Applying hpfeeds settings
command: /opt/tpot/bin/hpfeeds_optin.sh --conf=/data/ews/conf/hpfeeds.cfg command: /opt/tpot/bin/hpfeeds_optin.sh --conf=/data/ews/conf/hpfeeds.cfg
when: config.changed == true

View File

@ -5,6 +5,66 @@
- name: Import OpenStack authentication variables - name: Import OpenStack authentication variables
include_vars: include_vars:
file: roles/deploy/vars/os_auth.yaml file: roles/deploy/vars/os_auth.yaml
no_log: true
- name: Create security group
os_security_group:
auth:
auth_url: "{{ auth_url }}"
username: "{{ username }}"
password: "{{ password }}"
project_name: "{{ project_name }}"
os_user_domain_name: "{{ os_user_domain_name }}"
name: sg-tpot-any
description: tpot any-any
- name: Add rules to security group
os_security_group_rule:
auth:
auth_url: "{{ auth_url }}"
username: "{{ username }}"
password: "{{ password }}"
project_name: "{{ project_name }}"
os_user_domain_name: "{{ os_user_domain_name }}"
security_group: sg-tpot-any
remote_ip_prefix: 0.0.0.0/0
- name: Create network
os_network:
auth:
auth_url: "{{ auth_url }}"
username: "{{ username }}"
password: "{{ password }}"
project_name: "{{ project_name }}"
os_user_domain_name: "{{ os_user_domain_name }}"
name: network-tpot
- name: Create subnet
os_subnet:
auth:
auth_url: "{{ auth_url }}"
username: "{{ username }}"
password: "{{ password }}"
project_name: "{{ project_name }}"
os_user_domain_name: "{{ os_user_domain_name }}"
network_name: network-tpot
name: subnet-tpot
cidr: 192.168.0.0/24
dns_nameservers:
- 1.1.1.1
- 8.8.8.8
- name: Create router
os_router:
auth:
auth_url: "{{ auth_url }}"
username: "{{ username }}"
password: "{{ password }}"
project_name: "{{ project_name }}"
os_user_domain_name: "{{ os_user_domain_name }}"
name: router-tpot
interfaces:
- subnet-tpot
- name: Launch an instance - name: Launch an instance
os_server: os_server:
@ -23,8 +83,8 @@
key_name: "{{ key_name }}" key_name: "{{ key_name }}"
timeout: 200 timeout: 200
flavor: "{{ flavor }}" flavor: "{{ flavor }}"
security_groups: "{{ security_groups }}" security_groups: sg-tpot-any
network: "{{ network }}" network: network-tpot
register: tpot register: tpot
- name: Add instance to inventory - name: Add instance to inventory

View File

@ -1,8 +1,6 @@
region_name: eu-de region_name: eu-de
availability_zone: eu-de-03 availability_zone: eu-de-03
image: Standard_Debian_9_latest image: Standard_Debian_10_latest
volume_size: 128 volume_size: 128
key_name: your-KeyPair key_name: your-KeyPair
flavor: s2.medium.8 flavor: s2.medium.8
security_groups: your-sg
network: your-network-id

View File

@ -1,7 +1,5 @@
- name: Waiting for SSH connection - name: Waiting for SSH connection
wait_for_connection: wait_for_connection:
delay: 30
timeout: 300
- name: Gathering facts - name: Gathering facts
setup: setup:
@ -14,16 +12,15 @@
- name: Prepare to set user password - name: Prepare to set user password
set_fact: set_fact:
user_name: "{{ ansible_user }}" user_name: "{{ ansible_user }}"
user_password: "{{ user_password }}"
user_salt: "s0mew1ck3dTpoT" user_salt: "s0mew1ck3dTpoT"
no_log: true
- name: Changing password for user {{ user_name }} to {{ user_password }} - name: Changing password for user {{ user_name }}
user: user:
name: "{{ ansible_user }}" name: "{{ ansible_user }}"
password: "{{ user_password | password_hash('sha512', user_salt) }}" password: "{{ user_password | password_hash('sha512', user_salt) }}"
state: present state: present
shell: /bin/bash shell: /bin/bash
update_password: always
- name: Copy T-Pot configuration file - name: Copy T-Pot configuration file
template: template:
@ -33,7 +30,7 @@
group: root group: root
mode: 0644 mode: 0644
- name: Install T-Pot on instance - be patient, this might take 15 to 30 minutes depending on the connection speed. No further output is given. - name: Install T-Pot on instance - be patient, this might take 15 to 30 minutes depending on the connection speed.
command: /root/tpot/iso/installer/install.sh --type=auto --conf=/root/tpot.conf command: /root/tpot/iso/installer/install.sh --type=auto --conf=/root/tpot.conf
- name: Delete T-Pot configuration file - name: Delete T-Pot configuration file

View File

@ -1,6 +1,7 @@
- name: Finally rebooting T-Pot in one minute - name: Finally rebooting T-Pot
shell: /sbin/shutdown -r -t 1 command: shutdown -r now
become: true async: 1
poll: 0
- name: Next login options - name: Next login options
debug: debug:

View File

@ -62,4 +62,5 @@ resource "aws_instance" "tpot" {
} }
user_data = "${file("../cloud-init.yaml")} content: ${base64encode(file("../tpot.conf"))}" user_data = "${file("../cloud-init.yaml")} content: ${base64encode(file("../tpot.conf"))}"
vpc_security_group_ids = [aws_security_group.tpot.id] vpc_security_group_ids = [aws_security_group.tpot.id]
associate_public_ip_address = true
} }

View File

@ -28,26 +28,27 @@ variable "ec2_instance_type" {
default = "t3.large" default = "t3.large"
} }
# Refer to https://wiki.debian.org/Cloud/AmazonEC2Image/Stretch # Refer to https://wiki.debian.org/Cloud/AmazonEC2Image/Buster
variable "ec2_ami" { variable "ec2_ami" {
type = map(string) type = map(string)
default = { default = {
"ap-northeast-1" = "ami-09fbcd30452841cb9" "ap-east-1" = "ami-b7d0abc6"
"ap-northeast-2" = "ami-08363ccce96df1fff" "ap-northeast-1" = "ami-01f4f0c9374675b99"
"ap-south-1" = "ami-0dc98cbb0d0e49162" "ap-northeast-2" = "ami-0855cb0c55370c38c"
"ap-southeast-1" = "ami-0555b1a5444087dd4" "ap-south-1" = "ami-00d7d1cbdcb087cf3"
"ap-southeast-2" = "ami-029c54f988446691a" "ap-southeast-1" = "ami-03779b1b2fbb3a9d4"
"ca-central-1" = "ami-04413a263a7d94982" "ap-southeast-2" = "ami-0ce3a7c68c6b1678d"
"eu-central-1" = "ami-01fb3b7bab31acac5" "ca-central-1" = "ami-037099906a22f210f"
"eu-north-1" = "ami-050f04ca573daa1fb" "eu-central-1" = "ami-0845c3902a6f2af32"
"eu-west-1" = "ami-0968f6a31fc6cffc0" "eu-north-1" = "ami-e634bf98"
"eu-west-2" = "ami-0faa9c9b5399088fd" "eu-west-1" = "ami-06a53bf81914447b5"
"eu-west-3" = "ami-0cd23820af84edc85" "eu-west-2" = "ami-053d9f0770cd2e34c"
"sa-east-1" = "ami-030580e61468e54bd" "eu-west-3" = "ami-060bf1f444f742af9"
"us-east-1" = "ami-0357081a1383dc76b" "me-south-1" = "ami-04a9a536105c72d30"
"us-east-2" = "ami-09c10a66337c79669" "sa-east-1" = "ami-0a5fd18ed0b9c7f35"
"us-west-1" = "ami-0adbaf2e0ce044437" "us-east-1" = "ami-01db78123b2b99496"
"us-west-2" = "ami-05a3ef6744aa96514" "us-east-2" = "ami-010ffea14ff17ebf5"
"us-west-1" = "ami-0ed1af421f2a3cf40"
"us-west-2" = "ami-030a304a76b181155"
} }
} }
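With the updated AMI map in place, deployment follows the usual Terraform workflow (a sketch; the directory name is an assumption, and AWS credential/region setup is not shown):
```
cd cloud/terraform/aws      # assumed path; adjust to wherever these .tf files live
terraform init              # fetch the AWS provider
terraform plan              # review the planned aws_instance / aws_security_group resources
terraform apply             # deploy
```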

doc/t-pot_qr.png (new binary file, not shown; 92 KiB)

Binary file not shown (after: 252 KiB).

doc/t-pot_wallpaper_4k.png (new binary file, not shown; 606 KiB)

doc/tpotsocial.png (new binary file, not shown; 148 KiB)

View File

@ -1,31 +1,35 @@
FROM alpine FROM alpine
#
# Include dist
ADD dist/ /root/dist/
#
# Install packages # Install packages
RUN apk -U --no-cache add \ RUN apk -U add \
git \ git \
libcap \ libcap \
python \ python3 \
python-dev && \ python3-dev && \
#
# Install adbhoney from git # Install adbhoney from git
git clone --depth=1 https://github.com/huuck/ADBHoney /opt/adbhoney && \ git clone --depth=1 https://github.com/huuck/ADBHoney /opt/adbhoney && \
sed -i 's/dst_ip/dest_ip/' /opt/adbhoney/main.py && \ cp /root/dist/adbhoney.cfg /opt/adbhoney && \
sed -i 's/dst_port/dest_port/' /opt/adbhoney/main.py && \ sed -i 's/dst_ip/dest_ip/' /opt/adbhoney/adbhoney/core.py && \
sed -i 's/dst_port/dest_port/' /opt/adbhoney/adbhoney/core.py && \
#
# Setup user, groups and configs # Setup user, groups and configs
addgroup -g 2000 adbhoney && \ addgroup -g 2000 adbhoney && \
adduser -S -H -s /bin/ash -u 2000 -D -g 2000 adbhoney && \ adduser -S -H -s /bin/ash -u 2000 -D -g 2000 adbhoney && \
chown -R adbhoney:adbhoney /opt/adbhoney && \ chown -R adbhoney:adbhoney /opt/adbhoney && \
setcap cap_net_bind_service=+ep /usr/bin/python2.7 && \ setcap cap_net_bind_service=+ep /usr/bin/python3.7 && \
#
# Clean up # Clean up
apk del --purge git \ apk del --purge git \
python-dev && \ python3-dev && \
rm -rf /root/* && \ rm -rf /root/* && \
rm -rf /var/cache/apk/* rm -rf /var/cache/apk/*
#
# Set workdir and start adbhoney # Set workdir and start adbhoney
STOPSIGNAL SIGINT STOPSIGNAL SIGINT
USER adbhoney:adbhoney USER adbhoney:adbhoney
WORKDIR /opt/adbhoney/ WORKDIR /opt/adbhoney/
CMD nohup /usr/bin/python main.py -l log/adbhoney.log -j log/adbhoney.json -d dl/ CMD nohup /usr/bin/python3 run.py

docker/adbhoney/dist/adbhoney.cfg vendored Normal file (19 lines added)
View File

@ -0,0 +1,19 @@
[honeypot]
hostname = honeypot01
address = 0.0.0.0
port = 5555
download_dir = dl/
log_dir = log/
device_id = device::http://ro.product.name =starltexx;ro.product.model=SM-G960F;ro.product.device=starlte;features=cmd,stat_v2,shell_v2
[output_log]
enabled = true
log_file = adbhoney.log
log_level = info
[output_json]
enabled = true
log_file = adbhoney.json

View File

@ -1,8 +1,8 @@
FROM alpine FROM alpine
#
# Include dist # Include dist
ADD dist/ /root/dist/ ADD dist/ /root/dist/
#
# Setup env and apt # Setup env and apt
RUN apk -U upgrade && \ RUN apk -U upgrade && \
apk add build-base \ apk add build-base \
@ -13,11 +13,11 @@ RUN apk -U upgrade && \
openssl-dev \ openssl-dev \
python3 \ python3 \
python3-dev && \ python3-dev && \
#
# Setup user # Setup user
addgroup -g 2000 ciscoasa && \ addgroup -g 2000 ciscoasa && \
adduser -S -s /bin/bash -u 2000 -D -g 2000 ciscoasa && \ adduser -S -s /bin/bash -u 2000 -D -g 2000 ciscoasa && \
#
# Get and install packages # Get and install packages
mkdir -p /opt/ && \ mkdir -p /opt/ && \
cd /opt/ && \ cd /opt/ && \
@ -27,7 +27,7 @@ RUN apk -U upgrade && \
pip3 install --no-cache-dir -r requirements.txt && \ pip3 install --no-cache-dir -r requirements.txt && \
cp /root/dist/asa_server.py /opt/ciscoasa_honeypot && \ cp /root/dist/asa_server.py /opt/ciscoasa_honeypot && \
chown -R ciscoasa:ciscoasa /opt/ciscoasa_honeypot && \ chown -R ciscoasa:ciscoasa /opt/ciscoasa_honeypot && \
#
# Clean up # Clean up
apk del --purge build-base \ apk del --purge build-base \
git \ git \
@ -36,7 +36,7 @@ RUN apk -U upgrade && \
python3-dev && \ python3-dev && \
rm -rf /root/* && \ rm -rf /root/* && \
rm -rf /var/cache/apk/* rm -rf /var/cache/apk/*
#
# Start ciscoasa # Start ciscoasa
STOPSIGNAL SIGINT STOPSIGNAL SIGINT
WORKDIR /tmp/ciscoasa/ WORKDIR /tmp/ciscoasa/

View File

@ -0,0 +1,44 @@
FROM alpine
#
# Install packages
RUN apk -U add \
git \
libcap \
openssl \
python3 \
python3-dev && \
#
pip3 install --no-cache-dir python-json-logger && \
#
# Install CitrixHoneypot from GitHub
# git clone --depth=1 https://github.com/malwaretech/citrixhoneypot /opt/citrixhoneypot && \
# git clone --depth=1 https://github.com/vorband/CitrixHoneypot /opt/citrixhoneypot && \
git clone --depth=1 https://github.com/t3chn0m4g3/CitrixHoneypot /opt/citrixhoneypot && \
#
# Setup user, groups and configs
mkdir -p /opt/citrixhoneypot/logs /opt/citrixhoneypot/ssl && \
openssl req \
-nodes \
-x509 \
-newkey rsa:2048 \
-keyout "/opt/citrixhoneypot/ssl/key.pem" \
-out "/opt/citrixhoneypot/ssl/cert.pem" \
-days 365 \
-subj '/C=AU/ST=Some-State/O=Internet Widgits Pty Ltd' && \
addgroup -g 2000 citrixhoneypot && \
adduser -S -H -s /bin/ash -u 2000 -D -g 2000 citrixhoneypot && \
chown -R citrixhoneypot:citrixhoneypot /opt/citrixhoneypot && \
setcap cap_net_bind_service=+ep /usr/bin/python3.8 && \
#
# Clean up
apk del --purge git \
openssl \
python3-dev && \
rm -rf /root/* && \
rm -rf /var/cache/apk/*
#
# Set workdir and start citrixhoneypot
STOPSIGNAL SIGINT
USER citrixhoneypot:citrixhoneypot
WORKDIR /opt/citrixhoneypot/
CMD nohup /usr/bin/python3 CitrixHoneypot.py

View File

@ -0,0 +1,20 @@
version: '2.3'
networks:
citrixhoneypot_local:
services:
# CitrixHoneypot service
citrixhoneypot:
build: .
container_name: citrixhoneypot
restart: always
networks:
- citrixhoneypot_local
ports:
- "443:443"
image: "dtagdevsec/citrixhoneypot:1903"
read_only: true
volumes:
- /data/citrixhoneypot/logs:/opt/citrixhoneypot/logs
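A hedged way to try this service on its own, assuming the file above is saved as `docker-compose.yml` next to the Dockerfile (within T-Pot it is normally started through the combined `/opt/tpot/etc/tpot.yml` instead):
```
docker-compose build citrixhoneypot     # build the local image
docker-compose up -d citrixhoneypot     # start the honeypot on port 443
docker logs -f citrixhoneypot           # follow its output
```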

View File

@ -1,10 +1,11 @@
FROM alpine FROM alpine:3.10
#
# Include dist # Include dist
ADD dist/ /root/dist/ ADD dist/ /root/dist/
#
# Setup apt # Setup apt
RUN apk -U add \ RUN sed -i 's/dl-cdn/dl-2/g' /etc/apk/repositories && \
apk -U add \
build-base \ build-base \
file \ file \
git \ git \
@ -21,7 +22,7 @@ RUN apk -U add \
py-cryptography \ py-cryptography \
tcpdump \ tcpdump \
wget && \ wget && \
#
# Setup ConPot # Setup ConPot
git clone --depth=1 https://github.com/mushorg/conpot /opt/conpot && \ git clone --depth=1 https://github.com/mushorg/conpot /opt/conpot && \
cd /opt/conpot/ && \ cd /opt/conpot/ && \
@ -37,20 +38,20 @@ RUN apk -U add \
sed -i 's/port="6969"/port="69"/' /opt/conpot/conpot/templates/default/tftp/tftp.xml && \ sed -i 's/port="6969"/port="69"/' /opt/conpot/conpot/templates/default/tftp/tftp.xml && \
sed -i 's/port="16100"/port="161"/' /opt/conpot/conpot/templates/IEC104/snmp/snmp.xml && \ sed -i 's/port="16100"/port="161"/' /opt/conpot/conpot/templates/IEC104/snmp/snmp.xml && \
sed -i 's/port="6230"/port="623"/' /opt/conpot/conpot/templates/ipmi/ipmi/ipmi.xml && \ sed -i 's/port="6230"/port="623"/' /opt/conpot/conpot/templates/ipmi/ipmi/ipmi.xml && \
pip3 install --no-cache-dir -U pip setuptools && \ pip3 install --no-cache-dir -U setuptools && \
pip3 install --no-cache-dir . && \ pip3 install --no-cache-dir . && \
cd / && \ cd / && \
rm -rf /opt/conpot /tmp/* /var/tmp/* && \ rm -rf /opt/conpot /tmp/* /var/tmp/* && \
setcap cap_net_bind_service=+ep /usr/bin/python3.6 && \ setcap cap_net_bind_service=+ep /usr/bin/python3.7 && \
#
# Get wireshark manuf db for scapy, setup configs, user, groups # Get wireshark manuf db for scapy, setup configs, user, groups
mkdir -p /etc/conpot /var/log/conpot /usr/share/wireshark && \ mkdir -p /etc/conpot /var/log/conpot /usr/share/wireshark && \
wget https://github.com/wireshark/wireshark/raw/master/manuf -o /usr/share/wireshark/manuf && \ wget https://github.com/wireshark/wireshark/raw/master/manuf -o /usr/share/wireshark/manuf && \
cp /root/dist/conpot.cfg /etc/conpot/conpot.cfg && \ cp /root/dist/conpot.cfg /etc/conpot/conpot.cfg && \
cp -R /root/dist/templates /usr/lib/python3.6/site-packages/conpot/ && \ cp -R /root/dist/templates /usr/lib/python3.7/site-packages/conpot/ && \
addgroup -g 2000 conpot && \ addgroup -g 2000 conpot && \
adduser -S -s /bin/ash -u 2000 -D -g 2000 conpot && \ adduser -S -s /bin/ash -u 2000 -D -g 2000 conpot && \
#
# Clean up # Clean up
apk del --purge \ apk del --purge \
build-base \ build-base \
@ -68,7 +69,7 @@ RUN apk -U add \
rm -rf /root/* && \ rm -rf /root/* && \
rm -rf /tmp/* && \ rm -rf /tmp/* && \
rm -rf /var/cache/apk/* rm -rf /var/cache/apk/*
#
# Start conpot # Start conpot
STOPSIGNAL SIGINT STOPSIGNAL SIGINT
USER conpot:conpot USER conpot:conpot

View File

@ -3,7 +3,7 @@ sensorid = conpot
[virtual_file_system] [virtual_file_system]
data_fs_url = %(CONPOT_TMP)s data_fs_url = %(CONPOT_TMP)s
fs_url = tar:///usr/lib/python3.6/site-packages/conpot/data.tar fs_url = tar:///usr/lib/python3.7/site-packages/conpot/data.tar
[session] [session]
timeout = 30 timeout = 30

View File

@ -1,10 +1,10 @@
FROM alpine FROM alpine
#
# Include dist # Include dist
ADD dist/ /root/dist/ ADD dist/ /root/dist/
#
# Get and install dependencies & packages # Get and install dependencies & packages
RUN apk -U --no-cache add \ RUN apk -U add \
bash \ bash \
build-base \ build-base \
git \ git \
@ -15,38 +15,38 @@ RUN apk -U --no-cache add \
mpfr-dev \ mpfr-dev \
openssl \ openssl \
openssl-dev \ openssl-dev \
python \ python3 \
python-dev \ python3-dev \
py-bcrypt \ py3-bcrypt \
py-mysqldb \ py3-mysqlclient \
py-pip \ py3-requests \
py-requests \ py3-setuptools && \
py-setuptools && \ #
# Setup user # Setup user
addgroup -g 2000 cowrie && \ addgroup -g 2000 cowrie && \
adduser -S -s /bin/ash -u 2000 -D -g 2000 cowrie && \ adduser -S -s /bin/ash -u 2000 -D -g 2000 cowrie && \
#
# Install cowrie # Install cowrie
mkdir -p /home/cowrie && \ mkdir -p /home/cowrie && \
cd /home/cowrie && \ cd /home/cowrie && \
git clone --depth=1 https://github.com/micheloosterhof/cowrie -b 1.5.3 && \ git clone --depth=1 https://github.com/micheloosterhof/cowrie -b v2.0.0 && \
cd cowrie && \ cd cowrie && \
mkdir -p log && \ mkdir -p log && \
pip install --upgrade pip && \ pip3 install --upgrade pip && \
pip install --upgrade -r requirements.txt && \ pip3 install --upgrade -r requirements.txt && \
#
# Setup configs # Setup configs
setcap cap_net_bind_service=+ep /usr/bin/python2.7 && \ export PYTHON_DIR=$(python3 --version | tr '[A-Z]' '[a-z]' | tr -d ' ' | cut -d '.' -f 1,2 ) && \
setcap cap_net_bind_service=+ep /usr/bin/$PYTHON_DIR && \
cp /root/dist/cowrie.cfg /home/cowrie/cowrie/cowrie.cfg && \ cp /root/dist/cowrie.cfg /home/cowrie/cowrie/cowrie.cfg && \
chown cowrie:cowrie -R /home/cowrie/* /usr/lib/python2.7/site-packages/twisted/plugins && \ chown cowrie:cowrie -R /home/cowrie/* /usr/lib/$PYTHON_DIR/site-packages/twisted/plugins && \
#
# Start Cowrie once to prevent dropin.cache errors upon container start caused by read-only filesystem # Start Cowrie once to prevent dropin.cache errors upon container start caused by read-only filesystem
su - cowrie -c "export PYTHONPATH=/home/cowrie/cowrie:/home/cowrie/cowrie/src && \ su - cowrie -c "export PYTHONPATH=/home/cowrie/cowrie:/home/cowrie/cowrie/src && \
cd /home/cowrie/cowrie && \ cd /home/cowrie/cowrie && \
/usr/bin/twistd --uid=2000 --gid=2000 -y cowrie.tac --pidfile cowrie.pid cowrie &" && \ /usr/bin/twistd --uid=2000 --gid=2000 -y cowrie.tac --pidfile cowrie.pid cowrie &" && \
sleep 10 && \ sleep 10 && \
#
# Clean up # Clean up
apk del --purge build-base \ apk del --purge build-base \
git \ git \
@ -56,13 +56,13 @@ RUN apk -U --no-cache add \
mpc1-dev \ mpc1-dev \
mpfr-dev \ mpfr-dev \
openssl-dev \ openssl-dev \
python-dev \ python3-dev \
py-mysqldb \ py3-mysqlclient && \
py-pip && \ rm -rf /root/* /tmp/* && \
rm -rf /root/* && \
rm -rf /var/cache/apk/* && \ rm -rf /var/cache/apk/* && \
rm -rf /home/cowrie/cowrie/cowrie.pid rm -rf /home/cowrie/cowrie/cowrie.pid && \
unset PYTHON_DIR
#
# Start cowrie # Start cowrie
ENV PYTHONPATH /home/cowrie/cowrie:/home/cowrie/cowrie/src ENV PYTHONPATH /home/cowrie/cowrie:/home/cowrie/cowrie/src
WORKDIR /home/cowrie/cowrie WORKDIR /home/cowrie/cowrie
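The version-agnostic path handling introduced in the Dockerfile above, isolated as a plain shell sketch: `python3 --version` is normalized to a string like `python3.7` or `python3.8`, which then replaces the previously hard-coded interpreter paths.
```
# derive the "pythonX.Y" directory name from the interpreter that is actually installed
PYTHON_DIR=$(python3 --version | tr '[A-Z]' '[a-z]' | tr -d ' ' | cut -d '.' -f 1,2)
setcap cap_net_bind_service=+ep /usr/bin/$PYTHON_DIR
chown cowrie:cowrie -R /usr/lib/$PYTHON_DIR/site-packages/twisted/plugins
```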

View File

@ -0,0 +1,70 @@
FROM alpine
# Include dist
ADD dist/ /root/dist/
# Get and install dependencies & packages
RUN apk -U --no-cache add \
bash \
build-base \
git \
gmp-dev \
libcap \
libffi-dev \
mpc1-dev \
mpfr-dev \
openssl \
openssl-dev \
python \
python-dev \
py-bcrypt \
py-mysqldb \
py-pip \
py-requests \
py-setuptools && \
# Setup user
addgroup -g 2000 cowrie && \
adduser -S -s /bin/ash -u 2000 -D -g 2000 cowrie && \
# Install cowrie
mkdir -p /home/cowrie && \
cd /home/cowrie && \
git clone --depth=1 https://github.com/micheloosterhof/cowrie -b 1.5.3 && \
cd cowrie && \
mkdir -p log && \
pip install --upgrade pip && \
pip install --upgrade -r requirements.txt && \
# Setup configs
setcap cap_net_bind_service=+ep /usr/bin/python2.7 && \
cp /root/dist/cowrie.cfg /home/cowrie/cowrie/cowrie.cfg && \
chown cowrie:cowrie -R /home/cowrie/* /usr/lib/python2.7/site-packages/twisted/plugins && \
# Start Cowrie once to prevent dropin.cache errors upon container start caused by read-only filesystem
su - cowrie -c "export PYTHONPATH=/home/cowrie/cowrie:/home/cowrie/cowrie/src && \
cd /home/cowrie/cowrie && \
/usr/bin/twistd --uid=2000 --gid=2000 -y cowrie.tac --pidfile cowrie.pid cowrie &" && \
sleep 10 && \
# Clean up
apk del --purge build-base \
git \
gmp-dev \
libcap \
libffi-dev \
mpc1-dev \
mpfr-dev \
openssl-dev \
python-dev \
py-mysqldb \
py-pip && \
rm -rf /root/* && \
rm -rf /var/cache/apk/* && \
rm -rf /home/cowrie/cowrie/cowrie.pid
# Start cowrie
ENV PYTHONPATH /home/cowrie/cowrie:/home/cowrie/cowrie/src
WORKDIR /home/cowrie/cowrie
USER cowrie:cowrie
CMD ["/usr/bin/twistd", "--nodaemon", "-y", "cowrie.tac", "--pidfile", "/tmp/cowrie/cowrie.pid", "cowrie"]

View File

@ -2,7 +2,6 @@
hostname = ubuntu hostname = ubuntu
log_path = log log_path = log
download_path = dl download_path = dl
report_public_ip = true
share_path= share/cowrie share_path= share/cowrie
state_path = /tmp/cowrie/data state_path = /tmp/cowrie/data
etc_path = etc etc_path = etc
@ -13,6 +12,8 @@ ttylog_path = log/tty
interactive_timeout = 180 interactive_timeout = 180
authentication_timeout = 120 authentication_timeout = 120
backend = shell backend = shell
timezone = UTC
report_public_ip = true
auth_class = AuthRandom auth_class = AuthRandom
auth_class_parameters = 2, 5, 10 auth_class_parameters = 2, 5, 10
reported_ssh_port = 22 reported_ssh_port = 22
@ -21,11 +22,13 @@ data_path = /tmp/cowrie/data
[shell] [shell]
filesystem = share/cowrie/fs.pickle filesystem = share/cowrie/fs.pickle
processes = share/cowrie/cmdoutput.json processes = share/cowrie/cmdoutput.json
arch = linux-x64-lsb #arch = linux-x64-lsb
arch = bsd-aarch64-lsb, bsd-aarch64-msb, bsd-bfin-msb, bsd-mips-lsb, bsd-mips-msb, bsd-mips64-lsb, bsd-mips64-msb, bsd-powepc-msb, bsd-powepc64-lsb, bsd-riscv64-lsb, bsd-sparc-msb, bsd-sparc64-msb, bsd-x32-lsb, bsd-x64-lsb, linux-aarch64-lsb, linux-aarch64-msb, linux-alpha-lsb, linux-am33-lsb, linux-arc-lsb, linux-arc-msb, linux-arm-lsb, linux-arm-msb, linux-avr32-lsb, linux-bfin-lsb, linux-c6x-lsb, linux-c6x-msb, linux-cris-lsb, linux-frv-msb, linux-h8300-msb, linux-hppa-msb, linux-hppa64-msb, linux-ia64-lsb, linux-m32r-msb, linux-m68k-msb, linux-microblaze-msb, linux-mips-lsb, linux-mips-msb, linux-mips64-lsb, linux-mips64-msb, linux-mn10300-lsb, linux-nios-lsb, linux-nios-msb, linux-powerpc-lsb, linux-powerpc-msb, linux-powerpc64-lsb, linux-powerpc64-msb, linux-riscv64-lsb, linux-s390x-msb, linux-sh-lsb, linux-sh-msb, linux-sparc-msb, linux-sparc64-msb, linux-tilegx-lsb, linux-tilegx-msb, linux-tilegx64-lsb, linux-tilegx64-msb, linux-x64-lsb, linux-x86-lsb, linux-xtensa-msb, osx-x32-lsb, osx-x64-lsb
kernel_version = 3.2.0-4-amd64 kernel_version = 3.2.0-4-amd64
kernel_build_string = #1 SMP Debian 3.2.68-1+deb7u1 kernel_build_string = #1 SMP Debian 3.2.68-1+deb7u1
hardware_platform = x86_64 hardware_platform = x86_64
operating_system = GNU/Linux operating_system = GNU/Linux
ssh_version = OpenSSH_7.9p1, OpenSSL 1.1.1a 20 Nov 2018
[ssh] [ssh]
enabled = true enabled = true
@ -33,12 +36,18 @@ rsa_public_key = etc/ssh_host_rsa_key.pub
rsa_private_key = etc/ssh_host_rsa_key rsa_private_key = etc/ssh_host_rsa_key
dsa_public_key = etc/ssh_host_dsa_key.pub dsa_public_key = etc/ssh_host_dsa_key.pub
dsa_private_key = etc/ssh_host_dsa_key dsa_private_key = etc/ssh_host_dsa_key
version = SSH-2.0-OpenSSH_7.2p2 Ubuntu-4ubuntu2.2 #version = SSH-2.0-OpenSSH_7.2p2 Ubuntu-4ubuntu2.2
version = SSH-2.0-OpenSSH_7.9p1
ciphers = aes128-ctr,aes192-ctr,aes256-ctr,aes256-cbc,aes192-cbc,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc
macs = hmac-sha2-512,hmac-sha2-384,hmac-sha2-256,hmac-sha1,hmac-md5
compression = zlib@openssh.com,zlib,none
listen_endpoints = tcp:22:interface=0.0.0.0 listen_endpoints = tcp:22:interface=0.0.0.0
sftp_enabled = true sftp_enabled = true
forwarding = true forwarding = true
forward_redirect = false forward_redirect = false
forward_tunnel = false forward_tunnel = false
auth_none_enabled = false
auth_keyboard_interactive_enabled = true
[telnet] [telnet]
enabled = true enabled = true
@ -55,3 +64,6 @@ enabled = false
logfile = log/cowrie-textlog.log logfile = log/cowrie-textlog.log
format = text format = text
[output_crashreporter]
enabled = false
debug = false
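The updated cowrie.cfg advertises SSH-2.0-OpenSSH_7.9p1 and listens on tcp:22. A quick sketch for checking the banner from the host, assuming the cowrie container publishes port 22 on localhost:

    nc -w 3 127.0.0.1 22 < /dev/null | head -1
    # expected first line: SSH-2.0-OpenSSH_7.9p1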

View File

@ -1,7 +1,8 @@
FROM alpine:3.8 FROM alpine:3.10
#
# Get and install dependencies & packages # Get and install dependencies & packages
RUN apk -U --no-cache add \ RUN sed -i 's/dl-cdn/dl-2/g' /etc/apk/repositories && \
apk -U --no-cache add \
curl \ curl \
git \ git \
npm \ npm \
@ -9,7 +10,7 @@ RUN apk -U --no-cache add \
npm install -g grunt-cli && \ npm install -g grunt-cli && \
npm install -g http-server && \ npm install -g http-server && \
npm install npm@latest -g && \ npm install npm@latest -g && \
#
# Install CyberChef # Install CyberChef
cd /root && \ cd /root && \
git clone https://github.com/gchq/cyberchef --depth=1 && \ git clone https://github.com/gchq/cyberchef --depth=1 && \
@ -20,16 +21,16 @@ RUN apk -U --no-cache add \
mkdir -p /opt/cyberchef && \ mkdir -p /opt/cyberchef && \
mv build/prod/* /opt/cyberchef && \ mv build/prod/* /opt/cyberchef && \
cd / && \ cd / && \
#
# Clean up # Clean up
apk del --purge git \ apk del --purge git \
npm && \ npm && \
rm -rf /root/* && \ rm -rf /root/* && \
rm -rf /var/cache/apk/* rm -rf /var/cache/apk/*
#
# Healthcheck # Healthcheck
HEALTHCHECK --retries=10 CMD curl -s -XGET 'http://127.0.0.1:8000' HEALTHCHECK --retries=10 CMD curl -s -XGET 'http://127.0.0.1:8000'
#
# Set user, workdir and start cyberchef # Set user, workdir and start cyberchef
USER nobody:nobody USER nobody:nobody
WORKDIR /opt/cyberchef WORKDIR /opt/cyberchef

View File

@ -1,9 +1,9 @@
FROM debian:stretch-slim FROM debian:stretch-slim
ENV DEBIAN_FRONTEND noninteractive ENV DEBIAN_FRONTEND noninteractive
#
# Include dist # Include dist
ADD dist/ /root/dist/ ADD dist/ /root/dist/
#
# Install dependencies and packages # Install dependencies and packages
RUN apt-get update -y && \ RUN apt-get update -y && \
apt-get dist-upgrade -y && \ apt-get dist-upgrade -y && \
@ -32,7 +32,7 @@ RUN apt-get update -y && \
python3-bson \ python3-bson \
python3-yaml \ python3-yaml \
ttf-liberation && \ ttf-liberation && \
#
# Get and install dionaea # Get and install dionaea
git clone --depth=1 https://github.com/dinotools/dionaea -b 0.8.0 /root/dionaea/ && \ git clone --depth=1 https://github.com/dinotools/dionaea -b 0.8.0 /root/dionaea/ && \
cd /root/dionaea && \ cd /root/dionaea && \
@ -41,17 +41,17 @@ RUN apt-get update -y && \
cmake -DCMAKE_INSTALL_PREFIX:PATH=/opt/dionaea .. && \ cmake -DCMAKE_INSTALL_PREFIX:PATH=/opt/dionaea .. && \
make && \ make && \
make install && \ make install && \
#
# Setup user and groups # Setup user and groups
addgroup --gid 2000 dionaea && \ addgroup --gid 2000 dionaea && \
adduser --system --no-create-home --shell /bin/bash --uid 2000 --disabled-password --disabled-login --gid 2000 dionaea && \ adduser --system --no-create-home --shell /bin/bash --uid 2000 --disabled-password --disabled-login --gid 2000 dionaea && \
setcap cap_net_bind_service=+ep /opt/dionaea/bin/dionaea && \ setcap cap_net_bind_service=+ep /opt/dionaea/bin/dionaea && \
#
# Supply configs and set permissions # Supply configs and set permissions
chown -R dionaea:dionaea /opt/dionaea/var && \ chown -R dionaea:dionaea /opt/dionaea/var && \
rm -rf /opt/dionaea/etc/dionaea/* && \ rm -rf /opt/dionaea/etc/dionaea/* && \
mv /root/dist/etc/* /opt/dionaea/etc/dionaea/ && \ mv /root/dist/etc/* /opt/dionaea/etc/dionaea/ && \
#
# Setup runtime and clean up # Setup runtime and clean up
apt-get purge -y \ apt-get purge -y \
build-essential \ build-essential \
@ -75,7 +75,7 @@ RUN apt-get update -y && \
python3-dev \ python3-dev \
python3-bson \ python3-bson \
python3-yaml && \ python3-yaml && \
#
apt-get install -y \ apt-get install -y \
ca-certificates \ ca-certificates \
python3 \ python3 \
@ -90,11 +90,11 @@ RUN apt-get update -y && \
libpcap0.8 \ libpcap0.8 \
libpython3.5 \ libpython3.5 \
libudns0 && \ libudns0 && \
#
apt-get autoremove --purge -y && \ apt-get autoremove --purge -y && \
apt-get clean && \ apt-get clean && \
rm -rf /root/* /var/lib/apt/lists/* /tmp/* /var/tmp/* rm -rf /root/* /var/lib/apt/lists/* /tmp/* /var/tmp/*
#
# Start dionaea # Start dionaea
USER dionaea:dionaea USER dionaea:dionaea
CMD ["/opt/dionaea/bin/dionaea", "-u", "dionaea", "-g", "dionaea", "-c", "/opt/dionaea/etc/dionaea/dionaea.cfg"] CMD ["/opt/dionaea/bin/dionaea", "-u", "dionaea", "-g", "dionaea", "-c", "/opt/dionaea/etc/dionaea/dionaea.cfg"]

View File

@ -11,9 +11,9 @@
os_type: 4 os_type: 4
# Additional config # Additional config
primary_domain: WORKGROUP primary_domain: DACH
oem_domain_name: WORKGROUP oem_domain_name: DACH
server_name: WIN_SRV server_name: ADFS
## Windows 7 ## ## Windows 7 ##
native_os: Windows 7 Professional 7600 native_os: Windows 7 Professional 7600

View File

@ -1,8 +1,8 @@
FROM alpine FROM alpine
#
# Include dist # Include dist
ADD dist/ /root/dist/ ADD dist/ /root/dist/
#
# Install packages # Install packages
RUN apk -U --no-cache add \ RUN apk -U --no-cache add \
git \ git \
@ -15,18 +15,18 @@ RUN apk -U --no-cache add \
mkdir -p /opt && \ mkdir -p /opt && \
cd /opt/ && \ cd /opt/ && \
git clone --depth=1 https://github.com/schmalle/ElasticpotPY.git && \ git clone --depth=1 https://github.com/schmalle/ElasticpotPY.git && \
#
# Setup user, groups and configs # Setup user, groups and configs
addgroup -g 2000 elasticpot && \ addgroup -g 2000 elasticpot && \
adduser -S -H -s /bin/ash -u 2000 -D -g 2000 elasticpot && \ adduser -S -H -s /bin/ash -u 2000 -D -g 2000 elasticpot && \
mv /root/dist/elasticpot.cfg /opt/ElasticpotPY/ && \ mv /root/dist/elasticpot.cfg /opt/ElasticpotPY/ && \
mkdir /opt/ElasticpotPY/log && \ mkdir /opt/ElasticpotPY/log && \
#
# Clean up # Clean up
apk del --purge git && \ apk del --purge git && \
rm -rf /root/* && \ rm -rf /root/* && \
rm -rf /var/cache/apk/* rm -rf /var/cache/apk/*
#
# Start elasticpot # Start elasticpot
STOPSIGNAL SIGINT STOPSIGNAL SIGINT
USER elasticpot:elasticpot USER elasticpot:elasticpot

View File

@ -1,8 +1,8 @@
FROM alpine FROM alpine
#
# Include dist # Include dist
ADD dist/ /root/dist/ ADD dist/ /root/dist/
#
# Setup env and apt # Setup env and apt
RUN sed -i 's/dl-cdn/dl-2/g' /etc/apk/repositories && \ RUN sed -i 's/dl-cdn/dl-2/g' /etc/apk/repositories && \
apk -U --no-cache add \ apk -U --no-cache add \
@ -11,33 +11,34 @@ RUN sed -i 's/dl-cdn/dl-2/g' /etc/apk/repositories && \
curl \ curl \
nss \ nss \
openjdk8-jre && \ openjdk8-jre && \
#
# Get and install packages # Get and install packages
cd /root/dist/ && \ cd /root/dist/ && \
mkdir -p /usr/share/elasticsearch/ && \ mkdir -p /usr/share/elasticsearch/ && \
aria2c -s 16 -x 16 https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-6.6.2.tar.gz && \ aria2c -s 16 -x 16 https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-6.8.6.tar.gz && \
tar xvfz elasticsearch-6.6.2.tar.gz --strip-components=1 -C /usr/share/elasticsearch/ && \ tar xvfz elasticsearch-6.8.6.tar.gz --strip-components=1 -C /usr/share/elasticsearch/ && \
#
# Add and move files # Add and move files
cd /root/dist/ && \ cd /root/dist/ && \
mkdir -p /usr/share/elasticsearch/config && \ mkdir -p /usr/share/elasticsearch/config && \
cp elasticsearch.yml /usr/share/elasticsearch/config/ && \ cp elasticsearch.yml /usr/share/elasticsearch/config/ && \
#
# Setup user, groups and configs # Setup user, groups and configs
addgroup -g 2000 elasticsearch && \ addgroup -g 2000 elasticsearch && \
adduser -S -H -s /bin/ash -u 2000 -D -g 2000 elasticsearch && \ adduser -S -H -s /bin/ash -u 2000 -D -g 2000 elasticsearch && \
chown -R elasticsearch:elasticsearch /usr/share/elasticsearch/ && \ chown -R elasticsearch:elasticsearch /usr/share/elasticsearch/ && \
rm -rf /usr/share/elasticsearch/modules/x-pack-ml && \ rm -rf /usr/share/elasticsearch/modules/x-pack-ml && \
#
# Clean up # Clean up
apk del --purge aria2 && \ apk del --purge aria2 && \
rm -rf /root/* && \ rm -rf /root/* && \
rm -rf /tmp/* && \ rm -rf /tmp/* && \
rm -rf /var/cache/apk/* rm -rf /var/cache/apk/*
#
# Healthcheck # Healthcheck
HEALTHCHECK --retries=10 CMD curl -s -XGET 'http://127.0.0.1:9200/_cat/health' HEALTHCHECK --retries=10 CMD curl -s -XGET 'http://127.0.0.1:9200/_cat/health'
#
# Start ELK # Start ELK
USER elasticsearch:elasticsearch USER elasticsearch:elasticsearch
ENV JAVA_HOME=/usr/lib/jvm/java-1.8-openjdk
CMD ["/usr/share/elasticsearch/bin/elasticsearch"] CMD ["/usr/share/elasticsearch/bin/elasticsearch"]

View File

@ -7,3 +7,5 @@ path:
http.host: 0.0.0.0 http.host: 0.0.0.0
http.cors.enabled: true http.cors.enabled: true
http.cors.allow-origin: "*" http.cors.allow-origin: "*"
discovery.zen.ping.unicast.hosts:
- localhost
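With discovery.zen.ping.unicast.hosts pinned to localhost the node forms a single-node cluster; the same endpoint the Dockerfile uses as a healthcheck can be queried by hand to confirm it, assuming Elasticsearch is reachable on 127.0.0.1:9200:

    curl -s -XGET 'http://127.0.0.1:9200/_cat/health'
    # a healthy single-node setup typically reports green or yellow status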

View File

@ -1,33 +1,33 @@
FROM alpine FROM alpine
#
# Setup env and apt # Setup env and apt
RUN apk -U add \ RUN apk -U add \
curl \ curl \
git \ git \
nodejs \ nodejs \
nodejs-npm && \ nodejs-npm && \
#
# Get and install packages # Get and install packages
mkdir -p /usr/src/app/ && \ mkdir -p /usr/src/app/ && \
cd /usr/src/app/ && \ cd /usr/src/app/ && \
git clone --depth=1 https://github.com/mobz/elasticsearch-head . && \ git clone --depth=1 https://github.com/mobz/elasticsearch-head . && \
npm install http-server && \ npm install http-server && \
sed -i "s#\"http\:\/\/localhost\:9200\"#window.location.protocol \+ \'\/\/\' \+ window.location.hostname \+ \'\:\' \+ window.location.port \+ \'\/es\/\'#" /usr/src/app/_site/app.js && \ sed -i "s#\"http\:\/\/localhost\:9200\"#window.location.protocol \+ \'\/\/\' \+ window.location.hostname \+ \'\:\' \+ window.location.port \+ \'\/es\/\'#" /usr/src/app/_site/app.js && \
#
# Setup user, groups and configs # Setup user, groups and configs
addgroup -g 2000 head && \ addgroup -g 2000 head && \
adduser -S -H -s /bin/ash -u 2000 -D -g 2000 head && \ adduser -S -H -s /bin/ash -u 2000 -D -g 2000 head && \
chown -R head:head /usr/src/app/ && \ chown -R head:head /usr/src/app/ && \
#
# Clean up # Clean up
apk del --purge git && \ apk del --purge git && \
rm -rf /root/* && \ rm -rf /root/* && \
rm -rf /tmp/* && \ rm -rf /tmp/* && \
rm -rf /var/cache/apk/* rm -rf /var/cache/apk/*
#
# Healthcheck # Healthcheck
HEALTHCHECK --retries=10 CMD curl -s -XGET 'http://127.0.0.1:9100' HEALTHCHECK --retries=10 CMD curl -s -XGET 'http://127.0.0.1:9100'
#
# Start elasticsearch-head # Start elasticsearch-head
USER head:head USER head:head
WORKDIR /usr/src/app WORKDIR /usr/src/app

View File

@ -12,5 +12,5 @@ services:
# condition: service_healthy # condition: service_healthy
ports: ports:
- "127.0.0.1:64302:9100" - "127.0.0.1:64302:9100"
image: "dtagdevsec/head:1811" image: "dtagdevsec/head:1903"
read_only: true read_only: true

View File

@ -1,24 +1,24 @@
FROM node:10.15.2-alpine FROM node:10.15.2-alpine
#
# Include dist # Include dist
ADD dist/ /root/dist/ ADD dist/ /root/dist/
#
# Setup env and apt # Setup env and apt
RUN sed -i 's/dl-cdn/dl-2/g' /etc/apk/repositories && \ RUN sed -i 's/dl-cdn/dl-2/g' /etc/apk/repositories && \
apk -U --no-cache add \ apk -U --no-cache add \
aria2 \ aria2 \
curl && \ curl && \
#
# Get and install packages # Get and install packages
cd /root/dist/ && \ cd /root/dist/ && \
mkdir -p /usr/share/kibana/ && \ mkdir -p /usr/share/kibana/ && \
aria2c -s 16 -x 16 https://artifacts.elastic.co/downloads/kibana/kibana-6.6.2-linux-x86_64.tar.gz && \ aria2c -s 16 -x 16 https://artifacts.elastic.co/downloads/kibana/kibana-6.8.6-linux-x86_64.tar.gz && \
tar xvfz kibana-6.6.2-linux-x86_64.tar.gz --strip-components=1 -C /usr/share/kibana/ && \ tar xvfz kibana-6.8.6-linux-x86_64.tar.gz --strip-components=1 -C /usr/share/kibana/ && \
#
# Kibana's bundled node does not work in alpine # Kibana's bundled node does not work in alpine
rm /usr/share/kibana/node/bin/node && \ rm /usr/share/kibana/node/bin/node && \
ln -s /usr/bin/node /usr/share/kibana/node/bin/node && \ ln -s /usr/bin/node /usr/share/kibana/node/bin/node && \
#
# Add and move files # Add and move files
cd /root/dist/ && \ cd /root/dist/ && \
cp kibana.svg /usr/share/kibana/src/ui/public/images/kibana.svg && \ cp kibana.svg /usr/share/kibana/src/ui/public/images/kibana.svg && \
@ -26,36 +26,37 @@ RUN sed -i 's/dl-cdn/dl-2/g' /etc/apk/repositories && \
cp elk.ico /usr/share/kibana/src/ui/public/assets/favicons/favicon.ico && \ cp elk.ico /usr/share/kibana/src/ui/public/assets/favicons/favicon.ico && \
cp elk.ico /usr/share/kibana/src/ui/public/assets/favicons/favicon-16x16.png && \ cp elk.ico /usr/share/kibana/src/ui/public/assets/favicons/favicon-16x16.png && \
cp elk.ico /usr/share/kibana/src/ui/public/assets/favicons/favicon-32x32.png && \ cp elk.ico /usr/share/kibana/src/ui/public/assets/favicons/favicon-32x32.png && \
#
# Setup user, groups and configs # Setup user, groups and configs
sed -i 's/#server.basePath: ""/server.basePath: "\/kibana"/' /usr/share/kibana/config/kibana.yml && \ sed -i 's/#server.basePath: ""/server.basePath: "\/kibana"/' /usr/share/kibana/config/kibana.yml && \
sed -i 's/#kibana.defaultAppId: "home"/kibana.defaultAppId: "dashboards"/' /usr/share/kibana/config/kibana.yml && \ sed -i 's/#kibana.defaultAppId: "home"/kibana.defaultAppId: "dashboards"/' /usr/share/kibana/config/kibana.yml && \
sed -i 's/#server.host: "localhost"/server.host: "0.0.0.0"/' /usr/share/kibana/config/kibana.yml && \ sed -i 's/#server.host: "localhost"/server.host: "0.0.0.0"/' /usr/share/kibana/config/kibana.yml && \
sed -i 's/#elasticsearch.hosts: \["http:\/\/localhost:9200"\]/elasticsearch.hosts: \["http:\/\/elasticsearch:9200"\]/' /usr/share/kibana/config/kibana.yml && \ sed -i 's/#elasticsearch.hosts: \["http:\/\/localhost:9200"\]/elasticsearch.hosts: \["http:\/\/elasticsearch:9200"\]/' /usr/share/kibana/config/kibana.yml && \
sed -i 's/#server.rewriteBasePath: false/server.rewriteBasePath: false/' /usr/share/kibana/config/kibana.yml && \ sed -i 's/#server.rewriteBasePath: false/server.rewriteBasePath: false/' /usr/share/kibana/config/kibana.yml && \
sed -i "s/#005571/#e20074/g" /usr/share/kibana/src/legacy/core_plugins/kibana/public/index.css && \ sed -i "s/#005571/#e20074/g" /usr/share/kibana/built_assets/css/plugins/kibana/index.css && \
sed -i "s/#007ba4/#9e0051/g" /usr/share/kibana/src/legacy/core_plugins/kibana/public/index.css && \ sed -i "s/#007ba4/#9e0051/g" /usr/share/kibana/built_assets/css/plugins/kibana/index.css && \
sed -i "s/#00465d/#4f0028/g" /usr/share/kibana/src/legacy/core_plugins/kibana/public/index.css && \ sed -i "s/#00465d/#4f0028/g" /usr/share/kibana/built_assets/css/plugins/kibana/index.css && \
echo "xpack.infra.enabled: false" >> /usr/share/kibana/config/kibana.yml && \ echo "xpack.infra.enabled: false" >> /usr/share/kibana/config/kibana.yml && \
echo "xpack.logstash.enabled: false" >> /usr/share/kibana/config/kibana.yml && \ echo "xpack.logstash.enabled: false" >> /usr/share/kibana/config/kibana.yml && \
echo "xpack.canvas.enabled: false" >> /usr/share/kibana/config/kibana.yml && \ echo "xpack.canvas.enabled: false" >> /usr/share/kibana/config/kibana.yml && \
echo "xpack.spaces.enabled: false" >> /usr/share/kibana/config/kibana.yml && \ echo "xpack.spaces.enabled: false" >> /usr/share/kibana/config/kibana.yml && \
echo "xpack.apm.enabled: false" >> /usr/share/kibana/config/kibana.yml && \ echo "xpack.apm.enabled: false" >> /usr/share/kibana/config/kibana.yml && \
echo "xpack.uptime.enabled: false" >> /usr/share/kibana/config/kibana.yml && \
rm -rf /usr/share/kibana/optimize/bundles/* && \ rm -rf /usr/share/kibana/optimize/bundles/* && \
/usr/share/kibana/bin/kibana --optimize && \ /usr/share/kibana/bin/kibana --optimize && \
addgroup -g 2000 kibana && \ addgroup -g 2000 kibana && \
adduser -S -H -s /bin/ash -u 2000 -D -g 2000 kibana && \ adduser -S -H -s /bin/ash -u 2000 -D -g 2000 kibana && \
chown -R kibana:kibana /usr/share/kibana/ && \ chown -R kibana:kibana /usr/share/kibana/ && \
#
# Clean up # Clean up
apk del --purge aria2 && \ apk del --purge aria2 && \
rm -rf /root/* && \ rm -rf /root/* && \
rm -rf /tmp/* && \ rm -rf /tmp/* && \
rm -rf /var/cache/apk/* rm -rf /var/cache/apk/*
#
# Healthcheck # Healthcheck
HEALTHCHECK --retries=10 CMD curl -s -XGET 'http://127.0.0.1:5601' HEALTHCHECK --retries=10 CMD curl -s -XGET 'http://127.0.0.1:5601'
#
# Start kibana # Start kibana
STOPSIGNAL SIGKILL STOPSIGNAL SIGKILL
USER kibana:kibana USER kibana:kibana

View File

@ -1,55 +1,56 @@
FROM alpine FROM alpine
#
# Include dist # Include dist
ADD dist/ /root/dist/ ADD dist/ /root/dist/
#
# Setup env and apt # Setup env and apt
RUN sed -i 's/dl-cdn/dl-2/g' /etc/apk/repositories && \ RUN sed -i 's/dl-cdn/dl-2/g' /etc/apk/repositories && \
apk -U --no-cache add \ apk -U --no-cache add \
aria2 \ aria2 \
bash \ bash \
curl \ bzip2 \
git \ curl \
libc6-compat \ libc6-compat \
libzmq \ libzmq \
nss \ nss \
openjdk8-jre && \ openjdk8-jre && \
#
# Get and install packages # Get and install packages
git clone --depth=1 https://github.com/dtag-dev-sec/listbot /etc/listbot && \ mkdir -p /etc/listbot && \
cd /etc/listbot && \
aria2c -s16 -x 16 https://listbot.sicherheitstacho.eu/cve.yaml.bz2 && \
aria2c -s16 -x 16 https://listbot.sicherheitstacho.eu/iprep.yaml.bz2 && \
bunzip2 *.bz2 && \
cd /root/dist/ && \ cd /root/dist/ && \
mkdir -p /usr/share/logstash/ && \ mkdir -p /usr/share/logstash/ && \
aria2c -s 16 -x 16 https://artifacts.elastic.co/downloads/logstash/logstash-6.6.2.tar.gz && \ aria2c -s 16 -x 16 https://artifacts.elastic.co/downloads/logstash/logstash-6.8.6.tar.gz && \
tar xvfz logstash-6.6.2.tar.gz --strip-components=1 -C /usr/share/logstash/ && \ tar xvfz logstash-6.8.6.tar.gz --strip-components=1 -C /usr/share/logstash/ && \
/usr/share/logstash/bin/logstash-plugin install logstash-filter-translate && \ /usr/share/logstash/bin/logstash-plugin install logstash-filter-translate && \
/usr/share/logstash/bin/logstash-plugin install logstash-output-syslog && \ /usr/share/logstash/bin/logstash-plugin install logstash-output-syslog && \
aria2c -s 16 -x 16 -o GeoLite2-ASN.tar.gz http://geolite.maxmind.com/download/geoip/database/GeoLite2-ASN.tar.gz && \ #
tar xvfz GeoLite2-ASN.tar.gz --strip-components=1 -C /usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-filter-geoip-5.0.3-java/vendor && \
# Add and move files # Add and move files
cd /root/dist/ && \ cd /root/dist/ && \
cp update.sh /usr/bin/ && \ cp update.sh /usr/bin/ && \
chmod u+x /usr/bin/update.sh && \ chmod u+x /usr/bin/update.sh && \
mkdir -p /etc/logstash/conf.d && \ mkdir -p /etc/logstash/conf.d && \
cp logstash.conf /etc/logstash/conf.d/ && \ cp logstash.conf /etc/logstash/conf.d/ && \
cp elasticsearch-template-es6x.json /usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-output-elasticsearch-9.3.2-java/lib/logstash/outputs/elasticsearch/ && \ cp elasticsearch-template-es6x.json /usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-9.4.0-java/lib/logstash/outputs/elasticsearch/ && \
#
# Setup user, groups and configs # Setup user, groups and configs
addgroup -g 2000 logstash && \ addgroup -g 2000 logstash && \
adduser -S -H -s /bin/bash -u 2000 -D -g 2000 logstash && \ adduser -S -H -s /bin/bash -u 2000 -D -g 2000 logstash && \
chown -R logstash:logstash /usr/share/logstash && \ chown -R logstash:logstash /usr/share/logstash && \
chown -R logstash:logstash /etc/listbot && \ chown -R logstash:logstash /etc/listbot && \
chmod 755 /usr/bin/update.sh && \ chmod 755 /usr/bin/update.sh && \
#
# Clean up # Clean up
apk del --purge aria2 && \
rm -rf /root/* && \ rm -rf /root/* && \
rm -rf /tmp/* && \ rm -rf /tmp/* && \
rm -rf /var/cache/apk/* rm -rf /var/cache/apk/*
#
# Healthcheck # Healthcheck
HEALTHCHECK --retries=10 CMD curl -s -XGET 'http://127.0.0.1:9600' HEALTHCHECK --retries=10 CMD curl -s -XGET 'http://127.0.0.1:9600'
#
# Start logstash # Start logstash
#USER logstash:logstash #USER logstash:logstash
CMD update.sh && exec /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/logstash.conf --config.reload.automatic --java-execution CMD update.sh && exec /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/logstash.conf --config.reload.automatic --java-execution

View File

@ -36,6 +36,13 @@ input {
type => "Ciscoasa" type => "Ciscoasa"
} }
# CitrixHoneypot
file {
path => ["/data/citrixhoneypot/logs/server.log"]
codec => json
type => "CitrixHoneypot"
}
# Conpot # Conpot
file { file {
path => ["/data/conpot/log/*.json"] path => ["/data/conpot/log/*.json"]
@ -94,6 +101,7 @@ input {
# Mailoney # Mailoney
file { file {
path => ["/data/mailoney/log/commands.log"] path => ["/data/mailoney/log/commands.log"]
codec => json
type => "Mailoney" type => "Mailoney"
} }
@ -206,6 +214,31 @@ filter {
} }
} }
# CitrixHoneypot
if [type] == "CitrixHoneypot" {
grok {
match => {
"message" => [ "\A\(%{IPV4:src_ip:string}:%{INT:src_port:integer}\): %{JAVAMETHOD:http.http_method:string}%{SPACE}%{CISCO_REASON:fileinfo.state:string}: %{UNIXPATH:fileinfo.filename:string}",
"\A\(%{IPV4:src_ip:string}:%{INT:src_port:integer}\): %{JAVAMETHOD:http.http_method:string}%{SPACE}%{CISCO_REASON:fileinfo.state:string}: %{GREEDYDATA:payload:string}",
"\A\(%{IPV4:src_ip:string}:%{INT:src_port:integer}\): %{S3_REQUEST_LINE:msg:string} %{CISCO_REASON:fileinfo.state:string}: %{GREEDYDATA:payload:string:string}",
"\A\(%{IPV4:src_ip:string}:%{INT:src_port:integer}\): %{GREEDYDATA:msg:string}" ]
}
}
date {
match => [ "asctime", "ISO8601" ]
remove_field => ["asctime"]
remove_field => ["message"]
}
mutate {
add_field => {
"dest_port" => "443"
}
rename => {
"levelname" => "level"
}
}
}
# Conpot # Conpot
if [type] == "ConPot" { if [type] == "ConPot" {
date { date {
@ -312,18 +345,14 @@ filter {
# Mailoney # Mailoney
if [type] == "Mailoney" { if [type] == "Mailoney" {
grok { date {
match => [ "message", "\A%{NAGIOSTIME}\[%{IPV4:src_ip}:%{INT:src_port:integer}] %{GREEDYDATA:smtp_input}" ] match => [ "timestamp", "ISO8601" ]
} }
mutate { mutate {
add_field => { add_field => {
"dest_port" => "25" "dest_port" => "25"
} }
} }
date {
match => [ "nagios_epoch", "UNIX" ]
remove_field => ["nagios_epoch"]
}
} }
# Medpot # Medpot
@ -384,12 +413,12 @@ if "_grokparsefailure" in [tags] { drop {} }
geoip { geoip {
cache_size => 10000 cache_size => 10000
source => "src_ip" source => "src_ip"
database => "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-filter-geoip-5.0.3-java/vendor/GeoLite2-City.mmdb" database => "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-filter-geoip-5.0.3-java/vendor/GeoLite2-City.mmdb"
} }
geoip { geoip {
cache_size => 10000 cache_size => 10000
source => "src_ip" source => "src_ip"
database => "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-filter-geoip-5.0.3-java/vendor/GeoLite2-ASN.mmdb" database => "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-filter-geoip-5.0.3-java/vendor/GeoLite2-ASN.mmdb"
} }
translate { translate {
refresh_interval => 86400 refresh_interval => 86400
@ -417,7 +446,7 @@ if "_grokparsefailure" in [tags] { drop {} }
} }
# Add T-Pot hostname and external IP # Add T-Pot hostname and external IP
if [type] == "Adbhoney" or [type] == "Ciscoasa" or [type] == "ConPot" or [type] == "Cowrie" or [type] == "Dionaea" or [type] == "ElasticPot" or [type] == "Fatt" or [type] == "Glutton" or [type] == "Honeytrap" or [type] == "Heralding" or [type] == "Honeypy" or [type] == "Mailoney" or [type] == "Medpot" or [type] == "P0f" or [type] == "Rdpy" or [type] == "Suricata" or [type] == "Tanner" { if [type] == "Adbhoney" or [type] == "Ciscoasa" or [type] == "CitrixHoneypot" or [type] == "ConPot" or [type] == "Cowrie" or [type] == "Dionaea" or [type] == "ElasticPot" or [type] == "Fatt" or [type] == "Glutton" or [type] == "Honeytrap" or [type] == "Heralding" or [type] == "Honeypy" or [type] == "Mailoney" or [type] == "Medpot" or [type] == "P0f" or [type] == "Rdpy" or [type] == "Suricata" or [type] == "Tanner" {
mutate { mutate {
add_field => { add_field => {
"t-pot_ip_ext" => "${MY_EXTIP}" "t-pot_ip_ext" => "${MY_EXTIP}"
@ -443,7 +472,7 @@ output {
# } # }
#} #}
# Debug output # Debug output
#if [type] == "XYZ" { #if [type] == "CitrixHoneypot" {
# stdout { # stdout {
# codec => rubydebug # codec => rubydebug
# } # }
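Before relying on the new CitrixHoneypot and Mailoney sections, the pipeline can be syntax-checked with Logstash's built-in test flag; this is only a sketch, run inside the logstash container with the paths used above:

    /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/logstash.conf --config.test_and_exit
    # to inspect parsed CitrixHoneypot events, uncomment the rubydebug stdout block
    # in the output section; --config.reload.automatic will pick up the change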

View File

@ -6,7 +6,31 @@ function fuCLEANUP {
} }
trap fuCLEANUP EXIT trap fuCLEANUP EXIT
# Download updated translation maps # Check internet availability
cd /etc/listbot function fuCHECKINET () {
git pull --all --depth=1 mySITES=$1
cd / error=0
for i in $mySITES;
do
curl --connect-timeout 5 -Is $i 2>&1 > /dev/null
if [ $? -ne 0 ];
then
let error+=1
fi;
done;
echo $error
}
# Check for connectivity and download latest translation maps
myCHECK=$(fuCHECKINET "listbot.sicherheitstacho.eu")
if [ "$myCHECK" == "0" ];
then
echo "Connection to Listbot looks good, now downloading latest translation maps."
cd /etc/listbot
aria2c -s16 -x 16 https://listbot.sicherheitstacho.eu/cve.yaml.bz2 && \
aria2c -s16 -x 16 https://listbot.sicherheitstacho.eu/iprep.yaml.bz2 && \
bunzip2 -f *.bz2
cd /
else
echo "Cannot reach Listbot, starting Logstash without latest translation maps."
fi
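fuCHECKINET returns the number of sites it could not reach, so "0" means every site answered. A hypothetical extension of the same check to a second host (github.com is only an illustrative example) looks like:

    myCHECK=$(fuCHECKINET "listbot.sicherheitstacho.eu github.com")
    if [ "$myCHECK" == "0" ];
      then
        echo "All sites reachable, downloading translation maps."
      else
        echo "$myCHECK site(s) unreachable, skipping download."
    fi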

View File

@ -1,8 +1,8 @@
FROM alpine FROM alpine
#
# Include dist # Include dist
ADD dist/ /root/dist/ ADD dist/ /root/dist/
#
# Install packages # Install packages
RUN apk -U --no-cache add \ RUN apk -U --no-cache add \
build-base \ build-base \
@ -10,45 +10,40 @@ RUN apk -U --no-cache add \
libffi-dev \ libffi-dev \
libssl1.1 \ libssl1.1 \
openssl-dev \ openssl-dev \
python-dev \ python3 \
py-cffi \ python3-dev \
py-ipaddress \ py3-cffi \
py-lxml \ py3-ipaddress \
py-mysqldb \ py3-lxml \
py-pip \ py3-mysqlclient \
py-pysqlite \ py3-requests \
py-requests \ py3-setuptools && \
py-setuptools && \ pip3 install --no-cache-dir -U pip && \
pip install --no-cache-dir -U pip && \ pip3 install --no-cache-dir configparser hpfeeds3 pyOpenSSL xmljson && \
pip install --no-cache-dir pyOpenSSL xmljson && \ #
# Setup ewsposter # Setup ewsposter
git clone --depth=1 https://github.com/rep/hpfeeds /opt/hpfeeds && \ git clone --depth=1 https://github.com/dtag-dev-sec/ewsposter /opt/ewsposter && \
cd /opt/hpfeeds && \
python setup.py install && \
git clone --depth=1 https://github.com/vorband/ewsposter /opt/ewsposter && \
mkdir -p /opt/ewsposter/spool /opt/ewsposter/log && \ mkdir -p /opt/ewsposter/spool /opt/ewsposter/log && \
#
# Setup user and groups # Setup user and groups
addgroup -g 2000 ews && \ addgroup -g 2000 ews && \
adduser -S -H -u 2000 -D -g 2000 ews && \ adduser -S -H -u 2000 -D -g 2000 ews && \
chown -R ews:ews /opt/ewsposter && \ chown -R ews:ews /opt/ewsposter && \
#
# Supply configs # Supply configs
mv /root/dist/ews.cfg /opt/ewsposter/ && \ mv /root/dist/ews.cfg /opt/ewsposter/ && \
mv /root/dist/*.pem /opt/ewsposter/ && \ mv /root/dist/*.pem /opt/ewsposter/ && \
#
# Clean up # Clean up
apk del build-base \ apk del build-base \
git \ git \
openssl-dev \ openssl-dev \
python-dev \ python3-dev \
py-pip \
py-setuptools && \ py-setuptools && \
rm -rf /root/* && \ rm -rf /root/* && \
rm -rf /var/cache/apk/* rm -rf /var/cache/apk/*
#
# Run ewsposter # Run ewsposter
STOPSIGNAL SIGINT STOPSIGNAL SIGINT
USER ews:ews USER ews:ews
CMD sleep 10 && exec /usr/bin/python -u /opt/ewsposter/ews.py -l 60 CMD sleep 10 && exec /usr/bin/python3 -u /opt/ewsposter/ews.py -l $(shuf -i 10-60 -n 1)
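The new CMD randomizes the ewsposter loop timer with shuf, so each container start polls at a different interval. A small sketch of the expression in isolation:

    shuf -i 10-60 -n 1   # prints one random integer between 10 and 60, e.g. 37
    # ews.py -l then receives that value as its loop interval instead of a fixed 60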

View File

@ -4,7 +4,8 @@ FROM alpine
#ADD dist/ /root/dist/ #ADD dist/ /root/dist/
# #
# Get and install dependencies & packages # Get and install dependencies & packages
RUN apk -U add \ RUN sed -i 's/dl-cdn/dl-2/g' /etc/apk/repositories && \
apk -U add \
git \ git \
py3-libxml2 \ py3-libxml2 \
py3-lxml \ py3-lxml \

View File

@ -1,8 +1,8 @@
FROM alpine FROM alpine
#
# Include dist # Include dist
ADD dist/ /root/dist/ ADD dist/ /root/dist/
#
# Setup apk # Setup apk
RUN apk -U --no-cache add \ RUN apk -U --no-cache add \
build-base \ build-base \
@ -13,32 +13,32 @@ RUN apk -U --no-cache add \
libnetfilter_queue-dev \ libnetfilter_queue-dev \
libcap \ libcap \
libpcap-dev && \ libpcap-dev && \
#
# Setup go, glutton # Setup go, glutton
export GOPATH=/opt/go/ && \ export GOPATH=/opt/go/ && \
go get -d github.com/mushorg/glutton && \ export GO111MODULE=on && \
cd /opt/go/src/github.com/satori/ && \ mkdir -p /opt/go && \
rm -rf go.uuid && \ cd /opt/go/ && \
git clone https://github.com/satori/go.uuid && \ git clone https://github.com/mushorg/glutton && \
cd go.uuid && \ cd /opt/go/glutton/ && \
git checkout v1.2.0 && \ mv /root/dist/system.go /opt/go/glutton/ && \
mv /root/dist/system.go /opt/go/src/github.com/mushorg/glutton/ && \ go mod download && \
cd /opt/go/src/github.com/mushorg/glutton/ && \
make build && \ make build && \
cd / && \ cd / && \
mkdir -p /opt/glutton && \ mkdir -p /opt/glutton && \
mv /opt/go/src/github.com/mushorg/glutton/bin /opt/glutton/ && \ mv /opt/go/glutton/bin /opt/glutton/ && \
mv /opt/go/src/github.com/mushorg/glutton/config /opt/glutton/ && \ mv /opt/go/glutton/config /opt/glutton/ && \
mv /opt/go/src/github.com/mushorg/glutton/rules /opt/glutton/ && \ mv /opt/go/glutton/rules /opt/glutton/ && \
ln -s /sbin/xtables-legacy-multi /sbin/xtables-multi && \
setcap cap_net_admin,cap_net_raw=+ep /opt/glutton/bin/server && \ setcap cap_net_admin,cap_net_raw=+ep /opt/glutton/bin/server && \
setcap cap_net_admin,cap_net_raw=+ep /sbin/xtables-multi && \ setcap cap_net_admin,cap_net_raw=+ep /sbin/xtables-legacy-multi && \
#
# Setup user, groups and configs # Setup user, groups and configs
addgroup -g 2000 glutton && \ addgroup -g 2000 glutton && \
adduser -S -s /bin/ash -u 2000 -D -g 2000 glutton && \ adduser -S -s /bin/ash -u 2000 -D -g 2000 glutton && \
mkdir -p /var/log/glutton && \ mkdir -p /var/log/glutton && \
mv /root/dist/rules.yaml /opt/glutton/rules/ && \ mv /root/dist/rules.yaml /opt/glutton/rules/ && \
#
# Clean up # Clean up
apk del --purge build-base \ apk del --purge build-base \
git \ git \
@ -47,8 +47,8 @@ RUN apk -U --no-cache add \
rm -rf /var/cache/apk/* \ rm -rf /var/cache/apk/* \
/opt/go \ /opt/go \
/root/dist /root/dist
#
# Start glutton # Start glutton
WORKDIR /opt/glutton WORKDIR /opt/glutton
USER glutton:glutton USER glutton:glutton
CMD exec bin/server -i $(/sbin/ip address | grep '^2: ' | awk '{ print $2 }' | tr -d [:punct:]) -l /var/log/glutton/glutton.log CMD exec bin/server -i $(/sbin/ip address | grep '^2: ' | awk '{ print $2 }' | tr -d [:punct:]) -l /var/log/glutton/glutton.log > /dev/null 2>&1
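The CMD derives the capture interface from iproute2 output at start time. A sketch of how the pipeline resolves, assuming the container's second listed interface is eth0:

    /sbin/ip address | grep '^2: ' | awk '{ print $2 }' | tr -d [:punct:]
    # "2: eth0: <...>" -> "eth0:" -> "eth0", which is passed to bin/server -i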

View File

@ -0,0 +1,54 @@
FROM alpine
#
# Include dist
ADD dist/ /root/dist/
#
# Setup apk
RUN apk -U --no-cache add \
build-base \
git \
go \
g++ \
iptables-dev \
libnetfilter_queue-dev \
libcap \
libpcap-dev && \
#
# Setup go, glutton
export GOPATH=/opt/go/ && \
go get -d github.com/mushorg/glutton && \
cd /opt/go/src/github.com/satori/ && \
rm -rf go.uuid && \
git clone https://github.com/satori/go.uuid && \
cd go.uuid && \
git checkout v1.2.0 && \
mv /root/dist/system.go /opt/go/src/github.com/mushorg/glutton/ && \
cd /opt/go/src/github.com/mushorg/glutton/ && \
make build && \
cd / && \
mkdir -p /opt/glutton && \
mv /opt/go/src/github.com/mushorg/glutton/bin /opt/glutton/ && \
mv /opt/go/src/github.com/mushorg/glutton/config /opt/glutton/ && \
mv /opt/go/src/github.com/mushorg/glutton/rules /opt/glutton/ && \
setcap cap_net_admin,cap_net_raw=+ep /opt/glutton/bin/server && \
setcap cap_net_admin,cap_net_raw=+ep /sbin/xtables-multi && \
#
# Setup user, groups and configs
addgroup -g 2000 glutton && \
adduser -S -s /bin/ash -u 2000 -D -g 2000 glutton && \
mkdir -p /var/log/glutton && \
mv /root/dist/rules.yaml /opt/glutton/rules/ && \
#
# Clean up
apk del --purge build-base \
git \
go \
g++ && \
rm -rf /var/cache/apk/* \
/opt/go \
/root/dist
#
# Start glutton
WORKDIR /opt/glutton
USER glutton:glutton
CMD exec bin/server -i $(/sbin/ip address | grep '^2: ' | awk '{ print $2 }' | tr -d [:punct:]) -l /var/log/glutton/glutton.log > /dev/null 2>&1

View File

@ -1,6 +1,7 @@
package glutton package glutton
import ( import (
"errors"
"fmt" "fmt"
"log" "log"
"os" "os"
@ -10,13 +11,19 @@ import (
"time" "time"
) )
func countOpenFiles() int { func countOpenFiles() (int, error) {
out, err := exec.Command("/bin/sh", "-c", fmt.Sprintf("lsof -p %v", os.Getpid())).Output() if runtime.GOOS == "linux" {
if err != nil { if isCommandAvailable("lsof") {
log.Fatal(err) out, err := exec.Command("/bin/sh", "-c", fmt.Sprintf("lsof -p %d", os.Getpid())).Output()
if err != nil {
log.Fatal(err)
}
lines := strings.Split(string(out), "\n")
return len(lines) - 1, nil
}
return 0, errors.New("lsof command does not exist. Kindly run sudo apt install lsof")
} }
lines := strings.Split(string(out), "\n") return 0, errors.New("Operating system type not supported for this command")
return len(lines) - 1
} }
func countRunningRoutines() int { func countRunningRoutines() int {
@ -36,3 +43,11 @@ func (g *Glutton) startMonitor(quit chan struct{}) {
} }
}() }()
} }
func isCommandAvailable(name string) bool {
cmd := exec.Command("/bin/sh", "-c", "command -v "+name)
if err := cmd.Run(); err != nil {
return false
}
return true
}
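countOpenFiles() now shells out only when lsof exists; isCommandAvailable() wraps a POSIX command -v probe. Roughly the same check done by hand from a shell, using the current shell's PID as a stand-in:

    if command -v lsof > /dev/null; then
      lsof -p $$ | wc -l   # line count minus the header is what countOpenFiles() returns
    else
      echo "lsof not installed"
    fi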

View File

@ -9,6 +9,7 @@ services:
restart: always restart: always
tmpfs: tmpfs:
- /var/lib/glutton:uid=2000,gid=2000 - /var/lib/glutton:uid=2000,gid=2000
- /run:uid=2000,gid=2000
network_mode: "host" network_mode: "host"
cap_add: cap_add:
- NET_ADMIN - NET_ADMIN

View File

@ -0,0 +1,78 @@
FROM alpine
#
# Include dist
ADD dist/ /root/dist/
#
# Get and install dependencies & packages
RUN sed -i 's/dl-cdn/dl-2/g' /etc/apk/repositories && \
apk -U --no-cache add \
git \
nginx \
nginx-mod-http-headers-more \
php7 \
php7-cgi \
php7-ctype \
php7-fileinfo \
php7-fpm \
php7-json \
php7-mbstring \
php7-openssl \
php7-pdo \
php7-pdo_pgsql \
php7-pdo_sqlite \
php7-session \
php7-sqlite3 \
php7-tokenizer \
php7-xml \
php7-zip && \
#
# Clone and setup Heimdall, Nginx
git clone https://github.com/linuxserver/heimdall && \
cp -R heimdall/. /var/lib/nginx/html && \
rm -rf heimdall && \
cd /var/lib/nginx/html && \
cp .env.example .env && \
php artisan key:generate && \
#
## Add previously configured content
mkdir -p /var/lib/nginx/html/storage/app/public/backgrounds/ && \
cp /root/dist/app/bg1.jpg /var/lib/nginx/html/public/img/bg1.jpg && \
cp /root/dist/app/t-pot.png /var/lib/nginx/html/public/img/heimdall-icon-small.png && \
cp /root/dist/app/app.sqlite /var/lib/nginx/html/database/app.sqlite && \
cp /root/dist/app/cyberchef.png /var/lib/nginx/html/storage/app/public/icons/ZotKKZA2QKplZhdoF3WLx4UdKKhLFamf3lSMcLkr.png && \
cp /root/dist/app/eshead.png /var/lib/nginx/html/storage/app/public/icons/77KqFv4YIshXUDLDoOvZ1NUbsKDtsMAjJvg4sYqN.png && \
cp /root/dist/app/tsec.png /var/lib/nginx/html/storage/app/public/icons/RHwXCfCeGNDdhYgzlShL9o4NBFL2LHZWajgyeL0a.png && \
cp /root/dist/app/spiderfoot.png /var/lib/nginx/html/storage/app/public/icons/s7uPe1frJqjv76oI6SNqNbWUsgU1GHYqRALMlwYb.png && \
cp /root/dist/html/*.html /var/lib/nginx/html/public/ && \
cp /root/dist/html/favicon.ico /var/lib/nginx/html/public/favicon-16x16.png && \
cp /root/dist/html/favicon.ico /var/lib/nginx/html/public/favicon-32x32.png && \
cp /root/dist/html/favicon.ico /var/lib/nginx/html/public/favicon-96x96.png && \
cp /root/dist/html/favicon.ico /var/lib/nginx/html/public/favicon.ico && \
#
## Change ownership, permissions
chown root:www-data -R /var/lib/nginx/html && \
chmod 775 -R /var/lib/nginx/html/storage && \
chmod 775 -R /var/lib/nginx/html/database && \
sed -i "s/user = nobody/user = nginx/g" /etc/php7/php-fpm.d/www.conf && \
sed -i "s/group = nobody/group = nginx/g" /etc/php7/php-fpm.d/www.conf && \
sed -i "s#;upload_tmp_dir =#upload_tmp_dir = /var/lib/nginx/tmp#g" /etc/php7/php.ini && \
sed -i "s/9000/64304/g" /etc/php7/php-fpm.d/www.conf && \
sed -i "s/APP_NAME=Heimdall/APP_NAME=T-Pot/g" /var/lib/nginx/html/.env && \
## Add Nginx / T-Pot specific configs
rm -rf /etc/nginx/conf.d/* /usr/share/nginx/html/* && \
cp /root/dist/conf/nginx.conf /etc/nginx/ && \
cp -R /root/dist/conf/ssl /etc/nginx/ && \
cp /root/dist/conf/tpotweb.conf /etc/nginx/conf.d/ && \
cp /root/dist/start.sh / && \
## Pack database for first time usage
cd /var/lib/nginx && \
tar cvfz first.tgz /var/lib/nginx/html/database /var/lib/nginx/html/storage && \
#
# Clean up
apk del --purge \
git && \
rm -rf /root/* && \
rm -rf /var/cache/apk/*
#
# Start nginx
CMD /start.sh && php-fpm7 && exec nginx -g 'daemon off;'

BIN docker/heimdall/dist/app/app.sqlite vendored Executable file (binary file not shown)
BIN docker/heimdall/dist/app/bg1.jpg vendored Normal file (binary image, 510 KiB, not shown)
BIN docker/heimdall/dist/app/cyberchef.png vendored Normal file (binary image, 5.8 KiB, not shown)
BIN docker/heimdall/dist/app/eshead.png vendored Normal file (binary image, 13 KiB, not shown)
BIN docker/heimdall/dist/app/spiderfoot.png vendored Normal file (binary image, 5.0 KiB, not shown)
BIN docker/heimdall/dist/app/t-pot.png vendored Normal file (binary image, 191 KiB, not shown)
BIN docker/heimdall/dist/app/tsec.png vendored Normal file (binary image, 9.0 KiB, not shown)

76
docker/heimdall/dist/conf/nginx.conf vendored Normal file
View File

@ -0,0 +1,76 @@
user nginx;
worker_processes auto;
pid /run/nginx.pid;
load_module /usr/lib/nginx/modules/ngx_http_headers_more_filter_module.so;
events {
worker_connections 768;
# multi_accept on;
}
http {
##
# Basic Settings
##
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
# server_tokens off;
# server_names_hash_bucket_size 64;
# server_name_in_redirect off;
include /etc/nginx/mime.types;
default_type application/octet-stream;
##
# SSL Settings
##
#ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # Dropping SSLv3, ref: POODLE
ssl_protocols TLSv1.2 TLSv1.3; # Dropping SSLv3, ref: POODLE
ssl_prefer_server_ciphers on;
##
# Logging Settings
##
log_format le_json '{ "timestamp": "$time_iso8601", '
'"src_ip": "$remote_addr", '
'"remote_user": "$remote_user", '
'"body_bytes_sent": "$body_bytes_sent", '
'"request_time": "$request_time", '
'"status": "$status", '
'"request": "$request", '
'"request_method": "$request_method", '
'"http_referrer": "$http_referer", '
'"http_user_agent": "$http_user_agent" }';
access_log /var/log/nginx/access.log le_json;
error_log /var/log/nginx/error.log;
##
# Gzip Settings
##
gzip on;
gzip_disable "msie6";
# gzip_vary on;
# gzip_proxied any;
# gzip_comp_level 6;
# gzip_buffers 16 8k;
# gzip_http_version 1.1;
# gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;
##
# Virtual Host Configs
##
include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;
}
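The le_json format writes one JSON object per request to access.log. A sketch for inspecting it; the sample output uses made-up values, but the field names match the format defined above:

    tail -1 /var/log/nginx/access.log
    # {"timestamp": "2020-01-24T15:00:00+00:00", "src_ip": "203.0.113.10", "remote_user": "-",
    #  "body_bytes_sent": "612", "request_time": "0.004", "status": "200", "request": "GET / HTTP/2.0",
    #  "request_method": "GET", "http_referrer": "-", "http_user_agent": "curl/7.64.0" }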

View File

@ -0,0 +1,13 @@
-----BEGIN DH PARAMETERS-----
MIICCAKCAgEAiHmfakVLOStSULBdaTbZY/zeFyEeQ19GY9Z5CJg06dIIgIzhxk9L
4xsQdQk8giKOjP6SfX0ZgF5CYaurQ3ljYlP0UlAQQo9+fEErbqj3hCzAxtIpd6Yj
SV6zFdnSjwxWuKAPPywiQNljnHH+Y1KBdbl5VQ9gC3ehtaLo1A4y8q96f6fC5rGU
nfgw4lTxLvPD7NwaOdFTCyK8tTxvUGNJIvf7805IxZ0BvAiBuVaXStaMcqf5BHLP
fYpvIiVaCrtto4elu18nL0tf2CN5n9ai4hlr0nPmNrE/Zrrur78Re5F4Ien9kr4d
xabXvVJJQa9j2NdQO7vk7Cz/dAIiqt/1XKFhll4TTYBqrFVXIwF+FNx636zyOjcO
nlZk/V+IL/UTPnZOv2PGt5+WetvJJubi6B9XgOgVLduI07woAp5qnRJJt6fJW1aA
M86By6WLy5P31Py6eFj8nYgj1V703XgQ5lESKYpeVgqA0bh7daNzOCoGQvvUKlTP
RTu6fs7clw5ta4yYUyvuIKTngH5yGBNdTuP0GWo6Y+Dy1BctVwl2xSw+FhYeuIf/
EB2A3129H59HhbWyNH337+1dfntHfQRXBsT0YSyDxPurI5/FNGcmw+GZEYk4BB8j
g7TwH3GBjbKnjnr7SnhanqmWgybgQw6oR9gDC399eR4LiOk9sbxpX1MCAQI=
-----END DH PARAMETERS-----

View File

@ -0,0 +1,12 @@
#!/bin/bash
# Got root?
myWHOAMI=$(whoami)
if [ "$myWHOAMI" != "root" ]
then
echo "Need to run as root ..."
exit
fi
openssl req -nodes -x509 -sha512 -newkey rsa:8192 -keyout "nginx.key" -out "nginx.crt" -days 3650

View File

@ -0,0 +1,16 @@
#!/bin/bash
# Got root?
myWHOAMI=$(whoami)
if [ "$myWHOAMI" != "root" ]
then
echo "Need to run as root ..."
exit
fi
if [ "$1" = "2048" ] || [ "$1" = "4096" ] || [ "$1" = "8192" ]
then
openssl dhparam -outform PEM -out dhparam$1.pem $1
else
echo "Usage: ./gen-dhparam [2048, 4096, 8192]..."
fi
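The parameter sizes accepted here line up with the dhparam4096.pem that the nginx configuration further below references via ssl_dhparam. A usage sketch, run as root (the 4096-bit generation can take several minutes):

    ./gen-dhparam 4096
    # writes dhparam4096.pem into the current directory for use as ssl_dhparam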

152
docker/heimdall/dist/conf/tpotweb.conf vendored Normal file
View File

@ -0,0 +1,152 @@
############################################
### NGINX T-Pot configuration file by mo ###
############################################
server {
#########################
### Basic server settings
#########################
listen 64297 ssl http2;
#index tpotweb.html;
index index.php;
ssl_protocols TLSv1.3;
server_name example.com;
error_page 300 301 302 400 401 402 403 404 500 501 502 503 504 /error.html;
root /var/lib/nginx/html/public;
##############################################
### Remove version number add different header
##############################################
server_tokens off;
more_set_headers 'Server: apache';
##############################################
### SSL settings and Cipher Suites
##############################################
ssl_certificate /etc/nginx/cert/nginx.crt;
ssl_certificate_key /etc/nginx/cert/nginx.key;
ssl_ciphers 'EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH:!DHE:!SHA:!SHA256';
ssl_ecdh_curve secp384r1;
ssl_dhparam /etc/nginx/ssl/dhparam4096.pem;
ssl_prefer_server_ciphers on;
ssl_session_cache shared:SSL:10m;
####################################
### OWASP recommendations / settings
####################################
### Size Limits & Buffer Overflows
### the size may be configured based on the needs.
client_body_buffer_size 128k;
client_header_buffer_size 1k;
client_max_body_size 2M;
large_client_header_buffers 2 1k;
### Mitigate Slow HTTP DoS Attack
### Timeouts definition ##
client_body_timeout 10;
client_header_timeout 10;
keepalive_timeout 5 5;
send_timeout 10;
### X-Frame-Options is to prevent from clickJacking attack
add_header X-Frame-Options SAMEORIGIN;
### disable content-type sniffing on some browsers.
add_header X-Content-Type-Options nosniff;
### This header enables the Cross-site scripting (XSS) filter
add_header X-XSS-Protection "1; mode=block";
### This will enforce HTTP browsing into HTTPS and avoid ssl stripping attack
add_header Strict-Transport-Security "max-age=31536000; includeSubdomains;";
##################################
### Restrict access and basic auth
##################################
# satisfy all;
satisfy any;
# allow 10.0.0.0/8;
# allow 172.16.0.0/12;
# allow 192.168.0.0/16;
allow 127.0.0.1;
allow ::1;
deny all;
auth_basic "closed site";
auth_basic_user_file /etc/nginx/nginxpasswd;
############
### Heimdall
############
location / {
auth_basic "closed site";
auth_basic_user_file /etc/nginx/nginxpasswd;
try_files $uri $uri/ /index.php?$query_string;
}
location ~ \.php$ {
fastcgi_split_path_info ^(.+\.php)(/.+)$;
fastcgi_pass 127.0.0.1:64304;
fastcgi_index index.php;
include fastcgi_params;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
}
#################
### Proxied sites
#################
### Kibana
location /kibana/ {
proxy_pass http://127.0.0.1:64296;
rewrite /kibana/(.*)$ /$1 break;
}
### ES
location /es/ {
proxy_pass http://127.0.0.1:64298/;
rewrite /es/(.*)$ /$1 break;
}
### head standalone
location /myhead/ {
proxy_pass http://127.0.0.1:64302/;
rewrite /myhead/(.*)$ /$1 break;
}
### CyberChef
location /cyberchef {
proxy_pass http://127.0.0.1:64299;
rewrite ^/cyberchef(.*)$ /$1 break;
}
### spiderfoot
location /spiderfoot {
proxy_pass http://127.0.0.1:64303;
}
location /static {
proxy_pass http://127.0.0.1:64303/spiderfoot/static;
}
location /scanviz {
proxy_pass http://127.0.0.1:64303/spiderfoot/scanviz;
}
location /scandelete {
proxy_pass http://127.0.0.1:64303/spiderfoot/scandelete;
}
}

11
docker/heimdall/dist/html/cockpit.html vendored Normal file
View File

@ -0,0 +1,11 @@
<!DOCTYPE HTML>
<html lang="en-US">
<head>
<meta charset="UTF-8">
<meta http-equiv="refresh">
<script type="text/javascript">
window.location.href = window.location.protocol + '//' + window.location.hostname + ':64294'
</script>
<title>Redirect to Cockpit</title>
</head>
</html>

0
docker/heimdall/dist/html/error.html vendored Normal file
View File

BIN docker/heimdall/dist/html/favicon.ico vendored Normal file (binary image, 16 KiB, not shown)

10
docker/heimdall/dist/start.sh vendored Executable file
View File

@ -0,0 +1,10 @@
#!/bin/ash
if [ "$(ls /var/lib/nginx/html/database)" = "" ] && [ "$HEIMDALL_PERSIST" = "YES" ];
then
tar xvfz /var/lib/nginx/first.tgz -C /
fi
if [ "$HEIMDALL_PERSIST" = "YES" ];
then
chmod 770 -R /var/lib/nginx/html/database /var/lib/nginx/html/storage
chown root:www-data -R /var/lib/nginx/html/database /var/lib/nginx/html/storage
fi

View File

@ -0,0 +1,37 @@
version: '2.3'
services:
# nginx service
nginx:
build: .
container_name: nginx
restart: always
environment:
### If set to YES all changes within Heimdall will remain for the next start
### Make sure to uncomment the corresponding volume statements below, or the setting will prevent a successful start of T-Pot.
- HEIMDALL_PERSIST=NO
tmpfs:
- /var/tmp/nginx/client_body
- /var/tmp/nginx/proxy
- /var/tmp/nginx/fastcgi
- /var/tmp/nginx/uwsgi
- /var/tmp/nginx/scgi
- /run
- /var/log/php7/
- /var/lib/nginx/tmp:uid=100,gid=82
- /var/lib/nginx/html/storage/logs:uid=100,gid=82
- /var/lib/nginx/html/storage/framework/views:uid=100,gid=82
network_mode: "host"
ports:
- "64297:64297"
- "127.0.0.1:64304:64304"
image: "dtagdevsec/nginx:1903"
read_only: true
volumes:
- /data/nginx/cert/:/etc/nginx/cert/:ro
- /data/nginx/conf/nginxpasswd:/etc/nginx/nginxpasswd:ro
- /data/nginx/log/:/var/log/nginx/
### Enable the following volumes if you set HEIMDALL_PERSIST=YES
# - /data/nginx/heimdall/database:/var/lib/nginx/html/database
# - /data/nginx/heimdall/storage:/var/lib/nginx/html/storage
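tpotweb.conf protects the site with auth_basic_user_file /etc/nginx/nginxpasswd, which this compose file mounts read-only from /data/nginx/conf/nginxpasswd. One way to create such a file by hand is htpasswd from apache2-utils; the T-Pot installer may handle this differently, and "webuser" is only a placeholder:

    htpasswd -c /data/nginx/conf/nginxpasswd webuser
    # prompts for a password and writes a hash that nginx's auth_basic can verify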

View File

@ -1,10 +1,11 @@
FROM alpine FROM alpine:3.10
#
# Include dist # Include dist
ADD dist/ /root/dist/ ADD dist/ /root/dist/
#
# Install packages # Install packages
RUN apk -U --no-cache add \ RUN sed -i 's/dl-cdn/dl-2/g' /etc/apk/repositories && \
apk -U --no-cache add \
build-base \ build-base \
git \ git \
libcap \ libcap \
@ -16,23 +17,24 @@ RUN apk -U --no-cache add \
python3-dev \ python3-dev \
py-virtualenv && \ py-virtualenv && \
pip3 install --no-cache-dir --upgrade pip && \ pip3 install --no-cache-dir --upgrade pip && \
#
# Setup heralding # Setup heralding
mkdir -p /opt && \ mkdir -p /opt && \
cd /opt/ && \ cd /opt/ && \
git clone --depth=1 https://github.com/johnnykv/heralding && \ git clone --depth=1 https://github.com/johnnykv/heralding && \
cd heralding && \ cd heralding && \
sed -i 's/asyncssh/asyncssh==1.18.0/' requirements.txt && \
pip3 install --no-cache-dir -r requirements.txt && \ pip3 install --no-cache-dir -r requirements.txt && \
pip3 install --no-cache-dir . && \ pip3 install --no-cache-dir . && \
#
# Setup user, groups and configs # Setup user, groups and configs
addgroup -g 2000 heralding && \ addgroup -g 2000 heralding && \
adduser -S -H -s /bin/ash -u 2000 -D -g 2000 heralding && \ adduser -S -H -s /bin/ash -u 2000 -D -g 2000 heralding && \
mkdir -p /var/log/heralding/ /etc/heralding && \ mkdir -p /var/log/heralding/ /etc/heralding && \
mv /root/dist/heralding.yml /etc/heralding/ && \ mv /root/dist/heralding.yml /etc/heralding/ && \
setcap cap_net_bind_service=+ep /usr/bin/python3.6 && \ setcap cap_net_bind_service=+ep /usr/bin/python3.7 && \
chown -R heralding:heralding /var/log/heralding && \ chown -R heralding:heralding /var/log/heralding && \
#
# Clean up # Clean up
apk del --purge \ apk del --purge \
build-base \ build-base \
@ -46,8 +48,8 @@ RUN apk -U --no-cache add \
rm -rf /root/* \ rm -rf /root/* \
/var/cache/apk/* \ /var/cache/apk/* \
/opt/heralding /opt/heralding
#
# Start elasticpot # Start Heralding
STOPSIGNAL SIGINT STOPSIGNAL SIGINT
WORKDIR /tmp/heralding/ WORKDIR /tmp/heralding/
USER heralding:heralding USER heralding:heralding

View File

@ -0,0 +1,54 @@
FROM alpine
# Include dist
ADD dist/ /root/dist/
# Install packages
RUN apk -U --no-cache add \
build-base \
git \
libcap \
libffi-dev \
openssl-dev \
libzmq \
postgresql-dev \
python3 \
python3-dev \
py-virtualenv && \
pip3 install --no-cache-dir --upgrade pip && \
# Setup heralding
mkdir -p /opt && \
cd /opt/ && \
git clone --depth=1 https://github.com/johnnykv/heralding && \
cd heralding && \
pip3 install --no-cache-dir -r requirements.txt && \
pip3 install --no-cache-dir . && \
# Setup user, groups and configs
addgroup -g 2000 heralding && \
adduser -S -H -s /bin/ash -u 2000 -D -g 2000 heralding && \
mkdir -p /var/log/heralding/ /etc/heralding && \
mv /root/dist/heralding.yml /etc/heralding/ && \
setcap cap_net_bind_service=+ep /usr/bin/python3.6 && \
chown -R heralding:heralding /var/log/heralding && \
# Clean up
apk del --purge \
build-base \
git \
libcap \
libffi-dev \
libressl-dev \
postgresql-dev \
python3-dev \
py-virtualenv && \
rm -rf /root/* \
/var/cache/apk/* \
/opt/heralding
# Start elasticpot
STOPSIGNAL SIGINT
WORKDIR /tmp/heralding/
USER heralding:heralding
CMD exec heralding -c /etc/heralding/heralding.yml -l /var/log/heralding/heralding.log

View File

@ -8,7 +8,14 @@ bind_host: 0.0.0.0
activity_logging: activity_logging:
file: file:
enabled: true enabled: true
session_log_file: "/var/log/heralding/session.csv" # Session details common for all protocols (capabilities) in CSV format,
# written to file when the session ends. Set to "" to disable.
session_csv_log_file: "/var/log/heralding/session.csv"
# Complete session details (including protocol specific data) in JSONL format,
# written to file when the session ends. Set to "" to disable
session_json_log_file: "/var/log/heralding/log_session.json"
# Writes each authentication attempt to file, including credentials,
# set to "" to disable
authentication_log_file: "/var/log/heralding/auth.csv" authentication_log_file: "/var/log/heralding/auth.csv"
syslog: syslog:
@ -27,6 +34,10 @@ activity_logging:
enabled: false enabled: false
port: 23400 port: 23400
hash_cracker:
enabled: true
wordlist_file: 'wordlist.txt'
# protocols to enable # protocols to enable
capabilities: capabilities:
ftp: ftp:
@ -155,3 +166,27 @@ capabilities:
enabled: true enabled: true
port: 1080 port: 1080
timeout: 30 timeout: 30
mysql:
enabled: true
port: 3306
timeout: 30
rdp:
enabled: true
port: 3389
timeout: 30
protocol_specific_data:
banner: ""
# if a .pem file is not found in work dir, a new pem file will be created
# using these values
cert:
common_name: "*"
country: "US"
state: None
locality: None
organization: None
organizational_unit: None
# how many days should the certificate be valid for
valid_days: 365
serial_number: 0

View File

@ -26,6 +26,8 @@ services:
- "993:993" - "993:993"
- "995:995" - "995:995"
- "1080:1080" - "1080:1080"
- "3306:3306"
- "3389:3389"
- "5432:5432" - "5432:5432"
- "5900:5900" - "5900:5900"
image: "dtagdevsec/heralding:1903" image: "dtagdevsec/heralding:1903"

View File

@ -1,8 +1,8 @@
FROM alpine FROM alpine
#
# Include dist # Include dist
ADD dist/ /root/dist/ ADD dist/ /root/dist/
#
# Install packages # Install packages
RUN sed -i 's/dl-cdn/dl-2/g' /etc/apk/repositories && \ RUN sed -i 's/dl-cdn/dl-2/g' /etc/apk/repositories && \
apk -U --no-cache add \ apk -U --no-cache add \
@ -12,11 +12,10 @@ RUN sed -i 's/dl-cdn/dl-2/g' /etc/apk/repositories && \
python2 \ python2 \
python2-dev \ python2-dev \
py2-pip && \ py2-pip && \
#
# Upgrade pip, install virtualenv # Upgrade pip, install virtualenv
pip install --no-cache-dir --upgrade pip && \
pip install --no-cache-dir virtualenv && \ pip install --no-cache-dir virtualenv && \
#
# Clone honeypy from git # Clone honeypy from git
git clone --depth=1 https://github.com/foospidy/HoneyPy /opt/honeypy && \ git clone --depth=1 https://github.com/foospidy/HoneyPy /opt/honeypy && \
cd /opt/honeypy && \ cd /opt/honeypy && \
@ -33,13 +32,13 @@ RUN sed -i 's/dl-cdn/dl-2/g' /etc/apk/repositories && \
cp /root/dist/services.cfg /opt/honeypy/etc && \ cp /root/dist/services.cfg /opt/honeypy/etc && \
cp /root/dist/honeypy.cfg /opt/honeypy/etc && \ cp /root/dist/honeypy.cfg /opt/honeypy/etc && \
/opt/honeypy/env/bin/pip install -r /opt/honeypy/requirements.txt && \ /opt/honeypy/env/bin/pip install -r /opt/honeypy/requirements.txt && \
#
# Setup user, groups and configs # Setup user, groups and configs
addgroup -g 2000 honeypy && \ addgroup -g 2000 honeypy && \
adduser -S -H -s /bin/ash -u 2000 -D -g 2000 honeypy && \ adduser -S -H -s /bin/ash -u 2000 -D -g 2000 honeypy && \
chown -R honeypy:honeypy /opt/honeypy && \ chown -R honeypy:honeypy /opt/honeypy && \
setcap cap_net_bind_service=+ep /opt/honeypy/env/bin/python2 && \ setcap cap_net_bind_service=+ep /opt/honeypy/env/bin/python2 && \
#
# Clean up # Clean up
apk del --purge build-base \ apk del --purge build-base \
git \ git \
@ -47,7 +46,7 @@ RUN sed -i 's/dl-cdn/dl-2/g' /etc/apk/repositories && \
py2-pip && \ py2-pip && \
rm -rf /root/* && \ rm -rf /root/* && \
rm -rf /var/cache/apk/* rm -rf /var/cache/apk/*
#
# Set workdir and start honeypy # Set workdir and start honeypy
USER honeypy:honeypy USER honeypy:honeypy
WORKDIR /opt/honeypy WORKDIR /opt/honeypy

View File

@ -1,13 +1,13 @@
FROM debian:stretch-slim FROM debian:stretch-slim
ENV DEBIAN_FRONTEND noninteractive ENV DEBIAN_FRONTEND noninteractive
#
# Include dist # Include dist
ADD dist/ /root/dist/ ADD dist/ /root/dist/
#
# Setup apt # Setup apt
RUN apt-get update -y && \ RUN apt-get update -y && \
apt-get dist-upgrade -y && \ apt-get dist-upgrade -y && \
#
# Install packages # Install packages
apt-get install -y autoconf \ apt-get install -y autoconf \
build-essential \ build-essential \
@ -24,7 +24,7 @@ RUN apt-get update -y && \
netbase \ netbase \
procps \ procps \
wget && \ wget && \
#
# Install honeytrap from source # Install honeytrap from source
cd /root/ && \ cd /root/ && \
git clone https://github.com/armedpot/honeytrap && \ git clone https://github.com/armedpot/honeytrap && \
@ -38,14 +38,14 @@ RUN apt-get update -y && \
make && \ make && \
make install && \ make install && \
make clean && \ make clean && \
#
# Setup user, groups and configs # Setup user, groups and configs
addgroup --gid 2000 honeytrap && \ addgroup --gid 2000 honeytrap && \
adduser --system --no-create-home --shell /bin/bash --uid 2000 --disabled-password --disabled-login --gid 2000 honeytrap && \ adduser --system --no-create-home --shell /bin/bash --uid 2000 --disabled-password --disabled-login --gid 2000 honeytrap && \
mkdir -p /opt/honeytrap/etc/honeytrap/ /opt/honeytrap/var/attacks /opt/honeytrap/var/downloads /opt/honeytrap/var/log && \ mkdir -p /opt/honeytrap/etc/honeytrap/ /opt/honeytrap/var/attacks /opt/honeytrap/var/downloads /opt/honeytrap/var/log && \
mv /root/dist/honeytrap.conf /opt/honeytrap/etc/honeytrap/ && \ mv /root/dist/honeytrap.conf /opt/honeytrap/etc/honeytrap/ && \
setcap cap_net_admin=+ep /opt/honeytrap/sbin/honeytrap && \ setcap cap_net_admin=+ep /opt/honeytrap/sbin/honeytrap && \
#
# Clean up # Clean up
rm -rf /root/* && \ rm -rf /root/* && \
apt-get purge -y autoconf \ apt-get purge -y autoconf \
@ -55,7 +55,7 @@ RUN apt-get update -y && \
libpq-dev && \ libpq-dev && \
apt-get autoremove -y --purge && \ apt-get autoremove -y --purge && \
apt-get clean && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/* apt-get clean && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
#
# Start honeytrap # Start honeytrap
USER honeytrap:honeytrap USER honeytrap:honeytrap
CMD ["/opt/honeytrap/sbin/honeytrap", "-D", "-C", "/opt/honeytrap/etc/honeytrap/honeytrap.conf", "-P", "/tmp/honeytrap/honeytrap.pid", "-t", "5", "-u", "honeytrap", "-g", "honeytrap"] CMD ["/opt/honeytrap/sbin/honeytrap", "-D", "-C", "/opt/honeytrap/etc/honeytrap/honeytrap.conf", "-P", "/tmp/honeytrap/honeytrap.pid", "-t", "5", "-u", "honeytrap", "-g", "honeytrap"]

View File

@ -1,5 +1,5 @@
FROM alpine FROM alpine
#
# Install packages # Install packages
RUN apk -U --no-cache add \ RUN apk -U --no-cache add \
autoconf \ autoconf \
@ -11,7 +11,7 @@ RUN apk -U --no-cache add \
py-pip \ py-pip \
python \ python \
python-dev && \ python-dev && \
#
# Install libemu # Install libemu
git clone --depth=1 https://github.com/buffer/libemu /root/libemu/ && \ git clone --depth=1 https://github.com/buffer/libemu /root/libemu/ && \
cd /root/libemu/ && \ cd /root/libemu/ && \
@ -19,22 +19,22 @@ RUN apk -U --no-cache add \
./configure && \ ./configure && \
make && \ make && \
make install && \ make install && \
#
# Install libemu python wrapper # Install libemu python wrapper
pip install --no-cache-dir --upgrade pip && \ pip install --no-cache-dir --upgrade pip && \
pip install --no-cache-dir \ pip install --no-cache-dir \
hpfeeds \ hpfeeds \
pylibemu && \ pylibemu && \
#
# Install mailoney from git # Install mailoney from git
git clone --depth=1 https://github.com/awhitehatter/mailoney /opt/mailoney && \ git clone --depth=1 https://github.com/t3chn0m4g3/mailoney /opt/mailoney && \
#
# Setup user, groups and configs # Setup user, groups and configs
addgroup -g 2000 mailoney && \ addgroup -g 2000 mailoney && \
adduser -S -H -s /bin/ash -u 2000 -D -g 2000 mailoney && \ adduser -S -H -s /bin/ash -u 2000 -D -g 2000 mailoney && \
chown -R mailoney:mailoney /opt/mailoney && \ chown -R mailoney:mailoney /opt/mailoney && \
setcap cap_net_bind_service=+ep /usr/bin/python2.7 && \ setcap cap_net_bind_service=+ep /usr/bin/python2.7 && \
#
# Clean up # Clean up
apk del --purge autoconf \ apk del --purge autoconf \
automake \ automake \
@ -44,7 +44,7 @@ RUN apk -U --no-cache add \
python-dev && \ python-dev && \
rm -rf /root/* && \ rm -rf /root/* && \
rm -rf /var/cache/apk/* rm -rf /var/cache/apk/*
#
# Set workdir and start mailoney # Set workdir and start mailoney
STOPSIGNAL SIGINT STOPSIGNAL SIGINT
USER mailoney:mailoney USER mailoney:mailoney

View File

@ -0,0 +1,52 @@
FROM alpine
#
# Install packages
RUN apk -U --no-cache add \
autoconf \
automake \
build-base \
git \
libcap \
libtool \
py-pip \
python \
python-dev && \
#
# Install libemu
git clone --depth=1 https://github.com/buffer/libemu /root/libemu/ && \
cd /root/libemu/ && \
autoreconf -vi && \
./configure && \
make && \
make install && \
#
# Install libemu python wrapper
pip install --no-cache-dir --upgrade pip && \
pip install --no-cache-dir \
hpfeeds \
pylibemu && \
#
# Install mailoney from git
git clone --depth=1 https://github.com/awhitehatter/mailoney /opt/mailoney && \
#
# Setup user, groups and configs
addgroup -g 2000 mailoney && \
adduser -S -H -s /bin/ash -u 2000 -D -g 2000 mailoney && \
chown -R mailoney:mailoney /opt/mailoney && \
setcap cap_net_bind_service=+ep /usr/bin/python2.7 && \
#
# Clean up
apk del --purge autoconf \
automake \
build-base \
git \
py-pip \
python-dev && \
rm -rf /root/* && \
rm -rf /var/cache/apk/*
#
# Set workdir and start mailoney
STOPSIGNAL SIGINT
USER mailoney:mailoney
WORKDIR /opt/mailoney/
CMD ["/usr/bin/python","mailoney.py","-i","0.0.0.0","-p","25","-s","mailrelay.local","-t","schizo_open_relay"]

View File

@ -1,12 +1,12 @@
FROM alpine FROM alpine
#
# Setup apk # Setup apk
RUN apk -U --no-cache add \ RUN apk -U --no-cache add \
build-base \ build-base \
git \ git \
go \ go \
g++ && \ g++ && \
#
# Setup go, build medpot # Setup go, build medpot
export GOPATH=/opt/go/ && \ export GOPATH=/opt/go/ && \
mkdir -p /opt/go/src && \ mkdir -p /opt/go/src && \
@ -19,18 +19,18 @@ RUN apk -U --no-cache add \
cd medpot && \ cd medpot && \
cp dist/etc/ews.cfg /etc/ && \ cp dist/etc/ews.cfg /etc/ && \
go build medpot && \ go build medpot && \
#
# Setup medpot # Setup medpot
mkdir -p /opt/medpot \ mkdir -p /opt/medpot \
/var/log/medpot && \ /var/log/medpot && \
cp medpot /opt/medpot && \ cp medpot /opt/medpot && \
cp /opt/go/src/medpot/template/*.xml /opt/medpot/ && \ cp /opt/go/src/medpot/template/*.xml /opt/medpot/ && \
#
# Setup user, groups and configs # Setup user, groups and configs
addgroup -g 2000 medpot && \ addgroup -g 2000 medpot && \
adduser -S -s /bin/ash -u 2000 -D -g 2000 medpot && \ adduser -S -s /bin/ash -u 2000 -D -g 2000 medpot && \
chown -R medpot:medpot /var/log/medpot && \ chown -R medpot:medpot /var/log/medpot && \
#
# Clean up # Clean up
apk del --purge build-base \ apk del --purge build-base \
git \ git \
@ -39,7 +39,7 @@ RUN apk -U --no-cache add \
rm -rf /var/cache/apk/* \ rm -rf /var/cache/apk/* \
/opt/go \ /opt/go \
/root/dist /root/dist
#
# Start medpot # Start medpot
WORKDIR /opt/medpot WORKDIR /opt/medpot
USER medpot:medpot USER medpot:medpot

View File

@ -1,13 +1,13 @@
FROM alpine FROM alpine
#
# Include dist # Include dist
ADD dist/ /root/dist/ ADD dist/ /root/dist/
#
# Get and install dependencies & packages # Get and install dependencies & packages
RUN apk -U --no-cache add \ RUN apk -U --no-cache add \
nginx \ nginx \
nginx-mod-http-headers-more && \ nginx-mod-http-headers-more && \
#
# Setup configs # Setup configs
mkdir -p /run/nginx && \ mkdir -p /run/nginx && \
rm -rf /etc/nginx/conf.d/* /usr/share/nginx/html/* && \ rm -rf /etc/nginx/conf.d/* /usr/share/nginx/html/* && \
@ -15,10 +15,10 @@ RUN apk -U --no-cache add \
cp -R /root/dist/conf/ssl /etc/nginx/ && \ cp -R /root/dist/conf/ssl /etc/nginx/ && \
cp /root/dist/conf/tpotweb.conf /etc/nginx/conf.d/ && \ cp /root/dist/conf/tpotweb.conf /etc/nginx/conf.d/ && \
cp -R /root/dist/html/ /var/lib/nginx/ && \ cp -R /root/dist/html/ /var/lib/nginx/ && \
#
# Clean up # Clean up
rm -rf /root/* && \ rm -rf /root/* && \
rm -rf /var/cache/apk/* rm -rf /var/cache/apk/*
#
# Start nginx # Start nginx
CMD exec nginx -g 'daemon off;' CMD exec nginx -g 'daemon off;'

View File

@ -1,8 +1,8 @@
FROM alpine FROM alpine
#
# Add source # Add source
ADD . /opt/p0f ADD . /opt/p0f
#
# Install packages # Install packages
RUN apk -U --no-cache add \ RUN apk -U --no-cache add \
bash \ bash \
@ -12,24 +12,24 @@ RUN apk -U --no-cache add \
libcap \ libcap \
libpcap \ libpcap \
libpcap-dev && \ libpcap-dev && \
#
# Setup user, groups and configs # Setup user, groups and configs
addgroup -g 2000 p0f && \ addgroup -g 2000 p0f && \
adduser -S -s /bin/bash -u 2000 -D -g 2000 p0f && \ adduser -S -s /bin/bash -u 2000 -D -g 2000 p0f && \
#
# Download and compile p0f # Download and compile p0f
cd /opt/p0f && \ cd /opt/p0f && \
./build.sh && \ ./build.sh && \
setcap cap_sys_chroot,cap_setgid,cap_net_raw=+ep /opt/p0f/p0f && \ setcap cap_sys_chroot,cap_setgid,cap_net_raw=+ep /opt/p0f/p0f && \
#
# Clean up # Clean up
apk del --purge build-base \ apk del --purge build-base \
jansson-dev \ jansson-dev \
libpcap-dev && \ libpcap-dev && \
rm -rf /root/* && \ rm -rf /root/* && \
rm -rf /var/cache/apk/* rm -rf /var/cache/apk/*
#
# Start p0f # Start p0f
WORKDIR /opt/p0f WORKDIR /opt/p0f
USER p0f:p0f USER p0f:p0f
CMD exec /opt/p0f/p0f -u p0f -j -o /var/log/p0f/p0f.json -i $(/sbin/ip address | grep '^2: ' | awk '{ print $2 }' | tr -d [:punct:]) CMD exec /opt/p0f/p0f -u p0f -j -o /var/log/p0f/p0f.json -i $(/sbin/ip address | grep '^2: ' | awk '{ print $2 }' | tr -d [:punct:]) > /dev/null
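The interface argument in the CMD above is resolved at container start from the second entry of `ip address`; running the same pipeline on the host shows which interface p0f will attach to:
/sbin/ip address | grep '^2: ' | awk '{ print $2 }' | tr -d [:punct:]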

View File

@ -1,10 +1,11 @@
FROM alpine FROM alpine
#
# Include dist # Include dist
ADD dist/ /root/dist/ ADD dist/ /root/dist/
#
# Get and install dependencies & packages # Get and install dependencies & packages
RUN apk -U add \ RUN sed -i 's/dl-cdn/dl-2/g' /etc/apk/repositories && \
apk -U add \
build-base \ build-base \
git \ git \
libffi-dev \ libffi-dev \
@ -14,11 +15,11 @@ RUN apk -U add \
python-dev \ python-dev \
py-pip \ py-pip \
py-setuptools && \ py-setuptools && \
#
# Setup user # Setup user
addgroup -g 2000 rdpy && \ addgroup -g 2000 rdpy && \
adduser -S -s /bin/ash -u 2000 -D -g 2000 rdpy && \ adduser -S -s /bin/ash -u 2000 -D -g 2000 rdpy && \
#
# Install deps # Install deps
pip install --no-cache-dir --upgrade pip && \ pip install --no-cache-dir --upgrade pip && \
pip install --no-cache-dir --upgrade cffi && \ pip install --no-cache-dir --upgrade cffi && \
@ -30,19 +31,19 @@ RUN apk -U add \
service_identity \ service_identity \
rsa \ rsa \
pyasn1 && \ pyasn1 && \
#
# Install rdpy from git # Install rdpy from git
mkdir -p /opt && \ mkdir -p /opt && \
cd /opt && \ cd /opt && \
git clone --depth=1 https://github.com/t3chn0m4g3/rdpy && \ git clone --depth=1 https://github.com/t3chn0m4g3/rdpy && \
cd rdpy && \ cd rdpy && \
python setup.py install && \ python setup.py install && \
#
# Setup user, groups and configs # Setup user, groups and configs
cp /root/dist/* /opt/rdpy/ && \ cp /root/dist/* /opt/rdpy/ && \
chown rdpy:rdpy -R /opt/rdpy/* && \ chown rdpy:rdpy -R /opt/rdpy/* && \
mkdir -p /var/log/rdpy && \ mkdir -p /var/log/rdpy && \
#
# Clean up # Clean up
rm -rf /root/* && \ rm -rf /root/* && \
apk del --purge build-base \ apk del --purge build-base \
@ -52,7 +53,7 @@ RUN apk -U add \
python-dev \ python-dev \
py-pip && \ py-pip && \
rm -rf /var/cache/apk/* rm -rf /var/cache/apk/*
#
# Start rdpy # Start rdpy
USER rdpy:rdpy USER rdpy:rdpy
CMD exec /usr/bin/python2 -i /usr/bin/rdpy-rdphoneypot.py /opt/rdpy/$(shuf -i 1-3 -n 1) >> /var/log/rdpy/rdpy.log CMD exec /usr/bin/python2 -i /usr/bin/rdpy-rdphoneypot.py /opt/rdpy/$(shuf -i 1-3 -n 1) >> /var/log/rdpy/rdpy.log

View File

@ -1,11 +1,12 @@
FROM alpine FROM alpine:3.10
#
# Get and install dependencies & packages # Get and install dependencies & packages
RUN sed -i 's/dl-cdn/dl-4/g' /etc/apk/repositories && \ RUN sed -i 's/dl-cdn/dl-2/g' /etc/apk/repositories && \
apk -U --no-cache add \ apk -U --no-cache add \
build-base \ build-base \
curl \ curl \
git \ git \
libffi-dev \
libxml2 \ libxml2 \
libxml2-dev \ libxml2-dev \
libxslt \ libxslt \
@ -14,28 +15,30 @@ RUN sed -i 's/dl-cdn/dl-4/g' /etc/apk/repositories && \
openssl-dev \ openssl-dev \
python \ python \
python-dev \ python-dev \
py-cffi \
py-pillow \
py-future \ py-future \
py-pip \ py-pip \
swig && \ swig && \
#
# Setup user # Setup user
addgroup -g 2000 spiderfoot && \ addgroup -g 2000 spiderfoot && \
adduser -S -s /bin/ash -u 2000 -D -g 2000 spiderfoot && \ adduser -S -s /bin/ash -u 2000 -D -g 2000 spiderfoot && \
#
# Install spiderfoot # Install spiderfoot
# git clone --depth=1 https://github.com/smicallef/spiderfoot -b v2.12.0-final /home/spiderfoot && \ # git clone --depth=1 https://github.com/smicallef/spiderfoot -b v2.12.0-final /home/spiderfoot && \
git clone --depth=1 https://github.com/smicallef/spiderfoot /home/spiderfoot && \ git clone --depth=1 https://github.com/smicallef/spiderfoot /home/spiderfoot && \
cd /home/spiderfoot && \ cd /home/spiderfoot && \
pip install --no-cache-dir --upgrade pip && \ pip install --no-cache-dir openxmllib wheel && \
pip install --no-cache-dir wheel && \
pip install --no-cache-dir -r requirements.txt && \ pip install --no-cache-dir -r requirements.txt && \
chown -R spiderfoot:spiderfoot /home/spiderfoot && \ chown -R spiderfoot:spiderfoot /home/spiderfoot && \
sed -i "s#'__docroot': ''#'__docroot': '\/spiderfoot'#" /home/spiderfoot/sf.py && \ sed -i "s#'__docroot': ''#'__docroot': '\/spiderfoot'#" /home/spiderfoot/sf.py && \
sed -i 's#raise cherrypy.HTTPRedirect("\/")#raise cherrypy.HTTPRedirect("\/spiderfoot")#' /home/spiderfoot/sfwebui.py && \ sed -i 's#raise cherrypy.HTTPRedirect("\/")#raise cherrypy.HTTPRedirect("\/spiderfoot")#' /home/spiderfoot/sfwebui.py && \
#
# Clean up # Clean up
apk del --purge build-base \ apk del --purge build-base \
git \ git \
libffi-dev \
libxml2-dev \ libxml2-dev \
libxslt-dev \ libxslt-dev \
openssl-dev \ openssl-dev \
@ -43,10 +46,10 @@ RUN sed -i 's/dl-cdn/dl-4/g' /etc/apk/repositories && \
py-pip \ py-pip \
py-setuptools && \ py-setuptools && \
rm -rf /var/cache/apk/* rm -rf /var/cache/apk/*
#
# Healthcheck # Healthcheck
HEALTHCHECK --retries=10 CMD curl -s -XGET 'http://127.0.0.1:8080' HEALTHCHECK --retries=10 CMD curl -s -XGET 'http://127.0.0.1:8080'
#
# Set user, workdir and start spiderfoot # Set user, workdir and start spiderfoot
USER spiderfoot:spiderfoot USER spiderfoot:spiderfoot
WORKDIR /home/spiderfoot WORKDIR /home/spiderfoot

View File

@ -5,7 +5,7 @@ ADD dist/ /root/dist/
# #
# Install packages # Install packages
#RUN sed -i 's/dl-cdn/dl-2/g' /etc/apk/repositories && \ #RUN sed -i 's/dl-cdn/dl-2/g' /etc/apk/repositories && \
RUN apk -U --no-cache add \ RUN apk -U add \
ca-certificates \ ca-certificates \
curl \ curl \
file \ file \
@ -13,8 +13,8 @@ RUN apk -U --no-cache add \
hiredis \ hiredis \
jansson \ jansson \
libcap-ng \ libcap-ng \
libhtp \
libmagic \ libmagic \
libmaxminddb \
libnet \ libnet \
libnetfilter_queue \ libnetfilter_queue \
libnfnetlink \ libnfnetlink \
@ -36,9 +36,9 @@ RUN apk -U --no-cache add \
hiredis-dev \ hiredis-dev \
jansson-dev \ jansson-dev \
libtool \ libtool \
libhtp-dev \
libcap-ng-dev \ libcap-ng-dev \
luajit-dev \ luajit-dev \
libmaxminddb-dev \
libpcap-dev \ libpcap-dev \
libnet-dev \ libnet-dev \
libnetfilter_queue-dev \ libnetfilter_queue-dev \
@ -47,20 +47,25 @@ RUN apk -U --no-cache add \
nss-dev \ nss-dev \
nspr-dev \ nspr-dev \
pcre-dev \ pcre-dev \
python2 \ python3 \
py2-pip \
rust \ rust \
yaml-dev && \ yaml-dev && \
# #
# Upgrade pip, install virtualenv # We need latest libhtp[-dev] which is only available in community
pip install --no-cache-dir --upgrade pip && \ apk -U add --repository=http://dl-cdn.alpinelinux.org/alpine/edge/community \
pip install --no-cache-dir suricata-update && \ libhtp \
libhtp-dev && \
#
# Upgrade pip, install suricata-update to meet deps, however we will not be using it
# to reduce image (no python needed) and use the update script.
pip3 install --no-cache-dir --upgrade pip && \
pip3 install --no-cache-dir suricata-update && \
# #
# Get and build Suricata # Get and build Suricata
mkdir -p /opt/builder/ && \ mkdir -p /opt/builder/ && \
wget https://www.openinfosecfoundation.org/download/suricata-4.1.4.tar.gz && \ wget https://www.openinfosecfoundation.org/download/suricata-5.0.0.tar.gz && \
tar xvfz suricata-4.1.4.tar.gz --strip-components=1 -C /opt/builder/ && \ tar xvfz suricata-5.0.0.tar.gz --strip-components=1 -C /opt/builder/ && \
rm suricata-4.1.4.tar.gz && \ rm suricata-5.0.0.tar.gz && \
cd /opt/builder && \ cd /opt/builder && \
./configure \ ./configure \
--prefix=/usr \ --prefix=/usr \
@ -110,6 +115,7 @@ RUN apk -U --no-cache add \
libcap-ng-dev \ libcap-ng-dev \
luajit-dev \ luajit-dev \
libpcap-dev \ libpcap-dev \
libmaxminddb-dev \
libnet-dev \ libnet-dev \
libnetfilter_queue-dev \ libnetfilter_queue-dev \
libnfnetlink-dev \ libnfnetlink-dev \
@ -117,12 +123,12 @@ RUN apk -U --no-cache add \
nss-dev \ nss-dev \
nspr-dev \ nspr-dev \
pcre-dev \ pcre-dev \
python2 \ python3 \
py2-pip \
rust \ rust \
yaml-dev && \ yaml-dev && \
rm -rf /opt/builder && \ rm -rf /opt/builder && \
rm -rf /root/* && \ rm -rf /root/* && \
rm -rf /tmp/* && \
rm -rf /var/cache/apk/* rm -rf /var/cache/apk/*
# #
# Start suricata # Start suricata

View File

@ -1,4 +1,3 @@
not (host sicherheitstacho.eu or community.sicherheitstacho.eu) and not (host sicherheitstacho.eu or community.sicherheitstacho.eu or listbot.sicherheitstacho.eu) and
not (host deb.debian.org) and not (host deb.debian.org) and
not (host index.docker.io or docker.io) and not (host index.docker.io or docker.io)
not (host hpfeeds.sissden.eu)

View File

@ -44,6 +44,7 @@ vars:
MODBUS_PORTS: 502 MODBUS_PORTS: 502
FILE_DATA_PORTS: "[$HTTP_PORTS,110,143]" FILE_DATA_PORTS: "[$HTTP_PORTS,110,143]"
FTP_PORTS: 21 FTP_PORTS: 21
VXLAN_PORTS: 4789
## ##
## Step 2: select outputs to enable ## Step 2: select outputs to enable
@ -154,6 +155,40 @@ outputs:
# Enable the logging of tagged packets for rules using the # Enable the logging of tagged packets for rules using the
# "tag" keyword. # "tag" keyword.
tagged-packets: yes tagged-packets: yes
- anomaly:
# Anomaly log records describe unexpected conditions such
# as truncated packets, packets with invalid IP/UDP/TCP
# length values, and other events that render the packet
# invalid for further processing or describe unexpected
# behavior on an established stream. Networks which
# experience high occurrences of anomalies may experience
# packet processing degradation.
#
# Anomalies are reported for the following:
# 1. Decode: Values and conditions that are detected while
# decoding individual packets. This includes invalid or
# unexpected values for low-level protocol lengths as well
# as stream related events (TCP 3-way handshake issues,
# unexpected sequence number, etc).
# 2. Stream: This includes stream related events (TCP
# 3-way handshake issues, unexpected sequence number,
# etc).
# 3. Application layer: These denote application layer
# specific conditions that are unexpected, invalid or are
# unexpected given the application monitoring state.
#
# By default, anomaly logging is disabled. When anomaly
# logging is enabled, applayer anomaly reporting is
# enabled.
enabled: yes
#
# Choose one or more types of anomaly logging and whether to enable
# logging of the packet header for packet anomalies.
types:
# decode: no
# stream: no
# applayer: yes
#packethdr: no
- http: - http:
extended: yes # enable this for extended logging information extended: yes # enable this for extended logging information
# custom allows additional http fields to be included in eve-log # custom allows additional http fields to be included in eve-log
@ -162,16 +197,14 @@ outputs:
- dns: - dns:
# This configuration uses the new DNS logging format, # This configuration uses the new DNS logging format,
# the old configuration is still available: # the old configuration is still available:
# http://suricata.readthedocs.io/en/latest/configuration/suricata-yaml.html#eve-extensible-event-format # https://suricata.readthedocs.io/en/latest/output/eve/eve-json-output.html#dns-v1-format
# Use version 2 logging with the new format:
# DNS answers will be logged in one single event # As of Suricata 5.0, version 2 of the eve dns output
# rather than an event for each of it. # format is the default.
# Without setting a version the version
# will fallback to 1 for backwards compatibility.
version: 2 version: 2
# Enable/disable this logger. Default: enabled. # Enable/disable this logger. Default: enabled.
#enabled: no #enabled: yes
# Control logging of requests and responses: # Control logging of requests and responses:
# - requests: enable logging of DNS queries # - requests: enable logging of DNS queries
@ -186,8 +219,8 @@ outputs:
# Default: all # Default: all
#formats: [detailed, grouped] #formats: [detailed, grouped]
# Answer types to log. # Types to log, based on the query type.
# Default: all # Default: all.
#types: [a, aaaa, cname, mx, ns, ptr, txt] #types: [a, aaaa, cname, mx, ns, ptr, txt]
- tls: - tls:
extended: yes # enable this for extended logging information extended: yes # enable this for extended logging information
@ -196,7 +229,7 @@ outputs:
#session-resumption: no #session-resumption: no
# custom allows to control which tls fields that are included # custom allows to control which tls fields that are included
# in eve-log # in eve-log
custom: [subject, issuer, session_resumed, serial, fingerprint, sni, version, not_before, not_after, certificate, ja3] custom: [subject, issuer, session_resumed, serial, fingerprint, sni, version, not_before, not_after, certificate, ja3, ja3s]
- files: - files:
force-magic: yes # force logging magic on all logged files force-magic: yes # force logging magic on all logged files
# force logging of checksums, available hash functions are md5, # force logging of checksums, available hash functions are md5,
@ -220,11 +253,15 @@ outputs:
md5: [body, subject] md5: [body, subject]
- dnp3 - dnp3
- ftp
- rdp
- nfs - nfs
- smb - smb
- tftp - tftp
- ikev2 - ikev2
- krb5 - krb5
- snmp
- sip
- dhcp: - dhcp:
# DHCP logging requires Rust. # DHCP logging requires Rust.
enabled: no enabled: no
@ -248,47 +285,11 @@ outputs:
# flowints. # flowints.
#- metadata #- metadata
# alert output for use with Barnyard2 # deprecated - unified2 alert format for use with Barnyard2
- unified2-alert: - unified2-alert:
enabled: no enabled: no
filename: unified2.alert # for further options see:
# https://suricata.readthedocs.io/en/suricata-5.0.0/configuration/suricata-yaml.html#alert-output-for-use-with-barnyard2-unified2-alert
# File size limit. Can be specified in kb, mb, gb. Just a number
# is parsed as bytes.
#limit: 32mb
# By default unified2 log files have the file creation time (in
# unix epoch format) appended to the filename. Set this to yes to
# disable this behaviour.
#nostamp: no
# Sensor ID field of unified2 alerts.
#sensor-id: 0
# Include payload of packets related to alerts. Defaults to true, set to
# false if payload is not required.
#payload: yes
# HTTP X-Forwarded-For support by adding the unified2 extra header or
# overwriting the source or destination IP address (depending on flow
# direction) with the one reported in the X-Forwarded-For HTTP header.
# This is helpful when reviewing alerts for traffic that is being reverse
# or forward proxied.
xff:
enabled: yes
# Two operation modes are available, "extra-data" and "overwrite". Note
# that in the "overwrite" mode, if the reported IP address in the HTTP
# X-Forwarded-For header is of a different version of the packet
# received, it will fall-back to "extra-data" mode.
mode: extra-data
# Two proxy deployments are supported, "reverse" and "forward". In
# a "reverse" deployment the IP address used is the last one, in a
# "forward" deployment the first IP address is used.
deployment: reverse
# Header name where the actual IP address will be reported, if more
# than one IP address is present, the last IP address will be the
# one taken into consideration.
header: X-Forwarded-For
# a line based log of HTTP requests (no alerts) # a line based log of HTTP requests (no alerts)
- http-log: - http-log:
@ -318,14 +319,6 @@ outputs:
enabled: no enabled: no
#certs-log-dir: certs # directory to store the certificates files #certs-log-dir: certs # directory to store the certificates files
# a line based log of DNS requests and/or replies (no alerts)
# Note: not available when Rust is enabled (--enable-rust).
- dns-log:
enabled: no
filename: dns.log
append: yes
#filetype: regular # 'regular', 'unix_stream' or 'unix_dgram'
# Packet log... log packets in pcap format. 3 modes of operation: "normal" # Packet log... log packets in pcap format. 3 modes of operation: "normal"
# "multi" and "sguil". # "multi" and "sguil".
# #
@ -423,12 +416,11 @@ outputs:
#level: Info ## possible levels: Emergency, Alert, Critical, #level: Info ## possible levels: Emergency, Alert, Critical,
## Error, Warning, Notice, Info, Debug ## Error, Warning, Notice, Info, Debug
# a line based information for dropped packets in IPS mode # deprecated a line based information for dropped packets in IPS mode
- drop: - drop:
enabled: no enabled: no
filename: drop.log # further options documented at:
append: yes # https://suricata.readthedocs.io/en/suricata-5.0.0/configuration/suricata-yaml.html#drop-log-a-line-based-information-for-dropped-packets
#filetype: regular # 'regular', 'unix_stream' or 'unix_dgram'
# Output module for storing files on disk. Files are stored in a # Output module for storing files on disk. Files are stored in a
# directory names consisting of the first 2 characters of the # directory names consisting of the first 2 characters of the
@ -446,6 +438,7 @@ outputs:
# #
# To prune the filestore directory see the "suricatactl filestore # To prune the filestore directory see the "suricatactl filestore
# prune" command which can delete files over a certain age. # prune" command which can delete files over a certain age.
- file-store: - file-store:
version: 2 version: 2
enabled: no enabled: no
@ -495,51 +488,11 @@ outputs:
# one taken into consideration. # one taken into consideration.
header: X-Forwarded-For header: X-Forwarded-For
# output module to store extracted files to disk (old style, deprecated) # deprecated - file-store v1
#
# The files are stored to the log-dir in a format "file.<id>" where <id> is
# an incrementing number starting at 1. For each file "file.<id>" a meta
# file "file.<id>.meta" is created. Before they are finalized, they will
# have a ".tmp" suffix to indicate that they are still being processed.
#
# If include-pid is yes, then the files are instead "file.<pid>.<id>", with
# meta files named as "file.<pid>.<id>.meta"
#
# File extraction depends on a lot of things to be fully done:
# - file-store stream-depth. For optimal results, set this to 0 (unlimited)
# - http request / response body sizes. Again set to 0 for optimal results.
# - rules that contain the "filestore" keyword.
- file-store: - file-store:
enabled: no # set to yes to enable
log-dir: files # directory to store the files
force-magic: no # force logging magic on all stored files
# force logging of checksums, available hash functions are md5,
# sha1 and sha256
#force-hash: [md5]
force-filestore: no # force storing of all files
# override global stream-depth for sessions in which we want to
# perform file extraction. Set to 0 for unlimited.
#stream-depth: 0
#waldo: file.waldo # waldo file to store the file_id across runs
# uncomment to disable meta file writing
#write-meta: no
# uncomment the following variable to define how many files can
# remain open for filestore by Suricata. Default value is 0 which
# means files get closed after each write
#max-open-files: 1000
include-pid: no # set to yes to include pid in file names
# output module to log files tracked in a easily parsable JSON format
- file-log:
enabled: no enabled: no
filename: files-json.log # further options documented at:
append: yes # https://suricata.readthedocs.io/en/suricata-5.0.0/file-extraction/file-extraction.html#file-store-version-1
#filetype: regular # 'regular', 'unix_stream' or 'unix_dgram'
force-magic: no # force logging magic on all logged files
# force logging of checksums, available hash functions are md5,
# sha1 and sha256
#force-hash: [md5]
# Log TCP data after stream normalization # Log TCP data after stream normalization
# 2 types: file or dir. File logs into a single logfile. Dir creates # 2 types: file or dir. File logs into a single logfile. Dir creates
@ -771,6 +724,8 @@ app-layer:
protocols: protocols:
krb5: krb5:
enabled: yes enabled: yes
snmp:
enabled: yes
ikev2: ikev2:
enabled: yes enabled: yes
tls: tls:
@ -800,6 +755,8 @@ app-layer:
ftp: ftp:
enabled: yes enabled: yes
# memcap: 64mb # memcap: 64mb
rdp:
enabled: yes
ssh: ssh:
enabled: yes enabled: yes
smtp: smtp:
@ -832,8 +789,6 @@ app-layer:
content-inspect-window: 4096 content-inspect-window: 4096
imap: imap:
enabled: detection-only enabled: detection-only
msn:
enabled: detection-only
# Note: --enable-rust is required for full SMB1/2 support. W/o rust # Note: --enable-rust is required for full SMB1/2 support. W/o rust
# only minimal SMB1 support is available. # only minimal SMB1 support is available.
smb: smb:
@ -869,7 +824,8 @@ app-layer:
dp: 53 dp: 53
http: http:
enabled: yes enabled: yes
# memcap: 64mb # memcap: Maximum memory capacity for http
# Default is unlimited, value can be such as 64mb
# default-config: Used when no server-config matches # default-config: Used when no server-config matches
# personality: List of personalities used by default # personality: List of personalities used by default
@ -877,37 +833,15 @@ app-layer:
# by http_client_body & pcre /P option. # by http_client_body & pcre /P option.
# response-body-limit: Limit reassembly of response body for inspection # response-body-limit: Limit reassembly of response body for inspection
# by file_data, http_server_body & pcre /Q option. # by file_data, http_server_body & pcre /Q option.
# double-decode-path: Double decode path section of the URI
# double-decode-query: Double decode query section of the URI
# response-body-decompress-layer-limit:
# Limit to how many layers of compression will be
# decompressed. Defaults to 2.
# #
# For advanced options, see the user guide
# server-config: List of server configurations to use if address matches # server-config: List of server configurations to use if address matches
# address: List of IP addresses or networks for this block # address: List of IP addresses or networks for this block
# personality: List of personalities used by this block # personality: List of personalities used by this block
# request-body-limit: Limit reassembly of request body for inspection
# by http_client_body & pcre /P option.
# response-body-limit: Limit reassembly of response body for inspection
# by file_data, http_server_body & pcre /Q option.
# double-decode-path: Double decode path section of the URI
# double-decode-query: Double decode query section of the URI
# #
# uri-include-all: Include all parts of the URI. By default the # Then, all the fields from default-config can be overloaded
# 'scheme', username/password, hostname and port
# are excluded. Setting this option to true adds
# all of them to the normalized uri as inspected
# by http_uri, urilen, pcre with /U and the other
# keywords that inspect the normalized uri.
# Note that this does not affect http_raw_uri.
# Also, note that including all was the default in
# 1.4 and 2.0beta1.
#
# meta-field-limit: Hard size limit for request and response size
# limits. Applies to request line and headers,
# response line and headers. Does not apply to
# request or response bodies. Default is 18k.
# If this limit is reached an event is raised.
# #
# Currently Available Personalities: # Currently Available Personalities:
# Minimal, Generic, IDS (default), IIS_4_0, IIS_5_0, IIS_5_1, IIS_6_0, # Minimal, Generic, IDS (default), IIS_4_0, IIS_5_0, IIS_5_1, IIS_6_0,
@ -1027,6 +961,11 @@ app-layer:
dhcp: dhcp:
enabled: no enabled: no
# SIP, disabled by default.
sip:
enabled: yes
# Limit for the maximum number of asn1 frames to decode (default 256) # Limit for the maximum number of asn1 frames to decode (default 256)
asn1-max-frames: 256 asn1-max-frames: 256
@ -1565,7 +1504,7 @@ profiling:
limit: 10 limit: 10
# output to json # output to json
json: yes json: no
# per keyword profiling # per keyword profiling
keywords: keywords:
@ -1814,32 +1753,45 @@ napatech:
# a range of streams (e.g. streams: ["0-3"]) # a range of streams (e.g. streams: ["0-3"])
streams: ["0-3"] streams: ["0-3"]
# Tilera mpipe configuration. for use on Tilera TILE-Gx. # When auto-config is enabled the streams will be created and assigned
mpipe: # automatically to the NUMA node where the thread resides. If cpu-affinity
# is enabled in the threading section. Then the streams will be created
# according to the number of worker threads specified in the worker cpu set.
# Otherwise, the streams array is used to define the streams.
#
# This option cannot be used simultaneous with "use-all-streams".
#
auto-config: yes
# Load balancing modes: "static", "dynamic", "sticky", or "round-robin". # Ports indicates which napatech ports are to be used in auto-config mode.
load-balance: dynamic # these are the port ID's of the ports that will be merged prior to the
# traffic being distributed to the streams.
#
# This can be specified in any of the following ways:
#
# a list of individual ports (e.g. ports: [0,1,2,3])
#
# a range of ports (e.g. ports: [0-3])
#
# "all" to indicate that all ports are to be merged together
# (e.g. ports: [all])
#
# This has no effect if auto-config is disabled.
#
ports: [all]
# Number of Packets in each ingress packet queue. Must be 128, 512, 2028 or 65536 # When auto-config is enabled the hashmode specifies the algorithm for
iqueue-packets: 2048 # determining to which stream a given packet is to be delivered.
# This can be any valid Napatech NTPL hashmode command.
# List of interfaces we will listen on. #
inputs: # The most common hashmode commands are: hash2tuple, hash2tuplesorted,
- interface: xgbe2 # hash5tuple, hash5tuplesorted and roundrobin.
- interface: xgbe3 #
- interface: xgbe4 # See Napatech NTPL documentation other hashmodes and details on their use.
#
# This has no effect if auto-config is disabled.
# Relative weight of memory for packets of each mPipe buffer size. #
stack: hashmode: hash5tuplesorted
size128: 0
size256: 9
size512: 0
size1024: 0
size1664: 7
size4096: 0
size10386: 0
size16384: 0
## ##
## Configure Suricata to load Suricata-Update managed rules. ## Configure Suricata to load Suricata-Update managed rules.
@ -1870,29 +1822,34 @@ rule-files:
- drop.rules - drop.rules
- dshield.rules - dshield.rules
- emerging-activex.rules - emerging-activex.rules
- emerging-adware_pup.rules
- emerging-attack_response.rules - emerging-attack_response.rules
- emerging-chat.rules - emerging-chat.rules
- emerging-coinminer.rules
- emerging-current_events.rules - emerging-current_events.rules
- emerging-dns.rules - emerging-dns.rules
- emerging-dos.rules - emerging-dos.rules
- emerging-exploit.rules - emerging-exploit.rules
- emerging-exploit_kit.rules
- emerging-ftp.rules - emerging-ftp.rules
- emerging-games.rules - emerging-games.rules
- emerging-hunting.rules
- emerging-icmp_info.rules - emerging-icmp_info.rules
- emerging-icmp.rules - emerging-icmp.rules
- emerging-imap.rules - emerging-imap.rules
- emerging-inappropriate.rules - emerging-inappropriate.rules
- emerging-info.rules - emerging-info.rules
- emerging-ja3.rules
- emerging-malware.rules - emerging-malware.rules
- emerging-misc.rules - emerging-misc.rules
- emerging-mobile_malware.rules - emerging-mobile_malware.rules
- emerging-netbios.rules - emerging-netbios.rules
- emerging-p2p.rules - emerging-p2p.rules
- emerging-phishing.rules
- emerging-policy.rules - emerging-policy.rules
- emerging-pop3.rules - emerging-pop3.rules
- emerging-rpc.rules - emerging-rpc.rules
- emerging-scada.rules - emerging-scada.rules
#- emerging-scada_special.rules
- emerging-scan.rules - emerging-scan.rules
- emerging-shellcode.rules - emerging-shellcode.rules
- emerging-smtp.rules - emerging-smtp.rules
@ -1900,7 +1857,7 @@ rule-files:
- emerging-sql.rules - emerging-sql.rules
- emerging-telnet.rules - emerging-telnet.rules
- emerging-tftp.rules - emerging-tftp.rules
- emerging-trojan.rules # - emerging-trojan.rules
- emerging-user_agents.rules - emerging-user_agents.rules
- emerging-voip.rules - emerging-voip.rules
- emerging-web_client.rules - emerging-web_client.rules
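The outputs enabled above (anomaly records, the rdp/snmp/sip parsers, the ja3s TLS field) all surface as additional event_type values in eve.json; a jq sketch for spot-checking them, where the /data/suricata/log path and the exact field layout are assumptions for a stock T-Pot install running Suricata 5.0:
jq -c 'select(.event_type=="anomaly")' /data/suricata/log/eve.json | tail -n 5
jq -c 'select(.event_type=="rdp" or .event_type=="snmp" or .event_type=="sip")' /data/suricata/log/eve.json | tail -n 5
jq -r 'select(.event_type=="tls" and .tls.ja3s != null) | .tls.ja3s.hash' /data/suricata/log/eve.json | sort | uniq -c | sort -rn | head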

View File

@ -14,12 +14,12 @@ function fuDLRULES {
if [ "$myOINKCODE" != "" ] && [ "$myOINKCODE" == "OPEN" ]; if [ "$myOINKCODE" != "" ] && [ "$myOINKCODE" == "OPEN" ];
then then
echo "Downloading ET open ruleset." echo "Downloading ET open ruleset."
wget -q --tries=2 --timeout=2 https://rules.emergingthreats.net/open/suricata-4.0/emerging.rules.tar.gz -O /tmp/rules.tar.gz wget -q --tries=2 --timeout=2 https://rules.emergingthreats.net/open/suricata-5.0/emerging.rules.tar.gz -O /tmp/rules.tar.gz
else else
if [ "$myOINKCODE" != "" ]; if [ "$myOINKCODE" != "" ];
then then
echo "Downloading ET pro ruleset with Oinkcode $myOINKCODE." echo "Downloading ET pro ruleset with Oinkcode $myOINKCODE."
wget -q --tries=2 --timeout=2 https://rules.emergingthreatspro.com/$myOINKCODE/suricata-4.0/etpro.rules.tar.gz -O /tmp/rules.tar.gz wget -q --tries=2 --timeout=2 https://rules.emergingthreatspro.com/$myOINKCODE/suricata-5.0/etpro.rules.tar.gz -O /tmp/rules.tar.gz
else else
echo "Usage: update.sh <[OPEN, OINKCODE]>" echo "Usage: update.sh <[OPEN, OINKCODE]>"
exit exit
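For debugging rule updates the same download can be done by hand with the suricata-5.0 URL used above; a sketch that only lists the archive contents instead of installing them:
wget -q --tries=2 --timeout=2 https://rules.emergingthreats.net/open/suricata-5.0/emerging.rules.tar.gz -O /tmp/rules.tar.gz
tar tzf /tmp/rules.tar.gz | head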

View File

@ -1,10 +1,11 @@
FROM alpine FROM alpine:3.10
#
# Include dist # Include dist
ADD dist/ /root/dist/ ADD dist/ /root/dist/
#
# Install packages # Install packages
RUN apk -U --no-cache add \ RUN sed -i 's/dl-cdn/dl-2/g' /etc/apk/repositories && \
apk -U --no-cache add \
build-base \ build-base \
file \ file \
git \ git \
@ -14,8 +15,7 @@ RUN apk -U --no-cache add \
python3 \ python3 \
python3-dev \ python3-dev \
re2c && \ re2c && \
pip3 install --no-cache-dir --upgrade pip && \ #
# Install bfr sandbox from git # Install bfr sandbox from git
git clone --depth=1 https://github.com/mushorg/BFR /opt/BFR && \ git clone --depth=1 https://github.com/mushorg/BFR /opt/BFR && \
cd /opt/BFR && \ cd /opt/BFR && \
@ -28,14 +28,14 @@ RUN apk -U --no-cache add \
cd / && \ cd / && \
rm -rf /opt/BFR /tmp/* /var/tmp/* && \ rm -rf /opt/BFR /tmp/* /var/tmp/* && \
echo "zend_extension = "$(find /usr -name bfr.so) >> /etc/php7/php.ini && \ echo "zend_extension = "$(find /usr -name bfr.so) >> /etc/php7/php.ini && \
#
# Install PHP Sandbox # Install PHP Sandbox
git clone --depth=1 https://github.com/mushorg/phpox /opt/phpox && \ git clone --depth=1 https://github.com/mushorg/phpox /opt/phpox && \
cd /opt/phpox && \ cd /opt/phpox && \
cp /root/dist/sandbox.py . && \ cp /root/dist/sandbox.py . && \
pip3 install -r requirements.txt && \ pip3 install -r requirements.txt && \
make && \ make && \
#
# Clean up # Clean up
apk del --purge build-base \ apk del --purge build-base \
git \ git \
@ -43,9 +43,9 @@ RUN apk -U --no-cache add \
python3-dev && \ python3-dev && \
rm -rf /root/* && \ rm -rf /root/* && \
rm -rf /var/cache/apk/* rm -rf /var/cache/apk/*
#
# Set workdir and start phpsandbox # Set workdir and start phpsandbox
STOPSIGNAL SIGKILL STOPSIGNAL SIGKILL
USER nobody:nobody USER nobody:nobody
WORKDIR /opt/phpox WORKDIR /opt/phpox
CMD ["python3.6", "sandbox.py"] CMD ["python3", "sandbox.py"]

View File

@ -1,18 +1,18 @@
FROM redis:alpine FROM redis:alpine
#
# Include dist # Include dist
ADD dist/ /root/dist/ ADD dist/ /root/dist/
#
# Setup apk # Setup apk
RUN apk -U --no-cache add redis && \ RUN sed -i 's/dl-cdn/dl-2/g' /etc/apk/repositories && \
apk -U --no-cache add redis && \
cp /root/dist/redis.conf /etc && \ cp /root/dist/redis.conf /etc && \
#
# Clean up # Clean up
rm -rf /root/* && \ rm -rf /root/* && \
rm -rf /tmp/* /var/tmp/* && \ rm -rf /tmp/* /var/tmp/* && \
rm -rf /var/cache/apk/* rm -rf /var/cache/apk/*
#
# Start redis # Start redis
STOPSIGNAL SIGKILL STOPSIGNAL SIGKILL
USER nobody:nobody USER nobody:nobody

View File

@ -1,27 +1,28 @@
FROM alpine FROM alpine:3.10
#
# Include dist # Include dist
ADD dist/ /root/dist/ ADD dist/ /root/dist/
#
# Setup apk # Setup apk
RUN apk -U --no-cache add \ RUN sed -i 's/dl-cdn/dl-2/g' /etc/apk/repositories && \
apk -U --no-cache add \
build-base \ build-base \
git \ git \
linux-headers \ linux-headers \
python3 \ python3 \
python3-dev && \ python3-dev && \
#
# Setup Snare # Setup Snare
git clone --depth=1 https://github.com/mushorg/snare /opt/snare && \ git clone --depth=1 https://github.com/mushorg/snare /opt/snare && \
cd /opt/snare/ && \ cd /opt/snare/ && \
pip3 install --no-cache-dir --upgrade pip setuptools && \ pip3 install --no-cache-dir setuptools && \
pip3 install --no-cache-dir -r requirements.txt && \ pip3 install --no-cache-dir -r requirements.txt && \
python3.6 setup.py install && \ python3 setup.py install && \
cd / && \ cd / && \
rm -rf /opt/snare && \ rm -rf /opt/snare && \
clone --target http://example.com && \ clone --target http://example.com && \
mv /root/dist/pages/* /opt/snare/pages/ && \ mv /root/dist/pages/* /opt/snare/pages/ && \
#
# Clean up # Clean up
apk del --purge \ apk del --purge \
build-base \ build-base \
@ -30,7 +31,7 @@ RUN apk -U --no-cache add \
rm -rf /root/* && \ rm -rf /root/* && \
rm -rf /tmp/* /var/tmp/* && \ rm -rf /tmp/* /var/tmp/* && \
rm -rf /var/cache/apk/* rm -rf /var/cache/apk/*
#
# Start snare # Start snare
STOPSIGNAL SIGKILL STOPSIGNAL SIGKILL
CMD snare --tanner tanner --debug true --no-dorks true --auto-update false --host-ip 0.0.0.0 --port 80 --page-dir $(shuf -i 1-10 -n 1) CMD snare --tanner tanner --debug true --no-dorks true --auto-update false --host-ip 0.0.0.0 --port 80 --page-dir $(shuf -i 1-10 -n 1)

View File

@ -1,10 +1,11 @@
FROM alpine FROM alpine:3.10
#
# Include dist # Include dist
ADD dist/ /root/dist/ ADD dist/ /root/dist/
#
# Setup apk # Setup apk
RUN apk -U --no-cache add \ RUN sed -i 's/dl-cdn/dl-2/g' /etc/apk/repositories && \
apk -U --no-cache add \
build-base \ build-base \
git \ git \
libcap \ libcap \
@ -14,12 +15,12 @@ RUN apk -U --no-cache add \
py3-yarl \ py3-yarl \
python3 \ python3 \
python3-dev && \ python3-dev && \
#
# Setup Tanner # Setup Tanner
git clone --depth=1 https://github.com/mushorg/tanner /opt/tanner && \ git clone --depth=1 https://github.com/mushorg/tanner /opt/tanner && \
cp /root/dist/config.py /opt/tanner/tanner/ && \ cp /root/dist/config.py /opt/tanner/tanner/ && \
cd /opt/tanner/ && \ cd /opt/tanner/ && \
pip3 install --no-cache-dir --upgrade pip setuptools && \ pip3 install --no-cache-dir setuptools && \
pip3 install --no-cache-dir -r requirements.txt && \ pip3 install --no-cache-dir -r requirements.txt && \
python3 setup.py install && \ python3 setup.py install && \
rm -rf .coveragerc \ rm -rf .coveragerc \
@ -35,13 +36,13 @@ RUN apk -U --no-cache add \
setup.py \ setup.py \
tanner/data && \ tanner/data && \
cd / && \ cd / && \
#
# Setup configs, user, groups # Setup configs, user, groups
addgroup -g 2000 tanner && \ addgroup -g 2000 tanner && \
adduser -S -s /bin/ash -u 2000 -D -g 2000 tanner && \ adduser -S -s /bin/ash -u 2000 -D -g 2000 tanner && \
mkdir /var/log/tanner && \ mkdir /var/log/tanner && \
chown -R tanner:tanner /opt/tanner /var/log/tanner && \ chown -R tanner:tanner /opt/tanner /var/log/tanner && \
#
# Clean up # Clean up
apk del --purge \ apk del --purge \
build-base \ build-base \
@ -54,7 +55,7 @@ RUN apk -U --no-cache add \
rm -rf /root/* && \ rm -rf /root/* && \
rm -rf /tmp/* /var/tmp/* && \ rm -rf /tmp/* /var/tmp/* && \
rm -rf /var/cache/apk/* rm -rf /var/cache/apk/*
#
# Start tanner # Start tanner
STOPSIGNAL SIGKILL STOPSIGNAL SIGKILL
USER tanner:tanner USER tanner:tanner

View File

@ -9,7 +9,9 @@ config_template = {'DATA': {'db_config': '/opt/tanner/db/db_config.json',
'dorks': '/opt/tanner/data/dorks.pickle', 'dorks': '/opt/tanner/data/dorks.pickle',
'user_dorks': '/opt/tanner/data/user_dorks.pickle', 'user_dorks': '/opt/tanner/data/user_dorks.pickle',
'crawler_stats': '/opt/tanner/data/crawler_user_agents.txt', 'crawler_stats': '/opt/tanner/data/crawler_user_agents.txt',
'geo_db': '/opt/tanner/db/GeoLite2-City.mmdb' 'geo_db': '/opt/tanner/db/GeoLite2-City.mmdb',
'tornado': '/opt/tanner/data/tornado.py',
'mako': '/opt/tanner/data/mako.py'
}, },
'TANNER': {'host': '0.0.0.0', 'port': 8090}, 'TANNER': {'host': '0.0.0.0', 'port': 8090},
'WEB': {'host': '0.0.0.0', 'port': 8091}, 'WEB': {'host': '0.0.0.0', 'port': 8091},
@ -18,16 +20,20 @@ config_template = {'DATA': {'db_config': '/opt/tanner/db/db_config.json',
'REDIS': {'host': 'tanner_redis', 'port': 6379, 'poolsize': 80, 'timeout': 1}, 'REDIS': {'host': 'tanner_redis', 'port': 6379, 'poolsize': 80, 'timeout': 1},
'EMULATORS': {'root_dir': '/opt/tanner'}, 'EMULATORS': {'root_dir': '/opt/tanner'},
'EMULATOR_ENABLED': {'sqli': True, 'rfi': True, 'lfi': False, 'xss': True, 'cmd_exec': False, 'EMULATOR_ENABLED': {'sqli': True, 'rfi': True, 'lfi': False, 'xss': True, 'cmd_exec': False,
'php_code_injection': True, "crlf": True}, 'php_code_injection': True, 'php_object_injection': True, "crlf": True,
'xxe_injection': True, 'template_injection': False},
'SQLI': {'type': 'SQLITE', 'db_name': 'tanner_db', 'host': 'localhost', 'user': 'root', 'SQLI': {'type': 'SQLITE', 'db_name': 'tanner_db', 'host': 'localhost', 'user': 'root',
'password': 'user_pass'}, 'password': 'user_pass'},
'XXE_INJECTION': {'OUT_OF_BAND': False},
'DOCKER': {'host_image': 'busybox:latest'}, 'DOCKER': {'host_image': 'busybox:latest'},
'LOGGER': {'log_debug': '/tmp/tanner/tanner.log', 'log_err': '/tmp/tanner/tanner.err'}, 'LOGGER': {'log_debug': '/tmp/tanner/tanner.log', 'log_err': '/tmp/tanner/tanner.err'},
'MONGO': {'enabled': False, 'URI': 'mongodb://localhost'}, 'MONGO': {'enabled': False, 'URI': 'mongodb://localhost'},
'HPFEEDS': {'enabled': False, 'HOST': 'localhost', 'PORT': 10000, 'IDENT': '', 'SECRET': '', 'HPFEEDS': {'enabled': False, 'HOST': 'localhost', 'PORT': 10000, 'IDENT': '', 'SECRET': '',
'CHANNEL': 'tanner.events'}, 'CHANNEL': 'tanner.events'},
'LOCALLOG': {'enabled': True, 'PATH': '/var/log/tanner/tanner_report.json'}, 'LOCALLOG': {'enabled': True, 'PATH': '/var/log/tanner/tanner_report.json'},
'CLEANLOG': {'enabled': False} 'CLEANLOG': {'enabled': False},
'REMOTE_DOCKERFILE': {'GITHUB': "https://raw.githubusercontent.com/mushorg/tanner/master/docker/"
"tanner/template_injection/Dockerfile"}
} }
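Whether the additional emulators made it into the shipped config can be checked in the running container; a sketch, assuming the compose service keeps the container name tanner and the config path from the tanner Dockerfile above:
docker exec tanner grep -A 3 EMULATOR_ENABLED /opt/tanner/tanner/config.py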

View File

@ -34,6 +34,8 @@ services:
- "993:993" - "993:993"
- "995:995" - "995:995"
- "1080:1080" - "1080:1080"
- "3306:3306"
- "3389:3389"
- "5432:5432" - "5432:5432"
- "5900:5900" - "5900:5900"
image: "dtagdevsec/heralding:1903" image: "dtagdevsec/heralding:1903"

View File

@ -24,7 +24,6 @@ services:
# Conpot default service # Conpot default service
conpot_default: conpot_default:
build: .
container_name: conpot_default container_name: conpot_default
restart: always restart: always
environment: environment:
@ -177,6 +176,8 @@ services:
# - "443:443" # - "443:443"
# - "993:993" # - "993:993"
# - "995:995" # - "995:995"
# - "3306:3306"
# - "3389:3389"
# - "5432:5432" # - "5432:5432"
- "5900:5900" - "5900:5900"
image: "dtagdevsec/heralding:1903" image: "dtagdevsec/heralding:1903"

View File

@ -4,6 +4,7 @@ version: '2.3'
networks: networks:
adbhoney_local: adbhoney_local:
citrixhoneypot_local:
conpot_local_IEC104: conpot_local_IEC104:
conpot_local_guardian_ast: conpot_local_guardian_ast:
conpot_local_ipmi: conpot_local_ipmi:
@ -54,6 +55,19 @@ services:
volumes: volumes:
- /data/ciscoasa/log:/var/log/ciscoasa - /data/ciscoasa/log:/var/log/ciscoasa
# CitrixHoneypot service
citrixhoneypot:
container_name: citrixhoneypot
restart: always
networks:
- citrixhoneypot_local
ports:
- "443:443"
image: "dtagdevsec/citrixhoneypot:1903"
read_only: true
volumes:
- /data/citrixhoneypot/logs:/opt/citrixhoneypot/logs
# Conpot IEC104 service # Conpot IEC104 service
conpot_IEC104: conpot_IEC104:
container_name: conpot_iec104 container_name: conpot_iec104
@ -174,7 +188,7 @@ services:
- "69:69/udp" - "69:69/udp"
- "81:81" - "81:81"
- "135:135" - "135:135"
- "443:443" # - "443:443"
- "445:445" - "445:445"
- "1433:1433" - "1433:1433"
- "1723:1723" - "1723:1723"
@ -198,7 +212,6 @@ services:
# Glutton service # Glutton service
glutton: glutton:
build: .
container_name: glutton container_name: glutton
restart: always restart: always
tmpfs: tmpfs:
@ -232,6 +245,8 @@ services:
# - "443:443" # - "443:443"
- "993:993" - "993:993"
- "995:995" - "995:995"
# - "3306:3306"
# - "3389:3389"
- "1080:1080" - "1080:1080"
- "5432:5432" - "5432:5432"
- "5900:5900" - "5900:5900"
@ -242,7 +257,6 @@ services:
# HoneyPy service # HoneyPy service
honeypy: honeypy:
build: .
container_name: honeypy container_name: honeypy
restart: always restart: always
networks: networks:
@ -274,7 +288,7 @@ services:
- mailoney_local - mailoney_local
ports: ports:
- "25:25" - "25:25"
image: "dtagdevsec/mailoney:1903" image: "dtagdevsec/mailoney:2006"
read_only: true read_only: true
volumes: volumes:
- /data/mailoney/log:/opt/mailoney/logs - /data/mailoney/log:/opt/mailoney/logs
@ -408,7 +422,6 @@ services:
# Fatt service # Fatt service
fatt: fatt:
build: .
container_name: fatt container_name: fatt
restart: always restart: always
network_mode: "host" network_mode: "host"
@ -483,7 +496,7 @@ services:
mem_limit: 4g mem_limit: 4g
ports: ports:
- "127.0.0.1:64298:9200" - "127.0.0.1:64298:9200"
image: "dtagdevsec/elasticsearch:1903" image: "dtagdevsec/elasticsearch:2006"
volumes: volumes:
- /data:/data - /data:/data
@ -496,7 +509,7 @@ services:
condition: service_healthy condition: service_healthy
ports: ports:
- "127.0.0.1:64296:5601" - "127.0.0.1:64296:5601"
image: "dtagdevsec/kibana:1903" image: "dtagdevsec/kibana:2006"
## Logstash service ## Logstash service
logstash: logstash:
@ -507,7 +520,7 @@ services:
condition: service_healthy condition: service_healthy
env_file: env_file:
- /opt/tpot/etc/compose/elk_environment - /opt/tpot/etc/compose/elk_environment
image: "dtagdevsec/logstash:1903" image: "dtagdevsec/logstash:2006"
volumes: volumes:
- /data:/data - /data:/data
@ -520,7 +533,7 @@ services:
condition: service_healthy condition: service_healthy
ports: ports:
- "127.0.0.1:64302:9100" - "127.0.0.1:64302:9100"
image: "dtagdevsec/head:1903" image: "dtagdevsec/head:2006"
read_only: true read_only: true
# Ewsposter service # Ewsposter service
@ -549,22 +562,34 @@ services:
nginx: nginx:
container_name: nginx container_name: nginx
restart: always restart: always
environment:
### If set to YES all changes within Heimdall will remain for the next start
### Make sure to uncomment the corresponding volume statements below, or the setting will prevent a successful start of T-Pot.
- HEIMDALL_PERSIST=NO
tmpfs: tmpfs:
- /var/tmp/nginx/client_body - /var/tmp/nginx/client_body
- /var/tmp/nginx/proxy - /var/tmp/nginx/proxy
- /var/tmp/nginx/fastcgi - /var/tmp/nginx/fastcgi
- /var/tmp/nginx/uwsgi - /var/tmp/nginx/uwsgi
- /var/tmp/nginx/scgi - /var/tmp/nginx/scgi
- /run - /run
- /var/log/php7/
- /var/lib/nginx/tmp:uid=100,gid=82
- /var/lib/nginx/html/storage/logs:uid=100,gid=82
- /var/lib/nginx/html/storage/framework/views:uid=100,gid=82
network_mode: "host" network_mode: "host"
ports: ports:
- "64297:64297" - "64297:64297"
image: "dtagdevsec/nginx:1903" - "127.0.0.1:64304:64304"
image: "dtagdevsec/nginx:2006"
read_only: true read_only: true
volumes: volumes:
- /data/nginx/cert/:/etc/nginx/cert/:ro - /data/nginx/cert/:/etc/nginx/cert/:ro
- /data/nginx/conf/nginxpasswd:/etc/nginx/nginxpasswd:ro - /data/nginx/conf/nginxpasswd:/etc/nginx/nginxpasswd:ro
- /data/nginx/log/:/var/log/nginx/ - /data/nginx/log/:/var/log/nginx/
### Enable the following volumes if you set HEIMDALL_PERSIST=YES
# - /data/nginx/heimdall/database:/var/lib/nginx/html/database
# - /data/nginx/heimdall/storage:/var/lib/nginx/html/storage
# Spiderfoot service # Spiderfoot service
spiderfoot: spiderfoot:
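Enabling Heimdall persistence for the nginx service above therefore takes three steps: set HEIMDALL_PERSIST=YES, uncomment the two volume lines, and create the host directories before restarting; a minimal sketch (the compose file location and the tpot systemd unit are assumptions for a default install):
mkdir -p /data/nginx/heimdall/database /data/nginx/heimdall/storage
# edit /opt/tpot/etc/tpot.yml: HEIMDALL_PERSIST=YES and uncomment the two heimdall volumes
systemctl restart tpot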

View File

@ -227,6 +227,8 @@ services:
# - "443:443" # - "443:443"
- "993:993" - "993:993"
- "995:995" - "995:995"
# - "3306:3306"
# - "3389:3389"
- "1080:1080" - "1080:1080"
- "5432:5432" - "5432:5432"
- "5900:5900" - "5900:5900"

View File

@ -228,6 +228,8 @@ services:
# - "443:443" # - "443:443"
- "993:993" - "993:993"
- "995:995" - "995:995"
# - "3306:3306"
# - "3389:3389"
- "1080:1080" - "1080:1080"
- "5432:5432" - "5432:5432"
- "5900:5900" - "5900:5900"

View File

@ -1,18 +1,14 @@
/data/adbhoney/log/*.json /data/adbhoney/log/*.json
/data/adbhoney/log/*.log /data/adbhoney/log/*.log
/data/adbhoney/downloads.tgz
/data/ciscoasa/log/ciscoasa.log /data/ciscoasa/log/ciscoasa.log
/data/citrixhoneypot/logs/server.log
/data/conpot/log/conpot*.json /data/conpot/log/conpot*.json
/data/conpot/log/conpot*.log /data/conpot/log/conpot*.log
/data/cowrie/log/cowrie.json /data/cowrie/log/cowrie.json
/data/cowrie/log/cowrie-textlog.log /data/cowrie/log/cowrie-textlog.log
/data/cowrie/log/lastlog.txt /data/cowrie/log/lastlog.txt
/data/cowrie/log/ttylogs.tgz
/data/cowrie/downloads.tgz
/data/dionaea/log/dionaea.json /data/dionaea/log/dionaea.json
/data/dionaea/log/dionaea.sqlite /data/dionaea/log/dionaea.sqlite
/data/dionaea/bistreams.tgz
/data/dionaea/binaries.tgz
/data/dionaea/dionaea-errors.log /data/dionaea/dionaea-errors.log
/data/elasticpot/log/elasticpot.log /data/elasticpot/log/elasticpot.log
/data/elk/log/*.log /data/elk/log/*.log
@ -21,12 +17,11 @@
/data/glutton/log/*.err /data/glutton/log/*.err
/data/heralding/log/*.log /data/heralding/log/*.log
/data/heralding/log/*.csv /data/heralding/log/*.csv
/data/heralding/log/*.json
/data/honeypy/log/*.log /data/honeypy/log/*.log
/data/honeytrap/log/*.log /data/honeytrap/log/*.log
/data/honeytrap/log/*.json /data/honeytrap/log/*.json
/data/honeytrap/attacks.tgz /data/mailoney/log/*.log
/data/honeytrap/downloads.tgz
/data/mailoney/log/commands.log
/data/medpot/log/*.log /data/medpot/log/*.log
/data/nginx/log/*.log /data/nginx/log/*.log
/data/p0f/log/p0f.json /data/p0f/log/p0f.json
@ -43,4 +38,22 @@
notifempty notifempty
rotate 30 rotate 30
compress compress
compresscmd /usr/bin/pigz
}
/data/adbhoney/downloads.tgz
/data/cowrie/log/ttylogs.tgz
/data/cowrie/downloads.tgz
/data/dionaea/bistreams.tgz
/data/dionaea/binaries.tgz
/data/honeytrap/attacks.tgz
/data/honeytrap/downloads.tgz
{
su tpot tpot
copytruncate
create 770 tpot tpot
daily
missingok
notifempty
rotate 30
} }
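With the .tgz archives split into their own stanza (copytruncate, no compression) the whole file can be validated with a dry run; a sketch, where the config path is an assumption for a default T-Pot install:
logrotate -d /opt/tpot/etc/logrotate/logrotate.conf
command -v pigz   # compresscmd above points at /usr/bin/pigz, so it must be present on the host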

Binary file not shown.

Some files were not shown because too many files have changed in this diff.