194 Commits

Author SHA1 Message Date
e588e62815 Update README.md 2020-03-16 16:38:39 +01:00
20cdb4f454 Update CHANGELOG.md 2020-03-16 16:29:39 +01:00
9d7b37b126 Merge pull request #585 from dtag-dev-sec/dev
Prepare release 19.03.3
2020-03-16 16:18:23 +01:00
62aae45dd6 prepare for release 19.03.3 2020-03-16 15:01:18 +00:00
21d48ca2bb remove honeysap for testing 2020-03-15 21:55:10 +00:00
80ee3cc5dd update elasticdump install location 2020-03-15 21:24:01 +00:00
67e70780bf tweaking for testing 2020-03-15 21:10:28 +00:00
5bbebd6fc4 Merge pull request #583 from dtag-dev-sec/t3chn0m4g3-patch-1
t3chn0m4g3 patch 1
2020-03-15 21:32:35 +01:00
cc70144c41 Update version 2020-03-15 21:29:10 +01:00
140a3d22ac Update update.sh 2020-03-15 21:28:46 +01:00
6a1f4f9aea Update update.sh 2020-03-15 21:27:33 +01:00
4409d9cdac Update tpot.seed 2020-03-15 21:25:44 +01:00
1452ca4e4c Update install.sh 2020-03-15 21:24:42 +01:00
313df2f644 Merge pull request #582 from dtag-dev-sec/master
sync
2020-03-15 21:20:57 +01:00
f6503cce3c Update update.sh 2020-03-15 21:13:07 +01:00
5badf352be deal with changes in sid
move to testing
cockpit-docker removed upstream, remove here
2020-03-15 21:11:26 +01:00
2201e072f6 testing honeysap 2020-03-12 16:02:43 +00:00
5192ce1dc7 Merge pull request #578 from dtag-dev-sec/dev
get top 100 src_ip's
2020-03-11 14:56:37 +01:00
5319c548ad get top 100 src_ip's 2020-03-11 13:51:49 +00:00
c32a150c51 typo 2020-03-10 16:49:41 +01:00
e77d24db08 Merge pull request #576 from dtag-dev-sec/dev
Dev
2020-03-10 16:47:31 +01:00
857190ec20 add 2fa, update readme and changelog 2020-03-10 15:39:16 +00:00
809d598076 reactivate netselect-apt
automatic mirror detection needs ICMP
2020-03-10 10:12:50 +00:00
9a64c88aba Merge pull request #574 from dtag-dev-sec/dev
Update CHANGELOG.md
2020-03-09 15:15:23 +01:00
af3242e8d5 Update CHANGELOG.md 2020-03-09 15:14:46 +01:00
5ddf1fdd07 Merge pull request #573 from dtag-dev-sec/dev
bump version
2020-03-09 13:12:40 +01:00
020d4e9738 bump version 2020-03-09 12:11:13 +00:00
7081bafb6e Merge pull request #572 from dtag-dev-sec/dev
Bump NextGen to 20.06
2020-03-09 13:00:24 +01:00
fb06c46793 Merge branch 'dev' of https://github.com/dtag-dev-sec/tpotce into dev 2020-03-09 10:44:36 +00:00
f76d8ab161 update delivery window 2020-03-09 10:43:52 +00:00
a256ecedc8 Merge branch 'master' into dev 2020-03-09 11:20:39 +01:00
fb3777141b tanner, prepare merger w/ master 2020-03-09 09:44:26 +00:00
a18304dfdc tanner, prepare merger w/ master 2020-03-09 09:35:19 +00:00
6a703544c6 tweaking 2020-03-05 23:58:27 +00:00
941a0e1587 tweaking 2020-03-05 23:22:03 +00:00
692a21ddb1 tanner tweaking and testing
include unsecure, fix name bug
2020-03-05 23:12:49 +00:00
df22adb45d bump elk stack to 7.6.1 2020-03-05 21:20:11 +00:00
07c68c85bb tweaking 2020-03-04 14:36:03 +00:00
a4227e6a9f tweaking 2020-03-04 12:12:12 +00:00
3b8c959c66 tweaking 2020-03-03 12:30:57 +00:00
5d7a6f3270 tweaking 2020-03-02 15:23:05 +00:00
ee1342ce2a remove tanner_web from nextgen 2020-02-27 11:29:42 +00:00
53e9470d58 cleanup 2020-02-27 10:35:50 +00:00
21c68f75e2 tweaking 2020-02-26 14:43:02 +00:00
bf7d1299ca tweaking 2020-02-26 14:22:48 +00:00
70dca02ce4 tweaking 2020-02-25 16:59:22 +00:00
6bfcf8b1c4 tweaking 2020-02-24 16:43:34 +00:00
b7b6e9fa0e Merge pull request #553 from skoops/skoops-patch-1
Update install.sh
2020-02-24 13:31:26 +01:00
d889651d63 Update install.sh
fix password check by providing cracklib-check for later usage
2020-02-24 13:22:00 +01:00
bd0e6936eb bump heralding to latest master
fixed by https://github.com/johnnykv/heralding/issues/129#event-3058184614
2020-02-21 11:38:29 +00:00
545209dce6 fix for honeytrap 2020-02-15 15:40:47 +00:00
153f7be9dc cleanup 2020-02-14 17:26:53 +00:00
faa5667246 bump adbhoney, cowrie, honeytrap to 20.06 2020-02-14 17:22:30 +00:00
aa4a93684d bump more images to 20.06 2020-02-14 15:30:55 +00:00
f11ad6b523 tweaking
ELK 7.6.0 is not ready for production, however it works if APM is enabled (disabled in config, so image won't build as a precaution)
Remove SISSDEN from ewsposter, suricata
Bump suricata to 5.0.1
Alpine now support suricata incl. enabled JA3 support, move back to Alpine install
2020-02-14 15:28:06 +00:00
a49d560809 up java mem limit 2020-02-05 15:24:32 +00:00
f2abb1d1bd release mailoney, elk 7.x into NextGen 19.03.x 2020-02-03 17:46:11 +01:00
b31225b97c Merge pull request #524 from pisces-period/pisces-period-cowrie-patch
make Dockerfile compatible with any Python version
2020-02-03 17:17:25 +01:00
ad861200de update mailoney 2020-02-03 14:46:43 +00:00
5ce5911ec1 cleanup 2020-02-03 12:59:21 +00:00
b9da9f04af adjust default field 2020-02-03 12:18:43 +00:00
92c0543c55 Merge branch 'dev' of https://github.com/dtag-dev-sec/tpotce into dev 2020-02-01 14:09:33 +00:00
984ba958fb logstash template not upgraded
with daily index enabled logstash will not be able to put new events into ES
simple solution, just deleting logstash template upon logstash start and leave it to logstash to upload the latest template
.
2020-02-01 14:08:23 +00:00
2d249ac6b1 tweak export script for new references 2020-01-31 17:43:04 +00:00
64729f5064 remove ilm support, breaks existing index at upgrade 2020-01-31 15:50:34 +00:00
5a4724bcba elk 7.x dev test 2020-01-31 14:21:55 +00:00
64907a2eba random loop timer ewsposter 2020-01-30 11:07:28 +00:00
fa0fdbb579 prepare for ELK migration to 7.x 2020-01-29 14:21:40 +00:00
1e47497c30 fixes for update.sh 2020-01-28 17:52:44 +00:00
a3e0c51493 switch to new nginx, heimdall, landing page in nextgen 2020-01-28 16:11:05 +00:00
33222a92b6 finish heimdall integration 2020-01-27 17:03:44 +00:00
1167231560 fix error log path 2020-01-27 08:51:34 +00:00
62b519999e tweaking 2020-01-24 15:38:00 +00:00
8b19228d99 tweaking heimdall, read only for now 2020-01-24 15:16:25 +00:00
2d16a9c9f6 tweaking new landing page 2020-01-24 14:14:09 +00:00
95a075e764 start working on new landing page 2020-01-24 02:21:33 +00:00
dc75b5567a make Dockerfile compatible with any Python version
adding a temporary variable to store the current (updated) version of Python, thus fixing the situation where the version is != 3.7 (e.g. Alpine python package at version 3.8.1-r1), causing lines 39-41 to break in the original code (install path is hard-coded at 3.7).
2020-01-23 17:42:48 +01:00
d643ca7a01 logrotate all mailoney log files 2020-01-22 12:23:21 +00:00
f110eb08b0 prepare for mailoney json logging 2020-01-22 12:17:30 +00:00
a470a7b12f Update CHANGELOG.md 2020-01-16 22:10:03 +01:00
c7eed86bd7 update changelog 2020-01-16 20:05:45 +00:00
20d6c6ab7f include citrixhoneypot dashboards
for fresh installs of NextGen
2020-01-16 19:56:05 +00:00
b033d515c6 dashboard files with citrixhoneypot support
for manual kibana import
2020-01-16 20:49:32 +01:00
1d0aad3b34 tweak logstash.conf for citrixhoneypot 2020-01-16 18:04:29 +00:00
a6ed6613a5 prepare citrixhoneypot for ELK integration 2020-01-16 15:13:58 +00:00
a953542f8f rebase citrixhoneypot 2020-01-16 10:29:58 +00:00
be3e998a92 prepare citrixhoneypot for JSON logging 2020-01-15 13:59:11 +00:00
1bc514a067 Update update.sh 2020-01-15 14:19:38 +01:00
9ad83fae51 Update CHANGELOG.md 2020-01-15 13:41:45 +01:00
e803d188c9 prepare for citrixhoneypot 2020-01-15 12:33:41 +00:00
8a844e6dd3 prepare for CitrixHoneypot 2020-01-15 12:14:23 +00:00
0ef2b083fc Merge branch 'master' of https://github.com/dtag-dev-sec/tpotce 2020-01-15 10:39:48 +00:00
755cbb77db prepare for citrixhoneypot 2020-01-15 10:37:48 +00:00
3498f3e635 fix typo 2020-01-13 22:44:14 +01:00
2ed0f939d1 rebuild, tweak spiderfoot 2020-01-03 17:04:18 +00:00
af3ef271d4 rebuild cyberchef 2020-01-03 16:25:33 +00:00
3713139fc6 rebuild snare, tanner 2020-01-03 14:06:29 +00:00
0928e37326 rebuild Dionaea, Heralding 2020-01-02 17:37:08 +00:00
f7a6a30c90 update.sh should be executed as root only
Fixes #508
2020-01-02 10:16:55 +01:00
ec46dc9ab0 Fix typo, Fixes #504 2020-01-02 09:40:55 +01:00
7c5fc000c0 rebuild fatt 2019-12-27 20:52:23 +00:00
64628c1293 rebuild rdpy 2019-12-27 20:09:15 +00:00
29d223865f tweaking, rebuild honeypy 2019-12-27 19:58:22 +00:00
0ed60329b8 tweak installer
fixes #389
2019-12-27 19:45:38 +00:00
1442a257e5 conpot tweaking 2019-12-27 18:34:13 +00:00
a1d903db01 bump conpot to latest master 2019-12-27 16:21:12 +00:00
756215519c add sAN to selfsigned cert
fixes #478
2019-12-27 14:53:07 +00:00
659831cf99 Update CHANGELOG.md 2019-12-24 12:14:44 +01:00
a370e2b414 introduce pigz to logrotate
pigz will now handle compression of t-pot logfiles
logrotate will only rotate archives instead of packing them again
should improve #501 #494 #489 #482 and others with regard to a volume of logs
2019-12-24 10:55:39 +00:00
f4a078c443 introduce pigz for clean.sh
See #501 and thanks to @workandresearchgithub
2019-12-24 10:31:54 +00:00
02bdc8194a bump adbhoney to latest master with py3 support 2019-11-21 13:56:38 +00:00
878538e3df Update README.md
fixes #485
2019-11-20 10:23:03 +01:00
ca01bfd82f Merge pull request #484 from shaderecker/debian10
Switch to Debian 10 image for Open Telekom Cloud
2019-11-13 19:55:11 +01:00
71dc3227c4 Update README.md 2019-11-13 17:17:14 +01:00
fd39b3a94d Switch to Debian 10 image for Open Telekom Cloud 2019-11-13 14:50:56 +01:00
3b43c55c04 Merge pull request #480 from shaderecker/ansible-updates
Ansible updates
2019-11-04 09:20:18 +01:00
d15005195d Increase ServerAliveInterval 2019-11-03 22:15:52 +00:00
c5ddfd0a72 Add SSH ServerAliveInterval
Fixes occasional hangup of long running tasks
2019-11-03 19:58:32 +00:00
e9520eefb5 Final touches for #477 2019-10-28 17:01:44 +01:00
72709bc186 Test #477 2019-10-28 16:40:46 +01:00
59757f87f0 test for #477 2019-10-28 15:39:10 +01:00
60ef4eeeea Test for #477 2019-10-28 15:37:10 +01:00
68a10a2f1f Fire and forget: Move reboot task to background
Execute the reboot command asynchronously, so Ansible doesn't report an error.
2019-10-28 11:59:39 +00:00
170439d977 Tweak hpfeeds setup
- Fix owner and file permissions for proper comparison
- Only execute the hpfeeds script when the config file has changed
2019-10-28 11:49:57 +00:00
9c7c6ac4a3 Update README.md 2019-10-28 10:23:03 +00:00
6224146cde Update README:md: Agent Forwarding 2019-10-28 10:22:51 +00:00
8314a7d34a Fix wrong order of variables
- Align with all example configs
- This is important for Ansible to check whether the file has changed
2019-10-28 10:22:20 +00:00
145856960c Use copy module 2019-10-28 10:22:03 +00:00
71523cf7ef I love double quotes 2019-10-28 10:21:49 +00:00
cbb2b66a72 Hide secrets from log output 2019-10-28 10:21:40 +00:00
2076cea40f Shorten task name 2019-10-28 10:21:30 +00:00
34f335c7e6 Don't print user password in taskname 2019-10-28 10:21:13 +00:00
602ebfc952 Remove waiting delay 2019-10-28 10:19:50 +00:00
78f9a83b04 Remove unneeded become declarations 2019-10-28 10:19:19 +00:00
4c9ff2c006 Simplify and consolidate tasks 2019-10-28 10:15:32 +00:00
7d56264a8d removing cockpit, pcp for now since these overflow swap for some reason 2019-10-26 10:40:09 +00:00
78135df9e7 Bump Suricata to 5.0.0 2019-10-22 15:20:23 +00:00
3d85ca94f1 bump cowrie to v2.0.0 2019-10-21 20:59:36 +00:00
4d7ee46cd5 update changelog 2019-10-16 15:01:04 +00:00
6921857573 bump heralding to latest master 2019-10-16 14:46:58 +00:00
5ee19e3e30 move installer to pip3 2019-10-16 11:02:59 +00:00
4fa66a2747 move to pip3 2019-10-16 10:50:13 +00:00
a1e81b57c9 Update CHANGELOG.md 2019-10-16 12:32:47 +02:00
1813b78ff0 update changelog 2019-10-16 10:30:27 +00:00
6cff8e390d tweaking cockpit, pcp 2019-10-16 10:01:41 +00:00
5079b57f94 add option to unlock ES for r/w 2019-10-15 15:41:21 +00:00
42c19e4d81 bump glutton, tune down noisy log 2019-10-15 14:50:39 +00:00
b9fb3d4695 tune down noisy log 2019-10-15 07:49:30 +00:00
544def9481 Merge pull request #461 from piffey/455
Fix AWS Terraform Deploy by switching to Debian Buster pre-release AMIs.
2019-10-04 17:15:42 +02:00
dca06918c0 Merge pull request #454 from Oogy/shell-enhancement
small change to handle non-interactive shells
2019-10-04 17:12:33 +02:00
9137440d3c Fix AWS Terraform Deploy by switching to Debian Buster pre-release AMIs. 2019-10-02 12:34:47 -07:00
d75a612416 testing change in user login 2019-09-24 10:00:31 -04:00
487ce4bed5 bump ewsposter to latest master 2019-09-21 12:09:17 +00:00
ba8564b348 small change to handle non-interactive shells 2019-09-19 15:32:15 -04:00
e914643882 Some wallpaper tweaking 2019-09-07 19:52:43 +02:00
1c8d3451ef Some logo tweaking 2019-09-07 19:50:09 +02:00
e7fe917738 Add T-Pot QR Code 2019-09-07 19:44:18 +02:00
0ed394db6a Delete t-pot_qr.png 2019-09-07 19:43:53 +02:00
99cc91d671 Add T-Pot QR Code 2019-09-07 19:42:30 +02:00
357f40d573 Update CHANGELOG.md 2019-08-29 10:17:13 +02:00
24ac6d203f bump medpot to latest master 2019-08-28 14:52:25 +00:00
08ff1377fd prep mailoney rebuild 2019-08-28 14:41:35 +00:00
42c57636b9 prep honeytrap rebuild 2019-08-28 14:34:20 +00:00
c86d6f15af prep rebuild for elasticpot 2019-08-28 14:12:52 +00:00
670dddfea0 bump nginx to 1.16.1 2019-08-28 14:09:16 +00:00
2132f80988 prep rebuild for ciscoasa 2019-08-28 13:59:41 +00:00
cae95ebe20 bump adbhoney to latest master 2019-08-28 12:46:19 +00:00
221f75be33 bump elk stack to 6.8.2 2019-08-28 13:53:43 +02:00
66bb9443f9 bump elk stack to 6.8.2 2019-08-28 11:49:03 +00:00
29c6be5571 wallpaper res 1920 1080 2019-08-27 20:02:45 +02:00
16868a7532 just some swag ... t-pot 4k wallpaper 2019-08-24 20:49:31 +02:00
4620666d4e add logo 2019-08-24 20:31:17 +02:00
9a5dd587b3 Add files via upload 2019-08-24 20:29:25 +02:00
cca1d0f727 Workaround for #442 2019-08-23 19:12:31 +02:00
bc6e94d329 spiderfoot, head bump to latest master 2019-08-16 17:29:41 +00:00
78d9d1f7c7 bump cyberchef to latest master 2019-08-16 17:14:58 +00:00
f1275e5b07 fix 2019-08-16 16:55:36 +00:00
4164b75bea Fixed
DockerHub already uses 3.7
2019-08-16 17:59:05 +02:00
c2afdc0f1f Fix for DockerHub
Works just fine on local build.
2019-08-16 17:46:17 +02:00
e0427cfc21 bump tanner to latest master 2019-08-16 14:43:10 +00:00
786ab5c082 adjust dionaea, fixes #435 2019-08-16 12:18:28 +00:00
a59fc19133 bump elastic stack to 6.7.2 2019-08-15 17:40:01 +02:00
bf39c0f5b2 bump elastic stack to 6.7.2 2019-08-15 15:38:12 +00:00
364831ae58 fix cd 2019-08-15 08:32:04 +00:00
31d7707d19 download instead of git pull
download translation maps rather than running a git pull
translation maps will now be bzip2 compressed to reduce traffic to a minimum
fixes #432
2019-08-14 14:43:47 +00:00
a053be50f3 Merge pull request #436 from TheHADILP/native-os
Create Security Group / network / subnet / router with Ansible
2019-08-13 15:11:38 +02:00
ade81e2dc2 Update documentation 2019-08-13 12:59:05 +00:00
3f15373e7b Create Network/Subnet/Router with Ansible 2019-08-13 12:00:19 +00:00
3186b88641 Update readme: remove security group from example 2019-08-13 10:42:08 +00:00
fc4c4e8675 Update readme 2019-08-13 10:40:24 +00:00
f80e693d8b Add rules to security group and adapt server creation 2019-08-13 10:31:46 +00:00
bf9a14081d Create Security Group with Ansible 2019-08-13 09:16:02 +00:00
a906633cfd Merge pull request #433 from TheHADILP/ansible-updates
Update Ansible README: System updates
2019-08-13 10:43:53 +02:00
7fcf406781 Update README: System updates 2019-08-08 05:48:40 +00:00
160 changed files with 2302 additions and 1283 deletions

View File

@ -1,5 +1,128 @@
# Changelog
## 20200316
- **Move from Sid to Stable**
- Debian Stable has now all the packages and versions we need for T-Pot. As a consequence we can now move to the `stable` branch.
## 20200310
- **Add 2FA to Cockpit**
- Just run `2fa.sh` to enable two factor authentication in Cockpit.
- **Find fastest mirror with netselect-apt**
- Netselect-apt will find the fastest mirror close to you (outgoing ICMP required).
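A rough illustration only (the exact invocation used by the T-Pot installer is not part of this diff): netselect-apt probes mirrors via ICMP and writes a sources.list for the chosen suite.
```bash
# Sketch: find the fastest Debian mirror (needs outgoing ICMP) and write a
# sources.list for the given suite into the current directory.
sudo apt-get install -y netselect-apt
sudo netselect-apt stable
```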
## 20200309
- **Bump Nextgen to 20.06**
- All NextGen images have been rebuilt to their latest master.
- ElasticStack bumped to 7.6.1 (Elasticsearch will need at least 2048MB of RAM now, T-Pot at least 8GB of RAM) and tweaks to accommodate the changes of 7.x.
- Fixed errors in Tanner / Snare which will now handle downloads of malware via SSL and store them correctly (thanks to @afeena).
- Fixed errors in Heralding which will now improve on RDP connections (thanks to @johnnykv, @realsdx).
- Fixed error in honeytrap which will now build in Debian/Buster (thanks to @tillmannw).
- Mailoney is now logging in JSON format (thanks to @monsherko).
- Base T-Pot landing page on Heimdall.
- Tweaking of tools and some minor bug fixing
## 20200116
- **Bump ELK to latest 6.8.6**
- **Update ISO image to fix upstream bug of missing kernel modules**
- **Include dashboards for CitrixHoneypot**
- Please run `/opt/tpot/update.sh` for the necessary modifications, omit the reboot and run `/opt/tpot/bin/tped.sh` to (re-)select the NextGen installation type.
- This update requires the latest Kibana objects as well. Download the latest from https://raw.githubusercontent.com/dtag-dev-sec/tpotce/master/etc/objects/kibana_export.json.zip, unzip and import the objects within Kibana WebUI > Management > Saved Objects > Export / Import. All objects will be overwritten upon import, so make sure to run an export first.
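A minimal sketch of these steps on the T-Pot host (the Kibana import itself happens in the WebUI; wget and unzip are assumed to be available):
```bash
# Apply the modifications, skip the reboot, then (re-)select the installation type.
sudo /opt/tpot/update.sh
sudo /opt/tpot/bin/tped.sh

# Fetch the latest Kibana objects for manual import via
# Kibana WebUI > Management > Saved Objects > Import.
wget https://raw.githubusercontent.com/dtag-dev-sec/tpotce/master/etc/objects/kibana_export.json.zip
unzip kibana_export.json.zip
```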
## 20200115
- **Prepare integration of CitrixHoneypot**
- Prepare integration of [CitrixHoneypot](https://github.com/MalwareTech/CitrixHoneypot) by MalwareTech
- Integration into ELK is still open
- Please run `/opt/tpot/update.sh` for the necessary modifications, omit the reboot and run `/opt/tpot/bin/tped.sh` to (re-)select the NextGen installation type.
## 20191224
- **Use pigz, optimize logrotate.conf**
- Use `pigz` for faster archiving, especially with regard to high volumes of logs - Thanks to @workandresearchgithub!
- Optimize `logrotate.conf` to improve archiving speed and get rid of multiple compression, also introduce `pigz`.
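For illustration, the pattern used in `clean.sh` further down in this diff hands compression to pigz via tar's `-I` option; a minimal sketch with placeholder paths:
```bash
# Parallel gzip instead of the single-threaded "tar cvfz ...":
myPIGZ=$(which pigz)
tar -I "$myPIGZ" -cvf /data/cowrie/downloads.tgz /data/cowrie/downloads
```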
## 20191121
- **Bump ADBHoney to latest master**
- Use latest version of ADBHoney, which now fully supports Python 3.x - Thanks to @huuck!
## 20191113, 20191104, 20191103, 20191028
- **Switch to Debian 10 on OTC, Ansible Improvements**
- OTC now supporting Debian 10 - Thanks to @shaderecker!
## 20191028
- **Fix an issue with pip3, yq**
- `yq` needs rehashing.
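A hedged illustration only, assuming "rehashing" refers to bash's cached command lookup after `yq` moves to a pip3 install:
```bash
# Drop bash's remembered path for yq so the newly installed binary is found.
hash -r
type yq
```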
## 20191026
- **Remove cockpit-pcp**
- `cockpit-pcp` floods swap for some reason - removing for now.
## 20191022
- **Bump Suricata to 5.0.0**
## 20191021
- **Bump Cowrie to 2.0.0**
## 20191016
- **Tweak installer, pip3, Heralding**
- Install `cockpit-pcp` right from the start for machine monitoring in cockpit.
- Move installer and update script to use pip3.
- Bump heralding to latest master (1.0.6) - Thanks @johnnykv!
## 20191015
- **Tweaking, Bump glutton, unlock ES script**
- Add `unlock.sh` to unlock ES indices in case of lockdown after disk quota has been reached.
- Prevent too much terminal logging from p0f and glutton since `daemon.log` was filled up.
- Bump glutton to latest master now supporting payload_hex. Thanks to @glaslos.
## 20191002
- **Merge**
- Support Debian Buster images for AWS #454
- Thank you @piffey
## 20190924
- **Bump EWSPoster**
- Supports Python 3.x
- Thank you @Trixam
## 20190919
- **Merge**
- Handle non-interactive shells #454
- Thank you @Oogy
## 20190907
- **Logo tweaking**
- Add QR logo
## 20190828
- **Upgrades and rebuilds**
- Bump Medpot, Nginx and Adbhoney to latest master
- Bump ELK stack to 6.8.2
- Rebuild Mailoney, Honeytrap, Elasticpot and Ciscoasa
- Add 1080p T-Pot wallpaper for download
## 20190824
- **Add some logo work**
- Thanks to @thehadilps's suggestion, adjusted the social preview
- Added 4k T-Pot wallpaper for download
## 20190823
- **Fix for broken Fuse package**
- Fuse package in upstream is broken
- Adjust installer as workaround, fixes #442
## 20190816
- **Upgrades and rebuilds**
- Adjust Dionaea to avoid nmap detection, fixes #435 (thanks @iukea1)
- Bump Tanner, Cyberchef, Spiderfoot and ES Head to latest master
## 20190815
- **Bump ELK stack to 6.7.2**
- Transition to 7.x must iterate slowly through previous versions to prevent changes breaking T-Pots
## 20190814
- **Logstash Translation Maps improvement**
- Download translation maps rather than running a git pull
- Translation maps will now be bzip2 compressed to reduce traffic to a minimum
- Fixes #432
## 20190802
- **Add support for Buster as base image**

View File

@ -1,6 +1,6 @@
# T-Pot 19.03
![T-Pot](doc/tpotsocial.png)
T-Pot 19.03 runs on Debian (Sid), is based heavily on
T-Pot 19.03 runs on Debian (Stable), is based heavily on
[docker](https://www.docker.com/), [docker-compose](https://docs.docker.com/compose/)
@ -8,6 +8,7 @@ and includes dockerized versions of the following honeypots
* [adbhoney](https://github.com/huuck/ADBHoney),
* [ciscoasa](https://github.com/Cymmetria/ciscoasa_honeypot),
* [citrixhoneypot](https://github.com/MalwareTech/CitrixHoneypot),
* [conpot](http://conpot.org/),
* [cowrie](https://github.com/cowrie/cowrie),
* [dionaea](https://github.com/DinoTools/dionaea),
@ -42,7 +43,6 @@ Furthermore we use the following tools
# Table of Contents
- [Changelog](#changelog)
- [Technical Concept](#concept)
- [System Requirements](#requirements)
- [Installation](#installation)
@ -73,76 +73,22 @@ Furthermore we use the following tools
- [Credits](#credits)
- [Stay tuned](#staytuned)
- [Testimonial](#testimonial)
- [Fun Fact](#funfact)
<a name="changelog"></a>
# Release Notes
- **Move from Ubuntu 18.04 to Debian (Sid)**
- For almost 5 years Ubuntu LTS versions were our distributions of choice. Last year we made a design choice for T-Pot to be closer to a rolling release model and thus allowing us to issue smaller changes and releases in a more timely manner. The distribution of choice is Debian (Sid / unstable) which will provide us with the latest advancements in a Debian based distribution.
- **Include HoneyPy honeypot**
- *HoneyPy* is now included in the NEXTGEN installation type
- **Include Suricata 4.1.3**
- Building *Suricata 4.1.3* from scratch to enable JA3 and overall better protocol support.
- **Update tools to the latest versions**
- ELK Stack 6.6.2
- CyberChef 8.27.0
- SpiderFoot v3.0
- Cockpit 188
- NGINX is now built to enforce TLS 1.3 on the T-Pot WebUI
- **Update honeypots**
- Where possible / feasible the honeypots have been updated to their latest versions.
- *Cowrie* now supports *HASSH* generated hashes which allows for an easier identification of an attacker across IP addresses.
- *Heralding* now supports *SOCKS5* emulation.
- **Update Dashboards & Visualizations**
- *Offset Dashboard* added to easily spot changes in attacks on a single dashboard in 24h time window.
- *Cowrie Dashboard* modified to integrate *HASSH* support / visualizations.
- *HoneyPy Dashboard* added to support latest honeypot addition.
- *Suricata Dashboard* modified to integrate *JA3* support / visualizations.
- **Debian mirror selection**
- During base install you now have to manually select a mirror.
- Upon T-Pot install the mirror closest to you will be determined automatically, `netselect-apt` requires you to allow ICMP outbound.
- This solves peering problems for most of the users speeding up installation and updates.
- **Bugs**
- Fixed issue #298 where the import and export of objects on the shell did not work.
- Fixed issue #313 where Spiderfoot raised a KeyError, which was previously fixed in upstream.
- Fixed error in Suricata where path for reference.config changed.
- **Release Cycle**
- As far as possible we will integrate changes now faster into the master branch, eliminating the need for monolithic releases. The update feature will be continuously improved on that behalf. However this might not account for all feature changes.
- **HPFEEDS Opt-In**
- If you want to share your T-Pot data with a 3rd party HPFEEDS broker such as [SISSDEN](https://sissden.eu) you can do so by creating an account at the SISSDEN portal and run `hpfeeds_optin.sh` on T-Pot.
- **Update Feature**
- For the ones who like to live on the bleeding edge of T-Pot development there is now an update script available in `/opt/tpot/update.sh`.
- This feature is beta and is mostly intended to provide you with the latest development advances without the need of reinstalling T-Pot.
- **Deprecated tools**
- *ctop* will no longer be part of T-Pot.
- **Fix #332**
- If T-Pot, opposed to the requirements, does not have full internet access netselect-apt fails to determine the fastest mirror as it needs ICMP and UDP outgoing. Should netselect-apt fail the default mirrors will be used.
- **Improve install speed with apt-fast**
- Migrating from a stable base install to Debian (Sid) requires downloading lots of packages. Depending on your geo location the download speed was already improved by introducing netselect-apt to determine the fastest mirror. With apt-fast the downloads will be even faster by downloading packages not only in parallel but also with multiple connections per package.
- **HPFEEDS Opt-In commandline option**
- Pass a hpfeeds config file as a commandline argument
- hpfeeds config is saved in `/data/ews/conf/hpfeeds.cfg`
- Update script restores hpfeeds config
- **Ansible T-Pot Deployment**
- Transitioned from bash script to all Ansible
- Reusable Ansible Playbook for OpenStack clouds
- Example Showcase with our Open Telekom Cloud
- Adaptable for other cloud providers
<a name="concept"></a>
# Technical Concept
T-Pot is based on the network installer Debian (Stretch). During installation the whole system will be updated to Debian (Sid).
T-Pot is based on the network installer Debian (Stable).
The honeypot daemons as well as other support components being used have been containerized using [docker](http://docker.io).
This allows us to run multiple honeypot daemons on the same network interface while maintaining a small footprint and constrain each honeypot within its own environment.
In T-Pot we combine the dockerized honeypots ...
* [adbhoney](https://github.com/huuck/ADBHoney),
* [ciscoasa](https://github.com/Cymmetria/ciscoasa_honeypot),
* [citrixhoneypot](https://github.com/MalwareTech/CitrixHoneypot),
* [conpot](http://conpot.org/),
* [cowrie](http://www.micheloosterhof.com/cowrie/),
* [dionaea](https://github.com/DinoTools/dionaea),
* [elasticpot](https://github.com/schmalle/ElasticPot),
* [elasticpot](https://github.com/schmalle/ElasticpotPY),
* [glutton](https://github.com/mushorg/glutton),
* [heralding](https://github.com/johnnykv/heralding),
* [honeypy](https://github.com/foospidy/HoneyPy),
@ -221,7 +167,7 @@ Depending on your installation type, whether you install on [real hardware](#har
- A working, non-proxied, internet connection
##### NextGen Installation (Glutton replacing Honeytrap, HoneyPy replacing Elasticpot)
- Honeypots: adbhoney, ciscoasa, conpot, cowrie, dionaea, glutton, heralding, honeypy, mailoney, rdpy, snare & tanner
- Honeypots: adbhoney, ciscoasa, citrixhoneypot, conpot, cowrie, dionaea, glutton, heralding, honeypy, mailoney, rdpy, snare & tanner
- Tools: cockpit, cyberchef, ELK, elasticsearch head, ewsposter, fatt, NGINX, spiderfoot, p0f and suricata
- 6-8 GB RAM (less RAM is possible but might introduce swapping)
@ -300,7 +246,7 @@ In some cases it is necessary to install Debian 9.7 (Stretch) on your own:
- Within your company you have to setup special policies, software etc.
- You just like to stay on top of things.
The T-Pot Universal Installer will upgrade the system to Debian (Sid) and install all required T-Pot dependencies.
The T-Pot Universal Installer will upgrade the system and install all required T-Pot dependencies.
Just follow these steps:
@ -338,7 +284,7 @@ If you would like to contribute, you can add other cloud deployments like Chef o
You can find an [Ansible](https://www.ansible.com/) based T-Pot deployment in the [`cloud/ansible`](cloud/ansible) folder.
The Playbook in the [`cloud/ansible/openstack`](cloud/ansible/openstack) folder is reusable for all OpenStack clouds out of the box.
It first creates a new server and then installs and configures T-Pot.
It first creates all resources (security group, network, subnet, router), deploys a new server and then installs and configures T-Pot.
You can have a look at the Playbook and easily adapt the deploy role for other [cloud providers](https://docs.ansible.com/ansible/latest/modules/list_of_cloud_modules.html).
@ -385,7 +331,7 @@ In case you need external Admin UI access, forward TCP port 64294 to T-Pot, see
In case you need external SSH access, forward TCP port 64295 to T-Pot, see below.
In case you need external Web UI access, forward TCP port 64297 to T-Pot, see below.
T-Pot requires outgoing git, http, https connections for updates (Debian, Docker, GitHub, PyPi) and attack submission (ewsposter, hpfeeds). Ports and availability may vary based on your geographical location.
T-Pot requires outgoing git, http, https connections for updates (Debian, Docker, GitHub, PyPi) and attack submission (ewsposter, hpfeeds). Ports and availability may vary based on your geographical location. Also during first install outgoing ICMP is required additionally to find the closest and fastest mirror to you.
<a name="updates"></a>
# Updates
@ -394,7 +340,7 @@ For the ones of you who want to live on the bleeding edge of T-Pot development w
The Update script will:
- **mercilessly** overwrite local changes to be in sync with the T-Pot master branch
- upgrade the system to the packages available in Debian (Sid)
- upgrade the system to the packages available in Debian (Stable)
- update all resources to be in-sync with the T-Pot master branch
- ensure all T-Pot relevant system files will be patched / copied into the original T-Pot state
- restore your custom ews.cfg and HPFEED settings from `/data/ews/conf`
@ -422,6 +368,8 @@ If you do not have a SSH client at hand and still want to access the machine via
- user: **[tsec or user]** *you chose during one of the post install methods*
- pass: **[password]** *you chose during the installation*
You can also add two factor authentication to Cockpit just by running `2fa.sh` on the command line.
![Cockpit Terminal](doc/cockpit3.png)
<a name="kibana"></a>
@ -485,9 +433,8 @@ We encourage you not to disable the data submission as it is the main purpose of
<a name="hpfeeds-optin"></a>
## Opt-In HPFEEDS Data Submission
As an Opt-In it is now possible to also share T-Pot data with 3rd party HPFEEDS brokers, such as [SISSDEN](https://sissden.eu).
As an Opt-In it is now possible to also share T-Pot data with 3rd party HPFEEDS brokers.
If you want to share your T-Pot data you simply have to register an account with a 3rd party broker with its own benefits towards the community. Once registered you will receive your credentials to share events with the broker. In T-Pot you simply run `hpfeeds_optin.sh` which will ask for your credentials, in case of SISSDEN this is just `Ident` and `Secret`, everything else is pre-configured.
If you want to share your T-Pot data you simply have to register an account with a 3rd party broker with its own benefits towards the community. You simply run `hpfeeds_optin.sh` which will ask for your credentials. It will automatically update `/opt/tpot/etc/tpot.yml` to deliver events to your desired broker.
It will automatically update `/opt/tpot/etc/tpot.yml` to deliver events to your desired broker.
The script can accept a config file as an argument, e.g. `./hpfeeds_optin.sh --conf=hpfeeds.cfg`
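A hypothetical `hpfeeds.cfg`, sketched from the variables the opt-in script writes further down in this diff; every value below is a placeholder:
```bash
myENABLE=true
myHOST=hpfeeds.example.org
myPORT=10000
myCHANNEL=t-pot.events
myIDENT=your-ident
mySECRET=your-secret
myCERT=false
myFORMAT=json
```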
@ -526,10 +473,10 @@ We hope you understand that we cannot provide support on an individual basis. We
# Licenses
The software that T-Pot is built on uses the following licenses.
<br>GPLv2: [conpot](https://github.com/mushorg/conpot/blob/master/LICENSE.txt), [dionaea](https://github.com/DinoTools/dionaea/blob/master/LICENSE), [honeypy](https://github.com/foospidy/HoneyPy/blob/master/LICENSE), [honeytrap](https://github.com/armedpot/honeytrap/blob/master/LICENSE), [suricata](http://suricata-ids.org/about/open-source/)
<br>GPLv3: [adbhoney](https://github.com/huuck/ADBHoney), [elasticpot](https://github.com/schmalle/ElasticPot), [ewsposter](https://github.com/dtag-dev-sec/ews/), [fatt](https://github.com/0x4D31/fatt/blob/master/LICENSE), [rdpy](https://github.com/citronneur/rdpy/blob/master/LICENSE), [heralding](https://github.com/johnnykv/heralding/blob/master/LICENSE.txt), [snare](https://github.com/mushorg/snare/blob/master/LICENSE), [tanner](https://github.com/mushorg/snare/blob/master/LICENSE)
<br>GPLv3: [adbhoney](https://github.com/huuck/ADBHoney), [elasticpot](https://github.com/schmalle/ElasticpotPY), [ewsposter](https://github.com/dtag-dev-sec/ews/), [fatt](https://github.com/0x4D31/fatt/blob/master/LICENSE), [rdpy](https://github.com/citronneur/rdpy/blob/master/LICENSE), [heralding](https://github.com/johnnykv/heralding/blob/master/LICENSE.txt), [snare](https://github.com/mushorg/snare/blob/master/LICENSE), [tanner](https://github.com/mushorg/snare/blob/master/LICENSE)
<br>Apache 2 License: [cyberchef](https://github.com/gchq/CyberChef/blob/master/LICENSE), [elasticsearch](https://github.com/elasticsearch/elasticsearch/blob/master/LICENSE.txt), [logstash](https://github.com/elasticsearch/logstash/blob/master/LICENSE), [kibana](https://github.com/elasticsearch/kibana/blob/master/LICENSE.md), [docker](https://github.com/docker/docker/blob/master/LICENSE), [elasticsearch-head](https://github.com/mobz/elasticsearch-head/blob/master/LICENCE)
<br>MIT license: [ciscoasa](https://github.com/Cymmetria/ciscoasa_honeypot/blob/master/LICENSE), [glutton](https://github.com/mushorg/glutton/blob/master/LICENSE)
<br> Other: [cowrie](https://github.com/micheloosterhof/cowrie/blob/master/LICENSE.md), [mailoney](https://github.com/awhitehatter/mailoney), [Debian licensing](https://www.debian.org/legal/licenses/)
<br> Other: [citrixhoneypot](https://github.com/MalwareTech/CitrixHoneypot#licencing-agreement-malwaretech-public-licence), [cowrie](https://github.com/micheloosterhof/cowrie/blob/master/LICENSE.md), [mailoney](https://github.com/awhitehatter/mailoney), [Debian licensing](https://www.debian.org/legal/licenses/)
<a name="credits"></a>
# Credits
@ -540,6 +487,7 @@ Without open source and the fruitful development community (we are proud to be a
* [adbhoney](https://github.com/huuck/ADBHoney/graphs/contributors)
* [apt-fast](https://github.com/ilikenwf/apt-fast/graphs/contributors)
* [ciscoasa](https://github.com/Cymmetria/ciscoasa_honeypot/graphs/contributors)
* [citrixhoneypot](https://github.com/MalwareTech/CitrixHoneypot/graphs/contributors)
* [cockpit](https://github.com/cockpit-project/cockpit/graphs/contributors)
* [conpot](https://github.com/mushorg/conpot/graphs/contributors)
* [cowrie](https://github.com/micheloosterhof/cowrie/graphs/contributors)
@ -583,8 +531,3 @@ We will be releasing a new version of T-Pot about every 6-12 months.
# Testimonial
One of the greatest feedback we have gotten so far is by one of the Conpot developers:<br>
***"[...] I highly recommend T-Pot which is ... it's not exactly a swiss army knife .. it's more like a swiss army soldier, equipped with a swiss army knife. Inside a tank. A swiss tank. [...]"***
<a name="funfact"></a>
# Fun Fact
In an effort of saving the environment we are now brewing our own Mate Ice Tea and consumed 73 liters so far for the T-Pot 19.03 development 😇

bin/2fa.sh (new executable file, 77 lines)
View File

@ -0,0 +1,77 @@
#!/bin/bash
# Make sure script is started as non-root.
myWHOAMI=$(whoami)
if [ "$myWHOAMI" = "root" ]
then
echo "Need to run as non-root ..."
echo ""
exit
fi
# set vars, check deps
myPAM_COCKPIT_FILE="/etc/pam.d/cockpit"
if ! [ -s "$myPAM_COCKPIT_FILE" ];
then
echo "### Cockpit PAM module config does not exist. Something went wrong."
echo ""
exit 1
fi
myPAM_COCKPIT_GA="
# google authenticator for two-factor
auth required pam_google_authenticator.so
"
myAUTHENTICATOR=$(which google-authenticator)
if [ "$myAUTHENTICATOR" == "" ];
then
echo "### Could not locate google-authenticator, trying to install (if asked provide root password)."
echo ""
sudo apt-get update
sudo apt-get install -y libpam-google-authenticator
exec "$1" "$2"
exit 1
fi
# write PAM changes
function fuWRITE_PAM_CHANGES {
myCHECK=$(cat $myPAM_COCKPIT_FILE | grep -c "google")
if ! [ "$myCHECK" == "0" ];
then
echo "### PAM config already enabled. Skipped."
echo ""
else
echo "### Updating PAM config for Cockpit (if asked provide root password)."
echo "$myPAM_COCKPIT_GA" | sudo tee -a $myPAM_COCKPIT_FILE
sudo systemctl restart cockpit
fi
}
# create 2fa
function fuGEN_TOKEN {
echo "### Now generating token for Google Authenticator."
echo ""
google-authenticator -t -d -r 3 -R 30 -w 17
}
# main
echo "### This script will enable Two Factor Authentication for Cockpit."
echo ""
echo "### Please download one of the many authenticator apps from the appstore of your choice."
echo ""
while true;
do
read -p "### Ready to start (y/n)? " myANSWER
case $myANSWER in
[Yy]* ) echo "### OK. Starting ..."; break;;
[Nn]* ) echo "### Exiting."; exit;;
esac
done
fuWRITE_PAM_CHANGES
fuGEN_TOKEN
echo "Done. Re-run this script by every user who needs Cockpit access."
echo ""

View File

@ -5,6 +5,9 @@ myRED=""
myGREEN=""
myWHITE=""
# Set pigz
myPIGZ=$(which pigz)
# Set persistence
myPERSISTENCE=$1
@ -46,14 +49,14 @@ chmod 644 /data/nginx/cert -R
logrotate -f -s $mySTATUS $myCONF
# Compressing some folders first and rotate them later
if [ "$(fuEMPTY $myADBHONEYDL)" != "0" ]; then tar cvfz $myADBHONEYTGZ $myADBHONEYDL; fi
if [ "$(fuEMPTY $myADBHONEYDL)" != "0" ]; then tar -I $myPIGZ -cvf $myADBHONEYTGZ $myADBHONEYDL; fi
if [ "$(fuEMPTY $myCOWRIETTYLOGS)" != "0" ]; then tar cvfz $myCOWRIETTYTGZ $myCOWRIETTYLOGS; fi
if [ "$(fuEMPTY $myCOWRIETTYLOGS)" != "0" ]; then tar -I $myPIGZ -cvf $myCOWRIETTYTGZ $myCOWRIETTYLOGS; fi
if [ "$(fuEMPTY $myCOWRIEDL)" != "0" ]; then tar cvfz $myCOWRIEDLTGZ $myCOWRIEDL; fi
if [ "$(fuEMPTY $myCOWRIEDL)" != "0" ]; then tar -I $myPIGZ -cvf $myCOWRIEDLTGZ $myCOWRIEDL; fi
if [ "$(fuEMPTY $myDIONAEABI)" != "0" ]; then tar cvfz $myDIONAEABITGZ $myDIONAEABI; fi
if [ "$(fuEMPTY $myDIONAEABI)" != "0" ]; then tar -I $myPIGZ -cvf $myDIONAEABITGZ $myDIONAEABI; fi
if [ "$(fuEMPTY $myDIONAEABIN)" != "0" ]; then tar cvfz $myDIONAEABINTGZ $myDIONAEABIN; fi
if [ "$(fuEMPTY $myDIONAEABIN)" != "0" ]; then tar -I $myPIGZ -cvf $myDIONAEABINTGZ $myDIONAEABIN; fi
if [ "$(fuEMPTY $myHONEYTRAPATTACKS)" != "0" ]; then tar cvfz $myHONEYTRAPATTACKSTGZ $myHONEYTRAPATTACKS; fi
if [ "$(fuEMPTY $myHONEYTRAPATTACKS)" != "0" ]; then tar -I $myPIGZ -cvf $myHONEYTRAPATTACKSTGZ $myHONEYTRAPATTACKS; fi
if [ "$(fuEMPTY $myHONEYTRAPDL)" != "0" ]; then tar cvfz $myHONEYTRAPDLTGZ $myHONEYTRAPDL; fi
if [ "$(fuEMPTY $myHONEYTRAPDL)" != "0" ]; then tar -I $myPIGZ -cvf $myHONEYTRAPDLTGZ $myHONEYTRAPDL; fi
if [ "$(fuEMPTY $myTANNERF)" != "0" ]; then tar cvfz $myTANNERFTGZ $myTANNERF; fi
if [ "$(fuEMPTY $myTANNERF)" != "0" ]; then tar -I $myPIGZ -cvf $myTANNERFTGZ $myTANNERF; fi
# Ensure correct permissions and ownership for previously created archives
chmod 770 $myADBHONEYTGZ $myCOWRIETTYTGZ $myCOWRIEDLTGZ $myDIONAEABITGZ $myDIONAEABINTGZ $myHONEYTRAPATTACKSTGZ $myHONEYTRAPDLTGZ $myTANNERFTGZ
@ -87,6 +90,14 @@ fuCISCOASA () {
chown tpot:tpot /data/ciscoasa -R
}
# Let's create a function to clean up and prepare citrixhoneypot data
fuCITRIXHONEYPOT () {
if [ "$myPERSISTENCE" != "on" ]; then rm -rf /data/citrixhoneypot/*; fi
mkdir -p /data/citrixhoneypot/logs/
chmod 770 /data/citrixhoneypot/ -R
chown tpot:tpot /data/citrixhoneypot/ -R
}
# Let's create a function to clean up and prepare conpot data
fuCONPOT () {
if [ "$myPERSISTENCE" != "on" ]; then rm -rf /data/conpot/*; fi
@ -257,6 +268,7 @@ if [ "$myPERSISTENCE" = "on" ];
echo "Cleaning up and preparing data folders."
fuADBHONEY
fuCISCOASA
fuCITRIXHONEYPOT
fuCONPOT
fuCOWRIE
fuDIONAEA

View File

@ -1,4 +1,4 @@
#/bin/bash
#!/bin/bash
# Run as root only.
myWHOAMI=$(whoami)

View File

@ -32,7 +32,7 @@ trap fuCLEANUP EXIT
# Export index patterns
mkdir -p patterns
echo $myCOL1"### Now exporting"$myCOL0 $myINDEXCOUNT $myCOL1"index pattern fields." $myCOL0
curl -s -XGET ''$myKIBANA'api/saved_objects/index-pattern/'$myINDEXID'' | jq '. | {attributes}' > patterns/$myINDEXID.json &
curl -s -XGET ''$myKIBANA'api/saved_objects/index-pattern/'$myINDEXID'' | jq '. | {attributes, references}' > patterns/$myINDEXID.json &
echo
# Export dashboards
@ -41,7 +41,7 @@ echo $myCOL1"### Now exporting"$myCOL0 $(echo $myDASHBOARDS | wc -w) $myCOL1"das
for i in $myDASHBOARDS;
do
echo $myCOL1"###### "$i $myCOL0
curl -s -XGET ''$myKIBANA'api/saved_objects/dashboard/'$i'' | jq '. | {attributes}' > dashboards/$i.json &
curl -s -XGET ''$myKIBANA'api/saved_objects/dashboard/'$i'' | jq '. | {attributes, references}' > dashboards/$i.json &
done;
echo
@ -51,7 +51,7 @@ echo $myCOL1"### Now exporting"$myCOL0 $(echo $myVISUALIZATIONS | wc -w) $myCOL1
for i in $myVISUALIZATIONS;
do
echo $myCOL1"###### "$i $myCOL0
curl -s -XGET ''$myKIBANA'api/saved_objects/visualization/'$i'' | jq '. | {attributes}' > visualizations/$i.json &
curl -s -XGET ''$myKIBANA'api/saved_objects/visualization/'$i'' | jq '. | {attributes, references}' > visualizations/$i.json &
done;
echo
@ -61,7 +61,7 @@ echo $myCOL1"### Now exporting"$myCOL0 $(echo $mySEARCHES | wc -w) $myCOL1"searc
for i in $mySEARCHES;
do
echo $myCOL1"###### "$i $myCOL0
curl -s -XGET ''$myKIBANA'api/saved_objects/search/'$i'' | jq '. | {attributes}' > searches/$i.json &
curl -s -XGET ''$myKIBANA'api/saved_objects/search/'$i'' | jq '. | {attributes, references}' > searches/$i.json &
done;
echo

View File

@ -10,20 +10,6 @@ fi
myTPOTYMLFILE="/opt/tpot/etc/tpot.yml"
function fuSISSDEN () {
echo
echo "You chose SISSDEN, you just need to provide ident and secret"
echo
myENABLE="true"
myHOST="hpfeeds.sissden.eu"
myPORT="10000"
myCHANNEL="t-pot.events"
myCERT="/opt/ewsposter/sissden.pem"
read -p "Ident: " myIDENT
read -p "Secret: " mySECRET
myFORMAT="json"
}
function fuGENERIC () {
echo
echo "You chose generic, please provide all the details of the broker"
@ -78,9 +64,9 @@ myENABLE=$myENABLE
myHOST=$myHOST
myPORT=$myPORT
myCHANNEL=$myCHANNEL
myCERT=$myCERT
myIDENT=$myIDENT
mySECRET=$mySECRET
myCERT=$myCERT
myFORMAT=$myFORMAT
EOF
}
@ -119,8 +105,7 @@ echo
echo
echo "Please choose your broker"
echo "---------------------------"
echo "[1] - SISSDEN"
echo "[1] - Generic (enter details manually)"
echo "[2] - Generic (enter details manually)"
echo "[0] - Opt out of HPFEEDS"
echo "[q] - Do not agree end exit"
echo
@ -130,10 +115,6 @@ while [ 1 != 2 ]
echo $mySELECT
case "$mySELECT" in
[1])
fuSISSDEN
break
;;
[2])
fuGENERIC
break
;;

bin/mytopips.sh (new executable file, 27 lines)
View File

@ -0,0 +1,27 @@
#!/bin/bash
# Make sure ES is available
myES="http://127.0.0.1:64298/"
myESSTATUS=$(curl -s -XGET ''$myES'_cluster/health' | jq '.' | grep -c green)
if ! [ "$myESSTATUS" = "1" ]
then
echo "### Elasticsearch is not available, try starting via 'systemctl start elk'."
exit 1
else
echo "### Elasticsearch is available, now continuing."
echo
fi
function fuMYTOPIPS {
curl -s -XGET $myES"_search" -H 'Content-Type: application/json' -d'
{
"aggs": {
"ips": {
"terms": { "field": "src_ip.keyword", "size": 100 }
}
},
"size" : 0
}'
}
echo "### Aggregating top 100 source IPs in ES"
fuMYTOPIPS | jq '.aggregations.ips.buckets[].key' | tr -d '"'
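A possible invocation, assuming the script is deployed to `/opt/tpot/bin/` like the other helper scripts:
```bash
/opt/tpot/bin/mytopips.sh > mytopips.txt
```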

bin/unlock_es.sh (new executable file, 19 lines)
View File

@ -0,0 +1,19 @@
#!/bin/bash
# Unlock all ES indices for read / write mode
# Useful in cases where ES locked all indices after disk quota has been reached
# Make sure ES is available
myES="http://127.0.0.1:64298/"
myESSTATUS=$(curl -s -XGET ''$myES'_cluster/health' | jq '.' | grep -c "green\|yellow")
if ! [ "$myESSTATUS" = "1" ]
then
echo "### Elasticsearch is not available, try starting via 'systemctl start tpot'."
exit
else
echo "### Elasticsearch is available, now continuing."
echo
fi
echo "### Trying to unlock all ES indices for read / write operation: "
curl -XPUT -H "Content-Type: application/json" ''$myES'_all/_settings' -d '{"index.blocks.read_only_allow_delete": null}'
echo

View File

@ -4,7 +4,7 @@ Here you can find a ready-to-use solution for your automated T-Pot deployment us
It consists of an Ansible Playbook with multiple roles, which is reusable for all [OpenStack](https://www.openstack.org/) based clouds (e.g. Open Telekom Cloud, Orange Cloud, Telefonica Open Cloud, OVH) out of the box.
Apart from that you can easily adapt the deploy role to use other [cloud providers](https://docs.ansible.com/ansible/latest/modules/list_of_cloud_modules.html) (e.g. AWS, Azure, Digital Ocean, Google).
The Playbook first creates a new server and then installs and configures T-Pot.
The Playbook first creates all resources (security group, network, subnet, router), deploys a new server and then installs and configures T-Pot.
This example showcases the deployment on our own OpenStack based Public Cloud Offering [Open Telekom Cloud](https://open-telekom-cloud.com/en).
@ -16,7 +16,6 @@ This example showcases the deployment on our own OpenStack based Public Cloud Of
- [Create new project](#project)
- [Create API user](#api-user)
- [Import Key Pair](#key-pair)
- [Create VPC, Subnet and Security Group](#vpc-subnet-securitygroup)
- [Clone Git Repository](#clone-git)
- [Settings and recommended values](#settings)
- [OpenStack authentication variables](#os-auth)
@ -38,7 +37,12 @@ Ansible works over the SSH Port, so you don't have to add any special rules to y
<a name="ansible"></a>
## Ansible Installation
Example for Ubuntu 18.04:
At first we need to add the repository and install Ansible:
At first we update the system:
`sudo apt update`
`sudo apt dist-upgrade`
Then we need to add the repository and install Ansible:
`sudo apt-add-repository --yes --update ppa:ansible/ansible`
`sudo apt install ansible`
@ -46,26 +50,20 @@ For other OSes and Distros have a look at the official [Ansible Documentation](h
<a name="agent-forwarding"></a>
## Agent Forwarding
Agent Forwarding must be enabled in order to let Ansible do its work.
If you run the Ansible Playbook remotely on your Ansible Master Server, Agent Forwarding must be enabled in order to let Ansible connect to newly created machines.
- On Linux or macOS:
- Create or edit `~/.ssh/config`
- If you run the Ansible Playbook remotely on your Ansible Master Server:
```
Host ANSIBLE_MASTER_IP
ForwardAgent yes
```
- If you run the Ansible Playbook locally, enable it for all hosts, as this includes newly generated T-Pots:
- On Windows using Putty:
```
Host *
ForwardAgent yes
```
- On Windows using Putty for connecting to your Ansible Master Server:
![Putty Agent Forwarding](doc/putty_agent_forwarding.png)
<a name="preparation"></a>
# Preparations in Open Telekom Cloud Console
(You can skip this if you have already set up an API account, VPC, Subnet and Security Group)
(You can skip this if you have already set up a project and an API account with key pair)
(Just make sure you know the naming for everything, as you will need it to configure the Ansible variables.)
(Just make sure you know the naming for everything, as you need to configure the Ansible variables.)
Before we can start deploying, we have to prepare the Open Telekom Cloud tenant.
For that, go to the [Web Console](https://auth.otc.t-systems.com/authui/login) and log in with an admin user.
@ -90,22 +88,10 @@ This ensures that the API access is limited to that project.
![Login as API user](doc/otc_3_login.gif)
Import your SSH public key.
![Import SSH Public Key](doc/otc_4_import_key.gif)
<a name="vpc-subnet-securitygroup"></a>
## Create VPC, Subnet and Security Group
- VPC (Virtual Private Cloud) and Subnet:
![Create VPC and Subnet](doc/otc_5_vpc_subnet.gif)
- Security Group:
The configured Security Group should allow all incoming TCP / UDP traffic.
If you want to secure the management interfaces, you can limit the incoming "allow all" traffic to the port range of 1-64000 and allow access to ports > 64000 only from your trusted IPs.
![Create Security Group](doc/otc_6_sec_group.gif)
<a name="clone-git"></a>
# Clone Git Repository
@ -142,24 +128,20 @@ Located at [`openstack/roles/deploy/vars/main.yaml`](openstack/roles/deploy/vars
Here you can customize your virtual machine specifications:
- Specify the region name
- Choose an availability zone. For Open Telekom Cloud reference see [here](https://docs.otc.t-systems.com/en-us/endpoint/index.html).
- Change the OS image (For T-Pot we need Debian 9)
- Change the OS image (For T-Pot we need Debian)
- (Optional) Change the volume size
- Specify your key pair
- Specify your key pair (:warning: Mandatory)
- (Optional) Change the instance type (flavor)
`s2.medium.8` corresponds to 1 vCPU and 8GB of RAM and is the minimum required flavor.
A full list of Open telekom Cloud flavors can be found [here](https://docs.otc.t-systems.com/en-us/usermanual/ecs/en-us_topic_0035470096.html).
- Specify the security group
- Specify the network ID (For Open Telekom Cloud you can find the ID in the Web Console under `Virtual Private Cloud --> your-vpc --> your-subnet --> Network ID`; In general for OpenStack clouds you can use the `python-openstackclient` to retrieve information about your resources)
```
region_name: eu-de
availability_zone: eu-de-03
image: Standard_Debian_10_latest
volume_size: 128
key_name: your-KeyPair
flavor: s2.medium.8
security_groups: your-sg
network: your-network-id
```
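To cross-check these values before editing `main.yaml`, the openstack client can list what your project actually offers; a rough sketch (image and flavor names vary per region):
```
openstack image list --name "Standard_Debian_10_latest"   # confirm the image exists in your region
openstack flavor show s2.medium.8                          # confirm the minimum flavor is available
openstack keypair list                                      # confirm the key pair name for key_name
```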
<a name="user-password"></a> <a name="user-password"></a>

View File

@ -3,3 +3,4 @@ host_key_checking = false
[ssh_connection] [ssh_connection]
scp_if_ssh = true scp_if_ssh = true
ssh_args = -o ServerAliveInterval=60

View File

@ -1,8 +1,6 @@
- name: Check host prerequisites - name: Check host prerequisites
hosts: localhost hosts: localhost
become: yes become: yes
become_user: root
become_method: sudo
roles: roles:
- check - check
@ -15,8 +13,6 @@
hosts: TPOT hosts: TPOT
remote_user: linux remote_user: linux
become: yes become: yes
become_user: root
become_method: sudo
gather_facts: no gather_facts: no
roles: roles:
- install - install
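For orientation, running this two-play playbook (prerequisite check on localhost, installation on the freshly created TPOT host) usually boils down to a single command; the playbook filename used here is an assumption, use whatever the repository actually ships:
```
# Run from the openstack/ directory of the cloned repository (path assumed)
ansible-playbook deploy_tpot.yaml
```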

View File

@ -1,28 +1,17 @@
- name: Install pwgen - name: Install dependencies
package: package:
name: pwgen name:
state: present - pwgen
- python-setuptools
- name: Install setuptools - python-pip
package:
name: python-setuptools
state: present
- name: Install pip
package:
name: python-pip
state: present state: present
- name: Install openstacksdk - name: Install openstacksdk
pip: pip:
name: openstacksdk name: openstacksdk
- name: Set fact for agent forwarding
set_fact:
agent_forwarding: "{{ lookup('env','SSH_AUTH_SOCK') }}"
- name: Check if agent forwarding is enabled - name: Check if agent forwarding is enabled
fail: fail:
msg: Please enable agent forwarding to allow Ansible to connect to the remote host! msg: Please enable agent forwarding to allow Ansible to connect to the remote host!
ignore_errors: yes ignore_errors: yes
when: agent_forwarding == "" when: lookup('env','SSH_AUTH_SOCK') == ""

View File

@ -9,5 +9,5 @@
- name: Patching tpot.yml with custom ews configuration file - name: Patching tpot.yml with custom ews configuration file
lineinfile: lineinfile:
path: /opt/tpot/etc/tpot.yml path: /opt/tpot/etc/tpot.yml
insertafter: '/opt/ewsposter/ews.ip' insertafter: "/opt/ewsposter/ews.ip"
line: ' - /data/ews/conf/ews.cfg:/opt/ewsposter/ews.cfg' line: " - /data/ews/conf/ews.cfg:/opt/ewsposter/ews.cfg"

View File

@ -1,10 +1,12 @@
- name: Copy hpfeeds configuration file - name: Copy hpfeeds configuration file
template: copy:
src: ../templates/hpfeeds.cfg src: ../files/hpfeeds.cfg
dest: /data/ews/conf dest: /data/ews/conf
owner: root owner: tpot
group: root group: tpot
mode: 0644 mode: 0770
register: config
- name: Applying hpfeeds settings - name: Applying hpfeeds settings
command: /opt/tpot/bin/hpfeeds_optin.sh --conf=/data/ews/conf/hpfeeds.cfg command: /opt/tpot/bin/hpfeeds_optin.sh --conf=/data/ews/conf/hpfeeds.cfg
when: config.changed == true

View File

@ -5,6 +5,66 @@
- name: Import OpenStack authentication variables - name: Import OpenStack authentication variables
include_vars: include_vars:
file: roles/deploy/vars/os_auth.yaml file: roles/deploy/vars/os_auth.yaml
no_log: true
- name: Create security group
os_security_group:
auth:
auth_url: "{{ auth_url }}"
username: "{{ username }}"
password: "{{ password }}"
project_name: "{{ project_name }}"
os_user_domain_name: "{{ os_user_domain_name }}"
name: sg-tpot-any
description: tpot any-any
- name: Add rules to security group
os_security_group_rule:
auth:
auth_url: "{{ auth_url }}"
username: "{{ username }}"
password: "{{ password }}"
project_name: "{{ project_name }}"
os_user_domain_name: "{{ os_user_domain_name }}"
security_group: sg-tpot-any
remote_ip_prefix: 0.0.0.0/0
- name: Create network
os_network:
auth:
auth_url: "{{ auth_url }}"
username: "{{ username }}"
password: "{{ password }}"
project_name: "{{ project_name }}"
os_user_domain_name: "{{ os_user_domain_name }}"
name: network-tpot
- name: Create subnet
os_subnet:
auth:
auth_url: "{{ auth_url }}"
username: "{{ username }}"
password: "{{ password }}"
project_name: "{{ project_name }}"
os_user_domain_name: "{{ os_user_domain_name }}"
network_name: network-tpot
name: subnet-tpot
cidr: 192.168.0.0/24
dns_nameservers:
- 1.1.1.1
- 8.8.8.8
- name: Create router
os_router:
auth:
auth_url: "{{ auth_url }}"
username: "{{ username }}"
password: "{{ password }}"
project_name: "{{ project_name }}"
os_user_domain_name: "{{ os_user_domain_name }}"
name: router-tpot
interfaces:
- subnet-tpot
- name: Launch an instance - name: Launch an instance
os_server: os_server:
@ -23,8 +83,8 @@
key_name: "{{ key_name }}" key_name: "{{ key_name }}"
timeout: 200 timeout: 200
flavor: "{{ flavor }}" flavor: "{{ flavor }}"
security_groups: "{{ security_groups }}" security_groups: sg-tpot-any
network: "{{ network }}" network: network-tpot
register: tpot register: tpot
- name: Add instance to inventory - name: Add instance to inventory
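Since the deploy role now creates `sg-tpot-any`, `network-tpot`, `subnet-tpot` and `router-tpot` on the fly, it can be useful to inspect (or later clean up) these resources from the CLI; a sketch with the openstack client, assuming your `OS_*` credentials are exported:
```
openstack security group show sg-tpot-any
openstack network show network-tpot
openstack subnet show subnet-tpot
openstack router show router-tpot
```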

View File

@ -1,8 +1,6 @@
region_name: eu-de region_name: eu-de
availability_zone: eu-de-03 availability_zone: eu-de-03
image: Standard_Debian_9_latest image: Standard_Debian_10_latest
volume_size: 128 volume_size: 128
key_name: your-KeyPair key_name: your-KeyPair
flavor: s2.medium.8 flavor: s2.medium.8
security_groups: your-sg
network: your-network-id

View File

@ -1,7 +1,5 @@
- name: Waiting for SSH connection - name: Waiting for SSH connection
wait_for_connection: wait_for_connection:
delay: 30
timeout: 300
- name: Gathering facts - name: Gathering facts
setup: setup:
@ -14,16 +12,15 @@
- name: Prepare to set user password - name: Prepare to set user password
set_fact: set_fact:
user_name: "{{ ansible_user }}" user_name: "{{ ansible_user }}"
user_password: "{{ user_password }}"
user_salt: "s0mew1ck3dTpoT" user_salt: "s0mew1ck3dTpoT"
no_log: true
- name: Changing password for user {{ user_name }} to {{ user_password }} - name: Changing password for user {{ user_name }}
user: user:
name: "{{ ansible_user }}" name: "{{ ansible_user }}"
password: "{{ user_password | password_hash('sha512', user_salt) }}" password: "{{ user_password | password_hash('sha512', user_salt) }}"
state: present state: present
shell: /bin/bash shell: /bin/bash
update_password: always
- name: Copy T-Pot configuration file - name: Copy T-Pot configuration file
template: template:
@ -33,7 +30,7 @@
group: root group: root
mode: 0644 mode: 0644
- name: Install T-Pot on instance - be patient, this might take 15 to 30 minutes depending on the connection speed. No further output is given. - name: Install T-Pot on instance - be patient, this might take 15 to 30 minutes depending on the connection speed.
command: /root/tpot/iso/installer/install.sh --type=auto --conf=/root/tpot.conf command: /root/tpot/iso/installer/install.sh --type=auto --conf=/root/tpot.conf
- name: Delete T-Pot configuration file - name: Delete T-Pot configuration file

View File

@ -1,6 +1,7 @@
- name: Finally rebooting T-Pot in one minute - name: Finally rebooting T-Pot
shell: /sbin/shutdown -r -t 1 command: shutdown -r now
become: true async: 1
poll: 0
- name: Next login options - name: Next login options
debug: debug:

View File

@ -62,4 +62,5 @@ resource "aws_instance" "tpot" {
} }
user_data = "${file("../cloud-init.yaml")} content: ${base64encode(file("../tpot.conf"))}" user_data = "${file("../cloud-init.yaml")} content: ${base64encode(file("../tpot.conf"))}"
vpc_security_group_ids = [aws_security_group.tpot.id] vpc_security_group_ids = [aws_security_group.tpot.id]
associate_public_ip_address = true
} }

View File

@ -28,26 +28,27 @@ variable "ec2_instance_type" {
default = "t3.large" default = "t3.large"
} }
# Refer to https://wiki.debian.org/Cloud/AmazonEC2Image/Stretch # Refer to https://wiki.debian.org/Cloud/AmazonEC2Image/Buster
variable "ec2_ami" { variable "ec2_ami" {
type = map(string) type = map(string)
default = { default = {
"ap-northeast-1" = "ami-09fbcd30452841cb9" "ap-east-1" = "ami-b7d0abc6"
"ap-northeast-2" = "ami-08363ccce96df1fff" "ap-northeast-1" = "ami-01f4f0c9374675b99"
"ap-south-1" = "ami-0dc98cbb0d0e49162" "ap-northeast-2" = "ami-0855cb0c55370c38c"
"ap-southeast-1" = "ami-0555b1a5444087dd4" "ap-south-1" = "ami-00d7d1cbdcb087cf3"
"ap-southeast-2" = "ami-029c54f988446691a" "ap-southeast-1" = "ami-03779b1b2fbb3a9d4"
"ca-central-1" = "ami-04413a263a7d94982" "ap-southeast-2" = "ami-0ce3a7c68c6b1678d"
"eu-central-1" = "ami-01fb3b7bab31acac5" "ca-central-1" = "ami-037099906a22f210f"
"eu-north-1" = "ami-050f04ca573daa1fb" "eu-central-1" = "ami-0845c3902a6f2af32"
"eu-west-1" = "ami-0968f6a31fc6cffc0" "eu-north-1" = "ami-e634bf98"
"eu-west-2" = "ami-0faa9c9b5399088fd" "eu-west-1" = "ami-06a53bf81914447b5"
"eu-west-3" = "ami-0cd23820af84edc85" "eu-west-2" = "ami-053d9f0770cd2e34c"
"sa-east-1" = "ami-030580e61468e54bd" "eu-west-3" = "ami-060bf1f444f742af9"
"us-east-1" = "ami-0357081a1383dc76b" "me-south-1" = "ami-04a9a536105c72d30"
"us-east-2" = "ami-09c10a66337c79669" "sa-east-1" = "ami-0a5fd18ed0b9c7f35"
"us-west-1" = "ami-0adbaf2e0ce044437" "us-east-1" = "ami-01db78123b2b99496"
"us-west-2" = "ami-05a3ef6744aa96514" "us-east-2" = "ami-010ffea14ff17ebf5"
"us-west-1" = "ami-0ed1af421f2a3cf40"
"us-west-2" = "ami-030a304a76b181155"
} }
} }
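To try these AWS definitions, a plain Terraform workflow is enough; this is only a sketch, run from the directory containing the `.tf` files shown above:
```
terraform init      # download the AWS provider
terraform plan      # review the planned EC2 instance, AMI choice and security group
terraform apply
```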

Binary files added (not shown in this view): doc/t-pot_qr.png (92 KiB), doc/t-pot_wallpaper_4k.png (606 KiB), doc/tpotsocial.png (148 KiB), and one further binary image (252 KiB) whose name is not shown.

View File

@ -1,31 +1,36 @@
FROM alpine FROM alpine:latest
#
# Include dist
ADD dist/ /root/dist/
#
# Install packages # Install packages
RUN apk -U --no-cache add \ RUN sed -i 's/dl-cdn/dl-2/g' /etc/apk/repositories && \
apk -U add \
git \ git \
libcap \ libcap \
python \ python3 \
python-dev && \ python3-dev && \
#
# Install adbhoney from git # Install adbhoney from git
git clone --depth=1 https://github.com/huuck/ADBHoney /opt/adbhoney && \ git clone --depth=1 https://github.com/huuck/ADBHoney /opt/adbhoney && \
sed -i 's/dst_ip/dest_ip/' /opt/adbhoney/main.py && \ cp /root/dist/adbhoney.cfg /opt/adbhoney && \
sed -i 's/dst_port/dest_port/' /opt/adbhoney/main.py && \ sed -i 's/dst_ip/dest_ip/' /opt/adbhoney/adbhoney/core.py && \
sed -i 's/dst_port/dest_port/' /opt/adbhoney/adbhoney/core.py && \
#
# Setup user, groups and configs # Setup user, groups and configs
addgroup -g 2000 adbhoney && \ addgroup -g 2000 adbhoney && \
adduser -S -H -s /bin/ash -u 2000 -D -g 2000 adbhoney && \ adduser -S -H -s /bin/ash -u 2000 -D -g 2000 adbhoney && \
chown -R adbhoney:adbhoney /opt/adbhoney && \ chown -R adbhoney:adbhoney /opt/adbhoney && \
setcap cap_net_bind_service=+ep /usr/bin/python2.7 && \ setcap cap_net_bind_service=+ep /usr/bin/python3.8 && \
#
# Clean up # Clean up
apk del --purge git \ apk del --purge git \
python-dev && \ python3-dev && \
rm -rf /root/* && \ rm -rf /root/* && \
rm -rf /var/cache/apk/* rm -rf /var/cache/apk/*
#
# Set workdir and start adbhoney # Set workdir and start adbhoney
STOPSIGNAL SIGINT STOPSIGNAL SIGINT
USER adbhoney:adbhoney USER adbhoney:adbhoney
WORKDIR /opt/adbhoney/ WORKDIR /opt/adbhoney/
CMD nohup /usr/bin/python main.py -l log/adbhoney.log -j log/adbhoney.json -d dl/ CMD nohup /usr/bin/python3 run.py

19
docker/adbhoney/dist/adbhoney.cfg vendored Normal file
View File

@ -0,0 +1,19 @@
[honeypot]
hostname = honeypot01
address = 0.0.0.0
port = 5555
download_dir = dl/
log_dir = log/
device_id = device::http://ro.product.name =starltexx;ro.product.model=SM-G960F;ro.product.device=starlte;features=cmd,stat_v2,shell_v2
[output_log]
enabled = true
log_file = adbhoney.log
log_level = info
[output_json]
enabled = true
log_file = adbhoney.json
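With this config the honeypot listens on port 5555 (mapped 1:1 in the compose file below); if you want to see it in action, a quick optional probe from a machine with the Android platform tools installed:
```
adb connect <honeypot-ip>:5555
adb devices        # the honeypot should show up as a connected device
```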

View File

@ -14,7 +14,7 @@ services:
- adbhoney_local - adbhoney_local
ports: ports:
- "5555:5555" - "5555:5555"
image: "dtagdevsec/adbhoney:1903" image: "dtagdevsec/adbhoney:2006"
read_only: true read_only: true
volumes: volumes:
- /data/adbhoney/log:/opt/adbhoney/log - /data/adbhoney/log:/opt/adbhoney/log

View File

@ -1,10 +1,11 @@
FROM alpine FROM alpine:latest
#
# Include dist # Include dist
ADD dist/ /root/dist/ ADD dist/ /root/dist/
#
# Setup env and apt # Setup env and apt
RUN apk -U upgrade && \ RUN sed -i 's/dl-cdn/dl-2/g' /etc/apk/repositories && \
apk -U upgrade && \
apk add build-base \ apk add build-base \
git \ git \
libffi \ libffi \
@ -13,21 +14,20 @@ RUN apk -U upgrade && \
openssl-dev \ openssl-dev \
python3 \ python3 \
python3-dev && \ python3-dev && \
#
# Setup user # Setup user
addgroup -g 2000 ciscoasa && \ addgroup -g 2000 ciscoasa && \
adduser -S -s /bin/bash -u 2000 -D -g 2000 ciscoasa && \ adduser -S -s /bin/bash -u 2000 -D -g 2000 ciscoasa && \
#
# Get and install packages # Get and install packages
mkdir -p /opt/ && \ mkdir -p /opt/ && \
cd /opt/ && \ cd /opt/ && \
git clone --depth=1 https://github.com/cymmetria/ciscoasa_honeypot && \ git clone --depth=1 https://github.com/cymmetria/ciscoasa_honeypot && \
cd ciscoasa_honeypot && \ cd ciscoasa_honeypot && \
pip3 install --no-cache-dir --upgrade pip && \
pip3 install --no-cache-dir -r requirements.txt && \ pip3 install --no-cache-dir -r requirements.txt && \
cp /root/dist/asa_server.py /opt/ciscoasa_honeypot && \ cp /root/dist/asa_server.py /opt/ciscoasa_honeypot && \
chown -R ciscoasa:ciscoasa /opt/ciscoasa_honeypot && \ chown -R ciscoasa:ciscoasa /opt/ciscoasa_honeypot && \
#
# Clean up # Clean up
apk del --purge build-base \ apk del --purge build-base \
git \ git \
@ -36,7 +36,7 @@ RUN apk -U upgrade && \
python3-dev && \ python3-dev && \
rm -rf /root/* && \ rm -rf /root/* && \
rm -rf /var/cache/apk/* rm -rf /var/cache/apk/*
#
# Start ciscoasa # Start ciscoasa
STOPSIGNAL SIGINT STOPSIGNAL SIGINT
WORKDIR /tmp/ciscoasa/ WORKDIR /tmp/ciscoasa/

View File

@ -13,7 +13,7 @@ services:
ports: ports:
- "5000:5000/udp" - "5000:5000/udp"
- "8443:8443" - "8443:8443"
image: "dtagdevsec/ciscoasa:1903" image: "dtagdevsec/ciscoasa:2006"
read_only: true read_only: true
volumes: volumes:
- /data/ciscoasa/log:/var/log/ciscoasa - /data/ciscoasa/log:/var/log/ciscoasa

View File

@ -0,0 +1,45 @@
FROM alpine:latest
#
# Install packages
RUN sed -i 's/dl-cdn/dl-2/g' /etc/apk/repositories && \
apk -U add \
git \
libcap \
openssl \
python3 \
python3-dev && \
#
pip3 install --no-cache-dir python-json-logger && \
#
# Install CitrixHoneypot from GitHub
# git clone --depth=1 https://github.com/malwaretech/citrixhoneypot /opt/citrixhoneypot && \
# git clone --depth=1 https://github.com/vorband/CitrixHoneypot /opt/citrixhoneypot && \
git clone --depth=1 https://github.com/t3chn0m4g3/CitrixHoneypot /opt/citrixhoneypot && \
#
# Setup user, groups and configs
mkdir -p /opt/citrixhoneypot/logs /opt/citrixhoneypot/ssl && \
openssl req \
-nodes \
-x509 \
-newkey rsa:2048 \
-keyout "/opt/citrixhoneypot/ssl/key.pem" \
-out "/opt/citrixhoneypot/ssl/cert.pem" \
-days 365 \
-subj '/C=AU/ST=Some-State/O=Internet Widgits Pty Ltd' && \
addgroup -g 2000 citrixhoneypot && \
adduser -S -H -s /bin/ash -u 2000 -D -g 2000 citrixhoneypot && \
chown -R citrixhoneypot:citrixhoneypot /opt/citrixhoneypot && \
setcap cap_net_bind_service=+ep /usr/bin/python3.8 && \
#
# Clean up
apk del --purge git \
openssl \
python3-dev && \
rm -rf /root/* && \
rm -rf /var/cache/apk/*
#
# Set workdir and start citrixhoneypot
STOPSIGNAL SIGINT
USER citrixhoneypot:citrixhoneypot
WORKDIR /opt/citrixhoneypot/
CMD nohup /usr/bin/python3 CitrixHoneypot.py

View File

@ -0,0 +1,20 @@
version: '2.3'
networks:
citrixhoneypot_local:
services:
# CitrixHoneypot service
citrixhoneypot:
build: .
container_name: citrixhoneypot
restart: always
networks:
- citrixhoneypot_local
ports:
- "443:443"
image: "dtagdevsec/citrixhoneypot:2006"
read_only: true
volumes:
- /data/citrixhoneypot/logs:/opt/citrixhoneypot/logs
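Once the container is running, the service answers on port 443 with a self-signed certificate, so a quick smoke test needs `-k`; this is optional and only assumes `curl` on any client machine:
```
curl -kv https://<honeypot-ip>:443/
```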

View File

@ -1,10 +1,11 @@
FROM alpine FROM alpine:3.10
#
# Include dist # Include dist
ADD dist/ /root/dist/ ADD dist/ /root/dist/
#
# Setup apt # Setup apt
RUN apk -U add \ RUN sed -i 's/dl-cdn/dl-2/g' /etc/apk/repositories && \
apk -U add \
build-base \ build-base \
file \ file \
git \ git \
@ -21,7 +22,7 @@ RUN apk -U add \
py-cryptography \ py-cryptography \
tcpdump \ tcpdump \
wget && \ wget && \
#
# Setup ConPot # Setup ConPot
git clone --depth=1 https://github.com/mushorg/conpot /opt/conpot && \ git clone --depth=1 https://github.com/mushorg/conpot /opt/conpot && \
cd /opt/conpot/ && \ cd /opt/conpot/ && \
@ -37,20 +38,20 @@ RUN apk -U add \
sed -i 's/port="6969"/port="69"/' /opt/conpot/conpot/templates/default/tftp/tftp.xml && \ sed -i 's/port="6969"/port="69"/' /opt/conpot/conpot/templates/default/tftp/tftp.xml && \
sed -i 's/port="16100"/port="161"/' /opt/conpot/conpot/templates/IEC104/snmp/snmp.xml && \ sed -i 's/port="16100"/port="161"/' /opt/conpot/conpot/templates/IEC104/snmp/snmp.xml && \
sed -i 's/port="6230"/port="623"/' /opt/conpot/conpot/templates/ipmi/ipmi/ipmi.xml && \ sed -i 's/port="6230"/port="623"/' /opt/conpot/conpot/templates/ipmi/ipmi/ipmi.xml && \
pip3 install --no-cache-dir -U pip setuptools && \ pip3 install --no-cache-dir -U setuptools && \
pip3 install --no-cache-dir . && \ pip3 install --no-cache-dir . && \
cd / && \ cd / && \
rm -rf /opt/conpot /tmp/* /var/tmp/* && \ rm -rf /opt/conpot /tmp/* /var/tmp/* && \
setcap cap_net_bind_service=+ep /usr/bin/python3.6 && \ setcap cap_net_bind_service=+ep /usr/bin/python3.7 && \
#
# Get wireshark manuf db for scapy, setup configs, user, groups # Get wireshark manuf db for scapy, setup configs, user, groups
mkdir -p /etc/conpot /var/log/conpot /usr/share/wireshark && \ mkdir -p /etc/conpot /var/log/conpot /usr/share/wireshark && \
wget https://github.com/wireshark/wireshark/raw/master/manuf -o /usr/share/wireshark/manuf && \ wget https://github.com/wireshark/wireshark/raw/master/manuf -o /usr/share/wireshark/manuf && \
cp /root/dist/conpot.cfg /etc/conpot/conpot.cfg && \ cp /root/dist/conpot.cfg /etc/conpot/conpot.cfg && \
cp -R /root/dist/templates /usr/lib/python3.6/site-packages/conpot/ && \ cp -R /root/dist/templates /usr/lib/python3.7/site-packages/conpot/ && \
addgroup -g 2000 conpot && \ addgroup -g 2000 conpot && \
adduser -S -s /bin/ash -u 2000 -D -g 2000 conpot && \ adduser -S -s /bin/ash -u 2000 -D -g 2000 conpot && \
#
# Clean up # Clean up
apk del --purge \ apk del --purge \
build-base \ build-base \
@ -68,7 +69,7 @@ RUN apk -U add \
rm -rf /root/* && \ rm -rf /root/* && \
rm -rf /tmp/* && \ rm -rf /tmp/* && \
rm -rf /var/cache/apk/* rm -rf /var/cache/apk/*
#
# Start conpot # Start conpot
STOPSIGNAL SIGINT STOPSIGNAL SIGINT
USER conpot:conpot USER conpot:conpot

View File

@ -3,7 +3,7 @@ sensorid = conpot
[virtual_file_system] [virtual_file_system]
data_fs_url = %(CONPOT_TMP)s data_fs_url = %(CONPOT_TMP)s
fs_url = tar:///usr/lib/python3.6/site-packages/conpot/data.tar fs_url = tar:///usr/lib/python3.7/site-packages/conpot/data.tar
[session] [session]
timeout = 30 timeout = 30

View File

@ -35,7 +35,7 @@ services:
- "2121:21" - "2121:21"
- "44818:44818" - "44818:44818"
- "47808:47808" - "47808:47808"
image: "dtagdevsec/conpot:1903" image: "dtagdevsec/conpot:2006"
read_only: true read_only: true
volumes: volumes:
- /data/conpot/log:/var/log/conpot - /data/conpot/log:/var/log/conpot
@ -58,7 +58,7 @@ services:
ports: ports:
# - "161:161" # - "161:161"
- "2404:2404" - "2404:2404"
image: "dtagdevsec/conpot:1903" image: "dtagdevsec/conpot:2006"
read_only: true read_only: true
volumes: volumes:
- /data/conpot/log:/var/log/conpot - /data/conpot/log:/var/log/conpot
@ -80,7 +80,7 @@ services:
- conpot_local_guardian_ast - conpot_local_guardian_ast
ports: ports:
- "10001:10001" - "10001:10001"
image: "dtagdevsec/conpot:1903" image: "dtagdevsec/conpot:2006"
read_only: true read_only: true
volumes: volumes:
- /data/conpot/log:/var/log/conpot - /data/conpot/log:/var/log/conpot
@ -102,7 +102,7 @@ services:
- conpot_local_ipmi - conpot_local_ipmi
ports: ports:
- "623:623" - "623:623"
image: "dtagdevsec/conpot:1903" image: "dtagdevsec/conpot:2006"
read_only: true read_only: true
volumes: volumes:
- /data/conpot/log:/var/log/conpot - /data/conpot/log:/var/log/conpot
@ -125,7 +125,7 @@ services:
ports: ports:
- "1025:1025" - "1025:1025"
- "50100:50100" - "50100:50100"
image: "dtagdevsec/conpot:1903" image: "dtagdevsec/conpot:2006"
read_only: true read_only: true
volumes: volumes:
- /data/conpot/log:/var/log/conpot - /data/conpot/log:/var/log/conpot

View File

@ -1,10 +1,11 @@
FROM alpine FROM alpine
#
# Include dist # Include dist
ADD dist/ /root/dist/ ADD dist/ /root/dist/
#
# Get and install dependencies & packages # Get and install dependencies & packages
RUN apk -U --no-cache add \ RUN sed -i 's/dl-cdn/dl-2/g' /etc/apk/repositories && \
apk -U add \
bash \ bash \
build-base \ build-base \
git \ git \
@ -15,38 +16,38 @@ RUN apk -U --no-cache add \
mpfr-dev \ mpfr-dev \
openssl \ openssl \
openssl-dev \ openssl-dev \
python \ python3 \
python-dev \ python3-dev \
py-bcrypt \ py3-bcrypt \
py-mysqldb \ py3-mysqlclient \
py-pip \ py3-requests \
py-requests \ py3-setuptools && \
py-setuptools && \ #
# Setup user # Setup user
addgroup -g 2000 cowrie && \ addgroup -g 2000 cowrie && \
adduser -S -s /bin/ash -u 2000 -D -g 2000 cowrie && \ adduser -S -s /bin/ash -u 2000 -D -g 2000 cowrie && \
#
# Install cowrie # Install cowrie
mkdir -p /home/cowrie && \ mkdir -p /home/cowrie && \
cd /home/cowrie && \ cd /home/cowrie && \
git clone --depth=1 https://github.com/micheloosterhof/cowrie -b 1.5.3 && \ git clone --depth=1 https://github.com/micheloosterhof/cowrie -b v2.0.2 && \
cd cowrie && \ cd cowrie && \
mkdir -p log && \ mkdir -p log && \
pip install --upgrade pip && \ pip3 install --upgrade pip && \
pip install --upgrade -r requirements.txt && \ pip3 install --upgrade -r requirements.txt && \
#
# Setup configs # Setup configs
setcap cap_net_bind_service=+ep /usr/bin/python2.7 && \ export PYTHON_DIR=$(python3 --version | tr '[A-Z]' '[a-z]' | tr -d ' ' | cut -d '.' -f 1,2 ) && \
setcap cap_net_bind_service=+ep /usr/bin/$PYTHON_DIR && \
cp /root/dist/cowrie.cfg /home/cowrie/cowrie/cowrie.cfg && \ cp /root/dist/cowrie.cfg /home/cowrie/cowrie/cowrie.cfg && \
chown cowrie:cowrie -R /home/cowrie/* /usr/lib/python2.7/site-packages/twisted/plugins && \ chown cowrie:cowrie -R /home/cowrie/* /usr/lib/$PYTHON_DIR/site-packages/twisted/plugins && \
#
# Start Cowrie once to prevent dropin.cache errors upon container start caused by read-only filesystem # Start Cowrie once to prevent dropin.cache errors upon container start caused by read-only filesystem
su - cowrie -c "export PYTHONPATH=/home/cowrie/cowrie:/home/cowrie/cowrie/src && \ su - cowrie -c "export PYTHONPATH=/home/cowrie/cowrie:/home/cowrie/cowrie/src && \
cd /home/cowrie/cowrie && \ cd /home/cowrie/cowrie && \
/usr/bin/twistd --uid=2000 --gid=2000 -y cowrie.tac --pidfile cowrie.pid cowrie &" && \ /usr/bin/twistd --uid=2000 --gid=2000 -y cowrie.tac --pidfile cowrie.pid cowrie &" && \
sleep 10 && \ sleep 10 && \
#
# Clean up # Clean up
apk del --purge build-base \ apk del --purge build-base \
git \ git \
@ -56,13 +57,13 @@ RUN apk -U --no-cache add \
mpc1-dev \ mpc1-dev \
mpfr-dev \ mpfr-dev \
openssl-dev \ openssl-dev \
python-dev \ python3-dev \
py-mysqldb \ py3-mysqlclient && \
py-pip && \ rm -rf /root/* /tmp/* && \
rm -rf /root/* && \
rm -rf /var/cache/apk/* && \ rm -rf /var/cache/apk/* && \
rm -rf /home/cowrie/cowrie/cowrie.pid rm -rf /home/cowrie/cowrie/cowrie.pid && \
unset PYTHON_DIR
#
# Start cowrie # Start cowrie
ENV PYTHONPATH /home/cowrie/cowrie:/home/cowrie/cowrie/src ENV PYTHONPATH /home/cowrie/cowrie:/home/cowrie/cowrie/src
WORKDIR /home/cowrie/cowrie WORKDIR /home/cowrie/cowrie

View File

@ -2,7 +2,6 @@
hostname = ubuntu hostname = ubuntu
log_path = log log_path = log
download_path = dl download_path = dl
report_public_ip = true
share_path= share/cowrie share_path= share/cowrie
state_path = /tmp/cowrie/data state_path = /tmp/cowrie/data
etc_path = etc etc_path = etc
@ -13,6 +12,8 @@ ttylog_path = log/tty
interactive_timeout = 180 interactive_timeout = 180
authentication_timeout = 120 authentication_timeout = 120
backend = shell backend = shell
timezone = UTC
report_public_ip = true
auth_class = AuthRandom auth_class = AuthRandom
auth_class_parameters = 2, 5, 10 auth_class_parameters = 2, 5, 10
reported_ssh_port = 22 reported_ssh_port = 22
@ -21,11 +22,13 @@ data_path = /tmp/cowrie/data
[shell] [shell]
filesystem = share/cowrie/fs.pickle filesystem = share/cowrie/fs.pickle
processes = share/cowrie/cmdoutput.json processes = share/cowrie/cmdoutput.json
arch = linux-x64-lsb #arch = linux-x64-lsb
arch = bsd-aarch64-lsb, bsd-aarch64-msb, bsd-bfin-msb, bsd-mips-lsb, bsd-mips-msb, bsd-mips64-lsb, bsd-mips64-msb, bsd-powepc-msb, bsd-powepc64-lsb, bsd-riscv64-lsb, bsd-sparc-msb, bsd-sparc64-msb, bsd-x32-lsb, bsd-x64-lsb, linux-aarch64-lsb, linux-aarch64-msb, linux-alpha-lsb, linux-am33-lsb, linux-arc-lsb, linux-arc-msb, linux-arm-lsb, linux-arm-msb, linux-avr32-lsb, linux-bfin-lsb, linux-c6x-lsb, linux-c6x-msb, linux-cris-lsb, linux-frv-msb, linux-h8300-msb, linux-hppa-msb, linux-hppa64-msb, linux-ia64-lsb, linux-m32r-msb, linux-m68k-msb, linux-microblaze-msb, linux-mips-lsb, linux-mips-msb, linux-mips64-lsb, linux-mips64-msb, linux-mn10300-lsb, linux-nios-lsb, linux-nios-msb, linux-powerpc-lsb, linux-powerpc-msb, linux-powerpc64-lsb, linux-powerpc64-msb, linux-riscv64-lsb, linux-s390x-msb, linux-sh-lsb, linux-sh-msb, linux-sparc-msb, linux-sparc64-msb, linux-tilegx-lsb, linux-tilegx-msb, linux-tilegx64-lsb, linux-tilegx64-msb, linux-x64-lsb, linux-x86-lsb, linux-xtensa-msb, osx-x32-lsb, osx-x64-lsb
kernel_version = 3.2.0-4-amd64 kernel_version = 3.2.0-4-amd64
kernel_build_string = #1 SMP Debian 3.2.68-1+deb7u1 kernel_build_string = #1 SMP Debian 3.2.68-1+deb7u1
hardware_platform = x86_64 hardware_platform = x86_64
operating_system = GNU/Linux operating_system = GNU/Linux
ssh_version = OpenSSH_7.9p1, OpenSSL 1.1.1a 20 Nov 2018
[ssh] [ssh]
enabled = true enabled = true
@ -33,12 +36,18 @@ rsa_public_key = etc/ssh_host_rsa_key.pub
rsa_private_key = etc/ssh_host_rsa_key rsa_private_key = etc/ssh_host_rsa_key
dsa_public_key = etc/ssh_host_dsa_key.pub dsa_public_key = etc/ssh_host_dsa_key.pub
dsa_private_key = etc/ssh_host_dsa_key dsa_private_key = etc/ssh_host_dsa_key
version = SSH-2.0-OpenSSH_7.2p2 Ubuntu-4ubuntu2.2 #version = SSH-2.0-OpenSSH_7.2p2 Ubuntu-4ubuntu2.2
version = SSH-2.0-OpenSSH_7.9p1
ciphers = aes128-ctr,aes192-ctr,aes256-ctr,aes256-cbc,aes192-cbc,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc
macs = hmac-sha2-512,hmac-sha2-384,hmac-sha2-56,hmac-sha1,hmac-md5
compression = zlib@openssh.com,zlib,none
listen_endpoints = tcp:22:interface=0.0.0.0 listen_endpoints = tcp:22:interface=0.0.0.0
sftp_enabled = true sftp_enabled = true
forwarding = true forwarding = true
forward_redirect = false forward_redirect = false
forward_tunnel = false forward_tunnel = false
auth_none_enabled = false
auth_keyboard_interactive_enabled = true
[telnet] [telnet]
enabled = true enabled = true
@ -55,3 +64,6 @@ enabled = false
logfile = log/cowrie-textlog.log logfile = log/cowrie-textlog.log
format = text format = text
[output_crashreporter]
enabled = false
debug = false

View File

@ -18,7 +18,7 @@ services:
ports: ports:
- "22:22" - "22:22"
- "23:23" - "23:23"
image: "dtagdevsec/cowrie:1903" image: "dtagdevsec/cowrie:2006"
read_only: true read_only: true
volumes: volumes:
- /data/cowrie/downloads:/home/cowrie/cowrie/dl - /data/cowrie/downloads:/home/cowrie/cowrie/dl

View File

@ -1,7 +1,8 @@
FROM alpine:3.8 FROM alpine:3.10
#
# Get and install dependencies & packages # Get and install dependencies & packages
RUN apk -U --no-cache add \ RUN sed -i 's/dl-cdn/dl-2/g' /etc/apk/repositories && \
apk -U --no-cache add \
curl \ curl \
git \ git \
npm \ npm \
@ -9,7 +10,7 @@ RUN apk -U --no-cache add \
npm install -g grunt-cli && \ npm install -g grunt-cli && \
npm install -g http-server && \ npm install -g http-server && \
npm install npm@latest -g && \ npm install npm@latest -g && \
#
# Install CyberChef # Install CyberChef
cd /root && \ cd /root && \
git clone https://github.com/gchq/cyberchef --depth=1 && \ git clone https://github.com/gchq/cyberchef --depth=1 && \
@ -20,16 +21,16 @@ RUN apk -U --no-cache add \
mkdir -p /opt/cyberchef && \ mkdir -p /opt/cyberchef && \
mv build/prod/* /opt/cyberchef && \ mv build/prod/* /opt/cyberchef && \
cd / && \ cd / && \
#
# Clean up # Clean up
apk del --purge git \ apk del --purge git \
npm && \ npm && \
rm -rf /root/* && \ rm -rf /root/* && \
rm -rf /var/cache/apk/* rm -rf /var/cache/apk/*
#
# Healthcheck # Healthcheck
HEALTHCHECK --retries=10 CMD curl -s -XGET 'http://127.0.0.1:8000' HEALTHCHECK --retries=10 CMD curl -s -XGET 'http://127.0.0.1:8000'
#
# Set user, workdir and start spiderfoot # Set user, workdir and start spiderfoot
USER nobody:nobody USER nobody:nobody
WORKDIR /opt/cyberchef WORKDIR /opt/cyberchef

View File

@ -14,5 +14,5 @@ services:
- cyberchef_local - cyberchef_local
ports: ports:
- "127.0.0.1:64299:8000" - "127.0.0.1:64299:8000"
image: "dtagdevsec/cyberchef:1903" image: "dtagdevsec/cyberchef:2006"
read_only: true read_only: true

View File

@ -1,10 +1,11 @@
### This is only for testing purposes, do NOT use for production ### This is only for testing purposes, do NOT use for production
FROM alpine FROM alpine:latest
#
ADD dist/ /root/dist/ ADD dist/ /root/dist/
#
# Install packages # Install packages
RUN apk -U --no-cache add \ RUN sed -i 's/dl-cdn/dl-2/g' /etc/apk/repositories && \
apk -U --no-cache add \
build-base \ build-base \
coreutils \ coreutils \
git \ git \
@ -15,7 +16,7 @@ RUN apk -U --no-cache add \
python \ python \
python-dev \ python-dev \
sqlite && \ sqlite && \
#
# Install php sandbox from git # Install php sandbox from git
git clone --depth=1 https://github.com/rep/hpfeeds /opt/hpfeeds && \ git clone --depth=1 https://github.com/rep/hpfeeds /opt/hpfeeds && \
cd /opt/hpfeeds/broker && \ cd /opt/hpfeeds/broker && \
@ -23,10 +24,10 @@ RUN apk -U --no-cache add \
cp /root/dist/adduser.sql . && \ cp /root/dist/adduser.sql . && \
cd /opt/hpfeeds/broker && timeout 5 python broker.py || : && \ cd /opt/hpfeeds/broker && timeout 5 python broker.py || : && \
sqlite3 db.sqlite3 < adduser.sql && \ sqlite3 db.sqlite3 < adduser.sql && \
#
#python setup.py build && \ #python setup.py build && \
#python setup.py install && \ #python setup.py install && \
#
# Clean up # Clean up
apk del --purge autoconf \ apk del --purge autoconf \
build-base \ build-base \
@ -35,7 +36,7 @@ RUN apk -U --no-cache add \
python-dev && \ python-dev && \
rm -rf /root/* && \ rm -rf /root/* && \
rm -rf /var/cache/apk/* rm -rf /var/cache/apk/*
#
# Set workdir and start glastopf # Set workdir and start glastopf
WORKDIR /opt/hpfeeds/broker WORKDIR /opt/hpfeeds/broker
CMD python broker.py CMD python broker.py

View File

@ -1,13 +1,13 @@
FROM alpine FROM alpine:latest
#
# Include dist # Include dist
ADD dist/ /root/dist/ ADD dist/ /root/dist/
#
# Get and install dependencies & packages # Get and install dependencies & packages
RUN apk -U --no-cache add \ RUN apk -U --no-cache add \
nginx \ nginx \
nginx-mod-http-headers-more && \ nginx-mod-http-headers-more && \
#
# Setup configs # Setup configs
mkdir -p /run/nginx && \ mkdir -p /run/nginx && \
rm -rf /etc/nginx/conf.d/* /usr/share/nginx/html/* && \ rm -rf /etc/nginx/conf.d/* /usr/share/nginx/html/* && \
@ -15,10 +15,10 @@ RUN apk -U --no-cache add \
cp -R /root/dist/conf/ssl /etc/nginx/ && \ cp -R /root/dist/conf/ssl /etc/nginx/ && \
cp /root/dist/conf/tpotweb.conf /etc/nginx/conf.d/ && \ cp /root/dist/conf/tpotweb.conf /etc/nginx/conf.d/ && \
cp -R /root/dist/html/ /var/lib/nginx/ && \ cp -R /root/dist/html/ /var/lib/nginx/ && \
#
# Clean up # Clean up
rm -rf /root/* && \ rm -rf /root/* && \
rm -rf /var/cache/apk/* rm -rf /var/cache/apk/*
#
# Start nginx # Start nginx
CMD exec nginx -g 'daemon off;' CMD exec nginx -g 'daemon off;'

View File

@ -1,9 +1,9 @@
FROM debian:stretch-slim FROM debian:stretch-slim
ENV DEBIAN_FRONTEND noninteractive ENV DEBIAN_FRONTEND noninteractive
#
# Include dist # Include dist
ADD dist/ /root/dist/ ADD dist/ /root/dist/
#
# Install dependencies and packages # Install dependencies and packages
RUN apt-get update -y && \ RUN apt-get update -y && \
apt-get dist-upgrade -y && \ apt-get dist-upgrade -y && \
@ -32,7 +32,7 @@ RUN apt-get update -y && \
python3-bson \ python3-bson \
python3-yaml \ python3-yaml \
ttf-liberation && \ ttf-liberation && \
#
# Get and install dionaea # Get and install dionaea
git clone --depth=1 https://github.com/dinotools/dionaea -b 0.8.0 /root/dionaea/ && \ git clone --depth=1 https://github.com/dinotools/dionaea -b 0.8.0 /root/dionaea/ && \
cd /root/dionaea && \ cd /root/dionaea && \
@ -41,17 +41,17 @@ RUN apt-get update -y && \
cmake -DCMAKE_INSTALL_PREFIX:PATH=/opt/dionaea .. && \ cmake -DCMAKE_INSTALL_PREFIX:PATH=/opt/dionaea .. && \
make && \ make && \
make install && \ make install && \
#
# Setup user and groups # Setup user and groups
addgroup --gid 2000 dionaea && \ addgroup --gid 2000 dionaea && \
adduser --system --no-create-home --shell /bin/bash --uid 2000 --disabled-password --disabled-login --gid 2000 dionaea && \ adduser --system --no-create-home --shell /bin/bash --uid 2000 --disabled-password --disabled-login --gid 2000 dionaea && \
setcap cap_net_bind_service=+ep /opt/dionaea/bin/dionaea && \ setcap cap_net_bind_service=+ep /opt/dionaea/bin/dionaea && \
#
# Supply configs and set permissions # Supply configs and set permissions
chown -R dionaea:dionaea /opt/dionaea/var && \ chown -R dionaea:dionaea /opt/dionaea/var && \
rm -rf /opt/dionaea/etc/dionaea/* && \ rm -rf /opt/dionaea/etc/dionaea/* && \
mv /root/dist/etc/* /opt/dionaea/etc/dionaea/ && \ mv /root/dist/etc/* /opt/dionaea/etc/dionaea/ && \
#
# Setup runtime and clean up # Setup runtime and clean up
apt-get purge -y \ apt-get purge -y \
build-essential \ build-essential \
@ -75,7 +75,7 @@ RUN apt-get update -y && \
python3-dev \ python3-dev \
python3-bson \ python3-bson \
python3-yaml && \ python3-yaml && \
#
apt-get install -y \ apt-get install -y \
ca-certificates \ ca-certificates \
python3 \ python3 \
@ -90,11 +90,11 @@ RUN apt-get update -y && \
libpcap0.8 \ libpcap0.8 \
libpython3.5 \ libpython3.5 \
libudns0 && \ libudns0 && \
#
apt-get autoremove --purge -y && \ apt-get autoremove --purge -y && \
apt-get clean && \ apt-get clean && \
rm -rf /root/* /var/lib/apt/lists/* /tmp/* /var/tmp/* rm -rf /root/* /var/lib/apt/lists/* /tmp/* /var/tmp/*
#
# Start dionaea # Start dionaea
USER dionaea:dionaea USER dionaea:dionaea
CMD ["/opt/dionaea/bin/dionaea", "-u", "dionaea", "-g", "dionaea", "-c", "/opt/dionaea/etc/dionaea/dionaea.cfg"] CMD ["/opt/dionaea/bin/dionaea", "-u", "dionaea", "-g", "dionaea", "-c", "/opt/dionaea/etc/dionaea/dionaea.cfg"]

View File

@ -11,9 +11,9 @@
os_type: 4 os_type: 4
# Additional config # Additional config
primary_domain: WORKGROUP primary_domain: DACH
oem_domain_name: WORKGROUP oem_domain_name: DACH
server_name: WIN_SRV server_name: ADFS
## Windows 7 ## ## Windows 7 ##
native_os: Windows 7 Professional 7600 native_os: Windows 7 Professional 7600

View File

@ -27,7 +27,7 @@ services:
- "5060:5060/udp" - "5060:5060/udp"
- "5061:5061" - "5061:5061"
- "27017:27017" - "27017:27017"
image: "dtagdevsec/dionaea:1903" image: "dtagdevsec/dionaea:2006"
read_only: true read_only: true
volumes: volumes:
- /data/dionaea/roots/ftp:/opt/dionaea/var/dionaea/roots/ftp - /data/dionaea/roots/ftp:/opt/dionaea/var/dionaea/roots/ftp

160
docker/docker-compose.yml Normal file
View File

@ -0,0 +1,160 @@
# T-Pot Image Builder (use only for building docker images)
version: '2.3'
services:
##################
#### Honeypots
##################
# Adbhoney service
adbhoney:
build: adbhoney/.
image: "dtagdevsec/adbhoney:2006"
# Ciscoasa service
ciscoasa:
build: ciscoasa/.
image: "dtagdevsec/ciscoasa:2006"
# CitrixHoneypot service
citrixhoneypot:
build: citrixhoneypot/.
image: "dtagdevsec/citrixhoneypot:2006"
# Conpot IEC104 service
conpot_IEC104:
build: conpot/.
image: "dtagdevsec/conpot:2006"
# Cowrie service
cowrie:
build: cowrie/.
image: "dtagdevsec/cowrie:2006"
# Dionaea service
dionaea:
build: dionaea/.
image: "dtagdevsec/dionaea:2006"
# Glutton service
glutton:
build: glutton/.
image: "dtagdevsec/glutton:2006"
# Heralding service
heralding:
build: heralding/.
image: "dtagdevsec/heralding:2006"
# HoneyPy service
honeypy:
build: honeypy/.
image: "dtagdevsec/honeypy:2006"
# Honeytrap service
honeytrap:
build: honeytrap/.
image: "dtagdevsec/honeytrap:2006"
# Mailoney service
mailoney:
build: mailoney/.
image: "dtagdevsec/mailoney:2006"
# Medpot service
medpot:
build: medpot/.
image: "dtagdevsec/medpot:2006"
# Rdpy service
rdpy:
build: rdpy/.
image: "dtagdevsec/rdpy:2006"
#### Snare / Tanner
## Tanner Redis Service
tanner_redis:
build: tanner/redis/.
image: "dtagdevsec/redis:2006"
## PHP Sandbox service
tanner_phpox:
build: tanner/phpox/.
image: "dtagdevsec/phpox:2006"
## Tanner API Service
tanner_api:
build: tanner/tanner/.
image: "dtagdevsec/tanner:2006"
## Snare Service
snare:
build: tanner/snare/.
image: "dtagdevsec/snare:2006"
##################
#### NSM
##################
# Fatt service
fatt:
build: fatt/.
image: "dtagdevsec/fatt:2006"
# P0f service
p0f:
build: p0f/.
image: "dtagdevsec/p0f:2006"
# Suricata service
suricata:
build: suricata/.
image: "dtagdevsec/suricata:2006"
##################
#### Tools
##################
# Cyberchef service
cyberchef:
build: cyberchef/.
image: "dtagdevsec/cyberchef:2006"
#### ELK
## Elasticsearch service
elasticsearch:
build: elk/elasticsearch/.
image: "dtagdevsec/elasticsearch:2006"
## Kibana service
kibana:
build: elk/kibana/.
image: "dtagdevsec/kibana:2006"
## Logstash service
logstash:
build: elk/logstash/.
image: "dtagdevsec/logstash:2006"
## Elasticsearch-head service
head:
build: elk/head/.
image: "dtagdevsec/head:2006"
# Ewsposter service
ewsposter:
build: ews/.
image: "dtagdevsec/ewsposter:2006"
# Nginx service
nginx:
build: heimdall/.
image: "dtagdevsec/nginx:2006"
# Spiderfoot service
spiderfoot:
build: spiderfoot/.
image: "dtagdevsec/spiderfoot:2006"

View File

@ -1,8 +1,8 @@
FROM alpine FROM alpine:latest
#
# Include dist # Include dist
ADD dist/ /root/dist/ ADD dist/ /root/dist/
#
# Install packages # Install packages
RUN apk -U --no-cache add \ RUN apk -U --no-cache add \
git \ git \
@ -15,18 +15,18 @@ RUN apk -U --no-cache add \
mkdir -p /opt && \ mkdir -p /opt && \
cd /opt/ && \ cd /opt/ && \
git clone --depth=1 https://github.com/schmalle/ElasticpotPY.git && \ git clone --depth=1 https://github.com/schmalle/ElasticpotPY.git && \
#
# Setup user, groups and configs # Setup user, groups and configs
addgroup -g 2000 elasticpot && \ addgroup -g 2000 elasticpot && \
adduser -S -H -s /bin/ash -u 2000 -D -g 2000 elasticpot && \ adduser -S -H -s /bin/ash -u 2000 -D -g 2000 elasticpot && \
mv /root/dist/elasticpot.cfg /opt/ElasticpotPY/ && \ mv /root/dist/elasticpot.cfg /opt/ElasticpotPY/ && \
mkdir /opt/ElasticpotPY/log && \ mkdir /opt/ElasticpotPY/log && \
#
# Clean up # Clean up
apk del --purge git && \ apk del --purge git && \
rm -rf /root/* && \ rm -rf /root/* && \
rm -rf /var/cache/apk/* rm -rf /var/cache/apk/*
#
# Start elasticpot # Start elasticpot
STOPSIGNAL SIGINT STOPSIGNAL SIGINT
USER elasticpot:elasticpot USER elasticpot:elasticpot

View File

@ -14,7 +14,7 @@ services:
- elasticpot_local - elasticpot_local
ports: ports:
- "9200:9200" - "9200:9200"
image: "dtagdevsec/elasticpot:1903" image: "dtagdevsec/elasticpot:2006"
read_only: true read_only: true
volumes: volumes:
- /data/elasticpot/log:/opt/ElasticpotPY/log - /data/elasticpot/log:/opt/ElasticpotPY/log

View File

@ -24,7 +24,7 @@ services:
mem_limit: 4g mem_limit: 4g
ports: ports:
- "127.0.0.1:64298:9200" - "127.0.0.1:64298:9200"
image: "dtagdevsec/elasticsearch:1903" image: "dtagdevsec/elasticsearch:2006"
volumes: volumes:
- /data:/data - /data:/data
@ -39,7 +39,7 @@ services:
condition: service_healthy condition: service_healthy
ports: ports:
- "127.0.0.1:64296:5601" - "127.0.0.1:64296:5601"
image: "dtagdevsec/kibana:1903" image: "dtagdevsec/kibana:2006"
## Logstash service ## Logstash service
logstash: logstash:
@ -51,10 +51,10 @@ services:
condition: service_healthy condition: service_healthy
env_file: env_file:
- /opt/tpot/etc/compose/elk_environment - /opt/tpot/etc/compose/elk_environment
image: "dtagdevsec/logstash:1903" image: "dtagdevsec/logstash:2006"
volumes: volumes:
- /data:/data - /data:/data
- /root/tpotce/docker/elk/logstash/dist/logstash.conf:/etc/logstash/conf.d/logstash.conf # - /root/tpotce/docker/elk/logstash/dist/logstash.conf:/etc/logstash/conf.d/logstash.conf
## Elasticsearch-head service ## Elasticsearch-head service
head: head:
@ -66,5 +66,5 @@ services:
condition: service_healthy condition: service_healthy
ports: ports:
- "127.0.0.1:64302:9100" - "127.0.0.1:64302:9100"
image: "dtagdevsec/head:1903" image: "dtagdevsec/head:2006"
read_only: true read_only: true

View File

@ -1,8 +1,11 @@
FROM alpine FROM alpine
#
# VARS
ENV ES_VER=7.6.1 \
JAVA_HOME=/usr/lib/jvm/java-11-openjdk
# Include dist # Include dist
ADD dist/ /root/dist/ ADD dist/ /root/dist/
#
# Setup env and apt # Setup env and apt
RUN sed -i 's/dl-cdn/dl-2/g' /etc/apk/repositories && \ RUN sed -i 's/dl-cdn/dl-2/g' /etc/apk/repositories && \
apk -U --no-cache add \ apk -U --no-cache add \
@ -10,34 +13,34 @@ RUN sed -i 's/dl-cdn/dl-2/g' /etc/apk/repositories && \
bash \ bash \
curl \ curl \
nss \ nss \
openjdk8-jre && \ openjdk11-jre && \
#
# Get and install packages # Get and install packages
cd /root/dist/ && \ cd /root/dist/ && \
mkdir -p /usr/share/elasticsearch/ && \ mkdir -p /usr/share/elasticsearch/ && \
aria2c -s 16 -x 16 https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-6.6.2.tar.gz && \ aria2c -s 16 -x 16 https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-$ES_VER-linux-x86_64.tar.gz && \
tar xvfz elasticsearch-6.6.2.tar.gz --strip-components=1 -C /usr/share/elasticsearch/ && \ tar xvfz elasticsearch-$ES_VER-linux-x86_64.tar.gz --strip-components=1 -C /usr/share/elasticsearch/ && \
#
# Add and move files # Add and move files
cd /root/dist/ && \ cd /root/dist/ && \
mkdir -p /usr/share/elasticsearch/config && \ mkdir -p /usr/share/elasticsearch/config && \
cp elasticsearch.yml /usr/share/elasticsearch/config/ && \ cp elasticsearch.yml /usr/share/elasticsearch/config/ && \
#
# Setup user, groups and configs # Setup user, groups and configs
addgroup -g 2000 elasticsearch && \ addgroup -g 2000 elasticsearch && \
adduser -S -H -s /bin/ash -u 2000 -D -g 2000 elasticsearch && \ adduser -S -H -s /bin/ash -u 2000 -D -g 2000 elasticsearch && \
chown -R elasticsearch:elasticsearch /usr/share/elasticsearch/ && \ chown -R elasticsearch:elasticsearch /usr/share/elasticsearch/ && \
rm -rf /usr/share/elasticsearch/modules/x-pack-ml && \ rm -rf /usr/share/elasticsearch/modules/x-pack-ml && \
#
# Clean up # Clean up
apk del --purge aria2 && \ apk del --purge aria2 && \
rm -rf /root/* && \ rm -rf /root/* && \
rm -rf /tmp/* && \ rm -rf /tmp/* && \
rm -rf /var/cache/apk/* rm -rf /var/cache/apk/*
#
# Healthcheck # Healthcheck
HEALTHCHECK --retries=10 CMD curl -s -XGET 'http://127.0.0.1:9200/_cat/health' HEALTHCHECK --retries=10 CMD curl -s -XGET 'http://127.0.0.1:9200/_cat/health'
#
# Start ELK # Start ELK
USER elasticsearch:elasticsearch USER elasticsearch:elasticsearch
CMD ["/usr/share/elasticsearch/bin/elasticsearch"] CMD ["/usr/share/elasticsearch/bin/elasticsearch"]

View File

@ -1,9 +1,16 @@
cluster.name: tpotcluster cluster.name: tpotcluster
node.name: "tpotcluster-node-01" node.name: "tpotcluster-node-01"
xpack.ml.enabled: false xpack.ml.enabled: false
xpack.security.enabled: false
xpack.ilm.enabled: false
path: path:
logs: /data/elk/log logs: /data/elk/log
data: /data/elk/data data: /data/elk/data
http.host: 0.0.0.0 http.host: 0.0.0.0
http.cors.enabled: true http.cors.enabled: true
http.cors.allow-origin: "*" http.cors.allow-origin: "*"
indices.query.bool.max_clause_count: 2000
cluster.initial_master_nodes:
- "tpotcluster-node-01"
discovery.zen.ping.unicast.hosts:
- localhost

View File

@ -24,6 +24,6 @@ services:
mem_limit: 2g mem_limit: 2g
ports: ports:
- "127.0.0.1:64298:9200" - "127.0.0.1:64298:9200"
image: "dtagdevsec/elasticsearch:1903" image: "dtagdevsec/elasticsearch:2006"
volumes: volumes:
- /data:/data - /data:/data

View File

@ -1,33 +1,33 @@
FROM alpine FROM alpine
#
# Setup env and apt # Setup env and apt
RUN apk -U add \ RUN apk -U add \
curl \ curl \
git \ git \
nodejs \ nodejs \
nodejs-npm && \ nodejs-npm && \
#
# Get and install packages # Get and install packages
mkdir -p /usr/src/app/ && \ mkdir -p /usr/src/app/ && \
cd /usr/src/app/ && \ cd /usr/src/app/ && \
git clone --depth=1 https://github.com/mobz/elasticsearch-head . && \ git clone --depth=1 https://github.com/mobz/elasticsearch-head . && \
npm install http-server && \ npm install http-server && \
sed -i "s#\"http\:\/\/localhost\:9200\"#window.location.protocol \+ \'\/\/\' \+ window.location.hostname \+ \'\:\' \+ window.location.port \+ \'\/es\/\'#" /usr/src/app/_site/app.js && \ sed -i "s#\"http\:\/\/localhost\:9200\"#window.location.protocol \+ \'\/\/\' \+ window.location.hostname \+ \'\:\' \+ window.location.port \+ \'\/es\/\'#" /usr/src/app/_site/app.js && \
#
# Setup user, groups and configs # Setup user, groups and configs
addgroup -g 2000 head && \ addgroup -g 2000 head && \
adduser -S -H -s /bin/ash -u 2000 -D -g 2000 head && \ adduser -S -H -s /bin/ash -u 2000 -D -g 2000 head && \
chown -R head:head /usr/src/app/ && \ chown -R head:head /usr/src/app/ && \
#
# Clean up # Clean up
apk del --purge git && \ apk del --purge git && \
rm -rf /root/* && \ rm -rf /root/* && \
rm -rf /tmp/* && \ rm -rf /tmp/* && \
rm -rf /var/cache/apk/* rm -rf /var/cache/apk/*
#
# Healthcheck # Healthcheck
HEALTHCHECK --retries=10 CMD curl -s -XGET 'http://127.0.0.1:9100' HEALTHCHECK --retries=10 CMD curl -s -XGET 'http://127.0.0.1:9100'
#
# Start elasticsearch-head # Start elasticsearch-head
USER head:head USER head:head
WORKDIR /usr/src/app WORKDIR /usr/src/app

View File

@ -12,5 +12,5 @@ services:
# condition: service_healthy # condition: service_healthy
ports: ports:
- "127.0.0.1:64302:9100" - "127.0.0.1:64302:9100"
image: "dtagdevsec/head:1811" image: "dtagdevsec/head:2006"
read_only: true read_only: true

View File

@ -1,61 +1,69 @@
FROM node:10.15.2-alpine FROM node:10.19.0-alpine
#
# VARS
ENV KB_VER=7.6.1
#
# Include dist # Include dist
ADD dist/ /root/dist/ ADD dist/ /root/dist/
#
# Setup env and apt # Setup env and apt
RUN sed -i 's/dl-cdn/dl-2/g' /etc/apk/repositories && \ RUN sed -i 's/dl-cdn/dl-2/g' /etc/apk/repositories && \
apk -U --no-cache add \ apk -U --no-cache add \
aria2 \ aria2 \
curl && \ curl && \
#
# Get and install packages # Get and install packages
cd /root/dist/ && \ cd /root/dist/ && \
mkdir -p /usr/share/kibana/ && \ mkdir -p /usr/share/kibana/ && \
aria2c -s 16 -x 16 https://artifacts.elastic.co/downloads/kibana/kibana-6.6.2-linux-x86_64.tar.gz && \ aria2c -s 16 -x 16 https://artifacts.elastic.co/downloads/kibana/kibana-$KB_VER-linux-x86_64.tar.gz && \
tar xvfz kibana-6.6.2-linux-x86_64.tar.gz --strip-components=1 -C /usr/share/kibana/ && \ tar xvfz kibana-$KB_VER-linux-x86_64.tar.gz --strip-components=1 -C /usr/share/kibana/ && \
#
# Kibana's bundled node does not work in alpine # Kibana's bundled node does not work in alpine
rm /usr/share/kibana/node/bin/node && \ rm /usr/share/kibana/node/bin/node && \
ln -s /usr/bin/node /usr/share/kibana/node/bin/node && \ ln -s /usr/local/bin/node /usr/share/kibana/node/bin/node && \
#
# Add and move files # Add and move files
cd /root/dist/ && \ cd /root/dist/ && \
cp kibana.svg /usr/share/kibana/src/ui/public/images/kibana.svg && \ # cp kibana.svg /usr/share/kibana/src/ui/public/images/kibana.svg && \
cp kibana.svg /usr/share/kibana/src/ui/public/icons/kibana.svg && \ # cp kibana.svg /usr/share/kibana/src/ui/public/icons/kibana.svg && \
cp elk.ico /usr/share/kibana/src/ui/public/assets/favicons/favicon.ico && \ # cp elk.ico /usr/share/kibana/src/ui/public/assets/favicons/favicon.ico && \
cp elk.ico /usr/share/kibana/src/ui/public/assets/favicons/favicon-16x16.png && \ # cp elk.ico /usr/share/kibana/src/ui/public/assets/favicons/favicon-16x16.png && \
cp elk.ico /usr/share/kibana/src/ui/public/assets/favicons/favicon-32x32.png && \ # cp elk.ico /usr/share/kibana/src/ui/public/assets/favicons/favicon-32x32.png && \
#
# Setup user, groups and configs # Setup user, groups and configs
sed -i 's/#server.basePath: ""/server.basePath: "\/kibana"/' /usr/share/kibana/config/kibana.yml && \ sed -i 's/#server.basePath: ""/server.basePath: "\/kibana"/' /usr/share/kibana/config/kibana.yml && \
sed -i 's/#kibana.defaultAppId: "home"/kibana.defaultAppId: "dashboards"/' /usr/share/kibana/config/kibana.yml && \ sed -i 's/#kibana.defaultAppId: "home"/kibana.defaultAppId: "dashboards"/' /usr/share/kibana/config/kibana.yml && \
sed -i 's/#server.host: "localhost"/server.host: "0.0.0.0"/' /usr/share/kibana/config/kibana.yml && \ sed -i 's/#server.host: "localhost"/server.host: "0.0.0.0"/' /usr/share/kibana/config/kibana.yml && \
sed -i 's/#elasticsearch.hosts: \["http:\/\/localhost:9200"\]/elasticsearch.hosts: \["http:\/\/elasticsearch:9200"\]/' /usr/share/kibana/config/kibana.yml && \ sed -i 's/#elasticsearch.hosts: \["http:\/\/localhost:9200"\]/elasticsearch.hosts: \["http:\/\/elasticsearch:9200"\]/' /usr/share/kibana/config/kibana.yml && \
sed -i 's/#server.rewriteBasePath: false/server.rewriteBasePath: false/' /usr/share/kibana/config/kibana.yml && \ sed -i 's/#server.rewriteBasePath: false/server.rewriteBasePath: false/' /usr/share/kibana/config/kibana.yml && \
sed -i "s/#005571/#e20074/g" /usr/share/kibana/src/legacy/core_plugins/kibana/public/index.css && \ # sed -i "s/#005571/#e20074/g" /usr/share/kibana/built_assets/css/plugins/kibana/index.css && \
sed -i "s/#007ba4/#9e0051/g" /usr/share/kibana/src/legacy/core_plugins/kibana/public/index.css && \ # sed -i "s/#007ba4/#9e0051/g" /usr/share/kibana/built_assets/css/plugins/kibana/index.css && \
sed -i "s/#00465d/#4f0028/g" /usr/share/kibana/src/legacy/core_plugins/kibana/public/index.css && \ # sed -i "s/#00465d/#4f0028/g" /usr/share/kibana/built_assets/css/plugins/kibana/index.css && \
echo "xpack.infra.enabled: false" >> /usr/share/kibana/config/kibana.yml && \ echo "xpack.infra.enabled: false" >> /usr/share/kibana/config/kibana.yml && \
echo "xpack.logstash.enabled: false" >> /usr/share/kibana/config/kibana.yml && \ echo "xpack.logstash.enabled: false" >> /usr/share/kibana/config/kibana.yml && \
echo "xpack.canvas.enabled: false" >> /usr/share/kibana/config/kibana.yml && \ echo "xpack.canvas.enabled: false" >> /usr/share/kibana/config/kibana.yml && \
echo "xpack.spaces.enabled: false" >> /usr/share/kibana/config/kibana.yml && \ echo "xpack.spaces.enabled: false" >> /usr/share/kibana/config/kibana.yml && \
echo "xpack.apm.enabled: false" >> /usr/share/kibana/config/kibana.yml && \ echo "xpack.apm.enabled: false" >> /usr/share/kibana/config/kibana.yml && \
echo "xpack.security.enabled: false" >> /usr/share/kibana/config/kibana.yml && \
echo "xpack.uptime.enabled: false" >> /usr/share/kibana/config/kibana.yml && \
echo "xpack.siem.enabled: false" >> /usr/share/kibana/config/kibana.yml && \
echo "elasticsearch.requestTimeout: 60000" >> /usr/share/kibana/config/kibana.yml && \
echo "elasticsearch.shardTimeout: 60000" >> /usr/share/kibana/config/kibana.yml && \
rm -rf /usr/share/kibana/optimize/bundles/* && \ rm -rf /usr/share/kibana/optimize/bundles/* && \
/usr/share/kibana/bin/kibana --optimize && \ /usr/share/kibana/bin/kibana --optimize --allow-root && \
addgroup -g 2000 kibana && \ addgroup -g 2000 kibana && \
adduser -S -H -s /bin/ash -u 2000 -D -g 2000 kibana && \ adduser -S -H -s /bin/ash -u 2000 -D -g 2000 kibana && \
chown -R kibana:kibana /usr/share/kibana/ && \ chown -R kibana:kibana /usr/share/kibana/ && \
#
# Clean up # Clean up
apk del --purge aria2 && \ apk del --purge aria2 && \
rm -rf /root/* && \ rm -rf /root/* && \
rm -rf /tmp/* && \ rm -rf /tmp/* && \
rm -rf /var/cache/apk/* rm -rf /var/cache/apk/*
#
# Healthcheck # Healthcheck
HEALTHCHECK --retries=10 CMD curl -s -XGET 'http://127.0.0.1:5601' HEALTHCHECK --retries=10 CMD curl -s -XGET 'http://127.0.0.1:5601'
#
# Start kibana # Start kibana
STOPSIGNAL SIGKILL STOPSIGNAL SIGKILL
USER kibana:kibana USER kibana:kibana

View File

@ -12,4 +12,4 @@ services:
# condition: service_healthy # condition: service_healthy
ports: ports:
- "127.0.0.1:64296:5601" - "127.0.0.1:64296:5601"
image: "dtagdevsec/kibana:1903" image: "dtagdevsec/kibana:2006"

View File

@ -1,55 +1,58 @@
FROM alpine
#
# VARS
ENV LS_VER=7.6.1
# Include dist
ADD dist/ /root/dist/
#
# Setup env and apt
RUN sed -i 's/dl-cdn/dl-2/g' /etc/apk/repositories && \
apk -U --no-cache add \
aria2 \
bash \
bzip2 \
curl \
- git \
libc6-compat \
libzmq \
nss \
- openjdk8-jre && \
openjdk11-jre && \
#
# Get and install packages
- git clone --depth=1 https://github.com/dtag-dev-sec/listbot /etc/listbot && \
mkdir -p /etc/listbot && \
cd /etc/listbot && \
aria2c -s16 -x 16 https://raw.githubusercontent.com/dtag-dev-sec/listbot/master/cve.yaml.bz2 && \
aria2c -s16 -x 16 https://raw.githubusercontent.com/dtag-dev-sec/listbot/master/iprep.yaml.bz2 && \
bunzip2 *.bz2 && \
cd /root/dist/ && \
mkdir -p /usr/share/logstash/ && \
- aria2c -s 16 -x 16 https://artifacts.elastic.co/downloads/logstash/logstash-6.6.2.tar.gz && \
aria2c -s 16 -x 16 https://artifacts.elastic.co/downloads/logstash/logstash-$LS_VER.tar.gz && \
- tar xvfz logstash-6.6.2.tar.gz --strip-components=1 -C /usr/share/logstash/ && \
tar xvfz logstash-$LS_VER.tar.gz --strip-components=1 -C /usr/share/logstash/ && \
/usr/share/logstash/bin/logstash-plugin install logstash-filter-translate && \
/usr/share/logstash/bin/logstash-plugin install logstash-output-syslog && \
- aria2c -s 16 -x 16 -o GeoLite2-ASN.tar.gz http://geolite.maxmind.com/download/geoip/database/GeoLite2-ASN.tar.gz && \
- tar xvfz GeoLite2-ASN.tar.gz --strip-components=1 -C /usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-filter-geoip-5.0.3-java/vendor && \
#
# Add and move files
cd /root/dist/ && \
cp update.sh /usr/bin/ && \
chmod u+x /usr/bin/update.sh && \
mkdir -p /etc/logstash/conf.d && \
cp logstash.conf /etc/logstash/conf.d/ && \
- cp elasticsearch-template-es6x.json /usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-output-elasticsearch-9.3.2-java/lib/logstash/outputs/elasticsearch/ && \
cp elasticsearch-template-es7x.json /usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-10.3.1-java/lib/logstash/outputs/elasticsearch/ && \
#
# Setup user, groups and configs
addgroup -g 2000 logstash && \
adduser -S -H -s /bin/bash -u 2000 -D -g 2000 logstash && \
chown -R logstash:logstash /usr/share/logstash && \
chown -R logstash:logstash /etc/listbot && \
chmod 755 /usr/bin/update.sh && \
#
# Clean up
- apk del --purge aria2 && \
rm -rf /root/* && \
rm -rf /tmp/* && \
rm -rf /var/cache/apk/*
#
# Healthcheck
HEALTHCHECK --retries=10 CMD curl -s -XGET 'http://127.0.0.1:9600'
#
# Start logstash
#USER logstash:logstash
CMD update.sh && exec /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/logstash.conf --config.reload.automatic --java-execution
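The HEALTHCHECK above polls Logstash's monitoring API on port 9600. A quick manual check of the same endpoint, assuming the container is started under the name logstash (an assumption; container_name is not shown in the compose excerpts in this diff):

  # hypothetical manual check, run on the T-Pot host
  docker exec logstash curl -s -XGET 'http://127.0.0.1:9600'
  # health state Docker derives from the HEALTHCHECK
  docker inspect --format '{{.State.Health.Status}}' logstash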

View File

@ -1,53 +0,0 @@
{
"template" : "logstash-*",
"version" : 50001,
"settings" : {
"index.refresh_interval" : "5s",
"index.number_of_shards" : "1",
"index.number_of_replicas" : "0",
"mapping" : {
"total_fields" : {
"limit" : "2000"
}
}
},
"mappings" : {
"_default_" : {
"_all" : {"enabled" : true, "norms" : false},
"dynamic_templates" : [ {
"message_field" : {
"path_match" : "message",
"match_mapping_type" : "string",
"mapping" : {
"type" : "text",
"norms" : false
}
}
}, {
"string_fields" : {
"match" : "*",
"match_mapping_type" : "string",
"mapping" : {
"type" : "text", "norms" : false,
"fields" : {
"keyword" : { "type": "keyword", "ignore_above": 256 }
}
}
}
} ],
"properties" : {
"@timestamp": { "type": "date", "include_in_all": false },
"@version": { "type": "keyword", "include_in_all": false },
"geoip" : {
"dynamic": true,
"properties" : {
"ip": { "type": "ip" },
"location" : { "type" : "geo_point" },
"latitude" : { "type" : "half_float" },
"longitude" : { "type" : "half_float" }
}
}
}
}
}
}

View File

@ -1,48 +0,0 @@
{
"template" : "logstash-*",
"version" : 60001,
"settings" : {
"index.refresh_interval" : "5s",
"index.number_of_shards" : "1",
"index.number_of_replicas" : "0",
"index.mapping.total_fields.limit": "2000"
},
"mappings" : {
"_default_" : {
"dynamic_templates" : [ {
"message_field" : {
"path_match" : "message",
"match_mapping_type" : "string",
"mapping" : {
"type" : "text",
"norms" : false
}
}
}, {
"string_fields" : {
"match" : "*",
"match_mapping_type" : "string",
"mapping" : {
"type" : "text", "norms" : false,
"fields" : {
"keyword" : { "type": "keyword", "ignore_above": 256 }
}
}
}
} ],
"properties" : {
"@timestamp": { "type": "date"},
"@version": { "type": "keyword"},
"geoip" : {
"dynamic": true,
"properties" : {
"ip": { "type": "ip" },
"location" : { "type" : "geo_point" },
"latitude" : { "type" : "half_float" },
"longitude" : { "type" : "half_float" }
}
}
}
}
}
}

View File

@ -0,0 +1,49 @@
{
"index_patterns" : "logstash-*",
"version" : 60001,
"settings" : {
"index.refresh_interval" : "5s",
"number_of_shards" : 1,
"index.number_of_replicas" : "0",
"index.mapping.total_fields.limit" : "2000",
"index.query": {
"default_field": "fields.*"
}
},
"mappings" : {
"dynamic_templates" : [ {
"message_field" : {
"path_match" : "message",
"match_mapping_type" : "string",
"mapping" : {
"type" : "text",
"norms" : false
}
}
}, {
"string_fields" : {
"match" : "*",
"match_mapping_type" : "string",
"mapping" : {
"type" : "text", "norms" : false,
"fields" : {
"keyword" : { "type": "keyword", "ignore_above": 256 }
}
}
}
} ],
"properties" : {
"@timestamp": { "type": "date"},
"@version": { "type": "keyword"},
"geoip" : {
"dynamic": true,
"properties" : {
"ip": { "type": "ip" },
"location" : { "type" : "geo_point" },
"latitude" : { "type" : "half_float" },
"longitude" : { "type" : "half_float" }
}
}
}
}
}
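Logstash installs this template itself on startup (and the updated update.sh later in this diff deletes the stale one first), but for testing it can also be pushed by hand from inside the logstash container against the legacy template API; a sketch, using the path the Dockerfile above copies the file to:

  curl -s -XPUT 'http://elasticsearch:9200/_template/logstash' \
    -H 'Content-Type: application/json' \
    --data-binary @/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-10.3.1-java/lib/logstash/outputs/elasticsearch/elasticsearch-template-es7x.json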

View File

@ -36,6 +36,13 @@ input {
type => "Ciscoasa" type => "Ciscoasa"
} }
# CitrixHoneypot
file {
path => ["/data/citrixhoneypot/logs/server.log"]
codec => json
type => "CitrixHoneypot"
}
# Conpot
file {
path => ["/data/conpot/log/*.json"]
@ -94,6 +101,7 @@ input {
# Mailoney
file {
path => ["/data/mailoney/log/commands.log"]
codec => json
type => "Mailoney"
}
@ -206,6 +214,31 @@ filter {
}
}
# CitrixHoneypot
if [type] == "CitrixHoneypot" {
grok {
match => {
"message" => [ "\A\(%{IPV4:src_ip:string}:%{INT:src_port:integer}\): %{JAVAMETHOD:http.http_method:string}%{SPACE}%{CISCO_REASON:fileinfo.state:string}: %{UNIXPATH:fileinfo.filename:string}",
"\A\(%{IPV4:src_ip:string}:%{INT:src_port:integer}\): %{JAVAMETHOD:http.http_method:string}%{SPACE}%{CISCO_REASON:fileinfo.state:string}: %{GREEDYDATA:payload:string}",
"\A\(%{IPV4:src_ip:string}:%{INT:src_port:integer}\): %{S3_REQUEST_LINE:msg:string} %{CISCO_REASON:fileinfo.state:string}: %{GREEDYDATA:payload:string:string}",
"\A\(%{IPV4:src_ip:string}:%{INT:src_port:integer}\): %{GREEDYDATA:msg:string}" ]
}
}
date {
match => [ "asctime", "ISO8601" ]
remove_field => ["asctime"]
remove_field => ["message"]
}
mutate {
add_field => {
"dest_port" => "443"
}
rename => {
"levelname" => "level"
}
}
}
# Conpot
if [type] == "ConPot" {
date {
@ -312,18 +345,14 @@ filter {
# Mailoney
if [type] == "Mailoney" {
- grok {
- match => [ "message", "\A%{NAGIOSTIME}\[%{IPV4:src_ip}:%{INT:src_port:integer}] %{GREEDYDATA:smtp_input}" ]
date {
match => [ "timestamp", "ISO8601" ]
}
mutate {
add_field => {
"dest_port" => "25"
}
}
- date {
- match => [ "nagios_epoch", "UNIX" ]
- remove_field => ["nagios_epoch"]
- }
}
# Medpot
@ -384,12 +413,12 @@ if "_grokparsefailure" in [tags] { drop {} }
geoip {
cache_size => 10000
source => "src_ip"
- database => "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-filter-geoip-5.0.3-java/vendor/GeoLite2-City.mmdb"
database => "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-filter-geoip-6.0.3-java/vendor/GeoLite2-City.mmdb"
}
geoip {
cache_size => 10000
source => "src_ip"
- database => "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-filter-geoip-5.0.3-java/vendor/GeoLite2-ASN.mmdb"
database => "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-filter-geoip-6.0.3-java/vendor/GeoLite2-ASN.mmdb"
}
translate {
refresh_interval => 86400
@ -417,7 +446,7 @@ if "_grokparsefailure" in [tags] { drop {} }
}
# Add T-Pot hostname and external IP
- if [type] == "Adbhoney" or [type] == "Ciscoasa" or [type] == "ConPot" or [type] == "Cowrie" or [type] == "Dionaea" or [type] == "ElasticPot" or [type] == "Fatt" or [type] == "Glutton" or [type] == "Honeytrap" or [type] == "Heralding" or [type] == "Honeypy" or [type] == "Mailoney" or [type] == "Medpot" or [type] == "P0f" or [type] == "Rdpy" or [type] == "Suricata" or [type] == "Tanner" {
if [type] == "Adbhoney" or [type] == "Ciscoasa" or [type] == "CitrixHoneypot" or [type] == "ConPot" or [type] == "Cowrie" or [type] == "Dionaea" or [type] == "ElasticPot" or [type] == "Fatt" or [type] == "Glutton" or [type] == "Honeytrap" or [type] == "Heralding" or [type] == "Honeypy" or [type] == "Mailoney" or [type] == "Medpot" or [type] == "P0f" or [type] == "Rdpy" or [type] == "Suricata" or [type] == "Tanner" {
mutate {
add_field => {
"t-pot_ip_ext" => "${MY_EXTIP}"
@ -443,7 +472,7 @@ output {
# }
#}
# Debug output
- #if [type] == "XYZ" {
#if [type] == "CitrixHoneypot" {
# stdout {
# codec => rubydebug
# }
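The commented stdout section above can be pointed at any honeypot type for troubleshooting; with the CitrixHoneypot variant enabled, the parsed events appear in the container output. A sketch for following them, again assuming the container runs under the name logstash:

  docker logs -f logstash 2>&1 | grep -A20 CitrixHoneypot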

View File

@ -6,7 +6,39 @@ function fuCLEANUP {
}
trap fuCLEANUP EXIT
- # Download updated translation maps
- cd /etc/listbot
- git pull --all --depth=1
- cd /
# Check internet availability
function fuCHECKINET () {
mySITES=$1
error=0
for i in $mySITES;
do
curl --connect-timeout 5 -Is $i 2>&1 > /dev/null
if [ $? -ne 0 ];
then
let error+=1
fi;
done;
echo $error
}
# Check for connectivity and download latest translation maps
myCHECK=$(fuCHECKINET "raw.githubusercontent.com")
if [ "$myCHECK" == "0" ];
then
echo "Connection to Github looks good, now downloading latest translation maps."
cd /etc/listbot
aria2c -s16 -x 16 https://raw.githubusercontent.com/dtag-dev-sec/listbot/master/cve.yaml.bz2 && \
aria2c -s16 -x 16 https://raw.githubusercontent.com/dtag-dev-sec/listbot/master/iprep.yaml.bz2 && \
bunzip2 -f *.bz2
cd /
else
echo "Cannot reach Github, starting Logstash without latest translation maps."
fi
# Make sure logstash can put latest logstash template by deleting the old one first
echo "Removing logstash template."
curl -XDELETE http://elasticsearch:9200/_template/logstash
echo
echo "Checking if empty."
curl -XGET http://elasticsearch:9200/_template/logstash
echo
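fuCHECKINET counts how many of the given sites fail the curl probe and echoes that number, so "0" means every site was reachable. It accepts a space-separated list; a sketch with a second, hypothetical site added:

  myCHECK=$(fuCHECKINET "raw.githubusercontent.com index.docker.io")
  if [ "$myCHECK" == "0" ];
    then
      echo "All sites reachable."
    else
      echo "$myCHECK site(s) unreachable."
  fi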

View File

@ -12,7 +12,7 @@ services:
# condition: service_healthy
    env_file:
     - /opt/tpot/etc/compose/elk_environment
-    image: "dtagdevsec/logstash:1903"
    image: "dtagdevsec/logstash:2006"
    volumes:
     - /data:/data
     - /root/tpotce/docker/elk/logstash/dist/logstash.conf:/etc/logstash/conf.d/logstash.conf

View File

@ -1,54 +1,50 @@
- FROM alpine
FROM alpine:latest
#
# Include dist
ADD dist/ /root/dist/
#
# Install packages
- RUN apk -U --no-cache add \
RUN sed -i 's/dl-cdn/dl-2/g' /etc/apk/repositories && \
apk -U --no-cache add \
build-base \
git \
libffi-dev \
libssl1.1 \
openssl-dev \
- python-dev \
- py-cffi \
- py-ipaddress \
- py-lxml \
- py-mysqldb \
- py-pip \
- py-pysqlite \
- py-requests \
- py-setuptools && \
python3 \
python3-dev \
py3-cffi \
py3-ipaddress \
py3-lxml \
py3-mysqlclient \
py3-requests \
py3-setuptools && \
- pip install --no-cache-dir -U pip && \
- pip install --no-cache-dir pyOpenSSL xmljson && \
pip3 install --no-cache-dir -U pip && \
pip3 install --no-cache-dir configparser hpfeeds3 pyOpenSSL xmljson && \
#
# Setup ewsposter
- git clone --depth=1 https://github.com/rep/hpfeeds /opt/hpfeeds && \
- cd /opt/hpfeeds && \
- python setup.py install && \
- git clone --depth=1 https://github.com/vorband/ewsposter /opt/ewsposter && \
git clone --depth=1 https://github.com/dtag-dev-sec/ewsposter /opt/ewsposter && \
mkdir -p /opt/ewsposter/spool /opt/ewsposter/log && \
#
# Setup user and groups
addgroup -g 2000 ews && \
adduser -S -H -u 2000 -D -g 2000 ews && \
chown -R ews:ews /opt/ewsposter && \
#
# Supply configs
mv /root/dist/ews.cfg /opt/ewsposter/ && \
- mv /root/dist/*.pem /opt/ewsposter/ && \
# mv /root/dist/*.pem /opt/ewsposter/ && \
#
# Clean up
apk del build-base \
git \
openssl-dev \
- python-dev \
- py-pip \
python3-dev \
py-setuptools && \
rm -rf /root/* && \
rm -rf /var/cache/apk/*
#
# Run ewsposter
STOPSIGNAL SIGINT
USER ews:ews
- CMD sleep 10 && exec /usr/bin/python -u /opt/ewsposter/ews.py -l 60
CMD sleep 10 && exec /usr/bin/python3 -u /opt/ewsposter/ews.py -l $(shuf -i 10-60 -n 1)
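The new CMD no longer hard-codes the 60 second interval: shuf -i 10-60 -n 1 prints one random integer between 10 and 60, so every container start hands ews.py a different -l value. For example (outputs are illustrative):

  shuf -i 10-60 -n 1   # e.g. 37
  shuf -i 10-60 -n 1   # e.g. 12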

View File

@ -1,70 +0,0 @@
-----BEGIN CERTIFICATE-----
MIIGBDCCA+ygAwIBAgIBATANBgkqhkiG9w0BAQsFADCBnTEYMBYGA1UEAwwPU0lT
U0RFTiBSb290IENBMQswCQYDVQQGEwJQTDERMA8GA1UEBwwIV2Fyc3phd2ExLjAs
BgNVBAoMJU5hdWtvd2EgaSBBa2FkZW1pY2thIFNpZWMgS29tcHV0ZXJvd2ExEDAO
BgNVBAsMB1NJU1NERU4xHzAdBgkqhkiG9w0BCQEWEGFkbWluQHNpc3NkZW4uZXUw
HhcNMTcwNDExMTMxNDE2WhcNMjcwNDA5MTMxNDE2WjCBjTEbMBkGA1UEAwwSU0lT
U0RFTiBTZXJ2aWNlIENBMQswCQYDVQQGEwJQTDEfMB0GCSqGSIb3DQEJARYQYWRt
aW5Ac2lzc2Rlbi5ldTEuMCwGA1UECgwlTmF1a293YSBpIEFrYWRlbWlja2EgU2ll
YyBLb21wdXRlcm93YTEQMA4GA1UECwwHU0lTU0RFTjCCAiIwDQYJKoZIhvcNAQEB
BQADggIPADCCAgoCggIBAPFLjU6cLQoGz1s73QMPiRxYISCMUh3CXFe52Uim9a60
nkBDLfjMFW87MNhFCcE2xmxwdPPTz4+f5+DsEV3eZf0y63NxWx+RFV+UpODuEW5n
tWPFUDxmgKx6iAR/tyeLVNqmgtCnWzSthE0cg71dlil6onWvkMc+Wn5Kv6aXoz4e
5YVVhNsymhhrR0BntospY8EvtPm70hHAzOty957/zixOQ/MM+4SHRsWXTlKqv0K2
udWpkUy1Ihs3bpea2KAvn9bBWejFwy7K4q3LyhSyqwpVCYjNi+s+9z4ipSMfvAlT
FvHrMrODv/Iz/TQOfypYSlpX2gBP9WKLgOQj3wulJnMDQlvG1XNgOAqKfEF52YGF
eUu21UraRgDAguIIhWxRwgXenmRo8ngWjfk9Q8734PzzXt8cwzbxJWiJLMew1SiW
I+Kg8uYNGNT4mdBeUMo92S17ZNMXVnkt1TYfxT0A0ZlTCrhXPiWITtsVZXAdqFtl
j5hASmEcRYNgXEUQHBn13O9IinEmks2PEcqbbbKbs2Je0DS/JvxBkqES51UdsaVQ
zITKw3deCk0pISG8WDWZ97LEeDCvAKA5l/ooKjDwfS5vWw11mTUCOdhCoF0m8Lao
TwE1fzzNbSaqMsT6JF/n0ACabfuvF2aqCmWsZC/Hpw8LQQS62zOouCLdcqizL9+z
AgMBAAGjXTBbMAwGA1UdEwQFMAMBAf8wCwYDVR0PBAQDAgHuMB0GA1UdDgQWBBQ4
nurxBppBA5PTNvFFU/vhDr/NFzAfBgNVHSMEGDAWgBSDpRyQSgaBD5XvyFOA8YHH
tbUAbzANBgkqhkiG9w0BAQsFAAOCAgEAIvA2gkYsIVH7FGuoIo9RIxgwy7G/SHNC
Xllz6hyTx10UwbttJ+o4gdNt8WPuGnkmywFgsjL1//bFw2+fUO5IRvWKSmXzwx9N
faRJAjQT4JNx2uOW0ctw4USngPrLjXr3UrIQQlJFtZnEyT9u5VJXX8zkhfNJudyJ
N88YVrPEf6Gh1Q0P+yCX0rDEb3PlP2jsYyXZtcYA5kDQ6Qq7jpLT/zrjJdaPTmzh
2NUe7jJOBfZxPCoeev7meafY2vVOgqRqMz1+DZRoOgwq+ysczzRaXmd5a2p9Tabc
L1w5FXKNJQ4apszA0cEScI+4mBIIQ7VFT3GO098GOcYsC2MelRkgONAIyamm66AP
tvLQAKoiK/xz3sEHN4zaZvN/YVHaSYZEXUP0QHdyL62P62a92aCNyrHpzKURhEDA
n8cs6icxKrS4xuVa517m53zun0brjrfeltfbO7z1A2TstFYu9BHKzRuhwV9cGRHP
EDcb7PkfA/08sDHsyfsWtzIysNo3hwCmQ6gtOW5xlrGplFfwSsXmPG4SR3ByW379
RA5h3zzrO0g7iCvbLclqHoqLTJTMS+6U43qXjnQ7DJ+mcbhRGcMHcZVKqO3QmLm+
mmkDNzNYfTgY52D5mXJqUK50750mQ8dwMSkD2TufSAPmAPUp90LdQ8u9CIv6gQ+x
A08hDHJ1cdY=
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
MIIGHDCCBASgAwIBAgIJAPZqsOOroxaHMA0GCSqGSIb3DQEBCwUAMIGdMRgwFgYD
VQQDDA9TSVNTREVOIFJvb3QgQ0ExCzAJBgNVBAYTAlBMMREwDwYDVQQHDAhXYXJz
emF3YTEuMCwGA1UECgwlTmF1a293YSBpIEFrYWRlbWlja2EgU2llYyBLb21wdXRl
cm93YTEQMA4GA1UECwwHU0lTU0RFTjEfMB0GCSqGSIb3DQEJARYQYWRtaW5Ac2lz
c2Rlbi5ldTAeFw0xNzA0MTExMzA3NTZaFw0yNzA0MDkxMzA3NTZaMIGdMRgwFgYD
VQQDDA9TSVNTREVOIFJvb3QgQ0ExCzAJBgNVBAYTAlBMMREwDwYDVQQHDAhXYXJz
emF3YTEuMCwGA1UECgwlTmF1a293YSBpIEFrYWRlbWlja2EgU2llYyBLb21wdXRl
cm93YTEQMA4GA1UECwwHU0lTU0RFTjEfMB0GCSqGSIb3DQEJARYQYWRtaW5Ac2lz
c2Rlbi5ldTCCAiIwDQYJKoZIhvcNAQEBBQADggIPADCCAgoCggIBANKT77EYYEhV
tJUnfnvQtGttfgqIzKIV2W6nPK9aDsKRTX5BVDHF6P5ZAF1u/52ATwdyTK7+LD66
Q/nCzyyA2kqTgdruX6VGucpD2DVVSVF6nZhV9PcISNaMXytoG2HHlqrim53E/rVa
rskColfs7oCxama6lPKZ/rqrJlVjA1Pl5ZtxR0IORjpOyZjSbSzKQwLp/JxHPMCU
2cVirS7aEu5UGj+Q7Ibg0AEyoAu5tnHBKun4hmIoo7LtKWNEe1TdboxOSboGJ5wd
UTEmNH+7izZ5FAogTUINjubkf2zZ65xEnN7DT/zFS30vYU1EclqCTp96EKPANogV
ZeBKntEN6M5azM6Q6+nFI56TV5DWHTIXm85zzeDj5JM7TQlIGTh8A5APHpr0YyUP
AiIUrixV2lqSDrjewey5qQcWV6WbjMS72OFKh/x7+UJICJhoUw+KwnPmWSq1WAlt
n7C+W0raSQzt7puI30LUkInKL6iEQebMoYg0eDRI5vsRIpbo+PzflIuk/Vea/D1Y
twgRc8ujoKI9GpPJyP4yO4nY7BkShLqKJ251lEJZnxq8LiFVi8aN6ZHt//OGEtVs
6L97cPzqFx7qx8vnyLBFk23lb8pilHK1G0nqxCCjakTruT/JgkLXnZcLu/IDSqd3
QLjJL0rmU9q6+RTH8A782pcBUNzeLKnlAgMBAAGjXTBbMAwGA1UdEwQFMAMBAf8w
CwYDVR0PBAQDAgHuMB0GA1UdDgQWBBSDpRyQSgaBD5XvyFOA8YHHtbUAbzAfBgNV
HSMEGDAWgBSDpRyQSgaBD5XvyFOA8YHHtbUAbzANBgkqhkiG9w0BAQsFAAOCAgEA
IA0U6znfPykr5PoQlXb/Wr4L5mY/ZtNAJsvJ8jwNMsj3ZlqLOJfnHHoG5LHkb2b/
xfM1Ee2ojmYBt4VDARqrHLLbup38Ivqt0aEco3Qx/WqbIR4IlvZBF+/qKF/wIUuc
CuBYNIy12PcLzafT+SJosj1BJ+XiUCj/RsVXIT5CxsdXIABWC+5b3T3/PrAtKk+C
sVjA/ck1KAHDd+3VUyRjLAAekYWA9C/hek3YwWQ3OvmyHos5gxifqMMDj6bx5qgv
AuIs4mYJlBlHE19GxRmo2TDwE0eZiUoUdavdRBbl9v7dex+AF2GegmnC1ouYc9kv
9moNBcuPFXuJMCOCU44aTpgEKRm3QTZTvVcUza251T+4kgT2wlFyzPqQ8hcpih4t
knlqHhNc9ibL3/qzWr093AgC9uNaNRqmqu1WAu3vs9g3DVb/RSMrUG/V0YS1GgPq
E+nVJ1AIJoee8YaxHztRfjPsmu1R3pp633lfcRPUKCkz52dZDFRPuQP36DuJzl2M
itTra0MtDUuRCsuJfVGe1op2wFprswLI0qy7O9N21D4Ab8g0ik+lhmpOf5DpYxmx
C2Xpe4d/5Xlg3wIYhEs5MnfeEy4lSMA4cxwJs11gVYHba62L7/5lqzpPmHdRYHu3
Vf0pM/6zniQpy58Pf9+9CNU15I3iWF5K3zmevFArd6s=
-----END CERTIFICATE-----

View File

@ -23,7 +23,7 @@ services:
     - EWS_HPFEEDS_FORMAT=json
    env_file:
     - /opt/tpot/etc/compose/elk_environment
-    image: "dtagdevsec/ewsposter:1903"
    image: "dtagdevsec/ewsposter:2006"
    volumes:
     - /data:/data
     - /data/ews/conf/ews.ip:/opt/ewsposter/ews.ip

View File

@ -1,10 +1,11 @@
- FROM alpine
FROM alpine:latest
#
# Include dist
#ADD dist/ /root/dist/
#
# Get and install dependencies & packages
- RUN apk -U add \
RUN sed -i 's/dl-cdn/dl-2/g' /etc/apk/repositories && \
apk -U add \
git \
py3-libxml2 \
py3-lxml \

View File

@ -12,6 +12,6 @@ services:
     - NET_ADMIN
     - SYS_NICE
     - NET_RAW
-    image: "dtagdevsec/fatt:1903"
    image: "dtagdevsec/fatt:2006"
    volumes:
     - /data/fatt/log:/opt/fatt/log

View File

@ -1,10 +1,11 @@
- FROM alpine
FROM alpine:latest
#
# Include dist
ADD dist/ /root/dist/
#
# Setup apk
- RUN apk -U --no-cache add \
RUN sed -i 's/dl-cdn/dl-2/g' /etc/apk/repositories && \
apk -U --no-cache add \
build-base \
git \
go \
@ -13,32 +14,32 @@ RUN apk -U --no-cache add \
libnetfilter_queue-dev \
libcap \
libpcap-dev && \
#
# Setup go, glutton
export GOPATH=/opt/go/ && \
- go get -d github.com/mushorg/glutton && \
- cd /opt/go/src/github.com/satori/ && \
- rm -rf go.uuid && \
- git clone https://github.com/satori/go.uuid && \
- cd go.uuid && \
- git checkout v1.2.0 && \
- mv /root/dist/system.go /opt/go/src/github.com/mushorg/glutton/ && \
- cd /opt/go/src/github.com/mushorg/glutton/ && \
export GO111MODULE=on && \
mkdir -p /opt/go && \
cd /opt/go/ && \
git clone https://github.com/mushorg/glutton && \
cd /opt/go/glutton/ && \
mv /root/dist/system.go /opt/go/glutton/ && \
go mod download && \
make build && \
cd / && \
mkdir -p /opt/glutton && \
- mv /opt/go/src/github.com/mushorg/glutton/bin /opt/glutton/ && \
- mv /opt/go/src/github.com/mushorg/glutton/config /opt/glutton/ && \
- mv /opt/go/src/github.com/mushorg/glutton/rules /opt/glutton/ && \
mv /opt/go/glutton/bin /opt/glutton/ && \
mv /opt/go/glutton/config /opt/glutton/ && \
mv /opt/go/glutton/rules /opt/glutton/ && \
- ln -s /sbin/xtables-legacy-multi /sbin/xtables-multi && \
setcap cap_net_admin,cap_net_raw=+ep /opt/glutton/bin/server && \
- setcap cap_net_admin,cap_net_raw=+ep /sbin/xtables-multi && \
setcap cap_net_admin,cap_net_raw=+ep /sbin/xtables-legacy-multi && \
#
# Setup user, groups and configs
addgroup -g 2000 glutton && \
adduser -S -s /bin/ash -u 2000 -D -g 2000 glutton && \
mkdir -p /var/log/glutton && \
mv /root/dist/rules.yaml /opt/glutton/rules/ && \
#
# Clean up
apk del --purge build-base \
git \
@ -47,8 +48,8 @@ RUN apk -U --no-cache add \
rm -rf /var/cache/apk/* \
/opt/go \
/root/dist
#
# Start glutton
WORKDIR /opt/glutton
USER glutton:glutton
- CMD exec bin/server -i $(/sbin/ip address | grep '^2: ' | awk '{ print $2 }' | tr -d [:punct:]) -l /var/log/glutton/glutton.log
CMD exec bin/server -i $(/sbin/ip address | grep '^2: ' | awk '{ print $2 }' | tr -d [:punct:]) -l /var/log/glutton/glutton.log > /dev/null 2>&1
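The CMD derives the capture interface from the second entry of ip address output; broken down step by step (interface name is illustrative):

  /sbin/ip address | grep '^2: '                                          # e.g. "2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> ..."
  /sbin/ip address | grep '^2: ' | awk '{ print $2 }'                     # e.g. "eth0:"
  /sbin/ip address | grep '^2: ' | awk '{ print $2 }' | tr -d [:punct:]   # e.g. "eth0"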

View File

@ -1,6 +1,7 @@
package glutton
import (
"errors"
"fmt"
"log"
"os"
@ -10,13 +11,19 @@ import (
"time" "time"
) )
func countOpenFiles() int { func countOpenFiles() (int, error) {
out, err := exec.Command("/bin/sh", "-c", fmt.Sprintf("lsof -p %v", os.Getpid())).Output() if runtime.GOOS == "linux" {
if err != nil { if isCommandAvailable("lsof") {
log.Fatal(err) out, err := exec.Command("/bin/sh", "-c", fmt.Sprintf("lsof -p %d", os.Getpid())).Output()
if err != nil {
log.Fatal(err)
}
lines := strings.Split(string(out), "\n")
return len(lines) - 1, nil
}
return 0, errors.New("lsof command does not exist. Kindly run sudo apt install lsof")
} }
lines := strings.Split(string(out), "\n") return 0, errors.New("Operating system type not supported for this command")
return len(lines) - 1
} }
func countRunningRoutines() int { func countRunningRoutines() int {
@ -36,3 +43,11 @@ func (g *Glutton) startMonitor(quit chan struct{}) {
}
}()
}
func isCommandAvailable(name string) bool {
cmd := exec.Command("/bin/sh", "-c", "command -v "+name)
if err := cmd.Run(); err != nil {
return false
}
return true
}
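isCommandAvailable shells out to command -v, and countOpenFiles counts the lines lsof prints for the current PID; the same checks can be reproduced in a shell for comparison (a sketch, not part of the image):

  command -v lsof >/dev/null && echo "lsof available" || echo "lsof missing"
  lsof -p $$ | wc -l   # roughly the value countOpenFiles returns for this shell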

View File

@ -9,10 +9,11 @@ services:
    restart: always
    tmpfs:
     - /var/lib/glutton:uid=2000,gid=2000
     - /run:uid=2000,gid=2000
    network_mode: "host"
    cap_add:
     - NET_ADMIN
-    image: "dtagdevsec/glutton:1903"
    image: "dtagdevsec/glutton:2006"
    read_only: true
    volumes:
     - /data/glutton/log:/var/log/glutton

View File

@ -0,0 +1,78 @@
FROM alpine
#
# Include dist
ADD dist/ /root/dist/
#
# Get and install dependencies & packages
RUN sed -i 's/dl-cdn/dl-2/g' /etc/apk/repositories && \
apk -U --no-cache add \
git \
nginx \
nginx-mod-http-headers-more \
php7 \
php7-cgi \
php7-ctype \
php7-fileinfo \
php7-fpm \
php7-json \
php7-mbstring \
php7-openssl \
php7-pdo \
php7-pdo_pgsql \
php7-pdo_sqlite \
php7-session \
php7-sqlite3 \
php7-tokenizer \
php7-xml \
php7-zip && \
#
# Clone and setup Heimdall, Nginx
git clone https://github.com/linuxserver/heimdall && \
cp -R heimdall/. /var/lib/nginx/html && \
rm -rf heimdall && \
cd /var/lib/nginx/html && \
cp .env.example .env && \
php artisan key:generate && \
#
## Add previously configured content
mkdir -p /var/lib/nginx/html/storage/app/public/backgrounds/ && \
cp /root/dist/app/bg1.jpg /var/lib/nginx/html/public/img/bg1.jpg && \
cp /root/dist/app/t-pot.png /var/lib/nginx/html/public/img/heimdall-icon-small.png && \
cp /root/dist/app/app.sqlite /var/lib/nginx/html/database/app.sqlite && \
cp /root/dist/app/cyberchef.png /var/lib/nginx/html/storage/app/public/icons/ZotKKZA2QKplZhdoF3WLx4UdKKhLFamf3lSMcLkr.png && \
cp /root/dist/app/eshead.png /var/lib/nginx/html/storage/app/public/icons/77KqFv4YIshXUDLDoOvZ1NUbsKDtsMAjJvg4sYqN.png && \
cp /root/dist/app/tsec.png /var/lib/nginx/html/storage/app/public/icons/RHwXCfCeGNDdhYgzlShL9o4NBFL2LHZWajgyeL0a.png && \
cp /root/dist/app/spiderfoot.png /var/lib/nginx/html/storage/app/public/icons/s7uPe1frJqjv76oI6SNqNbWUsgU1GHYqRALMlwYb.png && \
cp /root/dist/html/*.html /var/lib/nginx/html/public/ && \
cp /root/dist/html/favicon.ico /var/lib/nginx/html/public/favicon-16x16.png && \
cp /root/dist/html/favicon.ico /var/lib/nginx/html/public/favicon-32x32.png && \
cp /root/dist/html/favicon.ico /var/lib/nginx/html/public/favicon-96x96.png && \
cp /root/dist/html/favicon.ico /var/lib/nginx/html/public/favicon.ico && \
#
## Change ownership, permissions
chown root:www-data -R /var/lib/nginx/html && \
chmod 775 -R /var/lib/nginx/html/storage && \
chmod 775 -R /var/lib/nginx/html/database && \
sed -i "s/user = nobody/user = nginx/g" /etc/php7/php-fpm.d/www.conf && \
sed -i "s/group = nobody/group = nginx/g" /etc/php7/php-fpm.d/www.conf && \
sed -i "s#;upload_tmp_dir =#upload_tmp_dir = /var/lib/nginx/tmp#g" /etc/php7/php.ini && \
sed -i "s/9000/64304/g" /etc/php7/php-fpm.d/www.conf && \
sed -i "s/APP_NAME=Heimdall/APP_NAME=T-Pot/g" /var/lib/nginx/html/.env && \
## Add Nginx / T-Pot specific configs
rm -rf /etc/nginx/conf.d/* /usr/share/nginx/html/* && \
cp /root/dist/conf/nginx.conf /etc/nginx/ && \
cp -R /root/dist/conf/ssl /etc/nginx/ && \
cp /root/dist/conf/tpotweb.conf /etc/nginx/conf.d/ && \
cp /root/dist/start.sh / && \
## Pack database for first time usage
cd /var/lib/nginx && \
tar cvfz first.tgz /var/lib/nginx/html/database /var/lib/nginx/html/storage && \
#
# Clean up
apk del --purge \
git && \
rm -rf /root/* && \
rm -rf /var/cache/apk/*
#
# Start nginx
CMD /start.sh && php-fpm7 && exec nginx -g 'daemon off;'
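The build packs the freshly generated Heimdall database and storage into first.tgz, presumably so start.sh can restore defaults on first run (start.sh itself is not shown in this diff); the archive can be inspected from inside the built image, e.g.:

  tar tvfz /var/lib/nginx/first.tgz | head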

BIN docker/heimdall/dist/app/app.sqlite (vendored executable file, binary not shown)
BIN docker/heimdall/dist/app/bg1.jpg (vendored file, binary not shown, 510 KiB)
BIN docker/heimdall/dist/app/cyberchef.png (vendored file, binary not shown, 5.8 KiB)
BIN docker/heimdall/dist/app/eshead.png (vendored file, binary not shown, 13 KiB)
BIN docker/heimdall/dist/app/spiderfoot.png (vendored file, binary not shown, 5.0 KiB)
BIN docker/heimdall/dist/app/t-pot.png (vendored file, binary not shown, 191 KiB)

Some files were not shown because too many files have changed in this diff.