223 Commits

Author SHA1 Message Date
9a42192a6c Update screenshot 2024-04-22 17:23:50 +02:00
60be54059b Merge 24.04 into master and prepare release
Merge 24.04 into master and prepare release
2024-04-22 17:10:17 +02:00
0e73986772 Prepare for merge into master 2024-04-22 17:08:22 +02:00
35d68c88cd resolve merge conflicts 2024-04-19 18:20:39 +02:00
85431b308d add 24.04 version tag 2024-03-24 19:22:37 +01:00
086116f64d tweaking 2024-03-24 18:15:58 +01:00
42689cf902 tweaking 2024-03-24 17:53:11 +01:00
0488849a37 tweaking 2024-03-24 17:28:26 +01:00
ebffec9b0f tweaking 2024-03-24 17:22:43 +01:00
3e9c94c3ac tweaking 2024-03-24 17:01:29 +01:00
e2d9362f8a tweaking 2024-03-24 16:59:02 +01:00
3a81e988da finish documentation
add uninstaller playbook and script
tweaking and cleanup
2024-03-24 16:21:51 +01:00
013f817c19 fix an issue where micro is missing in opensuse tumbleweed 2024-03-23 21:41:17 +01:00
d594c2aa0c fix an issue where crontab (cron) is missing in ubuntu 2024-03-23 21:23:28 +01:00
8a64930b59 fix an issue where crontab (cronie) is missing in fedora 2024-03-23 20:08:27 +01:00
e0182cae6b fix an issue where tar is missing for alma and rocky 2024-03-23 19:21:32 +01:00
d8c187b176 fix an issue where micro, htop and exa are no longer available for alma and rocky
fix an issue for download links for supported linux distros
2024-03-23 19:16:50 +01:00
de9db43ae0 continue with documentation 2024-03-22 21:12:37 +01:00
558280a041 continue with documentation 2024-03-22 21:00:10 +01:00
cf5df3b60b continue with documentation
fix tpotinit entrypoint.sh to resolve a conflict with sensor deployment where data folder is not yet owned by tpot user
2024-03-22 20:47:39 +01:00
4585d750e1 add htop, cron job to t-pot playbook 2024-03-22 18:09:18 +01:00
9c1120384b add logo to customizer, change path for genuser.sh, adjust README accordingly 2024-03-22 17:27:21 +01:00
fc0ca4c935 continue with documentation
cleanup preview related folders
fix typos / errors
2024-03-22 16:48:40 +01:00
e21eb1aef6 begin with documentation 2024-03-21 19:32:10 +01:00
0580215afd tweak updater 2024-03-19 21:14:40 +01:00
f9ff4d10aa add updater 2024-03-19 21:05:25 +01:00
abe03436d4 add tpot.service 2024-03-19 13:56:35 +01:00
234fb16394 tweaking
where possible kibana visualizations are converted to lens objects (more than 100 objects)
all dashboards have been updated
fixes #1392 for leaving SentryPeer log tag out
add wordpot dashboard
after discussion (#1486) and testing iptables-legacy is no longer required
include all kibana objects for installation
cleaning up some service scripts
2024-03-18 16:19:49 +01:00
3546e31a7c tweaking 2024-03-15 22:41:12 +01:00
b0a6ce432a add wordpot to compose files 2024-03-13 17:03:30 +01:00
6c5e34f2bf fix typo 2024-03-13 16:37:05 +01:00
3c7e27d9ad better wording 2024-03-13 16:34:35 +01:00
fe5eac0104 add genuser.sh, fix errors
macOS and Windows might not have htpasswd tools available, so this is added to the tpotinit image. Users can run genuser.sh, which simply contains a docker command to run tpotinit to create a user and add it to the T-Pot config (.env).
Fix an issue where WEB_USER was added with hyphens
Fix issues where shebang was incorrect
Update .env / env.example
2024-03-13 16:30:17 +01:00
1e5e57a52b fix git tree error 2024-03-12 17:37:23 +01:00
540d5574d1 cleanup, tweaking, updating
make tpotinit aware of sigterm events to unload blackhole routes, firewall rules
fixes #1204 where citrixhoneypot logs use logs instead of log folder
bump ELK stack to 8.12.2
add wordpot logs to logstash pipeline
bump t-pot attackmap to 2.2.0, alpine 3.19
2024-03-12 17:03:43 +01:00
1da35284be update, tweaking, add
add wordpot incl. json logging with activated plugins
bump snare, tanner, phpox, tanner_redis to latest master and to alpine 3.19
2024-03-11 17:33:53 +01:00
4baac7ac04 update esvue, cleanup 2024-03-11 09:45:01 +01:00
97adcbeb1b tweaking
updating .env, env.example and compose files regarding sentrypeer ENVs
make glutton image aware of payloads feature
bump glutton to latest master, alpine 3.19, multi-stage build
bump ipphoney to alpine 3.19
bump mailoney to alpine 3.19, adjust for py3
revert medpot to previous master, use multi stage build and alpine 3.19
bump cyberchef to latest master
bump nginx to alpine 3.19
bump p0f to alpine 3.19, use multi stage build
bump redishoneypot to alpine 3.19, use multi stage build
bump sentrypeer to latest master, fix bug for open ports in compose files, now all tcp/5060, udp/5060 traffic will be seen
bump spiderfoot to latest master
bump spiderfoot to alpine 3.19
bump suricata to 7.0.2, fix performance issue with capture-filter-bpf by reducing the rules
update clean.sh to include glutton payloads folder
2024-03-09 12:11:14 +01:00
c45870594b tweaking
multi stage build for dicompot
rebuild fatt, glutton, hellpot, honeypots for alpine 3.19
bump glutton, hellpot, honeypots to latest master
2024-03-05 19:50:35 +01:00
932ad6b27c Fix repack for AMD64 .iso (#1481) 2024-03-04 15:23:27 +01:00
519a101fdf tweaking 2024-02-28 21:05:03 +01:00
be74fc75ca tweaking
healthcheck, watch pid not cpu
cleanup dockerfiles
bump dicompot, heralding, elasticpot, endlessh to alpine 3.19
bump dionaea, heralding to latest master
2024-02-28 19:07:22 +01:00
285b37a00d cleanup 2024-02-27 20:28:07 +01:00
f9a9c8c4bf tweak deploy, add autoheal, start update Dockerfiles
- tweak deploy a little further
- start with rebuilding Dockerfiles
- rework healthcheck for adbhoney CPU issues
- bump adbhoney, ciscoasa, citrixhoneypot, conpot, cowrie, ddospot to alpine 3.19
- fix conpot issue with py 3.11
- bump conpot to latest master
- bump cowrie to latest master
- add autoheal to tpotinit to restart unhealthy container (if healthcheck enabled)
2024-02-27 20:23:30 +01:00
22d2bdff7e update .env 2024-02-23 20:41:58 +01:00
2723becd96 continue work on sensor deploy 2024-02-23 20:30:12 +01:00
127f0c2c92 point installer and ansible to alpha 2024-02-23 17:30:36 +01:00
31f09413e0 continue working on env, tpotinit and deploy 2024-02-23 16:41:52 +01:00
72fd6d963b start rework env, sensor deploy 2024-02-22 19:09:52 +01:00
a4262e9aae Add SENSOR type to installer with info to deploy from HIVE. 2024-02-21 16:20:18 +01:00
4f41b84103 Adjust T-Pot config file, tpotinit
fix logrotate.conf path
add tpotinit logging
add support for LS_WEB_USER in tpot config (.env)
make tpotinit always validate config / adjust users on tpotinit start
2024-02-19 17:34:14 +01:00
09b75cb5be Start working on new landing page
Remove old code
2024-02-16 19:32:02 +01:00
0dda858ac1 Start working on new landing page
Remove old code
2024-02-16 19:15:17 +01:00
0c9b58b6ac Remove Cockpit 2024-02-15 18:32:16 +01:00
380ade13a3 make heralding work with py3.10+ 2024-02-14 22:47:41 +01:00
e94f70a15f Revert to medpot (75a2e6134cf926c35b6017d62542274434c87388) from 2 years ago since current master is broken. 2024-02-14 21:14:40 +01:00
8bbfe7ac62 Fix manuf location 2024-02-14 20:16:13 +01:00
7ae6c73b88 Testing and developing in alpha branch 2024-02-14 19:23:25 +01:00
efd5465837 work on permissions, folders and tpotinit 2024-02-14 19:04:05 +01:00
ef2f5b3f93 Rework .env / env.example
Add more functions to customizer.py (improve port and service checks, improve user output)
Adjust docker-compose files
2024-02-13 19:02:40 +01:00
e7aecf560d Add T-Pot Service Builder 2024-02-12 19:18:57 +01:00
bd4df39538 fix missing replace for x86_64 > amd64
thanks to @shark4ce for taking the time to test, debug and offer a solution #1472.
2024-02-06 13:45:12 +01:00
2fe2d59129 remove auto reload 2024-01-05 22:07:19 +01:00
7ba5567e70 add logstash http_input support for nginx
remove cockpit support entirely
cleanup / housekeeping
2024-01-05 21:31:13 +01:00
0f7dc73f1a provide better example 2024-01-05 20:28:39 +01:00
1da37b5f85 re-implement distributed feature, without ssh
add sensor compose file
add distributed option to tpot config
housekeeping / cleanup
2024-01-05 20:19:50 +01:00
c634d294c7 Update .env 2024-01-05 12:00:36 +01:00
908ca2a45b update builder 2023-12-11 11:04:51 +01:00
faec613b9a add persistence to ENVs
add ENVs checker to keep tpotinit from starting if ENVs are not present or incorrectly set
2023-12-08 18:00:09 +01:00
406a7314ee fix logrotate config, fix version 2023-12-07 18:44:24 +01:00
cf91caaf8c fix alias 2023-11-01 16:19:24 +01:00
13326985a9 Add support for Raspbian (64 Bit) 2023-11-01 15:51:56 +01:00
15d65dbc25 Add Raspbian as supported OS (64 Bit) 2023-11-01 15:50:16 +01:00
05bdfd3855 Update ansible installer 2023-11-01 14:13:16 +01:00
5ebeffe31c Finetune raspberry_showcase.yml 2023-10-31 17:33:40 +01:00
5ca4136ebe add citation 2023-08-28 10:38:40 +02:00
02098f9b76 Update Citation 2023-08-28 10:29:24 +02:00
649163e06f Update Citation 2023-08-28 10:16:18 +02:00
9d66bcb7d3 Add Bibtex, closes #1398 2023-08-28 10:02:59 +02:00
dc4384d6ab Merge pull request #1369 from swiftsolves-msft/pr-azure
Azure Deployment via ARM template
2023-08-22 13:36:09 +02:00
90fa3b30e9 Update 2023-07-22 11:55:10 +02:00
32ba41497a Update 2023-07-20 19:16:10 +02:00
d2eaaab4df Update builder to push to GHCR and DockerHub 2023-07-20 18:59:01 +02:00
a8f5555324 - Prepare a docker compose file for a Raspberry Showcase
- Add config settings for the T-Pot Attack Map 2.1.0
2023-07-20 18:41:56 +02:00
cbbd2aa6c8 Update 2023-07-11 19:26:13 +02:00
6f978e3b5d Add Raspberry Pi support. 2023-07-11 19:19:51 +02:00
94445800de Add Raspberry Pi support. 2023-07-06 17:44:57 +02:00
338ebcef80 Add check if Playbook ran successfully. 2023-07-05 23:34:15 +02:00
ccdbb950d1 fix typo 2023-07-05 23:16:15 +02:00
12af5c9d46 Handle password securely, needs htpasswd to create user:password:
- Update tpotinit and entrypoint.sh to reflect this
- Update install.sh to reflect this
- Update .env / env.example to reflect this

Reorder recommended packages in T-Pot Playbook
Add packages to T-Pot Playbook to ensure manual deployment via Ansible will offer the same environment as manual local installation via install.sh and local Ansible deployment.
2023-07-05 23:03:41 +02:00
b3f1b71054 Tweaking:
- Ansible Playbooks refinement
- Add Ansible Bootstrapping
- Add some notes
2023-07-05 17:55:59 +02:00
69be264eae Notes for Dev Preview 2023-07-04 00:41:12 +02:00
fd74707f07 Notes for Dev Preview 2023-07-04 00:38:35 +02:00
1ebdfc2eac Add install support for Alma Linux. 2023-07-03 23:43:31 +02:00
45d7b60d4c Add install support for Rocky Linux. 2023-07-03 22:47:13 +02:00
4dfb9a9caf tweak installer
fix issue with selinux on Fedora
2023-07-03 16:45:40 +02:00
ae9a2dd2ee Tweaking
- reorder install.sh
2023-07-02 15:25:58 +02:00
e26a8a2b39 Tweaking 2023-07-02 15:05:55 +02:00
f7fc81a8ad Update Installer / Playbooks
- add tags
- reorder
- fix errors
2023-07-02 14:52:06 +02:00
1af7cdcaa1 Azure Deployment via ARM template
The following is an Azure deployment of T-Pot using an ARM template. It creates a Debian 11 VM, disks, NIC, NSG and PIP, and leverages cloud-init customData to pass a Base64 encoded string of a cloud-init YAML file; an example is in the readme docs.
2023-07-02 00:56:38 -04:00
cbcfa6d1f0 tweaking 2023-07-01 03:26:26 +02:00
9f9aed8176 tweaking 2023-07-01 01:23:57 +02:00
df0581b491 tweaking 2023-07-01 00:31:50 +02:00
5a7c4b54e6 tweaking 2023-06-30 23:49:47 +02:00
3eead2740e tweaking 2023-06-30 23:42:19 +02:00
3f472f594a tweaking 2023-06-30 23:23:15 +02:00
38b1e99673 tweaking 2023-06-30 22:51:25 +02:00
4df54390dc tweak install script and configs 2023-06-30 22:03:53 +02:00
58ca11f85e tweaking 2023-06-30 14:58:10 +02:00
2d1a06551c tweak installer, playbooks 2023-06-30 13:15:30 +02:00
e4b73c5be7 update distro names 2023-06-30 11:22:50 +02:00
5465a5e364 update distro names 2023-06-30 11:19:49 +02:00
eceb08317e use ghcr for testing 2023-06-30 11:03:16 +02:00
51154d7857 download images during install, tweaking 2023-06-29 18:43:08 +02:00
4c74690c41 tweaking 2023-06-29 13:29:42 +02:00
9815453623 add support for local cache 2023-06-29 13:06:43 +02:00
81aad58c2f adjust build script for docker engine
builder setup is no longer needed
amd64 and arm64 support
2023-06-29 12:22:19 +02:00
65a443d778 add installer
Instead of individual installers for each distribution there is only one necessary now that we are using Ansible.
2023-06-27 19:59:19 +02:00
20559345b0 add installer
Instead of individual installers for each distribution there is only one necessary now that we are using Ansible.
2023-06-27 19:55:46 +02:00
ef812c6b82 Merge branch 'master' into dev 2023-06-27 11:58:39 +00:00
81fab84040 add bookworm check to updates
While not supported, the update script will no longer break if bookworm is found.
2023-06-27 09:53:28 +00:00
a0c5a8c0e7 fix port definitions
- docker-compose no longer accepts ports definitions when network_mode: host is set
- previous versions simply ignored the ports definitions; the updated docker-compose, however, breaks with an error
2023-06-27 09:23:52 +00:00
72502ebbe6 tweaking 2023-06-26 18:10:39 +02:00
25eea5b9ab cleanup installer 2023-06-26 17:41:30 +02:00
df4ca7ccd0 tweak ansible uninstall 2023-06-26 17:36:40 +02:00
3c92e6ec06 add ansible uninstall 2023-06-26 04:59:52 +02:00
9be17e982b ansible tweaking, finalize suse 2023-06-25 16:56:18 +02:00
1094b33665 start adding openSUSE Tumbleweed 2023-06-25 13:17:33 +02:00
e2e20e3684 add fedora to installer, tweaking 2023-06-24 14:05:13 +02:00
95c6a8e28a add support for Ubuntu, begin work on Fedora 2023-06-22 18:30:18 +00:00
d7bcfda109 add git clone 2023-06-22 18:36:37 +02:00
048cbb8b6c sync hw clock to system 2023-06-22 17:17:42 +02:00
29a445da4e start work on ansible installer 2023-06-21 23:21:11 +02:00
4671dc8729 Begin of restructuring ...
- tweaking before re-work tpotinit
2023-06-19 15:19:15 +02:00
050c898149 Begin of restructuring ...
- tweaking before re-work tpotinit
2023-06-14 02:17:09 +02:00
ecb1dcd338 Merge pull request #1351 from telekom-security/master
fixes #1346
2023-06-14 00:02:35 +02:00
2c4eaf0794 Begin of restructuring ...
- deprecate old release
- set virtual version
- we need tpot user / group, adding to installer
- tweaking
- do not use the dev branch, it will break stuff
2023-06-13 23:59:09 +02:00
c807c7cd17 Begin of restructuring ...
- deprecate old release
- set virtual version
- we need tpot user / group, adding to installer
- tweaking
- do not use the dev branch, it will break stuff
2023-06-13 23:58:46 +02:00
c1808161e4 fixes #1346 2023-06-07 05:54:17 +00:00
bd12e1a4c0 Merge pull request #1338 from kauedg/dps-patch-1
call $0 instead of hardcoded script name
2023-06-01 13:28:04 +02:00
edda041093 call $0 instead of hardcoded script name
Allows the script to work when called from another directory or if the script name changes.
2023-05-31 14:47:15 -03:00
e3b1fd298a Prepare fix for #1336. 2023-05-31 17:21:15 +02:00
1a2d34c013 bump elk to 8.6.2, rebuild images 2023-05-30 14:35:45 +00:00
00d6d1b4c7 Add T-Pot Technical Preview 2023-05-30 12:22:10 +02:00
87ef005c17 tweaking for tpotlight 2023-05-27 14:49:20 +02:00
9941818a6e Create SECURITY.md 2023-05-12 18:37:04 +02:00
f438be7e27 Allow for automatic geoip db downloads 2023-05-07 18:10:23 +02:00
efd5f4c54c fixes #1320 2023-05-03 22:01:36 +00:00
35188ef28e add option to retrieve ENVs from file 2023-05-02 13:11:05 +02:00
e7963dbdaa update ddospot folders 2023-04-30 22:51:03 +02:00
918a408357 Merge branch 'master' of https://github.com/telekom-security/tpotce 2023-04-27 18:44:30 +02:00
5fd0d158e6 Add Nginx Cockpit Awareness 2023-04-27 18:42:38 +02:00
5265e3945a bump ewsposter to 1.25.0 2023-04-26 08:47:28 +00:00
a08a475f57 tweaking 2023-04-25 17:47:44 +00:00
ff7c368c7f update landing page
make relative links (T-Pot home) dynamic to display them only if services are available
adjust dimensions for link container
correct github link
place attack-map link in the home container
2023-04-25 15:03:26 +02:00
88ab453061 Merge pull request #1283 from tadashi-oya/fix-empty-myINSTALLPACKAGES
fix empty myINSTALLPACKAGES
2023-03-23 16:21:18 +01:00
4bae09e408 fix empty myINSTALLPACKAGES 2023-03-20 05:55:21 +00:00
668a4d91a7 bump ewsposter to 1.24.0 2023-02-24 14:34:49 +00:00
1a20de2f7f Merge pull request #1266 from kawaiipantsu/kawaiipantsu-request-uri-size
Fixing uri max size
2023-02-23 16:54:53 +01:00
350179fc89 Added detailed comment
Added a detailed comment on what the change is needed for and why it's there
2023-02-23 16:51:42 +01:00
f3a6461eaa Fixing uri max size
Changing URI max size from 1024 to 1280 bytes
2023-02-21 01:13:52 +01:00
fc17d850b5 bump t-pot-attack-map to v2.0.1 2023-02-14 17:41:02 +00:00
44c38d809b Merge pull request #1259 from kawaiipantsu/patch-1
Update updateip.sh
2023-02-10 14:52:40 +01:00
5eb9368064 Update updateip.sh
Make sure to target the root partition; Debian will often come with /boot/efi or similar. This little hack uses a regular expression to match a line starting with / followed by a blank, so only the root partition should match.
2023-02-09 13:31:08 +01:00
72a3b51bd4 bump t-pot-attack-map to 1.2.0 2023-02-04 00:29:26 +00:00
f786769527 bump t-pot-attack-map to 1.1.2 2023-02-03 20:37:27 +00:00
23934bc693 bump t-pot-attack-map to 1.1.1, add nginx cache header 2023-02-03 18:16:32 +00:00
7e60b46732 fixes #1254, fixes #1253
- #1254: new ELK images will be provided shortly
- #1253: documentation and updater will now reflect that an update from 20.06.x is no longer possible
2023-01-26 10:49:24 +00:00
c178d878ab bump ELK to 8.5.3 2023-01-23 16:33:09 +00:00
390390fd43 bump to alpine 3.17, tweaking, fixes for py 3.10 2023-01-23 15:42:59 +00:00
8119aca317 tweaking 2023-01-23 12:04:40 +00:00
2fd0f62484 bump to alpine 3.17 2023-01-20 17:48:46 +00:00
90eab744b1 bump cyberchef to 9.55.0, fix glitches 2023-01-20 17:42:17 +00:00
8547699061 bump cowrie to 2.5.0 2023-01-19 17:15:08 +00:00
2b5127fbdb update readme 2023-01-19 13:18:28 +00:00
4382413672 bump t-pot-attack-map to 1.1.0, buildx to 0.10.0 2023-01-19 11:42:25 +00:00
516bec1deb fixes #1241 2023-01-10 17:56:18 +00:00
ede61b81d9 update map to fix CVE 2023-01-06 19:53:05 +00:00
59cca98e7f update geoip map to latest release
update nginx to include brotli and gzip compression
improve load performance
2023-01-06 18:58:03 +00:00
2641d1e743 bump elastic stack to 8.4.3 2022-11-02 16:37:01 +00:00
3b2e8a4c70 tweaking 2022-11-02 07:54:42 +00:00
16fe4b1d28 bump sentrypeer to 2.0 2022-11-01 15:26:24 +00:00
b34644f1a8 add link for py3 2022-11-01 11:59:52 +00:00
7fa447943d bump medpot to latest fork master 2022-11-01 10:52:47 +00:00
c9b4bd27e6 bump buildx to 0.8.2 2022-11-01 10:46:24 +00:00
38edadb3da bump log4pot to latest master 2022-11-01 09:39:11 +00:00
5da8431e3a bump cyberchef, esvue to latest master 2022-10-31 17:01:04 +01:00
ccb94b1529 revert buildx to 0.8.1 2022-10-31 15:41:59 +00:00
e2cbd981ca bump hellpot to latest master 2022-10-14 14:55:28 +00:00
48f3c842b5 bump fatt to latest master 2022-10-13 14:06:09 +00:00
f9179e3e21 bump cowrie to 2.4.0 2022-10-13 08:44:55 +00:00
5c30a57280 Merge pull request #1173 from zambroid/patch-1
Corrected small typos
2022-10-12 13:54:49 +02:00
8410f84fe9 bump adbhoney to latest master 2022-10-12 11:52:17 +00:00
d9aa6bd525 Update README.md 2022-10-12 13:45:01 +02:00
ee547994dc Merge pull request #1187 from ctulio/url-fix
Update some url repos
2022-10-12 13:22:03 +02:00
0316bc7a2c bump buildx to 0.9.1 2022-10-12 09:50:10 +02:00
c9f6320446 Update some url repos 2022-10-11 22:39:55 -04:00
b8e3df97dc bump ewsposter to latest master, update packages 2022-10-11 15:13:47 +00:00
bac0d3c30c Update README.md 2022-09-02 17:30:04 +02:00
db1e65b968 Made small adjustments to the readme file
The readme file contained small typos; I tried to identify them and my proposed new version of the file is here.
2022-08-25 09:23:29 +02:00
1122d3728e Bump ELK Stack to 8.3.3 2022-08-17 16:34:53 +00:00
b696ec7b39 Merge pull request #1135 from cha147/patch-1 2022-07-14 00:06:23 +02:00
a22a7d98c4 fix typos in readme 2022-07-13 14:35:50 -07:00
a3bda5de8f bump Elastic stack to 8.2.3 2022-06-15 14:29:23 +00:00
5f0c337f09 bump elk, log4pot, honeytrap, dionaea to ubuntu 22.04 2022-06-14 10:47:11 +00:00
fc93db2bc4 fix cleanup medpot 2022-06-14 08:04:35 +00:00
421b3d3020 bump medpot to latest master 2022-06-14 07:51:14 +00:00
1eaec0036e prep for new medpot, honeypots and some tweaking 2022-06-13 11:59:40 +00:00
afb16dcc96 Fix typo, fixes #1111 2022-06-09 17:38:39 +02:00
15f7a17935 Comment ENV opt-in for SentryPeer 2022-06-08 11:09:29 +00:00
dcf15ca489 Opt-In for SentryPeer DHT mode, fixes #1110 2022-06-08 09:10:29 +00:00
a28dfec046 bump qHoneypots to latest master, adjust config for commands input 2022-06-07 11:19:34 +00:00
8993f59001 Bump Glutton to Alpine 3.16, decrease image size 2022-06-03 14:21:55 +00:00
09c682cd7b Bump to Alpine 3.16 for most of the images.
Glutton, Heralding, Mailoney and Snare/Tanner need work.
2022-06-02 15:47:17 +00:00
409e4bde3e Bump Cyberchef to 9.38.0, Elasticvue to 0.40.1
Bump Nginx, Spiderfoot to Alpine 3.16
2022-06-02 13:36:54 +00:00
aaef85c49d Bump SentryPeer to 1.4.1 2022-06-02 08:31:18 +00:00
73b54f5504 Bump Elastic Stack to 8.2.2 2022-06-01 10:26:49 +00:00
55da6a4841 Bump Elastic Stack to 8.2.0, update objects 2022-05-25 14:53:29 +00:00
153c11babd fix glances not showing docker containers 2022-05-24 14:58:45 +00:00
f13d08287f prep for elk 8.1.2 2022-04-15 13:11:25 +00:00
fc123d10f9 bump spiderfoot to 4.0 2022-04-14 17:15:43 +00:00
ded2124932 bump cyberchef, esvue to latest release 2022-04-14 16:52:48 +00:00
909ca358f0 Fix headings, links 2022-04-14 10:36:07 +02:00
345 changed files with 128983 additions and 20686 deletions

125
.env Normal file

@ -0,0 +1,125 @@
# T-Pot config file. Do not remove.
###############################################
# T-Pot Base Settings - Adjust to your needs. #
###############################################
# Set Web usernames and passwords here. This section will be used to create / update the Nginx password file nginxpasswd.
# <empty>: This is the default
# <base64 encoded htpasswd usernames / passwords>:
# Use 'htpasswd -n -b "username" "password" | base64 -w0' to create the WEB_USER if you want to manually deploy T-Pot, run 'install.sh' to automatically add a user during installation, or 'genuser.sh' if you just want to add a web user.
# Example: 'htpasswd -n -b "tsec" "tsec" | base64 -w0' will print dHNlYzokYXByMSRYUnE2SC5rbiRVRjZQM1VVQmJVNWJUQmNmSGRuUFQxCgo=
# Copy the string and replace WEB_USER=dHNlYzokYXByMSRYUnE2SC5rbiRVRjZQM1VVQmJVNWJUQmNmSGRuUFQxCgo=
# Multiple users are possible:
# WEB_USER=dHNlYzokYXByMSRYUnE2SC5rbiRVRjZQM1VVQmJVNWJUQmNmSGRuUFQxCgo= dHNlYzokYXByMSR6VUFHVWdmOCRROXI3a09CTjFjY3lCeU1DTloyanEvCgo=
WEB_USER=
# Set Logstash Web usernames and passwords here. This section will be used to create / update the Nginx password file lswebpasswd.
# The Logstash Web usernames are used for T-Pot log ingestion via Logstash, each sensor should have its own user.
# <empty>: This is empty by default.
# <'htpasswd encoded usernames / passwords'>:
# Use 'htpasswd -n -b "username" "password" | base64 -w0' to create the LS_WEB_USER if you want to manually deploy the sensor.
# Example: 'htpasswd -n -b "sensor" "sensor" | base64 -w0' will print c2Vuc29yOiRhcHIxJGVpMHdzUmdYJHNyWHF4UG53ZzZqWUc3aEFaUWxrWDEKCg==
# Copy the string and replace / add LS_WEB_USER=c2Vuc29yOiRhcHIxJGVpMHdzUmdYJHNyWHF4UG53ZzZqWUc3aEFaUWxrWDEKCg==
# Multiple users are possible:
# LS_WEB_USER=c2Vuc29yMTokYXByMSQ5aXhNRk5yMCR6d3F2dGFwQ2x0cFBhU1pqMm9ZemYxCgo= c2Vuc29yMjokYXByMSRtYTlOS1J2NCQvU3dsVVBMeW5RaVIyM3pyWVAzOUkwCgo=
LS_WEB_USER=
# T-Pot Blackhole
# ENABLED: T-Pot will download a db of known mass scanners and nullroute them.
# Be aware, this will put T-Pot off the map for stealth reasons and
# you will get less traffic. Routes will be active until next reboot
# and will be re-added with every T-Pot start until disabled.
# DISABLED: This is the default and no stealth efforts are in place.
TPOT_BLACKHOLE=DISABLED
# T-Pot Persistence
# on: This is the default. T-Pot will keep the honeypot logfiles and rotate
# with logrotate for 30 days.
# off: This is recommended for Raspberry Pi or setups with weaker CPUs or
# if you just do not need any of the logfiles.
TPOT_PERSISTENCE=on
# T-Pot Type
# HIVE: This is the default and offers everything to connect T-Pot sensors.
# SENSOR: This needs to be used when running a sensor. Be aware to adjust all other
# settings as well.
# 1. You will need to copy compose/sensor.yml to ./docker-compose.yml
# 2. From HIVE host you will need to copy ~/tpotce/data/nginx/cert/nginx.crt to
# your SENSOR host to ~/tpotce/data/hive.crt
# 3. On HIVE: Create a web user per SENSOR on HIVE and provide credentials below
# Create credentials with 'htpasswd ~/tpotce/data/nginx/conf/lswebpasswd <username>'
# 4. On SENSOR: Provide username / password from (3) for TPOT_HIVE_USER as base64 encoded string:
# "echo -n 'username:password' | base64 -w0"
TPOT_TYPE=HIVE
# T-Pot Hive User (only relevant for SENSOR deployment)
# <empty>: This is empty by default.
# <base64 encoded string>: Provide a base64 encoded string "echo -n 'username:password' | base64 -w0"
# i.e. TPOT_HIVE_USER='dXNlcm5hbWU6cGFzc3dvcmQ='
TPOT_HIVE_USER=
# T-Pot Hive IP (only relevant for SENSOR deployment)
# <empty>: This is empty by default.
# <IP, FQDN>: This can be either an IP (e.g. 192.168.1.1) or a FQDN (e.g. foo.bar.local)
TPOT_HIVE_IP=
# T-Pot AttackMap Text Output
# ENABLED: This is the default and the docker container map_data will print events to the console.
# DISABLED: Printing events to the console is disabled.
TPOT_ATTACKMAP_TEXT=ENABLED
# T-Pot AttackMap Text Output Timezone
# UTC: (T-Pot default) This is usually the best option.
# Continent/City: On Linux you can check your timezone with `readlink /etc/localtime` or
# see the full list here: https://en.wikipedia.org/wiki/List_of_tz_database_time_zones
# Examples: America/New_York, Asia/Taipei, Australia/Melbourne, Europe/Athens, Europe/Berlin
TPOT_ATTACKMAP_TEXT_TIMEZONE=UTC
###################################################################################
# Honeypots / Tools settings
###################################################################################
# Some services / tools offer adjustments using ENVs which can be adjusted here.
###################################################################################
# Suricata ET Pro ruleset
# OPEN: This is the default and will use the ET Open ruleset
# OINKCODE: Replace OPEN with your Oinkcode to use the ET Pro ruleset
OINKCODE=OPEN
###################################################################################
# NEVER MAKE CHANGES TO THIS SECTION UNLESS YOU REALLY KNOW WHAT YOU ARE DOING!!! #
###################################################################################
# docker.sock Path
TPOT_DOCKER_SOCK=/var/run/docker.sock
# docker compose .env
TPOT_DOCKER_ENV=./.env
# Docker-Compose file
TPOT_DOCKER_COMPOSE=./docker-compose.yml
# T-Pot Docker Repo
# Depending on where you are located you may choose between DockerHub and GHCR
# dtagdevsec: This will use the DockerHub image registry
# ghcr.io/telekom-security: This will use the GitHub container registry
TPOT_REPO=dtagdevsec
# T-Pot Version Tag
TPOT_VERSION=24.04
# T-Pot Pull Policy
# always: (T-Pot default) Compose implementations SHOULD always pull the image from the registry.
# never: Compose implementations SHOULD NOT pull the image from a registry and SHOULD rely on the platform cached image.
# missing: Compose implementations SHOULD pull the image only if it's not available in the platform cache.
# build: Compose implementations SHOULD build the image. Compose implementations SHOULD rebuild the image if already present.
TPOT_PULL_POLICY=always
# T-Pot Data Path
TPOT_DATA_PATH=./data
# OSType (linux, mac, win)
# Most docker features are available on linux
TPOT_OSTYPE=linux
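The comments above already show the `htpasswd | base64` one-liners; as a minimal sketch (not part of the shipped scripts), assuming `htpasswd` from apache2-utils is available and using placeholder credentials, the values could be generated and written into this file like this:

```
# Sketch only: generate base64 encoded htpasswd entries as described in the comments above.
# "tsec"/"tsec" and "sensor1"/"sensor1" are placeholder credentials - use your own.
WEB_USER_ENTRY=$(htpasswd -n -b "tsec" "tsec" | base64 -w0)
LS_WEB_USER_ENTRY=$(htpasswd -n -b "sensor1" "sensor1" | base64 -w0)
# Write the values into the T-Pot config file (.env in the repository root):
sed -i "s|^WEB_USER=.*|WEB_USER=${WEB_USER_ENTRY}|" .env
sed -i "s|^LS_WEB_USER=.*|LS_WEB_USER=${LS_WEB_USER_ENTRY}|" .env
```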


@ -1,37 +1,42 @@
---
name: Bug report for T-Pot
about: Bug report for T-Pot
name: Bug report for T-Pot 24.04.x
about: Bug report for T-Pot 24.04.x
title: ''
labels: ''
assignees: ''
---
Before you post your issue make sure it has not been answered yet and provide `basic support information` if you come to the conclusion it is a new issue.
# Successfully raise an issue
Before you post your issue make sure it has not been answered yet and provide **⚠️ BASIC SUPPORT INFORMATION** (as requested below) if you come to the conclusion it is a new issue.
- 🔍 Use the [search function](https://github.com/dtag-dev-sec/tpotce/issues?utf8=%E2%9C%93&q=) first
- 🧐 Check our [WIKI](https://github.com/dtag-dev-sec/tpotce/wiki)
- 📚 Consult the documentation of 💻 [Debian](https://www.debian.org/doc/), 🐳 [Docker](https://docs.docker.com/), the 🦌 [ELK stack](https://www.elastic.co/guide/index.html) and the 🍯 [T-Pot Readme](https://github.com/dtag-dev-sec/tpotce/blob/master/README.md).
- **⚠️ Provide [basic support information](#info) or similiar information with regard to your issue or we can not help you and will close the issue without further notice**
- 🧐 Check our [Wiki](https://github.com/dtag-dev-sec/tpotce/wiki) and the [discussions](https://github.com/telekom-security/tpotce/discussions)
- 📚 Consult the documentation of 💻 your Linux OS, 🐳 [Docker](https://docs.docker.com/), the 🦌 [Elastic stack](https://www.elastic.co/guide/index.html) and the 🍯 [T-Pot Readme](https://github.com/dtag-dev-sec/tpotce/blob/master/README.md).
- **⚠️ Provide [BASIC SUPPORT INFORMATION](#-basic-support-information-commands-are-expected-to-run-as-root) or similar detailed information with regard to your issue or we will close the issue or convert it into a discussion without further interaction from the maintainers**.<br>
<br>
<br>
<br>
# ⚠️ Basic support information (commands are expected to run as `root`)
<a name="info"></a>
## ⚠️ Basic support information (commands are expected to run as `root`)
**We happily take the time to improve T-Pot and take care of things, but we need you to take the time to create an issue that provides us with all the information we need.**
- What version of the OS are you currently using `lsb_release -a` and `uname -a`?
- What T-Pot version are you currently using?
- What edition (Standard, Nextgen, etc.) of T-Pot are you running?
- What OS is your T-Pot running on?
- What is the version of the OS `lsb_release -a` and `uname -a`?
- What T-Pot version are you currently using (only **T-Pot 24.04.x** is currently supported)?
- What architecture are you running on (i.e. hardware, cloud, VM, etc.)?
- Did you have any problems during the install? If yes, please attach `/install.log` `/install.err`.
- Review the `~/install_tpot.log`, attach the log and highlight the errors.
- How long has your installation been running?
- If it is a fresh install consult the documentation first.
- Most likely it is a port conflict or a remote dependency was unavailable.
- Retry a fresh installation and only open the issue if the error keeps coming up and is not resolved using the documentation as described [here](#how-to-raise-an-issue).
- Did you install upgrades, packages or use the update script?
- Did you modify any scripts or configs? If yes, please attach the changes.
- Please provide a screenshot of `glances` and `htop`.
- Please provide a screenshot of `htop` and `docker stats`.
- How much free disk space is available (`df -h`)?
- What is the current container status (`dps.sh`)?
- What is the status of the T-Pot service (`systemctl status tpot`)?
- What ports are being occupied? Stop T-Pot `systemctl stop tpot` and run `netstat -tulpen`
- On Linux: What is the status of the T-Pot service (`systemctl status tpot`)?
- What ports are being occupied? Stop T-Pot `systemctl stop tpot` and run `grc netstat -tulpen`
- Stop T-Pot `systemctl stop tpot`
- Run `grc netstat -tulpen`
- Run T-Pot manually with `docker compose -f ~/tpotce/docker-compose.yml up` and check for errors
- Stop execution with `CTRL-C` and `docker compose -f ~/tpotce/docker-compose.yml down -v`
- If a single container shows as `DOWN` you can run `docker logs <container-name>` for the latest log entries
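The checklist above boils down to a short command sequence; a condensed sketch (paths assume the default `~/tpotce` clone location, `<container-name>` is a placeholder):

```
systemctl stop tpot                                   # stop the T-Pot service first
grc netstat -tulpen                                   # check which ports are still occupied
docker compose -f ~/tpotce/docker-compose.yml up      # run in the foreground and watch for errors
# stop with CTRL-C, then clean up:
docker compose -f ~/tpotce/docker-compose.yml down -v
docker logs <container-name>                          # latest log entries of a single container
```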


@ -1,6 +1,6 @@
---
name: Feature request for T-Pot
about: Suggest an idea for T-Pot
name: Feature request for T-Pot 24.04.x
about: Suggest an idea for T-Pot 24.04.x
title: ''
labels: ''
assignees: ''


@ -1,39 +1,42 @@
---
name: General issue for T-Pot
about: General issue for T-Pot
name: General issue for T-Pot 24.04.x
about: General issue for T-Pot 24.04.x
title: ''
labels: ''
assignees: ''
---
🗨️ Please post your questions in [Discussions](https://github.com/telekom-security/tpotce/discussions) and keep the issues for **issues**. Thank you 😁.<br>
Before you post your issue make sure it has not been answered yet and provide `basic support information` if you come to the conclusion it is a new issue.
# Successfully raise an issue
Before you post your issue make sure it has not been answered yet and provide **⚠️ BASIC SUPPORT INFORMATION** (as requested below) if you come to the conclusion it is a new issue.
- 🔍 Use the [search function](https://github.com/dtag-dev-sec/tpotce/issues?utf8=%E2%9C%93&q=) first
- 🧐 Check our [WIKI](https://github.com/dtag-dev-sec/tpotce/wiki)
- 📚 Consult the documentation of 💻 [Debian](https://www.debian.org/doc/), 🐳 [Docker](https://docs.docker.com/), the 🦌 [ELK stack](https://www.elastic.co/guide/index.html) and the 🍯 [T-Pot Readme](https://github.com/dtag-dev-sec/tpotce/blob/master/README.md).
- **⚠️ Provide [basic support information](#info) or similiar information with regard to your issue or we can not help you and will close the issue without further notice**
- 🧐 Check our [Wiki](https://github.com/dtag-dev-sec/tpotce/wiki) and the [discussions](https://github.com/telekom-security/tpotce/discussions)
- 📚 Consult the documentation of 💻 your Linux OS, 🐳 [Docker](https://docs.docker.com/), the 🦌 [Elastic stack](https://www.elastic.co/guide/index.html) and the 🍯 [T-Pot Readme](https://github.com/dtag-dev-sec/tpotce/blob/master/README.md).
- **⚠️ Provide [BASIC SUPPORT INFORMATION](#-basic-support-information-commands-are-expected-to-run-as-root) or similar detailed information with regard to your issue or we will close the issue or convert it into a discussion without further interaction from the maintainers**.<br>
<br>
<br>
<br>
# ⚠️ Basic support information (commands are expected to run as `root`)
<a name="info"></a>
## ⚠️ Basic support information (commands are expected to run as `root`)
**We happily take the time to improve T-Pot and take care of things, but we need you to take the time to create an issue that provides us with all the information we need.**
- What version of the OS are you currently using `lsb_release -a` and `uname -a`?
- What T-Pot version are you currently using?
- What edition (Standard, Nextgen, etc.) of T-Pot are you running?
- What OS is your T-Pot running on?
- What is the version of the OS `lsb_release -a` and `uname -a`?
- What T-Pot version are you currently using (only **T-Pot 24.04.x** is currently supported)?
- What architecture are you running on (i.e. hardware, cloud, VM, etc.)?
- Did you have any problems during the install? If yes, please attach `/install.log` `/install.err`.
- Review the `~/install_tpot.log`, attach the log and highlight the errors.
- How long has your installation been running?
- If it is a fresh install consult the documentation first.
- Most likely it is a port conflict or a remote dependency was unavailable.
- Retry a fresh installation and only open the issue if the error keeps coming up and is not resolved using the documentation as described [here](#how-to-raise-an-issue).
- Did you install upgrades, packages or use the update script?
- Did you modify any scripts or configs? If yes, please attach the changes.
- Please provide a screenshot of `glances` and `htop`.
- Please provide a screenshot of `htop` and `docker stats`.
- How much free disk space is available (`df -h`)?
- What is the current container status (`dps.sh`)?
- What is the status of the T-Pot service (`systemctl status tpot`)?
- What ports are being occupied? Stop T-Pot `systemctl stop tpot` and run `netstat -tulpen`
- On Linux: What is the status of the T-Pot service (`systemctl status tpot`)?
- What ports are being occupied? Stop T-Pot `systemctl stop tpot` and run `grc netstat -tulpen`
- Stop T-Pot `systemctl stop tpot`
- Run `grc netstat -tulpen`
- Run T-Pot manually with `docker compose -f ~/tpotce/docker-compose.yml up` and check for errors
- Stop execution with `CTRL-C` and `docker compose -f ~/tpotce/docker-compose.yml down -v`
- If a single container shows as `DOWN` you can run `docker logs <container-name>` for the latest log entries

5
.gitignore vendored Normal file

@ -0,0 +1,5 @@
# Ignore data folder
data/
**/.DS_Store
.idea
install_tpot.log


@ -1,45 +1,36 @@
# Release Notes / Changelog
T-Pot 22.04.0 is probably the most feature rich release ever provided with long awaited (wanted!) features readily available after installation.
T-Pot 24.04.0 marks probably the largest change in the history of the project. While most of the changes have been made to the underlying platform, some stand out in particular: a T-Pot ISO image will no longer be provided, with the benefit that T-Pot will now run on multiple Linux distributions (Alma Linux, Debian, Fedora, OpenSuse, Raspbian, Rocky Linux, Ubuntu), Raspberry Pi (optimized) and macOS / Windows (limited).
## New Features
* **Distributed** Installation with **HIVE** and **HIVE_SENSOR**
* **ARM64** support for all provided Docker images
* **GeoIP Attack Map** visualizing Live Attacks on a dedicated webpage
* **Kibana Live Attack Map** visualizing Live Attacks from different **HIVE_SENSORS**
* **Blackhole** is a script trying to avoid mass scanner detection
* **Elasticvue** a web front end for browsing and interacting with an Elasticsearch cluster
* **Ddospot** a honeypot for tracking and monitoring UDP-based Distributed Denial of Service (DDoS) attacks
* **Endlessh** is an SSH tarpit that very slowly sends an endless, random SSH banner
* **HellPot** is an endless honeypot based on Heffalump that sends unruly HTTP bots to hell
* **qHoneypots** 25 honeypots in a single container for monitoring network traffic, bot activities, and username / password credentials
* **Redishoneypot** is a honeypot mimicking some of Redis' functions
* **SentryPeer** a dedicated SIP honeypot
* **Index Lifecycle Management** for Elasticsearch indices is now being used
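As a quick, hedged example for the last point: assuming Elasticsearch is reachable on the usual T-Pot port `64298` and `jq` is installed, the active lifecycle policies can be listed with:

```
curl -s http://127.0.0.1:64298/_ilm/policy | jq 'keys'
```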
## Upgrades
* **Debian 11.x** is now being used for the T-Pot ISO images and required for post installs
* **Elastic Stack 8.x** is now provided as Docker images
* **Distributed** Installation is now using NGINX reverse proxy instead of SSH to transmit **HIVE_SENSOR** logs to **HIVE**
* **`deploy.sh`** will make the deployment of a sensor much easier and will automatically take care of the configuration. You only have to install the T-Pot sensor.
* **T-Pot Init** is the foundation for running T-Pot on multiple Linux distributions and will also ensure that containers with failed healthchecks are restarted using **autoheal**
* **T-Pot Installer** is now mostly Ansible based providing a universal playbook for the most common Linux distributions
* **T-Pot Uninstaller** allows you to uninstall T-Pot; while not recommended for general usage, it comes in handy for testing purposes
* **T-Pot Customizer (`compose/customizer.py`)** is here to assist you in the creation of a customized `docker-compose.yml`
* **T-Pot Landing Page** has been redesigned and simplified
![T-Pot-WebUI](doc/tpotwebui.png)
* **Kibana Dashboards, Objects** fully refreshed in favor of Lens based objects
![Dashboard](doc/kibana_a.png)
* **Wordpot** is a new addition to the available honeypots within T-Pot and will run on `tcp/8080` by default.
* **Raspberry Pi** is now supported using a dedicated `mobile.yml` (why this is called mobile will be revealed soon!)
* **GeoIP Attack Map** is now aware of connects / disconnects and thus eliminates required reloads
* **Docker**, where possible, will now be installed directly from the Docker repositories to avoid any incompatibilities
* **`.env`** now provides a single configuration file for the T-Pot related settings
* **`genuser.sh`** can now be used to add new users to the T-Pot Landing Page as part of the T-Pot configuration file (`.env`)
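`deploy.sh` automates the sensor configuration; as a manual, minimal sensor-side sketch following the `TPOT_TYPE` / `TPOT_HIVE_*` comments in the `.env` shown earlier (host names and credentials are placeholders):

```
cd ~/tpotce
cp compose/sensor.yml ./docker-compose.yml                                        # 1. use the sensor compose file
scp root@<hive-host>:~/tpotce/data/nginx/cert/nginx.crt ~/tpotce/data/hive.crt    # 2. trust the HIVE certificate
# 3. on the HIVE: htpasswd ~/tpotce/data/nginx/conf/lswebpasswd <username>
TPOT_HIVE_USER=$(echo -n '<username>:<password>' | base64 -w0)                    # 4. credentials from step 3
sed -i "s|^TPOT_TYPE=.*|TPOT_TYPE=SENSOR|" .env
sed -i "s|^TPOT_HIVE_USER=.*|TPOT_HIVE_USER=${TPOT_HIVE_USER}|" .env
sed -i "s|^TPOT_HIVE_IP=.*|TPOT_HIVE_IP=<hive-ip>|" .env
```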
## Updates
* **Honeypots** and **tools** were updated to their latest masters and releases
* **Honeypots** and **tools** were updated to their latest pushed code and / or releases
* Where possible Docker Images will now use Alpine 3.19
* Updates will be provided continuously through Docker Images updates
## Breaking Changes
* For security reasons all Py2.x honeypots requiring PyPi packages have been removed: **HoneyPy**, **HoneySAP** and **RDPY**
* If you are upgrading from a previous version of T-Pot (20.06.x) you need to import the new Kibana objects or some of the functionality will be broken or unavailable
* **Cyberchef** is now part of the Nginx Docker image, no longer as individual image
* **ElasticSearch Head** is superseded by **Elasticvue** and is now part of the Nginx Docker image
* **Heimdall** is no longer supported and has been superseded by a new Bento based landing page
* **Elasticsearch Curator** is no longer supported and has been superseded by **Index Lifecycle Policies** available through Kibana.
* There is no option to migrate a previous installation to T-Pot 24.04.0. You can try to transfer the old `data` folder to the new T-Pot installation, but a working environment depends on too many factors outside of our control, and a new installation is simply faster.
* Most of the support scripts were moved into the **T-Pot Init** image and are no longer available directly on the host.
* Cockpit is no longer available as part of T-Pot itself. However, where supported, you can simply install the `cockpit` package.
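As a hedged example for the last point, on a Debian or Ubuntu based host this would simply be:

```
sudo apt install cockpit
```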
# Thanks & Credits
* @ghenry, for some fun late night debugging and of course SentryPeer!
* @giga-a, for adding much appreciated features (i.e. JSON logging, X-Forwarded-For, etc.) and of course qHoneypots!
* @sp3t3rs, @trixam, for their backend and ews support!
* @tadashi-oya, for spotting some errors and proposing fixes!
* @tmariuss, @shaderecker for their cloud contributions!
* @vorband, for much appreciated and helpful insights regarding the GeoIP Attack Map!
* @yunginnanet, on not giving up on squashing a bug and of course Hellpot!
* @shark4ce for taking the time to test, debug and offer a solution #1472.
... and many others from the T-Pot community by opening valued issues and discussions, suggesting ideas and thus helping to improve T-Pot!

43
CITATION.cff Normal file

@ -0,0 +1,43 @@
# This CITATION.cff file was generated with cffinit.
# Visit https://bit.ly/cffinit to generate yours today!
cff-version: 1.2.0
title: T-Pot 24.04.0
message: >-
If you use this software, please cite it using the
metadata from this file.
type: software
authors:
- name: Deutsche Telekom Security GmbH
address: Bonner Talweg 100
city: Bonn
country: DE
post-code: '53113'
website: 'https://github.com/telekom-security'
- given-names: Marco
family-names: Ochse
affiliation: Deutsche Telekom Security GmbH
identifiers:
- type: url
value: >-
https://github.com/telekom-security/tpotce/releases/tag/24.04.0
description: T-Pot Release 24.04.0
repository-code: 'https://github.com/telekom-security/tpotce'
abstract: >-
T-Pot is the all in one, optionally distributed, multiarch
(amd64, arm64) honeypot platform, supporting 20+
honeypots and countless visualization options using the
Elastic Stack, animated live attack maps and lots of
security tools to further improve the deception
experience.
keywords:
- honeypot
- deception
- t-pot
- telekom security
- docker
- elk
license: GPL-3.0
commit: release
version: 24.04.0
date-released: '2024-04-22'

903
README.md

File diff suppressed because it is too large

23
SECURITY.md Normal file

@ -0,0 +1,23 @@
# Security Policy
## Supported Versions
| Version | Supported |
|-------|--------------------|
| 24.04 | :white_check_mark: |
## Reporting a Vulnerability
We prioritize the security of T-Pot highly. Often, vulnerabilities in T-Pot components stem from upstream dependencies, including honeypots, Docker images, tools, or packages. We are committed to working together to resolve any issues effectively.
Please follow these steps before reporting a potential vulnerability:
1. Verify that the behavior you've observed isn't already documented as a normal aspect or unrelated issue of T-Pot. For example, Cowrie may initiate outgoing connections, or T-Pot might open all possible TCP ports—a feature enabled by Honeytrap.
2. Clearly identify which component is vulnerable (e.g., a specific honeypot, Docker image, tool, package) and isolate the issue.
3. Provide a detailed description of the issue, including log and, if available, debug files. Include all steps necessary to reproduce the vulnerability. If you have a proposed solution, hotfix, or patch, please be prepared to submit a pull request (PR).
4. Check whether the vulnerability is already known upstream. If there is an existing fix or patch, include that information in your report.
This approach ensures a thorough and efficient resolution process.
We aim to respond as quickly as possible. If you believe the issue poses an immediate threat to the entire T-Pot community, you can expedite the process by responsibly alerting our [CERT](https://www.telekom.com/en/corporate-responsibility/data-protection-data-security/security/details/introducing-deutsche-telekom-cert-358316).


@ -1,77 +0,0 @@
#!/bin/bash
# Make sure script is started as non-root.
myWHOAMI=$(whoami)
if [ "$myWHOAMI" = "root" ]
then
echo "Need to run as non-root ..."
echo ""
exit
fi
# set vars, check deps
myPAM_COCKPIT_FILE="/etc/pam.d/cockpit"
if ! [ -s "$myPAM_COCKPIT_FILE" ];
then
echo "### Cockpit PAM module config does not exist. Something went wrong."
echo ""
exit 1
fi
myPAM_COCKPIT_GA="
# google authenticator for two-factor
auth required pam_google_authenticator.so
"
myAUTHENTICATOR=$(which google-authenticator)
if [ "$myAUTHENTICATOR" == "" ];
then
echo "### Could not locate google-authenticator, trying to install (if asked provide root password)."
echo ""
sudo apt-get update
sudo apt-get install -y libpam-google-authenticator
exec "$1" "$2"
exit 1
fi
# write PAM changes
function fuWRITE_PAM_CHANGES {
myCHECK=$(cat $myPAM_COCKPIT_FILE | grep -c "google")
if ! [ "$myCHECK" == "0" ];
then
echo "### PAM config already enabled. Skipped."
echo ""
else
echo "### Updating PAM config for Cockpit (if asked provide root password)."
echo "$myPAM_COCKPIT_GA" | sudo tee -a $myPAM_COCKPIT_FILE
sudo systemctl restart cockpit
fi
}
# create 2fa
function fuGEN_TOKEN {
echo "### Now generating token for Google Authenticator."
echo ""
google-authenticator -t -d -r 3 -R 30 -w 17
}
# main
echo "### This script will enable Two Factor Authentication for Cockpit."
echo ""
echo "### Please download one of the many authenticator apps from the appstore of your choice."
echo ""
while true;
do
read -p "### Ready to start (y/n)? " myANSWER
case $myANSWER in
[Yy]* ) echo "### OK. Starting ..."; break;;
[Nn]* ) echo "### Exiting."; exit;;
esac
done
fuWRITE_PAM_CHANGES
fuGEN_TOKEN
echo "Done. Re-run this script by every user who needs Cockpit access."
echo ""


@ -1,61 +0,0 @@
#!/bin/bash
# Run as root only.
myWHOAMI=$(whoami)
if [ "$myWHOAMI" != "root" ];
then
echo "Need to run as root ..."
exit
fi
if [ "$1" == "" ] || [ "$1" != "all" ] && [ "$1" != "base" ];
then
echo "Usage: backup_es_folders [all, base]"
echo " all = backup all ES folder"
echo " base = backup only Kibana index".
echo
exit
fi
# Backup all ES relevant folders
# Make sure ES is available
myES="http://127.0.0.1:64298/"
myESSTATUS=$(curl -s -XGET ''$myES'_cluster/health' | jq '.' | grep -c green)
if ! [ "$myESSTATUS" = "1" ]
then
echo "### Elasticsearch is not available, try starting via 'systemctl start tpot'."
exit
else
echo "### Elasticsearch is available, now continuing."
echo
fi
# Set vars
myCOUNT=1
myDATE=$(date +%Y%m%d%H%M)
myELKPATH="/data/elk/data"
myKIBANAINDEXNAME=$(curl -s -XGET ''$myES'_cat/indices/.kibana' | awk '{ print $4 }')
myKIBANAINDEXPATH=$myELKPATH/indices/$myKIBANAINDEXNAME
# Let's ensure normal operation on exit or if interrupted ...
function fuCLEANUP {
### Start ELK
systemctl start tpot
echo "### Now starting T-Pot ..."
}
trap fuCLEANUP EXIT
# Stop T-Pot to lift db lock
echo "### Now stopping T-Pot"
systemctl stop tpot
sleep 2
# Backup DB in 2 flavors
echo "### Now backing up Elasticsearch folders ..."
if [ "$1" == "all" ];
then
tar cvfz "elkall_"$myDATE".tgz" $myELKPATH
elif [ "$1" == "base" ];
then
tar cvfz "elkbase_"$myDATE".tgz" $myKIBANAINDEXPATH
fi


@ -1,73 +0,0 @@
#!/bin/bash
# Run as root only.
myWHOAMI=$(whoami)
if [ "$myWHOAMI" != "root" ]
then
echo "Need to run as root ..."
exit
fi
myPARAM="$1"
if [[ $myPARAM =~ ^([1-9]|[1-9][0-9]|[1-9][0-9][0-9])$ ]];
then
watch --color -n $myPARAM "dps.sh"
exit
fi
# Show current status of T-Pot containers
myCONTAINERS="$(cat /opt/tpot/etc/tpot.yml | grep -v '#' | grep container_name | cut -d: -f2 | sort | tr -d " ")"
myRED=""
myGREEN=""
myBLUE=""
myWHITE=""
myMAGENTA=""
# Blackhole Status
myBLACKHOLE_STATUS=$(ip r | grep "blackhole" -c)
if [ "$myBLACKHOLE_STATUS" -gt "500" ];
then
myBLACKHOLE_STATUS="${myGREEN}ENABLED"
else
myBLACKHOLE_STATUS="${myRED}DISABLED"
fi
function fuGETTPOT_STATUS {
# T-Pot Status
myTPOT_STATUS=$(systemctl status tpot | grep "Active" | awk '{ print $2 }')
if [ "$myTPOT_STATUS" == "active" ];
then
echo "${myGREEN}ACTIVE"
else
echo "${myRED}INACTIVE"
fi
}
function fuGETSTATUS {
grc --colour=on docker ps -f status=running -f status=exited --format "table {{.Names}}\t{{.Status}}\t{{.Ports}}" | grep -v "NAME" | sort
}
function fuGETSYS {
printf "[ ========| System |======== ]\n"
printf "${myBLUE}%+11s ${myWHITE}%-20s\n" "DATE: " "$(date)"
printf "${myBLUE}%+11s ${myWHITE}%-20s\n" "UPTIME: " "$(grc --colour=on uptime)"
printf "${myMAGENTA}%+11s %-20s\n" "T-POT: " "$(fuGETTPOT_STATUS)"
printf "${myMAGENTA}%+11s %-20s\n" "BLACKHOLE: " "$myBLACKHOLE_STATUS${myWHITE}"
echo
}
myDPS=$(fuGETSTATUS)
myDPSNAMES=$(echo "$myDPS" | awk '{ print $1 }' | sort)
fuGETSYS
printf "%-21s %-28s %s\n" "NAME" "STATUS" "PORTS"
if [ "$myDPS" != "" ];
then
echo "$myDPS"
fi
for i in $myCONTAINERS; do
myAVAIL=$(echo "$myDPSNAMES" | grep -o "$i" | uniq | wc -l)
if [ "$myAVAIL" = "0" ];
then
printf "%-28s %-28s\n" "$myRED$i" "DOWN$myWHITE"
fi
done


@ -1,29 +0,0 @@
#!/bin/bash
# T-Pot Compose and Container Cleaner
# Set colors
myRED=""
myGREEN=""
myWHITE=""
# Only run with command switch
if [ "$1" != "-y" ]; then
echo $myRED"### WARNING"$myWHITE
echo ""
echo $myRED"###### This script is only intended for the tpot.service."$myWHITE
echo $myRED"###### Run <systemctl stop tpot> first and then <tpdclean.sh -y>."$myWHITE
echo $myRED"###### Be aware, all T-Pot container volumes and images will be removed."$myWHITE
echo ""
echo $myRED"### WARNING "$myWHITE
echo
exit
fi
# Remove old containers, images and volumes
docker-compose -f /opt/tpot/etc/tpot.yml down -v >> /dev/null 2>&1
docker-compose -f /opt/tpot/etc/tpot.yml rm -v >> /dev/null 2>&1
docker network rm $(docker network ls -q) >> /dev/null 2>&1
docker volume rm $(docker volume ls -q) >> /dev/null 2>&1
docker rm -v $(docker ps -aq) >> /dev/null 2>&1
docker rmi $(docker images | grep "<none>" | awk '{print $3}') >> /dev/null 2>&1
docker rmi $(docker images | grep "2203" | awk '{print $3}') >> /dev/null 2>&1
exit 0


@ -1,56 +0,0 @@
#!/bin/bash
# Run as root only.
myWHOAMI=$(whoami)
if [ "$myWHOAMI" != "root" ]
then
echo "Need to run as root ..."
exit
fi
# set backtitle, get filename
myBACKTITLE="T-Pot Edition Selection Tool"
myYMLS=$(cd /opt/tpot/etc/compose/ && ls -1 *.yml)
myLINK="/opt/tpot/etc/tpot.yml"
# Let's load docker images in parallel
function fuPULLIMAGES {
local myTPOTCOMPOSE="/opt/tpot/etc/tpot.yml"
for name in $(cat $myTPOTCOMPOSE | grep -v '#' | grep image | cut -d'"' -f2 | uniq)
do
docker pull $name &
done
wait
echo
}
# setup menu
for i in $myYMLS;
do
myITEMS+="$i $(echo $i | cut -d "." -f1 | tr [:lower:] [:upper:]) "
done
myEDITION=$(dialog --backtitle "$myBACKTITLE" --menu "Select T-Pot Edition" 18 50 1 $myITEMS 3>&1 1>&2 2>&3 3>&-)
if [ "$myEDITION" == "" ];
then
echo "Have a nice day!"
exit
fi
dialog --backtitle "$myBACKTITLE" --title "[ Activate now? ]" --yesno "\n$myEDITION" 7 50
myOK=$?
if [ "$myOK" == "0" ];
then
echo "OK - Activating and downloading latest images."
systemctl stop tpot
if [ "$(docker ps -aq)" != "" ];
then
docker stop $(docker ps -aq)
docker rm $(docker ps -aq)
fi
rm -f $myLINK
ln -s /opt/tpot/etc/compose/$myEDITION $myLINK
fuPULLIMAGES
systemctl start tpot
echo "Done. Use \"dps.sh\" for monitoring"
else
echo "Have a nice day!"
fi


@ -1,89 +0,0 @@
#!/bin/bash
# Let's add the first local ip to the /etc/issue and external ip to ews.ip file
# If the external IP cannot be detected, the internal IP will be inherited.
source /etc/environment
myCHECKIFSENSOR=$(head -n 1 /opt/tpot/etc/tpot.yml | grep "Sensor" | wc -l)
myUUID=$(lsblk -o MOUNTPOINT,UUID | grep "/" | awk '{ print $2 }')
myLOCALIP=$(hostname -I | awk '{ print $1 }')
myEXTIP=$(/opt/tpot/bin/myip.sh)
if [ "$myEXTIP" = "" ];
then
myEXTIP=$myLOCALIP
myEXTIP_LAT="49.865835022498125"
myEXTIP_LONG="8.62606472775735"
else
myEXTIP_LOC=$(curl -s ipinfo.io/$myEXTIP/loc)
myEXTIP_LAT=$(echo "$myEXTIP_LOC" | cut -f1 -d",")
myEXTIP_LONG=$(echo "$myEXTIP_LOC" | cut -f2 -d",")
fi
# Load Blackhole routes if enabled
myBLACKHOLE_FILE1="/etc/blackhole/mass_scanner.txt"
myBLACKHOLE_FILE2="/etc/blackhole/mass_scanner_cidr.txt"
if [ -f "$myBLACKHOLE_FILE1" ] || [ -f "$myBLACKHOLE_FILE2" ];
then
/opt/tpot/bin/blackhole.sh add
fi
myBLACKHOLE_STATUS=$(ip r | grep "blackhole" -c)
if [ "$myBLACKHOLE_STATUS" -gt "500" ];
then
myBLACKHOLE_STATUS="| BLACKHOLE: [ ENABLED ]"
else
myBLACKHOLE_STATUS="| BLACKHOLE: [ DISABLED ]"
fi
mySSHUSER=$(cat /etc/passwd | grep 1000 | cut -d ':' -f1)
# Export
export myUUID
export myLOCALIP
export myEXTIP
export myEXTIP_LAT
export myEXTIP_LONG
export myBLACKHOLE_STATUS
export mySSHUSER
# Build issue
echo "" > /etc/issue
toilet -f ivrit -F metal --filter border:metal "T-Pot 22.04" | sed 's/\\/\\\\/g' >> /etc/issue
echo >> /etc/issue
echo ",---- [ \n ] [ \d ] [ \t ]" >> /etc/issue
echo "|" >> /etc/issue
echo "| IP: $myLOCALIP ($myEXTIP)" >> /etc/issue
echo "| SSH: ssh -l tsec -p 64295 $myLOCALIP" >> /etc/issue
if [ "$myCHECKIFSENSOR" == "0" ];
then
echo "| WEB: https://$myLOCALIP:64297" >> /etc/issue
fi
echo "| ADMIN: https://$myLOCALIP:64294" >> /etc/issue
echo "$myBLACKHOLE_STATUS" >> /etc/issue
echo "|" >> /etc/issue
echo "\`----" >> /etc/issue
echo >> /etc/issue
tee /data/ews/conf/ews.ip << EOF
[MAIN]
ip = $myEXTIP
EOF
tee /opt/tpot/etc/compose/elk_environment << EOF
HONEY_UUID=$myUUID
MY_EXTIP=$myEXTIP
MY_EXTIP_LAT=$myEXTIP_LAT
MY_EXTIP_LONG=$myEXTIP_LONG
MY_INTIP=$myLOCALIP
MY_HOSTNAME=$HOSTNAME
EOF
if [ -s "/data/elk/logstash/ls_environment" ];
then
source /data/elk/logstash/ls_environment
tee -a /opt/tpot/etc/compose/elk_environment << EOF
MY_TPOT_TYPE=$MY_TPOT_TYPE
MY_SENSOR_PRIVATEKEYFILE=$MY_SENSOR_PRIVATEKEYFILE
MY_HIVE_USERNAME=$MY_HIVE_USERNAME
MY_HIVE_IP=$MY_HIVE_IP
EOF
fi
chown tpot:tpot /data/ews/conf/ews.ip
chmod 770 /data/ews/conf/ews.ip

10
cloud/.gitignore vendored

@ -1,10 +0,0 @@
# Ansible
*.retry
# Terraform
**/.terraform
**/terraform.*
# OpenStack clouds
**/clouds.yaml
**/secure.yaml


@ -1,257 +0,0 @@
# T-Pot Ansible
Here you can find a ready-to-use solution for your automated T-Pot deployment using [Ansible](https://www.ansible.com/).
It consists of an Ansible Playbook with multiple roles, which is reusable for all [OpenStack](https://www.openstack.org/) based clouds (e.g. Open Telekom Cloud, Orange Cloud, Telefonica Open Cloud, OVH) out of the box.
Apart from that you can easily adapt the deploy role to use other [cloud providers](https://docs.ansible.com/ansible/latest/scenario_guides/cloud_guides.html). Check out [Ansible Galaxy](https://galaxy.ansible.com/search?keywords=&order_by=-relevance&page=1&deprecated=false&type=collection&tags=cloud) for more cloud collections.
The Playbook first creates all resources (security group, network, subnet, router), deploys one (or more) new servers and then installs and configures T-Pot on them.
This example showcases the deployment on our own OpenStack based Public Cloud Offering [Open Telekom Cloud](https://open-telekom-cloud.com/en).
# Table of contents
- [Preparation of Ansible Master](#ansible-master)
- [Ansible Installation](#ansible)
- [OpenStack Collection Installation](#collection)
- [Agent Forwarding](#agent-forwarding)
- [Preparations in Open Telekom Cloud Console](#preparation)
- [Create new project](#project)
- [Create API user](#api-user)
- [Import Key Pair](#key-pair)
- [Clone Git Repository](#clone-git)
- [Settings and recommended values](#settings)
- [clouds.yaml](#clouds-yaml)
- [Ansible remote user](#remote-user)
- [Number of instances to deploy](#number)
- [Instance settings](#instance-settings)
- [User password](#user-password)
- [Configure `tpot.conf.dist`](#tpot-conf)
- [Optional: Custom `ews.cfg`](#ews-cfg)
- [Optional: Custom HPFEEDS](#hpfeeds)
- [Deploying a T-Pot](#deploy)
- [Further documentation](#documentation)
<a name="ansible-master"></a>
# Preparation of Ansible Master
You can either run the Ansible Playbook locally on your Linux or macOS machine or you can use an ECS (Elastic Cloud Server) on Open Telekom Cloud, which I did.
I used Ubuntu 18.04 for my Ansible Master Server, but other OSes are fine too.
Ansible works over the SSH Port, so you don't have to add any special rules to your Security Group.
<a name="ansible"></a>
## Ansible Installation
:warning: Ansible 2.10 or newer is required!
Example for Ubuntu 18.04:
First, we update the system:
`sudo apt update`
`sudo apt dist-upgrade`
Then we need to add the repository and install Ansible:
`sudo apt-add-repository --yes --update ppa:ansible/ansible`
`sudo apt install ansible`
For other OSes and Distros have a look at the official [Ansible Documentation](https://docs.ansible.com/ansible/latest/installation_guide/intro_installation.html).
If your OS does not offer a recent version of Ansible (>= 2.10) you should consider [installing Ansible with pip](https://docs.ansible.com/ansible/latest/installation_guide/intro_installation.html#installing-ansible-with-pip).
In short (if you already have Python3/pip3 installed):
```
pip3 install ansible
```
<a name="collection"></a>
## OpenStack Collection Installation
For interacting with OpenStack resources in Ansible, you need to install the collection from Ansible Galaxy:
`ansible-galaxy collection install openstack.cloud`
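As a quick optional check that the collection is now available, you can list the installed collections:
```
ansible-galaxy collection list | grep openstack.cloud
```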
<a name="agent-forwarding"></a>
## Agent Forwarding
If you run the Ansible Playbook remotely on your Ansible Master Server, Agent Forwarding must be enabled in order to let Ansible connect to newly created machines.
- On Linux or macOS:
- Create or edit `~/.ssh/config`
```
Host ANSIBLE_MASTER_IP
ForwardAgent yes
```
- On Windows using Putty:
![Putty Agent Forwarding](doc/putty_agent_forwarding.png)
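To verify that forwarding actually works before running the Playbook, a quick check (assuming your key is loaded into a local `ssh-agent`) could look like this:
```
# on your local machine: list the keys currently held by the agent
ssh-add -l
# log in to the Ansible Master (ForwardAgent is picked up from ~/.ssh/config)
ssh ANSIBLE_MASTER_IP
# on the Ansible Master: a non-empty socket path means forwarding is active
echo $SSH_AUTH_SOCK
```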
<a name="preparation"></a>
# Preparations in Open Telekom Cloud Console
(You can skip this if you have already set up a project and an API account with key pair)
(Just make sure you know the naming for everything, as you need to configure the Ansible variables.)
Before we can start deploying, we have to prepare the Open Telekom Cloud tenant.
For that, go to the [Web Console](https://auth.otc.t-systems.com/authui/login) and log in with an admin user.
<a name="project"></a>
## Create new project
I strongly advise you to create a separate project for the T-Pots in your tenant.
In my case I named it `tpot`.
![Create new project](doc/otc_1_project.gif)
<a name="api-user"></a>
## Create API user
The next step is to create a new user account, which is restricted to the project.
This ensures that the API access is limited to that project.
![Create API user](doc/otc_2_user.gif)
<a name="key-pair"></a>
## Import Key Pair
:warning: Now log in with the newly created API user account and select your project.
![Login as API user](doc/otc_3_login.gif)
Import your SSH public key.
![Import SSH Public Key](doc/otc_4_import_key.gif)
<a name="clone-git"></a>
# Clone Git Repository
Clone the `tpotce` repository to your Ansible Master:
`git clone https://github.com/telekom-security/tpotce.git`
All Ansible related files are located in the [`cloud/ansible/openstack`](openstack) folder.
<a name="settings"></a>
# Settings and recommended values
You can configure all aspects of your Elastic Cloud Server and T-Pot before using the Playbook:
<a name="clouds-yaml"></a>
## clouds.yaml
Located at [`openstack/clouds.yaml`](openstack/clouds.yaml).
Enter your Open Telekom Cloud API user credentials here (username, password, project name, user domain name):
```
clouds:
open-telekom-cloud:
profile: otc
auth:
project_name: eu-de_your_project
username: your_api_user
password: your_password
user_domain_name: OTC-EU-DE-000000000010000XXXXX
```
You can also use other authentication methods, such as sourcing OpenStack `OS_*` environment variables or providing an inline dictionary.
For more information have a look at the [openstack.cloud.server](https://docs.ansible.com/ansible/latest/collections/openstack/cloud/server_module.html) Ansible module documentation.
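For illustration only, authenticating via the standard OpenStack `OS_*` environment variables instead of `clouds.yaml` might look like this (placeholder values, auth URL as used for Open Telekom Cloud):
```
export OS_AUTH_URL=https://iam.eu-de.otc.t-systems.com/v3
export OS_PROJECT_NAME=eu-de_your_project
export OS_USERNAME=your_api_user
export OS_PASSWORD=your_password
export OS_USER_DOMAIN_NAME=OTC-EU-DE-000000000010000XXXXX
```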
If you already have your own `clouds.yaml` file or have multiple clouds in there, you can specify which one to use in the `openstack/my_os_cloud.yaml` file:
```
# Enter the name of your cloud to use from clouds.yaml
cloud: open-telekom-cloud
```
<a name="remote-user"></a>
## Ansible remote user
You may have to adjust the `remote_user` in the Ansible Playbook under [`openstack/deploy_tpot.yaml`](openstack/deploy_tpot.yaml) depending on your Debian base image (e.g. on Open Telekom Cloud the default Debian user is `linux`).
<a name="number"></a>
## Number of instances to deploy
You can adjust the number of VMs/T-Pots that you want to create in [`openstack/deploy_tpot.yaml`](openstack/deploy_tpot.yaml):
```
loop: "{{ range(0, 1) }}"
```
One instance is set as the default; increase it to your liking.
<a name="instance-settings"></a>
## Instance settings
Located at [`openstack/roles/create_vm/vars/main.yaml`](openstack/roles/create_vm/vars/main.yaml).
Here you can customize your virtual machine specifications:
- Choose an availability zone. For Open Telekom Cloud reference see [here](https://docs.otc.t-systems.com/en-us/endpoint/index.html).
- Change the OS image (For T-Pot we need Debian)
- (Optional) Change the volume size
- Specify your key pair (:warning: Mandatory)
- (Optional) Change the instance type (flavor)
`s3.medium.8` corresponds to 1 vCPU and 8GB of RAM and is the minimum required flavor.
A full list of Open Telekom Cloud flavors can be found [here](https://docs.otc.t-systems.com/en-us/usermanual/ecs/en-us_topic_0177512565.html).
```
availability_zone: eu-de-03
image: Standard_Debian_10_latest
volume_size: 128
key_name: your-KeyPair
flavor: s3.medium.8
```
<a name="user-password"></a>
## User password
Located at [`openstack/roles/install/vars/main.yaml`](openstack/roles/install/vars/main.yaml).
Here you can set the password for your Debian user (**you should definitely change that**).
```
user_password: LiNuXuSeRPaSs#
```
<a name="tpot-conf"></a>
## Configure `tpot.conf.dist`
The file is located in [`iso/installer/tpot.conf.dist`](/iso/installer/tpot.conf.dist).
Here you can choose:
- between the various T-Pot editions
- a username for the web interface
- a password for the web interface (**you should definitely change that**)
<a name="ews-cfg"></a>
## Optional: Custom `ews.cfg`
Enable this by uncommenting the role in the [deploy_tpot.yaml](openstack/deploy_tpot.yaml) playbook.
```
# - custom_ews
```
You can use a custom config file for `ewsposter`, e.g. when you have your own credentials for delivering data to our [Sicherheitstacho](https://sicherheitstacho.eu/start/main).
You can find the `ews.cfg` template file here: [`openstack/roles/custom_ews/templates/ews.cfg`](openstack/roles/custom_ews/templates/ews.cfg) and adapt it for your needs.
For setting custom credentials, these settings would be relevant for you (the rest of the file can stay as is):
```
[MAIN]
...
contact = your_email_address
...
[EWS]
...
username = your_username
token = your_token
...
```
<a name="hpfeeds"></a>
## Optional: Custom HPFEEDS
Enable this by uncommenting the role in the [deploy_tpot.yaml](openstack/deploy_tpot.yaml) playbook.
```
# - custom_hpfeeds
```
You can specify custom HPFEEDS in [`openstack/roles/custom_hpfeeds/files/hpfeeds.cfg`](openstack/roles/custom_hpfeeds/files/hpfeeds.cfg).
That file contains the defaults (turned off) and you can adapt it for your needs, e.g. for SISSDEN:
```
myENABLE=true
myHOST=hpfeeds.sissden.eu
myPORT=10000
myCHANNEL=t-pot.events
myCERT=/opt/ewsposter/sissden.pem
myIDENT=your_user
mySECRET=your_secret
myFORMAT=json
```
<a name="deploy"></a>
# Deploying a T-Pot :honey_pot::honeybee:
Now, after configuring everything, we can finally start deploying T-Pots!
Go to the [`openstack`](openstack) folder and run the Ansible Playbook with:
`ansible-playbook deploy_tpot.yaml`
(Yes, it is as easy as that :smile:)
If you are running on a machine which asks for a sudo password, you can use:
`ansible-playbook --ask-become-pass deploy_tpot.yaml`
The Playbook will first install required packages on the Ansible Master and then deploy one (or more) new server instances.
After that, T-Pot is installed and configured on them, custom configs are optionally applied, and finally the instances reboot.
Once this is done, you can proceed with connecting/logging in to the T-Pot according to the [documentation](https://github.com/telekom-security/tpotce#ssh-and-web-access).
<a name="documentation"></a>
# Further documentation
- [Ansible Documentation](https://docs.ansible.com/ansible/latest/)
- [openstack.cloud.server Create/Delete Compute Instances from OpenStack](https://docs.ansible.com/ansible/latest/collections/openstack/cloud/server_module.html)
- [Open Telekom Cloud Help Center](https://docs.otc.t-systems.com/)

Binary files not shown (5 image files under doc/ removed).

View File

@ -1,6 +0,0 @@
[defaults]
host_key_checking = false
[ssh_connection]
scp_if_ssh = true
ssh_args = -o ServerAliveInterval=60

View File

@ -1,9 +0,0 @@
clouds:
open-telekom-cloud:
profile: otc
region_name: eu-de
auth:
project_name: eu-de_your_project
username: your_api_user
password: your_password
user_domain_name: OTC-EU-DE-000000000010000XXXXX

View File

@ -1,30 +0,0 @@
- name: Check host prerequisites
hosts: localhost
become: yes
roles:
- check
- name: Deploy instances
hosts: localhost
vars_files: my_os_cloud.yaml
tasks:
- name: Create security group and network
ansible.builtin.include_role:
name: create_net
- name: Create one or more instances
ansible.builtin.include_role:
name: create_vm
loop: "{{ range(0, 1) }}"
loop_control:
extended: yes
- name: Install T-Pot
hosts: tpot
remote_user: linux
become: yes
gather_facts: no
roles:
- install
# - custom_ews
# - custom_hpfeeds
- reboot

View File

@ -1,2 +0,0 @@
# Enter the name of your cloud to use from clouds.yaml
cloud: open-telekom-cloud

View File

@ -1,2 +0,0 @@
collections:
- name: openstack.cloud

View File

@ -1,19 +0,0 @@
- name: Install dependencies
ansible.builtin.package:
name:
- gcc
- python3-dev
- python3-setuptools
- python3-pip
state: present
- name: Install openstacksdk
ansible.builtin.pip:
name: openstacksdk
executable: pip3
- name: Check if agent forwarding is enabled
ansible.builtin.fail:
msg: Please enable agent forwarding to allow Ansible to connect to the remote host!
ignore_errors: yes
failed_when: lookup('env','SSH_AUTH_SOCK') == ""

View File

@ -1,33 +0,0 @@
- name: Create security group
openstack.cloud.security_group:
cloud: "{{ cloud }}"
name: sg-tpot-ansible
description: Security Group for T-Pot
- name: Add rules to security group
openstack.cloud.security_group_rule:
cloud: "{{ cloud }}"
security_group: sg-tpot-ansible
remote_ip_prefix: 0.0.0.0/0
- name: Create network
openstack.cloud.network:
cloud: "{{ cloud }}"
name: network-tpot-ansible
- name: Create subnet
openstack.cloud.subnet:
cloud: "{{ cloud }}"
network_name: network-tpot-ansible
name: subnet-tpot-ansible
cidr: 192.168.0.0/24
dns_nameservers:
- 100.125.4.25
- 100.125.129.199
- name: Create router
openstack.cloud.router:
cloud: "{{ cloud }}"
name: router-tpot-ansible
interfaces:
- subnet-tpot-ansible

View File

@ -1,24 +0,0 @@
- name: Generate T-Pot name
ansible.builtin.set_fact:
tpot_name: "t-pot-ansible-{{ lookup('password', '/dev/null chars=ascii_lowercase,digits length=6') }}"
- name: Create instance {{ ansible_loop.index }} of {{ ansible_loop.length }}
openstack.cloud.server:
cloud: "{{ cloud }}"
name: "{{ tpot_name }}"
availability_zone: "{{ availability_zone }}"
image: "{{ image }}"
boot_from_volume: yes
volume_size: "{{ volume_size }}"
key_name: "{{ key_name }}"
auto_ip: yes
flavor: "{{ flavor }}"
security_groups: sg-tpot-ansible
network: network-tpot-ansible
register: tpot
- name: Add instance to inventory
ansible.builtin.add_host:
hostname: "{{ tpot_name }}"
ansible_host: "{{ tpot.server.public_v4 }}"
groups: tpot

View File

@ -1,5 +0,0 @@
availability_zone: eu-de-03
image: Standard_Debian_10_latest
volume_size: 128
key_name: your-KeyPair
flavor: s3.medium.8

View File

@ -1,13 +0,0 @@
- name: Copy ews configuration file
ansible.builtin.template:
src: ews.cfg
dest: /data/ews/conf
owner: root
group: root
mode: 0644
- name: Patching tpot.yml with custom ews configuration file
ansible.builtin.lineinfile:
path: /opt/tpot/etc/tpot.yml
insertafter: "/opt/ewsposter/ews.ip"
line: " - /data/ews/conf/ews.cfg:/opt/ewsposter/ews.cfg"

View File

@ -1,137 +0,0 @@
[MAIN]
homedir = /opt/ewsposter/
spooldir = /opt/ewsposter/spool/
logdir = /opt/ewsposter/log/
del_malware_after_send = false
send_malware = true
sendlimit = 500
contact = your_email_address
proxy =
ip =
[EWS]
ews = true
username = your_username
token = your_token
rhost_first = https://community.sicherheitstacho.eu/ews-0.1/alert/postSimpleMessage
rhost_second = https://community.sicherheitstacho.eu/ews-0.1/alert/postSimpleMessage
ignorecert = false
[HPFEED]
hpfeed = %(EWS_HPFEEDS_ENABLE)s
host = %(EWS_HPFEEDS_HOST)s
port = %(EWS_HPFEEDS_PORT)s
channels = %(EWS_HPFEEDS_CHANNELS)s
ident = %(EWS_HPFEEDS_IDENT)s
secret= %(EWS_HPFEEDS_SECRET)s
# path/to/certificate for tls broker - or "false" for non-tls broker
tlscert = %(EWS_HPFEEDS_TLSCERT)s
# hpfeeds submission format: "ews" (xml) or "json"
hpfformat = %(EWS_HPFEEDS_FORMAT)s
[EWSJSON]
json = false
jsondir = /data/ews/json/
[GLASTOPFV3]
glastopfv3 = true
nodeid = glastopfv3-{{ ansible_hostname }}
sqlitedb = /data/glastopf/db/glastopf.db
malwaredir = /data/glastopf/data/files/
[GLASTOPFV2]
glastopfv2 = false
nodeid =
mysqlhost =
mysqldb =
mysqluser =
mysqlpw =
malwaredir =
[KIPPO]
kippo = false
nodeid =
mysqlhost =
mysqldb =
mysqluser =
mysqlpw =
malwaredir =
[COWRIE]
cowrie = true
nodeid = cowrie-{{ ansible_hostname }}
logfile = /data/cowrie/log/cowrie.json
[DIONAEA]
dionaea = true
nodeid = dionaea-{{ ansible_hostname }}
malwaredir = /data/dionaea/binaries/
sqlitedb = /data/dionaea/log/dionaea.sqlite
[HONEYTRAP]
honeytrap = true
nodeid = honeytrap-{{ ansible_hostname }}
newversion = true
payloaddir = /data/honeytrap/attacks/
attackerfile = /data/honeytrap/log/attacker.log
[RDPDETECT]
rdpdetect = false
nodeid =
iptableslog =
targetip =
[EMOBILITY]
eMobility = false
nodeid = emobility-{{ ansible_hostname }}
logfile = /data/emobility/log/centralsystemEWS.log
[CONPOT]
conpot = true
nodeid = conpot-{{ ansible_hostname }}
logfile = /data/conpot/log/conpot*.json
[ELASTICPOT]
elasticpot = true
nodeid = elasticpot-{{ ansible_hostname }}
logfile = /data/elasticpot/log/elasticpot.log
[SURICATA]
suricata = true
nodeid = suricata-{{ ansible_hostname }}
logfile = /data/suricata/log/eve.json
[MAILONEY]
mailoney = true
nodeid = mailoney-{{ ansible_hostname }}
logfile = /data/mailoney/log/commands.log
[RDPY]
rdpy = true
nodeid = rdpy-{{ ansible_hostname }}
logfile = /data/rdpy/log/rdpy.log
[VNCLOWPOT]
vnclowpot = true
nodeid = vnclowpot-{{ ansible_hostname }}
logfile = /data/vnclowpot/log/vnclowpot.log
[HERALDING]
heralding = true
nodeid = heralding-{{ ansible_hostname }}
logfile = /data/heralding/log/auth.csv
[CISCOASA]
ciscoasa = true
nodeid = ciscoasa-{{ ansible_hostname }}
logfile = /data/ciscoasa/log/ciscoasa.log
[TANNER]
tanner = true
nodeid = tanner-{{ ansible_hostname }}
logfile = /data/tanner/log/tanner_report.json
[GLUTTON]
glutton = true
nodeid = glutton-{{ ansible_hostname }}
logfile = /data/glutton/log/glutton.log

View File

@ -1,8 +0,0 @@
myENABLE=false
myHOST=host
myPORT=port
myCHANNEL=channels
myCERT=false
myIDENT=user
mySECRET=secret
myFORMAT=json

View File

@ -1,12 +0,0 @@
- name: Copy hpfeeds configuration file
ansible.builtin.copy:
src: hpfeeds.cfg
dest: /data/ews/conf
owner: tpot
group: tpot
mode: 0770
register: config
- name: Applying hpfeeds settings
ansible.builtin.command: /opt/tpot/bin/hpfeeds_optin.sh --conf=/data/ews/conf/hpfeeds.cfg
when: config.changed == true

View File

@ -1,48 +0,0 @@
- name: Waiting for SSH connection
ansible.builtin.wait_for_connection:
- name: Gathering facts
ansible.builtin.setup:
- name: Cloning T-Pot install directory
ansible.builtin.git:
repo: "https://github.com/telekom-security/tpotce.git"
dest: /root/tpot
- name: Prepare to set user password
ansible.builtin.set_fact:
user_name: "{{ ansible_user }}"
user_salt: "s0mew1ck3dTpoT"
no_log: true
- name: Changing password for user {{ user_name }}
ansible.builtin.user:
name: "{{ ansible_user }}"
password: "{{ user_password | password_hash('sha512', user_salt) }}"
state: present
shell: /bin/bash
- name: Copy T-Pot configuration file
ansible.builtin.copy:
src: ../../../../../../iso/installer/tpot.conf.dist
dest: /root/tpot.conf
owner: root
group: root
mode: 0644
- name: Install T-Pot on instance - be patient, this might take 15 to 30 minutes depending on the connection speed.
ansible.builtin.command: /root/tpot/iso/installer/install.sh --type=auto --conf=/root/tpot.conf
- name: Delete T-Pot configuration file
ansible.builtin.file:
path: /root/tpot.conf
state: absent
- name: Change unattended-upgrades to take default action
ansible.builtin.blockinfile:
dest: /etc/apt/apt.conf.d/50unattended-upgrades
block: |
Dpkg::Options {
"--force-confdef";
"--force-confold";
}

View File

@ -1 +0,0 @@
user_password: LiNuXuSeRPaSs#

View File

@ -1,16 +0,0 @@
- name: Finally rebooting T-Pot
ansible.builtin.command: shutdown -r now
async: 1
poll: 0
- name: Next login options
ansible.builtin.debug:
msg:
- "***** SSH Access:"
- "***** ssh {{ ansible_user }}@{{ ansible_host }} -p 64295"
- ""
- "***** Web UI:"
- "***** https://{{ ansible_host }}:64297"
- ""
- "***** Admin UI:"
- "***** https://{{ ansible_host }}:64294"

View File

@ -1,129 +0,0 @@
# T-Pot Terraform
This [Terraform](https://www.terraform.io/) configuration can be used to launch a virtual machine, bootstrap any dependencies and install T-Pot in a single step.
Configuration for Amazon Web Services (AWS) and Open Telekom Cloud (OTC) is currently included.
This can easily be extended to support other [Terraform providers](https://registry.terraform.io/browse/providers?category=public-cloud%2Ccloud-automation%2Cinfrastructure).
[Cloud-init](https://cloudinit.readthedocs.io/en/latest/) is used to bootstrap the instance and install T-Pot on startup.
# Table of Contents
- [What gets created](#what-created)
- [Amazon Web Services (AWS)](#what-created-aws)
- [Open Telekom Cloud (OTC)](#what-created-otc)
- [Prerequisites](#pre)
- [Amazon Web Services (AWS)](#pre-aws)
- [Open Telekom Cloud (OTC)](#pre-otc)
- [Terraform Variables](#variables)
- [Common configuration items](#variables-common)
- [Amazon Web Services (AWS)](#variables-aws)
- [Open Telekom Cloud (OTC)](#variables-otc)
- [Initialising](#initialising)
- [Applying the Configuration](#applying)
- [Connecting to the Instance](#connecting)
<a name="what-created"></a>
## What gets created
<a name="what-created-aws"></a>
### Amazon Web Services (AWS)
* EC2 instance:
* t3.large (2 vCPUs, 8 GB RAM)
* 128 GB disk
* Debian 10
* Public IP
* Security Group:
* TCP/UDP ports <= 64000 open to the Internet
* TCP ports 64294, 64295 and 64297 open to a chosen administrative IP
<a name="what-created-otc"></a>
### Open Telekom Cloud (OTC)
* ECS instance:
* s3.medium.8 (1 vCPU, 8 GB RAM)
* 128 GB disk
* Debian 10
* Public EIP
* Security Group
* All TCP/UDP ports are open to the Internet
* Virtual Private Cloud (VPC) and Subnet
<a name="pre"></a>
## Prerequisites
* [Terraform](https://www.terraform.io/) 0.13 or newer
<a name="pre-aws"></a>
### Amazon Web Services (AWS)
* AWS Account
* Existing VPC: VPC ID needs to be specified in `aws/variables.tf`
* Existing subnet: Subnet ID needs to be specified in `aws/variables.tf`
* Existing SSH key pair: Key name needs to be specified in `aws/variables.tf`
* AWS Authentication credentials should be [set using environment variables](https://www.terraform.io/docs/providers/aws/index.html#environment-variables)
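A minimal sketch of providing those credentials via environment variables before running Terraform (placeholder values):
```
export AWS_ACCESS_KEY_ID=your_access_key
export AWS_SECRET_ACCESS_KEY=your_secret_key
export AWS_DEFAULT_REGION=eu-west-1
```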
<a name="pre-otc"></a>
### Open Telekom Cloud (OTC)
* OTC Account
* Existing SSH key pair: Key name needs to be specified in `otc/variables.tf`
* OTC Authentication credentials (Username, Password, Project Name, User Domain Name) can be set in the `otc/clouds.yaml` file
<a name="variables"></a>
## Terraform Variables
<a name="variables-common"></a>
### Common configuration items
These variables exist in `aws/variables.tf` and `otc/variables.tf` respectively.
Settings for cloud-init:
* `timezone` - Set the Server's timezone
* `linux_password` - Set a password for the Linux Operating System user (which is also used on the Admin UI)
Settings for T-Pot:
* `tpot_flavor` - Set the flavor of the T-Pot (Available flavors are listed in the variable's description)
* `web_user` - Set a username for the T-Pot Kibana Dashboard
* `web_password` - Set a password for the T-Pot Kibana Dashboard
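Since `linux_password` and `web_password` intentionally have no defaults, `terraform apply` will prompt for them; as a sketch, you can also pass them non-interactively via Terraform's `TF_VAR_` environment variables (placeholder values):
```
export TF_VAR_linux_password='YourLinuxPassword'
export TF_VAR_web_password='YourWebPassword'
```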
<a name="variables-aws"></a>
### Amazon Web Services (AWS)
In `aws/variables.tf`, you can change the additional variables:
* `admin_ip` - source IP address(es) that you will use to administer the system. Connections to TCP ports 64294, 64295 and 64297 will be allowed from this IP only. Multiple IPs or CIDR blocks can be specified in the format: `["127.0.0.1/32", "192.168.0.0/24"]`
* `ec2_vpc_id` - Specify an existing VPC ID
* `ec2_subnet_id` - Specify an existing Subnet ID
* `ec2_region`
* `ec2_ssh_key_name` - Specify an existing SSH key pair
* `ec2_instance_type`
<a name="variables-otc"></a>
### Open Telekom Cloud (OTC)
In `otc/variables.tf`, you can change the additional variables:
* `ecs_flavor`
* `ecs_disk_size`
* `availability_zone`
* `key_pair` - Specify an existing SSH key pair
* `eip_size`
... and some more, but these are the most relevant.
<a name="initialising"></a>
## Initialising
The [`terraform init`](https://www.terraform.io/docs/commands/init.html) command is used to initialize a working directory containing Terraform configuration files.
```
$ cd aws
$ terraform init
```
OR
```
$ cd otc
$ terraform init
```
<a name="applying"></a>
## Applying the Configuration
The [`terraform apply`](https://www.terraform.io/docs/commands/apply.html) command is used to apply the changes required to reach the desired state of the configuration, or the pre-determined set of actions generated by a [`terraform plan`](https://www.terraform.io/docs/commands/plan.html) execution plan.
```
$ terraform apply
```
This will create your infrastructure and start a Cloud Server. On startup, the Server gets bootstrapped with cloud-init and will install T-Pot. Once this is done, the server will reboot.
If you want to remove the built infrastructure, you can run [`terraform destroy`](https://www.terraform.io/docs/commands/destroy.html) to delete it.
<a name="connecting"></a>
## Connecting to the Instance
When the installation is completed, you can proceed with connecting/logging in to the T-Pot according to the [documentation](https://github.com/telekom-security/tpotce#ssh-and-web-access).
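For example, based on the `SSH_Access` and `Web_UI` Terraform outputs, connecting to an AWS-deployed instance might look like this (key file path is a placeholder):
```
ssh -i ~/.ssh/your_key.pem -p 64295 admin@<public-dns-of-instance>
```
The Web UI is then available at `https://<public-dns-of-instance>:64297` and the Admin UI on port 64294.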

View File

@ -1,20 +0,0 @@
# This file is maintained automatically by "terraform init".
# Manual edits may be lost in future updates.
provider "registry.terraform.io/hashicorp/aws" {
version = "3.26.0"
constraints = "3.26.0"
hashes = [
"h1:0i78FItlPeiomd+4ThZrtm56P5K33k7/6dnEe4ZePI0=",
"zh:26043eed36d070ca032cf04bc980c654a25821a8abc0c85e1e570e3935bbfcbb",
"zh:2fe68f3f78d23830a04d7fac3eda550eef1f627dfc130486f70a65dc5c254300",
"zh:3d66484c608c64678e639db25d63872783ce60363a1246e30317f21c9c23b84b",
"zh:46ffd755cfd4cf94fe66342797b5afdcef010a24e126c67fee141b357d393535",
"zh:5e96f24357e945c9067cf5e032ad1d003609629c956c2f9f642fefe714e74587",
"zh:60c27aca36bb63bf3e865c2193be80ca83b376581d00f9c220af4b013e163c4d",
"zh:896f0f22d19d41e71b22f9240b261714c3915b165ddefeb771e7734d69dc47ea",
"zh:90de9966cb2fd3e2f326df291595e55d2dd2d90e7d6dd085c2c8691dce82bdb4",
"zh:ad05a91a88ceb1d6de5a568f7cc0b0e5bc0a79f3da70bc28c1e7f3750e362d58",
"zh:e8c63f59c6465329e1f3357498face3dd7ef10a033df3c366a33aa9e94b46c01",
]
}

View File

@ -1,66 +0,0 @@
provider "aws" {
region = var.ec2_region
}
resource "aws_security_group" "tpot" {
name = "T-Pot"
description = "T-Pot Honeypot"
vpc_id = var.ec2_vpc_id
ingress {
from_port = 0
to_port = 64000
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
ingress {
from_port = 0
to_port = 64000
protocol = "udp"
cidr_blocks = ["0.0.0.0/0"]
}
ingress {
from_port = 64294
to_port = 64294
protocol = "tcp"
cidr_blocks = var.admin_ip
}
ingress {
from_port = 64295
to_port = 64295
protocol = "tcp"
cidr_blocks = var.admin_ip
}
ingress {
from_port = 64297
to_port = 64297
protocol = "tcp"
cidr_blocks = var.admin_ip
}
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
tags = {
Name = "T-Pot"
}
}
resource "aws_instance" "tpot" {
ami = var.ec2_ami[var.ec2_region]
instance_type = var.ec2_instance_type
key_name = var.ec2_ssh_key_name
subnet_id = var.ec2_subnet_id
tags = {
Name = "T-Pot Honeypot"
}
root_block_device {
volume_type = "gp2"
volume_size = 128
delete_on_termination = true
}
user_data = templatefile("../cloud-init.yaml", { timezone = var.timezone, password = var.linux_password, tpot_flavor = var.tpot_flavor, web_user = var.web_user, web_password = var.web_password })
vpc_security_group_ids = [aws_security_group.tpot.id]
associate_public_ip_address = true
}

View File

@ -1,12 +0,0 @@
output "Admin_UI" {
value = "https://${aws_instance.tpot.public_dns}:64294/"
}
output "SSH_Access" {
value = "ssh -i {private_key_file} -p 64295 admin@${aws_instance.tpot.public_dns}"
}
output "Web_UI" {
value = "https://${aws_instance.tpot.public_dns}:64297/"
}

View File

@ -1,93 +0,0 @@
variable "admin_ip" {
default = ["127.0.0.1/32"]
description = "admin IP addresses in CIDR format"
}
variable "ec2_vpc_id" {
description = "ID of AWS VPC"
default = "vpc-XXX"
}
variable "ec2_subnet_id" {
description = "ID of AWS VPC subnet"
default = "subnet-YYY"
}
variable "ec2_region" {
description = "AWS region to launch servers"
default = "eu-west-1"
}
variable "ec2_ssh_key_name" {
default = "default"
}
# https://aws.amazon.com/ec2/instance-types/
# t3.large = 2 vCPU, 8 GiB RAM
variable "ec2_instance_type" {
default = "t3.large"
}
# Refer to https://wiki.debian.org/Cloud/AmazonEC2Image/Bullseye
variable "ec2_ami" {
type = map(string)
default = {
"af-south-1" = "ami-0c372f041acae6d49"
"ap-east-1" = "ami-079b8d011d4655385"
"ap-northeast-1" = "ami-08dbbf1c0485a4aa8"
"ap-northeast-2" = "ami-0269fe7d013b8e2dd"
"ap-northeast-3" = "ami-0848d1e5fb6e3e3da"
"ap-south-1" = "ami-020d429f17c9f1d0a"
"ap-southeast-1" = "ami-09625a221230d9fe6"
"ap-southeast-2" = "ami-03cbc6cddb06af2c2"
"ca-central-1" = "ami-09125623b02302014"
"eu-central-1" = "ami-00c36c60f07e21791"
"eu-north-1" = "ami-052bea934e2d9dbfe"
"eu-south-1" = "ami-04e2bb16d37324719"
"eu-west-1" = "ami-0f87948fe2cf1b2a4"
"eu-west-2" = "ami-02ed1bc837487d535"
"eu-west-3" = "ami-080efd2add7e29430"
"me-south-1" = "ami-0dbde382c834c4a72"
"sa-east-1" = "ami-0a0792814cb068077"
"us-east-1" = "ami-05dd1b6e7ef6f8378"
"us-east-2" = "ami-04dd0542609808c50"
"us-west-1" = "ami-07af5f877b3db9f73"
"us-west-2" = "ami-0d0d8694ba492c02b"
}
}
## cloud-init configuration ##
variable "timezone" {
default = "UTC"
}
variable "linux_password" {
#default = "LiNuXuSeRPaSs#"
description = "Set a password for the default user"
validation {
condition = length(var.linux_password) > 0
error_message = "Please specify a password for the default user."
}
}
## These will go in the generated tpot.conf file ##
variable "tpot_flavor" {
default = "STANDARD"
description = "Specify your tpot flavor [STANDARD, HIVE, HIVE_SENSOR, INDUSTRIAL, LOG4J, MEDICAL, MINI, SENSOR]"
}
variable "web_user" {
default = "webuser"
description = "Set a username for the web user"
}
variable "web_password" {
#default = "w3b$ecret"
description = "Set a password for the web user"
validation {
condition = length(var.web_password) > 0
error_message = "Please specify a password for the web user."
}
}

View File

@ -1,9 +0,0 @@
terraform {
required_version = ">= 0.13"
required_providers {
aws = {
source = "hashicorp/aws"
version = "3.26.0"
}
}
}

View File

@ -1,9 +0,0 @@
provider "aws" {
alias = "eu-west-2"
region = "eu-west-2"
}
provider "aws" {
alias = "us-west-1"
region = "us-west-1"
}

View File

@ -1,27 +0,0 @@
module "eu-west-2" {
source = "./modules/multi-region"
ec2_vpc_id = "vpc-xxxxxxxx"
ec2_subnet_id = "subnet-xxxxxxxx"
ec2_region = "eu-west-2"
tpot_name = "T-Pot Honeypot"
linux_password = var.linux_password
web_password = var.web_password
providers = {
aws = aws.eu-west-2
}
}
module "us-west-1" {
source = "./modules/multi-region"
ec2_vpc_id = "vpc-xxxxxxxx"
ec2_subnet_id = "subnet-xxxxxxxx"
ec2_region = "us-west-1"
tpot_name = "T-Pot Honeypot"
linux_password = var.linux_password
web_password = var.web_password
providers = {
aws = aws.us-west-1
}
}

View File

@ -1,69 +0,0 @@
variable "ec2_vpc_id" {}
variable "ec2_subnet_id" {}
variable "ec2_region" {}
variable "linux_password" {}
variable "web_password" {}
variable "tpot_name" {}
resource "aws_security_group" "tpot" {
name = "T-Pot"
description = "T-Pot Honeypot"
vpc_id = var.ec2_vpc_id
ingress {
from_port = 0
to_port = 64000
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
ingress {
from_port = 0
to_port = 64000
protocol = "udp"
cidr_blocks = ["0.0.0.0/0"]
}
ingress {
from_port = 64294
to_port = 64294
protocol = "tcp"
cidr_blocks = var.admin_ip
}
ingress {
from_port = 64295
to_port = 64295
protocol = "tcp"
cidr_blocks = var.admin_ip
}
ingress {
from_port = 64297
to_port = 64297
protocol = "tcp"
cidr_blocks = var.admin_ip
}
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
tags = {
Name = "T-Pot"
}
}
resource "aws_instance" "tpot" {
ami = var.ec2_ami[var.ec2_region]
instance_type = var.ec2_instance_type
key_name = var.ec2_ssh_key_name
subnet_id = var.ec2_subnet_id
tags = {
Name = var.tpot_name
}
root_block_device {
volume_type = "gp2"
volume_size = 128
delete_on_termination = true
}
user_data = templatefile("../cloud-init.yaml", { timezone = var.timezone, password = var.linux_password, tpot_flavor = var.tpot_flavor, web_user = var.web_user, web_password = var.web_password })
vpc_security_group_ids = [aws_security_group.tpot.id]
associate_public_ip_address = true
}

View File

@ -1,12 +0,0 @@
output "Admin_UI" {
value = "https://${aws_instance.tpot.public_dns}:64294/"
}
output "SSH_Access" {
value = "ssh -i {private_key_file} -p 64295 admin@${aws_instance.tpot.public_dns}"
}
output "Web_UI" {
value = "https://${aws_instance.tpot.public_dns}:64297/"
}

View File

@ -1,57 +0,0 @@
variable "admin_ip" {
default = ["127.0.0.1/32"]
description = "admin IP addresses in CIDR format"
}
variable "ec2_ssh_key_name" {
default = "default"
}
# https://aws.amazon.com/ec2/instance-types/
variable "ec2_instance_type" {
default = "t3.xlarge"
}
# Refer to https://wiki.debian.org/Cloud/AmazonEC2Image/Bullseye
variable "ec2_ami" {
type = map(string)
default = {
"af-south-1" = "ami-0c372f041acae6d49"
"ap-east-1" = "ami-079b8d011d4655385"
"ap-northeast-1" = "ami-08dbbf1c0485a4aa8"
"ap-northeast-2" = "ami-0269fe7d013b8e2dd"
"ap-northeast-3" = "ami-0848d1e5fb6e3e3da"
"ap-south-1" = "ami-020d429f17c9f1d0a"
"ap-southeast-1" = "ami-09625a221230d9fe6"
"ap-southeast-2" = "ami-03cbc6cddb06af2c2"
"ca-central-1" = "ami-09125623b02302014"
"eu-central-1" = "ami-00c36c60f07e21791"
"eu-north-1" = "ami-052bea934e2d9dbfe"
"eu-south-1" = "ami-04e2bb16d37324719"
"eu-west-1" = "ami-0f87948fe2cf1b2a4"
"eu-west-2" = "ami-02ed1bc837487d535"
"eu-west-3" = "ami-080efd2add7e29430"
"me-south-1" = "ami-0dbde382c834c4a72"
"sa-east-1" = "ami-0a0792814cb068077"
"us-east-1" = "ami-05dd1b6e7ef6f8378"
"us-east-2" = "ami-04dd0542609808c50"
"us-west-1" = "ami-07af5f877b3db9f73"
"us-west-2" = "ami-0d0d8694ba492c02b"
}
}
## cloud-init configuration ##
variable "timezone" {
default = "UTC"
}
## These will go in the generated tpot.conf file ##
variable "tpot_flavor" {
default = "STANDARD"
description = "Specify your tpot flavor [STANDARD, HIVE, HIVE_SENSOR, INDUSTRIAL, LOG4J, MEDICAL, MINI, SENSOR]"
}
variable "web_user" {
default = "webuser"
description = "Set a username for the web user"
}

View File

@ -1,9 +0,0 @@
terraform {
required_version = ">= 0.13"
required_providers {
aws = {
source = "hashicorp/aws"
version = "3.72.0"
}
}
}

View File

@ -1,7 +0,0 @@
output "eu-west-2_Web_UI" {
value = module.eu-west-2.Web_UI
}
output "us-west-1_Web_UI" {
value = module.us-west-1.Web_UI
}

View File

@ -1,19 +0,0 @@
variable "linux_password" {
#default = "LiNuXuSeRP4Ss!"
description = "Set a password for the default user"
validation {
condition = length(var.linux_password) > 0
error_message = "Please specify a password for the default user."
}
}
variable "web_password" {
#default = "w3b$ecret20"
description = "Set a password for the web user"
validation {
condition = length(var.web_password) > 0
error_message = "Please specify a password for the web user."
}
}

View File

@ -1,26 +0,0 @@
#cloud-config
timezone: ${timezone}
packages:
- git
runcmd:
- curl -sS --retry 5 https://github.com
- git clone https://github.com/telekom-security/tpotce /root/tpot
- /root/tpot/iso/installer/install.sh --type=auto --conf=/root/tpot.conf
- rm /root/tpot.conf
- /sbin/shutdown -r now
password: ${password}
chpasswd:
expire: false
write_files:
- content: |
# tpot configuration file
myCONF_TPOT_FLAVOR='${tpot_flavor}'
myCONF_WEB_USER='${web_user}'
myCONF_WEB_PW='${web_password}'
owner: root:root
path: /root/tpot.conf
permissions: '0600'

View File

@ -1,38 +0,0 @@
# This file is maintained automatically by "terraform init".
# Manual edits may be lost in future updates.
provider "registry.terraform.io/hashicorp/random" {
version = "3.1.0"
constraints = "~> 3.1.0"
hashes = [
"h1:BZMEPucF+pbu9gsPk0G0BHx7YP04+tKdq2MrRDF1EDM=",
"zh:2bbb3339f0643b5daa07480ef4397bd23a79963cc364cdfbb4e86354cb7725bc",
"zh:3cd456047805bf639fbf2c761b1848880ea703a054f76db51852008b11008626",
"zh:4f251b0eda5bb5e3dc26ea4400dba200018213654b69b4a5f96abee815b4f5ff",
"zh:7011332745ea061e517fe1319bd6c75054a314155cb2c1199a5b01fe1889a7e2",
"zh:738ed82858317ccc246691c8b85995bc125ac3b4143043219bd0437adc56c992",
"zh:7dbe52fac7bb21227acd7529b487511c91f4107db9cc4414f50d04ffc3cab427",
"zh:a3a9251fb15f93e4cfc1789800fc2d7414bbc18944ad4c5c98f466e6477c42bc",
"zh:a543ec1a3a8c20635cf374110bd2f87c07374cf2c50617eee2c669b3ceeeaa9f",
"zh:d9ab41d556a48bd7059f0810cf020500635bfc696c9fc3adab5ea8915c1d886b",
"zh:d9e13427a7d011dbd654e591b0337e6074eef8c3b9bb11b2e39eaaf257044fd7",
"zh:f7605bd1437752114baf601bdf6931debe6dc6bfe3006eb7e9bb9080931dca8a",
]
}
provider "registry.terraform.io/opentelekomcloud/opentelekomcloud" {
version = "1.23.6"
constraints = "~> 1.23.4"
hashes = [
"h1:B/1Md957jWaDgFqsJDzmJc75KwL0eC/PCVuZ8HV5xSc=",
"zh:1aa79010869d082157fb44fc83c3bff4e40938ec0ca916f704d974c7f7ca39e4",
"zh:3155b8366828ce50231f69962b55df1e2261ed63c44bb64e2c950dd68769df1b",
"zh:4a909617aa96a6d8aead14f56996ad94e0a1cae9d28e8df1ddae19c2095ed337",
"zh:4f71046719632b4b90f88d29d8ba88915ee6ad66cd9d7ebe84a7459013e5003a",
"zh:67e4d10b2db79ad78ae2ec8d9dfac53c4721028f97f4436a7aa45e80b1beefd3",
"zh:7f12541fc5a3513e5522ff2bd5fee17d1e67bfe64f9ef59d03863fc7389e12ce",
"zh:86fadabfc8307cf6084a412ffc9c797ec94932d08bc663a3fcebf98101e951f6",
"zh:98744b39c2bfe3e8e6f929f750a689971071b257f3f066f669f93c8e0b76d179",
"zh:c363d41debb060804e2c6bd9cb50b4e8daa37362299e3ea74e187265cd85f2ca",
]
}

View File

@ -1,9 +0,0 @@
clouds:
open-telekom-cloud:
region_name: eu-de
auth:
project_name: eu-de_your_project
username: your_api_user
password: your_password
user_domain_name: OTC-EU-DE-000000000010000XXXXX
auth_url: https://iam.eu-de.otc.t-systems.com/v3

View File

@ -1,68 +0,0 @@
data "opentelekomcloud_images_image_v2" "debian" {
name = "Standard_Debian_10_latest"
}
resource "opentelekomcloud_networking_secgroup_v2" "secgroup_1" {
name = var.secgroup_name
description = var.secgroup_desc
}
resource "opentelekomcloud_networking_secgroup_rule_v2" "secgroup_rule_1" {
direction = "ingress"
ethertype = "IPv4"
remote_ip_prefix = "0.0.0.0/0"
security_group_id = opentelekomcloud_networking_secgroup_v2.secgroup_1.id
}
resource "opentelekomcloud_vpc_v1" "vpc_1" {
name = var.vpc_name
cidr = var.vpc_cidr
}
resource "opentelekomcloud_vpc_subnet_v1" "subnet_1" {
name = var.subnet_name
cidr = var.subnet_cidr
vpc_id = opentelekomcloud_vpc_v1.vpc_1.id
gateway_ip = var.subnet_gateway_ip
dns_list = ["100.125.4.25", "100.125.129.199"]
}
resource "random_id" "tpot" {
byte_length = 6
prefix = var.ecs_prefix
}
resource "opentelekomcloud_ecs_instance_v1" "ecs_1" {
name = random_id.tpot.b64_url
image_id = data.opentelekomcloud_images_image_v2.debian.id
flavor = var.ecs_flavor
vpc_id = opentelekomcloud_vpc_v1.vpc_1.id
nics {
network_id = opentelekomcloud_vpc_subnet_v1.subnet_1.id
}
system_disk_size = var.ecs_disk_size
system_disk_type = "SAS"
security_groups = [opentelekomcloud_networking_secgroup_v2.secgroup_1.id]
availability_zone = var.availability_zone
key_name = var.key_pair
user_data = templatefile("../cloud-init.yaml", { timezone = var.timezone, password = var.linux_password, tpot_flavor = var.tpot_flavor, web_user = var.web_user, web_password = var.web_password })
}
resource "opentelekomcloud_vpc_eip_v1" "eip_1" {
publicip {
type = "5_bgp"
}
bandwidth {
name = "bandwidth-${random_id.tpot.b64_url}"
size = var.eip_size
share_type = "PER"
}
}
resource "opentelekomcloud_compute_floatingip_associate_v2" "fip_1" {
floating_ip = opentelekomcloud_vpc_eip_v1.eip_1.publicip.0.ip_address
instance_id = opentelekomcloud_ecs_instance_v1.ecs_1.id
}

View File

@ -1,11 +0,0 @@
output "Admin_UI" {
value = "https://${opentelekomcloud_vpc_eip_v1.eip_1.publicip.0.ip_address}:64294"
}
output "SSH_Access" {
value = "ssh -p 64295 linux@${opentelekomcloud_vpc_eip_v1.eip_1.publicip.0.ip_address}"
}
output "Web_UI" {
value = "https://${opentelekomcloud_vpc_eip_v1.eip_1.publicip.0.ip_address}:64297"
}

View File

@ -1,3 +0,0 @@
provider "opentelekomcloud" {
cloud = "open-telekom-cloud"
}

View File

@ -1,98 +0,0 @@
## cloud-init configuration ##
variable "timezone" {
default = "UTC"
}
variable "linux_password" {
#default = "LiNuXuSeRPaSs#"
description = "Set a password for the default user"
validation {
condition = length(var.linux_password) > 0
error_message = "Please specify a password for the default user."
}
}
## Security Group ##
variable "secgroup_name" {
default = "sg-tpot"
}
variable "secgroup_desc" {
default = "Security Group for T-Pot"
}
## Virtual Private Cloud ##
variable "vpc_name" {
default = "vpc-tpot"
}
variable "vpc_cidr" {
default = "192.168.0.0/16"
}
## Subnet ##
variable "subnet_name" {
default = "subnet-tpot"
}
variable "subnet_cidr" {
default = "192.168.0.0/24"
}
variable "subnet_gateway_ip" {
default = "192.168.0.1"
}
## Elastic Cloud Server ##
variable "ecs_prefix" {
default = "tpot-"
}
variable "ecs_flavor" {
default = "s3.medium.8"
}
variable "ecs_disk_size" {
default = "128"
}
variable "availability_zone" {
default = "eu-de-03"
}
variable "key_pair" {
#default = ""
description = "Specify your SSH key pair"
validation {
condition = length(var.key_pair) > 0
error_message = "Please specify a Key Pair."
}
}
## Elastic IP ##
variable "eip_size" {
default = "100"
}
## These will go in the generated tpot.conf file ##
variable "tpot_flavor" {
default = "STANDARD"
description = "Specify your tpot flavor [STANDARD, HIVE, HIVE_SENSOR, INDUSTRIAL, LOG4J, MEDICAL, MINI, SENSOR]"
}
variable "web_user" {
default = "webuser"
description = "Set a username for the web user"
}
variable "web_password" {
#default = "w3b$ecret"
description = "Set a password for the web user"
validation {
condition = length(var.web_password) > 0
error_message = "Please specify a password for the web user."
}
}

View File

@ -1,13 +0,0 @@
terraform {
required_version = ">= 0.13"
required_providers {
opentelekomcloud = {
source = "opentelekomcloud/opentelekomcloud"
version = "~> 1.23.4"
}
random = {
source = "hashicorp/random"
version = "~> 3.1.0"
}
}
}

182
compose/customizer.py Normal file
View File

@ -0,0 +1,182 @@
from datetime import datetime
import yaml
version = \
"""
____ [T-Pot] _ ____ _ _ _
/ ___| ___ _ ____ _(_) ___ ___ | __ ) _ _(_) | __| | ___ _ __
\___ \ / _ \ '__\ \ / / |/ __/ _ \ | _ \| | | | | |/ _` |/ _ \ '__|
___) | __/ | \ V /| | (_| __/ | |_) | |_| | | | (_| | __/ |
|____/ \___|_| \_/ |_|\___\___| |____/ \__,_|_|_|\__,_|\___|_| v0.21
# This script is intended for users who want to build a customized docker-compose.yml for T-Pot.
# T-Pot Service Builder will ask for all the docker services to be included in docker-compose.yml.
# The configuration file will be checked for conflicting ports.
# Port conflicts have to be resolved manually, or by re-running the script and excluding the conflicting services.
# Review the resulting docker-compose-custom.yml and adjust to your needs by (un)commenting the corresponding lines in the config.
"""
header = \
"""# T-Pot: CUSTOM EDITION
# Generated on: {current_date}
"""
config_filename = "tpot_services.yml"
service_filename = "docker-compose-custom.yml"
def load_config(filename):
try:
with open(filename, 'r') as file:
config = yaml.safe_load(file)
except:
print_color(f"Error: {filename} not found. Exiting.", "red")
exit()
return config
def prompt_service_include(service_name):
while True:
try:
response = input(f"Include {service_name}? (y/n): ").strip().lower()
if response in ['y', 'n']:
return response == 'y'
else:
print_color("Please enter 'y' for yes or 'n' for no.", "red")
except KeyboardInterrupt:
print()
print_color("Interrupted by user. Exiting.", "red")
print()
exit()
def check_port_conflicts(selected_services):
all_ports = {}
conflict_ports = []
for service_name, config in selected_services.items():
ports = config.get('ports', [])
for port in ports:
# Split the port mapping and take only the host port part
parts = port.split(':')
host_port = parts[1] if len(parts) == 3 else (parts[0] if parts[1].isdigit() else parts[1])
# Check for port conflict and associate it with the service name
if host_port in all_ports:
conflict_ports.append((service_name, host_port))
if all_ports[host_port] not in [service for service, _ in conflict_ports]:
conflict_ports.append((all_ports[host_port], host_port))
else:
all_ports[host_port] = service_name
if conflict_ports:
print_color("[WARNING] - Port conflict(s) detected:", "red")
for service, port in conflict_ports:
print_color(f"{service}: {port}", "red")
return True
return False
def print_color(text, color):
colors = {
"red": "\033[91m",
"green": "\033[92m",
"blue": "\033[94m", # Added blue
"magenta": "\033[95m", # Added magenta
"end": "\033[0m",
}
print(f"{colors[color]}{text}{colors['end']}")
def enforce_dependencies(selected_services, services):
# If snare or any tanner services are selected, ensure all are enabled
tanner_services = {'snare', 'tanner', 'tanner_redis', 'tanner_phpox', 'tanner_api'}
if tanner_services.intersection(selected_services):
print_color("[OK] - For Snare / Tanner to work all required services have been added to your configuration.", "green")
for service in tanner_services:
selected_services[service] = services[service]
# If kibana is enabled, also enable elasticsearch
if 'kibana' in selected_services:
selected_services['elasticsearch'] = services['elasticsearch']
print_color("[OK] - Kibana requires Elasticsearch which has been added to your configuration.", "green")
# If spiderfoot is enabled, also enable nginx
if 'spiderfoot' in selected_services:
selected_services['nginx'] = services['nginx']
print_color("[OK] - Spiderfoot requires Nginx which has been added to your configuration.","green")
# If any map services are detected, enable logstash, elasticsearch, nginx, and all map services
map_services = {'map_web', 'map_redis', 'map_data'}
if map_services.intersection(selected_services):
print_color("[OK] - For AttackMap to work all required services have been added to your configuration.", "green")
for service in map_services.union({'elasticsearch', 'nginx'}):
selected_services[service] = services[service]
# honeytrap and glutton cannot be active at the same time, always vote in favor of honeytrap
if 'honeytrap' in selected_services and 'glutton' in selected_services:
# Remove glutton and notify
del selected_services['glutton']
print_color("[OK] - Honeytrap and Glutton cannot be active at the same time. Glutton has been removed from your configuration.","green")
def remove_unused_networks(selected_services, services, networks):
used_networks = set()
# Identify networks used by selected services
for service_name in selected_services:
service_config = services[service_name]
if 'networks' in service_config:
for network in service_config['networks']:
used_networks.add(network)
# Remove unused networks
for network in list(networks):
if network not in used_networks:
del networks[network]
def main():
config = load_config(config_filename)
# Separate services and networks
services = config['services']
networks = config.get('networks', {})
selected_services = {'tpotinit': services['tpotinit'],
'logstash': services['logstash']} # Always include tpotinit and logstash
for service_name, service_config in services.items():
if service_name not in selected_services: # Skip already included services
if prompt_service_include(service_name):
selected_services[service_name] = service_config
# Enforce dependencies
enforce_dependencies(selected_services, services)
# Remove unused networks based on selected services
remove_unused_networks(selected_services, services, networks)
output_config = {
'version': '3.9',
'networks': networks,
'services': selected_services,
}
current_date = datetime.now().strftime("%Y-%m-%d %H:%M:%S")
with open(service_filename, 'w') as file:
file.write(header.format(current_date=current_date))
yaml.dump(output_config, file, default_flow_style=False, sort_keys=False, indent=2)
if check_port_conflicts(selected_services):
print_color(f"[WARNING] - Adjust the conflicting ports in the {service_filename} or re-run the script and select services that do not occupy the same port(s).",
"red")
else:
print_color(f"[OK] - Custom {service_filename} has been generated without port conflicts.", "green")
print_color(f"Copy {service_filename} to ~/tpotce and test with: docker compose -f {service_filename} up", "blue")
print_color(f"If everything works, exit with CTRL-C and replace docker-compose.yml with the new config.", "blue")
if __name__ == "__main__":
print_color(version, "magenta")
main()

825
compose/mac_win.yml Normal file
View File

@ -0,0 +1,825 @@
# T-Pot: MAC_WIN
version: '3.9'
networks:
tpotinit_local:
adbhoney_local:
ciscoasa_local:
citrixhoneypot_local:
conpot_local_IEC104:
conpot_local_guardian_ast:
conpot_local_ipmi:
conpot_local_kamstrup_382:
cowrie_local:
ddospot_local:
dicompot_local:
dionaea_local:
elasticpot_local:
heralding_local:
ipphoney_local:
mailoney_local:
medpot_local:
redishoneypot_local:
sentrypeer_local:
tanner_local:
wordpot_local:
nginx_local:
ewsposter_local:
services:
########################################
#### DEV
########################################
#### T-Pot Init - Never delete this!
########################################
# T-Pot Init Service
tpotinit:
container_name: tpotinit
env_file:
- .env
restart: always
stop_grace_period: 60s
tmpfs:
- /tmp/etc:uid=2000,gid=2000
- /tmp/:uid=2000,gid=2000
networks:
- tpotinit_local
image: ${TPOT_REPO}/tpotinit:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
volumes:
- ${TPOT_DOCKER_COMPOSE}:/tmp/tpot/docker-compose.yml:ro
- ${TPOT_DATA_PATH}/blackhole:/etc/blackhole
- ${TPOT_DATA_PATH}:/data
##################
#### Honeypots
##################
# Adbhoney service
adbhoney:
container_name: adbhoney
restart: always
depends_on:
tpotinit:
condition: service_healthy
networks:
- adbhoney_local
ports:
- "5555:5555"
image: ${TPOT_REPO}/adbhoney:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
volumes:
- ${TPOT_DATA_PATH}/adbhoney/log:/opt/adbhoney/log
- ${TPOT_DATA_PATH}/adbhoney/downloads:/opt/adbhoney/dl
# Ciscoasa service
ciscoasa:
container_name: ciscoasa
restart: always
depends_on:
tpotinit:
condition: service_healthy
tmpfs:
- /tmp/ciscoasa:uid=2000,gid=2000
networks:
- ciscoasa_local
ports:
- "5000:5000/udp"
- "8443:8443"
image: ${TPOT_REPO}/ciscoasa:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
volumes:
- ${TPOT_DATA_PATH}/ciscoasa/log:/var/log/ciscoasa
# CitrixHoneypot service
citrixhoneypot:
container_name: citrixhoneypot
restart: always
depends_on:
tpotinit:
condition: service_healthy
networks:
- citrixhoneypot_local
ports:
- "443:443"
image: ${TPOT_REPO}/citrixhoneypot:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
volumes:
- ${TPOT_DATA_PATH}/citrixhoneypot/log:/opt/citrixhoneypot/logs
# Conpot IEC104 service
conpot_IEC104:
container_name: conpot_iec104
restart: always
depends_on:
tpotinit:
condition: service_healthy
environment:
- CONPOT_CONFIG=/etc/conpot/conpot.cfg
- CONPOT_JSON_LOG=/var/log/conpot/conpot_IEC104.json
- CONPOT_LOG=/var/log/conpot/conpot_IEC104.log
- CONPOT_TEMPLATE=IEC104
- CONPOT_TMP=/tmp/conpot
tmpfs:
- /tmp/conpot:uid=2000,gid=2000
networks:
- conpot_local_IEC104
ports:
- "161:161/udp"
- "2404:2404"
image: ${TPOT_REPO}/conpot:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
volumes:
- ${TPOT_DATA_PATH}/conpot/log:/var/log/conpot
# Conpot guardian_ast service
conpot_guardian_ast:
container_name: conpot_guardian_ast
restart: always
depends_on:
tpotinit:
condition: service_healthy
environment:
- CONPOT_CONFIG=/etc/conpot/conpot.cfg
- CONPOT_JSON_LOG=/var/log/conpot/conpot_guardian_ast.json
- CONPOT_LOG=/var/log/conpot/conpot_guardian_ast.log
- CONPOT_TEMPLATE=guardian_ast
- CONPOT_TMP=/tmp/conpot
tmpfs:
- /tmp/conpot:uid=2000,gid=2000
networks:
- conpot_local_guardian_ast
ports:
- "10001:10001"
image: ${TPOT_REPO}/conpot:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
volumes:
- ${TPOT_DATA_PATH}/conpot/log:/var/log/conpot
# Conpot ipmi
conpot_ipmi:
container_name: conpot_ipmi
restart: always
depends_on:
tpotinit:
condition: service_healthy
environment:
- CONPOT_CONFIG=/etc/conpot/conpot.cfg
- CONPOT_JSON_LOG=/var/log/conpot/conpot_ipmi.json
- CONPOT_LOG=/var/log/conpot/conpot_ipmi.log
- CONPOT_TEMPLATE=ipmi
- CONPOT_TMP=/tmp/conpot
tmpfs:
- /tmp/conpot:uid=2000,gid=2000
networks:
- conpot_local_ipmi
ports:
- "623:623/udp"
image: ${TPOT_REPO}/conpot:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
volumes:
- ${TPOT_DATA_PATH}/conpot/log:/var/log/conpot
# Conpot kamstrup_382
conpot_kamstrup_382:
container_name: conpot_kamstrup_382
restart: always
depends_on:
tpotinit:
condition: service_healthy
environment:
- CONPOT_CONFIG=/etc/conpot/conpot.cfg
- CONPOT_JSON_LOG=/var/log/conpot/conpot_kamstrup_382.json
- CONPOT_LOG=/var/log/conpot/conpot_kamstrup_382.log
- CONPOT_TEMPLATE=kamstrup_382
- CONPOT_TMP=/tmp/conpot
tmpfs:
- /tmp/conpot:uid=2000,gid=2000
networks:
- conpot_local_kamstrup_382
ports:
- "1025:1025"
- "50100:50100"
image: ${TPOT_REPO}/conpot:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
volumes:
- ${TPOT_DATA_PATH}/conpot/log:/var/log/conpot
# Cowrie service
cowrie:
container_name: cowrie
restart: always
depends_on:
tpotinit:
condition: service_healthy
tmpfs:
- /tmp/cowrie:uid=2000,gid=2000
- /tmp/cowrie/data:uid=2000,gid=2000
networks:
- cowrie_local
ports:
- "22:22"
- "23:23"
image: ${TPOT_REPO}/cowrie:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
volumes:
- ${TPOT_DATA_PATH}/cowrie/downloads:/home/cowrie/cowrie/dl
- ${TPOT_DATA_PATH}/cowrie/keys:/home/cowrie/cowrie/etc
- ${TPOT_DATA_PATH}/cowrie/log:/home/cowrie/cowrie/log
- ${TPOT_DATA_PATH}/cowrie/log/tty:/home/cowrie/cowrie/log/tty
# Ddospot service
ddospot:
container_name: ddospot
restart: always
depends_on:
tpotinit:
condition: service_healthy
networks:
- ddospot_local
ports:
- "19:19/udp"
- "53:53/udp"
- "123:123/udp"
# - "161:161/udp"
- "1900:1900/udp"
image: ${TPOT_REPO}/ddospot:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
volumes:
- ${TPOT_DATA_PATH}/ddospot/log:/opt/ddospot/ddospot/logs
- ${TPOT_DATA_PATH}/ddospot/bl:/opt/ddospot/ddospot/bl
- ${TPOT_DATA_PATH}/ddospot/db:/opt/ddospot/ddospot/db
# Dicompot service
# Get the Horos Client for testing: https://horosproject.org/
# Get Dicom images (CC BY 3.0): https://www.cancerimagingarchive.net/collections/
# Put images (which must be in Dicom DCM format or it will not work!) into /data/dicompot/images
dicompot:
container_name: dicompot
restart: always
depends_on:
tpotinit:
condition: service_healthy
networks:
- dicompot_local
ports:
- "11112:11112"
image: ${TPOT_REPO}/dicompot:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
volumes:
- ${TPOT_DATA_PATH}/dicompot/log:/var/log/dicompot
# - ${TPOT_DATA_PATH}/dicompot/images:/opt/dicompot/images
# Dionaea service
dionaea:
container_name: dionaea
stdin_open: true
tty: true
restart: always
depends_on:
tpotinit:
condition: service_healthy
networks:
- dionaea_local
ports:
- "20:20"
- "21:21"
- "42:42"
- "69:69/udp"
- "81:81"
- "135:135"
# - "443:443"
- "445:445"
- "1433:1433"
- "1723:1723"
- "1883:1883"
- "3306:3306"
# - "5060:5060"
# - "5060:5060/udp"
# - "5061:5061"
- "27017:27017"
image: ${TPOT_REPO}/dionaea:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
volumes:
- ${TPOT_DATA_PATH}/dionaea/roots/ftp:/opt/dionaea/var/dionaea/roots/ftp
- ${TPOT_DATA_PATH}/dionaea/roots/tftp:/opt/dionaea/var/dionaea/roots/tftp
- ${TPOT_DATA_PATH}/dionaea/roots/www:/opt/dionaea/var/dionaea/roots/www
- ${TPOT_DATA_PATH}/dionaea/roots/upnp:/opt/dionaea/var/dionaea/roots/upnp
- ${TPOT_DATA_PATH}/dionaea:/opt/dionaea/var/dionaea
- ${TPOT_DATA_PATH}/dionaea/binaries:/opt/dionaea/var/dionaea/binaries
- ${TPOT_DATA_PATH}/dionaea/log:/opt/dionaea/var/log
- ${TPOT_DATA_PATH}/dionaea/rtp:/opt/dionaea/var/dionaea/rtp
# ElasticPot service
elasticpot:
container_name: elasticpot
restart: always
depends_on:
tpotinit:
condition: service_healthy
networks:
- elasticpot_local
ports:
- "9200:9200"
image: ${TPOT_REPO}/elasticpot:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
volumes:
- ${TPOT_DATA_PATH}/elasticpot/log:/opt/elasticpot/log
# Heralding service
heralding:
container_name: heralding
restart: always
depends_on:
tpotinit:
condition: service_healthy
tmpfs:
- /tmp/heralding:uid=2000,gid=2000
networks:
- heralding_local
ports:
# - "21:21"
# - "22:22"
# - "23:23"
# - "25:25"
# - "80:80"
- "110:110"
- "143:143"
# - "443:443"
- "465:465"
- "993:993"
- "995:995"
# - "3306:3306"
# - "3389:3389"
- "1080:1080"
- "5432:5432"
- "5900:5900"
image: ${TPOT_REPO}/heralding:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
volumes:
- ${TPOT_DATA_PATH}/heralding/log:/var/log/heralding
# Ipphoney service
ipphoney:
container_name: ipphoney
restart: always
depends_on:
tpotinit:
condition: service_healthy
networks:
- ipphoney_local
ports:
- "631:631"
image: ${TPOT_REPO}/ipphoney:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
volumes:
- ${TPOT_DATA_PATH}/ipphoney/log:/opt/ipphoney/log
# Mailoney service
mailoney:
container_name: mailoney
restart: always
depends_on:
tpotinit:
condition: service_healthy
environment:
- HPFEEDS_SERVER=
- HPFEEDS_IDENT=user
- HPFEEDS_SECRET=pass
- HPFEEDS_PORT=20000
- HPFEEDS_CHANNELPREFIX=prefix
networks:
- mailoney_local
ports:
- "25:25"
- "587:25"
image: ${TPOT_REPO}/mailoney:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
volumes:
- ${TPOT_DATA_PATH}/mailoney/log:/opt/mailoney/logs
# Medpot service
medpot:
container_name: medpot
restart: always
depends_on:
tpotinit:
condition: service_healthy
networks:
- medpot_local
ports:
- "2575:2575"
image: ${TPOT_REPO}/medpot:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
volumes:
- ${TPOT_DATA_PATH}/medpot/log/:/var/log/medpot
# Redishoneypot service
redishoneypot:
container_name: redishoneypot
restart: always
depends_on:
tpotinit:
condition: service_healthy
networks:
- redishoneypot_local
ports:
- "6379:6379"
image: ${TPOT_REPO}/redishoneypot:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
volumes:
- ${TPOT_DATA_PATH}/redishoneypot/log:/var/log/redishoneypot
# SentryPeer service
sentrypeer:
container_name: sentrypeer
restart: always
depends_on:
tpotinit:
condition: service_healthy
# environment:
# - SENTRYPEER_PEER_TO_PEER=1
networks:
- sentrypeer_local
ports:
# - "4222:4222/udp"
- "5060:5060/tcp"
- "5060:5060/udp"
# - "127.0.0.1:8082:8082"
image: ${TPOT_REPO}/sentrypeer:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
volumes:
- ${TPOT_DATA_PATH}/sentrypeer/log:/var/log/sentrypeer
#### Snare / Tanner
## Tanner Redis Service
tanner_redis:
container_name: tanner_redis
restart: always
depends_on:
tpotinit:
condition: service_healthy
tty: true
networks:
- tanner_local
image: ${TPOT_REPO}/redis:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
## PHP Sandbox service
tanner_phpox:
container_name: tanner_phpox
restart: always
depends_on:
tpotinit:
condition: service_healthy
tty: true
networks:
- tanner_local
image: ${TPOT_REPO}/phpox:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
## Tanner API Service
tanner_api:
container_name: tanner_api
restart: always
depends_on:
- tanner_redis
tmpfs:
- /tmp/tanner:uid=2000,gid=2000
tty: true
networks:
- tanner_local
image: ${TPOT_REPO}/tanner:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
volumes:
- ${TPOT_DATA_PATH}/tanner/log:/var/log/tanner
command: tannerapi
## Tanner Service
tanner:
container_name: tanner
restart: always
depends_on:
- tanner_api
- tanner_phpox
tmpfs:
- /tmp/tanner:uid=2000,gid=2000
tty: true
networks:
- tanner_local
image: ${TPOT_REPO}/tanner:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
command: tanner
read_only: true
volumes:
- ${TPOT_DATA_PATH}/tanner/log:/var/log/tanner
- ${TPOT_DATA_PATH}/tanner/files:/opt/tanner/files
## Snare Service
snare:
container_name: snare
restart: always
depends_on:
- tanner
tty: true
networks:
- tanner_local
ports:
- "80:80"
image: ${TPOT_REPO}/snare:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
# Wordpot service
wordpot:
container_name: wordpot
restart: always
depends_on:
tpotinit:
condition: service_healthy
networks:
- wordpot_local
ports:
- "8080:80"
image: ${TPOT_REPO}/wordpot:${TPOT_VERSION}
read_only: true
volumes:
- ${TPOT_DATA_PATH}/wordpot/log:/opt/wordpot/logs/
##################
#### NSM
##################
# Fatt service
fatt:
container_name: fatt
restart: always
depends_on:
tpotinit:
condition: service_healthy
network_mode: "host"
cap_add:
- NET_ADMIN
- SYS_NICE
- NET_RAW
image: ${TPOT_REPO}/fatt:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
volumes:
- ${TPOT_DATA_PATH}/fatt/log:/opt/fatt/log
# P0f service
p0f:
container_name: p0f
restart: always
depends_on:
tpotinit:
condition: service_healthy
network_mode: "host"
cap_add:
- NET_ADMIN
- SYS_NICE
- NET_RAW
image: ${TPOT_REPO}/p0f:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
volumes:
- ${TPOT_DATA_PATH}/p0f/log:/var/log/p0f
# Suricata service
suricata:
container_name: suricata
restart: always
depends_on:
tpotinit:
condition: service_healthy
environment:
- OINKCODE=${OINKCODE:-OPEN} # Default to OPEN if unset or NULL (value provided by T-Pot .env)
# Loading external Rules from URL
# - FROMURL="https://username:password@yoururl.com|https://username:password@otherurl.com"
network_mode: "host"
cap_add:
- NET_ADMIN
- SYS_NICE
- NET_RAW
image: ${TPOT_REPO}/suricata:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
volumes:
- ${TPOT_DATA_PATH}/suricata/log:/var/log/suricata
##################
#### Tools
##################
#### ELK
## Elasticsearch service
elasticsearch:
container_name: elasticsearch
restart: always
depends_on:
tpotinit:
condition: service_healthy
environment:
- bootstrap.memory_lock=true
- ES_JAVA_OPTS=-Xms2048m -Xmx2048m
- ES_TMPDIR=/tmp
networks:
- nginx_local
cap_add:
- IPC_LOCK
ulimits:
memlock:
soft: -1
hard: -1
nofile:
soft: 65536
hard: 65536
mem_limit: 4g
ports:
- "127.0.0.1:64298:9200"
image: ${TPOT_REPO}/elasticsearch:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
volumes:
- ${TPOT_DATA_PATH}:/data
## Kibana service
kibana:
container_name: kibana
restart: always
depends_on:
elasticsearch:
condition: service_healthy
networks:
- nginx_local
mem_limit: 1g
ports:
- "127.0.0.1:64296:5601"
image: ${TPOT_REPO}/kibana:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
## Logstash service
logstash:
container_name: logstash
restart: always
depends_on:
elasticsearch:
condition: service_healthy
networks:
- nginx_local
environment:
- LS_JAVA_OPTS=-Xms1024m -Xmx1024m
- TPOT_TYPE=${TPOT_TYPE:-HIVE}
- TPOT_HIVE_USER=${TPOT_HIVE_USER}
- TPOT_HIVE_IP=${TPOT_HIVE_IP}
ports:
- "127.0.0.1:64305:64305"
mem_limit: 2g
image: ${TPOT_REPO}/logstash:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
volumes:
- ${TPOT_DATA_PATH}:/data
## Map Redis Service
map_redis:
container_name: map_redis
restart: always
depends_on:
tpotinit:
condition: service_healthy
networks:
- nginx_local
stop_signal: SIGKILL
tty: true
image: ${TPOT_REPO}/redis:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
## Map Web Service
map_web:
container_name: map_web
restart: always
depends_on:
tpotinit:
condition: service_healthy
networks:
- nginx_local
environment:
- MAP_COMMAND=AttackMapServer.py
stop_signal: SIGKILL
tty: true
ports:
- "127.0.0.1:64299:64299"
image: ${TPOT_REPO}/map:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
## Map Data Service
map_data:
container_name: map_data
restart: always
depends_on:
elasticsearch:
condition: service_healthy
networks:
- nginx_local
environment:
- MAP_COMMAND=DataServer_v2.py
- TPOT_ATTACKMAP_TEXT=${TPOT_ATTACKMAP_TEXT}
- TZ=${TPOT_ATTACKMAP_TEXT_TIMEZONE}
stop_signal: SIGKILL
tty: true
image: ${TPOT_REPO}/map:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
#### /ELK
# Ewsposter service
ewsposter:
container_name: ewsposter
restart: always
depends_on:
tpotinit:
condition: service_healthy
networks:
- ewsposter_local
environment:
- EWS_HPFEEDS_ENABLE=false
- EWS_HPFEEDS_HOST=host
- EWS_HPFEEDS_PORT=port
- EWS_HPFEEDS_CHANNELS=channels
- EWS_HPFEEDS_IDENT=user
- EWS_HPFEEDS_SECRET=secret
- EWS_HPFEEDS_TLSCERT=false
- EWS_HPFEEDS_FORMAT=json
image: ${TPOT_REPO}/ewsposter:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
volumes:
- ${TPOT_DATA_PATH}:/data
- ${TPOT_DATA_PATH}/ews/conf/ews.ip:/opt/ewsposter/ews.ip
# Nginx service
nginx:
container_name: nginx
restart: always
environment:
- TPOT_OSTYPE=${TPOT_OSTYPE}
depends_on:
tpotinit:
condition: service_healthy
tmpfs:
- /var/tmp/nginx/client_body
- /var/tmp/nginx/proxy
- /var/tmp/nginx/fastcgi
- /var/tmp/nginx/uwsgi
- /var/tmp/nginx/scgi
- /run
- /var/lib/nginx/tmp:uid=100,gid=82
networks:
- nginx_local
ports:
- "64297:64297"
image: ${TPOT_REPO}/nginx:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
volumes:
- ${TPOT_DATA_PATH}/nginx/cert/:/etc/nginx/cert/:ro
- ${TPOT_DATA_PATH}/nginx/conf/nginxpasswd:/etc/nginx/nginxpasswd:ro
- ${TPOT_DATA_PATH}/nginx/log/:/var/log/nginx/
# Spiderfoot service
spiderfoot:
container_name: spiderfoot
restart: always
depends_on:
tpotinit:
condition: service_healthy
networks:
- nginx_local
ports:
- "127.0.0.1:64303:8080"
image: ${TPOT_REPO}/spiderfoot:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
volumes:
- ${TPOT_DATA_PATH}/spiderfoot:/home/spiderfoot/.spiderfoot
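Editor's note: the logstash service in the file above is switched between hive and sensor behaviour purely through the TPOT_TYPE, TPOT_HIVE_USER and TPOT_HIVE_IP variables it reads from the T-Pot .env file. A minimal sketch of what the matching .env entries might look like on a sensor shipping its events to a hive — the values are illustrative placeholders, not taken from this commit:
# hypothetical .env excerpt for a sensor
TPOT_TYPE=SENSOR
TPOT_HIVE_USER=<hive web user, e.g. one created with genuser.sh on the hive>
TPOT_HIVE_IP=192.0.2.10   # documentation address; replace with the hive's reachable IP
On a hive the TPOT_TYPE default of HIVE applies, and the two hive variables are presumably only needed on sensors.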


@ -1,31 +1,49 @@
# T-Pot (NextGen)
# Do not erase ports sections, these are used by /opt/tpot/bin/rules.sh to setup iptables ACCEPT rules for NFQ (honeytrap / glutton)
version: '2.3'
# T-Pot: MINI
version: '3.9'
networks:
adbhoney_local:
ciscoasa_local:
citrixhoneypot_local:
conpot_local_IEC104:
conpot_local_guardian_ast:
conpot_local_ipmi:
conpot_local_kamstrup_382:
ddospot_local:
dicompot_local:
dionaea_local:
elasticpot_local:
endlessh_local:
hellpot_local:
heralding_local:
ipphoney_local:
mailoney_local:
honeypots_local:
medpot_local:
redishoneypot_local:
ewsposter_local:
spiderfoot_local:
ewsposter_local:
services:
#########################################
#### DEV
#########################################
#### T-Pot Init - Never delete this!
#########################################
# T-Pot Init Service
tpotinit:
container_name: tpotinit
env_file:
- .env
restart: always
stop_grace_period: 60s
tmpfs:
- /tmp/etc:uid=2000,gid=2000
- /tmp/:uid=2000,gid=2000
network_mode: "host"
cap_add:
- NET_ADMIN
image: ${TPOT_REPO}/tpotinit:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
volumes:
- ${TPOT_DOCKER_COMPOSE}:/tmp/tpot/docker-compose.yml:ro
- ${TPOT_DATA_PATH}/blackhole:/etc/blackhole
- ${TPOT_DATA_PATH}:/data
- /var/run/docker.sock:/var/run/docker.sock:ro
##################
#### Honeypots
##################
@ -34,20 +52,27 @@ services:
adbhoney:
container_name: adbhoney
restart: always
depends_on:
tpotinit:
condition: service_healthy
networks:
- adbhoney_local
ports:
- "5555:5555"
image: "dtagdevsec/adbhoney:2204"
image: ${TPOT_REPO}/adbhoney:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
volumes:
- /data/adbhoney/log:/opt/adbhoney/log
- /data/adbhoney/downloads:/opt/adbhoney/dl
- ${TPOT_DATA_PATH}/adbhoney/log:/opt/adbhoney/log
- ${TPOT_DATA_PATH}/adbhoney/downloads:/opt/adbhoney/dl
# Ciscoasa service
ciscoasa:
container_name: ciscoasa
restart: always
depends_on:
tpotinit:
condition: service_healthy
tmpfs:
- /tmp/ciscoasa:uid=2000,gid=2000
networks:
@ -55,28 +80,19 @@ services:
ports:
- "5000:5000/udp"
- "8443:8443"
image: "dtagdevsec/ciscoasa:2204"
image: ${TPOT_REPO}/ciscoasa:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
volumes:
- /data/ciscoasa/log:/var/log/ciscoasa
# CitrixHoneypot service
citrixhoneypot:
container_name: citrixhoneypot
restart: always
networks:
- citrixhoneypot_local
ports:
- "443:443"
image: "dtagdevsec/citrixhoneypot:2204"
read_only: true
volumes:
- /data/citrixhoneypot/logs:/opt/citrixhoneypot/logs
- ${TPOT_DATA_PATH}/ciscoasa/log:/var/log/ciscoasa
# Conpot IEC104 service
conpot_IEC104:
container_name: conpot_iec104
restart: always
depends_on:
tpotinit:
condition: service_healthy
environment:
- CONPOT_CONFIG=/etc/conpot/conpot.cfg
- CONPOT_JSON_LOG=/var/log/conpot/conpot_IEC104.json
@ -90,15 +106,19 @@ services:
ports:
- "161:161/udp"
- "2404:2404"
image: "dtagdevsec/conpot:2204"
image: ${TPOT_REPO}/conpot:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
volumes:
- /data/conpot/log:/var/log/conpot
- ${TPOT_DATA_PATH}/conpot/log:/var/log/conpot
# Conpot guardian_ast service
conpot_guardian_ast:
container_name: conpot_guardian_ast
restart: always
depends_on:
tpotinit:
condition: service_healthy
environment:
- CONPOT_CONFIG=/etc/conpot/conpot.cfg
- CONPOT_JSON_LOG=/var/log/conpot/conpot_guardian_ast.json
@ -111,15 +131,19 @@ services:
- conpot_local_guardian_ast
ports:
- "10001:10001"
image: "dtagdevsec/conpot:2204"
image: ${TPOT_REPO}/conpot:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
volumes:
- /data/conpot/log:/var/log/conpot
- ${TPOT_DATA_PATH}/conpot/log:/var/log/conpot
# Conpot ipmi
conpot_ipmi:
container_name: conpot_ipmi
restart: always
depends_on:
tpotinit:
condition: service_healthy
environment:
- CONPOT_CONFIG=/etc/conpot/conpot.cfg
- CONPOT_JSON_LOG=/var/log/conpot/conpot_ipmi.json
@ -132,15 +156,19 @@ services:
- conpot_local_ipmi
ports:
- "623:623/udp"
image: "dtagdevsec/conpot:2204"
image: ${TPOT_REPO}/conpot:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
volumes:
- /data/conpot/log:/var/log/conpot
- ${TPOT_DATA_PATH}/conpot/log:/var/log/conpot
# Conpot kamstrup_382
conpot_kamstrup_382:
container_name: conpot_kamstrup_382
restart: always
depends_on:
tpotinit:
condition: service_healthy
environment:
- CONPOT_CONFIG=/etc/conpot/conpot.cfg
- CONPOT_JSON_LOG=/var/log/conpot/conpot_kamstrup_382.json
@ -154,226 +182,119 @@ services:
ports:
- "1025:1025"
- "50100:50100"
image: "dtagdevsec/conpot:2204"
image: ${TPOT_REPO}/conpot:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
volumes:
- /data/conpot/log:/var/log/conpot
# Ddospot service
ddospot:
container_name: ddospot
restart: always
networks:
- ddospot_local
ports:
- "19:19/udp"
- "53:53/udp"
- "123:123/udp"
# - "161:161/udp"
- "1900:1900/udp"
image: "dtagdevsec/ddospot:2204"
read_only: true
volumes:
- /data/ddospot/log:/opt/ddospot/ddospot/logs
- /data/ddospot/bl:/opt/ddospot/ddospot/bl
- /data/ddospot/db:/opt/ddospot/ddospot/db
- ${TPOT_DATA_PATH}/conpot/log:/var/log/conpot
# Dicompot service
# Get the Horos Client for testing: https://horosproject.org/
# Get Dicom images (CC BY 3.0): https://www.cancerimagingarchive.net/collections/
# Put images (which must be in Dicom DCM format or it will not work!) into /data/dicompot/images
dicompot:
container_name: dicompot
restart: always
depends_on:
tpotinit:
condition: service_healthy
networks:
- dicompot_local
ports:
- "11112:11112"
image: "dtagdevsec/dicompot:2204"
image: ${TPOT_REPO}/dicompot:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
volumes:
- /data/dicompot/log:/var/log/dicompot
# - /data/dicompot/images:/opt/dicompot/images
- ${TPOT_DATA_PATH}/dicompot/log:/var/log/dicompot
# - ${TPOT_DATA_PATH}/dicompot/images:/opt/dicompot/images
# Dionaea service
dionaea:
container_name: dionaea
# Honeypots service
honeypots:
container_name: honeypots
stdin_open: true
tty: true
restart: always
networks:
- dionaea_local
ports:
- "20:20"
- "21:21"
- "42:42"
- "69:69/udp"
- "81:81"
- "135:135"
# - "443:443"
- "445:445"
- "1433:1433"
- "1723:1723"
- "1883:1883"
- "3306:3306"
# - "5060:5060"
# - "5060:5060/udp"
# - "5061:5061"
- "27017:27017"
image: "dtagdevsec/dionaea:2204"
read_only: true
volumes:
- /data/dionaea/roots/ftp:/opt/dionaea/var/dionaea/roots/ftp
- /data/dionaea/roots/tftp:/opt/dionaea/var/dionaea/roots/tftp
- /data/dionaea/roots/www:/opt/dionaea/var/dionaea/roots/www
- /data/dionaea/roots/upnp:/opt/dionaea/var/dionaea/roots/upnp
- /data/dionaea:/opt/dionaea/var/dionaea
- /data/dionaea/binaries:/opt/dionaea/var/dionaea/binaries
- /data/dionaea/log:/opt/dionaea/var/log
- /data/dionaea/rtp:/opt/dionaea/var/dionaea/rtp
# ElasticPot service
elasticpot:
container_name: elasticpot
restart: always
networks:
- elasticpot_local
ports:
- "9200:9200"
image: "dtagdevsec/elasticpot:2204"
read_only: true
volumes:
- /data/elasticpot/log:/opt/elasticpot/log
# Endlessh service
endlessh:
container_name: endlessh
restart: always
networks:
- endlessh_local
ports:
- "22:2222"
image: "dtagdevsec/endlessh:2204"
read_only: true
volumes:
- /data/endlessh/log:/var/log/endlessh
# Glutton service
glutton:
container_name: glutton
restart: always
depends_on:
tpotinit:
condition: service_healthy
tmpfs:
- /var/lib/glutton:uid=2000,gid=2000
- /run:uid=2000,gid=2000
- /tmp:uid=2000,gid=2000
networks:
- honeypots_local
ports:
- "21:21"
- "22:22"
- "23:23"
- "25:25"
- "53:53/udp"
- "80:80"
- "110:110"
- "123:123"
- "143:143"
- "161:161"
- "389:389"
- "443:443"
- "445:445"
- "631:631"
- "1080:1080"
- "1433:1433"
- "1521:1521"
- "3306:3306"
- "3389:3389"
- "5060:5060/tcp"
- "5060:5060/udp"
- "5432:5432"
- "5900:5900"
- "6379:6379"
- "6667:6667"
- "8080:8080"
- "9100:9100"
- "9200:9200"
- "11211:11211"
image: ${TPOT_REPO}/honeypots:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
volumes:
- ${TPOT_DATA_PATH}/honeypots/log:/var/log/honeypots
# Honeytrap service
honeytrap:
container_name: honeytrap
restart: always
depends_on:
tpotinit:
condition: service_healthy
tmpfs:
- /tmp/honeytrap:uid=2000,gid=2000
network_mode: "host"
cap_add:
- NET_ADMIN
image: "dtagdevsec/glutton:2204"
image: ${TPOT_REPO}/honeytrap:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
volumes:
- /data/glutton/log:/var/log/glutton
# - /root/tpotce/docker/glutton/dist/rules.yaml:/opt/glutton/rules/rules.yaml
# Heralding service
heralding:
container_name: heralding
restart: always
tmpfs:
- /tmp/heralding:uid=2000,gid=2000
networks:
- heralding_local
ports:
# - "21:21"
# - "22:22"
# - "23:23"
# - "25:25"
# - "80:80"
- "110:110"
- "143:143"
# - "443:443"
- "465:465"
- "993:993"
- "995:995"
# - "3306:3306"
# - "3389:3389"
- "1080:1080"
- "5432:5432"
- "5900:5900"
image: "dtagdevsec/heralding:2204"
read_only: true
volumes:
- /data/heralding/log:/var/log/heralding
# Ipphoney service
ipphoney:
container_name: ipphoney
restart: always
networks:
- ipphoney_local
ports:
- "631:631"
image: "dtagdevsec/ipphoney:2204"
read_only: true
volumes:
- /data/ipphoney/log:/opt/ipphoney/log
# Mailoney service
mailoney:
container_name: mailoney
restart: always
environment:
- HPFEEDS_SERVER=
- HPFEEDS_IDENT=user
- HPFEEDS_SECRET=pass
- HPFEEDS_PORT=20000
- HPFEEDS_CHANNELPREFIX=prefix
networks:
- mailoney_local
ports:
- "25:25"
image: "dtagdevsec/mailoney:2204"
read_only: true
volumes:
- /data/mailoney/log:/opt/mailoney/logs
- ${TPOT_DATA_PATH}/honeytrap/attacks:/opt/honeytrap/var/attacks
- ${TPOT_DATA_PATH}/honeytrap/downloads:/opt/honeytrap/var/downloads
- ${TPOT_DATA_PATH}/honeytrap/log:/opt/honeytrap/var/log
# Medpot service
medpot:
container_name: medpot
restart: always
depends_on:
tpotinit:
condition: service_healthy
networks:
- medpot_local
ports:
- "2575:2575"
image: "dtagdevsec/medpot:2204"
image: ${TPOT_REPO}/medpot:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
volumes:
- /data/medpot/log/:/var/log/medpot
- ${TPOT_DATA_PATH}/medpot/log/:/var/log/medpot
# Redishoneypot service
redishoneypot:
container_name: redishoneypot
restart: always
networks:
- redishoneypot_local
ports:
- "6379:6379"
image: "dtagdevsec/redishoneypot:2204"
read_only: true
volumes:
- /data/redishoneypot/log:/var/log/redishoneypot
# Hellpot service
hellpot:
container_name: hellpot
restart: always
networks:
- hellpot_local
ports:
- "80:8080"
image: "dtagdevsec/hellpot:2204"
read_only: true
volumes:
- /data/hellpot/log:/var/log/hellpot
##################
#### NSM
@ -383,46 +304,57 @@ services:
fatt:
container_name: fatt
restart: always
depends_on:
tpotinit:
condition: service_healthy
network_mode: "host"
cap_add:
- NET_ADMIN
- SYS_NICE
- NET_RAW
image: "dtagdevsec/fatt:2204"
image: ${TPOT_REPO}/fatt:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
volumes:
- /data/fatt/log:/opt/fatt/log
- ${TPOT_DATA_PATH}/fatt/log:/opt/fatt/log
# P0f service
p0f:
container_name: p0f
restart: always
depends_on:
tpotinit:
condition: service_healthy
network_mode: "host"
image: "dtagdevsec/p0f:2204"
image: ${TPOT_REPO}/p0f:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
volumes:
- /data/p0f/log:/var/log/p0f
- ${TPOT_DATA_PATH}/p0f/log:/var/log/p0f
# Suricata service
suricata:
container_name: suricata
restart: always
depends_on:
tpotinit:
condition: service_healthy
environment:
# For ET Pro ruleset replace "OPEN" with your OINKCODE
- OINKCODE=OPEN
# Loading externel Rules from URL
- OINKCODE=${OINKCODE:-OPEN} # Default to OPEN if unset or NULL (value provided by T-Pot .env)
# Loading external Rules from URL
# - FROMURL="https://username:password@yoururl.com|https://username:password@otherurl.com"
network_mode: "host"
cap_add:
- NET_ADMIN
- SYS_NICE
- NET_RAW
image: "dtagdevsec/suricata:2204"
image: ${TPOT_REPO}/suricata:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
volumes:
- /data/suricata/log:/var/log/suricata
- ${TPOT_DATA_PATH}/suricata/log:/var/log/suricata
##################
#### Tools
##################
#### ELK
@ -430,6 +362,9 @@ services:
elasticsearch:
container_name: elasticsearch
restart: always
depends_on:
tpotinit:
condition: service_healthy
environment:
- bootstrap.memory_lock=true
- ES_JAVA_OPTS=-Xms2048m -Xmx2048m
@ -446,9 +381,10 @@ services:
mem_limit: 4g
ports:
- "127.0.0.1:64298:9200"
image: "dtagdevsec/elasticsearch:2204"
image: ${TPOT_REPO}/elasticsearch:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
volumes:
- /data:/data
- ${TPOT_DATA_PATH}:/data
## Kibana service
kibana:
@ -460,46 +396,57 @@ services:
mem_limit: 1g
ports:
- "127.0.0.1:64296:5601"
image: "dtagdevsec/kibana:2204"
image: ${TPOT_REPO}/kibana:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
## Logstash service
logstash:
container_name: logstash
restart: always
environment:
- LS_JAVA_OPTS=-Xms1024m -Xmx1024m
depends_on:
elasticsearch:
condition: service_healthy
env_file:
- /opt/tpot/etc/compose/elk_environment
environment:
- LS_JAVA_OPTS=-Xms1024m -Xmx1024m
- TPOT_TYPE=${TPOT_TYPE:-HIVE}
- TPOT_HIVE_USER=${TPOT_HIVE_USER}
- TPOT_HIVE_IP=${TPOT_HIVE_IP}
ports:
- "127.0.0.1:64305:64305"
mem_limit: 2g
image: "dtagdevsec/logstash:2204"
image: ${TPOT_REPO}/logstash:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
volumes:
- /data:/data
- ${TPOT_DATA_PATH}:/data
## Map Redis Service
map_redis:
container_name: map_redis
restart: always
depends_on:
tpotinit:
condition: service_healthy
stop_signal: SIGKILL
tty: true
image: "dtagdevsec/redis:2204"
image: ${TPOT_REPO}/redis:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
## Map Web Service
map_web:
container_name: map_web
restart: always
depends_on:
tpotinit:
condition: service_healthy
environment:
- MAP_COMMAND=AttackMapServer.py
env_file:
- /opt/tpot/etc/compose/elk_environment
stop_signal: SIGKILL
tty: true
ports:
- "127.0.0.1:64299:64299"
image: "dtagdevsec/map:2204"
image: ${TPOT_REPO}/map:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
## Map Data Service
map_data:
@ -510,17 +457,21 @@ services:
condition: service_healthy
environment:
- MAP_COMMAND=DataServer_v2.py
env_file:
- /opt/tpot/etc/compose/elk_environment
- TPOT_ATTACKMAP_TEXT=${TPOT_ATTACKMAP_TEXT}
- TZ=${TPOT_ATTACKMAP_TEXT_TIMEZONE}
stop_signal: SIGKILL
tty: true
image: "dtagdevsec/map:2204"
image: ${TPOT_REPO}/map:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
#### /ELK
# Ewsposter service
ewsposter:
container_name: ewsposter
restart: always
depends_on:
tpotinit:
condition: service_healthy
networks:
- ewsposter_local
environment:
@ -532,17 +483,21 @@ services:
- EWS_HPFEEDS_SECRET=secret
- EWS_HPFEEDS_TLSCERT=false
- EWS_HPFEEDS_FORMAT=json
env_file:
- /opt/tpot/etc/compose/elk_environment
image: "dtagdevsec/ewsposter:2204"
image: ${TPOT_REPO}/ewsposter:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
volumes:
- /data:/data
- /data/ews/conf/ews.ip:/opt/ewsposter/ews.ip
- ${TPOT_DATA_PATH}:/data
- ${TPOT_DATA_PATH}/ews/conf/ews.ip:/opt/ewsposter/ews.ip
# Nginx service
nginx:
container_name: nginx
restart: always
environment:
- TPOT_OSTYPE=${TPOT_OSTYPE}
depends_on:
tpotinit:
condition: service_healthy
tmpfs:
- /var/tmp/nginx/client_body
- /var/tmp/nginx/proxy
@ -554,22 +509,27 @@ services:
network_mode: "host"
ports:
- "64297:64297"
- "127.0.0.1:64304:64304"
image: "dtagdevsec/nginx:2204"
image: ${TPOT_REPO}/nginx:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
volumes:
- /data/nginx/cert/:/etc/nginx/cert/:ro
- /data/nginx/conf/nginxpasswd:/etc/nginx/nginxpasswd:ro
- /data/nginx/log/:/var/log/nginx/
- ${TPOT_DATA_PATH}/nginx/cert/:/etc/nginx/cert/:ro
- ${TPOT_DATA_PATH}/nginx/conf/nginxpasswd:/etc/nginx/nginxpasswd:ro
- ${TPOT_DATA_PATH}/nginx/conf/lswebpasswd:/etc/nginx/lswebpasswd:ro
- ${TPOT_DATA_PATH}/nginx/log/:/var/log/nginx/
# Spiderfoot service
spiderfoot:
container_name: spiderfoot
restart: always
depends_on:
tpotinit:
condition: service_healthy
networks:
- spiderfoot_local
ports:
- "127.0.0.1:64303:8080"
image: "dtagdevsec/spiderfoot:2204"
image: ${TPOT_REPO}/spiderfoot:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
volumes:
- /data/spiderfoot:/home/spiderfoot/.spiderfoot
- ${TPOT_DATA_PATH}/spiderfoot:/home/spiderfoot/.spiderfoot
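Editor's note: the suricata hunk earlier in this diff swaps the hard-coded OINKCODE=OPEN for ${OINKCODE:-OPEN}, so the code now comes from the T-Pot .env and falls back to the free ET Open ruleset when unset. A hedged sketch of the two ways the variable might be set in .env (the ET Pro value is a placeholder, not a real code):
# default: free ET Open ruleset
OINKCODE=OPEN
# with an ET Pro subscription, use your own code instead
# OINKCODE=<your-oinkcode>
The commented FROMURL line remains a compose-level setting; as its example shows, several rule URLs can be chained with a pipe.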

compose/mobile.yml Normal file

@ -0,0 +1,628 @@
# T-Pot: MOBILE
# Note: This docker compose file has been adjusted to limit the number of tools, services and honeypots to run
# T-Pot on a Raspberry Pi 4 (8GB of RAM).
# The standard docker compose file should work mostly fine (depending on traffic) if you do not enable a
# desktop environment such as LXDE and meet the minimum requirements of 8GB RAM.
version: '3.9'
networks:
ciscoasa_local:
citrixhoneypot_local:
conpot_local_IEC104:
conpot_local_ipmi:
conpot_local_kamstrup_382:
cowrie_local:
dicompot_local:
dionaea_local:
elasticpot_local:
heralding_local:
ipphoney_local:
log4pot_local:
mailoney_local:
medpot_local:
redishoneypot_local:
sentrypeer_local:
tanner_local:
wordpot_local:
ewsposter_local:
services:
#########################################
#### DEV
#########################################
#### T-Pot Init - Never delete this!
#########################################
# T-Pot Init Service
tpotinit:
container_name: tpotinit
env_file:
- .env
restart: always
stop_grace_period: 60s
tmpfs:
- /tmp/etc:uid=2000,gid=2000
- /tmp/:uid=2000,gid=2000
network_mode: "host"
cap_add:
- NET_ADMIN
image: ${TPOT_REPO}/tpotinit:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
volumes:
- ${TPOT_DOCKER_COMPOSE}:/tmp/tpot/docker-compose.yml:ro
- ${TPOT_DATA_PATH}/blackhole:/etc/blackhole
- ${TPOT_DATA_PATH}:/data
- /var/run/docker.sock:/var/run/docker.sock:ro
##################
#### Honeypots
##################
# Ciscoasa service
ciscoasa:
container_name: ciscoasa
restart: always
depends_on:
logstash:
condition: service_healthy
tmpfs:
- /tmp/ciscoasa:uid=2000,gid=2000
networks:
- ciscoasa_local
ports:
- "5000:5000/udp"
- "8443:8443"
image: ${TPOT_REPO}/ciscoasa:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
volumes:
- ${TPOT_DATA_PATH}/ciscoasa/log:/var/log/ciscoasa
# CitrixHoneypot service
citrixhoneypot:
container_name: citrixhoneypot
restart: always
depends_on:
logstash:
condition: service_healthy
networks:
- citrixhoneypot_local
ports:
- "443:443"
image: ${TPOT_REPO}/citrixhoneypot:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
volumes:
- ${TPOT_DATA_PATH}/citrixhoneypot/log:/opt/citrixhoneypot/logs
# Conpot IEC104 service
conpot_IEC104:
container_name: conpot_iec104
restart: always
depends_on:
logstash:
condition: service_healthy
environment:
- CONPOT_CONFIG=/etc/conpot/conpot.cfg
- CONPOT_JSON_LOG=/var/log/conpot/conpot_IEC104.json
- CONPOT_LOG=/var/log/conpot/conpot_IEC104.log
- CONPOT_TEMPLATE=IEC104
- CONPOT_TMP=/tmp/conpot
tmpfs:
- /tmp/conpot:uid=2000,gid=2000
networks:
- conpot_local_IEC104
ports:
- "161:161/udp"
- "2404:2404"
image: ${TPOT_REPO}/conpot:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
volumes:
- ${TPOT_DATA_PATH}/conpot/log:/var/log/conpot
# Conpot ipmi
conpot_ipmi:
container_name: conpot_ipmi
restart: always
depends_on:
logstash:
condition: service_healthy
environment:
- CONPOT_CONFIG=/etc/conpot/conpot.cfg
- CONPOT_JSON_LOG=/var/log/conpot/conpot_ipmi.json
- CONPOT_LOG=/var/log/conpot/conpot_ipmi.log
- CONPOT_TEMPLATE=ipmi
- CONPOT_TMP=/tmp/conpot
tmpfs:
- /tmp/conpot:uid=2000,gid=2000
networks:
- conpot_local_ipmi
ports:
- "623:623/udp"
image: ${TPOT_REPO}/conpot:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
volumes:
- ${TPOT_DATA_PATH}/conpot/log:/var/log/conpot
# Conpot kamstrup_382
conpot_kamstrup_382:
container_name: conpot_kamstrup_382
restart: always
depends_on:
logstash:
condition: service_healthy
environment:
- CONPOT_CONFIG=/etc/conpot/conpot.cfg
- CONPOT_JSON_LOG=/var/log/conpot/conpot_kamstrup_382.json
- CONPOT_LOG=/var/log/conpot/conpot_kamstrup_382.log
- CONPOT_TEMPLATE=kamstrup_382
- CONPOT_TMP=/tmp/conpot
tmpfs:
- /tmp/conpot:uid=2000,gid=2000
networks:
- conpot_local_kamstrup_382
ports:
- "1025:1025"
- "50100:50100"
image: ${TPOT_REPO}/conpot:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
volumes:
- ${TPOT_DATA_PATH}/conpot/log:/var/log/conpot
# Cowrie service
cowrie:
container_name: cowrie
restart: always
depends_on:
logstash:
condition: service_healthy
tmpfs:
- /tmp/cowrie:uid=2000,gid=2000
- /tmp/cowrie/data:uid=2000,gid=2000
networks:
- cowrie_local
ports:
- "22:22"
- "23:23"
image: ${TPOT_REPO}/cowrie:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
volumes:
- ${TPOT_DATA_PATH}/cowrie/downloads:/home/cowrie/cowrie/dl
- ${TPOT_DATA_PATH}/cowrie/keys:/home/cowrie/cowrie/etc
- ${TPOT_DATA_PATH}/cowrie/log:/home/cowrie/cowrie/log
- ${TPOT_DATA_PATH}/cowrie/log/tty:/home/cowrie/cowrie/log/tty
# Dicompot service
# Get the Horos Client for testing: https://horosproject.org/
# Get Dicom images (CC BY 3.0): https://www.cancerimagingarchive.net/collections/
# Put images (which must be in Dicom DCM format or it will not work!) into /data/dicompot/images
dicompot:
container_name: dicompot
restart: always
depends_on:
logstash:
condition: service_healthy
networks:
- dicompot_local
ports:
- "11112:11112"
image: ${TPOT_REPO}/dicompot:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
volumes:
- ${TPOT_DATA_PATH}/dicompot/log:/var/log/dicompot
# - ${TPOT_DATA_PATH}/dicompot/images:/opt/dicompot/images
# Dionaea service
dionaea:
container_name: dionaea
stdin_open: true
tty: true
restart: always
depends_on:
logstash:
condition: service_healthy
networks:
- dionaea_local
ports:
- "20:20"
- "21:21"
- "42:42"
- "69:69/udp"
- "81:81"
- "135:135"
# - "443:443"
- "445:445"
- "1433:1433"
- "1723:1723"
- "1883:1883"
- "3306:3306"
# - "5060:5060"
# - "5060:5060/udp"
# - "5061:5061"
- "27017:27017"
image: ${TPOT_REPO}/dionaea:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
volumes:
- ${TPOT_DATA_PATH}/dionaea/roots/ftp:/opt/dionaea/var/dionaea/roots/ftp
- ${TPOT_DATA_PATH}/dionaea/roots/tftp:/opt/dionaea/var/dionaea/roots/tftp
- ${TPOT_DATA_PATH}/dionaea/roots/www:/opt/dionaea/var/dionaea/roots/www
- ${TPOT_DATA_PATH}/dionaea/roots/upnp:/opt/dionaea/var/dionaea/roots/upnp
- ${TPOT_DATA_PATH}/dionaea:/opt/dionaea/var/dionaea
- ${TPOT_DATA_PATH}/dionaea/binaries:/opt/dionaea/var/dionaea/binaries
- ${TPOT_DATA_PATH}/dionaea/log:/opt/dionaea/var/log
- ${TPOT_DATA_PATH}/dionaea/rtp:/opt/dionaea/var/dionaea/rtp
# ElasticPot service
elasticpot:
container_name: elasticpot
restart: always
depends_on:
logstash:
condition: service_healthy
networks:
- elasticpot_local
ports:
- "9200:9200"
image: ${TPOT_REPO}/elasticpot:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
volumes:
- ${TPOT_DATA_PATH}/elasticpot/log:/opt/elasticpot/log
# Heralding service
heralding:
container_name: heralding
restart: always
depends_on:
logstash:
condition: service_healthy
tmpfs:
- /tmp/heralding:uid=2000,gid=2000
networks:
- heralding_local
ports:
# - "21:21"
# - "22:22"
# - "23:23"
# - "25:25"
# - "80:80"
- "110:110"
- "143:143"
# - "443:443"
- "465:465"
- "993:993"
- "995:995"
# - "3306:3306"
# - "3389:3389"
- "1080:1080"
- "5432:5432"
- "5900:5900"
image: ${TPOT_REPO}/heralding:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
volumes:
- ${TPOT_DATA_PATH}/heralding/log:/var/log/heralding
# Honeytrap service
honeytrap:
container_name: honeytrap
restart: always
depends_on:
logstash:
condition: service_healthy
tmpfs:
- /tmp/honeytrap:uid=2000,gid=2000
network_mode: "host"
cap_add:
- NET_ADMIN
image: ${TPOT_REPO}/honeytrap:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
volumes:
- ${TPOT_DATA_PATH}/honeytrap/attacks:/opt/honeytrap/var/attacks
- ${TPOT_DATA_PATH}/honeytrap/downloads:/opt/honeytrap/var/downloads
- ${TPOT_DATA_PATH}/honeytrap/log:/opt/honeytrap/var/log
# Ipphoney service
ipphoney:
container_name: ipphoney
restart: always
depends_on:
logstash:
condition: service_healthy
networks:
- ipphoney_local
ports:
- "631:631"
image: ${TPOT_REPO}/ipphoney:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
volumes:
- ${TPOT_DATA_PATH}/ipphoney/log:/opt/ipphoney/log
# Mailoney service
mailoney:
container_name: mailoney
restart: always
depends_on:
logstash:
condition: service_healthy
environment:
- HPFEEDS_SERVER=
- HPFEEDS_IDENT=user
- HPFEEDS_SECRET=pass
- HPFEEDS_PORT=20000
- HPFEEDS_CHANNELPREFIX=prefix
networks:
- mailoney_local
ports:
- "25:25"
- "587:25"
image: ${TPOT_REPO}/mailoney:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
volumes:
- ${TPOT_DATA_PATH}/mailoney/log:/opt/mailoney/logs
# Log4pot service
log4pot:
container_name: log4pot
restart: always
depends_on:
logstash:
condition: service_healthy
tmpfs:
- /tmp:uid=2000,gid=2000
networks:
- log4pot_local
ports:
# - "80:8080"
# - "443:8080"
- "8080:8080"
# - "9200:8080"
- "25565:8080"
image: ${TPOT_REPO}/log4pot:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
volumes:
- ${TPOT_DATA_PATH}/log4pot/log:/var/log/log4pot/log
- ${TPOT_DATA_PATH}/log4pot/payloads:/var/log/log4pot/payloads
# Medpot service
medpot:
container_name: medpot
restart: always
depends_on:
logstash:
condition: service_healthy
networks:
- medpot_local
ports:
- "2575:2575"
image: ${TPOT_REPO}/medpot:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
volumes:
- ${TPOT_DATA_PATH}/medpot/log/:/var/log/medpot
# Redishoneypot service
redishoneypot:
container_name: redishoneypot
restart: always
depends_on:
logstash:
condition: service_healthy
networks:
- redishoneypot_local
ports:
- "6379:6379"
image: ${TPOT_REPO}/redishoneypot:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
volumes:
- ${TPOT_DATA_PATH}/redishoneypot/log:/var/log/redishoneypot
# SentryPeer service
sentrypeer:
container_name: sentrypeer
restart: always
depends_on:
logstash:
condition: service_healthy
# environment:
# - SENTRYPEER_PEER_TO_PEER=1
networks:
- sentrypeer_local
ports:
# - "4222:4222/udp"
- "5060:5060/tcp"
- "5060:5060/udp"
# - "127.0.0.1:8082:8082"
image: ${TPOT_REPO}/sentrypeer:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
volumes:
- ${TPOT_DATA_PATH}/sentrypeer/log:/var/log/sentrypeer
#### Snare / Tanner
## Tanner Redis Service
tanner_redis:
container_name: tanner_redis
restart: always
depends_on:
logstash:
condition: service_healthy
tty: true
networks:
- tanner_local
image: ${TPOT_REPO}/redis:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
## PHP Sandbox service
tanner_phpox:
container_name: tanner_phpox
restart: always
depends_on:
logstash:
condition: service_healthy
tty: true
networks:
- tanner_local
image: ${TPOT_REPO}/phpox:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
## Tanner API Service
tanner_api:
container_name: tanner_api
restart: always
depends_on:
- tanner_redis
tmpfs:
- /tmp/tanner:uid=2000,gid=2000
tty: true
networks:
- tanner_local
image: ${TPOT_REPO}/tanner:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
volumes:
- ${TPOT_DATA_PATH}/tanner/log:/var/log/tanner
command: tannerapi
## Tanner Service
tanner:
container_name: tanner
restart: always
depends_on:
- tanner_api
- tanner_phpox
tmpfs:
- /tmp/tanner:uid=2000,gid=2000
tty: true
networks:
- tanner_local
image: ${TPOT_REPO}/tanner:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
command: tanner
read_only: true
volumes:
- ${TPOT_DATA_PATH}/tanner/log:/var/log/tanner
- ${TPOT_DATA_PATH}/tanner/files:/opt/tanner/files
## Snare Service
snare:
container_name: snare
restart: always
depends_on:
- tanner
tty: true
networks:
- tanner_local
ports:
- "80:80"
image: ${TPOT_REPO}/snare:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
# Wordpot service
wordpot:
container_name: wordpot
restart: always
depends_on:
tpotinit:
condition: service_healthy
networks:
- wordpot_local
ports:
- "8080:80"
image: ${TPOT_REPO}/wordpot:${TPOT_VERSION}
read_only: true
volumes:
- ${TPOT_DATA_PATH}/wordpot/log:/opt/wordpot/logs/
##################
#### Tools
##################
#### ELK
## Elasticsearch service
elasticsearch:
container_name: elasticsearch
restart: always
depends_on:
tpotinit:
condition: service_healthy
environment:
- bootstrap.memory_lock=true
- ES_JAVA_OPTS=-Xms2048m -Xmx2048m
- ES_TMPDIR=/tmp
cap_add:
- IPC_LOCK
ulimits:
memlock:
soft: -1
hard: -1
nofile:
soft: 65536
hard: 65536
mem_limit: 4g
ports:
- "127.0.0.1:64298:9200"
image: ${TPOT_REPO}/elasticsearch:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
volumes:
- ${TPOT_DATA_PATH}:/data
## Logstash service
logstash:
container_name: logstash
restart: always
depends_on:
elasticsearch:
condition: service_healthy
environment:
- LS_JAVA_OPTS=-Xms1024m -Xmx1024m
- TPOT_TYPE=${TPOT_TYPE:-HIVE}
- TPOT_HIVE_USER=${TPOT_HIVE_USER}
- TPOT_HIVE_IP=${TPOT_HIVE_IP}
ports:
- "127.0.0.1:64305:64305"
mem_limit: 2g
image: ${TPOT_REPO}/logstash:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
volumes:
- ${TPOT_DATA_PATH}:/data
#### /ELK
# Ewsposter service
ewsposter:
container_name: ewsposter
restart: always
depends_on:
logstash:
condition: service_healthy
networks:
- ewsposter_local
environment:
- EWS_HPFEEDS_ENABLE=false
- EWS_HPFEEDS_HOST=host
- EWS_HPFEEDS_PORT=port
- EWS_HPFEEDS_CHANNELS=channels
- EWS_HPFEEDS_IDENT=user
- EWS_HPFEEDS_SECRET=secret
- EWS_HPFEEDS_TLSCERT=false
- EWS_HPFEEDS_FORMAT=json
image: ${TPOT_REPO}/ewsposter:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
volumes:
- ${TPOT_DATA_PATH}:/data
- ${TPOT_DATA_PATH}/ews/conf/ews.ip:/opt/ewsposter/ews.ip
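Editor's note: the dicompot comments in the file above point to the Horos client for testing and to placing DICOM DCM files into the data folder, which also requires enabling the commented-out images volume. A sketch of the dicompot volumes block with that mount active — nothing else about the service changes:
volumes:
  - ${TPOT_DATA_PATH}/dicompot/log:/var/log/dicompot
  - ${TPOT_DATA_PATH}/dicompot/images:/opt/dicompot/images   # host folder for the DCM files mentioned in the comment
As the comment warns, files that are not in DICOM DCM format will not work.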


@ -0,0 +1,628 @@
# T-Pot: MOBILE
# Note: This docker compose file has been adjusted to limit the number of tools, services and honeypots to run
# T-Pot on a Raspberry Pi 4 (8GB of RAM).
# The standard docker compose file should work mostly fine (depending on traffic) if you do not enable a
# desktop environment such as LXDE and meet the minimum requirements of 8GB RAM.
version: '3.9'
networks:
ciscoasa_local:
citrixhoneypot_local:
conpot_local_IEC104:
conpot_local_ipmi:
conpot_local_kamstrup_382:
cowrie_local:
dicompot_local:
dionaea_local:
elasticpot_local:
heralding_local:
ipphoney_local:
log4pot_local:
mailoney_local:
medpot_local:
redishoneypot_local:
sentrypeer_local:
tanner_local:
wordpot_local:
ewsposter_local:
services:
#########################################
#### DEV
#########################################
#### T-Pot Init - Never delete this!
#########################################
# T-Pot Init Service
tpotinit:
container_name: tpotinit
env_file:
- .env
restart: always
stop_grace_period: 60s
tmpfs:
- /tmp/etc:uid=2000,gid=2000
- /tmp/:uid=2000,gid=2000
network_mode: "host"
cap_add:
- NET_ADMIN
image: ${TPOT_REPO}/tpotinit:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
volumes:
- ${TPOT_DOCKER_COMPOSE}:/tmp/tpot/docker-compose.yml:ro
- ${TPOT_DATA_PATH}/blackhole:/etc/blackhole
- ${TPOT_DATA_PATH}:/data
- /var/run/docker.sock:/var/run/docker.sock:ro
##################
#### Honeypots
##################
# Ciscoasa service
ciscoasa:
container_name: ciscoasa
restart: always
depends_on:
logstash:
condition: service_healthy
tmpfs:
- /tmp/ciscoasa:uid=2000,gid=2000
networks:
- ciscoasa_local
ports:
- "5000:5000/udp"
- "8443:8443"
image: ${TPOT_REPO}/ciscoasa:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
volumes:
- ${TPOT_DATA_PATH}/ciscoasa/log:/var/log/ciscoasa
# CitrixHoneypot service
citrixhoneypot:
container_name: citrixhoneypot
restart: always
depends_on:
logstash:
condition: service_healthy
networks:
- citrixhoneypot_local
ports:
- "443:443"
image: ${TPOT_REPO}/citrixhoneypot:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
volumes:
- ${TPOT_DATA_PATH}/citrixhoneypot/log:/opt/citrixhoneypot/logs
# Conpot IEC104 service
conpot_IEC104:
container_name: conpot_iec104
restart: always
depends_on:
logstash:
condition: service_healthy
environment:
- CONPOT_CONFIG=/etc/conpot/conpot.cfg
- CONPOT_JSON_LOG=/var/log/conpot/conpot_IEC104.json
- CONPOT_LOG=/var/log/conpot/conpot_IEC104.log
- CONPOT_TEMPLATE=IEC104
- CONPOT_TMP=/tmp/conpot
tmpfs:
- /tmp/conpot:uid=2000,gid=2000
networks:
- conpot_local_IEC104
ports:
- "161:161/udp"
- "2404:2404"
image: ${TPOT_REPO}/conpot:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
volumes:
- ${TPOT_DATA_PATH}/conpot/log:/var/log/conpot
# Conpot ipmi
conpot_ipmi:
container_name: conpot_ipmi
restart: always
depends_on:
logstash:
condition: service_healthy
environment:
- CONPOT_CONFIG=/etc/conpot/conpot.cfg
- CONPOT_JSON_LOG=/var/log/conpot/conpot_ipmi.json
- CONPOT_LOG=/var/log/conpot/conpot_ipmi.log
- CONPOT_TEMPLATE=ipmi
- CONPOT_TMP=/tmp/conpot
tmpfs:
- /tmp/conpot:uid=2000,gid=2000
networks:
- conpot_local_ipmi
ports:
- "623:623/udp"
image: ${TPOT_REPO}/conpot:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
volumes:
- ${TPOT_DATA_PATH}/conpot/log:/var/log/conpot
# Conpot kamstrup_382
conpot_kamstrup_382:
container_name: conpot_kamstrup_382
restart: always
depends_on:
logstash:
condition: service_healthy
environment:
- CONPOT_CONFIG=/etc/conpot/conpot.cfg
- CONPOT_JSON_LOG=/var/log/conpot/conpot_kamstrup_382.json
- CONPOT_LOG=/var/log/conpot/conpot_kamstrup_382.log
- CONPOT_TEMPLATE=kamstrup_382
- CONPOT_TMP=/tmp/conpot
tmpfs:
- /tmp/conpot:uid=2000,gid=2000
networks:
- conpot_local_kamstrup_382
ports:
- "1025:1025"
- "50100:50100"
image: ${TPOT_REPO}/conpot:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
volumes:
- ${TPOT_DATA_PATH}/conpot/log:/var/log/conpot
# Cowrie service
cowrie:
container_name: cowrie
restart: always
depends_on:
logstash:
condition: service_healthy
tmpfs:
- /tmp/cowrie:uid=2000,gid=2000
- /tmp/cowrie/data:uid=2000,gid=2000
networks:
- cowrie_local
ports:
- "22:22"
- "23:23"
image: ${TPOT_REPO}/cowrie:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
volumes:
- ${TPOT_DATA_PATH}/cowrie/downloads:/home/cowrie/cowrie/dl
- ${TPOT_DATA_PATH}/cowrie/keys:/home/cowrie/cowrie/etc
- ${TPOT_DATA_PATH}/cowrie/log:/home/cowrie/cowrie/log
- ${TPOT_DATA_PATH}/cowrie/log/tty:/home/cowrie/cowrie/log/tty
# Dicompot service
# Get the Horos Client for testing: https://horosproject.org/
# Get Dicom images (CC BY 3.0): https://www.cancerimagingarchive.net/collections/
# Put images (which must be in Dicom DCM format or it will not work!) into /data/dicompot/images
dicompot:
container_name: dicompot
restart: always
depends_on:
logstash:
condition: service_healthy
networks:
- dicompot_local
ports:
- "11112:11112"
image: ${TPOT_REPO}/dicompot:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
volumes:
- ${TPOT_DATA_PATH}/dicompot/log:/var/log/dicompot
# - ${TPOT_DATA_PATH}/dicompot/images:/opt/dicompot/images
# Dionaea service
dionaea:
container_name: dionaea
stdin_open: true
tty: true
restart: always
depends_on:
logstash:
condition: service_healthy
networks:
- dionaea_local
ports:
- "20:20"
- "21:21"
- "42:42"
- "69:69/udp"
- "81:81"
- "135:135"
# - "443:443"
- "445:445"
- "1433:1433"
- "1723:1723"
- "1883:1883"
- "3306:3306"
# - "5060:5060"
# - "5060:5060/udp"
# - "5061:5061"
- "27017:27017"
image: ${TPOT_REPO}/dionaea:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
volumes:
- ${TPOT_DATA_PATH}/dionaea/roots/ftp:/opt/dionaea/var/dionaea/roots/ftp
- ${TPOT_DATA_PATH}/dionaea/roots/tftp:/opt/dionaea/var/dionaea/roots/tftp
- ${TPOT_DATA_PATH}/dionaea/roots/www:/opt/dionaea/var/dionaea/roots/www
- ${TPOT_DATA_PATH}/dionaea/roots/upnp:/opt/dionaea/var/dionaea/roots/upnp
- ${TPOT_DATA_PATH}/dionaea:/opt/dionaea/var/dionaea
- ${TPOT_DATA_PATH}/dionaea/binaries:/opt/dionaea/var/dionaea/binaries
- ${TPOT_DATA_PATH}/dionaea/log:/opt/dionaea/var/log
- ${TPOT_DATA_PATH}/dionaea/rtp:/opt/dionaea/var/dionaea/rtp
# ElasticPot service
elasticpot:
container_name: elasticpot
restart: always
depends_on:
logstash:
condition: service_healthy
networks:
- elasticpot_local
ports:
- "9200:9200"
image: ${TPOT_REPO}/elasticpot:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
volumes:
- ${TPOT_DATA_PATH}/elasticpot/log:/opt/elasticpot/log
# Heralding service
heralding:
container_name: heralding
restart: always
depends_on:
logstash:
condition: service_healthy
tmpfs:
- /tmp/heralding:uid=2000,gid=2000
networks:
- heralding_local
ports:
# - "21:21"
# - "22:22"
# - "23:23"
# - "25:25"
# - "80:80"
- "110:110"
- "143:143"
# - "443:443"
- "465:465"
- "993:993"
- "995:995"
# - "3306:3306"
# - "3389:3389"
- "1080:1080"
- "5432:5432"
- "5900:5900"
image: ${TPOT_REPO}/heralding:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
volumes:
- ${TPOT_DATA_PATH}/heralding/log:/var/log/heralding
# Honeytrap service
honeytrap:
container_name: honeytrap
restart: always
depends_on:
logstash:
condition: service_healthy
tmpfs:
- /tmp/honeytrap:uid=2000,gid=2000
network_mode: "host"
cap_add:
- NET_ADMIN
image: ${TPOT_REPO}/honeytrap:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
volumes:
- ${TPOT_DATA_PATH}/honeytrap/attacks:/opt/honeytrap/var/attacks
- ${TPOT_DATA_PATH}/honeytrap/downloads:/opt/honeytrap/var/downloads
- ${TPOT_DATA_PATH}/honeytrap/log:/opt/honeytrap/var/log
# Ipphoney service
ipphoney:
container_name: ipphoney
restart: always
depends_on:
logstash:
condition: service_healthy
networks:
- ipphoney_local
ports:
- "631:631"
image: ${TPOT_REPO}/ipphoney:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
volumes:
- ${TPOT_DATA_PATH}/ipphoney/log:/opt/ipphoney/log
# Mailoney service
mailoney:
container_name: mailoney
restart: always
depends_on:
logstash:
condition: service_healthy
environment:
- HPFEEDS_SERVER=
- HPFEEDS_IDENT=user
- HPFEEDS_SECRET=pass
- HPFEEDS_PORT=20000
- HPFEEDS_CHANNELPREFIX=prefix
networks:
- mailoney_local
ports:
- "25:25"
- "587:25"
image: ${TPOT_REPO}/mailoney:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
volumes:
- ${TPOT_DATA_PATH}/mailoney/log:/opt/mailoney/logs
# Log4pot service
log4pot:
container_name: log4pot
restart: always
depends_on:
logstash:
condition: service_healthy
tmpfs:
- /tmp:uid=2000,gid=2000
networks:
- log4pot_local
ports:
# - "80:8080"
# - "443:8080"
- "8080:8080"
# - "9200:8080"
- "25565:8080"
image: ${TPOT_REPO}/log4pot:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
volumes:
- ${TPOT_DATA_PATH}/log4pot/log:/var/log/log4pot/log
- ${TPOT_DATA_PATH}/log4pot/payloads:/var/log/log4pot/payloads
# Medpot service
medpot:
container_name: medpot
restart: always
depends_on:
logstash:
condition: service_healthy
networks:
- medpot_local
ports:
- "2575:2575"
image: ${TPOT_REPO}/medpot:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
volumes:
- ${TPOT_DATA_PATH}/medpot/log/:/var/log/medpot
# Redishoneypot service
redishoneypot:
container_name: redishoneypot
restart: always
depends_on:
logstash:
condition: service_healthy
networks:
- redishoneypot_local
ports:
- "6379:6379"
image: ${TPOT_REPO}/redishoneypot:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
volumes:
- ${TPOT_DATA_PATH}/redishoneypot/log:/var/log/redishoneypot
# SentryPeer service
sentrypeer:
container_name: sentrypeer
restart: always
depends_on:
logstash:
condition: service_healthy
# environment:
# - SENTRYPEER_PEER_TO_PEER=1
networks:
- sentrypeer_local
ports:
# - "4222:4222/udp"
- "5060:5060/tcp"
- "5060:5060/udp"
# - "127.0.0.1:8082:8082"
image: ${TPOT_REPO}/sentrypeer:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
volumes:
- ${TPOT_DATA_PATH}/sentrypeer/log:/var/log/sentrypeer
#### Snare / Tanner
## Tanner Redis Service
tanner_redis:
container_name: tanner_redis
restart: always
depends_on:
logstash:
condition: service_healthy
tty: true
networks:
- tanner_local
image: ${TPOT_REPO}/redis:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
## PHP Sandbox service
tanner_phpox:
container_name: tanner_phpox
restart: always
depends_on:
logstash:
condition: service_healthy
tty: true
networks:
- tanner_local
image: ${TPOT_REPO}/phpox:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
## Tanner API Service
tanner_api:
container_name: tanner_api
restart: always
depends_on:
- tanner_redis
tmpfs:
- /tmp/tanner:uid=2000,gid=2000
tty: true
networks:
- tanner_local
image: ${TPOT_REPO}/tanner:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
volumes:
- ${TPOT_DATA_PATH}/tanner/log:/var/log/tanner
command: tannerapi
## Tanner Service
tanner:
container_name: tanner
restart: always
depends_on:
- tanner_api
- tanner_phpox
tmpfs:
- /tmp/tanner:uid=2000,gid=2000
tty: true
networks:
- tanner_local
image: ${TPOT_REPO}/tanner:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
command: tanner
read_only: true
volumes:
- ${TPOT_DATA_PATH}/tanner/log:/var/log/tanner
- ${TPOT_DATA_PATH}/tanner/files:/opt/tanner/files
## Snare Service
snare:
container_name: snare
restart: always
depends_on:
- tanner
tty: true
networks:
- tanner_local
ports:
- "80:80"
image: ${TPOT_REPO}/snare:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
# Wordpot service
wordpot:
container_name: wordpot
restart: always
depends_on:
tpotinit:
condition: service_healthy
networks:
- wordpot_local
ports:
- "82:80"
image: ${TPOT_REPO}/wordpot:${TPOT_VERSION}
read_only: true
volumes:
- ${TPOT_DATA_PATH}/wordpot/log:/opt/wordpot/logs/
##################
#### Tools
##################
#### ELK
## Elasticsearch service
elasticsearch:
container_name: elasticsearch
restart: always
depends_on:
tpotinit:
condition: service_healthy
environment:
- bootstrap.memory_lock=true
- ES_JAVA_OPTS=-Xms2048m -Xmx2048m
- ES_TMPDIR=/tmp
cap_add:
- IPC_LOCK
ulimits:
memlock:
soft: -1
hard: -1
nofile:
soft: 65536
hard: 65536
mem_limit: 4g
ports:
- "127.0.0.1:64298:9200"
image: ${TPOT_REPO}/elasticsearch:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
volumes:
- ${TPOT_DATA_PATH}:/data
## Logstash service
logstash:
container_name: logstash
restart: always
depends_on:
elasticsearch:
condition: service_healthy
environment:
- LS_JAVA_OPTS=-Xms1024m -Xmx1024m
- TPOT_TYPE=${TPOT_TYPE:-HIVE}
- TPOT_HIVE_USER=${TPOT_HIVE_USER}
- TPOT_HIVE_IP=${TPOT_HIVE_IP}
ports:
- "127.0.0.1:64305:64305"
mem_limit: 2g
image: ${TPOT_REPO}/logstash:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
volumes:
- ${TPOT_DATA_PATH}:/data
#### /ELK
# Ewsposter service
ewsposter:
container_name: ewsposter
restart: always
depends_on:
logstash:
condition: service_healthy
networks:
- ewsposter_local
environment:
- EWS_HPFEEDS_ENABLE=false
- EWS_HPFEEDS_HOST=host
- EWS_HPFEEDS_PORT=port
- EWS_HPFEEDS_CHANNELS=channels
- EWS_HPFEEDS_IDENT=user
- EWS_HPFEEDS_SECRET=secret
- EWS_HPFEEDS_TLSCERT=false
- EWS_HPFEEDS_FORMAT=json
image: ${TPOT_REPO}/ewsposter:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
volumes:
- ${TPOT_DATA_PATH}:/data
- ${TPOT_DATA_PATH}/ews/conf/ews.ip:/opt/ewsposter/ews.ip
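Editor's note: both mailoney and ewsposter in the file above ship with inert HPFEEDS settings — mailoney's HPFEEDS_SERVER is empty and ewsposter has EWS_HPFEEDS_ENABLE=false. A hedged sketch of how the mailoney environment block might look when pointed at a broker; host, credentials and channel prefix are invented examples, not defaults of any real feed:
environment:
  - HPFEEDS_SERVER=hpfeeds.example.org
  - HPFEEDS_IDENT=my-sensor
  - HPFEEDS_SECRET=changeme
  - HPFEEDS_PORT=20000
  - HPFEEDS_CHANNELPREFIX=tpot
The ewsposter block follows the same idea with its EWS_HPFEEDS_* counterparts, plus EWS_HPFEEDS_ENABLE=true to switch the feed on.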


@ -1,6 +1,5 @@
# T-Pot (Standard)
# Do not erase ports sections, these are used by /opt/tpot/bin/rules.sh to setup iptables ACCEPT rules for NFQ (honeytrap / glutton)
version: '2.3'
# T-Pot: SENSOR
version: '3.9'
networks:
adbhoney_local:
@ -20,13 +19,40 @@ networks:
mailoney_local:
medpot_local:
redishoneypot_local:
tanner_local:
ewsposter_local:
sentrypeer_local:
spiderfoot_local:
tanner_local:
wordpot_local:
ewsposter_local:
services:
#########################################
#### DEV
#########################################
#### T-Pot Init - Never delete this!
#########################################
# T-Pot Init Service
tpotinit:
container_name: tpotinit
env_file:
- .env
restart: always
stop_grace_period: 60s
tmpfs:
- /tmp/etc:uid=2000,gid=2000
- /tmp/:uid=2000,gid=2000
network_mode: "host"
cap_add:
- NET_ADMIN
image: ${TPOT_REPO}/tpotinit:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
volumes:
- ${TPOT_DOCKER_COMPOSE}:/tmp/tpot/docker-compose.yml:ro
- ${TPOT_DATA_PATH}/blackhole:/etc/blackhole
- ${TPOT_DATA_PATH}:/data
- /var/run/docker.sock:/var/run/docker.sock:ro
##################
#### Honeypots
##################
@ -35,20 +61,27 @@ services:
adbhoney:
container_name: adbhoney
restart: always
depends_on:
tpotinit:
condition: service_healthy
networks:
- adbhoney_local
ports:
- "5555:5555"
image: "dtagdevsec/adbhoney:2204"
image: ${TPOT_REPO}/adbhoney:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
volumes:
- /data/adbhoney/log:/opt/adbhoney/log
- /data/adbhoney/downloads:/opt/adbhoney/dl
- ${TPOT_DATA_PATH}/adbhoney/log:/opt/adbhoney/log
- ${TPOT_DATA_PATH}/adbhoney/downloads:/opt/adbhoney/dl
# Ciscoasa service
ciscoasa:
container_name: ciscoasa
restart: always
depends_on:
tpotinit:
condition: service_healthy
tmpfs:
- /tmp/ciscoasa:uid=2000,gid=2000
networks:
@ -56,28 +89,36 @@ services:
ports:
- "5000:5000/udp"
- "8443:8443"
image: "dtagdevsec/ciscoasa:2204"
image: ${TPOT_REPO}/ciscoasa:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
volumes:
- /data/ciscoasa/log:/var/log/ciscoasa
- ${TPOT_DATA_PATH}/ciscoasa/log:/var/log/ciscoasa
# CitrixHoneypot service
citrixhoneypot:
container_name: citrixhoneypot
restart: always
depends_on:
tpotinit:
condition: service_healthy
networks:
- citrixhoneypot_local
ports:
- "443:443"
image: "dtagdevsec/citrixhoneypot:2204"
image: ${TPOT_REPO}/citrixhoneypot:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
volumes:
- /data/citrixhoneypot/logs:/opt/citrixhoneypot/logs
- ${TPOT_DATA_PATH}/citrixhoneypot/log:/opt/citrixhoneypot/logs
# Conpot IEC104 service
conpot_IEC104:
container_name: conpot_iec104
restart: always
depends_on:
tpotinit:
condition: service_healthy
environment:
- CONPOT_CONFIG=/etc/conpot/conpot.cfg
- CONPOT_JSON_LOG=/var/log/conpot/conpot_IEC104.json
@ -91,15 +132,19 @@ services:
ports:
- "161:161/udp"
- "2404:2404"
image: "dtagdevsec/conpot:2204"
image: ${TPOT_REPO}/conpot:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
volumes:
- /data/conpot/log:/var/log/conpot
- ${TPOT_DATA_PATH}/conpot/log:/var/log/conpot
# Conpot guardian_ast service
conpot_guardian_ast:
container_name: conpot_guardian_ast
restart: always
depends_on:
tpotinit:
condition: service_healthy
environment:
- CONPOT_CONFIG=/etc/conpot/conpot.cfg
- CONPOT_JSON_LOG=/var/log/conpot/conpot_guardian_ast.json
@ -112,15 +157,19 @@ services:
- conpot_local_guardian_ast
ports:
- "10001:10001"
image: "dtagdevsec/conpot:2204"
image: ${TPOT_REPO}/conpot:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
volumes:
- /data/conpot/log:/var/log/conpot
- ${TPOT_DATA_PATH}/conpot/log:/var/log/conpot
# Conpot ipmi
conpot_ipmi:
container_name: conpot_ipmi
restart: always
depends_on:
tpotinit:
condition: service_healthy
environment:
- CONPOT_CONFIG=/etc/conpot/conpot.cfg
- CONPOT_JSON_LOG=/var/log/conpot/conpot_ipmi.json
@ -133,15 +182,19 @@ services:
- conpot_local_ipmi
ports:
- "623:623/udp"
image: "dtagdevsec/conpot:2204"
image: ${TPOT_REPO}/conpot:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
volumes:
- /data/conpot/log:/var/log/conpot
- ${TPOT_DATA_PATH}/conpot/log:/var/log/conpot
# Conpot kamstrup_382
conpot_kamstrup_382:
container_name: conpot_kamstrup_382
restart: always
depends_on:
tpotinit:
condition: service_healthy
environment:
- CONPOT_CONFIG=/etc/conpot/conpot.cfg
- CONPOT_JSON_LOG=/var/log/conpot/conpot_kamstrup_382.json
@ -155,35 +208,43 @@ services:
ports:
- "1025:1025"
- "50100:50100"
image: "dtagdevsec/conpot:2204"
image: ${TPOT_REPO}/conpot:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
volumes:
- /data/conpot/log:/var/log/conpot
- ${TPOT_DATA_PATH}/conpot/log:/var/log/conpot
# Cowrie service
cowrie:
container_name: cowrie
restart: always
depends_on:
tpotinit:
condition: service_healthy
tmpfs:
- /tmp/cowrie:uid=2000,gid=2000
- /tmp/cowrie/data:uid=2000,gid=2000
- /tmp/cowrie:uid=2000,gid=2000
- /tmp/cowrie/data:uid=2000,gid=2000
networks:
- cowrie_local
- cowrie_local
ports:
- "22:22"
- "23:23"
image: "dtagdevsec/cowrie:2204"
- "22:22"
- "23:23"
image: ${TPOT_REPO}/cowrie:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
volumes:
- /data/cowrie/downloads:/home/cowrie/cowrie/dl
- /data/cowrie/keys:/home/cowrie/cowrie/etc
- /data/cowrie/log:/home/cowrie/cowrie/log
- /data/cowrie/log/tty:/home/cowrie/cowrie/log/tty
- ${TPOT_DATA_PATH}/cowrie/downloads:/home/cowrie/cowrie/dl
- ${TPOT_DATA_PATH}/cowrie/keys:/home/cowrie/cowrie/etc
- ${TPOT_DATA_PATH}/cowrie/log:/home/cowrie/cowrie/log
- ${TPOT_DATA_PATH}/cowrie/log/tty:/home/cowrie/cowrie/log/tty
# Ddospot service
ddospot:
container_name: ddospot
restart: always
depends_on:
tpotinit:
condition: service_healthy
networks:
- ddospot_local
ports:
@ -192,12 +253,13 @@ services:
- "123:123/udp"
# - "161:161/udp"
- "1900:1900/udp"
image: "dtagdevsec/ddospot:2204"
image: ${TPOT_REPO}/ddospot:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
volumes:
- /data/ddospot/log:/opt/ddospot/ddospot/logs
- /data/ddospot/bl:/opt/ddospot/ddospot/bl
- /data/ddospot/db:/opt/ddospot/ddospot/db
- ${TPOT_DATA_PATH}/ddospot/log:/opt/ddospot/ddospot/logs
- ${TPOT_DATA_PATH}/ddospot/bl:/opt/ddospot/ddospot/bl
- ${TPOT_DATA_PATH}/ddospot/db:/opt/ddospot/ddospot/db
# Dicompot service
# Get the Horos Client for testing: https://horosproject.org/
@ -206,15 +268,19 @@ services:
dicompot:
container_name: dicompot
restart: always
depends_on:
tpotinit:
condition: service_healthy
networks:
- dicompot_local
ports:
- "11112:11112"
image: "dtagdevsec/dicompot:2204"
image: ${TPOT_REPO}/dicompot:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
volumes:
- /data/dicompot/log:/var/log/dicompot
# - /data/dicompot/images:/opt/dicompot/images
- ${TPOT_DATA_PATH}/dicompot/log:/var/log/dicompot
# - ${TPOT_DATA_PATH}/dicompot/images:/opt/dicompot/images
# Dionaea service
dionaea:
@ -222,6 +288,9 @@ services:
stdin_open: true
tty: true
restart: always
depends_on:
tpotinit:
condition: service_healthy
networks:
- dionaea_local
ports:
@@ -241,35 +310,43 @@ services:
# - "5060:5060/udp"
# - "5061:5061"
- "27017:27017"
image: "dtagdevsec/dionaea:2204"
image: ${TPOT_REPO}/dionaea:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
volumes:
- /data/dionaea/roots/ftp:/opt/dionaea/var/dionaea/roots/ftp
- /data/dionaea/roots/tftp:/opt/dionaea/var/dionaea/roots/tftp
- /data/dionaea/roots/www:/opt/dionaea/var/dionaea/roots/www
- /data/dionaea/roots/upnp:/opt/dionaea/var/dionaea/roots/upnp
- /data/dionaea:/opt/dionaea/var/dionaea
- /data/dionaea/binaries:/opt/dionaea/var/dionaea/binaries
- /data/dionaea/log:/opt/dionaea/var/log
- /data/dionaea/rtp:/opt/dionaea/var/dionaea/rtp
- ${TPOT_DATA_PATH}/dionaea/roots/ftp:/opt/dionaea/var/dionaea/roots/ftp
- ${TPOT_DATA_PATH}/dionaea/roots/tftp:/opt/dionaea/var/dionaea/roots/tftp
- ${TPOT_DATA_PATH}/dionaea/roots/www:/opt/dionaea/var/dionaea/roots/www
- ${TPOT_DATA_PATH}/dionaea/roots/upnp:/opt/dionaea/var/dionaea/roots/upnp
- ${TPOT_DATA_PATH}/dionaea:/opt/dionaea/var/dionaea
- ${TPOT_DATA_PATH}/dionaea/binaries:/opt/dionaea/var/dionaea/binaries
- ${TPOT_DATA_PATH}/dionaea/log:/opt/dionaea/var/log
- ${TPOT_DATA_PATH}/dionaea/rtp:/opt/dionaea/var/dionaea/rtp
# ElasticPot service
elasticpot:
container_name: elasticpot
restart: always
depends_on:
tpotinit:
condition: service_healthy
networks:
- elasticpot_local
ports:
- "9200:9200"
image: "dtagdevsec/elasticpot:2204"
image: ${TPOT_REPO}/elasticpot:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
volumes:
- /data/elasticpot/log:/opt/elasticpot/log
- ${TPOT_DATA_PATH}/elasticpot/log:/opt/elasticpot/log
# Heralding service
heralding:
container_name: heralding
restart: always
depends_on:
tpotinit:
condition: service_healthy
tmpfs:
- /tmp/heralding:uid=2000,gid=2000
networks:
@@ -291,44 +368,56 @@ services:
- "1080:1080"
- "5432:5432"
- "5900:5900"
image: "dtagdevsec/heralding:2204"
image: ${TPOT_REPO}/heralding:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
volumes:
- /data/heralding/log:/var/log/heralding
- ${TPOT_DATA_PATH}/heralding/log:/var/log/heralding
# Honeytrap service
honeytrap:
container_name: honeytrap
restart: always
depends_on:
tpotinit:
condition: service_healthy
tmpfs:
- /tmp/honeytrap:uid=2000,gid=2000
network_mode: "host"
cap_add:
- NET_ADMIN
image: "dtagdevsec/honeytrap:2204"
image: ${TPOT_REPO}/honeytrap:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
volumes:
- /data/honeytrap/attacks:/opt/honeytrap/var/attacks
- /data/honeytrap/downloads:/opt/honeytrap/var/downloads
- /data/honeytrap/log:/opt/honeytrap/var/log
- ${TPOT_DATA_PATH}/honeytrap/attacks:/opt/honeytrap/var/attacks
- ${TPOT_DATA_PATH}/honeytrap/downloads:/opt/honeytrap/var/downloads
- ${TPOT_DATA_PATH}/honeytrap/log:/opt/honeytrap/var/log
# Ipphoney service
ipphoney:
container_name: ipphoney
restart: always
depends_on:
tpotinit:
condition: service_healthy
networks:
- ipphoney_local
ports:
- "631:631"
image: "dtagdevsec/ipphoney:2204"
image: ${TPOT_REPO}/ipphoney:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
volumes:
- /data/ipphoney/log:/opt/ipphoney/log
- ${TPOT_DATA_PATH}/ipphoney/log:/opt/ipphoney/log
# Mailoney service
mailoney:
container_name: mailoney
restart: always
depends_on:
tpotinit:
condition: service_healthy
environment:
- HPFEEDS_SERVER=
- HPFEEDS_IDENT=user
@@ -339,120 +428,165 @@ services:
- mailoney_local
ports:
- "25:25"
image: "dtagdevsec/mailoney:2204"
- "587:25"
image: ${TPOT_REPO}/mailoney:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
volumes:
- /data/mailoney/log:/opt/mailoney/logs
- ${TPOT_DATA_PATH}/mailoney/log:/opt/mailoney/logs
# Medpot service
medpot:
container_name: medpot
restart: always
depends_on:
tpotinit:
condition: service_healthy
networks:
- medpot_local
ports:
- "2575:2575"
image: "dtagdevsec/medpot:2204"
image: ${TPOT_REPO}/medpot:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
volumes:
- /data/medpot/log/:/var/log/medpot
- ${TPOT_DATA_PATH}/medpot/log/:/var/log/medpot
# Redishoneypot service
redishoneypot:
container_name: redishoneypot
restart: always
depends_on:
tpotinit:
condition: service_healthy
networks:
- redishoneypot_local
ports:
- "6379:6379"
image: "dtagdevsec/redishoneypot:2204"
image: ${TPOT_REPO}/redishoneypot:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
volumes:
- /data/redishoneypot/log:/var/log/redishoneypot
- ${TPOT_DATA_PATH}/redishoneypot/log:/var/log/redishoneypot
# SentryPeer service
sentrypeer:
container_name: sentrypeer
restart: always
depends_on:
tpotinit:
condition: service_healthy
# environment:
# - SENTRYPEER_PEER_TO_PEER=1
networks:
- sentrypeer_local
ports:
# - "4222:4222/udp"
- "5060:5060/tcp"
- "5060:5060/udp"
image: "dtagdevsec/sentrypeer:2204"
# - "127.0.0.1:8082:8082"
image: ${TPOT_REPO}/sentrypeer:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
volumes:
- /data/sentrypeer/log:/var/log/sentrypeer
- ${TPOT_DATA_PATH}/sentrypeer/log:/var/log/sentrypeer
#### Snare / Tanner
## Tanner Redis Service
tanner_redis:
container_name: tanner_redis
restart: always
depends_on:
tpotinit:
condition: service_healthy
tty: true
networks:
- tanner_local
image: "dtagdevsec/redis:2204"
image: ${TPOT_REPO}/redis:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
## PHP Sandbox service
tanner_phpox:
container_name: tanner_phpox
restart: always
depends_on:
tpotinit:
condition: service_healthy
tty: true
networks:
- tanner_local
image: "dtagdevsec/phpox:2204"
image: ${TPOT_REPO}/phpox:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
## Tanner API Service
tanner_api:
container_name: tanner_api
restart: always
depends_on:
- tanner_redis
tmpfs:
- /tmp/tanner:uid=2000,gid=2000
tty: true
networks:
- tanner_local
image: "dtagdevsec/tanner:2204"
image: ${TPOT_REPO}/tanner:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
volumes:
- /data/tanner/log:/var/log/tanner
- ${TPOT_DATA_PATH}/tanner/log:/var/log/tanner
command: tannerapi
depends_on:
- tanner_redis
## Tanner Service
tanner:
container_name: tanner
restart: always
depends_on:
- tanner_api
- tanner_phpox
tmpfs:
- /tmp/tanner:uid=2000,gid=2000
tty: true
networks:
- tanner_local
image: "dtagdevsec/tanner:2204"
image: ${TPOT_REPO}/tanner:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
command: tanner
read_only: true
volumes:
- /data/tanner/log:/var/log/tanner
- /data/tanner/files:/opt/tanner/files
depends_on:
- tanner_api
# - tanner_web
- tanner_phpox
- ${TPOT_DATA_PATH}/tanner/log:/var/log/tanner
- ${TPOT_DATA_PATH}/tanner/files:/opt/tanner/files
## Snare Service
snare:
container_name: snare
restart: always
depends_on:
- tanner
tty: true
networks:
- tanner_local
ports:
- "80:80"
image: "dtagdevsec/snare:2204"
image: ${TPOT_REPO}/snare:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
# Wordpot service
wordpot:
container_name: wordpot
restart: always
depends_on:
- tanner
tpotinit:
condition: service_healthy
networks:
- wordpot_local
ports:
- "8080:80"
image: ${TPOT_REPO}/wordpot:${TPOT_VERSION}
read_only: true
volumes:
- ${TPOT_DATA_PATH}/wordpot/log:/opt/wordpot/logs/
##################
@@ -463,144 +597,90 @@ services:
fatt:
container_name: fatt
restart: always
depends_on:
tpotinit:
condition: service_healthy
network_mode: "host"
cap_add:
- NET_ADMIN
- SYS_NICE
- NET_RAW
image: "dtagdevsec/fatt:2204"
image: ${TPOT_REPO}/fatt:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
volumes:
- /data/fatt/log:/opt/fatt/log
- ${TPOT_DATA_PATH}/fatt/log:/opt/fatt/log
# P0f service
p0f:
container_name: p0f
restart: always
depends_on:
tpotinit:
condition: service_healthy
network_mode: "host"
image: "dtagdevsec/p0f:2204"
image: ${TPOT_REPO}/p0f:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
volumes:
- /data/p0f/log:/var/log/p0f
- ${TPOT_DATA_PATH}/p0f/log:/var/log/p0f
# Suricata service
suricata:
container_name: suricata
restart: always
depends_on:
tpotinit:
condition: service_healthy
environment:
# For ET Pro ruleset replace "OPEN" with your OINKCODE
- OINKCODE=OPEN
# Loading externel Rules from URL
- OINKCODE=${OINKCODE:-OPEN} # Default to OPEN if unset or NULL (value provided by T-Pot .env)
# Loading external Rules from URL
# - FROMURL="https://username:password@yoururl.com|https://username:password@otherurl.com"
network_mode: "host"
cap_add:
- NET_ADMIN
- SYS_NICE
- NET_RAW
image: "dtagdevsec/suricata:2204"
image: ${TPOT_REPO}/suricata:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
volumes:
- /data/suricata/log:/var/log/suricata
- ${TPOT_DATA_PATH}/suricata/log:/var/log/suricata
##################
#### Tools
#### Tools
##################
#### ELK
## Elasticsearch service
elasticsearch:
container_name: elasticsearch
restart: always
environment:
- bootstrap.memory_lock=true
- ES_JAVA_OPTS=-Xms2048m -Xmx2048m
- ES_TMPDIR=/tmp
cap_add:
- IPC_LOCK
ulimits:
memlock:
soft: -1
hard: -1
nofile:
soft: 65536
hard: 65536
mem_limit: 4g
ports:
- "127.0.0.1:64298:9200"
image: "dtagdevsec/elasticsearch:2204"
volumes:
- /data:/data
## Kibana service
kibana:
container_name: kibana
restart: always
depends_on:
elasticsearch:
condition: service_healthy
mem_limit: 1g
ports:
- "127.0.0.1:64296:5601"
image: "dtagdevsec/kibana:2204"
## Logstash service
logstash:
container_name: logstash
restart: always
depends_on:
tpotinit:
condition: service_healthy
environment:
- LS_JAVA_OPTS=-Xms1024m -Xmx1024m
depends_on:
elasticsearch:
condition: service_healthy
env_file:
- /opt/tpot/etc/compose/elk_environment
mem_limit: 2g
image: "dtagdevsec/logstash:2204"
volumes:
- /data:/data
## Map Redis Service
map_redis:
container_name: map_redis
restart: always
stop_signal: SIGKILL
tty: true
image: "dtagdevsec/redis:2204"
read_only: true
## Map Web Service
map_web:
container_name: map_web
restart: always
environment:
- MAP_COMMAND=AttackMapServer.py
env_file:
- /opt/tpot/etc/compose/elk_environment
stop_signal: SIGKILL
tty: true
- TPOT_TYPE=${TPOT_TYPE:-HIVE}
- TPOT_HIVE_USER=${TPOT_HIVE_USER}
- TPOT_HIVE_IP=${TPOT_HIVE_IP}
ports:
- "127.0.0.1:64299:64299"
image: "dtagdevsec/map:2204"
- "127.0.0.1:64305:64305"
mem_limit: 2g
image: ${TPOT_REPO}/logstash:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
volumes:
- ${TPOT_DATA_PATH}:/data
## Map Data Service
map_data:
container_name: map_data
restart: always
depends_on:
elasticsearch:
condition: service_healthy
environment:
- MAP_COMMAND=DataServer_v2.py
env_file:
- /opt/tpot/etc/compose/elk_environment
stop_signal: SIGKILL
tty: true
image: "dtagdevsec/map:2204"
#### /ELK
# Ewsposter service
ewsposter:
container_name: ewsposter
restart: always
depends_on:
tpotinit:
condition: service_healthy
networks:
- ewsposter_local
environment:
@@ -612,44 +692,8 @@ services:
- EWS_HPFEEDS_SECRET=secret
- EWS_HPFEEDS_TLSCERT=false
- EWS_HPFEEDS_FORMAT=json
env_file:
- /opt/tpot/etc/compose/elk_environment
image: "dtagdevsec/ewsposter:2204"
image: ${TPOT_REPO}/ewsposter:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
volumes:
- /data:/data
- /data/ews/conf/ews.ip:/opt/ewsposter/ews.ip
# Nginx service
nginx:
container_name: nginx
restart: always
tmpfs:
- /var/tmp/nginx/client_body
- /var/tmp/nginx/proxy
- /var/tmp/nginx/fastcgi
- /var/tmp/nginx/uwsgi
- /var/tmp/nginx/scgi
- /run
- /var/lib/nginx/tmp:uid=100,gid=82
network_mode: "host"
ports:
- "64297:64297"
- "127.0.0.1:64304:64304"
image: "dtagdevsec/nginx:2204"
read_only: true
volumes:
- /data/nginx/cert/:/etc/nginx/cert/:ro
- /data/nginx/conf/nginxpasswd:/etc/nginx/nginxpasswd:ro
- /data/nginx/log/:/var/log/nginx/
# Spiderfoot service
spiderfoot:
container_name: spiderfoot
restart: always
networks:
- spiderfoot_local
ports:
- "127.0.0.1:64303:8080"
image: "dtagdevsec/spiderfoot:2204"
volumes:
- /data/spiderfoot:/home/spiderfoot/.spiderfoot
- ${TPOT_DATA_PATH}:/data
- ${TPOT_DATA_PATH}/ews/conf/ews.ip:/opt/ewsposter/ews.ip

compose/standard.yml (new file, 830 lines added)

@@ -0,0 +1,830 @@
# T-Pot: STANDARD
version: '3.9'
networks:
adbhoney_local:
ciscoasa_local:
citrixhoneypot_local:
conpot_local_IEC104:
conpot_local_guardian_ast:
conpot_local_ipmi:
conpot_local_kamstrup_382:
cowrie_local:
ddospot_local:
dicompot_local:
dionaea_local:
elasticpot_local:
heralding_local:
ipphoney_local:
mailoney_local:
medpot_local:
redishoneypot_local:
sentrypeer_local:
tanner_local:
spiderfoot_local:
wordpot_local:
ewsposter_local:
services:
#########################################
#### DEV
#########################################
#### T-Pot Init - Never delete this!
#########################################
# T-Pot Init Service
tpotinit:
container_name: tpotinit
env_file:
- .env
restart: always
stop_grace_period: 60s
tmpfs:
- /tmp/etc:uid=2000,gid=2000
- /tmp/:uid=2000,gid=2000
network_mode: "host"
cap_add:
- NET_ADMIN
image: ${TPOT_REPO}/tpotinit:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
volumes:
- ${TPOT_DOCKER_COMPOSE}:/tmp/tpot/docker-compose.yml:ro
- ${TPOT_DATA_PATH}/blackhole:/etc/blackhole
- ${TPOT_DATA_PATH}:/data
- /var/run/docker.sock:/var/run/docker.sock:ro
##################
#### Honeypots
##################
# Adbhoney service
adbhoney:
container_name: adbhoney
restart: always
depends_on:
tpotinit:
condition: service_healthy
networks:
- adbhoney_local
ports:
- "5555:5555"
image: ${TPOT_REPO}/adbhoney:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
volumes:
- ${TPOT_DATA_PATH}/adbhoney/log:/opt/adbhoney/log
- ${TPOT_DATA_PATH}/adbhoney/downloads:/opt/adbhoney/dl
# Ciscoasa service
ciscoasa:
container_name: ciscoasa
restart: always
depends_on:
tpotinit:
condition: service_healthy
tmpfs:
- /tmp/ciscoasa:uid=2000,gid=2000
networks:
- ciscoasa_local
ports:
- "5000:5000/udp"
- "8443:8443"
image: ${TPOT_REPO}/ciscoasa:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
volumes:
- ${TPOT_DATA_PATH}/ciscoasa/log:/var/log/ciscoasa
# CitrixHoneypot service
citrixhoneypot:
container_name: citrixhoneypot
restart: always
depends_on:
tpotinit:
condition: service_healthy
networks:
- citrixhoneypot_local
ports:
- "443:443"
image: ${TPOT_REPO}/citrixhoneypot:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
volumes:
- ${TPOT_DATA_PATH}/citrixhoneypot/log:/opt/citrixhoneypot/logs
# Conpot IEC104 service
conpot_IEC104:
container_name: conpot_iec104
restart: always
depends_on:
tpotinit:
condition: service_healthy
environment:
- CONPOT_CONFIG=/etc/conpot/conpot.cfg
- CONPOT_JSON_LOG=/var/log/conpot/conpot_IEC104.json
- CONPOT_LOG=/var/log/conpot/conpot_IEC104.log
- CONPOT_TEMPLATE=IEC104
- CONPOT_TMP=/tmp/conpot
tmpfs:
- /tmp/conpot:uid=2000,gid=2000
networks:
- conpot_local_IEC104
ports:
- "161:161/udp"
- "2404:2404"
image: ${TPOT_REPO}/conpot:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
volumes:
- ${TPOT_DATA_PATH}/conpot/log:/var/log/conpot
# Conpot guardian_ast service
conpot_guardian_ast:
container_name: conpot_guardian_ast
restart: always
depends_on:
tpotinit:
condition: service_healthy
environment:
- CONPOT_CONFIG=/etc/conpot/conpot.cfg
- CONPOT_JSON_LOG=/var/log/conpot/conpot_guardian_ast.json
- CONPOT_LOG=/var/log/conpot/conpot_guardian_ast.log
- CONPOT_TEMPLATE=guardian_ast
- CONPOT_TMP=/tmp/conpot
tmpfs:
- /tmp/conpot:uid=2000,gid=2000
networks:
- conpot_local_guardian_ast
ports:
- "10001:10001"
image: ${TPOT_REPO}/conpot:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
volumes:
- ${TPOT_DATA_PATH}/conpot/log:/var/log/conpot
# Conpot ipmi
conpot_ipmi:
container_name: conpot_ipmi
restart: always
depends_on:
tpotinit:
condition: service_healthy
environment:
- CONPOT_CONFIG=/etc/conpot/conpot.cfg
- CONPOT_JSON_LOG=/var/log/conpot/conpot_ipmi.json
- CONPOT_LOG=/var/log/conpot/conpot_ipmi.log
- CONPOT_TEMPLATE=ipmi
- CONPOT_TMP=/tmp/conpot
tmpfs:
- /tmp/conpot:uid=2000,gid=2000
networks:
- conpot_local_ipmi
ports:
- "623:623/udp"
image: ${TPOT_REPO}/conpot:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
volumes:
- ${TPOT_DATA_PATH}/conpot/log:/var/log/conpot
# Conpot kamstrup_382
conpot_kamstrup_382:
container_name: conpot_kamstrup_382
restart: always
depends_on:
tpotinit:
condition: service_healthy
environment:
- CONPOT_CONFIG=/etc/conpot/conpot.cfg
- CONPOT_JSON_LOG=/var/log/conpot/conpot_kamstrup_382.json
- CONPOT_LOG=/var/log/conpot/conpot_kamstrup_382.log
- CONPOT_TEMPLATE=kamstrup_382
- CONPOT_TMP=/tmp/conpot
tmpfs:
- /tmp/conpot:uid=2000,gid=2000
networks:
- conpot_local_kamstrup_382
ports:
- "1025:1025"
- "50100:50100"
image: ${TPOT_REPO}/conpot:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
volumes:
- ${TPOT_DATA_PATH}/conpot/log:/var/log/conpot
# Cowrie service
cowrie:
container_name: cowrie
restart: always
depends_on:
tpotinit:
condition: service_healthy
tmpfs:
- /tmp/cowrie:uid=2000,gid=2000
- /tmp/cowrie/data:uid=2000,gid=2000
networks:
- cowrie_local
ports:
- "22:22"
- "23:23"
image: ${TPOT_REPO}/cowrie:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
volumes:
- ${TPOT_DATA_PATH}/cowrie/downloads:/home/cowrie/cowrie/dl
- ${TPOT_DATA_PATH}/cowrie/keys:/home/cowrie/cowrie/etc
- ${TPOT_DATA_PATH}/cowrie/log:/home/cowrie/cowrie/log
- ${TPOT_DATA_PATH}/cowrie/log/tty:/home/cowrie/cowrie/log/tty
# Ddospot service
ddospot:
container_name: ddospot
restart: always
depends_on:
tpotinit:
condition: service_healthy
networks:
- ddospot_local
ports:
- "19:19/udp"
- "53:53/udp"
- "123:123/udp"
# - "161:161/udp"
- "1900:1900/udp"
image: ${TPOT_REPO}/ddospot:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
volumes:
- ${TPOT_DATA_PATH}/ddospot/log:/opt/ddospot/ddospot/logs
- ${TPOT_DATA_PATH}/ddospot/bl:/opt/ddospot/ddospot/bl
- ${TPOT_DATA_PATH}/ddospot/db:/opt/ddospot/ddospot/db
# Dicompot service
# Get the Horos Client for testing: https://horosproject.org/
# Get Dicom images (CC BY 3.0): https://www.cancerimagingarchive.net/collections/
# Put images (which must be in Dicom DCM format or it will not work!) into /data/dicompot/images
dicompot:
container_name: dicompot
restart: always
depends_on:
tpotinit:
condition: service_healthy
networks:
- dicompot_local
ports:
- "11112:11112"
image: ${TPOT_REPO}/dicompot:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
volumes:
- ${TPOT_DATA_PATH}/dicompot/log:/var/log/dicompot
# - ${TPOT_DATA_PATH}/dicompot/images:/opt/dicompot/images
# Dionaea service
dionaea:
container_name: dionaea
stdin_open: true
tty: true
restart: always
depends_on:
tpotinit:
condition: service_healthy
networks:
- dionaea_local
ports:
- "20:20"
- "21:21"
- "42:42"
- "69:69/udp"
- "81:81"
- "135:135"
# - "443:443"
- "445:445"
- "1433:1433"
- "1723:1723"
- "1883:1883"
- "3306:3306"
# - "5060:5060"
# - "5060:5060/udp"
# - "5061:5061"
- "27017:27017"
image: ${TPOT_REPO}/dionaea:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
volumes:
- ${TPOT_DATA_PATH}/dionaea/roots/ftp:/opt/dionaea/var/dionaea/roots/ftp
- ${TPOT_DATA_PATH}/dionaea/roots/tftp:/opt/dionaea/var/dionaea/roots/tftp
- ${TPOT_DATA_PATH}/dionaea/roots/www:/opt/dionaea/var/dionaea/roots/www
- ${TPOT_DATA_PATH}/dionaea/roots/upnp:/opt/dionaea/var/dionaea/roots/upnp
- ${TPOT_DATA_PATH}/dionaea:/opt/dionaea/var/dionaea
- ${TPOT_DATA_PATH}/dionaea/binaries:/opt/dionaea/var/dionaea/binaries
- ${TPOT_DATA_PATH}/dionaea/log:/opt/dionaea/var/log
- ${TPOT_DATA_PATH}/dionaea/rtp:/opt/dionaea/var/dionaea/rtp
# ElasticPot service
elasticpot:
container_name: elasticpot
restart: always
depends_on:
tpotinit:
condition: service_healthy
networks:
- elasticpot_local
ports:
- "9200:9200"
image: ${TPOT_REPO}/elasticpot:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
volumes:
- ${TPOT_DATA_PATH}/elasticpot/log:/opt/elasticpot/log
# Heralding service
heralding:
container_name: heralding
restart: always
depends_on:
tpotinit:
condition: service_healthy
tmpfs:
- /tmp/heralding:uid=2000,gid=2000
networks:
- heralding_local
ports:
# - "21:21"
# - "22:22"
# - "23:23"
# - "25:25"
# - "80:80"
- "110:110"
- "143:143"
# - "443:443"
- "465:465"
- "993:993"
- "995:995"
# - "3306:3306"
# - "3389:3389"
- "1080:1080"
- "5432:5432"
- "5900:5900"
image: ${TPOT_REPO}/heralding:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
volumes:
- ${TPOT_DATA_PATH}/heralding/log:/var/log/heralding
# Honeytrap service
honeytrap:
container_name: honeytrap
restart: always
depends_on:
tpotinit:
condition: service_healthy
tmpfs:
- /tmp/honeytrap:uid=2000,gid=2000
network_mode: "host"
cap_add:
- NET_ADMIN
image: ${TPOT_REPO}/honeytrap:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
volumes:
- ${TPOT_DATA_PATH}/honeytrap/attacks:/opt/honeytrap/var/attacks
- ${TPOT_DATA_PATH}/honeytrap/downloads:/opt/honeytrap/var/downloads
- ${TPOT_DATA_PATH}/honeytrap/log:/opt/honeytrap/var/log
# Ipphoney service
ipphoney:
container_name: ipphoney
restart: always
depends_on:
tpotinit:
condition: service_healthy
networks:
- ipphoney_local
ports:
- "631:631"
image: ${TPOT_REPO}/ipphoney:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
volumes:
- ${TPOT_DATA_PATH}/ipphoney/log:/opt/ipphoney/log
# Mailoney service
mailoney:
container_name: mailoney
restart: always
depends_on:
tpotinit:
condition: service_healthy
environment:
- HPFEEDS_SERVER=
- HPFEEDS_IDENT=user
- HPFEEDS_SECRET=pass
- HPFEEDS_PORT=20000
- HPFEEDS_CHANNELPREFIX=prefix
networks:
- mailoney_local
ports:
- "25:25"
- "587:25"
image: ${TPOT_REPO}/mailoney:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
volumes:
- ${TPOT_DATA_PATH}/mailoney/log:/opt/mailoney/logs
# Medpot service
medpot:
container_name: medpot
restart: always
depends_on:
tpotinit:
condition: service_healthy
networks:
- medpot_local
ports:
- "2575:2575"
image: ${TPOT_REPO}/medpot:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
volumes:
- ${TPOT_DATA_PATH}/medpot/log/:/var/log/medpot
# Redishoneypot service
redishoneypot:
container_name: redishoneypot
restart: always
depends_on:
tpotinit:
condition: service_healthy
networks:
- redishoneypot_local
ports:
- "6379:6379"
image: ${TPOT_REPO}/redishoneypot:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
volumes:
- ${TPOT_DATA_PATH}/redishoneypot/log:/var/log/redishoneypot
# SentryPeer service
sentrypeer:
container_name: sentrypeer
restart: always
depends_on:
tpotinit:
condition: service_healthy
# environment:
# - SENTRYPEER_PEER_TO_PEER=1
networks:
- sentrypeer_local
ports:
# - "4222:4222/udp"
- "5060:5060/tcp"
- "5060:5060/udp"
# - "127.0.0.1:8082:8082"
image: ${TPOT_REPO}/sentrypeer:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
volumes:
- ${TPOT_DATA_PATH}/sentrypeer/log:/var/log/sentrypeer
#### Snare / Tanner
## Tanner Redis Service
tanner_redis:
container_name: tanner_redis
restart: always
depends_on:
tpotinit:
condition: service_healthy
tty: true
networks:
- tanner_local
image: ${TPOT_REPO}/redis:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
## PHP Sandbox service
tanner_phpox:
container_name: tanner_phpox
restart: always
depends_on:
tpotinit:
condition: service_healthy
tty: true
networks:
- tanner_local
image: ${TPOT_REPO}/phpox:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
## Tanner API Service
tanner_api:
container_name: tanner_api
restart: always
depends_on:
- tanner_redis
tmpfs:
- /tmp/tanner:uid=2000,gid=2000
tty: true
networks:
- tanner_local
image: ${TPOT_REPO}/tanner:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
volumes:
- ${TPOT_DATA_PATH}/tanner/log:/var/log/tanner
command: tannerapi
## Tanner Service
tanner:
container_name: tanner
restart: always
depends_on:
- tanner_api
- tanner_phpox
tmpfs:
- /tmp/tanner:uid=2000,gid=2000
tty: true
networks:
- tanner_local
image: ${TPOT_REPO}/tanner:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
command: tanner
read_only: true
volumes:
- ${TPOT_DATA_PATH}/tanner/log:/var/log/tanner
- ${TPOT_DATA_PATH}/tanner/files:/opt/tanner/files
## Snare Service
snare:
container_name: snare
restart: always
depends_on:
- tanner
tty: true
networks:
- tanner_local
ports:
- "80:80"
image: ${TPOT_REPO}/snare:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
# Wordpot service
wordpot:
container_name: wordpot
restart: always
depends_on:
tpotinit:
condition: service_healthy
networks:
- wordpot_local
ports:
- "8080:80"
image: ${TPOT_REPO}/wordpot:${TPOT_VERSION}
read_only: true
volumes:
- ${TPOT_DATA_PATH}/wordpot/log:/opt/wordpot/logs/
##################
#### NSM
##################
# Fatt service
fatt:
container_name: fatt
restart: always
depends_on:
tpotinit:
condition: service_healthy
network_mode: "host"
cap_add:
- NET_ADMIN
- SYS_NICE
- NET_RAW
image: ${TPOT_REPO}/fatt:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
volumes:
- ${TPOT_DATA_PATH}/fatt/log:/opt/fatt/log
# P0f service
p0f:
container_name: p0f
restart: always
depends_on:
tpotinit:
condition: service_healthy
network_mode: "host"
image: ${TPOT_REPO}/p0f:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
volumes:
- ${TPOT_DATA_PATH}/p0f/log:/var/log/p0f
# Suricata service
suricata:
container_name: suricata
restart: always
depends_on:
tpotinit:
condition: service_healthy
environment:
- OINKCODE=${OINKCODE:-OPEN} # Default to OPEN if unset or NULL (value provided by T-Pot .env)
# Loading external Rules from URL
# - FROMURL="https://username:password@yoururl.com|https://username:password@otherurl.com"
network_mode: "host"
cap_add:
- NET_ADMIN
- SYS_NICE
- NET_RAW
image: ${TPOT_REPO}/suricata:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
volumes:
- ${TPOT_DATA_PATH}/suricata/log:/var/log/suricata
##################
#### Tools
##################
#### ELK
## Elasticsearch service
elasticsearch:
container_name: elasticsearch
restart: always
depends_on:
tpotinit:
condition: service_healthy
environment:
- bootstrap.memory_lock=true
- ES_JAVA_OPTS=-Xms2048m -Xmx2048m
- ES_TMPDIR=/tmp
cap_add:
- IPC_LOCK
ulimits:
memlock:
soft: -1
hard: -1
nofile:
soft: 65536
hard: 65536
mem_limit: 4g
ports:
- "127.0.0.1:64298:9200"
image: ${TPOT_REPO}/elasticsearch:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
volumes:
- ${TPOT_DATA_PATH}:/data
## Kibana service
kibana:
container_name: kibana
restart: always
depends_on:
elasticsearch:
condition: service_healthy
mem_limit: 1g
ports:
- "127.0.0.1:64296:5601"
image: ${TPOT_REPO}/kibana:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
## Logstash service
logstash:
container_name: logstash
restart: always
depends_on:
elasticsearch:
condition: service_healthy
environment:
- LS_JAVA_OPTS=-Xms1024m -Xmx1024m
- TPOT_TYPE=${TPOT_TYPE:-HIVE}
- TPOT_HIVE_USER=${TPOT_HIVE_USER}
- TPOT_HIVE_IP=${TPOT_HIVE_IP}
ports:
- "127.0.0.1:64305:64305"
mem_limit: 2g
image: ${TPOT_REPO}/logstash:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
volumes:
- ${TPOT_DATA_PATH}:/data
## Map Redis Service
map_redis:
container_name: map_redis
restart: always
depends_on:
tpotinit:
condition: service_healthy
stop_signal: SIGKILL
tty: true
image: ${TPOT_REPO}/redis:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
## Map Web Service
map_web:
container_name: map_web
restart: always
depends_on:
tpotinit:
condition: service_healthy
environment:
- MAP_COMMAND=AttackMapServer.py
stop_signal: SIGKILL
tty: true
ports:
- "127.0.0.1:64299:64299"
image: ${TPOT_REPO}/map:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
## Map Data Service
map_data:
container_name: map_data
restart: always
depends_on:
elasticsearch:
condition: service_healthy
environment:
- MAP_COMMAND=DataServer_v2.py
- TPOT_ATTACKMAP_TEXT=${TPOT_ATTACKMAP_TEXT}
- TZ=${TPOT_ATTACKMAP_TEXT_TIMEZONE}
stop_signal: SIGKILL
tty: true
image: ${TPOT_REPO}/map:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
#### /ELK
# Ewsposter service
ewsposter:
container_name: ewsposter
restart: always
depends_on:
tpotinit:
condition: service_healthy
networks:
- ewsposter_local
environment:
- EWS_HPFEEDS_ENABLE=false
- EWS_HPFEEDS_HOST=host
- EWS_HPFEEDS_PORT=port
- EWS_HPFEEDS_CHANNELS=channels
- EWS_HPFEEDS_IDENT=user
- EWS_HPFEEDS_SECRET=secret
- EWS_HPFEEDS_TLSCERT=false
- EWS_HPFEEDS_FORMAT=json
image: ${TPOT_REPO}/ewsposter:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
volumes:
- ${TPOT_DATA_PATH}:/data
- ${TPOT_DATA_PATH}/ews/conf/ews.ip:/opt/ewsposter/ews.ip
# Nginx service
nginx:
container_name: nginx
restart: always
environment:
- TPOT_OSTYPE=${TPOT_OSTYPE}
depends_on:
tpotinit:
condition: service_healthy
tmpfs:
- /var/tmp/nginx/client_body
- /var/tmp/nginx/proxy
- /var/tmp/nginx/fastcgi
- /var/tmp/nginx/uwsgi
- /var/tmp/nginx/scgi
- /run
- /var/lib/nginx/tmp:uid=100,gid=82
network_mode: "host"
ports:
- "64297:64297"
image: ${TPOT_REPO}/nginx:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
volumes:
- ${TPOT_DATA_PATH}/nginx/cert/:/etc/nginx/cert/:ro
- ${TPOT_DATA_PATH}/nginx/conf/nginxpasswd:/etc/nginx/nginxpasswd:ro
- ${TPOT_DATA_PATH}/nginx/conf/lswebpasswd:/etc/nginx/lswebpasswd:ro
- ${TPOT_DATA_PATH}/nginx/log/:/var/log/nginx/
# Spiderfoot service
spiderfoot:
container_name: spiderfoot
restart: always
depends_on:
tpotinit:
condition: service_healthy
networks:
- spiderfoot_local
ports:
- "127.0.0.1:64303:8080"
image: ${TPOT_REPO}/spiderfoot:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
volumes:
- ${TPOT_DATA_PATH}/spiderfoot:/home/spiderfoot/.spiderfoot

compose/tpot_services.yml (new file, 961 lines added)

@@ -0,0 +1,961 @@
# T-Pot: Docker Services Base Configuration
# This is only to be used with the T-Pot Customizer
# Editing the contents may result in broken custom configurations!
networks:
adbhoney_local:
ciscoasa_local:
citrixhoneypot_local:
conpot_local_IEC104:
conpot_local_guardian_ast:
conpot_local_ipmi:
conpot_local_kamstrup_382:
cowrie_local:
ddospot_local:
dicompot_local:
dionaea_local:
elasticpot_local:
endlessh_local:
hellpot_local:
heralding_local:
honeypots_local:
ipphoney_local:
log4pot_local:
mailoney_local:
medpot_local:
redishoneypot_local:
sentrypeer_local:
tanner_local:
wordpot_local:
spiderfoot_local:
ewsposter_local:
services:
#########################################
#### DEV
#########################################
#### T-Pot Init - Never delete this!
#########################################
# T-Pot Init Service
tpotinit:
container_name: tpotinit
env_file:
- .env
restart: always
stop_grace_period: 60s
tmpfs:
- /tmp/etc:uid=2000,gid=2000
- /tmp/:uid=2000,gid=2000
network_mode: "host"
cap_add:
- NET_ADMIN
image: ${TPOT_REPO}/tpotinit:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
volumes:
- ${TPOT_DOCKER_COMPOSE}:/tmp/tpot/docker-compose.yml:ro
- ${TPOT_DATA_PATH}/blackhole:/etc/blackhole
- ${TPOT_DATA_PATH}:/data
- /var/run/docker.sock:/var/run/docker.sock:ro
##################
#### Honeypots
##################
# Adbhoney service
adbhoney:
container_name: adbhoney
restart: always
depends_on:
tpotinit:
condition: service_healthy
networks:
- adbhoney_local
ports:
- "5555:5555"
image: ${TPOT_REPO}/adbhoney:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
volumes:
- ${TPOT_DATA_PATH}/adbhoney/log:/opt/adbhoney/log
- ${TPOT_DATA_PATH}/adbhoney/downloads:/opt/adbhoney/dl
# Ciscoasa service
ciscoasa:
container_name: ciscoasa
restart: always
depends_on:
tpotinit:
condition: service_healthy
tmpfs:
- /tmp/ciscoasa:uid=2000,gid=2000
networks:
- ciscoasa_local
ports:
- "5000:5000/udp"
- "8443:8443"
image: ${TPOT_REPO}/ciscoasa:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
volumes:
- ${TPOT_DATA_PATH}/ciscoasa/log:/var/log/ciscoasa
# CitrixHoneypot service
citrixhoneypot:
container_name: citrixhoneypot
restart: always
depends_on:
tpotinit:
condition: service_healthy
networks:
- citrixhoneypot_local
ports:
- "443:443"
image: ${TPOT_REPO}/citrixhoneypot:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
volumes:
- ${TPOT_DATA_PATH}/citrixhoneypot/log:/opt/citrixhoneypot/logs
# Conpot IEC104 service
conpot_IEC104:
container_name: conpot_iec104
restart: always
depends_on:
tpotinit:
condition: service_healthy
environment:
- CONPOT_CONFIG=/etc/conpot/conpot.cfg
- CONPOT_JSON_LOG=/var/log/conpot/conpot_IEC104.json
- CONPOT_LOG=/var/log/conpot/conpot_IEC104.log
- CONPOT_TEMPLATE=IEC104
- CONPOT_TMP=/tmp/conpot
tmpfs:
- /tmp/conpot:uid=2000,gid=2000
networks:
- conpot_local_IEC104
ports:
- "161:161/udp"
- "2404:2404"
image: ${TPOT_REPO}/conpot:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
volumes:
- ${TPOT_DATA_PATH}/conpot/log:/var/log/conpot
# Conpot guardian_ast service
conpot_guardian_ast:
container_name: conpot_guardian_ast
restart: always
depends_on:
tpotinit:
condition: service_healthy
environment:
- CONPOT_CONFIG=/etc/conpot/conpot.cfg
- CONPOT_JSON_LOG=/var/log/conpot/conpot_guardian_ast.json
- CONPOT_LOG=/var/log/conpot/conpot_guardian_ast.log
- CONPOT_TEMPLATE=guardian_ast
- CONPOT_TMP=/tmp/conpot
tmpfs:
- /tmp/conpot:uid=2000,gid=2000
networks:
- conpot_local_guardian_ast
ports:
- "10001:10001"
image: ${TPOT_REPO}/conpot:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
volumes:
- ${TPOT_DATA_PATH}/conpot/log:/var/log/conpot
# Conpot ipmi
conpot_ipmi:
container_name: conpot_ipmi
restart: always
depends_on:
tpotinit:
condition: service_healthy
environment:
- CONPOT_CONFIG=/etc/conpot/conpot.cfg
- CONPOT_JSON_LOG=/var/log/conpot/conpot_ipmi.json
- CONPOT_LOG=/var/log/conpot/conpot_ipmi.log
- CONPOT_TEMPLATE=ipmi
- CONPOT_TMP=/tmp/conpot
tmpfs:
- /tmp/conpot:uid=2000,gid=2000
networks:
- conpot_local_ipmi
ports:
- "623:623/udp"
image: ${TPOT_REPO}/conpot:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
volumes:
- ${TPOT_DATA_PATH}/conpot/log:/var/log/conpot
# Conpot kamstrup_382
conpot_kamstrup_382:
container_name: conpot_kamstrup_382
restart: always
depends_on:
tpotinit:
condition: service_healthy
environment:
- CONPOT_CONFIG=/etc/conpot/conpot.cfg
- CONPOT_JSON_LOG=/var/log/conpot/conpot_kamstrup_382.json
- CONPOT_LOG=/var/log/conpot/conpot_kamstrup_382.log
- CONPOT_TEMPLATE=kamstrup_382
- CONPOT_TMP=/tmp/conpot
tmpfs:
- /tmp/conpot:uid=2000,gid=2000
networks:
- conpot_local_kamstrup_382
ports:
- "1025:1025"
- "50100:50100"
image: ${TPOT_REPO}/conpot:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
volumes:
- ${TPOT_DATA_PATH}/conpot/log:/var/log/conpot
# Cowrie service
cowrie:
container_name: cowrie
restart: always
depends_on:
tpotinit:
condition: service_healthy
tmpfs:
- /tmp/cowrie:uid=2000,gid=2000
- /tmp/cowrie/data:uid=2000,gid=2000
networks:
- cowrie_local
ports:
- "22:22"
- "23:23"
image: ${TPOT_REPO}/cowrie:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
volumes:
- ${TPOT_DATA_PATH}/cowrie/downloads:/home/cowrie/cowrie/dl
- ${TPOT_DATA_PATH}/cowrie/keys:/home/cowrie/cowrie/etc
- ${TPOT_DATA_PATH}/cowrie/log:/home/cowrie/cowrie/log
- ${TPOT_DATA_PATH}/cowrie/log/tty:/home/cowrie/cowrie/log/tty
# Ddospot service
ddospot:
container_name: ddospot
restart: always
depends_on:
tpotinit:
condition: service_healthy
networks:
- ddospot_local
ports:
- "19:19/udp"
- "53:53/udp"
- "123:123/udp"
# - "161:161/udp"
- "1900:1900/udp"
image: ${TPOT_REPO}/ddospot:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
volumes:
- ${TPOT_DATA_PATH}/ddospot/log:/opt/ddospot/ddospot/logs
- ${TPOT_DATA_PATH}/ddospot/bl:/opt/ddospot/ddospot/bl
- ${TPOT_DATA_PATH}/ddospot/db:/opt/ddospot/ddospot/db
# Dicompot service
# Get the Horos Client for testing: https://horosproject.org/
# Get Dicom images (CC BY 3.0): https://www.cancerimagingarchive.net/collections/
# Put images (which must be in Dicom DCM format or it will not work!) into /data/dicompot/images
dicompot:
container_name: dicompot
restart: always
depends_on:
tpotinit:
condition: service_healthy
networks:
- dicompot_local
ports:
- "11112:11112"
image: ${TPOT_REPO}/dicompot:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
volumes:
- ${TPOT_DATA_PATH}/dicompot/log:/var/log/dicompot
# - ${TPOT_DATA_PATH}/dicompot/images:/opt/dicompot/images
# Dionaea service
dionaea:
container_name: dionaea
stdin_open: true
tty: true
restart: always
depends_on:
tpotinit:
condition: service_healthy
networks:
- dionaea_local
ports:
- "20:20"
- "21:21"
- "42:42"
- "69:69/udp"
- "81:81"
- "135:135"
# - "443:443"
- "445:445"
- "1433:1433"
- "1723:1723"
- "1883:1883"
- "3306:3306"
# - "5060:5060"
# - "5060:5060/udp"
# - "5061:5061"
- "27017:27017"
image: ${TPOT_REPO}/dionaea:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
volumes:
- ${TPOT_DATA_PATH}/dionaea/roots/ftp:/opt/dionaea/var/dionaea/roots/ftp
- ${TPOT_DATA_PATH}/dionaea/roots/tftp:/opt/dionaea/var/dionaea/roots/tftp
- ${TPOT_DATA_PATH}/dionaea/roots/www:/opt/dionaea/var/dionaea/roots/www
- ${TPOT_DATA_PATH}/dionaea/roots/upnp:/opt/dionaea/var/dionaea/roots/upnp
- ${TPOT_DATA_PATH}/dionaea:/opt/dionaea/var/dionaea
- ${TPOT_DATA_PATH}/dionaea/binaries:/opt/dionaea/var/dionaea/binaries
- ${TPOT_DATA_PATH}/dionaea/log:/opt/dionaea/var/log
- ${TPOT_DATA_PATH}/dionaea/rtp:/opt/dionaea/var/dionaea/rtp
# ElasticPot service
elasticpot:
container_name: elasticpot
restart: always
depends_on:
tpotinit:
condition: service_healthy
networks:
- elasticpot_local
ports:
- "9200:9200"
image: ${TPOT_REPO}/elasticpot:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
volumes:
- ${TPOT_DATA_PATH}/elasticpot/log:/opt/elasticpot/log
# Endlessh service
endlessh:
container_name: endlessh
restart: always
depends_on:
tpotinit:
condition: service_healthy
networks:
- endlessh_local
ports:
- "22:2222"
image: ${TPOT_REPO}/endlessh:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
volumes:
- ${TPOT_DATA_PATH}/endlessh/log:/var/log/endlessh
# # Glutton service
# glutton:
# container_name: glutton
# restart: always
# depends_on:
# tpotinit:
# condition: service_healthy
# tmpfs:
# - /var/lib/glutton:uid=2000,gid=2000
# - /run:uid=2000,gid=2000
# network_mode: "host"
# cap_add:
# - NET_ADMIN
# image: ${TPOT_REPO}/glutton:${TPOT_VERSION}
# pull_policy: ${TPOT_PULL_POLICY}
# read_only: true
# volumes:
# - ${TPOT_DATA_PATH}/glutton/log:/var/log/glutton
# - ${TPOT_DATA_PATH}/glutton/payloads:/opt/glutton/payloads
# Hellpot service
hellpot:
container_name: hellpot
restart: always
depends_on:
tpotinit:
condition: service_healthy
networks:
- hellpot_local
ports:
- "80:8080"
image: ${TPOT_REPO}/hellpot:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
volumes:
- ${TPOT_DATA_PATH}/hellpot/log:/var/log/hellpot
# Heralding service
heralding:
container_name: heralding
restart: always
depends_on:
tpotinit:
condition: service_healthy
tmpfs:
- /tmp/heralding:uid=2000,gid=2000
networks:
- heralding_local
ports:
# - "21:21"
# - "22:22"
# - "23:23"
# - "25:25"
# - "80:80"
- "110:110"
- "143:143"
# - "443:443"
- "465:465"
- "993:993"
- "995:995"
# - "3306:3306"
# - "3389:3389"
- "1080:1080"
- "5432:5432"
- "5900:5900"
image: ${TPOT_REPO}/heralding:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
volumes:
- ${TPOT_DATA_PATH}/heralding/log:/var/log/heralding
# Honeypots service
honeypots:
container_name: honeypots
stdin_open: true
tty: true
restart: always
depends_on:
tpotinit:
condition: service_healthy
tmpfs:
- /tmp:uid=2000,gid=2000
networks:
- honeypots_local
ports:
- "21:21"
- "22:22"
- "23:23"
- "25:25"
- "53:53/udp"
- "80:80"
- "110:110"
- "123:123"
- "143:143"
- "161:161"
- "389:389"
- "443:443"
- "445:445"
- "631:631"
- "1080:1080"
- "1433:1433"
- "1521:1521"
- "3306:3306"
- "3389:3389"
- "5060:5060"
- "5432:5432"
- "5900:5900"
- "6379:6379"
- "6667:6667"
- "8080:8080"
- "9100:9100"
- "9200:9200"
- "11211:11211"
image: ${TPOT_REPO}/honeypots:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
volumes:
- ${TPOT_DATA_PATH}/honeypots/log:/var/log/honeypots
# Honeytrap service
honeytrap:
container_name: honeytrap
restart: always
depends_on:
tpotinit:
condition: service_healthy
tmpfs:
- /tmp/honeytrap:uid=2000,gid=2000
network_mode: "host"
cap_add:
- NET_ADMIN
image: ${TPOT_REPO}/honeytrap:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
volumes:
- ${TPOT_DATA_PATH}/honeytrap/attacks:/opt/honeytrap/var/attacks
- ${TPOT_DATA_PATH}/honeytrap/downloads:/opt/honeytrap/var/downloads
- ${TPOT_DATA_PATH}/honeytrap/log:/opt/honeytrap/var/log
# Ipphoney service
ipphoney:
container_name: ipphoney
restart: always
depends_on:
tpotinit:
condition: service_healthy
networks:
- ipphoney_local
ports:
- "631:631"
image: ${TPOT_REPO}/ipphoney:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
volumes:
- ${TPOT_DATA_PATH}/ipphoney/log:/opt/ipphoney/log
# Log4pot service
log4pot:
container_name: log4pot
restart: always
depends_on:
tpotinit:
condition: service_healthy
tmpfs:
- /tmp:uid=2000,gid=2000
networks:
- log4pot_local
ports:
- "80:8080"
- "443:8080"
- "8080:8080"
- "9200:8080"
- "25565:8080"
image: ${TPOT_REPO}/log4pot:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
volumes:
- ${TPOT_DATA_PATH}/log4pot/log:/var/log/log4pot/log
- ${TPOT_DATA_PATH}/log4pot/payloads:/var/log/log4pot/payloads
# Mailoney service
mailoney:
container_name: mailoney
restart: always
depends_on:
tpotinit:
condition: service_healthy
environment:
- HPFEEDS_SERVER=
- HPFEEDS_IDENT=user
- HPFEEDS_SECRET=pass
- HPFEEDS_PORT=20000
- HPFEEDS_CHANNELPREFIX=prefix
networks:
- mailoney_local
ports:
- "25:25"
- "587:25"
image: ${TPOT_REPO}/mailoney:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
volumes:
- ${TPOT_DATA_PATH}/mailoney/log:/opt/mailoney/logs
# Medpot service
medpot:
container_name: medpot
restart: always
depends_on:
tpotinit:
condition: service_healthy
networks:
- medpot_local
ports:
- "2575:2575"
image: ${TPOT_REPO}/medpot:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
volumes:
- ${TPOT_DATA_PATH}/medpot/log/:/var/log/medpot
# Redishoneypot service
redishoneypot:
container_name: redishoneypot
restart: always
depends_on:
tpotinit:
condition: service_healthy
networks:
- redishoneypot_local
ports:
- "6379:6379"
image: ${TPOT_REPO}/redishoneypot:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
volumes:
- ${TPOT_DATA_PATH}/redishoneypot/log:/var/log/redishoneypot
# SentryPeer service
sentrypeer:
container_name: sentrypeer
restart: always
depends_on:
tpotinit:
condition: service_healthy
# environment:
# - SENTRYPEER_PEER_TO_PEER=1
networks:
- sentrypeer_local
ports:
# - "4222:4222/udp"
- "5060:5060/tcp"
- "5060:5060/udp"
# - "127.0.0.1:8082:8082"
image: ${TPOT_REPO}/sentrypeer:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
volumes:
- ${TPOT_DATA_PATH}/sentrypeer/log:/var/log/sentrypeer
#### Snare / Tanner
## Tanner Redis Service
tanner_redis:
container_name: tanner_redis
restart: always
depends_on:
tpotinit:
condition: service_healthy
tty: true
networks:
- tanner_local
image: ${TPOT_REPO}/redis:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
## PHP Sandbox service
tanner_phpox:
container_name: tanner_phpox
restart: always
depends_on:
tpotinit:
condition: service_healthy
tty: true
networks:
- tanner_local
image: ${TPOT_REPO}/phpox:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
## Tanner API service
tanner_api:
container_name: tanner_api
restart: always
depends_on:
- tanner_redis
tmpfs:
- /tmp/tanner:uid=2000,gid=2000
tty: true
networks:
- tanner_local
image: ${TPOT_REPO}/tanner:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
volumes:
- ${TPOT_DATA_PATH}/tanner/log:/var/log/tanner
command: tannerapi
## Tanner service
tanner:
container_name: tanner
restart: always
depends_on:
- tanner_api
- tanner_phpox
tmpfs:
- /tmp/tanner:uid=2000,gid=2000
tty: true
networks:
- tanner_local
image: ${TPOT_REPO}/tanner:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
command: tanner
read_only: true
volumes:
- ${TPOT_DATA_PATH}/tanner/log:/var/log/tanner
- ${TPOT_DATA_PATH}/tanner/files:/opt/tanner/files
## Snare service
snare:
container_name: snare
restart: always
depends_on:
- tanner
tty: true
networks:
- tanner_local
ports:
- "80:80"
image: ${TPOT_REPO}/snare:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
# Wordpot service
wordpot:
container_name: wordpot
restart: always
depends_on:
tpotinit:
condition: service_healthy
networks:
- wordpot_local
ports:
- "80:80"
image: ${TPOT_REPO}/wordpot:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
volumes:
- ${TPOT_DATA_PATH}/wordpot/log:/opt/wordpot/logs/
##################
#### NSM
##################
# Fatt service
fatt:
container_name: fatt
restart: always
depends_on:
tpotinit:
condition: service_healthy
network_mode: "host"
cap_add:
- NET_ADMIN
- SYS_NICE
- NET_RAW
image: ${TPOT_REPO}/fatt:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
volumes:
- ${TPOT_DATA_PATH}/fatt/log:/opt/fatt/log
# P0f service
p0f:
container_name: p0f
restart: always
depends_on:
tpotinit:
condition: service_healthy
network_mode: "host"
image: ${TPOT_REPO}/p0f:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
volumes:
- ${TPOT_DATA_PATH}/p0f/log:/var/log/p0f
# Suricata service
suricata:
container_name: suricata
restart: always
depends_on:
tpotinit:
condition: service_healthy
environment:
- OINKCODE=${OINKCODE:-OPEN} # Default to OPEN if unset or NULL (value provided by T-Pot .env)
# Loading external Rules from URL
# - FROMURL="https://username:password@yoururl.com|https://username:password@otherurl.com"
network_mode: "host"
cap_add:
- NET_ADMIN
- SYS_NICE
- NET_RAW
image: ${TPOT_REPO}/suricata:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
volumes:
- ${TPOT_DATA_PATH}/suricata/log:/var/log/suricata
##################
#### Tools
##################
#### ELK
## Elasticsearch service
elasticsearch:
container_name: elasticsearch
restart: always
depends_on:
tpotinit:
condition: service_healthy
environment:
- bootstrap.memory_lock=true
- ES_JAVA_OPTS=-Xms2048m -Xmx2048m
- ES_TMPDIR=/tmp
cap_add:
- IPC_LOCK
ulimits:
memlock:
soft: -1
hard: -1
nofile:
soft: 65536
hard: 65536
mem_limit: 4g
ports:
- "127.0.0.1:64298:9200"
image: ${TPOT_REPO}/elasticsearch:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
volumes:
- ${TPOT_DATA_PATH}:/data
## Kibana service
kibana:
container_name: kibana
restart: always
depends_on:
elasticsearch:
condition: service_healthy
mem_limit: 1g
ports:
- "127.0.0.1:64296:5601"
image: ${TPOT_REPO}/kibana:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
## Logstash service
logstash:
container_name: logstash
restart: always
depends_on:
elasticsearch:
condition: service_healthy
environment:
- LS_JAVA_OPTS=-Xms1024m -Xmx1024m
- TPOT_TYPE=${TPOT_TYPE:-HIVE}
- TPOT_HIVE_USER=${TPOT_HIVE_USER}
- TPOT_HIVE_IP=${TPOT_HIVE_IP}
ports:
- "127.0.0.1:64305:64305"
mem_limit: 2g
image: ${TPOT_REPO}/logstash:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
volumes:
- ${TPOT_DATA_PATH}:/data
## Map Redis Service
map_redis:
container_name: map_redis
restart: always
depends_on:
tpotinit:
condition: service_healthy
stop_signal: SIGKILL
tty: true
image: ${TPOT_REPO}/redis:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
## Map Web Service
map_web:
container_name: map_web
restart: always
depends_on:
tpotinit:
condition: service_healthy
environment:
- MAP_COMMAND=AttackMapServer.py
stop_signal: SIGKILL
tty: true
ports:
- "127.0.0.1:64299:64299"
image: ${TPOT_REPO}/map:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
## Map Data Service
map_data:
container_name: map_data
restart: always
depends_on:
elasticsearch:
condition: service_healthy
environment:
- MAP_COMMAND=DataServer_v2.py
- TPOT_ATTACKMAP_TEXT=${TPOT_ATTACKMAP_TEXT}
- TZ=${TPOT_ATTACKMAP_TEXT_TIMEZONE}
stop_signal: SIGKILL
tty: true
image: ${TPOT_REPO}/map:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
#### /ELK
# Ewsposter service
ewsposter:
container_name: ewsposter
restart: always
depends_on:
tpotinit:
condition: service_healthy
networks:
- ewsposter_local
environment:
- EWS_HPFEEDS_ENABLE=false
- EWS_HPFEEDS_HOST=host
- EWS_HPFEEDS_PORT=port
- EWS_HPFEEDS_CHANNELS=channels
- EWS_HPFEEDS_IDENT=user
- EWS_HPFEEDS_SECRET=secret
- EWS_HPFEEDS_TLSCERT=false
- EWS_HPFEEDS_FORMAT=json
image: ${TPOT_REPO}/ewsposter:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
volumes:
- ${TPOT_DATA_PATH}:/data
- ${TPOT_DATA_PATH}/ews/conf/ews.ip:/opt/ewsposter/ews.ip
# Nginx service
nginx:
container_name: nginx
restart: always
environment:
- TPOT_OSTYPE=${TPOT_OSTYPE}
depends_on:
tpotinit:
condition: service_healthy
tmpfs:
- /var/tmp/nginx/client_body
- /var/tmp/nginx/proxy
- /var/tmp/nginx/fastcgi
- /var/tmp/nginx/uwsgi
- /var/tmp/nginx/scgi
- /run
- /var/lib/nginx/tmp:uid=100,gid=82
network_mode: "host"
ports:
- "64297:64297"
image: ${TPOT_REPO}/nginx:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
volumes:
- ${TPOT_DATA_PATH}/nginx/cert/:/etc/nginx/cert/:ro
- ${TPOT_DATA_PATH}/nginx/conf/nginxpasswd:/etc/nginx/nginxpasswd:ro
- ${TPOT_DATA_PATH}/nginx/conf/lswebpasswd:/etc/nginx/lswebpasswd:ro
- ${TPOT_DATA_PATH}/nginx/log/:/var/log/nginx/
# Spiderfoot service
spiderfoot:
container_name: spiderfoot
restart: always
depends_on:
tpotinit:
condition: service_healthy
networks:
- spiderfoot_local
ports:
- "127.0.0.1:64303:8080"
image: ${TPOT_REPO}/spiderfoot:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
volumes:
- ${TPOT_DATA_PATH}/spiderfoot:/home/spiderfoot/.spiderfoot

deploy.sh (new executable file, 153 lines added)

@@ -0,0 +1,153 @@
#!/usr/bin/env bash
myANSIBLE_PORT=64295
myANSIBLE_TPOT_PLAYBOOK="installer/install/deploy.yml"
myADJECTIVE=$(shuf -n1 installer/install/a.txt)
myNOUN=$(shuf -n1 installer/install/n.txt)
myENV_FILE="$HOME/tpotce/.env"
myDEPLOY=$(cat << "EOF"
____ [ T-Pot ] ____ _
/ ___| ___ _ __ ___ ___ _ __ | _ \ ___ _ __ | | ___ _ _
\___ \ / _ \ _ \/ __|/ _ \| __| | | | |/ _ \ _ \| |/ _ \| | | |
___) | __/ | | \__ \ (_) | | | |_| | __/ |_) | | (_) | |_| |
|____/ \___|_| |_|___/\___/|_| |____/ \___| .__/|_|\___/ \__, |
|_| |___/
EOF
)
# Check if the script is running in a HIVE installation
if ! grep -q 'TPOT_TYPE=HIVE' "$HOME/tpotce/.env";
then
echo "# This script is only supported on HIVE installations."
echo
exit 1
fi
# Check if running on a supported distribution
mySUPPORTED_DISTRIBUTIONS=("AlmaLinux" "Debian GNU/Linux" "Fedora Linux" "openSUSE Tumbleweed" "Raspbian GNU/Linux" "Rocky Linux" "Ubuntu")
myCURRENT_DISTRIBUTION=$(awk -F= '/^NAME/{print $2}' /etc/os-release | tr -d '"')
if [[ ! " ${mySUPPORTED_DISTRIBUTIONS[@]} " =~ " ${myCURRENT_DISTRIBUTION} " ]];
then
echo "# Only the following distributions are supported: AlmaLinux, Fedora, Debian, openSUSE Tumbleweed, Rocky Linux and Ubuntu."
echo
exit 1
fi
echo "${myDEPLOY}"
echo
echo "# This script will prepare a T-Pot SENSOR installation to transmit logs into this HIVE."
echo
# Ask if a T-Pot SENSOR was installed
read -p "# Was a T-Pot SENSOR installed? (y/n): " mySENSOR_INSTALLED
if [[ ${mySENSOR_INSTALLED} != "y" ]];
then
echo "# A T-Pot SENSOR must be installed to continue."
exit 1
fi
# Ask for the remote user
read -p "# Enter the remote username T-Pot SENSOR was installed with: " mySSHUSER
if [[ ${mySSHUSER} == "" ]];
then
echo "# You need to enter a user. Aborting."
exit 1
fi
# Validate IP/domain name loop
while true; do
read -p "# Enter the IP/domain name of the SENSOR: " mySENSOR_IP
if [[ ${mySENSOR_IP} =~ ^([a-zA-Z0-9]+(\.[a-zA-Z0-9]+)*\.[a-zA-Z]{2,})|(([0-9]{1,3}\.){3}[0-9]{1,3})$ ]];
then
break
else
echo "# Invalid IP/domain. Please enter a valid IP or domain name."
fi
done
# Check if ssh key has been deployed
read -p "# Has a SSH key been deployed to the SENSOR? (y/n): " mySSHKEY_DEPLOYED
if [[ ${mySSHKEY_DEPLOYED} != "y" ]];
then
echo "# Generate a SSH key using 'ssh-keygen' and deploy it to the SENSOR (Example: ssh-copy-id -p 64295 ${mySSHUSER}@${mySENSOR_IP})."
exit 1
fi
# Validate IP/domain name of HIVE
while true; do
read -p "# Enter the IP/domain name of this HIVE: " myTPOT_HIVE_IP
if [[ ${myTPOT_HIVE_IP} =~ ^([a-zA-Z0-9]+(\.[a-zA-Z0-9]+)*\.[a-zA-Z]{2,})|(([0-9]{1,3}\.){3}[0-9]{1,3})$ ]];
then
break
else
echo "# Invalid IP/domain. Please enter a valid IP or domain name."
fi
done
# Create a random SENSOR user name that is easily readable
myLS_WEB_USER="sensor-${myADJECTIVE}-${myNOUN}"
# Create a random password
myLS_WEB_PW=$(tr -dc 'a-zA-Z0-9' < /dev/urandom | fold -w 32 | head -n 1)
# Create myLS_WEB_USER_ENC
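# Note: htpasswd -b reads the password from the command line and -n prints the
# resulting user:hash entry to stdout instead of updating a file, so it can be
# captured into the variable below.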
myLS_WEB_USER_ENC=$(htpasswd -b -n "${myLS_WEB_USER}" "${myLS_WEB_PW}")
myLS_WEB_USER_ENC_B64=$(echo -n "${myLS_WEB_USER_ENC}" | base64 -w0)
# Create myTPOT_HIVE_USER; since this is for Logstash on the SENSOR, it needs to be base64 encoded directly
myTPOT_HIVE_USER=$(echo -n "${myLS_WEB_USER}:${myLS_WEB_PW}" | base64 -w0)
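# Illustration (hypothetical values): "sensor-quiet-otter:A1b2c3" piped through
# base64 -w0 yields a single-line token in the user:password form used for HTTP
# Basic authentication; this token (TPOT_HIVE_USER) is handed to the SENSOR-side
# Logstash for shipping logs to this HIVE.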
# Print credentials
echo "# The following SENSOR credentials have been created:"
echo "# New SENSOR username: ${myLS_WEB_USER}"
echo "# New SENSOR passowrd: ${myLS_WEB_PW}"
echo "# New htpasswd encoded credentials: ${myLS_WEB_USER_ENC}"
echo "# New htpasswd credentials base64 encoded: ${myLS_WEB_USER_ENC_B64}"
echo "# New SENSOR credentials base64 encoded: ${myTPOT_HIVE_USER}"
echo
echo "# Ansible will ask for the BECOME password which is typically the password you sudo with on the SENSOR."
echo "# The password will allow Ansible to run a reboot via sudo on the SENSOR."
echo
# Read LS_WEB_USER from file
myENV_LS_WEB_USER=$(grep "^LS_WEB_USER=" "${myENV_FILE}" | sed 's/^LS_WEB_USER=//g' | tr -d "\"'")
# Add the new SENSOR user
if [ "${myENV_LS_WEB_USER}" == "" ];
then
myENV_LS_WEB_USER="${myLS_WEB_USER_ENC_B64}"
else
myENV_LS_WEB_USER="${myENV_LS_WEB_USER} ${myLS_WEB_USER_ENC_B64}"
fi
# Need to export for Ansible
export myTPOT_HIVE_USER
export myTPOT_HIVE_IP
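# The trailing comma in "-i ${mySENSOR_IP}," makes Ansible treat the value as an inline
# inventory list rather than an inventory file; ANSIBLE_LOG_PATH keeps a full log of the
# playbook run in ~/tpotce/data/deploy_sensor.log for troubleshooting.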
ANSIBLE_LOG_PATH=${HOME}/tpotce/data/deploy_sensor.log ansible-playbook ${myANSIBLE_TPOT_PLAYBOOK} -i ${mySENSOR_IP}, -c ssh -u ${mySSHUSER} --ask-become-pass -e "ansible_port=${myANSIBLE_PORT}"
if [ "$?" == 0 ];
then
# Update the T-Pot .env config and lswebpasswd on the host (avoids the need to restart T-Pot)
echo "# Updating SENSOR users on this HIVE and in the T-Pot .env config:"
sed -i "/^LS_WEB_USER=/c\LS_WEB_USER=$myENV_LS_WEB_USER" "${myENV_FILE}"
: > "${HOME}"/tpotce/data/nginx/conf/lswebpasswd
for i in $myENV_LS_WEB_USER;
do
if [[ -n $i ]];
then
# Need to control newlines as they kept coming up for some reason
echo -n "$i" | base64 -d -w0
echo
echo -n "$i" | base64 -d -w0 | tr -d '\n' >> ${HOME}/tpotce/data/nginx/conf/lswebpasswd
echo >> ${HOME}/tpotce/data/nginx/conf/lswebpasswd
fi
done
fi
unset myTPOT_HIVE_USER
unset myTPOT_HIVE_IP

Eight binary image files changed (screenshots updated; six modified, two removed); contents not shown.

docker-compose.yml (new file, 830 lines added)

@@ -0,0 +1,830 @@
# T-Pot: STANDARD
version: '3.9'
networks:
adbhoney_local:
ciscoasa_local:
citrixhoneypot_local:
conpot_local_IEC104:
conpot_local_guardian_ast:
conpot_local_ipmi:
conpot_local_kamstrup_382:
cowrie_local:
ddospot_local:
dicompot_local:
dionaea_local:
elasticpot_local:
heralding_local:
ipphoney_local:
mailoney_local:
medpot_local:
redishoneypot_local:
sentrypeer_local:
tanner_local:
spiderfoot_local:
wordpot_local:
ewsposter_local:
services:
#########################################
#### DEV
#########################################
#### T-Pot Init - Never delete this!
#########################################
# T-Pot Init Service
tpotinit:
container_name: tpotinit
env_file:
- .env
restart: always
stop_grace_period: 60s
tmpfs:
- /tmp/etc:uid=2000,gid=2000
- /tmp/:uid=2000,gid=2000
network_mode: "host"
cap_add:
- NET_ADMIN
image: ${TPOT_REPO}/tpotinit:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
volumes:
- ${TPOT_DOCKER_COMPOSE}:/tmp/tpot/docker-compose.yml:ro
- ${TPOT_DATA_PATH}/blackhole:/etc/blackhole
- ${TPOT_DATA_PATH}:/data
- /var/run/docker.sock:/var/run/docker.sock:ro
##################
#### Honeypots
##################
# Adbhoney service
adbhoney:
container_name: adbhoney
restart: always
depends_on:
tpotinit:
condition: service_healthy
networks:
- adbhoney_local
ports:
- "5555:5555"
image: ${TPOT_REPO}/adbhoney:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
volumes:
- ${TPOT_DATA_PATH}/adbhoney/log:/opt/adbhoney/log
- ${TPOT_DATA_PATH}/adbhoney/downloads:/opt/adbhoney/dl
# Ciscoasa service
ciscoasa:
container_name: ciscoasa
restart: always
depends_on:
tpotinit:
condition: service_healthy
tmpfs:
- /tmp/ciscoasa:uid=2000,gid=2000
networks:
- ciscoasa_local
ports:
- "5000:5000/udp"
- "8443:8443"
image: ${TPOT_REPO}/ciscoasa:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
volumes:
- ${TPOT_DATA_PATH}/ciscoasa/log:/var/log/ciscoasa
# CitrixHoneypot service
citrixhoneypot:
container_name: citrixhoneypot
restart: always
depends_on:
tpotinit:
condition: service_healthy
networks:
- citrixhoneypot_local
ports:
- "443:443"
image: ${TPOT_REPO}/citrixhoneypot:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
volumes:
- ${TPOT_DATA_PATH}/citrixhoneypot/log:/opt/citrixhoneypot/logs
# Conpot IEC104 service
conpot_IEC104:
container_name: conpot_iec104
restart: always
depends_on:
tpotinit:
condition: service_healthy
environment:
- CONPOT_CONFIG=/etc/conpot/conpot.cfg
- CONPOT_JSON_LOG=/var/log/conpot/conpot_IEC104.json
- CONPOT_LOG=/var/log/conpot/conpot_IEC104.log
- CONPOT_TEMPLATE=IEC104
- CONPOT_TMP=/tmp/conpot
tmpfs:
- /tmp/conpot:uid=2000,gid=2000
networks:
- conpot_local_IEC104
ports:
- "161:161/udp"
- "2404:2404"
image: ${TPOT_REPO}/conpot:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
volumes:
- ${TPOT_DATA_PATH}/conpot/log:/var/log/conpot
# Conpot guardian_ast service
conpot_guardian_ast:
container_name: conpot_guardian_ast
restart: always
depends_on:
tpotinit:
condition: service_healthy
environment:
- CONPOT_CONFIG=/etc/conpot/conpot.cfg
- CONPOT_JSON_LOG=/var/log/conpot/conpot_guardian_ast.json
- CONPOT_LOG=/var/log/conpot/conpot_guardian_ast.log
- CONPOT_TEMPLATE=guardian_ast
- CONPOT_TMP=/tmp/conpot
tmpfs:
- /tmp/conpot:uid=2000,gid=2000
networks:
- conpot_local_guardian_ast
ports:
- "10001:10001"
image: ${TPOT_REPO}/conpot:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
volumes:
- ${TPOT_DATA_PATH}/conpot/log:/var/log/conpot
# Conpot ipmi
conpot_ipmi:
container_name: conpot_ipmi
restart: always
depends_on:
tpotinit:
condition: service_healthy
environment:
- CONPOT_CONFIG=/etc/conpot/conpot.cfg
- CONPOT_JSON_LOG=/var/log/conpot/conpot_ipmi.json
- CONPOT_LOG=/var/log/conpot/conpot_ipmi.log
- CONPOT_TEMPLATE=ipmi
- CONPOT_TMP=/tmp/conpot
tmpfs:
- /tmp/conpot:uid=2000,gid=2000
networks:
- conpot_local_ipmi
ports:
- "623:623/udp"
image: ${TPOT_REPO}/conpot:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
volumes:
- ${TPOT_DATA_PATH}/conpot/log:/var/log/conpot
# Conpot kamstrup_382
conpot_kamstrup_382:
container_name: conpot_kamstrup_382
restart: always
depends_on:
tpotinit:
condition: service_healthy
environment:
- CONPOT_CONFIG=/etc/conpot/conpot.cfg
- CONPOT_JSON_LOG=/var/log/conpot/conpot_kamstrup_382.json
- CONPOT_LOG=/var/log/conpot/conpot_kamstrup_382.log
- CONPOT_TEMPLATE=kamstrup_382
- CONPOT_TMP=/tmp/conpot
tmpfs:
- /tmp/conpot:uid=2000,gid=2000
networks:
- conpot_local_kamstrup_382
ports:
- "1025:1025"
- "50100:50100"
image: ${TPOT_REPO}/conpot:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
volumes:
- ${TPOT_DATA_PATH}/conpot/log:/var/log/conpot
# Cowrie service
cowrie:
container_name: cowrie
restart: always
depends_on:
tpotinit:
condition: service_healthy
tmpfs:
- /tmp/cowrie:uid=2000,gid=2000
- /tmp/cowrie/data:uid=2000,gid=2000
networks:
- cowrie_local
ports:
- "22:22"
- "23:23"
image: ${TPOT_REPO}/cowrie:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
volumes:
- ${TPOT_DATA_PATH}/cowrie/downloads:/home/cowrie/cowrie/dl
- ${TPOT_DATA_PATH}/cowrie/keys:/home/cowrie/cowrie/etc
- ${TPOT_DATA_PATH}/cowrie/log:/home/cowrie/cowrie/log
- ${TPOT_DATA_PATH}/cowrie/log/tty:/home/cowrie/cowrie/log/tty
# Ddospot service
ddospot:
container_name: ddospot
restart: always
depends_on:
tpotinit:
condition: service_healthy
networks:
- ddospot_local
ports:
- "19:19/udp"
- "53:53/udp"
- "123:123/udp"
# - "161:161/udp"
- "1900:1900/udp"
image: ${TPOT_REPO}/ddospot:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
volumes:
- ${TPOT_DATA_PATH}/ddospot/log:/opt/ddospot/ddospot/logs
- ${TPOT_DATA_PATH}/ddospot/bl:/opt/ddospot/ddospot/bl
- ${TPOT_DATA_PATH}/ddospot/db:/opt/ddospot/ddospot/db
# Dicompot service
# Get the Horos Client for testing: https://horosproject.org/
# Get Dicom images (CC BY 3.0): https://www.cancerimagingarchive.net/collections/
# Put images (which must be in Dicom DCM format or it will not work!) into /data/dicompot/images
dicompot:
container_name: dicompot
restart: always
depends_on:
tpotinit:
condition: service_healthy
networks:
- dicompot_local
ports:
- "11112:11112"
image: ${TPOT_REPO}/dicompot:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
volumes:
- ${TPOT_DATA_PATH}/dicompot/log:/var/log/dicompot
# - ${TPOT_DATA_PATH}/dicompot/images:/opt/dicompot/images
# Dionaea service
dionaea:
container_name: dionaea
stdin_open: true
tty: true
restart: always
depends_on:
tpotinit:
condition: service_healthy
networks:
- dionaea_local
ports:
- "20:20"
- "21:21"
- "42:42"
- "69:69/udp"
- "81:81"
- "135:135"
# - "443:443"
- "445:445"
- "1433:1433"
- "1723:1723"
- "1883:1883"
- "3306:3306"
# - "5060:5060"
# - "5060:5060/udp"
# - "5061:5061"
- "27017:27017"
image: ${TPOT_REPO}/dionaea:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
volumes:
- ${TPOT_DATA_PATH}/dionaea/roots/ftp:/opt/dionaea/var/dionaea/roots/ftp
- ${TPOT_DATA_PATH}/dionaea/roots/tftp:/opt/dionaea/var/dionaea/roots/tftp
- ${TPOT_DATA_PATH}/dionaea/roots/www:/opt/dionaea/var/dionaea/roots/www
- ${TPOT_DATA_PATH}/dionaea/roots/upnp:/opt/dionaea/var/dionaea/roots/upnp
- ${TPOT_DATA_PATH}/dionaea:/opt/dionaea/var/dionaea
- ${TPOT_DATA_PATH}/dionaea/binaries:/opt/dionaea/var/dionaea/binaries
- ${TPOT_DATA_PATH}/dionaea/log:/opt/dionaea/var/log
- ${TPOT_DATA_PATH}/dionaea/rtp:/opt/dionaea/var/dionaea/rtp
# ElasticPot service
elasticpot:
container_name: elasticpot
restart: always
depends_on:
tpotinit:
condition: service_healthy
networks:
- elasticpot_local
ports:
- "9200:9200"
image: ${TPOT_REPO}/elasticpot:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
volumes:
- ${TPOT_DATA_PATH}/elasticpot/log:/opt/elasticpot/log
# Heralding service
heralding:
container_name: heralding
restart: always
depends_on:
tpotinit:
condition: service_healthy
tmpfs:
- /tmp/heralding:uid=2000,gid=2000
networks:
- heralding_local
ports:
# - "21:21"
# - "22:22"
# - "23:23"
# - "25:25"
# - "80:80"
- "110:110"
- "143:143"
# - "443:443"
- "465:465"
- "993:993"
- "995:995"
# - "3306:3306"
# - "3389:3389"
- "1080:1080"
- "5432:5432"
- "5900:5900"
image: ${TPOT_REPO}/heralding:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
volumes:
- ${TPOT_DATA_PATH}/heralding/log:/var/log/heralding
# Honeytrap service
honeytrap:
container_name: honeytrap
restart: always
depends_on:
tpotinit:
condition: service_healthy
tmpfs:
- /tmp/honeytrap:uid=2000,gid=2000
network_mode: "host"
cap_add:
- NET_ADMIN
image: ${TPOT_REPO}/honeytrap:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
volumes:
- ${TPOT_DATA_PATH}/honeytrap/attacks:/opt/honeytrap/var/attacks
- ${TPOT_DATA_PATH}/honeytrap/downloads:/opt/honeytrap/var/downloads
- ${TPOT_DATA_PATH}/honeytrap/log:/opt/honeytrap/var/log
# Ipphoney service
ipphoney:
container_name: ipphoney
restart: always
depends_on:
tpotinit:
condition: service_healthy
networks:
- ipphoney_local
ports:
- "631:631"
image: ${TPOT_REPO}/ipphoney:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
volumes:
- ${TPOT_DATA_PATH}/ipphoney/log:/opt/ipphoney/log
# Mailoney service
mailoney:
container_name: mailoney
restart: always
depends_on:
tpotinit:
condition: service_healthy
environment:
- HPFEEDS_SERVER=
- HPFEEDS_IDENT=user
- HPFEEDS_SECRET=pass
- HPFEEDS_PORT=20000
- HPFEEDS_CHANNELPREFIX=prefix
networks:
- mailoney_local
ports:
- "25:25"
- "587:25"
image: ${TPOT_REPO}/mailoney:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
volumes:
- ${TPOT_DATA_PATH}/mailoney/log:/opt/mailoney/logs
# Medpot service
medpot:
container_name: medpot
restart: always
depends_on:
tpotinit:
condition: service_healthy
networks:
- medpot_local
ports:
- "2575:2575"
image: ${TPOT_REPO}/medpot:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
volumes:
- ${TPOT_DATA_PATH}/medpot/log/:/var/log/medpot
# Redishoneypot service
redishoneypot:
container_name: redishoneypot
restart: always
depends_on:
tpotinit:
condition: service_healthy
networks:
- redishoneypot_local
ports:
- "6379:6379"
image: ${TPOT_REPO}/redishoneypot:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
volumes:
- ${TPOT_DATA_PATH}/redishoneypot/log:/var/log/redishoneypot
# SentryPeer service
sentrypeer:
container_name: sentrypeer
restart: always
depends_on:
tpotinit:
condition: service_healthy
# environment:
# - SENTRYPEER_PEER_TO_PEER=1
networks:
- sentrypeer_local
ports:
# - "4222:4222/udp"
- "5060:5060/tcp"
- "5060:5060/udp"
# - "127.0.0.1:8082:8082"
image: ${TPOT_REPO}/sentrypeer:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
volumes:
- ${TPOT_DATA_PATH}/sentrypeer/log:/var/log/sentrypeer
#### Snare / Tanner
## Tanner Redis Service
tanner_redis:
container_name: tanner_redis
restart: always
depends_on:
tpotinit:
condition: service_healthy
tty: true
networks:
- tanner_local
image: ${TPOT_REPO}/redis:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
## PHP Sandbox service
tanner_phpox:
container_name: tanner_phpox
restart: always
depends_on:
tpotinit:
condition: service_healthy
tty: true
networks:
- tanner_local
image: ${TPOT_REPO}/phpox:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
## Tanner API Service
tanner_api:
container_name: tanner_api
restart: always
depends_on:
- tanner_redis
tmpfs:
- /tmp/tanner:uid=2000,gid=2000
tty: true
networks:
- tanner_local
image: ${TPOT_REPO}/tanner:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
volumes:
- ${TPOT_DATA_PATH}/tanner/log:/var/log/tanner
command: tannerapi
## Tanner Service
tanner:
container_name: tanner
restart: always
depends_on:
- tanner_api
- tanner_phpox
tmpfs:
- /tmp/tanner:uid=2000,gid=2000
tty: true
networks:
- tanner_local
image: ${TPOT_REPO}/tanner:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
command: tanner
read_only: true
volumes:
- ${TPOT_DATA_PATH}/tanner/log:/var/log/tanner
- ${TPOT_DATA_PATH}/tanner/files:/opt/tanner/files
## Snare Service
snare:
container_name: snare
restart: always
depends_on:
- tanner
tty: true
networks:
- tanner_local
ports:
- "80:80"
image: ${TPOT_REPO}/snare:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
# Wordpot service
wordpot:
container_name: wordpot
restart: always
depends_on:
tpotinit:
condition: service_healthy
networks:
- wordpot_local
ports:
- "8080:80"
image: ${TPOT_REPO}/wordpot:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
volumes:
- ${TPOT_DATA_PATH}/wordpot/log:/opt/wordpot/logs/
##################
#### NSM
##################
# Fatt service
fatt:
container_name: fatt
restart: always
depends_on:
tpotinit:
condition: service_healthy
network_mode: "host"
cap_add:
- NET_ADMIN
- SYS_NICE
- NET_RAW
image: ${TPOT_REPO}/fatt:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
volumes:
- ${TPOT_DATA_PATH}/fatt/log:/opt/fatt/log
# P0f service
p0f:
container_name: p0f
restart: always
depends_on:
tpotinit:
condition: service_healthy
network_mode: "host"
image: ${TPOT_REPO}/p0f:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
volumes:
- ${TPOT_DATA_PATH}/p0f/log:/var/log/p0f
# Suricata service
suricata:
container_name: suricata
restart: always
depends_on:
tpotinit:
condition: service_healthy
environment:
- OINKCODE=${OINKCODE:-OPEN} # Default to OPEN if unset or NULL (value provided by T-Pot .env)
# Loading external Rules from URL
# - FROMURL="https://username:password@yoururl.com|https://username:password@otherurl.com"
network_mode: "host"
cap_add:
- NET_ADMIN
- SYS_NICE
- NET_RAW
image: ${TPOT_REPO}/suricata:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
volumes:
- ${TPOT_DATA_PATH}/suricata/log:/var/log/suricata
##################
#### Tools
##################
#### ELK
## Elasticsearch service
elasticsearch:
container_name: elasticsearch
restart: always
depends_on:
tpotinit:
condition: service_healthy
environment:
- bootstrap.memory_lock=true
- ES_JAVA_OPTS=-Xms2048m -Xmx2048m
- ES_TMPDIR=/tmp
cap_add:
- IPC_LOCK
ulimits:
memlock:
soft: -1
hard: -1
nofile:
soft: 65536
hard: 65536
mem_limit: 4g
ports:
- "127.0.0.1:64298:9200"
image: ${TPOT_REPO}/elasticsearch:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
volumes:
- ${TPOT_DATA_PATH}:/data
## Kibana service
kibana:
container_name: kibana
restart: always
depends_on:
elasticsearch:
condition: service_healthy
mem_limit: 1g
ports:
- "127.0.0.1:64296:5601"
image: ${TPOT_REPO}/kibana:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
## Logstash service
logstash:
container_name: logstash
restart: always
depends_on:
elasticsearch:
condition: service_healthy
environment:
- LS_JAVA_OPTS=-Xms1024m -Xmx1024m
- TPOT_TYPE=${TPOT_TYPE:-HIVE}
- TPOT_HIVE_USER=${TPOT_HIVE_USER}
- TPOT_HIVE_IP=${TPOT_HIVE_IP}
ports:
- "127.0.0.1:64305:64305"
mem_limit: 2g
image: ${TPOT_REPO}/logstash:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
volumes:
- ${TPOT_DATA_PATH}:/data
## Map Redis Service
map_redis:
container_name: map_redis
restart: always
depends_on:
tpotinit:
condition: service_healthy
stop_signal: SIGKILL
tty: true
image: ${TPOT_REPO}/redis:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
## Map Web Service
map_web:
container_name: map_web
restart: always
depends_on:
tpotinit:
condition: service_healthy
environment:
- MAP_COMMAND=AttackMapServer.py
stop_signal: SIGKILL
tty: true
ports:
- "127.0.0.1:64299:64299"
image: ${TPOT_REPO}/map:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
## Map Data Service
map_data:
container_name: map_data
restart: always
depends_on:
elasticsearch:
condition: service_healthy
environment:
- MAP_COMMAND=DataServer_v2.py
- TPOT_ATTACKMAP_TEXT=${TPOT_ATTACKMAP_TEXT}
- TZ=${TPOT_ATTACKMAP_TEXT_TIMEZONE}
stop_signal: SIGKILL
tty: true
image: ${TPOT_REPO}/map:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
#### /ELK
# Ewsposter service
ewsposter:
container_name: ewsposter
restart: always
depends_on:
tpotinit:
condition: service_healthy
networks:
- ewsposter_local
environment:
- EWS_HPFEEDS_ENABLE=false
- EWS_HPFEEDS_HOST=host
- EWS_HPFEEDS_PORT=port
- EWS_HPFEEDS_CHANNELS=channels
- EWS_HPFEEDS_IDENT=user
- EWS_HPFEEDS_SECRET=secret
- EWS_HPFEEDS_TLSCERT=false
- EWS_HPFEEDS_FORMAT=json
image: ${TPOT_REPO}/ewsposter:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
volumes:
- ${TPOT_DATA_PATH}:/data
- ${TPOT_DATA_PATH}/ews/conf/ews.ip:/opt/ewsposter/ews.ip
# Nginx service
nginx:
container_name: nginx
restart: always
environment:
- TPOT_OSTYPE=${TPOT_OSTYPE}
depends_on:
tpotinit:
condition: service_healthy
tmpfs:
- /var/tmp/nginx/client_body
- /var/tmp/nginx/proxy
- /var/tmp/nginx/fastcgi
- /var/tmp/nginx/uwsgi
- /var/tmp/nginx/scgi
- /run
- /var/lib/nginx/tmp:uid=100,gid=82
network_mode: "host"
ports:
- "64297:64297"
image: ${TPOT_REPO}/nginx:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
volumes:
- ${TPOT_DATA_PATH}/nginx/cert/:/etc/nginx/cert/:ro
- ${TPOT_DATA_PATH}/nginx/conf/nginxpasswd:/etc/nginx/nginxpasswd:ro
- ${TPOT_DATA_PATH}/nginx/conf/lswebpasswd:/etc/nginx/lswebpasswd:ro
- ${TPOT_DATA_PATH}/nginx/log/:/var/log/nginx/
# Spiderfoot service
spiderfoot:
container_name: spiderfoot
restart: always
depends_on:
tpotinit:
condition: service_healthy
networks:
- spiderfoot_local
ports:
- "127.0.0.1:64303:8080"
image: ${TPOT_REPO}/spiderfoot:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
volumes:
- ${TPOT_DATA_PATH}/spiderfoot:/home/spiderfoot/.spiderfoot
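
As a usage note for the compose file above: every image, path and pull-policy reference resolves through the T-Pot .env file, so the stack is normally started from the directory holding both files. A minimal sketch, assuming docker compose v2 and a populated .env in ~/tpotce:

cd ~/tpotce
# .env supplies TPOT_REPO, TPOT_VERSION, TPOT_PULL_POLICY, TPOT_DATA_PATH, TPOT_DOCKER_COMPOSE, ...
docker compose up -d                # start the STANDARD edition in the background
docker compose logs -f tpotinit     # follow tpotinit until the dependent services report healthy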

View File

@ -1,20 +1,23 @@
FROM alpine:3.15
FROM alpine:3.19
#
# Include dist
COPY dist/ /root/dist/
#
# Install packages
RUN apk --no-cache -U add \
git \
procps \
python3 && \
git \
procps \
py3-psutil \
py3-requests \
python3 && \
#
# Install adbhoney from git
git clone https://github.com/huuck/ADBHoney /opt/adbhoney && \
cd /opt/adbhoney && \
# git checkout ad7c17e78d01f6860d58ba826a4b6a4e4f83acbd && \
git checkout 2417a7a982f4fd527b3a048048df9a23178767ad && \
# git checkout 2417a7a982f4fd527b3a048048df9a23178767ad && \
git checkout 42afd98611724ca3d694a48b694c957e8d953db4 && \
cp /root/dist/adbhoney.cfg /opt/adbhoney && \
cp /root/dist/cpu_check.py / && \
sed -i 's/dst_ip/dest_ip/' /opt/adbhoney/adbhoney/core.py && \
sed -i 's/dst_port/dest_port/' /opt/adbhoney/adbhoney/core.py && \
#
@ -29,8 +32,8 @@ RUN apk --no-cache -U add \
#
# Set workdir and start adbhoney
STOPSIGNAL SIGINT
# Adbhoney sometimes hangs at 100% CPU usage, if detected process will be killed and container restarts per docker-compose settings
HEALTHCHECK CMD if [ $(ps -C mpv -p 1 -o %cpu | tail -n 1 | cut -f 1 -d ".") -gt 75 ]; then kill -2 1; else exit 0; fi
# Adbhoney sometimes hangs at 100% CPU usage; if detected, the container becomes unhealthy and is restarted by tpotinit
HEALTHCHECK --interval=5m --timeout=30s --retries=3 CMD python3 /cpu_check.py $(pgrep -of run.py) 99
USER adbhoney:adbhoney
WORKDIR /opt/adbhoney/
CMD /usr/bin/python3 run.py

View File

@ -3,6 +3,8 @@ hostname = honeypot01
address = 0.0.0.0
port = 5555
http_download = true
http_timeout = 45
download_dir = dl/
log_dir = log/

42
docker/adbhoney/dist/cpu_check.py vendored Normal file
View File

@ -0,0 +1,42 @@
import psutil
import sys
import time

if len(sys.argv) != 3:
    print("Usage: cpu_check.py <PID> <CPU_USAGE_THRESHOLD>")
    sys.exit(1)

try:
    pid = int(sys.argv[1])
except ValueError:
    print("Please provide a valid integer value for the PID.")
    sys.exit(1)

try:
    cpu_threshold = float(sys.argv[2])
except ValueError:
    print("Please provide a valid number for the CPU usage threshold.")
    sys.exit(1)

try:
    target_process = psutil.Process(pid)
except psutil.NoSuchProcess:
    print(f"No process with the PID {pid} was found.")
    sys.exit(1)

# Prepare to calculate the average CPU usage over 3 intervals of 1 second each
cpu_usages = []
for _ in range(3):
    cpu_usages.append(target_process.cpu_percent(interval=1))

# Calculate the average CPU usage
average_cpu_usage = sum(cpu_usages) / len(cpu_usages)
print(f"Average CPU Usage of PID {pid} over 3 seconds: {average_cpu_usage}%")

# Check average CPU usage against the threshold
if average_cpu_usage >= cpu_threshold:
    print(f"Average CPU usage of PID {pid} is above or equal to the threshold of {cpu_threshold}%.")
    sys.exit(1)
else:
    print(f"Average CPU usage of PID {pid} is below the threshold of {cpu_threshold}%. Exiting with code 0.")
    sys.exit(0)
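
This is the script the new HEALTHCHECK in the adbhoney Dockerfile calls every five minutes; run by hand it behaves as follows (PID resolved via pgrep as in the Dockerfile, threshold of 99 percent):

python3 /cpu_check.py $(pgrep -of run.py) 99
echo $?    # 0 = average CPU below threshold (healthy), 1 = at or above threshold (unhealthy)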

View File

@ -16,8 +16,8 @@ services:
- adbhoney_local
ports:
- "5555:5555"
image: "dtagdevsec/adbhoney:2204"
image: "dtagdevsec/adbhoney:24.04"
read_only: true
volumes:
- /data/adbhoney/log:/opt/adbhoney/log
- /data/adbhoney/downloads:/opt/adbhoney/dl
- $HOME/tpotce/data/adbhoney/log:/opt/adbhoney/log
- $HOME/tpotce/data/adbhoney/downloads:/opt/adbhoney/dl

View File

@ -1,10 +1,13 @@
#!/bin/bash
# Buildx Example: docker buildx build --platform linux/amd64,linux/arm64 -t username/demo:latest --push .
# Setup Vars
myPLATFORMS="linux/amd64,linux/arm64"
myHUBORG="dtagdevsec"
myTAG="2204"
myIMAGESBASE="adbhoney ciscoasa citrixhoneypot conpot cowrie ddospot dicompot dionaea elasticpot endlessh ewsposter fatt glutton hellpot heralding honeypots honeytrap ipphoney log4pot mailoney medpot nginx p0f redishoneypot sentrypeer spiderfoot suricata wordpot"
myHUBORG_DOCKER="dtagdevsec"
myHUBORG_GITHUB="ghcr.io/telekom-security"
myTAG="24.04"
myIMAGESBASE="tpotinit adbhoney ciscoasa citrixhoneypot conpot cowrie ddospot dicompot dionaea elasticpot endlessh ewsposter fatt glutton hellpot heralding honeypots honeytrap ipphoney log4pot mailoney medpot nginx p0f redishoneypot sentrypeer spiderfoot suricata wordpot"
myIMAGESELK="elasticsearch kibana logstash map"
myIMAGESTANNER="phpox redis snare tanner"
myBUILDERLOG="builder.log"
@ -23,9 +26,42 @@ fi
docker buildx > /dev/null 2>&1
if [ "$?" == "1" ];
then
echo "### Build environment not setup. Run bin/setup_builder.sh"
echo "### Build environment not setup. Install docker engine from docker:"
echo "### https://docs.docker.com/engine/install/debian/"
fi
# Let's ensure arm64 and amd64 are supported
echo "### Let's ensure ARM64 and AMD64 are supported ..."
myARCHITECTURES="amd64 arm64"
mySUPPORTED=$(docker buildx inspect --bootstrap)
for i in $myARCHITECTURES;
do
if ! echo $mySUPPORTED | grep -q linux/$i;
then
echo "## Installing $i support ..."
docker run --privileged --rm tonistiigi/binfmt --install $i
docker buildx inspect --bootstrap
else
echo "## $i support detected!"
fi
done
echo
# Let's ensure we have builder created with cache support
echo "### Checking for mybuilder ..."
if ! docker buildx ls | grep -q mybuilder;
then
echo "## Setting up mybuilder ..."
docker buildx create --name mybuilder
# Set as default, otherwise local cache is not supported
docker buildx use mybuilder
docker buildx inspect --bootstrap
else
echo "## Found mybuilder!"
fi
echo
# Only run with command switch
if [ "$1" == "" ]; then
echo "### T-Pot Multi Arch Image Builder."
@ -44,7 +80,12 @@ local myPUSHOPTION="$3"
for myREPONAME in $myIMAGELIST;
do
echo -n "Now building: $myREPONAME in $myPATH$myREPONAME/."
docker buildx build --cache-from "type=local,src=$myBUILDCACHE" --cache-to "type=local,dest=$myBUILDCACHE" --platform $myPLATFORMS -t $myHUBORG/$myREPONAME:$myTAG $myPUSHOPTION $myPATH$myREPONAME/. >> $myBUILDERLOG 2>&1
docker buildx build --cache-from "type=local,src=$myBUILDCACHE" \
--cache-to "type=local,dest=$myBUILDCACHE" \
--platform $myPLATFORMS \
-t $myHUBORG_DOCKER/$myREPONAME:$myTAG \
-t $myHUBORG_GITHUB/$myREPONAME:$myTAG \
$myPUSHOPTION $myPATH$myREPONAME/. >> $myBUILDERLOG 2>&1
if [ "$?" != "0" ];
then
echo " [ ERROR ] - Check logs!"
@ -76,4 +117,3 @@ if [ "$1" == "push" ];
fuBUILDIMAGES "elk/" "$myIMAGESELK" "--push"
fuBUILDIMAGES "tanner/" "$myIMAGESTANNER" "--push"
fi
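
Taken together, the builder changes mean a full multi-arch build is driven by a single command switch; a hypothetical invocation could look like this (the script's file name and any switches other than push are not visible in this diff):

# Hypothetical usage, assuming the script above is saved as builder.sh
./builder.sh            # no switch: prints the usage/help text only
./builder.sh push       # build all images for linux/amd64 and linux/arm64, push the Docker Hub and GHCR tags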

View File

@ -1,4 +1,4 @@
FROM alpine:3.15
FROM alpine:3.19
#
# Include dist
COPY dist/ /root/dist/
@ -6,15 +6,15 @@ COPY dist/ /root/dist/
# Setup env and apt
RUN apk --no-cache -U upgrade && \
apk --no-cache add build-base \
git \
libffi \
libffi-dev \
openssl \
openssl-dev \
py3-cryptography \
py3-pip \
python3 \
python3-dev && \
git \
libffi \
libffi-dev \
openssl \
openssl-dev \
py3-cryptography \
py3-pip \
python3 \
python3-dev && \
#
# Setup user
addgroup -g 2000 ciscoasa && \
@ -27,7 +27,7 @@ RUN apk --no-cache -U upgrade && \
cd ciscoasa_honeypot && \
git checkout d6e91f1aab7fe6fc01fabf2046e76b68dd6dc9e2 && \
sed -i "s/git+git/git+https/g" requirements.txt && \
pip3 install --no-cache-dir -r requirements.txt && \
pip3 install --break-system-packages --no-cache-dir -r requirements.txt && \
cp /root/dist/asa_server.py /opt/ciscoasa_honeypot && \
chown -R ciscoasa:ciscoasa /opt/ciscoasa_honeypot && \
#

View File

@ -1,5 +1,8 @@
version: '2.3'
networks:
ciscoasa_local:
services:
# Ciscoasa service
@ -16,7 +19,7 @@ services:
ports:
- "5000:5000/udp"
- "8443:8443"
image: "dtagdevsec/ciscoasa:2204"
image: "dtagdevsec/ciscoasa:24.04"
read_only: true
volumes:
- /data/ciscoasa/log:/var/log/ciscoasa
- $HOME/tpotce/data/ciscoasa/log:/var/log/ciscoasa

View File

@ -1,14 +1,14 @@
FROM alpine:3.15
FROM alpine:3.19
#
# Install packages
RUN apk --no-cache -U add \
git \
libcap \
openssl \
py3-pip \
python3 && \
git \
libcap \
openssl \
py3-pip \
python3 && \
#
pip3 install --no-cache-dir python-json-logger && \
pip3 install --break-system-packages --no-cache-dir python-json-logger && \
#
# Install CitrixHoneypot from GitHub
git clone https://github.com/t3chn0m4g3/CitrixHoneypot /opt/citrixhoneypot && \
@ -28,7 +28,7 @@ RUN apk --no-cache -U add \
addgroup -g 2000 citrixhoneypot && \
adduser -S -H -s /bin/ash -u 2000 -D -g 2000 citrixhoneypot && \
chown -R citrixhoneypot:citrixhoneypot /opt/citrixhoneypot && \
setcap cap_net_bind_service=+ep /usr/bin/python3.9 && \
setcap cap_net_bind_service=+ep $(readlink -f $(type -P python3)) && \
#
# Clean up
apk del --purge git \

View File

@ -16,7 +16,7 @@ services:
- citrixhoneypot_local
ports:
- "443:443"
image: "dtagdevsec/citrixhoneypot:2204"
image: "dtagdevsec/citrixhoneypot:24.04"
read_only: true
volumes:
- /data/citrixhoneypot/logs:/opt/citrixhoneypot/logs
- $HOME/tpotce/data/citrixhoneypot/log:/opt/citrixhoneypot/logs

View File

@ -1,52 +1,56 @@
FROM alpine:3.15
FROM alpine:3.19
#
# Include dist
COPY dist/ /root/dist/
#
# Setup apt
RUN apk --no-cache -U add \
build-base \
cython \
file \
git \
libev \
libtool \
libcap \
libffi-dev \
libxslt \
libxslt-dev \
mariadb-dev \
pkgconfig \
procps \
python3 \
python3-dev \
py3-cffi \
py3-cryptography \
py3-freezegun \
py3-gevent \
py3-lxml \
py3-natsort \
py3-pip \
py3-ply \
py3-psutil \
py3-pycryptodomex \
py3-pytest \
py3-requests \
py3-pyserial \
py3-setuptools \
py3-slugify \
py3-snmp \
py3-sphinx \
py3-wheel \
py3-zope-event \
py3-zope-interface \
wget && \
build-base \
cython \
file \
git \
libev \
libtool \
libcap \
libffi-dev \
libxslt \
libxslt-dev \
mariadb-dev \
pkgconfig \
procps \
python3 \
python3-dev \
py3-cffi \
py3-cryptography \
py3-freezegun \
py3-gevent \
py3-lxml \
py3-natsort \
py3-pip \
py3-ply \
py3-psutil \
py3-pycryptodomex \
py3-pytest \
py3-requests \
py3-pyserial \
py3-setuptools \
py3-slugify \
py3-snmp \
py3-sphinx \
py3-wheel \
py3-zope-event \
py3-zope-interface \
wget && \
#
# Setup ConPot
git clone https://github.com/t3chn0m4g3/cpppo /opt/cpppo && \
cd /opt/cpppo && \
pip3 install --break-system-packages --no-cache-dir --upgrade pip && \
pip3 install --break-system-packages --no-cache-dir . && \
git clone https://github.com/mushorg/conpot /opt/conpot && \
cd /opt/conpot/ && \
git checkout b3740505fd26d82473c0d7be405b372fa0f82575 && \
#git checkout 1c2382ea290b611fdc6a0a5f9572c7504bcb616e && \
git checkout 26c67d11b08a855a28e87abd186d959741f46c7f && \
# git checkout b3740505fd26d82473c0d7be405b372fa0f82575 && \
# Change template default ports if <1024
sed -i 's/port="2121"/port="21"/' /opt/conpot/conpot/templates/default/ftp/ftp.xml && \
sed -i 's/port="8800"/port="80"/' /opt/conpot/conpot/templates/default/http/http.xml && \
@ -58,24 +62,23 @@ RUN apk --no-cache -U add \
sed -i 's/port="16100"/port="161"/' /opt/conpot/conpot/templates/IEC104/snmp/snmp.xml && \
sed -i 's/port="6230"/port="623"/' /opt/conpot/conpot/templates/ipmi/ipmi/ipmi.xml && \
cp /root/dist/requirements.txt . && \
pip3 install --no-cache-dir --upgrade pip && \
pip3 install --no-cache-dir . && \
pip3 install --break-system-packages --no-cache-dir . && \
cd / && \
rm -rf /opt/conpot /tmp/* /var/tmp/* && \
setcap cap_net_bind_service=+ep /usr/bin/python3.9 && \
setcap cap_net_bind_service=+ep $(readlink -f $(type -P python3)) && \
#
# Get wireshark manuf db for scapy, setup configs, user, groups
mkdir -p /etc/conpot /var/log/conpot /usr/share/wireshark && \
wget https://github.com/wireshark/wireshark/raw/master/manuf -O /usr/share/wireshark/manuf && \
wget https://www.wireshark.org/download/automated/data/manuf -O /usr/share/wireshark/manuf && \
cp /root/dist/conpot.cfg /etc/conpot/conpot.cfg && \
cp -R /root/dist/templates /usr/lib/python3.9/site-packages/conpot/ && \
cp -R /root/dist/templates /usr/lib/$(readlink -f $(type -P python3) | cut -f4 -d"/")/site-packages/conpot/ && \
cp /root/dist/cpu_check.py / && \
addgroup -g 2000 conpot && \
adduser -S -s /bin/ash -u 2000 -D -g 2000 conpot && \
#
# Clean up
apk del --purge \
build-base \
cython-dev \
file \
git \
libev \
@ -91,7 +94,7 @@ RUN apk --no-cache -U add \
#
# Start conpot
STOPSIGNAL SIGINT
# Conpot sometimes hangs at 100% CPU usage, if detected process will be killed and container restarts per docker-compose settings
HEALTHCHECK CMD if [ $(ps -C mpv -p 1 -o %cpu | tail -n 1 | cut -f 1 -d ".") -gt 75 ]; then kill -2 1; else exit 0; fi
# Conpot sometimes hangs at 100% CPU usage; if detected, the container becomes unhealthy and is restarted by tpotinit
HEALTHCHECK --interval=5m --timeout=30s --retries=3 CMD python3 /cpu_check.py $(pgrep -of conpot) 99
USER conpot:conpot
CMD exec /usr/bin/conpot --mibcache $CONPOT_TMP --temp_dir $CONPOT_TMP --template $CONPOT_TEMPLATE --logfile $CONPOT_LOG --config $CONPOT_CONFIG
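
The recurring readlink -f $(type -P python3) pattern above replaces hard-coded python3.9 paths with whatever python3 the base image actually ships; resolved step by step it looks roughly like this (the concrete paths are illustrative for an alpine:3.19 image):

type -P python3                                   # -> /usr/bin/python3
readlink -f $(type -P python3)                    # -> /usr/bin/python3.11 (resolves the symlink)
readlink -f $(type -P python3) | cut -f4 -d"/"    # -> python3.11 (used to build the site-packages paths)
setcap cap_net_bind_service=+ep $(readlink -f $(type -P python3))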

View File

@ -3,7 +3,7 @@ sensorid = conpot
[virtual_file_system]
data_fs_url = %(CONPOT_TMP)s
fs_url = tar:///usr/lib/python3.9/site-packages/conpot/data.tar
fs_url = tar:///usr/lib/python3.11/site-packages/conpot/data.tar
[session]
timeout = 30

42
docker/conpot/dist/cpu_check.py vendored Normal file
View File

@ -0,0 +1,42 @@
import psutil
import sys
import time

if len(sys.argv) != 3:
    print("Usage: cpu_check.py <PID> <CPU_USAGE_THRESHOLD>")
    sys.exit(1)

try:
    pid = int(sys.argv[1])
except ValueError:
    print("Please provide a valid integer value for the PID.")
    sys.exit(1)

try:
    cpu_threshold = float(sys.argv[2])
except ValueError:
    print("Please provide a valid number for the CPU usage threshold.")
    sys.exit(1)

try:
    target_process = psutil.Process(pid)
except psutil.NoSuchProcess:
    print(f"No process with the PID {pid} was found.")
    sys.exit(1)

# Prepare to calculate the average CPU usage over 3 intervals of 1 second each
cpu_usages = []
for _ in range(3):
    cpu_usages.append(target_process.cpu_percent(interval=1))

# Calculate the average CPU usage
average_cpu_usage = sum(cpu_usages) / len(cpu_usages)
print(f"Average CPU Usage of PID {pid} over 3 seconds: {average_cpu_usage}%")

# Check average CPU usage against the threshold
if average_cpu_usage >= cpu_threshold:
    print(f"Average CPU usage of PID {pid} is above or equal to the threshold of {cpu_threshold}%.")
    sys.exit(1)
else:
    print(f"Average CPU usage of PID {pid} is below the threshold of {cpu_threshold}%. Exiting with code 0.")
    sys.exit(0)

View File

@ -12,9 +12,7 @@ bacpypes==0.17.0
pyghmi==1.4.1
mixbox
modbus-tk
cpppo
fs==2.3.0
tftpy
# some freezegun versions broken
pycrypto
sphinx_rtd_theme

View File

@ -37,10 +37,10 @@ services:
- "2121:21"
- "44818:44818"
- "47808:47808/udp"
image: "dtagdevsec/conpot:2204"
image: "dtagdevsec/conpot:24.04"
read_only: true
volumes:
- /data/conpot/log:/var/log/conpot
- $HOME/tpotce/data/conpot/log:/var/log/conpot
# Conpot IEC104 service
conpot_IEC104:
@ -61,10 +61,10 @@ services:
ports:
# - "161:161/udp"
- "2404:2404"
image: "dtagdevsec/conpot:2204"
image: "dtagdevsec/conpot:24.04"
read_only: true
volumes:
- /data/conpot/log:/var/log/conpot
- $HOME/tpotce/data/conpot/log:/var/log/conpot
# Conpot guardian_ast service
conpot_guardian_ast:
@ -84,10 +84,10 @@ services:
- conpot_local_guardian_ast
ports:
- "10001:10001"
image: "dtagdevsec/conpot:2204"
image: "dtagdevsec/conpot:24.04"
read_only: true
volumes:
- /data/conpot/log:/var/log/conpot
- $HOME/tpotce/data/conpot/log:/var/log/conpot
# Conpot ipmi
conpot_ipmi:
@ -107,10 +107,10 @@ services:
- conpot_local_ipmi
ports:
- "623:623/udp"
image: "dtagdevsec/conpot:2204"
image: "dtagdevsec/conpot:24.04"
read_only: true
volumes:
- /data/conpot/log:/var/log/conpot
- $HOME/tpotce/data/conpot/log:/var/log/conpot
# Conpot kamstrup_382
conpot_kamstrup_382:
@ -131,7 +131,7 @@ services:
ports:
- "1025:1025"
- "50100:50100"
image: "dtagdevsec/conpot:2204"
image: "dtagdevsec/conpot:24.04"
read_only: true
volumes:
- /data/conpot/log:/var/log/conpot
- $HOME/tpotce/data/conpot/log:/var/log/conpot

View File

@ -1,37 +1,37 @@
FROM alpine:3.15
FROM alpine:3.19
#
# Include dist
COPY dist/ /root/dist/
#
# Get and install dependencies & packages
RUN apk --no-cache -U add \
bash \
build-base \
git \
gmp-dev \
libcap \
libffi-dev \
mpc1-dev \
mpfr-dev \
openssl \
openssl-dev \
py3-appdirs \
py3-asn1-modules \
py3-attrs \
py3-bcrypt \
py3-cryptography \
py3-dateutil \
py3-greenlet \
py3-mysqlclient \
py3-openssl \
py3-packaging \
py3-parsing \
py3-pip \
py3-service_identity \
py3-treq \
py3-twisted \
python3 \
python3-dev && \
bash \
build-base \
git \
gmp-dev \
libcap \
libffi-dev \
mpc1-dev \
mpfr-dev \
openssl \
openssl-dev \
py3-appdirs \
py3-asn1-modules \
py3-attrs \
py3-bcrypt \
py3-cryptography \
py3-dateutil \
py3-greenlet \
py3-mysqlclient \
py3-openssl \
py3-packaging \
py3-parsing \
py3-pip \
py3-service_identity \
py3-treq \
py3-twisted \
python3 \
python3-dev && \
#
# Setup user
addgroup -g 2000 cowrie && \
@ -40,19 +40,20 @@ RUN apk --no-cache -U add \
# Install cowrie
mkdir -p /home/cowrie && \
cd /home/cowrie && \
git clone --depth=1 https://github.com/micheloosterhof/cowrie -b v2.3.0 && \
# git clone --depth=1 https://github.com/cowrie/cowrie -b v2.5.0 && \
git clone https://github.com/cowrie/cowrie && \
cd cowrie && \
# git checkout 6b1e82915478292f1e77ed776866771772b48f2e && \
git checkout 3394082040c02d91e79efa2c640ad68da9fe2231 && \
mkdir -p log && \
cp /root/dist/requirements.txt . && \
pip3 install --upgrade pip && \
pip3 install -r requirements.txt && \
pip3 install --break-system-packages --upgrade pip && \
pip3 install --break-system-packages -r requirements.txt && \
#
# Setup configs
export PYTHON_DIR=$(python3 --version | tr '[A-Z]' '[a-z]' | tr -d ' ' | cut -d '.' -f 1,2 ) && \
setcap cap_net_bind_service=+ep /usr/bin/$PYTHON_DIR && \
#export PYTHON_DIR=$(python3 --version | tr '[A-Z]' '[a-z]' | tr -d ' ' | cut -d '.' -f 1,2 ) && \
setcap cap_net_bind_service=+ep $(readlink -f $(type -P python3)) && \
cp /root/dist/cowrie.cfg /home/cowrie/cowrie/cowrie.cfg && \
chown cowrie:cowrie -R /home/cowrie/* /usr/lib/$PYTHON_DIR/site-packages/twisted/plugins && \
chown cowrie:cowrie -R /home/cowrie/* /usr/lib/$(readlink -f $(type -P python3) | cut -f4 -d"/")/site-packages/twisted/plugins && \
#
# Start Cowrie once to prevent dropin.cache errors upon container start caused by read-only filesystem
su - cowrie -c "export PYTHONPATH=/home/cowrie/cowrie:/home/cowrie/cowrie/src && \
@ -75,6 +76,7 @@ RUN apk --no-cache -U add \
rm -rf /var/cache/apk/* && \
rm -rf /home/cowrie/cowrie/cowrie.pid && \
rm -rf /home/cowrie/cowrie/.git && \
# ln -s /usr/bin/python3 /usr/bin/python && \
unset PYTHON_DIR
#
# Start cowrie

View File

@ -13,10 +13,8 @@ interactive_timeout = 180
authentication_timeout = 120
backend = shell
timezone = UTC
report_public_ip = true
auth_class = AuthRandom
auth_class_parameters = 2, 5, 10
reported_ssh_port = 22
data_path = /tmp/cowrie/data
[shell]

View File

@ -20,10 +20,10 @@ services:
ports:
- "22:22"
- "23:23"
image: "dtagdevsec/cowrie:2204"
image: "dtagdevsec/cowrie:24.04"
read_only: true
volumes:
- /data/cowrie/downloads:/home/cowrie/cowrie/dl
- /data/cowrie/keys:/home/cowrie/cowrie/etc
- /data/cowrie/log:/home/cowrie/cowrie/log
- /data/cowrie/log/tty:/home/cowrie/cowrie/log/tty
- $HOME/tpotce/data/cowrie/downloads:/home/cowrie/cowrie/dl
- $HOME/tpotce/data/cowrie/keys:/home/cowrie/cowrie/etc
- $HOME/tpotce/data/cowrie/log:/home/cowrie/cowrie/log
- $HOME/tpotce/data/cowrie/log/tty:/home/cowrie/cowrie/log/tty

View File

@ -1,22 +1,22 @@
FROM alpine:3.15
FROM alpine:3.19
#
# Include dist
COPY dist/ /root/dist/
#
# Install packages
RUN apk --no-cache -U add \
build-base \
git \
libcap \
py3-colorama \
py3-greenlet \
py3-pip \
py3-schedule \
py3-sqlalchemy \
py3-twisted \
py3-wheel \
python3 \
python3-dev && \
build-base \
git \
libcap \
py3-colorama \
py3-greenlet \
py3-pip \
py3-schedule \
py3-sqlalchemy \
py3-twisted \
py3-wheel \
python3 \
python3-dev && \
#
# Install ddospot from GitHub and setup
mkdir -p /opt && \
@ -40,8 +40,8 @@ RUN apk --no-cache -U add \
sed -i "s#rotate_size = 10#rotate_size = 9999#g" /opt/ddospot/ddospot/pots/ntp/ntpot.conf && \
sed -i "s#rotate_size = 10#rotate_size = 9999#g" /opt/ddospot/ddospot/pots/ssdp/ssdpot.conf && \
cp /root/dist/requirements.txt . && \
pip3 install -r ddospot/requirements.txt && \
setcap cap_net_bind_service=+ep /usr/bin/python3.9 && \
pip3 install --break-system-packages -r ddospot/requirements.txt && \
setcap cap_net_bind_service=+ep $(readlink -f $(type -P python3)) && \
#
# Setup user, groups and configs
addgroup -g 2000 ddospot && \
@ -50,8 +50,8 @@ RUN apk --no-cache -U add \
#
# Clean up
apk del --purge build-base \
git \
python3-dev && \
git \
python3-dev && \
rm -rf /root/* && \
rm -rf /opt/ddospot/.git && \
rm -rf /var/cache/apk/*

View File

@ -20,9 +20,9 @@ services:
- "123:123/udp"
# - "161:161/udp"
- "1900:1900/udp"
image: "dtagdevsec/ddospot:2204"
image: "dtagdevsec/ddospot:24.04"
read_only: true
volumes:
- /data/ddospot/log:/opt/ddospot/ddospot/logs
- /data/ddospot/bl:/opt/ddospot/ddospot/bl
- /data/ddospot/db:/opt/ddospot/ddospot/db
- $HOME/tpotce/data/ddospot/log:/opt/ddospot/ddospot/logs
- $HOME/tpotce/data/ddospot/bl:/opt/ddospot/ddospot/bl
- $HOME/tpotce/data/ddospot/db:/opt/ddospot/ddospot/db

View File

@ -14,5 +14,5 @@ services:
- cyberchef_local
ports:
- "127.0.0.1:64299:8000"
image: "dtagdevsec/cyberchef:2204"
image: "dtagdevsec/cyberchef:24.04"
read_only: true

View File

@ -12,5 +12,5 @@ services:
# condition: service_healthy
ports:
- "127.0.0.1:64302:9100"
image: "dtagdevsec/head:2204"
image: "dtagdevsec/head:24.04"
read_only: true

View File

@ -20,7 +20,7 @@ services:
- "2324:2324"
- "4096:4096"
- "9200:9200"
image: "dtagdevsec/honeypy:2204"
image: "dtagdevsec/honeypy:24.04"
read_only: true
volumes:
- /data/honeypy/log:/opt/honeypy/log

Some files were not shown because too many files have changed in this diff.