64 Commits
alpha ... 24.04

Author SHA1 Message Date
c8b47b09bb Fixes #1715 2024-12-16 14:25:02 +01:00
6136cf3206 Fix Debian Download link
Debian switched from 12.7.0 to 12.8.0
2024-11-18 08:56:00 +01:00
2b3966b6a9 Merge pull request #1695 from tmyksj/patch-1
fix typos in README.md
2024-11-11 11:09:29 +01:00
48ac55f61d fix typos 2024-11-09 20:11:34 +09:00
a869f6f04b Update general-issue-for-t-pot.md 2024-10-28 12:37:53 +01:00
fe2c75389b Update bug-report-for-t-pot.md 2024-10-28 12:37:20 +01:00
c9a87f9f0f Merge pull request #1643 from sarkoziadam/master
Fix conpot docker image errors
2024-10-16 11:54:18 +02:00
34eb7d6e72 Merge pull request #1661 from neon-ninja/patch-1
Correct SSH version in cowrie.cfg
2024-09-28 15:08:45 +02:00
dd741e94b0 Correct SSH version in cowrie.cfg 2024-09-27 16:37:23 +12:00
736cb598d4 Update links, fix #1654 2024-09-20 09:56:15 +02:00
4191cf24b3 Fix conpot docker image errors
Version of pysmi set to previous release, FtpReader function has been removed from the new release
2024-08-24 22:46:20 +02:00
f41c15ec10 Merge pull request #1601 from mattroot/master
Remove Podman-Docker compatibility layer when installing
2024-07-11 13:28:33 +02:00
9283a79045 Clarify removal of SENSORS in the .env config
Fixes #1616
2024-07-11 13:11:46 +02:00
53314b19a1 bump elastic stack to 8.14.2 2024-07-08 15:46:22 +02:00
025583d3ba Rename stale to stale.yml 2024-07-05 21:33:12 +02:00
d1067ad6b2 Action tag / close stale issues and PRs 2024-07-05 21:02:46 +02:00
debb74a31d Action check basic-support-info.yml 2024-07-05 20:56:43 +02:00
12da07b0c2 installer: remove podman 2024-07-02 23:03:30 +00:00
025ab2db46 update cowrie 2024-07-02 16:23:42 +02:00
f31d5b3f73 installer: remove podman-docker 2024-06-28 12:02:12 +02:00
8f3966a675 Remove deprecated version tag from docker compose files
Bump Elastic Stack to 8.13.4
2024-06-19 16:10:03 +02:00
a1d72aa7bd Update Installer to support latest distros as referred to by the README.
Updated README accordingly to better reflect the currently supported / tested distributions.
2024-06-18 17:57:41 +02:00
a510e28ef1 Include config option to disable SSL verification
Adjust README accordingly
Fixes #1543
2024-06-04 15:33:28 +02:00
d83b858be7 Update 2024-06-02 23:04:25 +02:00
1eb3a8a8e3 Update README.md
Fixes #1559
2024-05-30 11:59:31 +02:00
4f82c16bb8 Update ReadMe regarding distributed deployment
Thanks to @SnakeSK and @devArnold for the discussion in #1543
2024-05-22 16:54:29 +02:00
9957a13b41 Update ReadMe regarding distributed deployment
Thanks to @SnakeSK and @devArnold for the discussion in #1543
2024-05-22 11:31:39 +02:00
f4586bc2c4 Update ReadMe regarding Daily Reboot 2024-05-21 13:09:06 +02:00
da647f4b9c Update ReadMe regarding Daily Reboot 2024-05-21 13:03:59 +02:00
ef55a9d434 Update for T-Pot Mobile 2024-05-13 15:49:38 +02:00
996e8a3451 Prepare for T-Pot Mobile
- improve install.sh
2024-05-11 13:31:40 +02:00
621a13df1a Fixes #1540 2024-05-11 10:12:47 +02:00
8ec7255443 Prepare for T-Pot Mobile
- fix port conflict
2024-05-10 16:24:01 +02:00
3453266527 Prepare for T-Pot Mobile
- fix port conflict
2024-05-10 16:17:34 +02:00
812841d086 Prepare for T-Pot Mobile
- fix typo
- cleanup
2024-05-10 15:31:17 +02:00
ade4bd711d Prepare for T-Pot Mobile 2024-05-10 15:19:05 +02:00
8d385a3777 Merge pull request #1538 from glaslos/patch-2
Update Glutton Dockerfile
2024-05-07 14:42:55 +02:00
1078ce537d Update Glutton Dockerfile 2024-05-07 14:26:18 +02:00
5815664417 Fixes #1525
Ubuntu 24.04 switched from exa to eza and the new install playbook reflects these changes.
2024-05-07 11:26:22 +02:00
74a3f375e2 Update mac_win.yml
Conpot throws errors in Docker Desktop for Windows.
2024-05-06 20:16:03 +02:00
0b1281d40f Merge pull request #1536 from t3chn0m4g3/master
Adjust T-Pot for Docker Desktop for Windows with WSL2
2024-05-06 19:42:56 +02:00
3f087b0182 Update entrypoint.sh 2024-05-06 19:37:34 +02:00
f18530575c Adjust README.md for macOS / Windows install 2024-05-06 19:29:17 +02:00
3b94af2d5e Optimize for linux 2024-05-06 19:22:33 +02:00
99539562f2 Prepare fix for Docker Desktop in Windows 2024-05-05 18:57:59 +02:00
0451cd9acd Merge pull request #1533 from ZePotente/patch-1
Typos in customizer.py
2024-05-02 19:12:49 +02:00
5810f5f891 Typos in customizer.py 2024-05-02 14:05:07 -03:00
8ac9598f15 Update README.md 2024-05-02 18:33:51 +02:00
caca93f3a0 #1531, but needs testing 2024-05-02 13:43:16 +02:00
775bc2c1dd update hptest.sh 2024-04-29 19:03:49 +02:00
8b98a78b29 Update Issue templates 2024-04-25 16:01:48 +02:00
72f8b4109a Update issue templates 2024-04-25 14:34:46 +02:00
a8c44d66aa Update activity link 2024-04-23 17:32:50 +02:00
9a42192a6c Update screenshot 2024-04-22 17:23:50 +02:00
60be54059b Merge 24.04 into master and prepare release
Merge 24.04 into master and prepare release
2024-04-22 17:10:17 +02:00
0e73986772 Prepare for merge into master 2024-04-22 17:08:22 +02:00
35d68c88cd resolve merge conflicts 2024-04-19 18:20:39 +02:00
85431b308d add 24.04 version tag 2024-03-24 19:22:37 +01:00
932ad6b27c Fix repack for AMD64 .iso (#1481) 2024-03-04 15:23:27 +01:00
02098f9b76 Update Citation 2023-08-28 10:29:24 +02:00
649163e06f Update Citation 2023-08-28 10:16:18 +02:00
9d66bcb7d3 Add Bibtex, closes #1398 2023-08-28 10:02:59 +02:00
dc4384d6ab Merge pull request #1369 from swiftsolves-msft/pr-azure
Azure Deployment via ARM template
2023-08-22 13:36:09 +02:00
1af7cdcaa1 Azure Deployment via ARM template
This adds an Azure deployment of T-Pot using an ARM template: it creates a Debian 11 VM, disks, NIC, NSG and public IP (PIP), and leverages cloud-init customData to pass a Base64 encoded string of a cloud-init YAML file; an example is included in the readme docs.
2023-07-02 00:56:38 -04:00
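The commit above passes a Base64 encoded cloud-init YAML file as ARM template `customData`. As a hedged sketch of that encoding step (the file name and content below are illustrative stand-ins, not the PR's actual cloud-init file):

```shell
# Illustrative cloud-init payload; the real file is documented in the PR's readme docs.
cat > cloud-init.yaml <<'EOF'
#cloud-config
package_update: true
EOF

# Base64 encode without line wraps (-w0 is GNU coreutils; macOS base64 lacks it):
CUSTOM_DATA=$(base64 -w0 < cloud-init.yaml)
echo "$CUSTOM_DATA"

# Round-trip check: decoding must reproduce the original file.
printf '%s' "$CUSTOM_DATA" | base64 -d
```

The resulting string is what the ARM template would receive as `customData`.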
80 changed files with 702 additions and 1143 deletions

.env

@@ -51,6 +51,7 @@ TPOT_PERSISTENCE=on
 # Create credentials with 'htpasswd ~/tpotce/data/nginx/conf/lswebpasswd <username>'
 # 4. On SENSOR: Provide username / password from (3) for TPOT_HIVE_USER as base64 encoded string:
 # "echo -n 'username:password' | base64 -w0"
+# MOBILE: This will set the correct type for T-Pot Mobile (https://github.com/telekom-security/tpotmobile)
 TPOT_TYPE=HIVE
 # T-Pot Hive User (only relevant for SENSOR deployment)
@@ -59,6 +60,18 @@ TPOT_TYPE=HIVE
 # i.e. TPOT_HIVE_USER='dXNlcm5hbWU6cGFzc3dvcmQ='
 TPOT_HIVE_USER=
+# Logstash Sensor SSL verfication (only relevant on SENSOR hosts)
+# full: This is the default. Logstash, by default, verifies the complete certificate chain for ssl certificates.
+#       This also includes the FQDN and sANs. By default T-Pot will only generate a self-signed certificate which
+#       contains a sAN for the HIVE IP. In scenario where the HIVE needs to be accessed via Internet, maybe with
+#       a different NAT address, a new certificate needs to be generated before deployment that includes all the
+#       IPs and FQDNs as sANs for logstash successfully establishing a connection to the HIVE for transmitting
+#       logs. Details here: https://github.com/telekom-security/tpotce?tab=readme-ov-file#distributed-deployment
+# none: This setting will disable the ssl verification check of logstash and should only be used in a testing
+#       environment where IPs often change. It is not recommended for a production environment where trust between
+#       HIVE and SENSOR is only established through a self signed certificate.
+LS_SSL_VERIFICATION=full
 # T-Pot Hive IP (only relevant for SENSOR deployment)
 # <empty>: This is empty by default.
 # <IP, FQDN>: This can be either a IP (i.e. 192.168.1.1) or a FQDN (i.e. foo.bar.local)
@@ -108,7 +121,7 @@ TPOT_DOCKER_COMPOSE=./docker-compose.yml
 TPOT_REPO=dtagdevsec
 # T-Pot Version Tag
-TPOT_VERSION=alpha
+TPOT_VERSION=24.04
 # T-Pot Pull Policy
 # always: (T-Pot default) Compose implementations SHOULD always pull the image from the registry.
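The `TPOT_HIVE_USER` hunk above derives the sensor credential string via `base64`. As a quick sanity check, the encoding round-trip can be sketched in shell (the `username:password` value is the placeholder from the `.env` comments, not a real credential):

```shell
# Encode HIVE credentials for TPOT_HIVE_USER (placeholder credentials).
# -w0 disables line wrapping (GNU coreutils; macOS base64 has no -w flag).
CRED='username:password'
TPOT_HIVE_USER=$(printf '%s' "$CRED" | base64 -w0)
echo "$TPOT_HIVE_USER"   # dXNlcm5hbWU6cGFzc3dvcmQ= (matches the .env example)

# Decode to verify the round-trip before putting the value into .env:
printf '%s' "$TPOT_HIVE_USER" | base64 -d
```

`printf '%s'` is used instead of `echo -n` to avoid a trailing newline sneaking into the encoded value on shells where `echo -n` is not portable.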


@@ -13,6 +13,7 @@ Before you post your issue make sure it has not been answered yet and provide **
 - 🔍 Use the [search function](https://github.com/dtag-dev-sec/tpotce/issues?utf8=%E2%9C%93&q=) first
 - 🧐 Check our [Wiki](https://github.com/dtag-dev-sec/tpotce/wiki) and the [discussions](https://github.com/telekom-security/tpotce/discussions)
 - 📚 Consult the documentation of 💻 your Linux OS, 🐳 [Docker](https://docs.docker.com/), the 🦌 [Elastic stack](https://www.elastic.co/guide/index.html) and the 🍯 [T-Pot Readme](https://github.com/dtag-dev-sec/tpotce/blob/master/README.md).
+- ⚙️ The [Troubleshoot Section](https://github.com/telekom-security/tpotce?tab=readme-ov-file#troubleshooting) of the [T-Pot Readme](https://github.com/dtag-dev-sec/tpotce/blob/master/README.md) is a good starting point to collect a good set of information for the issue and / or to fix things on your own.
 - **⚠️ Provide [BASIC SUPPORT INFORMATION](#-basic-support-information-commands-are-expected-to-run-as-root) or similar detailed information with regard to your issue or we will close the issue or convert it into a discussion without further interaction from the maintainers**.<br>
 # ⚠️ Basic support information (commands are expected to run as `root`)
@@ -32,7 +33,7 @@ Before you post your issue make sure it has not been answered yet and provide **
 - Did you modify any scripts or configs? If yes, please attach the changes.
 - Please provide a screenshot of `htop` and `docker stats`.
 - How much free disk space is available (`df -h`)?
-- What is the current container status (`dps.sh`)?
+- What is the current container status (`dps`)?
 - On Linux: What is the status of the T-Pot service (`systemctl status tpot`)?
 - What ports are being occupied? Stop T-Pot `systemctl stop tpot` and run `grc netstat -tulpen`
 - Stop T-Pot `systemctl stop tpot`


@@ -13,6 +13,7 @@ Before you post your issue make sure it has not been answered yet and provide **
 - 🔍 Use the [search function](https://github.com/dtag-dev-sec/tpotce/issues?utf8=%E2%9C%93&q=) first
 - 🧐 Check our [Wiki](https://github.com/dtag-dev-sec/tpotce/wiki) and the [discussions](https://github.com/telekom-security/tpotce/discussions)
 - 📚 Consult the documentation of 💻 your Linux OS, 🐳 [Docker](https://docs.docker.com/), the 🦌 [Elastic stack](https://www.elastic.co/guide/index.html) and the 🍯 [T-Pot Readme](https://github.com/dtag-dev-sec/tpotce/blob/master/README.md).
+- ⚙️ The [Troubleshoot Section](https://github.com/telekom-security/tpotce?tab=readme-ov-file#troubleshooting) of the [T-Pot Readme](https://github.com/dtag-dev-sec/tpotce/blob/master/README.md) is a good starting point to collect a good set of information for the issue and / or to fix things on your own.
 - **⚠️ Provide [BASIC SUPPORT INFORMATION](#-basic-support-information-commands-are-expected-to-run-as-root) or similar detailed information with regard to your issue or we will close the issue or convert it into a discussion without further interaction from the maintainers**.<br>
 # ⚠️ Basic support information (commands are expected to run as `root`)
@@ -32,7 +33,7 @@ Before you post your issue make sure it has not been answered yet and provide **
 - Did you modify any scripts or configs? If yes, please attach the changes.
 - Please provide a screenshot of `htop` and `docker stats`.
 - How much free disk space is available (`df -h`)?
-- What is the current container status (`dps.sh`)?
+- What is the current container status (`dps`)?
 - On Linux: What is the status of the T-Pot service (`systemctl status tpot`)?
 - What ports are being occupied? Stop T-Pot `systemctl stop tpot` and run `grc netstat -tulpen`
 - Stop T-Pot `systemctl stop tpot`


@@ -0,0 +1,49 @@
+name: "Check Basic Support Info"
+on:
+  issues:
+    types: [opened, edited]
+permissions:
+  issues: write
+  contents: read
+jobs:
+  check-issue:
+    runs-on: ubuntu-latest
+    steps:
+      - name: Check out the repository
+        uses: actions/checkout@v4
+      - name: Install jq
+        run: sudo apt-get install jq -y
+      - name: Check issue for basic support info
+        id: check_issue
+        run: |
+          REQUIRED_INFO=("What OS are you T-Pot running on?" "What is the version of the OS" "What T-Pot version are you currently using" "What architecture are you running on" "Review the \`~/install_tpot.log\`" "How long has your installation been running?" "Did you install upgrades, packages or use the update script?" "Did you modify any scripts or configs?" "Please provide a screenshot of \`htop\` and \`docker stats\`." "How much free disk space is available" "What is the current container status" "What is the status of the T-Pot service" "What ports are being occupied?")
+          ISSUE_BODY=$(cat $GITHUB_EVENT_PATH | jq -r '.issue.body')
+          MISSING_INFO=()
+          for info in "${REQUIRED_INFO[@]}"; do
+            if [[ "$ISSUE_BODY" != *"$info"* ]]; then
+              MISSING_INFO+=("$info")
+            fi
+          done
+          if [ ${#MISSING_INFO[@]} -ne 0 ]; then
+            echo "missing=true" >> $GITHUB_ENV
+          else
+            echo "missing=false" >> $GITHUB_ENV
+          fi
+      - name: Add "no basic support info" label if necessary
+        if: env.missing == 'true'
+        run: gh issue edit "$NUMBER" --add-label "$LABELS"
+        env:
+          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
+          GH_REPO: ${{ github.repository }}
+          NUMBER: ${{ github.event.issue.number }}
+          LABELS: no basic support info
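The `run:` step in the workflow above is a plain substring check: each required heading must appear verbatim in the issue body. Stripped of the GitHub Actions plumbing, the core loop can be exercised locally like this (the issue body and required strings here are made-up stand-ins, not the workflow's full list):

```shell
#!/usr/bin/env bash
# Stand-alone sketch of the workflow's substring check (sample data only).
REQUIRED_INFO=("What OS are you T-Pot running on?" "How much free disk space is available")
ISSUE_BODY="What OS are you T-Pot running on? Debian 12, fresh install."

MISSING_INFO=()
for info in "${REQUIRED_INFO[@]}"; do
  # Bash pattern match: is the required phrase a substring of the body?
  if [[ "$ISSUE_BODY" != *"$info"* ]]; then
    MISSING_INFO+=("$info")
  fi
done

if [ ${#MISSING_INFO[@]} -ne 0 ]; then
  echo "missing=true"    # the workflow would then add the "no basic support info" label
else
  echo "missing=false"
fi
```

With the sample body above the disk-space phrase is absent, so the sketch reports `missing=true`; a body containing every required phrase would report `missing=false`.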

.github/workflows/stale.yml

@@ -0,0 +1,24 @@
+name: "Tag stale issues and pull requests"
+on:
+  schedule:
+    - cron: "0 0 * * *" # Runs every day at midnight
+  workflow_dispatch: # Allows the workflow to be triggered manually
+jobs:
+  stale:
+    runs-on: ubuntu-latest
+    steps:
+      - uses: actions/stale@v7
+        with:
+          repo-token: ${{ secrets.GITHUB_TOKEN }}
+          stale-issue-message: "This issue has been marked as stale because it has had no activity for 7 days. If you are still experiencing this issue, please comment or it will be closed in 7 days."
+          stale-pr-message: "This pull request has been marked as stale because it has had no activity for 7 days. If you are still working on this, please comment or it will be closed in 7 days."
+          days-before-stale: 7
+          days-before-close: 7
+          stale-issue-label: "stale"
+          exempt-issue-labels: "keep-open"
+          stale-pr-label: "stale"
+          exempt-pr-labels: "keep-open"
+          operations-per-run: 30
+          debug-only: false


@@ -1,45 +1,36 @@
 # Release Notes / Changelog
-T-Pot 22.04.0 is probably the most feature rich release ever provided with long awaited (wanted!) features readily available after installation.
+T-Pot 24.04.0 marks probably the largest change in the history of the project. While most of the changes have been made to the underlying platform some changes will be standing out in particular - a T-Pot ISO image will no longer be provided with the benefit that T-Pot will now run on multiple Linux distributions (Alma Linux, Debian, Fedora, OpenSuse, Raspbian, Rocky Linux, Ubuntu), Raspberry Pi (optimized) and macOS / Windows (limited).
 ## New Features
-* **Distributed** Installation with **HIVE** and **HIVE_SENSOR**
-* **ARM64** support for all provided Docker images
-* **GeoIP Attack Map** visualizing Live Attacks on a dedicated webpage
-* **Kibana Live Attack Map** visualizing Live Attacks from different **HIVE_SENSORS**
-* **Blackhole** is a script trying to avoid mass scanner detection
-* **Elasticvue** a web front end for browsing and interacting with an Elastic Search cluster
-* **Ddospot** a honeypot for tracking and monitoring UDP-based Distributed Denial of Service (DDoS) attacks
-* **Endlessh** is a SSH tarpit that very slowly sends an endless, random SSH banner
-* **HellPot** is an endless honeypot based on Heffalump that sends unruly HTTP bots to hell
-* **qHoneypots** 25 honeypots in a single container for monitoring network traffic, bots activities, and username \ password credentials
-* **Redishoneypot** is a honeypot mimicking some of the Redis' functions
-* **SentryPeer** a dedicated SIP honeypot
-* **Index Lifecycle Management** for Elasticseach indices is now being used
-## Upgrades
-* **Debian 11.x** is now being used for the T-Pot ISO images and required for post installs
-* **Elastic Stack 8.x** is now provided as Docker images
+* **Distributed** Installation is now using NGINX reverse proxy instead of SSH to transmit **HIVE_SENSOR** logs to **HIVE**
+* **`deploy.sh`**, will make the deployment of sensor much easier and will automatically take care of the configuration. You only have to install the T-Pot sensor.
+* **T-Pot Init** is the foundation for running T-Pot on multiple Linux distributions and will also ensure to restart containers with failed healthchecks using **autoheal**
+* **T-Pot Installer** is now mostly Ansible based providing a universal playbook for the most common Linux distributions
+* **T-Pot Uninstaller** allows to uninstall T-Pot, while not recommended for general usage, this comes in handy for testing purposes
+* **T-Pot Customizer (`compose/customizer.py`)** is here to assist you in the creation of a customized `docker-compose.yml`
+* **T-Pot Landing Page** has been redesigned and simplified
+![T-Pot-WebUI](doc/tpotwebui.png)
+* **Kibana Dashboards, Objects** fully refreshed in favor of Lens based objects
+![Dashbaord](doc/kibana_a.png)
+* **Wordpot** is added as new addition to the available honeypots within T-Pot and will run on `tcp/8080` by default.
+* **Raspberry Pi** is now supported using a dedicated `mobile.yml` (why this is called mobile will be revealed soon!)
+* **GeoIP Attack Map** is now aware of connects / disconnects and thus eliminating required reloads
+* **Docker**, where possible, will now be installed directly from the Docker repositories to avoid any incompatibilities
+* **`.env`** now provides a single configuration file for the T-Pot related settings
+* **`genuser.sh`** can now be used to add new users to the T-Pot Landing Page as part of the T-Pot configuration file (`.env`)
 ## Updates
-* **Honeypots** and **tools** were updated to their latest masters and releases
+* **Honeypots** and **tools** were updated to their latest pushed code and / or releases
+* Where possible Docker Images will now use Alpine 3.19
 * Updates will be provided continuously through Docker Images updates
 ## Breaking Changes
-* For security reasons all Py2.x honeypots with the need of PyPi packages have been removed: **HoneyPy**, **HoneySAP** and **RDPY**
-* If you are upgrading from a previous version of T-Pot (20.06.x) you need to import the new Kibana objects or some of the functionality will be broken or will be unavailabe
-* **Cyberchef** is now part of the Nginx Docker image, no longer as individual image
-* **ElasticSearch Head** is superseded by **Elasticvue** and part the Nginx Docker image
-* **Heimdall** is no longer supported and superseded with a new Bento based landing page
-* **Elasticsearch Curator** is no longer supprted and superseded with **Index Lifecycle Policies** available through Kibana.
+* There is no option to migrate a previous installation to T-Pot 24.04.0, you can try to transfer the old `data` folder to the new T-Pot installation, but a working environment depends on too many other factors outside of our control and a new installation is simply faster.
+* Most of the support scripts were moved into the **T-Pot Init** image and are no longer available directly on the host.
+* Cockpit is no longer available as part of T-Pot itself. However, where supported, you can simply install the `cockpit` package.
 # Thanks & Credits
-* @ghenry, for some fun late night debugging and of course SentryPeer!
-* @giga-a, for adding much appreciated features (i.e. JSON logging, X-Forwarded-For, etc.) and of course qHoneypots!
 * @sp3t3rs, @trixam, for their backend and ews support!
-* @tadashi-oya, for spotting some errors and propose fixes!
-* @tmariuss, @shaderecker for their cloud contributions!
-* @vorband, for much appreciated and helpful insights regarding the GeoIP Attack Map!
-* @yunginnanet, on not giving up on squashing a bug and of course Hellpot!
+* @shark4ce for taking the time to test, debug and offer a solution #1472.
 ... and many others from the T-Pot community by opening valued issues and discussions, suggesting ideas and thus helping to improve T-Pot!


@@ -38,6 +38,6 @@ keywords:
 - docker
 - elk
 license: GPL-3.0
-commit: unreleased, under heavy development
+commit: release
 version: 24.04.0
 date-released: '2024-04-22'

README.md

@ -12,15 +12,13 @@ T-Pot is the all in one, optionally distributed, multiarch (amd64, arm64) honeyp
4. Install `curl`: `$ sudo [apt, dnf, zypper] install curl` if not installed already 4. Install `curl`: `$ sudo [apt, dnf, zypper] install curl` if not installed already
5. Run installer as non-root from `$HOME`: 5. Run installer as non-root from `$HOME`:
``` ```
env bash -c "$(curl -sL https://github.com/telekom-security/tpotce/raw/alpha/install.sh)" env bash -c "$(curl -sL https://github.com/telekom-security/tpotce/raw/master/install.sh)"
``` ```
* Follow instructions, read messages, check for possible port conflicts and reboot * Follow instructions, read messages, check for possible port conflicts and reboot
# Table of Contents
<!-- TOC --> <!-- TOC -->
* [T-Pot - The All In One Multi Honeypot Platform](#t-pot---the-all-in-one-multi-honeypot-platform) * [T-Pot - The All In One Multi Honeypot Platform](#t-pot---the-all-in-one-multi-honeypot-platform)
* [TL;DR](#tldr) * [TL;DR](#tldr)
* [Table of Contents](#table-of-contents)
* [Disclaimer](#disclaimer) * [Disclaimer](#disclaimer)
* [Technical Concept](#technical-concept) * [Technical Concept](#technical-concept)
* [Technical Architecture](#technical-architecture) * [Technical Architecture](#technical-architecture)
@ -39,11 +37,13 @@ env bash -c "$(curl -sL https://github.com/telekom-security/tpotce/raw/alpha/ins
* [macOS & Windows](#macos--windows) * [macOS & Windows](#macos--windows)
* [Installation Types](#installation-types) * [Installation Types](#installation-types)
* [Standard / HIVE](#standard--hive) * [Standard / HIVE](#standard--hive)
* [**Distributed**](#distributed) * [Distributed](#distributed)
* [Uninstall T-Pot](#uninstall-t-pot) * [Uninstall T-Pot](#uninstall-t-pot)
* [First Start](#first-start) * [First Start](#first-start)
* [Standalone First Start](#standalone-first-start) * [Standalone First Start](#standalone-first-start)
* [Distributed Deployment](#distributed-deployment) * [Distributed Deployment](#distributed-deployment)
* [Planning and Certificates](#planning-and-certificates)
* [Deploying Sensors](#deploying-sensors)
* [Community Data Submission](#community-data-submission) * [Community Data Submission](#community-data-submission)
* [Opt-In HPFEEDS Data Submission](#opt-in-hpfeeds-data-submission) * [Opt-In HPFEEDS Data Submission](#opt-in-hpfeeds-data-submission)
* [Remote Access and Tools](#remote-access-and-tools) * [Remote Access and Tools](#remote-access-and-tools)
@ -60,8 +60,10 @@ env bash -c "$(curl -sL https://github.com/telekom-security/tpotce/raw/alpha/ins
* [Maintenance](#maintenance) * [Maintenance](#maintenance)
* [General Updates](#general-updates) * [General Updates](#general-updates)
* [Update Script](#update-script) * [Update Script](#update-script)
* [Daily Reboot](#daily-reboot)
* [Known Issues](#known-issues) * [Known Issues](#known-issues)
* [**Docker Images Fail to Download**](#docker-images-fail-to-download) * [Docker Images Fail to Download](#docker-images-fail-to-download)
* [T-Pot Networking Fails](#t-pot-networking-fails)
* [Start T-Pot](#start-t-pot) * [Start T-Pot](#start-t-pot)
* [Stop T-Pot](#stop-t-pot) * [Stop T-Pot](#stop-t-pot)
* [T-Pot Data Folder](#t-pot-data-folder) * [T-Pot Data Folder](#t-pot-data-folder)
@ -71,8 +73,8 @@ env bash -c "$(curl -sL https://github.com/telekom-security/tpotce/raw/alpha/ins
* [Blackhole](#blackhole) * [Blackhole](#blackhole)
* [Add Users to Nginx (T-Pot WebUI)](#add-users-to-nginx-t-pot-webui) * [Add Users to Nginx (T-Pot WebUI)](#add-users-to-nginx-t-pot-webui)
* [Import and Export Kibana Objects](#import-and-export-kibana-objects) * [Import and Export Kibana Objects](#import-and-export-kibana-objects)
* [**Export**](#export) * [Export](#export)
* [**Import**](#import) * [Import](#import)
* [Troubleshooting](#troubleshooting) * [Troubleshooting](#troubleshooting)
* [Logs](#logs) * [Logs](#logs)
* [RAM and Storage](#ram-and-storage) * [RAM and Storage](#ram-and-storage)
@ -125,6 +127,7 @@ T-Pot offers docker images for the following honeypots ...
* [wordpot](https://github.com/gbrindisi/wordpot) * [wordpot](https://github.com/gbrindisi/wordpot)
... alongside the following tools ... ... alongside the following tools ...
* [Autoheal](https://github.com/willfarrell/docker-autoheal) a tool to automatically restart containers with failed healthchecks.
* [Cyberchef](https://gchq.github.io/CyberChef/) a web app for encryption, encoding, compression and data analysis. * [Cyberchef](https://gchq.github.io/CyberChef/) a web app for encryption, encoding, compression and data analysis.
* [Elastic Stack](https://www.elastic.co/videos) to beautifully visualize all the events captured by T-Pot. * [Elastic Stack](https://www.elastic.co/videos) to beautifully visualize all the events captured by T-Pot.
* [Elasticvue](https://github.com/cars10/elasticvue/) a web front end for browsing and interacting with an Elasticsearch cluster. * [Elasticvue](https://github.com/cars10/elasticvue/) a web front end for browsing and interacting with an Elasticsearch cluster.
@ -132,7 +135,7 @@ T-Pot offers docker images for the following honeypots ...
* [T-Pot-Attack-Map](https://github.com/t3chn0m4g3/t-pot-attack-map) a beautifully animated attack map for T-Pot. * [T-Pot-Attack-Map](https://github.com/t3chn0m4g3/t-pot-attack-map) a beautifully animated attack map for T-Pot.
* [P0f](https://lcamtuf.coredump.cx/p0f3/) is a tool for purely passive traffic fingerprinting. * [P0f](https://lcamtuf.coredump.cx/p0f3/) is a tool for purely passive traffic fingerprinting.
* [Spiderfoot](https://github.com/smicallef/spiderfoot) an open source intelligence automation tool. * [Spiderfoot](https://github.com/smicallef/spiderfoot) an open source intelligence automation tool.
* [Suricata](http://suricata-ids.org/) a Network Security Monitoring engine. * [Suricata](https://suricata.io/) a Network Security Monitoring engine.
... to give you the best out-of-the-box experience possible and an easy-to-use multi-honeypot system. ... to give you the best out-of-the-box experience possible and an easy-to-use multi-honeypot system.
<br><br> <br><br>
@ -280,16 +283,22 @@ Once you are familiar with how things work you should choose a network you suspe
<br><br> <br><br>
## Choose your distro ## Choose your distro
Choose a supported distro of your choice. It is recommended to use the minimum / netiso installers linked below and only install a minimalistic set of packages. SSH is mandatory or you will not be able to connect to the machine remotely. **Steps to Follow:**
1. Download a supported Linux distribution from the list below.
2. During installation choose a **minimum**, **netinstall** or **server** version that will only install essential packages.
3. **Never** install a graphical desktop environment such as Gnome or KDE. T-Pot will fail to work with it due to port conflicts.
4. Make sure to install SSH, so you can connect to the machine remotely.
| Distribution Name | x64 | arm64 |
|:---|:---|:---|
| [Alma Linux OS 9.4 Boot ISO](https://almalinux.org) | [download](https://repo.almalinux.org/almalinux/9.4/isos/x86_64/AlmaLinux-9.4-x86_64-boot.iso) | [download](https://repo.almalinux.org/almalinux/9.4/isos/aarch64/AlmaLinux-9.4-aarch64-boot.iso) |
| [Debian 12 Network Install](https://www.debian.org/CD/netinst/index.en.html) | [download](https://cdimage.debian.org/debian-cd/current/amd64/iso-cd/debian-12.8.0-amd64-netinst.iso) | [download](https://cdimage.debian.org/debian-cd/current/arm64/iso-cd/debian-12.8.0-arm64-netinst.iso) |
| [Fedora Server 40 Network Install](https://fedoraproject.org/server/download) | [download](https://download.fedoraproject.org/pub/fedora/linux/releases/40/Server/x86_64/iso/Fedora-Server-netinst-x86_64-40-1.14.iso) | [download](https://download.fedoraproject.org/pub/fedora/linux/releases/40/Server/aarch64/iso/Fedora-Server-netinst-aarch64-40-1.14.iso) |
| [OpenSuse Tumbleweed Network Image](https://get.opensuse.org/tumbleweed/#download) | [download](https://download.opensuse.org/tumbleweed/iso/openSUSE-Tumbleweed-NET-x86_64-Current.iso) | [download](https://download.opensuse.org/ports/aarch64/tumbleweed/iso/openSUSE-Tumbleweed-NET-aarch64-Current.iso) |
| [Rocky Linux OS 9.4 Boot ISO](https://rockylinux.org/download) | [download](https://download.rockylinux.org/pub/rocky/9.4/isos/x86_64/Rocky-9.4-x86_64-boot.iso) | [download](https://download.rockylinux.org/pub/rocky/9.4/isos/aarch64/Rocky-9.4-aarch64-boot.iso) |
| [Ubuntu 24.04 Live Server](https://ubuntu.com/download/server) | [download](https://releases.ubuntu.com/24.04/ubuntu-24.04.1-live-server-amd64.iso) | [download](https://cdimage.ubuntu.com/releases/24.04/release/ubuntu-24.04.1-live-server-arm64.iso) |
<br>
@ -326,10 +335,10 @@ Choose a supported distro of your choice. It is recommended to use the minimum /
Sometimes it is just nice if you can spin up a T-Pot instance on macOS or Windows, e.g. for development, testing or just the fun of it. As Docker Desktop is rather limited, not all honeypot types or T-Pot features are supported. Also remember that, by default, the macOS and Windows firewalls block access from remote, so testing is limited to the host. For production it is recommended to run T-Pot on [Linux](#choose-your-distro).<br>
To get things up and running just follow these steps:
1. Install Docker Desktop for [macOS](https://docs.docker.com/desktop/install/mac-install/) or [Windows](https://docs.docker.com/desktop/install/windows-install/).
2. Clone the GitHub repository: `git clone https://github.com/telekom-security/tpotce` (on Windows make sure the code is checked out with `LF` instead of `CRLF`!).
3. Go to the repository folder: `cd ~/tpotce`
4. Copy the macOS / Windows compose file: `cp compose/mac_win.yml ./docker-compose.yml`
5. Create a `WEB_USER` by running `~/tpotce/genuser.sh` (macOS) or `~/tpotce/genuserwin.ps1` (Windows).
6. Adjust the `.env` file by changing `TPOT_OSTYPE=linux` to either `mac` or `win`:
```
# OSType (linux, mac, win)
@ -347,7 +356,7 @@ With T-Pot Standard / HIVE all services, tools, honeypots, etc. will be installe
Once the installation is finished you can proceed to [First Start](#first-start).
<br><br>
### Distributed
The distributed version of T-Pot requires at least two hosts:
- the T-Pot **HIVE**, the standard installation of T-Pot (install this first!),
- and a T-Pot **SENSOR**, which will host only the honeypots, some tools and transmit log data to the **HIVE**.
@ -381,7 +390,36 @@ There is not much to do except to login and check via `dps.sh` if all services a
<br><br>
## Distributed Deployment
### Planning and Certificates
A distributed deployment requires some planning, as **T-Pot Init** will only create a self-signed certificate for the IP of the **HIVE** host, which is usually suitable for simple setups. Since **logstash** checks for a valid certificate upon connection, a distributed setup in which the **HIVE** is reachable on multiple IPs (e.g. an RFC 1918 address and a public NAT IP) and maybe even a domain name will result in a connection error because the certificate cannot be validated; such a setup needs a certificate with a common name and SANs (Subject Alternative Names).<br>
Before deploying any sensors make sure you have planned out domain names and IPs properly to avoid issues with the certificate. For more details see [issue #1543](https://github.com/telekom-security/tpotce/issues/1543).<br>
Adjust the example to your IP / domain setup and follow the commands to change the certificate of **HIVE**:
```
sudo systemctl stop tpot
sudo openssl req \
-nodes \
-x509 \
-sha512 \
-newkey rsa:8192 \
-keyout "$HOME/tpotce/data/nginx/cert/nginx.key" \
-out "$HOME/tpotce/data/nginx/cert/nginx.crt" \
-days 3650 \
-subj '/C=AU/ST=Some-State/O=Internet Widgits Pty Ltd' \
-addext "subjectAltName = IP:192.168.1.200, IP:1.2.3.4, DNS:my.primary.domain, DNS:my.secondary.domain"
sudo chmod 774 $HOME/tpotce/data/nginx/cert/*
sudo chown tpot:tpot $HOME/tpotce/data/nginx/cert/*
sudo systemctl start tpot
```
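To make sure the new certificate actually carries the intended SANs, you can inspect it before deploying any sensors (a quick check, assuming the default `~/tpotce` path from the example above):

```shell
# Print subject and Subject Alternative Names of the freshly generated HIVE certificate
sudo openssl x509 \
  -in "$HOME/tpotce/data/nginx/cert/nginx.crt" \
  -noout -subject -ext subjectAltName
```

Every IP and DNS entry passed via `-addext` should show up under `X509v3 Subject Alternative Name`.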
The T-Pot configuration file (`.env`) allows you to disable the SSL verification for logstash connections from the **SENSOR** to the **HIVE** by setting `LS_SSL_VERIFICATION=none`. For security reasons this is only recommended for lab or test environments.<br><br>
If you choose to use a valid certificate for the **HIVE** signed by a CA (e.g. Let's Encrypt), logstash, and therefore the **SENSOR**, should have no problem connecting and transmitting its logs to the **HIVE**.
### Deploying Sensors
Once you have rebooted the **SENSOR** as instructed by the installer you can continue with the distributed deployment by logging into the **HIVE** and changing into the `~/tpotce` folder. Make sure you have understood the [Planning and Certificates](#planning-and-certificates) section before continuing with the actual deployment.
If you have not done so already, generate an SSH key to securely log in to the **SENSOR** and to allow `Ansible` to run a playbook on the sensor:
1. Run `ssh-keygen`, follow the instructions and leave the passphrase empty:
@ -413,6 +451,10 @@ If you have not done already generate a SSH key to securely login to the **SENSO
4. Once the key is successfully deployed run `./deploy.sh` and follow the instructions.
<br><br>
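Key generation and deployment can be sketched as follows (the sensor user and address are placeholders; T-Pot moves SSH to port 64295 by default, adjust if your setup differs):

```shell
# Generate an ed25519 key with an empty passphrase so Ansible can use it non-interactively
ssh-keygen -t ed25519 -f "$HOME/.ssh/id_ed25519" -N ""

# Copy the public key to the SENSOR (hypothetical user/IP, T-Pot's default SSH port)
ssh-copy-id -i "$HOME/.ssh/id_ed25519.pub" -p 64295 user@192.168.1.201
```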
### Removing Sensors
Identify the `TPOT_HIVE_USER` ENV of the SENSOR in its `$HOME/tpotce/.env` config (it is a base64 encoded string). Then locate the same string in the `LS_WEB_USER` ENV on the HIVE in its `$HOME/tpotce/.env` config. Remove the string there and restart T-Pot.<br>
Now you can safely delete the SENSOR machine.
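To double-check you are removing the right entry, decode the string first; the credential below is made up for illustration:

```shell
# Decode a (made-up) TPOT_HIVE_USER / LS_WEB_USER value to see which account it belongs to
echo 'c2Vuc29yOnNlY3JldA==' | base64 -d   # → sensor:secret
```

The decoded value shows the account the SENSOR authenticates with against the HIVE, so you can match it to the machine you are decommissioning.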
## Community Data Submission
T-Pot is provided in order to make it accessible to everyone interested in honeypots. By default, the captured data is submitted to a community backend. This community backend uses the data to feed [Sicherheitstacho](https://sicherheitstacho.eu).
You may opt out of the submission by removing the `# Ewsposter service` section from `~/tpotce/docker-compose.yml` by following these steps:
@ -496,7 +538,7 @@ On the T-Pot Landing Page just click on `Cyberchef` and you will be forwarded to
<br><br>
## Elasticvue
On the T-Pot Landing Page just click on `Elasticvue` and you will be forwarded to Elasticvue.
![Elasticvue](doc/elasticvue.png)
<br><br>
@ -564,17 +606,27 @@ The update script will ...
- update all files in `~/tpotce` to be in sync with the T-Pot master branch
- restore your custom `ews.cfg` from `~/tpotce/data/ews/conf` and the T-Pot configuration (`~/tpotce/.env`).
## Daily Reboot
By default T-Pot adds a daily reboot including some cleanup. You can adjust this line with `sudo crontab -e`:
```
#Ansible: T-Pot Daily Reboot
42 2 * * * bash -c 'systemctl stop tpot.service && docker container prune -f; docker image prune -f; docker volume prune -f; /usr/sbin/shutdown -r +1 "T-Pot Daily Reboot"'
```
## Known Issues
The following issues are known; simply follow the described steps to solve them.
<br><br>
### Docker Images Fail to Download
Some time ago Docker introduced download [rate limits](https://docs.docker.com/docker-hub/download-rate-limit/#:~:text=Docker%20Hub%20limits%20the%20number,pulls%20per%206%20hour%20period.). If you are frequently downloading Docker images via a single or shared IP, the IP address might have exhausted the Docker download rate limit. Log in to your Docker account to extend the rate limit.
```
sudo su -
docker login
```
### T-Pot Networking Fails
T-Pot is designed to run on machines with a single NIC. T-Pot will try to grab the interface with the default route; however, this is not guaranteed to succeed on multi-NIC machines. Whenever possible, use T-Pot on machines with only a single NIC.
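To see which interface T-Pot is most likely to pick, check which one carries the default route (a quick diagnostic, not part of the installer):

```shell
# Show the interface holding the default route, e.g. "default via 192.168.1.1 dev eth0 ..."
ip route show default
```

If this prints more than one `default` line (multiple NICs or gateways), consider disabling the extra interfaces before installing.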
## Start T-Pot
The T-Pot service automatically starts and stops on each reboot (which occurs once daily, as set up in `sudo crontab -l` during installation).
<br>
@ -630,14 +682,15 @@ Enabling this feature will drastically reduce attackers visibility and consequen
## Add Users to Nginx (T-Pot WebUI)
Nginx (T-Pot WebUI) allows you to add as many `<WEB_USER>` accounts as you want (according to the [User Types](#user-types)).<br>
To **add** a new user run `~/tpotce/genuser.sh`.<br>
To **remove** users open `~/tpotce/.env`, locate `WEB_USER` and remove the corresponding base64 string (to decode: `echo <base64_string> | base64 -d`, or open CyberChef and load the "From Base64" recipe).<br>
For the changes to take effect you need to restart T-Pot using `systemctl stop tpot` and `systemctl start tpot` or `sudo reboot`.
<br><br>
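For illustration, a `WEB_USER` entry is essentially a base64-encoded `htpasswd`-style line (username plus password hash) as used by Nginx basic auth; this is an assumption about the exact format, so use `genuser.sh` for real accounts. A sketch of the round trip with made-up credentials:

```shell
# Build a basic-auth line with an apr1 hash and base64-encode it (illustrative only)
entry="$(printf 'someuser:%s' "$(openssl passwd -apr1 'somepassword')" | base64 -w0)"
echo "$entry"

# Decoding it recovers the someuser:$apr1$... line
echo "$entry" | base64 -d
```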
## Import and Export Kibana Objects
Some T-Pot updates will require you to update the Kibana objects, either to support new honeypots or to improve existing dashboards or visualizations. Make sure to ***export*** first so you do not lose any of your adjustments.
### Export
1. Go to Kibana
2. Click on "Stack Management"
3. Click on "Saved Objects"
@ -645,7 +698,7 @@ Some T-Pot updates will require you to update the Kibana objects. Either to supp
5. Click on "Export all"
This will export an NDJSON file with all your objects. Always run a full export to make sure all references are included.
### Import
1. [Download the NDJSON file](https://github.com/dtag-dev-sec/tpotce/blob/master/docker/tpotinit/dist/etc/objects/kibana_export.ndjson.zip) and unzip it.
2. Go to Kibana
3. Click on "Stack Management"
@ -702,10 +755,10 @@ Use the search function, it is possible a similar discussion has been opened alr
# Licenses
The software that T-Pot is built on uses the following licenses.
<br>GPLv2: [conpot](https://github.com/mushorg/conpot/blob/master/LICENSE.txt), [dionaea](https://github.com/DinoTools/dionaea/blob/master/LICENSE), [honeytrap](https://github.com/armedpot/honeytrap/blob/master/LICENSE), [suricata](https://suricata.io/features/open-source/)
<br>GPLv3: [adbhoney](https://github.com/huuck/ADBHoney), [elasticpot](https://gitlab.com/bontchev/elasticpot/-/blob/master/LICENSE), [ewsposter](https://github.com/telekom-security/ews/), [log4pot](https://github.com/thomaspatzke/Log4Pot/blob/master/LICENSE), [fatt](https://github.com/0x4D31/fatt/blob/master/LICENSE), [heralding](https://github.com/johnnykv/heralding/blob/master/LICENSE.txt), [ipphoney](https://gitlab.com/bontchev/ipphoney/-/blob/master/LICENSE), [redishoneypot](https://github.com/cypwnpwnsocute/RedisHoneyPot/blob/main/LICENSE), [sentrypeer](https://github.com/SentryPeer/SentryPeer/blob/main/LICENSE.GPL-3.0-only), [snare](https://github.com/mushorg/snare/blob/master/LICENSE), [tanner](https://github.com/mushorg/snare/blob/master/LICENSE)
<br>Apache 2 License: [cyberchef](https://github.com/gchq/CyberChef/blob/master/LICENSE), [dicompot](https://github.com/nsmfoo/dicompot/blob/master/LICENSE), [elasticsearch](https://github.com/elasticsearch/elasticsearch/blob/master/LICENSE.txt), [logstash](https://github.com/elasticsearch/logstash/blob/master/LICENSE), [kibana](https://github.com/elasticsearch/kibana/blob/master/LICENSE.md), [docker](https://github.com/docker/docker/blob/master/LICENSE)
<br>MIT license: [autoheal](https://github.com/willfarrell/docker-autoheal?tab=MIT-1-ov-file#readme), [ciscoasa](https://github.com/Cymmetria/ciscoasa_honeypot/blob/master/LICENSE), [ddospot](https://github.com/aelth/ddospot/blob/master/LICENSE), [elasticvue](https://github.com/cars10/elasticvue/blob/master/LICENSE), [glutton](https://github.com/mushorg/glutton/blob/master/LICENSE), [hellpot](https://github.com/yunginnanet/HellPot/blob/master/LICENSE), [maltrail](https://github.com/stamparm/maltrail/blob/master/LICENSE)
<br>Unlicense: [endlessh](https://github.com/skeeto/endlessh/blob/master/UNLICENSE)
<br>Other: [citrixhoneypot](https://github.com/MalwareTech/CitrixHoneypot#licencing-agreement-malwaretech-public-licence), [cowrie](https://github.com/cowrie/cowrie/blob/master/LICENSE.rst), [mailoney](https://github.com/awhitehatter/mailoney), [Elastic License](https://www.elastic.co/licensing/elastic-license), [Wordpot](https://github.com/gbrindisi/wordpot)
<br>AGPL-3.0: [honeypots](https://github.com/qeeqbox/honeypots/blob/main/LICENSE)
@ -750,7 +803,7 @@ Without open source and the development community we are proud to be a part of,
* [spiderfoot](https://github.com/smicallef/spiderfoot)
* [snare](https://github.com/mushorg/snare/graphs/contributors)
* [tanner](https://github.com/mushorg/tanner/graphs/contributors)
* [suricata](https://github.com/OISF/suricata/graphs/contributors)
* [wordpot](https://github.com/gbrindisi/wordpot)
**The following companies and organizations**
@ -772,4 +825,4 @@ And from @robcowart (creator of [ElastiFlow](https://github.com/robcowart/elasti
<br><br>
**Thank you!**
![Alt](https://repobeats.axiom.co/api/embed/75368f879326a61370e485df52906ae0c1f59fbb.svg "Repobeats analytics image")
---
@ -3,8 +3,8 @@
## Supported Versions
| Version | Supported |
|-------|--------------------|
| 24.04 | :white_check_mark: |
## Reporting a Vulnerability
---
@ -12,7 +12,7 @@ version = \
# This script is intended for users who want to build a customized docker-compose.yml for T-Pot.
# T-Pot Service Builder will ask for all the docker services to be included in docker-compose.yml.
# The configuration file will be checked for conflicting ports.
# Port conflicts have to be resolved manually or by re-running the script and excluding the conflicting services.
# Review the resulting docker-compose-custom.yml and adjust to your needs by (un)commenting the corresponding lines in the config.
"""
@ -157,7 +157,6 @@ def main():
    remove_unused_networks(selected_services, services, networks)
    output_config = {
        'version': '3.9',
        'networks': networks,
        'services': selected_services,
    }
---
@ -1,15 +1,9 @@
# T-Pot: MAC_WIN
version: '3.9'
networks:
  tpotinit_local:
  adbhoney_local:
  ciscoasa_local:
  citrixhoneypot_local:
  conpot_local_IEC104:
  conpot_local_guardian_ast:
  conpot_local_ipmi:
  conpot_local_kamstrup_382:
  cowrie_local:
  ddospot_local:
  dicompot_local:
@ -21,6 +15,7 @@ networks:
  medpot_local:
  redishoneypot_local:
  sentrypeer_local:
  suricata_local:
  tanner_local:
  wordpot_local:
  nginx_local:
@ -52,6 +47,7 @@ services:
      - ${TPOT_DOCKER_COMPOSE}:/tmp/tpot/docker-compose.yml:ro
      - ${TPOT_DATA_PATH}/blackhole:/etc/blackhole
      - ${TPOT_DATA_PATH}:/data
      - /var/run/docker.sock:/var/run/docker.sock:ro

##################
@ -113,108 +109,6 @@ services:
    volumes:
      - ${TPOT_DATA_PATH}/citrixhoneypot/log:/opt/citrixhoneypot/logs

# Conpot IEC104 service
  conpot_IEC104:
    container_name: conpot_iec104
    restart: always
    depends_on:
      tpotinit:
        condition: service_healthy
    environment:
      - CONPOT_CONFIG=/etc/conpot/conpot.cfg
      - CONPOT_JSON_LOG=/var/log/conpot/conpot_IEC104.json
      - CONPOT_LOG=/var/log/conpot/conpot_IEC104.log
      - CONPOT_TEMPLATE=IEC104
      - CONPOT_TMP=/tmp/conpot
    tmpfs:
      - /tmp/conpot:uid=2000,gid=2000
    networks:
      - conpot_local_IEC104
    ports:
      - "161:161/udp"
      - "2404:2404"
    image: ${TPOT_REPO}/conpot:${TPOT_VERSION}
    pull_policy: ${TPOT_PULL_POLICY}
    read_only: true
    volumes:
      - ${TPOT_DATA_PATH}/conpot/log:/var/log/conpot

# Conpot guardian_ast service
  conpot_guardian_ast:
    container_name: conpot_guardian_ast
    restart: always
    depends_on:
      tpotinit:
        condition: service_healthy
    environment:
      - CONPOT_CONFIG=/etc/conpot/conpot.cfg
      - CONPOT_JSON_LOG=/var/log/conpot/conpot_guardian_ast.json
      - CONPOT_LOG=/var/log/conpot/conpot_guardian_ast.log
      - CONPOT_TEMPLATE=guardian_ast
      - CONPOT_TMP=/tmp/conpot
    tmpfs:
      - /tmp/conpot:uid=2000,gid=2000
    networks:
      - conpot_local_guardian_ast
    ports:
      - "10001:10001"
    image: ${TPOT_REPO}/conpot:${TPOT_VERSION}
    pull_policy: ${TPOT_PULL_POLICY}
    read_only: true
    volumes:
      - ${TPOT_DATA_PATH}/conpot/log:/var/log/conpot

# Conpot ipmi
  conpot_ipmi:
    container_name: conpot_ipmi
    restart: always
    depends_on:
      tpotinit:
        condition: service_healthy
    environment:
      - CONPOT_CONFIG=/etc/conpot/conpot.cfg
      - CONPOT_JSON_LOG=/var/log/conpot/conpot_ipmi.json
      - CONPOT_LOG=/var/log/conpot/conpot_ipmi.log
      - CONPOT_TEMPLATE=ipmi
      - CONPOT_TMP=/tmp/conpot
    tmpfs:
      - /tmp/conpot:uid=2000,gid=2000
    networks:
      - conpot_local_ipmi
    ports:
      - "623:623/udp"
    image: ${TPOT_REPO}/conpot:${TPOT_VERSION}
    pull_policy: ${TPOT_PULL_POLICY}
    read_only: true
    volumes:
      - ${TPOT_DATA_PATH}/conpot/log:/var/log/conpot

# Conpot kamstrup_382
  conpot_kamstrup_382:
    container_name: conpot_kamstrup_382
    restart: always
    depends_on:
      tpotinit:
        condition: service_healthy
    environment:
      - CONPOT_CONFIG=/etc/conpot/conpot.cfg
      - CONPOT_JSON_LOG=/var/log/conpot/conpot_kamstrup_382.json
      - CONPOT_LOG=/var/log/conpot/conpot_kamstrup_382.log
      - CONPOT_TEMPLATE=kamstrup_382
      - CONPOT_TMP=/tmp/conpot
    tmpfs:
      - /tmp/conpot:uid=2000,gid=2000
    networks:
      - conpot_local_kamstrup_382
    ports:
      - "1025:1025"
      - "50100:50100"
    image: ${TPOT_REPO}/conpot:${TPOT_VERSION}
    pull_policy: ${TPOT_PULL_POLICY}
    read_only: true
    volumes:
      - ${TPOT_DATA_PATH}/conpot/log:/var/log/conpot
# Cowrie service
  cowrie:
    container_name: cowrie
@ -250,7 +144,7 @@ services:
      - ddospot_local
    ports:
      - "19:19/udp"
      # - "53:53/udp"
      - "123:123/udp"
      # - "161:161/udp"
      - "1900:1900/udp"
@ -302,7 +196,7 @@ services:
      - "81:81"
      - "135:135"
      # - "443:443"
      # - "445:445"
      - "1433:1433"
      - "1723:1723"
      - "1883:1883"
@ -616,15 +510,16 @@ services:
    depends_on:
      tpotinit:
        condition: service_healthy
    networks:
      - suricata_local
    network_mode: "host"
    cap_add:
      - NET_ADMIN
      - SYS_NICE
      - NET_RAW
    environment:
      - OINKCODE=${OINKCODE:-OPEN} # Default to OPEN if unset or NULL (value provided by T-Pot .env)
      # Loading external Rules from URL
      # - FROMURL="https://username:password@yoururl.com|https://username:password@otherurl.com"
    image: ${TPOT_REPO}/suricata:${TPOT_VERSION}
    pull_policy: ${TPOT_PULL_POLICY}
    volumes:
@ -695,6 +590,7 @@ services:
      - TPOT_TYPE=${TPOT_TYPE:-HIVE}
      - TPOT_HIVE_USER=${TPOT_HIVE_USER}
      - TPOT_HIVE_IP=${TPOT_HIVE_IP}
      - LS_SSL_VERIFICATION=${LS_SSL_VERIFICATION:-full}
    ports:
      - "127.0.0.1:64305:64305"
    mem_limit: 2g
---
@ -1,6 +1,4 @@
# T-Pot: MINI
version: '3.9'
networks:
  adbhoney_local:
  ciscoasa_local:
@ -411,6 +409,7 @@ services:
      - TPOT_TYPE=${TPOT_TYPE:-HIVE}
      - TPOT_HIVE_USER=${TPOT_HIVE_USER}
      - TPOT_HIVE_IP=${TPOT_HIVE_IP}
      - LS_SSL_VERIFICATION=${LS_SSL_VERIFICATION:-full}
    ports:
      - "127.0.0.1:64305:64305"
    mem_limit: 2g
---
@ -3,8 +3,6 @@
# T-Pot on a Raspberry Pi 4 (8GB of RAM).
# The standard docker compose file should work mostly fine (depending on traffic) if you do not enable a
# desktop environment such as LXDE and meet the minimum requirements of 8GB RAM.
version: '3.9'
networks:
  ciscoasa_local:
  citrixhoneypot_local:
@ -347,6 +345,30 @@ services:
    volumes:
      - ${TPOT_DATA_PATH}/ipphoney/log:/opt/ipphoney/log

# Log4pot service
  log4pot:
    container_name: log4pot
    restart: always
    depends_on:
      logstash:
        condition: service_healthy
    tmpfs:
      - /tmp:uid=2000,gid=2000
    networks:
      - log4pot_local
    ports:
      # - "80:8080"
      # - "443:8080"
      # - "8080:8080"
      # - "9200:8080"
      - "25565:8080"
    image: ${TPOT_REPO}/log4pot:${TPOT_VERSION}
    pull_policy: ${TPOT_PULL_POLICY}
    read_only: true
    volumes:
      - ${TPOT_DATA_PATH}/log4pot/log:/var/log/log4pot/log
      - ${TPOT_DATA_PATH}/log4pot/payloads:/var/log/log4pot/payloads

# Mailoney service
  mailoney:
    container_name: mailoney
@ -371,30 +393,6 @@ services:
    volumes:
      - ${TPOT_DATA_PATH}/mailoney/log:/opt/mailoney/logs

# Log4pot service
  log4pot:
    container_name: log4pot
    restart: always
    depends_on:
      logstash:
        condition: service_healthy
    tmpfs:
      - /tmp:uid=2000,gid=2000
    networks:
      - log4pot_local
    ports:
      # - "80:8080"
      # - "443:8080"
      - "8080:8080"
      # - "9200:8080"
      - "25565:8080"
    image: ${TPOT_REPO}/log4pot:${TPOT_VERSION}
    pull_policy: ${TPOT_PULL_POLICY}
    read_only: true
    volumes:
      - ${TPOT_DATA_PATH}/log4pot/log:/var/log/log4pot/log
      - ${TPOT_DATA_PATH}/log4pot/payloads:/var/log/log4pot/payloads

# Medpot service
  medpot:
    container_name: medpot
@ -537,7 +535,7 @@ services:
    container_name: wordpot
    restart: always
    depends_on:
      logstash:
        condition: service_healthy
    networks:
      - wordpot_local
@ -594,6 +592,7 @@ services:
- TPOT_TYPE=${TPOT_TYPE:-HIVE} - TPOT_TYPE=${TPOT_TYPE:-HIVE}
- TPOT_HIVE_USER=${TPOT_HIVE_USER} - TPOT_HIVE_USER=${TPOT_HIVE_USER}
- TPOT_HIVE_IP=${TPOT_HIVE_IP} - TPOT_HIVE_IP=${TPOT_HIVE_IP}
- LS_SSL_VERIFICATION=${LS_SSL_VERIFICATION:-full}
ports: ports:
- "127.0.0.1:64305:64305" - "127.0.0.1:64305:64305"
mem_limit: 2g mem_limit: 2g
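The new `- LS_SSL_VERIFICATION=${LS_SSL_VERIFICATION:-full}` entries in the Logstash hunks rely on Docker Compose's `${VAR:-default}` interpolation: a value set in `.env` wins, otherwise `full` is used. A minimal Python sketch of that substitution rule (an illustration of the semantics only, not part of T-Pot):

```python
import re

def interpolate(template: str, env: dict) -> str:
    """Resolve ${VAR} and ${VAR:-default} roughly the way Docker Compose does:
    for ':-', an unset or empty variable falls back to the default."""
    pattern = re.compile(r"\$\{(\w+)(?::-([^}]*))?\}")

    def replace(match: re.Match) -> str:
        name, default = match.group(1), match.group(2)
        value = env.get(name)
        if value:                      # set and non-empty
            return value
        return default if default is not None else ""

    return pattern.sub(replace, template)

# Unset -> the default applies; set in .env -> the override wins.
print(interpolate("LS_SSL_VERIFICATION=${LS_SSL_VERIFICATION:-full}", {}))
print(interpolate("LS_SSL_VERIFICATION=${LS_SSL_VERIFICATION:-full}",
                  {"LS_SSL_VERIFICATION": "none"}))
```

Note that Compose additionally distinguishes `:-` (unset or empty) from `-` (unset only); the sketch covers only the `:-` form used here.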


@@ -1,628 +0,0 @@
# T-Pot: MOBILE
# Note: This docker compose file has been adjusted to limit the number of tools, services and honeypots to run
# T-Pot on a Raspberry Pi 4 (8GB of RAM).
# The standard docker compose file should work mostly fine (depending on traffic) if you do not enable a
# desktop environment such as LXDE and meet the minimum requirements of 8GB RAM.
version: '3.9'
networks:
ciscoasa_local:
citrixhoneypot_local:
conpot_local_IEC104:
conpot_local_ipmi:
conpot_local_kamstrup_382:
cowrie_local:
dicompot_local:
dionaea_local:
elasticpot_local:
heralding_local:
ipphoney_local:
log4pot_local:
mailoney_local:
medpot_local:
redishoneypot_local:
sentrypeer_local:
tanner_local:
wordpot_local:
ewsposter_local:
services:
#########################################
#### DEV
#########################################
#### T-Pot Init - Never delete this!
#########################################
# T-Pot Init Service
tpotinit:
container_name: tpotinit
env_file:
- .env
restart: always
stop_grace_period: 60s
tmpfs:
- /tmp/etc:uid=2000,gid=2000
- /tmp/:uid=2000,gid=2000
network_mode: "host"
cap_add:
- NET_ADMIN
image: ${TPOT_REPO}/tpotinit:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
volumes:
- ${TPOT_DOCKER_COMPOSE}:/tmp/tpot/docker-compose.yml:ro
- ${TPOT_DATA_PATH}/blackhole:/etc/blackhole
- ${TPOT_DATA_PATH}:/data
- /var/run/docker.sock:/var/run/docker.sock:ro
##################
#### Honeypots
##################
# Ciscoasa service
ciscoasa:
container_name: ciscoasa
restart: always
depends_on:
logstash:
condition: service_healthy
tmpfs:
- /tmp/ciscoasa:uid=2000,gid=2000
networks:
- ciscoasa_local
ports:
- "5000:5000/udp"
- "8443:8443"
image: ${TPOT_REPO}/ciscoasa:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
volumes:
- ${TPOT_DATA_PATH}/ciscoasa/log:/var/log/ciscoasa
# CitrixHoneypot service
citrixhoneypot:
container_name: citrixhoneypot
restart: always
depends_on:
logstash:
condition: service_healthy
networks:
- citrixhoneypot_local
ports:
- "443:443"
image: ${TPOT_REPO}/citrixhoneypot:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
volumes:
- ${TPOT_DATA_PATH}/citrixhoneypot/log:/opt/citrixhoneypot/logs
# Conpot IEC104 service
conpot_IEC104:
container_name: conpot_iec104
restart: always
depends_on:
logstash:
condition: service_healthy
environment:
- CONPOT_CONFIG=/etc/conpot/conpot.cfg
- CONPOT_JSON_LOG=/var/log/conpot/conpot_IEC104.json
- CONPOT_LOG=/var/log/conpot/conpot_IEC104.log
- CONPOT_TEMPLATE=IEC104
- CONPOT_TMP=/tmp/conpot
tmpfs:
- /tmp/conpot:uid=2000,gid=2000
networks:
- conpot_local_IEC104
ports:
- "161:161/udp"
- "2404:2404"
image: ${TPOT_REPO}/conpot:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
volumes:
- ${TPOT_DATA_PATH}/conpot/log:/var/log/conpot
# Conpot ipmi
conpot_ipmi:
container_name: conpot_ipmi
restart: always
depends_on:
logstash:
condition: service_healthy
environment:
- CONPOT_CONFIG=/etc/conpot/conpot.cfg
- CONPOT_JSON_LOG=/var/log/conpot/conpot_ipmi.json
- CONPOT_LOG=/var/log/conpot/conpot_ipmi.log
- CONPOT_TEMPLATE=ipmi
- CONPOT_TMP=/tmp/conpot
tmpfs:
- /tmp/conpot:uid=2000,gid=2000
networks:
- conpot_local_ipmi
ports:
- "623:623/udp"
image: ${TPOT_REPO}/conpot:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
volumes:
- ${TPOT_DATA_PATH}/conpot/log:/var/log/conpot
# Conpot kamstrup_382
conpot_kamstrup_382:
container_name: conpot_kamstrup_382
restart: always
depends_on:
logstash:
condition: service_healthy
environment:
- CONPOT_CONFIG=/etc/conpot/conpot.cfg
- CONPOT_JSON_LOG=/var/log/conpot/conpot_kamstrup_382.json
- CONPOT_LOG=/var/log/conpot/conpot_kamstrup_382.log
- CONPOT_TEMPLATE=kamstrup_382
- CONPOT_TMP=/tmp/conpot
tmpfs:
- /tmp/conpot:uid=2000,gid=2000
networks:
- conpot_local_kamstrup_382
ports:
- "1025:1025"
- "50100:50100"
image: ${TPOT_REPO}/conpot:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
volumes:
- ${TPOT_DATA_PATH}/conpot/log:/var/log/conpot
# Cowrie service
cowrie:
container_name: cowrie
restart: always
depends_on:
logstash:
condition: service_healthy
tmpfs:
- /tmp/cowrie:uid=2000,gid=2000
- /tmp/cowrie/data:uid=2000,gid=2000
networks:
- cowrie_local
ports:
- "22:22"
- "23:23"
image: ${TPOT_REPO}/cowrie:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
volumes:
- ${TPOT_DATA_PATH}/cowrie/downloads:/home/cowrie/cowrie/dl
- ${TPOT_DATA_PATH}/cowrie/keys:/home/cowrie/cowrie/etc
- ${TPOT_DATA_PATH}/cowrie/log:/home/cowrie/cowrie/log
- ${TPOT_DATA_PATH}/cowrie/log/tty:/home/cowrie/cowrie/log/tty
# Dicompot service
# Get the Horos Client for testing: https://horosproject.org/
# Get Dicom images (CC BY 3.0): https://www.cancerimagingarchive.net/collections/
# Put images (which must be in Dicom DCM format or it will not work!) into /data/dicompot/images
dicompot:
container_name: dicompot
restart: always
depends_on:
logstash:
condition: service_healthy
networks:
- dicompot_local
ports:
- "11112:11112"
image: ${TPOT_REPO}/dicompot:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
volumes:
- ${TPOT_DATA_PATH}/dicompot/log:/var/log/dicompot
# - ${TPOT_DATA_PATH}/dicompot/images:/opt/dicompot/images
# Dionaea service
dionaea:
container_name: dionaea
stdin_open: true
tty: true
restart: always
depends_on:
logstash:
condition: service_healthy
networks:
- dionaea_local
ports:
- "20:20"
- "21:21"
- "42:42"
- "69:69/udp"
- "81:81"
- "135:135"
# - "443:443"
- "445:445"
- "1433:1433"
- "1723:1723"
- "1883:1883"
- "3306:3306"
# - "5060:5060"
# - "5060:5060/udp"
# - "5061:5061"
- "27017:27017"
image: ${TPOT_REPO}/dionaea:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
volumes:
- ${TPOT_DATA_PATH}/dionaea/roots/ftp:/opt/dionaea/var/dionaea/roots/ftp
- ${TPOT_DATA_PATH}/dionaea/roots/tftp:/opt/dionaea/var/dionaea/roots/tftp
- ${TPOT_DATA_PATH}/dionaea/roots/www:/opt/dionaea/var/dionaea/roots/www
- ${TPOT_DATA_PATH}/dionaea/roots/upnp:/opt/dionaea/var/dionaea/roots/upnp
- ${TPOT_DATA_PATH}/dionaea:/opt/dionaea/var/dionaea
- ${TPOT_DATA_PATH}/dionaea/binaries:/opt/dionaea/var/dionaea/binaries
- ${TPOT_DATA_PATH}/dionaea/log:/opt/dionaea/var/log
- ${TPOT_DATA_PATH}/dionaea/rtp:/opt/dionaea/var/dionaea/rtp
# ElasticPot service
elasticpot:
container_name: elasticpot
restart: always
depends_on:
logstash:
condition: service_healthy
networks:
- elasticpot_local
ports:
- "9200:9200"
image: ${TPOT_REPO}/elasticpot:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
volumes:
- ${TPOT_DATA_PATH}/elasticpot/log:/opt/elasticpot/log
# Heralding service
heralding:
container_name: heralding
restart: always
depends_on:
logstash:
condition: service_healthy
tmpfs:
- /tmp/heralding:uid=2000,gid=2000
networks:
- heralding_local
ports:
# - "21:21"
# - "22:22"
# - "23:23"
# - "25:25"
# - "80:80"
- "110:110"
- "143:143"
# - "443:443"
- "465:465"
- "993:993"
- "995:995"
# - "3306:3306"
# - "3389:3389"
- "1080:1080"
- "5432:5432"
- "5900:5900"
image: ${TPOT_REPO}/heralding:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
volumes:
- ${TPOT_DATA_PATH}/heralding/log:/var/log/heralding
# Honeytrap service
honeytrap:
container_name: honeytrap
restart: always
depends_on:
logstash:
condition: service_healthy
tmpfs:
- /tmp/honeytrap:uid=2000,gid=2000
network_mode: "host"
cap_add:
- NET_ADMIN
image: ${TPOT_REPO}/honeytrap:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
volumes:
- ${TPOT_DATA_PATH}/honeytrap/attacks:/opt/honeytrap/var/attacks
- ${TPOT_DATA_PATH}/honeytrap/downloads:/opt/honeytrap/var/downloads
- ${TPOT_DATA_PATH}/honeytrap/log:/opt/honeytrap/var/log
# Ipphoney service
ipphoney:
container_name: ipphoney
restart: always
depends_on:
logstash:
condition: service_healthy
networks:
- ipphoney_local
ports:
- "631:631"
image: ${TPOT_REPO}/ipphoney:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
volumes:
- ${TPOT_DATA_PATH}/ipphoney/log:/opt/ipphoney/log
# Mailoney service
mailoney:
container_name: mailoney
restart: always
depends_on:
logstash:
condition: service_healthy
environment:
- HPFEEDS_SERVER=
- HPFEEDS_IDENT=user
- HPFEEDS_SECRET=pass
- HPFEEDS_PORT=20000
- HPFEEDS_CHANNELPREFIX=prefix
networks:
- mailoney_local
ports:
- "25:25"
- "587:25"
image: ${TPOT_REPO}/mailoney:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
volumes:
- ${TPOT_DATA_PATH}/mailoney/log:/opt/mailoney/logs
# Log4pot service
log4pot:
container_name: log4pot
restart: always
depends_on:
logstash:
condition: service_healthy
tmpfs:
- /tmp:uid=2000,gid=2000
networks:
- log4pot_local
ports:
# - "80:8080"
# - "443:8080"
- "8080:8080"
# - "9200:8080"
- "25565:8080"
image: ${TPOT_REPO}/log4pot:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
volumes:
- ${TPOT_DATA_PATH}/log4pot/log:/var/log/log4pot/log
- ${TPOT_DATA_PATH}/log4pot/payloads:/var/log/log4pot/payloads
# Medpot service
medpot:
container_name: medpot
restart: always
depends_on:
logstash:
condition: service_healthy
networks:
- medpot_local
ports:
- "2575:2575"
image: ${TPOT_REPO}/medpot:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
volumes:
- ${TPOT_DATA_PATH}/medpot/log/:/var/log/medpot
# Redishoneypot service
redishoneypot:
container_name: redishoneypot
restart: always
depends_on:
logstash:
condition: service_healthy
networks:
- redishoneypot_local
ports:
- "6379:6379"
image: ${TPOT_REPO}/redishoneypot:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
volumes:
- ${TPOT_DATA_PATH}/redishoneypot/log:/var/log/redishoneypot
# SentryPeer service
sentrypeer:
container_name: sentrypeer
restart: always
depends_on:
logstash:
condition: service_healthy
# environment:
# - SENTRYPEER_PEER_TO_PEER=1
networks:
- sentrypeer_local
ports:
# - "4222:4222/udp"
- "5060:5060/tcp"
- "5060:5060/udp"
# - "127.0.0.1:8082:8082"
image: ${TPOT_REPO}/sentrypeer:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
volumes:
- ${TPOT_DATA_PATH}/sentrypeer/log:/var/log/sentrypeer
#### Snare / Tanner
## Tanner Redis Service
tanner_redis:
container_name: tanner_redis
restart: always
depends_on:
logstash:
condition: service_healthy
tty: true
networks:
- tanner_local
image: ${TPOT_REPO}/redis:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
## PHP Sandbox service
tanner_phpox:
container_name: tanner_phpox
restart: always
depends_on:
logstash:
condition: service_healthy
tty: true
networks:
- tanner_local
image: ${TPOT_REPO}/phpox:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
## Tanner API Service
tanner_api:
container_name: tanner_api
restart: always
depends_on:
- tanner_redis
tmpfs:
- /tmp/tanner:uid=2000,gid=2000
tty: true
networks:
- tanner_local
image: ${TPOT_REPO}/tanner:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
volumes:
- ${TPOT_DATA_PATH}/tanner/log:/var/log/tanner
command: tannerapi
## Tanner Service
tanner:
container_name: tanner
restart: always
depends_on:
- tanner_api
- tanner_phpox
tmpfs:
- /tmp/tanner:uid=2000,gid=2000
tty: true
networks:
- tanner_local
image: ${TPOT_REPO}/tanner:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
command: tanner
read_only: true
volumes:
- ${TPOT_DATA_PATH}/tanner/log:/var/log/tanner
- ${TPOT_DATA_PATH}/tanner/files:/opt/tanner/files
## Snare Service
snare:
container_name: snare
restart: always
depends_on:
- tanner
tty: true
networks:
- tanner_local
ports:
- "80:80"
image: ${TPOT_REPO}/snare:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
# Wordpot service
wordpot:
container_name: wordpot
restart: always
depends_on:
tpotinit:
condition: service_healthy
networks:
- wordpot_local
ports:
- "82:80"
image: ${TPOT_REPO}/wordpot:${TPOT_VERSION}
read_only: true
volumes:
- ${TPOT_DATA_PATH}/wordpot/log:/opt/wordpot/logs/
##################
#### Tools
##################
#### ELK
## Elasticsearch service
elasticsearch:
container_name: elasticsearch
restart: always
depends_on:
tpotinit:
condition: service_healthy
environment:
- bootstrap.memory_lock=true
- ES_JAVA_OPTS=-Xms2048m -Xmx2048m
- ES_TMPDIR=/tmp
cap_add:
- IPC_LOCK
ulimits:
memlock:
soft: -1
hard: -1
nofile:
soft: 65536
hard: 65536
mem_limit: 4g
ports:
- "127.0.0.1:64298:9200"
image: ${TPOT_REPO}/elasticsearch:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
volumes:
- ${TPOT_DATA_PATH}:/data
## Logstash service
logstash:
container_name: logstash
restart: always
depends_on:
elasticsearch:
condition: service_healthy
environment:
- LS_JAVA_OPTS=-Xms1024m -Xmx1024m
- TPOT_TYPE=${TPOT_TYPE:-HIVE}
- TPOT_HIVE_USER=${TPOT_HIVE_USER}
- TPOT_HIVE_IP=${TPOT_HIVE_IP}
ports:
- "127.0.0.1:64305:64305"
mem_limit: 2g
image: ${TPOT_REPO}/logstash:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
volumes:
- ${TPOT_DATA_PATH}:/data
#### /ELK
# Ewsposter service
ewsposter:
container_name: ewsposter
restart: always
depends_on:
logstash:
condition: service_healthy
networks:
- ewsposter_local
environment:
- EWS_HPFEEDS_ENABLE=false
- EWS_HPFEEDS_HOST=host
- EWS_HPFEEDS_PORT=port
- EWS_HPFEEDS_CHANNELS=channels
- EWS_HPFEEDS_IDENT=user
- EWS_HPFEEDS_SECRET=secret
- EWS_HPFEEDS_TLSCERT=false
- EWS_HPFEEDS_FORMAT=json
image: ${TPOT_REPO}/ewsposter:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
volumes:
- ${TPOT_DATA_PATH}:/data
- ${TPOT_DATA_PATH}/ews/conf/ews.ip:/opt/ewsposter/ews.ip


@@ -1,6 +1,4 @@
# T-Pot: SENSOR
version: '3.9'
networks:
adbhoney_local:
ciscoasa_local:
@@ -664,6 +662,7 @@ services:
- TPOT_TYPE=${TPOT_TYPE:-HIVE}
- TPOT_HIVE_USER=${TPOT_HIVE_USER}
- TPOT_HIVE_IP=${TPOT_HIVE_IP}
- LS_SSL_VERIFICATION=${LS_SSL_VERIFICATION:-full}
ports:
- "127.0.0.1:64305:64305"
mem_limit: 2g


@@ -1,6 +1,4 @@
# T-Pot: STANDARD
version: '3.9'
networks:
adbhoney_local:
ciscoasa_local:
@@ -706,6 +704,7 @@ services:
- TPOT_TYPE=${TPOT_TYPE:-HIVE}
- TPOT_HIVE_USER=${TPOT_HIVE_USER}
- TPOT_HIVE_IP=${TPOT_HIVE_IP}
- LS_SSL_VERIFICATION=${LS_SSL_VERIFICATION:-full}
ports:
- "127.0.0.1:64305:64305"
mem_limit: 2g


@@ -837,6 +837,7 @@ services:
- TPOT_TYPE=${TPOT_TYPE:-HIVE}
- TPOT_HIVE_USER=${TPOT_HIVE_USER}
- TPOT_HIVE_IP=${TPOT_HIVE_IP}
- LS_SSL_VERIFICATION=${LS_SSL_VERIFICATION:-full}
ports:
- "127.0.0.1:64305:64305"
mem_limit: 2g

Binary file not shown. (Before: 475 KiB | After: 486 KiB)


@@ -1,6 +1,4 @@
# T-Pot: STANDARD
version: '3.9'
networks:
adbhoney_local:
ciscoasa_local:
@@ -706,6 +704,7 @@ services:
- TPOT_TYPE=${TPOT_TYPE:-HIVE}
- TPOT_HIVE_USER=${TPOT_HIVE_USER}
- TPOT_HIVE_IP=${TPOT_HIVE_IP}
- LS_SSL_VERIFICATION=${LS_SSL_VERIFICATION:-full}
ports:
- "127.0.0.1:64305:64305"
mem_limit: 2g


@@ -1,5 +1,3 @@
version: '2.3'
networks:
adbhoney_local:
@@ -16,7 +14,7 @@ services:
- adbhoney_local
ports:
- "5555:5555"
image: "dtagdevsec/adbhoney:alpha"
image: "dtagdevsec/adbhoney:24.04"
read_only: true
volumes:
- $HOME/tpotce/data/adbhoney/log:/opt/adbhoney/log


@@ -6,8 +6,9 @@
myPLATFORMS="linux/amd64,linux/arm64"
myHUBORG_DOCKER="dtagdevsec"
myHUBORG_GITHUB="ghcr.io/telekom-security"
myTAG="alpha"
myTAG="24.04"
myIMAGESBASE="tpotinit adbhoney ciscoasa citrixhoneypot conpot cowrie ddospot dicompot dionaea elasticpot endlessh ewsposter fatt glutton hellpot heralding honeypots honeytrap ipphoney log4pot mailoney medpot nginx p0f redishoneypot sentrypeer spiderfoot suricata wordpot"
#myIMAGESBASE="tpotinit adbhoney ciscoasa citrixhoneypot conpot cowrie ddospot dicompot dionaea elasticpot endlessh ewsposter fatt glutton hellpot heralding honeypots honeytrap ipphoney log4pot mailoney medpot nginx p0f redishoneypot sentrypeer spiderfoot suricata wordpot"
myIMAGESBASE="tpotinit adbhoney ciscoasa citrixhoneypot conpot cowrie ddospot dicompot dionaea elasticpot endlessh ewsposter fatt hellpot heralding honeypots honeytrap ipphoney log4pot mailoney medpot nginx p0f redishoneypot sentrypeer spiderfoot suricata wordpot"
myIMAGESELK="elasticsearch kibana logstash map"
myIMAGESTANNER="phpox redis snare tanner"
myBUILDERLOG="builder.log"
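The builder script composes one image reference per name in `myIMAGESBASE` from the hub org and tag above (the tag bumped from `alpha` to `24.04`, glutton dropped from the active list). A hedged Python sketch of just that naming step — the real script drives `docker buildx`; the name list here is an excerpt:

```python
HUB_ORG = "dtagdevsec"   # myHUBORG_DOCKER in builder.sh
TAG = "24.04"            # myTAG after the bump from "alpha"

# Excerpt of myIMAGESBASE (the full list has ~27 entries).
images_base = "tpotinit adbhoney ciscoasa citrixhoneypot conpot cowrie"

# One "<org>/<name>:<tag>" reference per whitespace-separated name.
refs = [f"{HUB_ORG}/{name}:{TAG}" for name in images_base.split()]
print(refs[0])  # dtagdevsec/tpotinit:24.04
```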


@@ -1,5 +1,3 @@
version: '2.3'
networks:
ciscoasa_local:
@@ -19,7 +17,7 @@ services:
ports:
- "5000:5000/udp"
- "8443:8443"
image: "dtagdevsec/ciscoasa:alpha"
image: "dtagdevsec/ciscoasa:24.04"
read_only: true
volumes:
- $HOME/tpotce/data/ciscoasa/log:/var/log/ciscoasa


@@ -1,5 +1,3 @@
version: '2.3'
networks:
citrixhoneypot_local:
@@ -16,7 +14,7 @@ services:
- citrixhoneypot_local
ports:
- "443:443"
image: "dtagdevsec/citrixhoneypot:alpha"
image: "dtagdevsec/citrixhoneypot:24.04"
read_only: true
volumes:
- $HOME/tpotce/data/citrixhoneypot/log:/opt/citrixhoneypot/logs


@@ -1,5 +1,5 @@
pysnmp-mibs
pysmi
pysmi==0.3.4
libtaxii>=1.1.0
crc16
scapy==2.4.3rc1
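The pin keeps conpot on pysmi 0.3.4 because, per the commit message, the newer release removed the FtpReader function it depends on. A small sketch of telling exact `==` pins apart from range specifiers in such a requirements list (an illustrative helper, not part of conpot or pip):

```python
def parse_requirement(line: str):
    """Return (name, exact_version) for '==' pins, (name, None) otherwise."""
    line = line.strip()
    for op in ("==", ">=", "<=", "~=", ">", "<"):
        if op in line:
            name, version = line.split(op, 1)
            # Only '==' names one exact, reproducible version.
            return name.strip(), version.strip() if op == "==" else None
    return line, None  # bare requirement, any version

reqs = ["pysnmp-mibs", "pysmi==0.3.4", "libtaxii>=1.1.0", "scapy==2.4.3rc1"]
pinned = {n: v for n, v in map(parse_requirement, reqs) if v}
print(pinned)  # {'pysmi': '0.3.4', 'scapy': '2.4.3rc1'}
```

A real resolver would use the `packaging` library's `Requirement` class; this only covers the simple forms present in the file above.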


@@ -1,6 +1,4 @@
# CONPOT TEMPLATE=[default, IEC104, guardian_ast, ipmi, kamstrup_382, proxy]
version: '2.3'
networks:
conpot_local_default:
conpot_local_IEC104:
@@ -37,7 +35,7 @@ services:
- "2121:21"
- "44818:44818"
- "47808:47808/udp"
image: "dtagdevsec/conpot:alpha"
image: "dtagdevsec/conpot:24.04"
read_only: true
volumes:
- $HOME/tpotce/data/conpot/log:/var/log/conpot
@@ -61,7 +59,7 @@ services:
ports:
# - "161:161/udp"
- "2404:2404"
image: "dtagdevsec/conpot:alpha"
image: "dtagdevsec/conpot:24.04"
read_only: true
volumes:
- $HOME/tpotce/data/conpot/log:/var/log/conpot
@@ -84,7 +82,7 @@ services:
- conpot_local_guardian_ast
ports:
- "10001:10001"
image: "dtagdevsec/conpot:alpha"
image: "dtagdevsec/conpot:24.04"
read_only: true
volumes:
- $HOME/tpotce/data/conpot/log:/var/log/conpot
@@ -107,7 +105,7 @@ services:
- conpot_local_ipmi
ports:
- "623:623/udp"
image: "dtagdevsec/conpot:alpha"
image: "dtagdevsec/conpot:24.04"
read_only: true
volumes:
- $HOME/tpotce/data/conpot/log:/var/log/conpot
@@ -131,7 +129,7 @@ services:
ports:
- "1025:1025"
- "50100:50100"
image: "dtagdevsec/conpot:alpha"
image: "dtagdevsec/conpot:24.04"
read_only: true
volumes:
- $HOME/tpotce/data/conpot/log:/var/log/conpot


@@ -22,11 +22,11 @@ filesystem = share/cowrie/fs.pickle
processes = share/cowrie/cmdoutput.json
#arch = linux-x64-lsb
arch = bsd-aarch64-lsb, bsd-aarch64-msb, bsd-bfin-msb, bsd-mips-lsb, bsd-mips-msb, bsd-mips64-lsb, bsd-mips64-msb, bsd-powepc-msb, bsd-powepc64-lsb, bsd-riscv64-lsb, bsd-sparc-msb, bsd-sparc64-msb, bsd-x32-lsb, bsd-x64-lsb, linux-aarch64-lsb, linux-aarch64-msb, linux-alpha-lsb, linux-am33-lsb, linux-arc-lsb, linux-arc-msb, linux-arm-lsb, linux-arm-msb, linux-avr32-lsb, linux-bfin-lsb, linux-c6x-lsb, linux-c6x-msb, linux-cris-lsb, linux-frv-msb, linux-h8300-msb, linux-hppa-msb, linux-hppa64-msb, linux-ia64-lsb, linux-m32r-msb, linux-m68k-msb, linux-microblaze-msb, linux-mips-lsb, linux-mips-msb, linux-mips64-lsb, linux-mips64-msb, linux-mn10300-lsb, linux-nios-lsb, linux-nios-msb, linux-powerpc-lsb, linux-powerpc-msb, linux-powerpc64-lsb, linux-powerpc64-msb, linux-riscv64-lsb, linux-s390x-msb, linux-sh-lsb, linux-sh-msb, linux-sparc-msb, linux-sparc64-msb, linux-tilegx-lsb, linux-tilegx-msb, linux-tilegx64-lsb, linux-tilegx64-msb, linux-x64-lsb, linux-x86-lsb, linux-xtensa-msb, osx-x32-lsb, osx-x64-lsb
kernel_version = 3.2.0-4-amd64
kernel_version = 5.15.0-23-generic-amd64
kernel_build_string = #1 SMP Debian 3.2.68-1+deb7u1
kernel_build_string = #25~22.04-Ubuntu SMP
hardware_platform = x86_64
operating_system = GNU/Linux
ssh_version = OpenSSH_7.9p1, OpenSSL 1.1.1a 20 Nov 2018
ssh_version = OpenSSH_8.9p1, OpenSSL 3.0.2 15 Mar 2022
[ssh]
enabled = true
@@ -39,8 +39,7 @@ ecdsa_private_key = etc/ssh_host_ecdsa_key
ed25519_public_key = etc/ssh_host_ed25519_key.pub
ed25519_private_key = etc/ssh_host_ed25519_key
public_key_auth = ssh-rsa,ssh-dss,ecdsa-sha2-nistp256,ssh-ed25519
#version = SSH-2.0-OpenSSH_7.2p2 Ubuntu-4ubuntu2.2
version = SSH-2.0-OpenSSH_8.9p1 Ubuntu-3ubuntu0.10
version = SSH-2.0-OpenSSH_7.9p1
ciphers = aes128-ctr,aes192-ctr,aes256-ctr,aes256-cbc,aes192-cbc,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc
macs = hmac-sha2-512,hmac-sha2-384,hmac-sha2-56,hmac-sha1,hmac-md5
compression = zlib@openssh.com,zlib,none
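This hunk (commit "Correct SSH version in cowrie.cfg") brings the advertised `version` banner in line with `ssh_version`, since a scanner can fingerprint the honeypot by spotting a mismatch between the two. A quick Python sketch of extracting the OpenSSH release from such strings for comparison (illustrative only):

```python
import re

def openssh_release(banner: str):
    """Pull the OpenSSH release (e.g. '8.9p1') out of an SSH identification
    banner or an ssh_version line; None if no OpenSSH marker is present."""
    match = re.search(r"OpenSSH_(\d+\.\d+(?:p\d+)?)", banner)
    return match.group(1) if match else None

# After the fix, the banner and ssh_version advertise the same release:
print(openssh_release("SSH-2.0-OpenSSH_8.9p1 Ubuntu-3ubuntu0.10"))   # 8.9p1
print(openssh_release("OpenSSH_8.9p1, OpenSSL 3.0.2 15 Mar 2022"))   # 8.9p1
```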

docker/cowrie/dist/cowrie_tpot.cfg vendored Normal file (72 lines)

@@ -0,0 +1,72 @@
[honeypot]
hostname = ubuntu
log_path = log
download_path = dl
share_path= share/cowrie
state_path = /tmp/cowrie/data
etc_path = etc
contents_path = honeyfs
txtcmds_path = txtcmds
ttylog = true
ttylog_path = log/tty
interactive_timeout = 180
authentication_timeout = 120
backend = shell
timezone = UTC
auth_class = AuthRandom
auth_class_parameters = 2, 5, 10
data_path = /tmp/cowrie/data
[shell]
filesystem = share/cowrie/fs.pickle
processes = share/cowrie/cmdoutput.json
#arch = linux-x64-lsb
arch = bsd-aarch64-lsb, bsd-aarch64-msb, bsd-bfin-msb, bsd-mips-lsb, bsd-mips-msb, bsd-mips64-lsb, bsd-mips64-msb, bsd-powepc-msb, bsd-powepc64-lsb, bsd-riscv64-lsb, bsd-sparc-msb, bsd-sparc64-msb, bsd-x32-lsb, bsd-x64-lsb, linux-aarch64-lsb, linux-aarch64-msb, linux-alpha-lsb, linux-am33-lsb, linux-arc-lsb, linux-arc-msb, linux-arm-lsb, linux-arm-msb, linux-avr32-lsb, linux-bfin-lsb, linux-c6x-lsb, linux-c6x-msb, linux-cris-lsb, linux-frv-msb, linux-h8300-msb, linux-hppa-msb, linux-hppa64-msb, linux-ia64-lsb, linux-m32r-msb, linux-m68k-msb, linux-microblaze-msb, linux-mips-lsb, linux-mips-msb, linux-mips64-lsb, linux-mips64-msb, linux-mn10300-lsb, linux-nios-lsb, linux-nios-msb, linux-powerpc-lsb, linux-powerpc-msb, linux-powerpc64-lsb, linux-powerpc64-msb, linux-riscv64-lsb, linux-s390x-msb, linux-sh-lsb, linux-sh-msb, linux-sparc-msb, linux-sparc64-msb, linux-tilegx-lsb, linux-tilegx-msb, linux-tilegx64-lsb, linux-tilegx64-msb, linux-x64-lsb, linux-x86-lsb, linux-xtensa-msb, osx-x32-lsb, osx-x64-lsb
kernel_version = 3.2.0-4-amd64
kernel_build_string = #1 SMP Debian 3.2.68-1+deb7u1
hardware_platform = x86_64
operating_system = GNU/Linux
ssh_version = OpenSSH_7.9p1, OpenSSL 1.1.1a 20 Nov 2018
[ssh]
enabled = true
rsa_public_key = etc/ssh_host_rsa_key.pub
rsa_private_key = etc/ssh_host_rsa_key
dsa_public_key = etc/ssh_host_dsa_key.pub
dsa_private_key = etc/ssh_host_dsa_key
ecdsa_public_key = etc/ssh_host_ecdsa_key.pub
ecdsa_private_key = etc/ssh_host_ecdsa_key
ed25519_public_key = etc/ssh_host_ed25519_key.pub
ed25519_private_key = etc/ssh_host_ed25519_key
public_key_auth = ssh-rsa,ssh-dss,ecdsa-sha2-nistp256,ssh-ed25519
#version = SSH-2.0-OpenSSH_7.2p2 Ubuntu-4ubuntu2.2
version = SSH-2.0-OpenSSH_7.9p1
ciphers = aes128-ctr,aes192-ctr,aes256-ctr,aes256-cbc,aes192-cbc,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc
macs = hmac-sha2-512,hmac-sha2-384,hmac-sha2-56,hmac-sha1,hmac-md5
compression = zlib@openssh.com,zlib,none
listen_endpoints = tcp:22:interface=0.0.0.0
sftp_enabled = true
forwarding = true
forward_redirect = false
forward_tunnel = false
auth_none_enabled = false
auth_keyboard_interactive_enabled = true
[telnet]
enabled = true
listen_endpoints = tcp:23:interface=0.0.0.0
reported_port = 23
[output_jsonlog]
enabled = true
logfile = log/cowrie.json
epoch_timestamp = false
[output_textlog]
enabled = false
logfile = log/cowrie-textlog.log
format = text
[output_crashreporter]
enabled = false
debug = false


@@ -1,5 +1,3 @@
version: '2.3'
networks:
cowrie_local:
@@ -20,7 +18,7 @@ services:
ports:
- "22:22"
- "23:23"
image: "dtagdevsec/cowrie:alpha"
image: "dtagdevsec/cowrie:24.04"
read_only: true
volumes:
- $HOME/tpotce/data/cowrie/downloads:/home/cowrie/cowrie/dl


@@ -1,5 +1,3 @@
version: '2.3'
networks:
ddospot_local:
@@ -20,7 +18,7 @@ services:
- "123:123/udp"
# - "161:161/udp"
- "1900:1900/udp"
image: "dtagdevsec/ddospot:alpha"
image: "dtagdevsec/ddospot:24.04"
read_only: true
volumes:
- $HOME/tpotce/data/ddospot/log:/opt/ddospot/ddospot/logs


@@ -14,5 +14,5 @@ services:
- cyberchef_local
ports:
- "127.0.0.1:64299:8000"
image: "dtagdevsec/cyberchef:alpha"
image: "dtagdevsec/cyberchef:24.04"
read_only: true


@@ -12,5 +12,5 @@ services:
# condition: service_healthy
ports:
- "127.0.0.1:64302:9100"
image: "dtagdevsec/head:alpha"
image: "dtagdevsec/head:24.04"
read_only: true


@@ -20,7 +20,7 @@ services:
- "2324:2324"
- "4096:4096"
- "9200:9200"
image: "dtagdevsec/honeypy:alpha"
image: "dtagdevsec/honeypy:24.04"
read_only: true
volumes:
- /data/honeypy/log:/opt/honeypy/log


@@ -14,6 +14,6 @@ services:
- honeysap_local
ports:
- "3299:3299"
image: "dtagdevsec/honeysap:alpha"
image: "dtagdevsec/honeysap:24.04"
volumes:
- /data/honeysap/log:/opt/honeysap/log


@@ -22,7 +22,7 @@ services:
- rdpy_local
ports:
- "3389:3389"
image: "dtagdevsec/rdpy:alpha"
image: "dtagdevsec/rdpy:24.04"
read_only: true
volumes:
- /data/rdpy/log:/var/log/rdpy


@@ -1,5 +1,3 @@
version: '2.3'
networks:
dicompot_local:
@@ -19,7 +17,7 @@ services:
- dicompot_local
ports:
- "11112:11112"
image: "dtagdevsec/dicompot:alpha"
image: "dtagdevsec/dicompot:24.04"
read_only: true
volumes:
- $HOME/tpotce/data/dicompot/log:/var/log/dicompot


@@ -1,5 +1,3 @@
version: '2.3'
networks:
dionaea_local:
@@ -33,7 +31,7 @@ services:
# - "5060:5060/udp"
# - "5061:5061"
- "27017:27017"
image: "dtagdevsec/dionaea:alpha"
image: "dtagdevsec/dionaea:24.04"
read_only: true
volumes:
- $HOME/tpotce/data/dionaea/roots/ftp:/opt/dionaea/var/dionaea/roots/ftp


@@ -1,6 +1,4 @@
 # T-Pot Image Builder (use only for building docker images)
-version: '2.3'
 services:
 ##################
@@ -10,133 +8,133 @@ services:
 # Adbhoney service
 adbhoney:
 build: adbhoney/.
-image: "dtagdevsec/adbhoney:alpha"
+image: "dtagdevsec/adbhoney:24.04"
 # Ciscoasa service
 ciscoasa:
 build: ciscoasa/.
-image: "dtagdevsec/ciscoasa:alpha"
+image: "dtagdevsec/ciscoasa:24.04"
 # CitrixHoneypot service
 citrixhoneypot:
 build: citrixhoneypot/.
-image: "dtagdevsec/citrixhoneypot:alpha"
+image: "dtagdevsec/citrixhoneypot:24.04"
 # Conpot IEC104 service
 conpot_IEC104:
 build: conpot/.
-image: "dtagdevsec/conpot:alpha"
+image: "dtagdevsec/conpot:24.04"
 # Cowrie service
 cowrie:
 build: cowrie/.
-image: "dtagdevsec/cowrie:alpha"
+image: "dtagdevsec/cowrie:24.04"
 # Ddospot service
 ddospot:
 build: ddospot/.
-image: "dtagdevsec/ddospot:alpha"
+image: "dtagdevsec/ddospot:24.04"
 # Dicompot service
 dicompot:
 build: dicompot/.
-image: "dtagdevsec/dicompot:alpha"
+image: "dtagdevsec/dicompot:24.04"
 # Dionaea service
 dionaea:
 build: dionaea/.
-image: "dtagdevsec/dionaea:alpha"
+image: "dtagdevsec/dionaea:24.04"
 # ElasticPot service
 elasticpot:
 build: elasticpot/.
-image: "dtagdevsec/elasticpot:alpha"
+image: "dtagdevsec/elasticpot:24.04"
 # Endlessh service
 endlessh:
 build: endlessh/.
-image: "dtagdevsec/endlessh:alpha"
+image: "dtagdevsec/endlessh:24.04"
 # Glutton service
-glutton:
-build: glutton/.
-image: "dtagdevsec/glutton:alpha"
+# glutton:
+# build: glutton/.
+# image: "dtagdevsec/glutton:24.04"
 # Hellpot service
 hellpot:
 build: hellpot/.
-image: "dtagdevsec/hellpot:alpha"
+image: "dtagdevsec/hellpot:24.04"
 # Heralding service
 heralding:
 build: heralding/.
-image: "dtagdevsec/heralding:alpha"
+image: "dtagdevsec/heralding:24.04"
 # Honeypots service
 honeypots:
 build: honeypots/.
-image: "dtagdevsec/honeypots:alpha"
+image: "dtagdevsec/honeypots:24.04"
 # Honeytrap service
 honeytrap:
 build: honeytrap/.
-image: "dtagdevsec/honeytrap:alpha"
+image: "dtagdevsec/honeytrap:24.04"
 # IPPHoney service
 ipphoney:
 build: ipphoney/.
-image: "dtagdevsec/ipphoney:alpha"
+image: "dtagdevsec/ipphoney:24.04"
 # Log4Pot service
 log4pot:
 build: log4pot/.
-image: "dtagdevsec/log4pot:alpha"
+image: "dtagdevsec/log4pot:24.04"
 # Mailoney service
 mailoney:
 build: mailoney/.
-image: "dtagdevsec/mailoney:alpha"
+image: "dtagdevsec/mailoney:24.04"
 # Medpot service
 medpot:
 build: medpot/.
-image: "dtagdevsec/medpot:alpha"
+image: "dtagdevsec/medpot:24.04"
 # Redishoneypot service
 redishoneypot:
 build: redishoneypot/.
-image: "dtagdevsec/redishoneypot:alpha"
+image: "dtagdevsec/redishoneypot:24.04"
 # Sentrypeer service
 sentrypeer:
 build: sentrypeer/.
-image: "dtagdevsec/sentrypeer:alpha"
+image: "dtagdevsec/sentrypeer:24.04"
 #### Snare / Tanner
 ## Tanner Redis Service
 tanner_redis:
 build: tanner/redis/.
-image: "dtagdevsec/redis:alpha"
+image: "dtagdevsec/redis:24.04"
 ## PHP Sandbox service
 tanner_phpox:
 build: tanner/phpox/.
-image: "dtagdevsec/phpox:alpha"
+image: "dtagdevsec/phpox:24.04"
 ## Tanner API Service
 tanner_api:
 build: tanner/tanner/.
-image: "dtagdevsec/tanner:alpha"
+image: "dtagdevsec/tanner:24.04"
 ## Snare Service
 snare:
 build: tanner/snare/.
-image: "dtagdevsec/snare:alpha"
+image: "dtagdevsec/snare:24.04"
 ## Wordpot Service
 wordpot:
 build: wordpot/.
-image: "dtagdevsec/wordpot:alpha"
+image: "dtagdevsec/wordpot:24.04"
 ##################
@@ -146,55 +144,60 @@ services:
 # Fatt service
 fatt:
 build: fatt/.
-image: "dtagdevsec/fatt:alpha"
+image: "dtagdevsec/fatt:24.04"
 # P0f service
 p0f:
 build: p0f/.
-image: "dtagdevsec/p0f:alpha"
+image: "dtagdevsec/p0f:24.04"
 # Suricata service
 suricata:
 build: suricata/.
-image: "dtagdevsec/suricata:alpha"
+image: "dtagdevsec/suricata:24.04"
 ##################
 #### Tools
 ##################
+# T-Pot Init Service
+tpotinit:
+build: tpotinit/.
+image: "dtagdevsec/tpotinit:24.04"
 #### ELK
 ## Elasticsearch service
 elasticsearch:
 build: elk/elasticsearch/.
-image: "dtagdevsec/elasticsearch:alpha"
+image: "dtagdevsec/elasticsearch:24.04"
 ## Kibana service
 kibana:
 build: elk/kibana/.
-image: "dtagdevsec/kibana:alpha"
+image: "dtagdevsec/kibana:24.04"
 ## Logstash service
 logstash:
 build: elk/logstash/.
-image: "dtagdevsec/logstash:alpha"
+image: "dtagdevsec/logstash:24.04"
 # Ewsposter service
 ewsposter:
 build: ewsposter/.
-image: "dtagdevsec/ewsposter:alpha"
+image: "dtagdevsec/ewsposter:24.04"
 # Nginx service
 nginx:
 build: nginx/.
-image: "dtagdevsec/nginx:alpha"
+image: "dtagdevsec/nginx:24.04"
 # Spiderfoot service
 spiderfoot:
 build: spiderfoot/.
-image: "dtagdevsec/spiderfoot:alpha"
+image: "dtagdevsec/spiderfoot:24.04"
 # Map Web Service
 map_web:
 build: elk/map/.
-image: "dtagdevsec/map:alpha"
+image: "dtagdevsec/map:24.04"

View File

@@ -1,5 +1,3 @@
-version: '2.3'
 networks:
 elasticpot_local:
@@ -16,7 +14,7 @@ services:
 - elasticpot_local
 ports:
 - "9200:9200"
-image: "dtagdevsec/elasticpot:alpha"
+image: "dtagdevsec/elasticpot:24.04"
 read_only: true
 volumes:
 - $HOME/tpotce/data/elasticpot/log:/opt/elasticpot/log

View File

@@ -1,5 +1,3 @@
-version: '2.3'
 services:
 # ELK services
@@ -24,7 +22,7 @@ services:
 mem_limit: 4g
 ports:
 - "127.0.0.1:64298:9200"
-image: "dtagdevsec/elasticsearch:alpha"
+image: "dtagdevsec/elasticsearch:24.04"
 volumes:
 - $HOME/tpotce/data:/data
@@ -40,7 +38,7 @@ services:
 mem_limit: 1g
 ports:
 - "127.0.0.1:64296:5601"
-image: "dtagdevsec/kibana:alpha"
+image: "dtagdevsec/kibana:24.04"
 ## Logstash service
 logstash:
@@ -52,7 +50,7 @@ services:
 depends_on:
 elasticsearch:
 condition: service_healthy
-image: "dtagdevsec/logstash:alpha"
+image: "dtagdevsec/logstash:24.04"
 volumes:
 - $HOME/tpotce/data:/data
 # - /root/tpotce/docker/elk/logstash/dist/logstash.conf:/etc/logstash/conf.d/logstash.conf
@@ -65,7 +63,7 @@ services:
 tty: true
 ports:
 - "127.0.0.1:6379:6379"
-image: "dtagdevsec/redis:alpha"
+image: "dtagdevsec/redis:24.04"
 read_only: true
 # Map Web Service
@@ -79,7 +77,7 @@ services:
 tty: true
 ports:
 - "127.0.0.1:64299:64299"
-image: "dtagdevsec/map:alpha"
+image: "dtagdevsec/map:24.04"
 depends_on:
 - map_redis
@@ -91,6 +89,6 @@ services:
 - MAP_COMMAND=DataServer_v2.py
 stop_signal: SIGKILL
 tty: true
-image: "dtagdevsec/map:alpha"
+image: "dtagdevsec/map:24.04"
 depends_on:
 - map_redis

View File

@@ -1,7 +1,7 @@
 FROM ubuntu:22.04
 #
 # VARS
-ENV ES_VER=8.12.2
+ENV ES_VER=8.14.2
 #
 # Include dist
 COPY dist/ /root/dist/

View File

@@ -1,5 +1,3 @@
-version: '2.3'
 services:
 # ELK services
@@ -24,6 +22,6 @@ services:
 mem_limit: 2g
 ports:
 - "127.0.0.1:64298:9200"
-image: "dtagdevsec/elasticsearch:alpha"
+image: "dtagdevsec/elasticsearch:24.04"
 volumes:
 - $HOME/tpotce/data:/data

View File

@@ -1,7 +1,7 @@
 FROM ubuntu:22.04
 #
 # VARS
-ENV KB_VER=8.12.2
+ENV KB_VER=8.14.2
 # Include dist
 COPY dist/ /root/dist/
 #

View File

@@ -1,5 +1,3 @@
-version: '2.3'
 services:
 ## Kibana service
@@ -12,4 +10,4 @@ services:
 # condition: service_healthy
 ports:
 - "127.0.0.1:64296:5601"
-image: "dtagdevsec/kibana:alpha"
+image: "dtagdevsec/kibana:24.04"

View File

@@ -1,7 +1,7 @@
 FROM ubuntu:22.04
 #
 # VARS
-ENV LS_VER=8.12.2
+ENV LS_VER=8.14.2
 # Include dist
 COPY dist/ /root/dist/
 #

View File

@@ -10,7 +10,10 @@ trap fuCLEANUP EXIT
 if [ -f "/data/tpot/etc/compose/elk_environment" ];
 then
 echo "Found .env, now exporting ..."
-set -o allexport && source "/data/tpot/etc/compose/elk_environment" && set +o allexport
+set -o allexport
+source "/data/tpot/etc/compose/elk_environment"
+LS_SSL_VERIFICATION="${LS_SSL_VERIFICATION:-full}"
+set +o allexport
 fi
 # Check internet availability
@@ -50,6 +53,7 @@ if [ "$TPOT_TYPE" == "SENSOR" ];
 echo
 echo "T-Pot type: $TPOT_TYPE"
 echo "Hive IP: $TPOT_HIVE_IP"
+echo "SSL verification: $LS_SSL_VERIFICATION"
 echo
 # Ensure correct file permissions for private keyfile or SSH will ask for password
 cp /usr/share/logstash/config/pipelines_sensor.yml /usr/share/logstash/config/pipelines.yml
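The export block added above relies on two shell mechanics: `set -o allexport` auto-exports every assignment made while it is active, and `${VAR:-full}` substitutes a default when the variable is unset or empty. A minimal, self-contained sketch of that pattern, using a temporary file and a sample `TPOT_TYPE` value in place of the real `/data/tpot/etc/compose/elk_environment`:

```shell
#!/usr/bin/env bash
# Sketch of the export pattern above; the env file path and its contents
# are stand-ins, only the allexport/default mechanics mirror the diff.
myENV=$(mktemp)
echo 'TPOT_TYPE="SENSOR"' > "$myENV"

set -o allexport                                   # auto-export every assignment below
source "$myENV"                                    # TPOT_TYPE is now exported
LS_SSL_VERIFICATION="${LS_SSL_VERIFICATION:-full}" # default to "full" if the env file did not set it
set +o allexport

echo "$TPOT_TYPE $LS_SSL_VERIFICATION"             # -> SENSOR full
rm -f "$myENV"
```

Because the default assignment sits inside the allexport window, `LS_SSL_VERIFICATION` is exported either way and is therefore visible to the logstash child process.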

View File

@@ -723,7 +723,9 @@ output {
 codec => "json"
 format => "json_batch"
 url => "https://${TPOT_HIVE_IP}:64294"
-cacert => "/data/hive.crt"
+# cacert => "/data/hive.crt"
+ssl_verification_mode => "${LS_SSL_VERIFICATION}"
+ssl_certificate_authorities => "/data/hive.crt"
 headers => {
 "Authorization" => "Basic ${TPOT_HIVE_USER}"
 }
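Pieced together, the hunk above yields roughly the following `http` output block on a sensor (indentation and the surrounding plugin braces are assumed, since the diff only shows the changed region). The deprecated `cacert` option is replaced by `ssl_certificate_authorities`, and `ssl_verification_mode` is driven by the `LS_SSL_VERIFICATION` variable defaulted to `full` in the entrypoint:

```conf
output {
  http {
    codec  => "json"
    format => "json_batch"
    url    => "https://${TPOT_HIVE_IP}:64294"
    # cacert => "/data/hive.crt"                       # deprecated option, left commented
    ssl_verification_mode => "${LS_SSL_VERIFICATION}"  # "full" by default; "none" disables verification
    ssl_certificate_authorities => "/data/hive.crt"
    headers => {
      "Authorization" => "Basic ${TPOT_HIVE_USER}"
    }
  }
}
```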

View File

@@ -1,5 +1,3 @@
-version: '2.3'
 services:
 ## Logstash service
@@ -14,7 +12,7 @@ services:
 # condition: service_healthy
 ports:
 - "127.0.0.1:64305:64305"
-image: "dtagdevsec/logstash:alpha"
+image: "dtagdevsec/logstash:24.04"
 volumes:
 - $HOME/tpotce/data:/data
 # - /$HOME/tpotce/docker/elk/logstash/dist/logstash.conf:/etc/logstash/conf.d/logstash.conf

View File

@@ -1,5 +1,3 @@
-version: '2.3'
 #networks:
 # map_local:
@@ -11,7 +9,7 @@ services:
 restart: always
 stop_signal: SIGKILL
 tty: true
-image: "dtagdevsec/redis:alpha"
+image: "dtagdevsec/redis:24.04"
 read_only: true
 # Map Web Service
@@ -25,7 +23,7 @@ services:
 tty: true
 ports:
 - "127.0.0.1:64299:64299"
-image: "dtagdevsec/map:alpha"
+image: "dtagdevsec/map:24.04"
 depends_on:
 - map_redis
@@ -39,6 +37,6 @@ services:
 # - TZ=${TPOT_ATTACKMAP_TEXT_TIMEZONE}
 stop_signal: SIGKILL
 tty: true
-image: "dtagdevsec/map:alpha"
+image: "dtagdevsec/map:24.04"
 depends_on:
 - map_redis

View File

@@ -1,5 +1,3 @@
-version: '2.3'
 networks:
 endlessh_local:
@@ -16,7 +14,7 @@ services:
 - endlessh_local
 ports:
 - "22:2222"
-image: "dtagdevsec/endlessh:alpha"
+image: "dtagdevsec/endlessh:24.04"
 read_only: true
 volumes:
 - $HOME/tpotce/data/endlessh/log:/var/log/endlessh

View File

@@ -1,5 +1,3 @@
-version: '2.3'
 networks:
 ewsposter_local:
@@ -23,7 +21,7 @@ services:
 - EWS_HPFEEDS_SECRET=secret
 - EWS_HPFEEDS_TLSCERT=false
 - EWS_HPFEEDS_FORMAT=json
-image: "dtagdevsec/ewsposter:alpha"
+image: "dtagdevsec/ewsposter:24.04"
 volumes:
 - $HOME/tpotce/data:/data
 - $HOME/tpotce/data/ews/conf/ews.ip:/opt/ewsposter/ews.ip

View File

@@ -1,5 +1,3 @@
-version: '2.3'
 services:
 # Fatt service
@@ -14,6 +12,6 @@ services:
 - NET_ADMIN
 - SYS_NICE
 - NET_RAW
-image: "dtagdevsec/fatt:alpha"
+image: "dtagdevsec/fatt:24.04"
 volumes:
 - $HOME/tpotce/data/fatt/log:/opt/fatt/log

View File

@@ -5,21 +5,19 @@ COPY dist/ /root/dist/
 #
 # Setup apk
 RUN apk -U --no-cache add \
-build-base \
+make \
 git \
 g++ \
 iptables-dev \
 libpcap-dev && \
 #
 # Setup go, glutton
-export GO111MODULE=on && \
 mkdir -p /opt/ && \
 cd /opt/ && \
 git clone https://github.com/mushorg/glutton && \
 cd /opt/glutton/ && \
 git checkout c1204c65ce32bfdc0e08fb2a9abe89b3b8eeed62 && \
 cp /root/dist/system.go . && \
-go mod download && \
 make build && \
 mv /root/dist/config.yaml /opt/glutton/config/
 #
@@ -30,10 +28,7 @@ COPY --from=builder /opt/glutton/config /opt/glutton/config
 COPY --from=builder /opt/glutton/rules /opt/glutton/rules
 #
 RUN apk -U --no-cache add \
-iptables \
 iptables-dev \
-libnetfilter_queue-dev \
-libcap \
 libpcap-dev && \
 setcap cap_net_admin,cap_net_raw=+ep /opt/glutton/bin/server && \
 setcap cap_net_admin,cap_net_raw=+ep /sbin/xtables-nft-multi && \

View File

@@ -1,5 +1,3 @@
-version: '2.3'
 services:
 # glutton service
@@ -15,7 +13,7 @@ services:
 network_mode: "host"
 cap_add:
 - NET_ADMIN
-image: "dtagdevsec/glutton:alpha"
+image: "dtagdevsec/glutton:24.04"
 read_only: true
 volumes:
 - $HOME/tpotce/data/glutton/log:/var/log/glutton

View File

@@ -1,5 +1,3 @@
-version: '2.3'
 networks:
 hellpot_local:
@@ -16,7 +14,7 @@ services:
 - hellpot_local
 ports:
 - "80:8080"
-image: "dtagdevsec/hellpot:alpha"
+image: "dtagdevsec/hellpot:24.04"
 read_only: true
 volumes:
 - $HOME/tpotce/data/hellpot/log:/var/log/hellpot

View File

@@ -1,5 +1,3 @@
-version: '2.3'
 networks:
 heralding_local:
@@ -33,7 +31,7 @@ services:
 - "3389:3389"
 - "5432:5432"
 - "5900:5900"
-image: "dtagdevsec/heralding:alpha"
+image: "dtagdevsec/heralding:24.04"
 read_only: true
 volumes:
 - $HOME/tpotce/data/heralding/log:/var/log/heralding

View File

@@ -1,5 +1,3 @@
-version: '2.3'
 networks:
 honeypots_local:
@@ -48,7 +46,7 @@ services:
 - "9100:9100"
 - "9200:9200"
 - "11211:11211"
-image: "dtagdevsec/honeypots:alpha"
+image: "dtagdevsec/honeypots:24.04"
 read_only: true
 volumes:
 - $HOME/tpotce/data/honeypots/log:/var/log/honeypots

View File

@@ -1,5 +1,3 @@
-version: '2.3'
 services:
 # Honeytrap service
@@ -14,7 +12,7 @@ services:
 network_mode: "host"
 cap_add:
 - NET_ADMIN
-image: "dtagdevsec/honeytrap:alpha"
+image: "dtagdevsec/honeytrap:24.04"
 read_only: true
 volumes:
 - /data/honeytrap/attacks:/opt/honeytrap/var/attacks

View File

@@ -1,5 +1,3 @@
-version: '2.3'
 networks:
 ipphoney_local:
@@ -16,7 +14,7 @@ services:
 - ipphoney_local
 ports:
 - "631:631"
-image: "dtagdevsec/ipphoney:alpha"
+image: "dtagdevsec/ipphoney:24.04"
 read_only: true
 volumes:
 - $HOME/tpotce/data/ipphoney/log:/opt/ipphoney/log

View File

@@ -1,5 +1,3 @@
-version: '2.3'
 networks:
 log4pot_local:
@@ -22,7 +20,7 @@ services:
 - "8080:8080"
 - "9200:8080"
 - "25565:8080"
-image: "dtagdevsec/log4pot:alpha"
+image: "dtagdevsec/log4pot:24.04"
 read_only: true
 volumes:
 - $HOME/tpotce/data/log4pot/log:/var/log/log4pot/log

View File

@@ -1,5 +1,3 @@
-version: '2.3'
 networks:
 mailoney_local:
@@ -23,7 +21,7 @@ services:
 ports:
 - "25:25"
 - "587:25"
-image: "dtagdevsec/mailoney:alpha"
+image: "dtagdevsec/mailoney:24.04"
 read_only: true
 volumes:
 - $HOME/tpotce/data/mailoney/log:/opt/mailoney/logs

View File

@@ -1,5 +1,3 @@
-version: '2.3'
 networks:
 medpot_local:
@@ -16,7 +14,7 @@ services:
 - medpot_local
 ports:
 - "2575:2575"
-image: "dtagdevsec/medpot:alpha"
+image: "dtagdevsec/medpot:24.04"
 read_only: true
 volumes:
 - $HOME/tpotce/data/medpot/log/:/var/log/medpot

View File

@@ -1,5 +1,3 @@
-version: '2.3'
 services:
 # nginx service
@@ -22,7 +20,7 @@ services:
 # ports:
 # - "64297:64297"
 # - "127.0.0.1:64304:64304"
-image: "dtagdevsec/nginx:alpha"
+image: "dtagdevsec/nginx:24.04"
 read_only: true
 volumes:
 - $HOME/tpotce/data/nginx/cert/:/etc/nginx/cert/:ro

View File

@@ -1,5 +1,3 @@
-version: '2.3'
 services:
 # P0f service
@@ -10,7 +8,7 @@ services:
 # cpu_count: 1
 # cpus: 0.75
 network_mode: "host"
-image: "dtagdevsec/p0f:alpha"
+image: "dtagdevsec/p0f:24.04"
 read_only: true
 volumes:
 - $HOME/tpotce/data/p0f/log:/var/log/p0f

View File

@@ -1,5 +1,3 @@
-version: '2.3'
 networks:
 redishoneypot_local:
@@ -16,7 +14,7 @@ services:
 - redishoneypot_local
 ports:
 - "6379:6379"
-image: "dtagdevsec/redishoneypot:alpha"
+image: "dtagdevsec/redishoneypot:24.04"
 read_only: true
 volumes:
 - $HOME/tpotce/data/redishoneypot/log:/var/log/redishoneypot

View File

@@ -1,5 +1,3 @@
-version: '2.3'
 networks:
 sentrypeer_local:
@@ -24,7 +22,7 @@ services:
 - "5060:5060/udp"
 - "5060:5060/tcp"
 # - "127.0.0.1:8082:8082"
-image: "dtagdevsec/sentrypeer:alpha"
+image: "dtagdevsec/sentrypeer:24.04"
 read_only: true
 volumes:
 - $HOME/tpotce/data/sentrypeer/log:/var/log/sentrypeer

View File

@@ -1,5 +1,3 @@
-version: '2.3'
 networks:
 spiderfoot_local:
@@ -16,6 +14,6 @@ services:
 - spiderfoot_local
 ports:
 - "127.0.0.1:64303:8080"
-image: "dtagdevsec/spiderfoot:alpha"
+image: "dtagdevsec/spiderfoot:24.04"
 volumes:
 - $HOME/tpotce/data/spiderfoot:/home/spiderfoot/.spiderfoot

View File

@@ -1,5 +1,3 @@
-version: '2.3'
 services:
 # Suricata service
@@ -17,6 +15,6 @@ services:
 - NET_ADMIN
 - SYS_NICE
 - NET_RAW
-image: "dtagdevsec/suricata:alpha"
+image: "dtagdevsec/suricata:24.04"
 volumes:
 - $HOME/tpotce/data/suricata/log:/var/log/suricata

View File

@@ -1,5 +1,3 @@
-version: '2.3'
 networks:
 tanner_local:
@@ -16,7 +14,7 @@ services:
 # cpus: 0.25
 networks:
 - tanner_local
-image: "dtagdevsec/redis:alpha"
+image: "dtagdevsec/redis:24.04"
 read_only: true
 # PHP Sandbox service
@@ -32,7 +30,7 @@ services:
 # cpus: 0.25
 networks:
 - tanner_local
-image: "dtagdevsec/phpox:alpha"
+image: "dtagdevsec/phpox:24.04"
 read_only: true
 # Tanner API Service
@@ -48,7 +46,7 @@ services:
 # cpus: 0.25
 networks:
 - tanner_local
-image: "dtagdevsec/tanner:alpha"
+image: "dtagdevsec/tanner:24.04"
 read_only: true
 volumes:
 - $HOME/tpotce/data/tanner/log:/var/log/tanner
@@ -69,7 +67,7 @@ services:
 # - tanner_local
 # ports:
 # - "127.0.0.1:8091:8091"
-# image: "dtagdevsec/tanner:alpha"
+# image: "dtagdevsec/tanner:24.04"
 # command: tannerweb
 # read_only: true
 # volumes:
@@ -90,7 +88,7 @@ services:
 # cpus: 0.25
 networks:
 - tanner_local
-image: "dtagdevsec/tanner:alpha"
+image: "dtagdevsec/tanner:24.04"
 command: tanner
 read_only: true
 volumes:
@@ -114,6 +112,6 @@ services:
 - tanner_local
 ports:
 - "80:80"
-image: "dtagdevsec/snare:alpha"
+image: "dtagdevsec/snare:24.04"
 depends_on:
 - tanner

View File

@@ -30,9 +30,10 @@ RUN apk --no-cache -U add \
 apk --no-cache -U add --repository=https://dl-cdn.alpinelinux.org/alpine/edge/community \
 yq && \
 #
-# Setup user
+# Setup user, logrotate permissions
 addgroup -g 2000 tpot && \
 adduser -S -s /bin/ash -u 2000 -D -g 2000 tpot && \
+chmod 0600 /opt/tpot/etc/logrotate/logrotate.conf && \
 #
 # Clean up
 apk del --purge git && \
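The new `chmod 0600` pins the logrotate config to owner-only read/write; recent logrotate releases reject configuration files with overly permissive (e.g. world-writable) modes. A small sketch of the effect, with a scratch file standing in for `/opt/tpot/etc/logrotate/logrotate.conf`:

```shell
#!/usr/bin/env bash
# Illustrates the permission fix above; the temp file is a stand-in for
# the real logrotate.conf baked into the tpotinit image.
myCONF=$(mktemp)
chmod 0666 "$myCONF"    # overly permissive: logrotate would refuse such a config
chmod 0600 "$myCONF"    # owner read/write only, as set in the Dockerfile
stat -c '%a' "$myCONF"  # -> 600
rm -f "$myCONF"
```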

View File

@@ -1,23 +1,31 @@
 #!/bin/bash
 myHOST="$1"
-myPACKAGES="nmap"
-myDOCKERCOMPOSEYML="/opt/tpot/etc/tpot.yml"
-function fuGOTROOT {
-myWHOAMI=$(whoami)
-if [ "$myWHOAMI" != "root" ]
-then
-echo "Need to run as root ..."
-exit
-fi
-}
+myPACKAGES="dcmtk ncat nmap yq"
+myDOCKERCOMPOSEYML="$HOME/tpotce/docker-compose.yml"
+myTIMEOUT=180
+myMEDPOTPACKET="
+MSH|^~\&|ADT1|MCM|LABADT|MCM|198808181126|SECURITY|ADT^A01|MSG00001-|P|2.6
+EVN|A01|198808181123
+PID|||PATID1234^5^M11^^AN||JONES^WILLIAM^A^III||19610615|M||2106-3|677 DELAWARE AVENUE^^EVERETT^MA^02149|GL|(919)379-1212|(919)271-3434~(919)277-3114||S||PATID12345001^2^M10^^ACSN|123456789|9-87654^NC
+NK1|1|JONES^BARBARA^K|SPO|||||20011105
+NK1|1|JONES^MICHAEL^A|FTH
+PV1|1|I|2000^2012^01||||004777^LEBAUER^SIDNEY^J.|||SUR||-||ADM|A0
+AL1|1||^PENICILLIN||CODE16~CODE17~CODE18
+AL1|2||^CAT DANDER||CODE257
+DG1|001|I9|1550|MAL NEO LIVER, PRIMARY|19880501103005|F
+PR1|2234|M11|111^CODE151|COMMON PROCEDURES|198809081123
+ROL|45^RECORDER^ROLE MASTER LIST|AD|RO|KATE^SMITH^ELLEN|199505011201
+GT1|1122|1519|BILL^GATES^A
+IN1|001|A357|1234|BCMD|||||132987
+IN2|ID1551001|SSN12345678
+ROL|45^RECORDER^ROLE MASTER LIST|AD|RO|KATE^ELLEN|199505011201"
 function fuCHECKDEPS {
 myINST=""
 for myDEPS in $myPACKAGES;
 do
-myOK=$(dpkg -s $myDEPS | grep ok | awk '{ print $3 }');
+myOK=$(sudo dpkg -s $myDEPS | grep ok | awk '{ print $3 }');
 if [ "$myOK" != "ok" ]
 then
 myINST=$(echo $myINST $myDEPS)
@@ -25,10 +33,10 @@ do
 done
 if [ "$myINST" != "" ]
 then
-apt-get update -y
+sudo apt-get update -y
 for myDEPS in $myINST;
 do
-apt-get install $myDEPS -y
+sudo apt-get install $myDEPS -y
 done
 fi
 }
@@ -50,19 +58,35 @@ myDOCKERCOMPOSEUDPPORTS=$(cat $myDOCKERCOMPOSEYML | grep "udp" | tr -d '"\|#\-'
 myDOCKERCOMPOSEPORTS=$(cat $myDOCKERCOMPOSEYML | yq -r '.services[].ports' | grep ':' | sed -e s/127.0.0.1// | tr -d '", ' | sed -e s/^:// | cut -f1 -d ':' | grep -v "6429\|6430" | sort -gu)
 myUDPPORTS=$(for i in $myDOCKERCOMPOSEUDPPORTS; do echo -n "U:$i,"; done)
 myPORTS=$(for i in $myDOCKERCOMPOSEPORTS; do echo -n "T:$i,"; done)
+#echo ${myUDPPORTS}
+#echo ${myPORTS}
 }
 # Main
-fuGETPORTS
-fuGOTROOT
-fuCHECKDEPS
 fuCHECKFORARGS
+fuCHECKDEPS
+fuGETPORTS
 echo
-echo "Starting scan on all UDP / TCP ports defined in /opt/tpot/etc/tpot.yml ..."
-nmap -sV -sC -v -p $myPORTS $1 &
-nmap -sU -sV -sC -v -p $myUDPPORTS $1 &
+echo "Probing some services ..."
+echo "$myMEDPOTPACKET" | nc "$myHOST" 2575 &
+curl -XGET "http://$myHOST:9200/logstash-*/_search" &
+curl -XPOST -H "Content-Type: application/json" -d '{"name":"test","email":"test@test.com"}' "http://$myHOST:9200/test" &
+echo "I20100" | timeout --foreground 3 nc "$myHOST" 10001 &
+findscu -P -k PatientName="*" $myHOST 11112 &
+getscu -P -k PatientName="*" $myHOST 11112 &
+telnet $myHOST 3299 &
+echo
+echo "Starting scan on all UDP / TCP ports defined in ${myDOCKERCOMPOSEYML} ..."
+timeout --foreground ${myTIMEOUT} nmap -sV -sC -v -p $myPORTS $1 &
+timeout --foreground ${myTIMEOUT} nmap -sU -sV -sC -v -p $myUDPPORTS $1 &
 echo
 wait
+echo "Restarting some containers ..."
+docker stop adbhoney conpot_guardian_ast conpot_kamstrup_382 dionaea
+docker start adbhoney conpot_guardian_ast conpot_kamstrup_382 dionaea
+echo
+echo "Resetting terminal ..."
reset
+echo
 echo "Done."
 echo
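The rewritten test script caps each nmap run with `timeout --foreground ${myTIMEOUT}` so a stalled scan cannot block the final `wait`. With GNU coreutils, `timeout` exits with status 124 when the limit is hit, and `--foreground` keeps the child in the calling terminal's foreground process group so it still receives TTY signals like Ctrl-C. A sketch of those semantics, with `sleep` standing in for the long-running scans:

```shell
#!/usr/bin/env bash
# Demonstrates the timeout semantics used by the test script above.
timeout --foreground 1 sleep 5
echo "exit code: $?"   # -> exit code: 124 (killed at the time limit)

timeout --foreground 5 sleep 1
echo "exit code: $?"   # -> exit code: 0 (finished within the limit)
```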

View File

@ -7,6 +7,8 @@ exec > >(tee /data/tpotinit.log) 2>&1
cleanup() { cleanup() {
echo "# SIGTERM received, cleaning up ..." echo "# SIGTERM received, cleaning up ..."
echo echo
if [ "${TPOT_OSTYPE}" = "linux" ];
then
echo "## ... removing firewall rules." echo "## ... removing firewall rules."
/opt/tpot/bin/rules.sh ${COMPOSE} unset /opt/tpot/bin/rules.sh ${COMPOSE} unset
echo echo
@ -16,6 +18,7 @@ cleanup() {
/opt/tpot/bin/blackhole.sh del /opt/tpot/bin/blackhole.sh del
echo echo
fi fi
fi
kill -TERM "$PID" kill -TERM "$PID"
rm -f /tmp/success rm -f /tmp/success
echo "# Cleanup done." echo "# Cleanup done."
@ -153,26 +156,43 @@ update_permissions
# Check for compatible OSType # Check for compatible OSType
echo echo
echo "# Checking if OSType is compatible." echo "# Checking if OSType is set correctly."
echo echo
myOSTYPE=$(uname -a | grep -Eo "linuxkit") myOSTYPE=$(uname -a | grep -Eo "microsoft|linuxkit")
if [ "${myOSTYPE}" == "linuxkit" ] && [ "${TPOT_OSTYPE}" == "linux" ]; if [ "${myOSTYPE}" == "microsoft" ] && [ "${TPOT_OSTYPE}" != "win" ];
then then
echo "# Docker Desktop for macOS or Windows detected." echo "# Docker Desktop for Windows detected, but TPOT_OSTYPE is not set to win."
echo "# 1. You need to adjust the OSType the T-Pot .env config." echo "# 1. You need to adjust the OSType in the T-Pot .env config."
echo "# 2. You need to use the macos or win docker compose file." echo "# 2. You need to copy compose/mac_win.yml to ./docker-compose.yml."
echo echo
echo "# Aborting." echo "# Aborting."
echo echo
exit 1 sleep 1
else
if ! [ -S /var/run/docker.sock ];
then
echo "# Cannot access /var/run/docker.sock, check docker-compose.yml for proper volume definition."
echo
echo "# Aborting."
exit 1 exit 1
fi fi
if [ "${myOSTYPE}" == "linuxkit" ] && [ "${TPOT_OSTYPE}" != "mac" ];
then
echo "# Docker Desktop for macOS detected, but TPOT_OSTYPE is not set to mac."
echo "# 1. You need to adjust the OSType in the T-Pot .env config."
echo "# 2. You need to copy compose/mac_win.yml to ./docker-compose.yml."
echo
echo "# Aborting."
echo
sleep 1
exit 1
fi
if [ "${myOSTYPE}" == "" ] && [ "${TPOT_OSTYPE}" != "linux" ];
then
echo "# Docker Engine detected, but TPOT_OSTYPE is not set to linux."
echo "# 1. You need to adjust the OSType in the T-Pot .env config."
echo "# 2. You need to copy compose/standard.yml to ./docker-compose.yml."
echo
echo "# Aborting."
echo
sleep 1
exit 1
fi fi
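The three checks above key off `uname -a`: Docker Desktop for Windows runs on a WSL2 kernel whose version string contains `microsoft`, Docker Desktop for macOS runs a `linuxkit` VM kernel, and plain Docker Engine on Linux matches neither. A hypothetical helper (the function name and sample strings are ours, not from the repo) mirroring that mapping onto the expected `TPOT_OSTYPE` value:

```shell
#!/usr/bin/env bash
# Hypothetical helper mirroring the detection logic above: derive the
# expected TPOT_OSTYPE value from a `uname -a` string.
fuGUESSOSTYPE() {
  local myUNAME="$1"
  local myOSTYPE
  myOSTYPE=$(echo "$myUNAME" | grep -Eo "microsoft|linuxkit")
  case "$myOSTYPE" in
    microsoft) echo "win" ;;   # Docker Desktop for Windows (WSL2 kernel)
    linuxkit)  echo "mac" ;;   # Docker Desktop for macOS (linuxkit VM)
    *)         echo "linux" ;; # plain Docker Engine on Linux
  esac
}

fuGUESSOSTYPE "Linux host 5.15.0 #1 SMP x86_64 GNU/Linux"             # -> linux
fuGUESSOSTYPE "Linux host 5.15.133.1-microsoft-standard-WSL2 x86_64"  # -> win
fuGUESSOSTYPE "Linux docker-desktop 6.4.16-linuxkit aarch64"          # -> mac
```

If the guessed value and the configured `TPOT_OSTYPE` disagree, the entrypoint prints the matching hint and aborts, which is exactly what the three `if` blocks above implement.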
# Validate environment variables # Validate environment variables
@ -255,12 +275,8 @@ if [ -f "/data/uuid" ];
fi fi
# Check if TPOT_BLACKHOLE is enabled # Check if TPOT_BLACKHOLE is enabled
if [ "${myOSTYPE}" == "linuxkit" ]; if [ "${TPOT_OSTYPE}" == "linux" ];
then then
echo
echo "# Docker Desktop for macOS or Windows detected, Blackhole feature is not supported."
echo
else
if [ "${TPOT_BLACKHOLE}" == "ENABLED" ] && [ ! -f "/etc/blackhole/mass_scanner.txt" ]; if [ "${TPOT_BLACKHOLE}" == "ENABLED" ] && [ ! -f "/etc/blackhole/mass_scanner.txt" ];
then then
echo echo
@ -278,6 +294,10 @@ if [ "${myOSTYPE}" == "linuxkit" ];
echo echo
echo "# Blackhole is not active." echo "# Blackhole is not active."
fi fi
else
echo
echo "# T-Pot is configured for macOS / Windows. Blackhole is not supported."
echo
fi fi
# Get IP # Get IP
@ -291,7 +311,7 @@ update_permissions
# Update interface settings (p0f and Suricata) and setup iptables to support NFQ based honeypots (glutton, honeytrap) # Update interface settings (p0f and Suricata) and setup iptables to support NFQ based honeypots (glutton, honeytrap)
 ### This is currently not supported on Docker for Desktop, only on Docker Engine for Linux
-if [ "${myOSTYPE}" != "linuxkit" ] && [ "${TPOT_OSTYPE}" == "linux" ];
+if [ "${TPOT_OSTYPE}" == "linux" ];
 then
 echo
 echo "# Get IF, disable offloading, enable promiscious mode for p0f and suricata ..."
@@ -303,10 +323,14 @@ if [ "${myOSTYPE}" != "linuxkit" ] && [ "${TPOT_OSTYPE}" == "linux" ];
 echo "# Adding firewall rules ..."
 echo
 /opt/tpot/bin/rules.sh ${COMPOSE} set
+else
+echo
+echo "# T-Pot is configured for macOS / Windows. Setting up firewall rules on the host is not supported."
+echo
 fi
 # Display open ports
-if [ "${myOSTYPE}" != "linuxkit" ];
+if [ "${TPOT_OSTYPE}" == "linux" ];
 then
 echo
 echo "# This is a list of open ports on the host (netstat -tulpen)."
@@ -317,7 +341,7 @@ if [ "${myOSTYPE}" != "linuxkit" ];
 echo
 else
 echo
-echo "# Docker Desktop for macOS or Windows detected, cannot show open ports on the host."
+echo "# T-Pot is configured for macOS / Windows. Showing open ports from the host is not supported."
 echo
 fi
@@ -331,24 +355,20 @@ touch /tmp/success
 # We want to see true source for UDP packets in container (https://github.com/moby/libnetwork/issues/1994)
 # Start autoheal if running on a supported os
-if [ "${myOSTYPE}" != "linuxkit" ];
+if [ "${TPOT_OSTYPE}" == "linux" ];
 then
 sleep 60
 echo "# Dropping UDP connection tables to improve visibility of true source IPs."
 /usr/sbin/conntrack -D -p udp
+fi
 # Starting container health monitoring
 echo
 figlet "Starting ..."
 figlet "Autoheal"
 echo "# Now monitoring healthcheck enabled containers to automatically restart them when unhealthy."
 echo
 # exec /opt/tpot/autoheal.sh autoheal
 /opt/tpot/autoheal.sh autoheal &
 PID=$!
 wait $PID
 echo "# T-Pot Init and Autoheal were stopped. Exiting."
-else
-echo
-echo "# Docker Desktop for macOS or Windows detected, Conntrack feature is not supported."
-echo
-fi
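The init script above launches the autoheal watcher in the background, records its PID, and blocks on `wait` so the shell stays responsive to signals. A minimal sketch of that pattern (not T-Pot's actual code; `worker` is a stand-in for the autoheal process):

```shell
#!/usr/binntain/env bash
# Sketch of the background-and-wait pattern: start a long-lived worker in the
# background, record its PID, and block on `wait` so a trap can still fire
# and forward termination signals to the worker.
worker() {
  sleep 1
  echo "worker done"
}

trap 'kill "$PID" 2>/dev/null' INT TERM   # forward signals to the worker

worker &
PID=$!
wait "$PID"
status=$?
echo "worker exited with status ${status}"
```

Using `wait "$PID"` rather than `exec` keeps the parent shell alive, which is why the script can print a final status line after the worker stops.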
@@ -1,5 +1,3 @@
-version: '3.9'
 services:
 # T-Pot Init Service
@@ -10,7 +8,7 @@ services:
 - $HOME/tpotce/.env
 restart: "no"
 stop_grace_period: 60s
-image: "dtagdevsec/tpotinit:alpha"
+image: "dtagdevsec/tpotinit:24.04"
 volumes:
 - /var/run/docker.sock:/var/run/docker.sock:ro
 - $HOME/tpotce/data:/data
@@ -1,5 +1,3 @@
-version: '2.3'
 networks:
 wordpot_local:
@@ -16,7 +14,7 @@ services:
 - wordpot_local
 ports:
 - "80:80"
-image: "dtagdevsec/wordpot:alpha"
+image: "dtagdevsec/wordpot:24.04"
 read_only: true
 volumes:
 - $HOME/tpotce/data/wordpot/log:/opt/wordpot/logs/
dps.ps1 (new file)
@@ -0,0 +1,20 @@
# Format, colorize docker ps output
# Define a fixed width for the STATUS column
$statusWidth = 30
# Capture the Docker output into a variable
$dockerOutput = docker ps -f status=running -f status=exited --format "{{.Names}}`t{{.Status}}`t{{.Ports}}"
# Print header with colors
Write-Host ("NAME".PadRight(20) + "STATUS".PadRight($statusWidth) + "PORTS") -ForegroundColor Cyan -NoNewline
Write-Host ""
# Split the output into lines and loop over them
$dockerOutput -split '\r?\n' | ForEach-Object {
if ($_ -ne "") {
$fields = $_ -split "`t"
Write-Host ($fields[0].PadRight(20)) -NoNewline -ForegroundColor Yellow
Write-Host ($fields[1].PadRight($statusWidth)) -NoNewline -ForegroundColor Green
Write-Host ($fields[2]) -ForegroundColor Blue
}
}
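The new dps.ps1 helper above colorizes `docker ps` output on Windows. A rough bash counterpart could look like the sketch below; the function name, colors, and column widths are illustrative, not part of T-Pot (which instead defines a `dps` alias via `grc` in tpot.yml):

```shell
#!/usr/bin/env bash
# Hypothetical bash equivalent of dps.ps1: colorize NAME / STATUS / PORTS.
# format_dps reads tab-separated "name<TAB>status<TAB>ports" lines on stdin,
# so it can be exercised without a running Docker daemon.
format_dps() {
  local yellow=$'\033[33m' green=$'\033[32m' blue=$'\033[34m' reset=$'\033[0m'
  printf '%-20s%-30s%s\n' "NAME" "STATUS" "PORTS"
  while IFS=$'\t' read -r name status ports; do
    [ -z "$name" ] && continue
    printf '%s%-20s%s%s%-30s%s%s%s%s\n' \
      "$yellow" "$name" "$reset" \
      "$green" "$status" "$reset" \
      "$blue" "$ports" "$reset"
  done
}

# Typical call (requires Docker):
#   docker ps -f status=running -f status=exited \
#     --format '{{.Names}}\t{{.Status}}\t{{.Ports}}' | format_dps
```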
@@ -51,6 +51,7 @@ TPOT_PERSISTENCE=on
 # Create credentials with 'htpasswd ~/tpotce/data/nginx/conf/lswebpasswd <username>'
 # 4. On SENSOR: Provide username / password from (3) for TPOT_HIVE_USER as base64 encoded string:
 # "echo -n 'username:password' | base64 -w0"
+# MOBILE: This will set the correct type for T-Pot Mobile (https://github.com/telekom-security/tpotmobile)
 TPOT_TYPE=HIVE
 # T-Pot Hive User (only relevant for SENSOR deployment)
@@ -59,6 +60,18 @@ TPOT_TYPE=HIVE
 # i.e. TPOT_HIVE_USER='dXNlcm5hbWU6cGFzc3dvcmQ='
 TPOT_HIVE_USER=
+# Logstash Sensor SSL verification (only relevant on SENSOR hosts)
+# full: This is the default. Logstash, by default, verifies the complete certificate chain for ssl certificates.
+# This also includes the FQDN and sANs. By default T-Pot will only generate a self-signed certificate which
+# contains a sAN for the HIVE IP. In scenarios where the HIVE needs to be accessed via the Internet, perhaps with
+# a different NAT address, a new certificate needs to be generated before deployment that includes all the
+# IPs and FQDNs as sANs so logstash can successfully establish a connection to the HIVE for transmitting
+# logs. Details here: https://github.com/telekom-security/tpotce?tab=readme-ov-file#distributed-deployment
+# none: This setting will disable the ssl verification check of logstash and should only be used in a testing
+# environment where IPs often change. It is not recommended for a production environment where trust between
+# HIVE and SENSOR is only established through a self signed certificate.
+LS_SSL_VERIFICATION=full
 # T-Pot Hive IP (only relevant for SENSOR deployment)
 # <empty>: This is empty by default.
 # <IP, FQDN>: This can be either an IP (i.e. 192.168.1.1) or a FQDN (i.e. foo.bar.local)
@@ -108,7 +121,7 @@ TPOT_DOCKER_COMPOSE=./docker-compose.yml
 TPOT_REPO=dtagdevsec
 # T-Pot Version Tag
-TPOT_VERSION=alpha
+TPOT_VERSION=24.04
 # T-Pot Pull Policy
 # always: (T-Pot default) Compose implementations SHOULD always pull the image from the registry.
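The LS_SSL_VERIFICATION notes above assume a HIVE certificate whose sANs cover every address a SENSOR will use. A hedged sketch of producing such a self-signed certificate with openssl — file names, subject, and addresses are made up for the example and are not T-Pot's actual certificate tooling:

```shell
# Illustrative only: generate a self-signed cert whose SANs cover both an
# internal HIVE IP and an Internet-facing FQDN / NAT address, so logstash's
# "full" verification can succeed against either endpoint.
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout hive.key -out hive.crt \
  -subj "/CN=hive.example.com" \
  -addext "subjectAltName=DNS:hive.example.com,IP:192.168.1.1,IP:203.0.113.10"

# Inspect the SANs a SENSOR's logstash would check:
openssl x509 -in hive.crt -noout -ext subjectAltName
```

(`-addext` requires OpenSSL 1.1.1 or newer.)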
@@ -1,2 +1,2 @@
 #!/usr/bin/env bash
-docker run -v $HOME/tpotce:/data --entrypoint bash -it -u $(id -u):$(id -g) dtagdevsec/tpotinit:alpha "/opt/tpot/bin/genuser.sh"
+docker run -v $HOME/tpotce:/data --entrypoint bash -it -u $(id -u):$(id -g) dtagdevsec/tpotinit:24.04 "/opt/tpot/bin/genuser.sh"
genuserwin.ps1 (new file)
@@ -0,0 +1,12 @@
# Run genuser.sh within tpotinit, prepare path and file
# Define the volume paths
$homePath = $Env:USERPROFILE + "\tpotce"
$nginxpasswdPath = $homePath + "\data\nginx\conf\nginxpasswd"
# Ensure nginxpasswd file exists
if (-Not (Test-Path $nginxpasswdPath)) {
New-Item -ItemType File -Force -Path $nginxpasswdPath
}
# Run the Docker container without specifying UID / GID
docker run -v "${homePath}:/data" --entrypoint bash -it dtagdevsec/tpotinit:24.04 "/opt/tpot/bin/genuser.sh"
@@ -119,7 +119,7 @@ fi
 if [ ! -f installer/install/tpot.yml ] && [ ! -f tpot.yml ];
 then
 echo "### Now downloading T-Pot Ansible Installation Playbook ... "
-wget -qO tpot.yml https://github.com/telekom-security/tpotce/raw/alpha/installer/install/tpot.yml
+wget -qO tpot.yml https://github.com/telekom-security/tpotce/raw/master/installer/install/tpot.yml
 myANSIBLE_TPOT_PLAYBOOK="tpot.yml"
 echo
 else
@@ -171,24 +171,38 @@ echo "### (H)ive - T-Pot Standard / HIVE installation."
 echo "### Includes also everything you need for a distributed setup with sensors."
 echo "### (S)ensor - T-Pot Sensor installation."
 echo "### Optimized for a distributed installation, without WebUI, Elasticsearch and Kibana."
+echo "### (M)obile - T-Pot Mobile installation."
+echo "### Includes everything to run T-Pot Mobile (available separately)."
 while true; do
-read -p "### Install Type? (h/s) " myTPOT_TYPE
+read -p "### Install Type? (h/s/m) " myTPOT_TYPE
 case "${myTPOT_TYPE}" in
 h|H)
 echo
-echo "### Installing T-Pot Standard / HIVE installation."
+echo "### Installing T-Pot Standard / HIVE."
 myTPOT_TYPE="HIVE"
+cp ${HOME}/tpotce/compose/standard.yml ${HOME}/tpotce/docker-compose.yml
+myINFO=""
 break ;;
 s|S)
 echo
-echo "### Installing T-Pot Sensor installation."
+echo "### Installing T-Pot Sensor."
 myTPOT_TYPE="SENSOR"
+cp ${HOME}/tpotce/compose/sensor.yml ${HOME}/tpotce/docker-compose.yml
+myINFO="### Make sure to deploy SSH keys to this SENSOR and disable SSH password authentication.
+### On HIVE run the tpotce/deploy.sh script to join this SENSOR to the HIVE."
+break ;;
+m|M)
+echo
+echo "### Installing T-Pot Mobile."
+myTPOT_TYPE="MOBILE"
+cp ${HOME}/tpotce/compose/mobile.yml ${HOME}/tpotce/docker-compose.yml
+myINFO=""
 break ;;
 esac
 done
 if [ "${myTPOT_TYPE}" == "HIVE" ];
-# Install T-Pot Type HIVE and ask for WebUI username and password
+# If T-Pot Type is HIVE ask for WebUI username and password
 then
 # Preparing web user for T-Pot
 echo
@@ -259,19 +273,6 @@ if [ "${myTPOT_TYPE}" == "HIVE" ];
 echo
 sed -i "s|^WEB_USER=.*|WEB_USER=${myWEB_USER_ENC_B64}|" ${myTPOT_CONF_FILE}
-# Install T-Pot Type HIVE and use standard.yml for installation
-cp ${HOME}/tpotce/compose/standard.yml ${HOME}/tpotce/docker-compose.yml
-myINFO=""
-fi
-if [ "${myTPOT_TYPE}" == "SENSOR" ];
-# Install T-Pot Type SENSOR and use sensor.yml for installation
-then
-cp ${HOME}/tpotce/compose/sensor.yml ${HOME}/tpotce/docker-compose.yml
-myINFO="### Make sure to deploy SSH keys to this sensor and disable SSH password authentication.
-### On hive run the tpotce/deploy.sh script to join this sensor to the hive."
-fi
 fi
 # Pull docker images
@@ -129,7 +129,6 @@
 - cracklib-runtime
 - cron
 - curl
-- exa
 - git
 - gnupg
 - grc
@@ -146,6 +145,32 @@
 - "Raspbian"
 - "Ubuntu"
+- name: Install exa (Debian, Raspbian, Ubuntu)
+package:
+name:
+- exa
+state: latest
+update_cache: yes
+register: exa_install_result
+ignore_errors: yes
+when: ansible_distribution in ["Debian", "Raspbian", "Ubuntu"]
+tags:
+- "Debian"
+- "Raspbian"
+- "Ubuntu"
+- name: Install eza (if exa failed)
+package:
+name:
+- eza
+state: latest
+update_cache: yes
+when: exa_install_result is failed
+tags:
+- "Debian"
+- "Raspbian"
+- "Ubuntu"
 - name: Install grc from remote repo (AlmaLinux, Rocky)
 ansible.builtin.dnf:
 name: 'https://github.com/kriipke/grc/releases/download/1.13.8/grc-1.13.8-1.el7.noarch.rpm'
@@ -175,6 +200,7 @@
 - wget
 state: latest
 update_cache: yes
+register: exa_install_result
 when: ansible_distribution in ["AlmaLinux", "Rocky"]
 tags:
 - "AlmaLinux"
@@ -210,6 +236,7 @@
 - wget
 state: latest
 update_cache: yes
+register: exa_install_result
 when: ansible_distribution in ["Fedora"]
 tags:
 - "Fedora"
@@ -245,6 +272,7 @@
 - wget
 state: latest
 update_cache: yes
+register: exa_install_result
 when: ansible_distribution in ["openSUSE Tumbleweed"]
 tags:
 - "openSUSE Tumbleweed"
@@ -259,7 +287,7 @@
 become: true
 tasks:
-- name: Remove distribution based Docker packages (AlmaLinux, Debian, Fedora, Raspbian, Rocky, Ubuntu)
+- name: Remove distribution based Docker packages and podman-docker (AlmaLinux, Debian, Fedora, Raspbian, Rocky, Ubuntu)
 package:
 name:
 - docker
@@ -267,6 +295,8 @@
 - docker.io
 - containerd
 - runc
+- podman-docker
+- podman
 state: absent
 update_cache: yes
 when: ansible_distribution in ["AlmaLinux", "Debian", "Fedora", "Raspbian", "Rocky", "Ubuntu"]
@@ -559,6 +589,15 @@
 - "Fedora"
 - "Ubuntu"
+- name: Copy resolved.conf to /etc/systemd (Fedora)
+copy:
+src: /usr/lib/systemd/resolved.conf
+dest: /etc/systemd/resolved.conf
+when: ansible_distribution in ["Fedora"]
+ignore_errors: true
+tags:
+- "Fedora"
 - name: Modify DNSStubListener in resolved.conf (Fedora, Ubuntu)
 lineinfile:
 path: /etc/systemd/resolved.conf
@@ -674,7 +713,7 @@
 when: ansible_distribution in ["AlmaLinux", "Debian", "Fedora", "openSUSE Tumbleweed", "Raspbian", "Rocky", "Ubuntu"]
 failed_when: ansible_user_id == "root"
-- name: Add aliases (All)
+- name: Add aliases with exa (All)
 blockinfile:
 path: ~/.bashrc
 block: |
@@ -688,13 +727,41 @@
 marker: "# {mark} ANSIBLE MANAGED BLOCK"
 insertafter: EOF
 state: present
-when: ansible_distribution in ["AlmaLinux", "Debian", "Fedora", "openSUSE Tumbleweed", "Raspbian", "Rocky", "Ubuntu"]
+when: exa_install_result is succeeded and ansible_distribution in ["AlmaLinux", "Debian", "Fedora", "openSUSE Tumbleweed", "Raspbian", "Rocky", "Ubuntu"]
+tags:
+- "AlmaLinux"
+- "Debian"
+- "Fedora"
+- "openSUSE Tumbleweed"
+- "Raspbian"
+- "Rocky"
+- "Ubuntu"
+- name: Add aliases with eza (Debian, Raspbian, Ubuntu)
+blockinfile:
+path: ~/.bashrc
+block: |
+alias dps='grc --colour=on docker ps -f status=running -f status=exited --format "table {{'{{'}}.Names{{'}}'}}\\t{{'{{'}}.Status{{'}}'}}\\t{{'{{'}}.Ports{{'}}'}}" | sort'
+alias dpsw='watch -c bash -ic dps'
+alias mi='micro'
+alias sudo='sudo '
+alias ls='eza'
+alias ll='eza -hlg'
+alias la='eza -hlag'
+marker: "# {mark} ANSIBLE MANAGED BLOCK"
+insertafter: EOF
+state: present
+when: exa_install_result is failed and ansible_distribution in ["Debian", "Raspbian", "Ubuntu"]
+tags:
+- "Debian"
+- "Raspbian"
+- "Ubuntu"
 - name: Clone / Update T-Pot repository (All)
 git:
 repo: 'https://github.com/telekom-security/tpotce'
 dest: '/home/{{ ansible_user_id }}/tpotce/'
-version: alpha
+version: master
 clone: yes
 update: no
 when: ansible_distribution in ["AlmaLinux", "Debian", "Fedora", "openSUSE Tumbleweed", "Raspbian", "Rocky", "Ubuntu"]
@@ -61,7 +61,7 @@ function fuSELFUPDATE () {
 return
 fi
 ### DEV
-myRESULT=$(git diff --name-only origin/alpha | grep "^update.sh")
+myRESULT=$(git diff --name-only origin/master | grep "^update.sh")
 if [ "$myRESULT" == "update.sh" ];
 then
 echo "###### $myBLUE""Found newer version, will be pulling updates and restart myself.""$myWHITE"