53 Commits

Author SHA1 Message Date
c8b47b09bb Fixes #1715 2024-12-16 14:25:02 +01:00
6136cf3206 Fix Debian Download link
Debian switched from 12.7.0 to 12.8.0
2024-11-18 08:56:00 +01:00
2b3966b6a9 Merge pull request #1695 from tmyksj/patch-1
fix typos in README.md
2024-11-11 11:09:29 +01:00
48ac55f61d fix typos 2024-11-09 20:11:34 +09:00
a869f6f04b Update general-issue-for-t-pot.md 2024-10-28 12:37:53 +01:00
fe2c75389b Update bug-report-for-t-pot.md 2024-10-28 12:37:20 +01:00
c9a87f9f0f Merge pull request #1643 from sarkoziadam/master
Fix conpot docker image errors
2024-10-16 11:54:18 +02:00
34eb7d6e72 Merge pull request #1661 from neon-ninja/patch-1
Correct SSH version in cowrie.cfg
2024-09-28 15:08:45 +02:00
dd741e94b0 Correct SSH version in cowrie.cfg 2024-09-27 16:37:23 +12:00
736cb598d4 Update links, fix #1654 2024-09-20 09:56:15 +02:00
4191cf24b3 Fix conpot docker image errors
Version of pysmi set to previous release, FtpReader function has been removed from the new release
2024-08-24 22:46:20 +02:00
f41c15ec10 Merge pull request #1601 from mattroot/master
Remove Podman-Docker compatibility layer when installing
2024-07-11 13:28:33 +02:00
9283a79045 Clarify removal of SENSORS in the .env config
Fixes #1616
2024-07-11 13:11:46 +02:00
53314b19a1 bump elastic stack to 8.14.2 2024-07-08 15:46:22 +02:00
025583d3ba Rename stale to stale.yml 2024-07-05 21:33:12 +02:00
d1067ad6b2 Action tag / close stale issues and PRs 2024-07-05 21:02:46 +02:00
debb74a31d Action check basic-support-info.yml 2024-07-05 20:56:43 +02:00
12da07b0c2 installer: remove podman 2024-07-02 23:03:30 +00:00
025ab2db46 update cowrie 2024-07-02 16:23:42 +02:00
f31d5b3f73 installer: remove podman-docker 2024-06-28 12:02:12 +02:00
8f3966a675 Remove deprecated version tag from docker compose files
Bump Elastic Stack to 8.13.4
2024-06-19 16:10:03 +02:00
a1d72aa7bd Update Installer to support latest distros as referred to by the README.
Updated README accordingly to better reflect the currently supported / tested distributions.
2024-06-18 17:57:41 +02:00
a510e28ef1 Include config option to disable SSL verification
Adjust README accordingly
Fixes #1543
2024-06-04 15:33:28 +02:00
d83b858be7 Update 2024-06-02 23:04:25 +02:00
1eb3a8a8e3 Update README.md
Fixes #1559
2024-05-30 11:59:31 +02:00
4f82c16bb8 Update ReadMe regarding distributed deployment
Thanks to @SnakeSK and @devArnold for the discussion in #1543
2024-05-22 16:54:29 +02:00
9957a13b41 Update ReadMe regarding distributed deployment
Thanks to @SnakeSK and @devArnold for the discussion in #1543
2024-05-22 11:31:39 +02:00
f4586bc2c4 Update ReadMe regarding Daily Reboot 2024-05-21 13:09:06 +02:00
da647f4b9c Update ReadMe regarding Daily Reboot 2024-05-21 13:03:59 +02:00
ef55a9d434 Update for T-Pot Mobile 2024-05-13 15:49:38 +02:00
996e8a3451 Prepare for T-Pot Mobile
- improve install.sh
2024-05-11 13:31:40 +02:00
621a13df1a Fixes #1540 2024-05-11 10:12:47 +02:00
8ec7255443 Prepare for T-Pot Mobile
- fix port conflict
2024-05-10 16:24:01 +02:00
3453266527 Prepare for T-Pot Mobile
- fix port conflict
2024-05-10 16:17:34 +02:00
812841d086 Prepare for T-Pot Mobile
- fix typo
- cleanup
2024-05-10 15:31:17 +02:00
ade4bd711d Prepare for T-Pot Mobile 2024-05-10 15:19:05 +02:00
8d385a3777 Merge pull request #1538 from glaslos/patch-2
Update Glutton Dockerfile
2024-05-07 14:42:55 +02:00
1078ce537d Update Glutton Dockerfile 2024-05-07 14:26:18 +02:00
5815664417 Fixes #1525
Ubuntu 24.04 switched from exa to eza and the new install playbook reflects these changes.
2024-05-07 11:26:22 +02:00
74a3f375e2 Update mac_win.yml
Conpot throws errors in Docker Desktop for Windows.
2024-05-06 20:16:03 +02:00
0b1281d40f Merge pull request #1536 from t3chn0m4g3/master
Adjust T-Pot for Docker Desktop for Windows with WSL2
2024-05-06 19:42:56 +02:00
3f087b0182 Update entrypoint.sh 2024-05-06 19:37:34 +02:00
f18530575c Adjust README.md for macOS / Windows install 2024-05-06 19:29:17 +02:00
3b94af2d5e Optimize for linux 2024-05-06 19:22:33 +02:00
99539562f2 Prepare fix for Docker Desktop in Windows 2024-05-05 18:57:59 +02:00
0451cd9acd Merge pull request #1533 from ZePotente/patch-1
Typos in customizer.py
2024-05-02 19:12:49 +02:00
5810f5f891 Typos in customizer.py 2024-05-02 14:05:07 -03:00
8ac9598f15 Update README.md 2024-05-02 18:33:51 +02:00
caca93f3a0 #1531, but needs testing 2024-05-02 13:43:16 +02:00
775bc2c1dd update hptest.sh 2024-04-29 19:03:49 +02:00
8b98a78b29 Update Issue templates 2024-04-25 16:01:48 +02:00
72f8b4109a Update issue templates 2024-04-25 14:34:46 +02:00
a8c44d66aa Update activity link 2024-04-23 17:32:50 +02:00
69 changed files with 571 additions and 1007 deletions

.env

@ -51,6 +51,7 @@ TPOT_PERSISTENCE=on
# Create credentials with 'htpasswd ~/tpotce/data/nginx/conf/lswebpasswd <username>'
# 4. On SENSOR: Provide username / password from (3) for TPOT_HIVE_USER as base64 encoded string:
# "echo -n 'username:password' | base64 -w0"
# MOBILE: This will set the correct type for T-Pot Mobile (https://github.com/telekom-security/tpotmobile)
TPOT_TYPE=HIVE
# T-Pot Hive User (only relevant for SENSOR deployment)
@ -59,6 +60,18 @@ TPOT_TYPE=HIVE
# i.e. TPOT_HIVE_USER='dXNlcm5hbWU6cGFzc3dvcmQ='
TPOT_HIVE_USER=
# Logstash Sensor SSL verification (only relevant on SENSOR hosts)
# full: This is the default. Logstash verifies the complete certificate chain, including the FQDN and SANs.
# By default T-Pot will only generate a self-signed certificate which contains a SAN for the HIVE IP. In scenarios
# where the HIVE needs to be accessed via the Internet, possibly through a different NAT address, a new certificate
# that includes all IPs and FQDNs as SANs needs to be generated before deployment, so that logstash can successfully
# establish a connection to the HIVE for transmitting logs.
# Details here: https://github.com/telekom-security/tpotce?tab=readme-ov-file#distributed-deployment
# none: This setting disables the SSL verification check of logstash and should only be used in a testing
# environment where IPs often change. It is not recommended for a production environment where trust between
# HIVE and SENSOR is only established through a self-signed certificate.
LS_SSL_VERIFICATION=full
# T-Pot Hive IP (only relevant for SENSOR deployment)
# <empty>: This is empty by default.
# <IP, FQDN>: This can be either an IP (e.g. 192.168.1.1) or a FQDN (e.g. foo.bar.local)
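Taken together, a minimal SENSOR-side sketch of these settings might look like this (values are illustrative, reusing the examples from the comments above):
```
TPOT_TYPE=SENSOR
# echo -n 'username:password' | base64 -w0
TPOT_HIVE_USER=dXNlcm5hbWU6cGFzc3dvcmQ=
TPOT_HIVE_IP=192.168.1.1
LS_SSL_VERIFICATION=full   # or 'none', for testing environments only
```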


@ -13,6 +13,7 @@ Before you post your issue make sure it has not been answered yet and provide **
- 🔍 Use the [search function](https://github.com/dtag-dev-sec/tpotce/issues?utf8=%E2%9C%93&q=) first
- 🧐 Check our [Wiki](https://github.com/dtag-dev-sec/tpotce/wiki) and the [discussions](https://github.com/telekom-security/tpotce/discussions)
- 📚 Consult the documentation of 💻 your Linux OS, 🐳 [Docker](https://docs.docker.com/), the 🦌 [Elastic stack](https://www.elastic.co/guide/index.html) and the 🍯 [T-Pot Readme](https://github.com/dtag-dev-sec/tpotce/blob/master/README.md).
- ⚙️ The [Troubleshoot Section](https://github.com/telekom-security/tpotce?tab=readme-ov-file#troubleshooting) of the [T-Pot Readme](https://github.com/dtag-dev-sec/tpotce/blob/master/README.md) is a good starting point for collecting the information relevant to your issue and / or for fixing things on your own.
- **⚠️ Provide [BASIC SUPPORT INFORMATION](#-basic-support-information-commands-are-expected-to-run-as-root) or similar detailed information with regard to your issue or we will close the issue or convert it into a discussion without further interaction from the maintainers**.<br>
# ⚠️ Basic support information (commands are expected to run as `root`)
@ -32,7 +33,7 @@ Before you post your issue make sure it has not been answered yet and provide **
- Did you modify any scripts or configs? If yes, please attach the changes.
- Please provide a screenshot of `htop` and `docker stats`.
- How much free disk space is available (`df -h`)?
- What is the current container status (`dps.sh`)?
- What is the current container status (`dps`)?
- On Linux: What is the status of the T-Pot service (`systemctl status tpot`)?
- What ports are being occupied? Stop T-Pot `systemctl stop tpot` and run `grc netstat -tulpen`
- Stop T-Pot `systemctl stop tpot`


@ -13,6 +13,7 @@ Before you post your issue make sure it has not been answered yet and provide **
- 🔍 Use the [search function](https://github.com/dtag-dev-sec/tpotce/issues?utf8=%E2%9C%93&q=) first
- 🧐 Check our [Wiki](https://github.com/dtag-dev-sec/tpotce/wiki) and the [discussions](https://github.com/telekom-security/tpotce/discussions)
- 📚 Consult the documentation of 💻 your Linux OS, 🐳 [Docker](https://docs.docker.com/), the 🦌 [Elastic stack](https://www.elastic.co/guide/index.html) and the 🍯 [T-Pot Readme](https://github.com/dtag-dev-sec/tpotce/blob/master/README.md).
- ⚙️ The [Troubleshoot Section](https://github.com/telekom-security/tpotce?tab=readme-ov-file#troubleshooting) of the [T-Pot Readme](https://github.com/dtag-dev-sec/tpotce/blob/master/README.md) is a good starting point for collecting the information relevant to your issue and / or for fixing things on your own.
- **⚠️ Provide [BASIC SUPPORT INFORMATION](#-basic-support-information-commands-are-expected-to-run-as-root) or similar detailed information with regard to your issue or we will close the issue or convert it into a discussion without further interaction from the maintainers**.<br>
# ⚠️ Basic support information (commands are expected to run as `root`)
@ -32,7 +33,7 @@ Before you post your issue make sure it has not been answered yet and provide **
- Did you modify any scripts or configs? If yes, please attach the changes.
- Please provide a screenshot of `htop` and `docker stats`.
- How much free disk space is available (`df -h`)?
- What is the current container status (`dps.sh`)?
- What is the current container status (`dps`)?
- On Linux: What is the status of the T-Pot service (`systemctl status tpot`)?
- What ports are being occupied? Stop T-Pot `systemctl stop tpot` and run `grc netstat -tulpen`
- Stop T-Pot `systemctl stop tpot`


@ -0,0 +1,49 @@
name: "Check Basic Support Info"
on:
issues:
types: [opened, edited]
permissions:
issues: write
contents: read
jobs:
check-issue:
runs-on: ubuntu-latest
steps:
- name: Check out the repository
uses: actions/checkout@v4
- name: Install jq
run: sudo apt-get install jq -y
- name: Check issue for basic support info
id: check_issue
run: |
REQUIRED_INFO=("What OS are you T-Pot running on?" "What is the version of the OS" "What T-Pot version are you currently using" "What architecture are you running on" "Review the \`~/install_tpot.log\`" "How long has your installation been running?" "Did you install upgrades, packages or use the update script?" "Did you modify any scripts or configs?" "Please provide a screenshot of \`htop\` and \`docker stats\`." "How much free disk space is available" "What is the current container status" "What is the status of the T-Pot service" "What ports are being occupied?")
ISSUE_BODY=$(cat $GITHUB_EVENT_PATH | jq -r '.issue.body')
MISSING_INFO=()
for info in "${REQUIRED_INFO[@]}"; do
if [[ "$ISSUE_BODY" != *"$info"* ]]; then
MISSING_INFO+=("$info")
fi
done
if [ ${#MISSING_INFO[@]} -ne 0 ]; then
echo "missing=true" >> $GITHUB_ENV
else
echo "missing=false" >> $GITHUB_ENV
fi
- name: Add "no basic support info" label if necessary
if: env.missing == 'true'
run: gh issue edit "$NUMBER" --add-label "$LABELS"
env:
GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
GH_REPO: ${{ github.repository }}
NUMBER: ${{ github.event.issue.number }}
LABELS: no basic support info
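The substring check above can be exercised locally before pushing changes; a minimal sketch, assuming an issue body saved to a file `issue.md` (the file name is hypothetical):
```
REQUIRED_INFO=("What OS are you T-Pot running on?" "How much free disk space is available")
ISSUE_BODY=$(cat issue.md)
for info in "${REQUIRED_INFO[@]}"; do
  # Same bash substring test the workflow uses
  [[ "$ISSUE_BODY" != *"$info"* ]] && echo "missing: $info"
done
```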

.github/workflows/stale.yml

@ -0,0 +1,24 @@
name: "Tag stale issues and pull requests"
on:
schedule:
- cron: "0 0 * * *" # Runs every day at midnight
workflow_dispatch: # Allows the workflow to be triggered manually
jobs:
stale:
runs-on: ubuntu-latest
steps:
- uses: actions/stale@v7
with:
repo-token: ${{ secrets.GITHUB_TOKEN }}
stale-issue-message: "This issue has been marked as stale because it has had no activity for 7 days. If you are still experiencing this issue, please comment or it will be closed in 7 days."
stale-pr-message: "This pull request has been marked as stale because it has had no activity for 7 days. If you are still working on this, please comment or it will be closed in 7 days."
days-before-stale: 7
days-before-close: 7
stale-issue-label: "stale"
exempt-issue-labels: "keep-open"
stale-pr-label: "stale"
exempt-pr-labels: "keep-open"
operations-per-run: 30
debug-only: false
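Since `workflow_dispatch` is enabled, the run can also be kicked off by hand; a sketch assuming an authenticated GitHub CLI run from a checkout of the repository:
```
gh workflow run stale.yml
gh run list --workflow=stale.yml --limit 5
```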

README.md

@ -16,11 +16,9 @@ env bash -c "$(curl -sL https://github.com/telekom-security/tpotce/raw/master/in
```
* Follow instructions, read messages, check for possible port conflicts and reboot
# Table of Contents
<!-- TOC -->
* [T-Pot - The All In One Multi Honeypot Platform](#t-pot---the-all-in-one-multi-honeypot-platform)
* [TL;DR](#tldr)
* [Table of Contents](#table-of-contents)
* [Disclaimer](#disclaimer)
* [Technical Concept](#technical-concept)
* [Technical Architecture](#technical-architecture)
@ -39,11 +37,13 @@ env bash -c "$(curl -sL https://github.com/telekom-security/tpotce/raw/master/in
* [macOS & Windows](#macos--windows)
* [Installation Types](#installation-types)
* [Standard / HIVE](#standard--hive)
* [**Distributed**](#distributed)
* [Distributed](#distributed)
* [Uninstall T-Pot](#uninstall-t-pot)
* [First Start](#first-start)
* [Standalone First Start](#standalone-first-start)
* [Distributed Deployment](#distributed-deployment)
* [Planning and Certificates](#planning-and-certificates)
* [Deploying Sensors](#deploying-sensors)
* [Community Data Submission](#community-data-submission)
* [Opt-In HPFEEDS Data Submission](#opt-in-hpfeeds-data-submission)
* [Remote Access and Tools](#remote-access-and-tools)
@ -60,8 +60,10 @@ env bash -c "$(curl -sL https://github.com/telekom-security/tpotce/raw/master/in
* [Maintenance](#maintenance)
* [General Updates](#general-updates)
* [Update Script](#update-script)
* [Daily Reboot](#daily-reboot)
* [Known Issues](#known-issues)
* [**Docker Images Fail to Download**](#docker-images-fail-to-download)
* [Docker Images Fail to Download](#docker-images-fail-to-download)
* [T-Pot Networking Fails](#t-pot-networking-fails)
* [Start T-Pot](#start-t-pot)
* [Stop T-Pot](#stop-t-pot)
* [T-Pot Data Folder](#t-pot-data-folder)
@ -71,8 +73,8 @@ env bash -c "$(curl -sL https://github.com/telekom-security/tpotce/raw/master/in
* [Blackhole](#blackhole)
* [Add Users to Nginx (T-Pot WebUI)](#add-users-to-nginx-t-pot-webui)
* [Import and Export Kibana Objects](#import-and-export-kibana-objects)
* [**Export**](#export)
* [**Import**](#import)
* [Export](#export)
* [Import](#import)
* [Troubleshooting](#troubleshooting)
* [Logs](#logs)
* [RAM and Storage](#ram-and-storage)
@ -133,7 +135,7 @@ T-Pot offers docker images for the following honeypots ...
* [T-Pot-Attack-Map](https://github.com/t3chn0m4g3/t-pot-attack-map) a beautifully animated attack map for T-Pot.
* [P0f](https://lcamtuf.coredump.cx/p0f3/) is a tool for purely passive traffic fingerprinting.
* [Spiderfoot](https://github.com/smicallef/spiderfoot) an open source intelligence automation tool.
* [Suricata](http://suricata-ids.org/) a Network Security Monitoring engine.
* [Suricata](https://suricata.io/) a Network Security Monitoring engine.
... to give you the best out-of-the-box experience possible and an easy-to-use multi-honeypot system.
<br><br>
@ -281,16 +283,22 @@ Once you are familiar with how things work you should choose a network you suspe
<br><br>
## Choose your distro
Choose a supported distro of your choice. It is recommended to use the minimum / netiso installers linked below and only install a minimalistic set of packages. SSH is mandatory or you will not be able to connect to the machine remotely.
**Steps to Follow:**
| Distribution Name | x64 | arm64 |
|:--------------------------------------------------------------|:--------------------------------------------------------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------|
| [Alma Linux Boot](https://almalinux.org) | [download](https://repo.almalinux.org/almalinux/9.3/isos/x86_64/AlmaLinux-9.3-x86_64-boot.iso) | [download](https://repo.almalinux.org/almalinux/9.3/isos/aarch64/AlmaLinux-9.3-aarch64-boot.iso) |
| [Debian Netinst](https://www.debian.org/index.en.html) | [download](https://cdimage.debian.org/debian-cd/current/amd64/iso-cd/debian-12.5.0-amd64-netinst.iso) | [download](https://cdimage.debian.org/debian-cd/current/arm64/iso-cd/debian-12.5.0-arm64-netinst.iso) |
| [Fedora Netinst](https://fedoraproject.org) | [download](https://download.fedoraproject.org/pub/fedora/linux/releases/39/Server/x86_64/iso/Fedora-Server-netinst-x86_64-39-1.5.iso) | [download](https://download.fedoraproject.org/pub/fedora/linux/releases/39/Server/aarch64/iso/Fedora-Server-netinst-aarch64-39-1.5.iso) |
| [OpenSuse Tumbleweed Network Image](https://www.opensuse.org) | [download](https://download.opensuse.org/tumbleweed/iso/openSUSE-Tumbleweed-NET-x86_64-Current.iso) | [download](https://download.opensuse.org/ports/aarch64/tumbleweed/iso/openSUSE-Tumbleweed-NET-aarch64-Current.iso) |
| [Rocky Linux Boot](https://rockylinux.org) | [download](https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.3-x86_64-boot.iso) | [download](https://download.rockylinux.org/pub/rocky/9/isos/aarch64/Rocky-9.3-aarch64-boot.iso) |
| [Ubuntu Live Server](https://ubuntu.com) | [download](https://releases.ubuntu.com/22.04.4/ubuntu-22.04.4-live-server-amd64.iso) | [download](https://cdimage.ubuntu.com/releases/22.04/release/ubuntu-22.04.4-live-server-arm64.iso) |
1. Download a supported Linux distribution from the list below.
2. During installation choose a **minimum**, **netinstall** or **server** version that will only install essential packages.
3. **Never** install a graphical desktop environment such as Gnome or KDE. T-Pot will fail to work with it due to port conflicts.
4. Make sure to install SSH, so you can connect to the machine remotely.
| Distribution Name | x64 | arm64 |
|:-----------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------|
| [Alma Linux OS 9.4 Boot ISO](https://almalinux.org) | [download](https://repo.almalinux.org/almalinux/9.4/isos/x86_64/AlmaLinux-9.4-x86_64-boot.iso) | [download](https://repo.almalinux.org/almalinux/9.4/isos/aarch64/AlmaLinux-9.4-aarch64-boot.iso) |
| [Debian 12 Network Install](https://www.debian.org/CD/netinst/index.en.html) | [download](https://cdimage.debian.org/debian-cd/current/amd64/iso-cd/debian-12.8.0-amd64-netinst.iso) | [download](https://cdimage.debian.org/debian-cd/current/arm64/iso-cd/debian-12.8.0-arm64-netinst.iso) |
| [Fedora Server 40 Network Install](https://fedoraproject.org/server/download) | [download](https://download.fedoraproject.org/pub/fedora/linux/releases/40/Server/x86_64/iso/Fedora-Server-netinst-x86_64-40-1.14.iso) | [download](https://download.fedoraproject.org/pub/fedora/linux/releases/40/Server/aarch64/iso/Fedora-Server-netinst-aarch64-40-1.14.iso) |
| [OpenSuse Tumbleweed Network Image](https://get.opensuse.org/tumbleweed/#download) | [download](https://download.opensuse.org/tumbleweed/iso/openSUSE-Tumbleweed-NET-x86_64-Current.iso) | [download](https://download.opensuse.org/ports/aarch64/tumbleweed/iso/openSUSE-Tumbleweed-NET-aarch64-Current.iso) |
| [Rocky Linux OS 9.4 Boot ISO](https://rockylinux.org/download) | [download](https://download.rockylinux.org/pub/rocky/9.4/isos/x86_64/Rocky-9.4-x86_64-boot.iso) | [download](https://download.rockylinux.org/pub/rocky/9.4/isos/aarch64/Rocky-9.4-aarch64-boot.iso) |
| [Ubuntu 24.04 Live Server](https://ubuntu.com/download/server) | [download](https://releases.ubuntu.com/24.04/ubuntu-24.04.1-live-server-amd64.iso) | [download](https://cdimage.ubuntu.com/releases/24.04/release/ubuntu-24.04.1-live-server-arm64.iso) |
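If you want to verify a downloaded image before installation, most distributions publish checksums next to the ISO; a sketch for the Debian image from the table (the `SHA256SUMS` location is an assumption based on Debian's usual layout):
```
curl -LO https://cdimage.debian.org/debian-cd/current/amd64/iso-cd/debian-12.8.0-amd64-netinst.iso
curl -LO https://cdimage.debian.org/debian-cd/current/amd64/iso-cd/SHA256SUMS
sha256sum -c --ignore-missing SHA256SUMS
```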
<br>
@ -327,10 +335,10 @@ Choose a supported distro of your choice. It is recommended to use the minimum /
Sometimes it is just nice if you can spin up a T-Pot instance on macOS or Windows, e.g. for development, testing or just the fun of it. As Docker Desktop is rather limited, not all honeypot types or T-Pot features are supported. Also remember that by default the macOS and Windows firewalls block remote access, so testing is limited to the host. For production it is recommended to run T-Pot on [Linux](#choose-your-distro).<br>
To get things up and running just follow these steps:
1. Install Docker Desktop for [macOS](https://docs.docker.com/desktop/install/mac-install/) or [Windows](https://docs.docker.com/desktop/install/windows-install/).
2. Clone the GitHub repository: `git clone https://github.com/telekom-security/tpotce`
2. Clone the GitHub repository: `git clone https://github.com/telekom-security/tpotce` (on Windows make sure the code is checked out with `LF` instead of `CRLF`!)
3. Go to: `cd ~/tpotce`
4. Copy `cp compose/mac_win.yml ./docker-compose.yml`
5. Create a `WEB_USER` by running `~/tpotce/genuser.sh`
5. Create a `WEB_USER` by running `~/tpotce/genuser.sh` (macOS) or `~/tpotce/genuserwin.ps1` (Windows)
6. Adjust the `.env` file by changing `TPOT_OSTYPE=linux` to either `mac` or `win`:
```
# OSType (linux, mac, win)
@ -348,7 +356,7 @@ With T-Pot Standard / HIVE all services, tools, honeypots, etc. will be installe
Once the installation is finished you can proceed to [First Start](#first-start).
<br><br>
### **Distributed**
### Distributed
The distributed version of T-Pot requires at least two hosts
- the T-Pot **HIVE**, the standard installation of T-Pot (install this first!),
- and a T-Pot **SENSOR**, which will host only the honeypots, some tools and transmit log data to the **HIVE**.
@ -382,7 +390,36 @@ There is not much to do except to login and check via `dps.sh` if all services a
<br><br>
## Distributed Deployment
Once you have rebooted the **SENSOR** as instructed by the installer you can continue with the distributed deployment by logging into **HIVE** and go to `cd ~/tpotce` folder.
### Planning and Certificates
The distributed deployment requires planning, as **T-Pot Init** will only create a self-signed certificate for the IP of the **HIVE** host, which is usually suitable for simple setups. Since **logstash** checks for a valid certificate upon connection, a distributed setup where the **HIVE** needs to be reachable on multiple IPs (e.g. an RFC 1918 address and a public NAT IP) and maybe even a domain name will result in a connection error where the certificate cannot be validated: such a setup needs a certificate with a common name and SANs (Subject Alternative Names).<br>
Before deploying any sensors make sure you have planned out domain names and IPs properly to avoid issues with the certificate. For more details see [issue #1543](https://github.com/telekom-security/tpotce/issues/1543).<br>
Adjust the example to your IP / domain setup and follow the commands to change the certificate of the **HIVE**:
```
sudo systemctl stop tpot
sudo openssl req \
-nodes \
-x509 \
-sha512 \
-newkey rsa:8192 \
-keyout "$HOME/tpotce/data/nginx/cert/nginx.key" \
-out "$HOME/tpotce/data/nginx/cert/nginx.crt" \
-days 3650 \
-subj '/C=AU/ST=Some-State/O=Internet Widgits Pty Ltd' \
-addext "subjectAltName = IP:192.168.1.200, IP:1.2.3.4, DNS:my.primary.domain, DNS:my.secondary.domain"
sudo chmod 774 $HOME/tpotce/data/nginx/cert/*
sudo chown tpot:tpot $HOME/tpotce/data/nginx/cert/*
sudo systemctl start tpot
```
The T-Pot configuration file (`.env`) allows disabling the SSL verification for logstash connections from the **SENSOR** to the **HIVE** by setting `LS_SSL_VERIFICATION=none`. For security reasons this is only recommended for lab or test environments.<br><br>
If you choose to use a valid certificate for the **HIVE** signed by a CA (e.g. Let's Encrypt), logstash, and therefore the **SENSOR**, should have no problem connecting and transmitting its logs to the **HIVE**.
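Before reconnecting any sensors it is worth confirming that the regenerated certificate actually carries all the intended SANs; a quick check, using the path from the example above:
```
sudo openssl x509 -in $HOME/tpotce/data/nginx/cert/nginx.crt -noout -text | grep -A1 "Subject Alternative Name"
```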
### Deploying Sensors
Once you have rebooted the **SENSOR** as instructed by the installer you can continue with the distributed deployment by logging into the **HIVE** and changing into the `~/tpotce` folder. Make sure you have understood [Planning and Certificates](#planning-and-certificates) before continuing with the actual deployment.
If you have not done so already, generate an SSH key to securely log in to the **SENSOR** and to allow `Ansible` to run a playbook on the sensor:
1. Run `ssh-keygen`, follow the instructions and leave the passphrase empty:
@ -414,6 +451,10 @@ If you have not done already generate a SSH key to securely login to the **SENSO
4. Once the key is successfully deployed run `./deploy.sh` and follow the instructions.
<br><br>
### Removing Sensors
Identify the `TPOT_HIVE_USER` ENV on the SENSOR in the `$HOME/tpotce/.env` config (it is a base64 encoded string). Now identify the same string in the `LS_WEB_USER` ENV on the HIVE in the `$HOME/tpotce/.env` config. Remove the string and restart T-Pot.<br>
Now you can safely delete the SENSOR machine.
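A sketch of matching the two entries, reusing the example string from the `.env` documentation above:
```
# On the SENSOR: show the base64 string to remove
grep 'TPOT_HIVE_USER' $HOME/tpotce/.env
# Decode it to confirm which user it belongs to
echo 'dXNlcm5hbWU6cGFzc3dvcmQ=' | base64 -d   # -> username:password
# On the HIVE: locate the same string in LS_WEB_USER, remove it, then restart T-Pot
grep 'LS_WEB_USER' $HOME/tpotce/.env
```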
## Community Data Submission
T-Pot is provided in order to make it accessible to everyone interested in honeypots. By default, the captured data is submitted to a community backend. This community backend uses the data to feed [Sicherheitstacho](https://sicherheitstacho.eu).
You may opt out of the submission by removing the `# Ewsposter service` from `~/tpotce/docker-compose.yml` by following these steps:
@ -497,7 +538,7 @@ On the T-Pot Landing Page just click on `Cyberchef` and you will be forwarded to
<br><br>
## Elasticvue
On the T-Pot Landing Page just click on `Elastivue` and you will be forwarded to Elastivue.
On the T-Pot Landing Page just click on `Elasticvue` and you will be forwarded to Elasticvue.
![Elasticvue](doc/elasticvue.png)
<br><br>
@ -565,18 +606,25 @@ The update script will ...
- update all files in `~/tpotce` to be in sync with the T-Pot master branch
- restore your custom `ews.cfg` from `~/tpotce/data/ews/conf` and the T-Pot configuration (`~/tpotce/.env`).
## Daily Reboot
By default T-Pot will add a daily reboot, including some cleanup. You can adjust this line with `sudo crontab -e`:
```
#Ansible: T-Pot Daily Reboot
42 2 * * * bash -c 'systemctl stop tpot.service && docker container prune -f; docker image prune -f; docker volume prune -f; /usr/sbin/shutdown -r +1 "T-Pot Daily Reboot"'
```
## Known Issues
The following issues are known, simply follow the described steps to solve them.
<br><br>
### **Docker Images Fail to Download**
### Docker Images Fail to Download
Some time ago Docker introduced download [rate limits](https://docs.docker.com/docker-hub/download-rate-limit/#:~:text=Docker%20Hub%20limits%20the%20number,pulls%20per%206%20hour%20period.). If you are frequently downloading Docker images via a single or shared IP, the IP address might have exhausted the Docker download rate limit. Log in to your Docker account to extend the rate limit.
```
sudo su -
docker login
```
### **T-Pot Networking Fails**
### T-Pot Networking Fails
T-Pot is designed to only run on machines with a single NIC. T-Pot will try to grab the interface with the default route; however, it is not guaranteed that this will always succeed. Ideally, use T-Pot on machines with only a single NIC.
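To see which interface T-Pot will likely pick, check which NIC holds the default route, e.g.:
```
ip -4 route show default   # the interface after 'dev' holds the default route
```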
## Start T-Pot
@ -634,14 +682,15 @@ Enabling this feature will drastically reduce attackers visibility and consequen
## Add Users to Nginx (T-Pot WebUI)
Nginx (T-Pot WebUI) allows you to add as many `<WEB_USER>` accounts as you want (according to the [User Types](#user-types)).<br>
To **add** a new user run `~/tpotce/genuser.sh` which will also update the accounts without the need to restart T-Pot.<br>
To **remove** users open `~/tpotce/.env`, locate `WEB_USER` and remove the corresponding base64 string (to decode: `echo <base64_string> | base64 -d`, or open CyberChef and load "From Base64" recipe). For the changes to take effect you need to restart T-Pot using `systemctl stop tpot` and `systemctl start tpot` or `sudo reboot`.
To **add** a new user run `~/tpotce/genuser.sh`.<br>
To **remove** users open `~/tpotce/.env`, locate `WEB_USER` and remove the corresponding base64 string (to decode: `echo <base64_string> | base64 -d`, or open CyberChef and load "From Base64" recipe).<br>
For the changes to take effect you need to restart T-Pot using `systemctl stop tpot` and `systemctl start tpot` or `sudo reboot`.
<br><br>
## Import and Export Kibana Objects
Some T-Pot updates will require you to update the Kibana objects, either to support new honeypots or to improve existing dashboards or visualizations. Make sure to ***export*** first so you do not lose any of your adjustments.
### **Export**
### Export
1. Go to Kibana
2. Click on "Stack Management"
3. Click on "Saved Objects"
@ -649,7 +698,7 @@ Some T-Pot updates will require you to update the Kibana objects. Either to supp
5. Click on "Export all"
This will export a NDJSON file with all your objects. Always run a full export to make sure all references are included.
### **Import**
### Import
1. [Download the NDJSON file](https://github.com/dtag-dev-sec/tpotce/blob/master/docker/tpotinit/dist/etc/objects/kibana_export.ndjson.zip) and unzip it.
2. Go to Kibana
3. Click on "Stack Management"
@ -706,7 +755,7 @@ Use the search function, it is possible a similar discussion has been opened alr
# Licenses
The software that T-Pot is built on uses the following licenses.
<br>GPLv2: [conpot](https://github.com/mushorg/conpot/blob/master/LICENSE.txt), [dionaea](https://github.com/DinoTools/dionaea/blob/master/LICENSE), [honeytrap](https://github.com/armedpot/honeytrap/blob/master/LICENSE), [suricata](http://suricata-ids.org/about/open-source/)
<br>GPLv2: [conpot](https://github.com/mushorg/conpot/blob/master/LICENSE.txt), [dionaea](https://github.com/DinoTools/dionaea/blob/master/LICENSE), [honeytrap](https://github.com/armedpot/honeytrap/blob/master/LICENSE), [suricata](https://suricata.io/features/open-source/)
<br>GPLv3: [adbhoney](https://github.com/huuck/ADBHoney), [elasticpot](https://gitlab.com/bontchev/elasticpot/-/blob/master/LICENSE), [ewsposter](https://github.com/telekom-security/ews/), [log4pot](https://github.com/thomaspatzke/Log4Pot/blob/master/LICENSE), [fatt](https://github.com/0x4D31/fatt/blob/master/LICENSE), [heralding](https://github.com/johnnykv/heralding/blob/master/LICENSE.txt), [ipphoney](https://gitlab.com/bontchev/ipphoney/-/blob/master/LICENSE), [redishoneypot](https://github.com/cypwnpwnsocute/RedisHoneyPot/blob/main/LICENSE), [sentrypeer](https://github.com/SentryPeer/SentryPeer/blob/main/LICENSE.GPL-3.0-only), [snare](https://github.com/mushorg/snare/blob/master/LICENSE), [tanner](https://github.com/mushorg/snare/blob/master/LICENSE)
<br>Apache 2 License: [cyberchef](https://github.com/gchq/CyberChef/blob/master/LICENSE), [dicompot](https://github.com/nsmfoo/dicompot/blob/master/LICENSE), [elasticsearch](https://github.com/elasticsearch/elasticsearch/blob/master/LICENSE.txt), [logstash](https://github.com/elasticsearch/logstash/blob/master/LICENSE), [kibana](https://github.com/elasticsearch/kibana/blob/master/LICENSE.md), [docker](https://github.com/docker/docker/blob/master/LICENSE)
<br>MIT license: [autoheal](https://github.com/willfarrell/docker-autoheal?tab=MIT-1-ov-file#readme), [ciscoasa](https://github.com/Cymmetria/ciscoasa_honeypot/blob/master/LICENSE), [ddospot](https://github.com/aelth/ddospot/blob/master/LICENSE), [elasticvue](https://github.com/cars10/elasticvue/blob/master/LICENSE), [glutton](https://github.com/mushorg/glutton/blob/master/LICENSE), [hellpot](https://github.com/yunginnanet/HellPot/blob/master/LICENSE), [maltrail](https://github.com/stamparm/maltrail/blob/master/LICENSE)
@ -754,7 +803,7 @@ Without open source and the development community we are proud to be a part of,
* [spiderfoot](https://github.com/smicallef/spiderfoot)
* [snare](https://github.com/mushorg/snare/graphs/contributors)
* [tanner](https://github.com/mushorg/tanner/graphs/contributors)
* [suricata](https://github.com/inliniac/suricata/graphs/contributors)
* [suricata](https://github.com/OISF/suricata/graphs/contributors)
* [wordpot](https://github.com/gbrindisi/wordpot)
**The following companies and organizations**
@ -776,4 +825,4 @@ And from @robcowart (creator of [ElastiFlow](https://github.com/robcowart/elasti
<br><br>
**Thank you!**
![Alt](https://repobeats.axiom.co/api/embed/642a1032ac85996c81b12cf9f6393103058b8a04.svg "Repobeats analytics image")
![Alt](https://repobeats.axiom.co/api/embed/75368f879326a61370e485df52906ae0c1f59fbb.svg "Repobeats analytics image")


@ -9,10 +9,10 @@ version = \
___) | __/ | \ V /| | (_| __/ | |_) | |_| | | | (_| | __/ |
|____/ \___|_| \_/ |_|\___\___| |____/ \__,_|_|_|\__,_|\___|_| v0.21
# This script is intended for users who want to build a customized docker-compose.yml forT-Pot.
# This script is intended for users who want to build a customized docker-compose.yml for T-Pot.
# T-Pot Service Builder will ask for all the docker services to be included in docker-compose.yml.
# The configuration file will be checked for conflicting ports.
# Port conflicts have to be resolve manually or re-running the script and excluding the conflicting services.
# Port conflicts have to be resolved manually or by re-running the script and excluding the conflicting services.
# Review the resulting docker-compose-custom.yml and adjust to your needs by (un)commenting the corresponding lines in the config.
"""
@ -157,7 +157,6 @@ def main():
remove_unused_networks(selected_services, services, networks)
output_config = {
'version': '3.9',
'networks': networks,
'services': selected_services,
}


@ -1,15 +1,9 @@
# T-Pot: MAC_WIN
version: '3.9'
networks:
tpotinit_local:
adbhoney_local:
ciscoasa_local:
citrixhoneypot_local:
conpot_local_IEC104:
conpot_local_guardian_ast:
conpot_local_ipmi:
conpot_local_kamstrup_382:
cowrie_local:
ddospot_local:
dicompot_local:
@ -21,6 +15,7 @@ networks:
medpot_local:
redishoneypot_local:
sentrypeer_local:
suricata_local:
tanner_local:
wordpot_local:
nginx_local:
@ -52,6 +47,7 @@ services:
- ${TPOT_DOCKER_COMPOSE}:/tmp/tpot/docker-compose.yml:ro
- ${TPOT_DATA_PATH}/blackhole:/etc/blackhole
- ${TPOT_DATA_PATH}:/data
- /var/run/docker.sock:/var/run/docker.sock:ro
##################
@ -113,108 +109,6 @@ services:
volumes:
- ${TPOT_DATA_PATH}/citrixhoneypot/log:/opt/citrixhoneypot/logs
# Conpot IEC104 service
conpot_IEC104:
container_name: conpot_iec104
restart: always
depends_on:
tpotinit:
condition: service_healthy
environment:
- CONPOT_CONFIG=/etc/conpot/conpot.cfg
- CONPOT_JSON_LOG=/var/log/conpot/conpot_IEC104.json
- CONPOT_LOG=/var/log/conpot/conpot_IEC104.log
- CONPOT_TEMPLATE=IEC104
- CONPOT_TMP=/tmp/conpot
tmpfs:
- /tmp/conpot:uid=2000,gid=2000
networks:
- conpot_local_IEC104
ports:
- "161:161/udp"
- "2404:2404"
image: ${TPOT_REPO}/conpot:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
volumes:
- ${TPOT_DATA_PATH}/conpot/log:/var/log/conpot
# Conpot guardian_ast service
conpot_guardian_ast:
container_name: conpot_guardian_ast
restart: always
depends_on:
tpotinit:
condition: service_healthy
environment:
- CONPOT_CONFIG=/etc/conpot/conpot.cfg
- CONPOT_JSON_LOG=/var/log/conpot/conpot_guardian_ast.json
- CONPOT_LOG=/var/log/conpot/conpot_guardian_ast.log
- CONPOT_TEMPLATE=guardian_ast
- CONPOT_TMP=/tmp/conpot
tmpfs:
- /tmp/conpot:uid=2000,gid=2000
networks:
- conpot_local_guardian_ast
ports:
- "10001:10001"
image: ${TPOT_REPO}/conpot:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
volumes:
- ${TPOT_DATA_PATH}/conpot/log:/var/log/conpot
# Conpot ipmi
conpot_ipmi:
container_name: conpot_ipmi
restart: always
depends_on:
tpotinit:
condition: service_healthy
environment:
- CONPOT_CONFIG=/etc/conpot/conpot.cfg
- CONPOT_JSON_LOG=/var/log/conpot/conpot_ipmi.json
- CONPOT_LOG=/var/log/conpot/conpot_ipmi.log
- CONPOT_TEMPLATE=ipmi
- CONPOT_TMP=/tmp/conpot
tmpfs:
- /tmp/conpot:uid=2000,gid=2000
networks:
- conpot_local_ipmi
ports:
- "623:623/udp"
image: ${TPOT_REPO}/conpot:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
volumes:
- ${TPOT_DATA_PATH}/conpot/log:/var/log/conpot
# Conpot kamstrup_382
conpot_kamstrup_382:
container_name: conpot_kamstrup_382
restart: always
depends_on:
tpotinit:
condition: service_healthy
environment:
- CONPOT_CONFIG=/etc/conpot/conpot.cfg
- CONPOT_JSON_LOG=/var/log/conpot/conpot_kamstrup_382.json
- CONPOT_LOG=/var/log/conpot/conpot_kamstrup_382.log
- CONPOT_TEMPLATE=kamstrup_382
- CONPOT_TMP=/tmp/conpot
tmpfs:
- /tmp/conpot:uid=2000,gid=2000
networks:
- conpot_local_kamstrup_382
ports:
- "1025:1025"
- "50100:50100"
image: ${TPOT_REPO}/conpot:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
volumes:
- ${TPOT_DATA_PATH}/conpot/log:/var/log/conpot
# Cowrie service
cowrie:
container_name: cowrie
@ -250,7 +144,7 @@ services:
- ddospot_local
ports:
- "19:19/udp"
- "53:53/udp"
# - "53:53/udp"
- "123:123/udp"
# - "161:161/udp"
- "1900:1900/udp"
@ -302,7 +196,7 @@ services:
- "81:81"
- "135:135"
# - "443:443"
- "445:445"
# - "445:445"
- "1433:1433"
- "1723:1723"
- "1883:1883"
@ -616,15 +510,16 @@ services:
depends_on:
tpotinit:
condition: service_healthy
environment:
- OINKCODE=${OINKCODE:-OPEN} # Default to OPEN if unset or NULL (value provided by T-Pot .env)
# Loading external Rules from URL
# - FROMURL="https://username:password@yoururl.com|https://username:password@otherurl.com"
network_mode: "host"
networks:
- suricata_local
cap_add:
- NET_ADMIN
- SYS_NICE
- NET_RAW
environment:
- OINKCODE=${OINKCODE:-OPEN} # Default to OPEN if unset or NULL (value provided by T-Pot .env)
# Loading external Rules from URL
# - FROMURL="https://username:password@yoururl.com|https://username:password@otherurl.com"
image: ${TPOT_REPO}/suricata:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
volumes:
@ -695,6 +590,7 @@ services:
- TPOT_TYPE=${TPOT_TYPE:-HIVE}
- TPOT_HIVE_USER=${TPOT_HIVE_USER}
- TPOT_HIVE_IP=${TPOT_HIVE_IP}
- LS_SSL_VERIFICATION=${LS_SSL_VERIFICATION:-full}
ports:
- "127.0.0.1:64305:64305"
mem_limit: 2g


@ -1,6 +1,4 @@
# T-Pot: MINI
version: '3.9'
networks:
adbhoney_local:
ciscoasa_local:
@ -411,6 +409,7 @@ services:
- TPOT_TYPE=${TPOT_TYPE:-HIVE}
- TPOT_HIVE_USER=${TPOT_HIVE_USER}
- TPOT_HIVE_IP=${TPOT_HIVE_IP}
- LS_SSL_VERIFICATION=${LS_SSL_VERIFICATION:-full}
ports:
- "127.0.0.1:64305:64305"
mem_limit: 2g

View File

@ -3,8 +3,6 @@
# T-Pot on a Raspberry Pi 4 (8GB of RAM).
# The standard docker compose file should work mostly fine (depending on traffic) if you do not enable a
# desktop environment such as LXDE and meet the minimum requirements of 8GB RAM.
version: '3.9'
networks:
ciscoasa_local:
citrixhoneypot_local:
@ -347,6 +345,30 @@ services:
volumes:
- ${TPOT_DATA_PATH}/ipphoney/log:/opt/ipphoney/log
# Log4pot service
log4pot:
container_name: log4pot
restart: always
depends_on:
logstash:
condition: service_healthy
tmpfs:
- /tmp:uid=2000,gid=2000
networks:
- log4pot_local
ports:
# - "80:8080"
# - "443:8080"
# - "8080:8080"
# - "9200:8080"
- "25565:8080"
image: ${TPOT_REPO}/log4pot:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
volumes:
- ${TPOT_DATA_PATH}/log4pot/log:/var/log/log4pot/log
- ${TPOT_DATA_PATH}/log4pot/payloads:/var/log/log4pot/payloads
# Mailoney service
mailoney:
container_name: mailoney
@ -371,30 +393,6 @@ services:
volumes:
- ${TPOT_DATA_PATH}/mailoney/log:/opt/mailoney/logs
# Log4pot service
log4pot:
container_name: log4pot
restart: always
depends_on:
logstash:
condition: service_healthy
tmpfs:
- /tmp:uid=2000,gid=2000
networks:
- log4pot_local
ports:
# - "80:8080"
# - "443:8080"
- "8080:8080"
# - "9200:8080"
- "25565:8080"
image: ${TPOT_REPO}/log4pot:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
volumes:
- ${TPOT_DATA_PATH}/log4pot/log:/var/log/log4pot/log
- ${TPOT_DATA_PATH}/log4pot/payloads:/var/log/log4pot/payloads
# Medpot service
medpot:
container_name: medpot
@ -537,7 +535,7 @@ services:
container_name: wordpot
restart: always
depends_on:
tpotinit:
logstash:
condition: service_healthy
networks:
- wordpot_local
@ -594,6 +592,7 @@ services:
- TPOT_TYPE=${TPOT_TYPE:-HIVE}
- TPOT_HIVE_USER=${TPOT_HIVE_USER}
- TPOT_HIVE_IP=${TPOT_HIVE_IP}
- LS_SSL_VERIFICATION=${LS_SSL_VERIFICATION:-full}
ports:
- "127.0.0.1:64305:64305"
mem_limit: 2g


@ -1,628 +0,0 @@
# T-Pot: MOBILE
# Note: This docker compose file has been adjusted to limit the number of tools, services and honeypots to run
# T-Pot on a Raspberry Pi 4 (8GB of RAM).
# The standard docker compose file should work mostly fine (depending on traffic) if you do not enable a
# desktop environment such as LXDE and meet the minimum requirements of 8GB RAM.
version: '3.9'
networks:
ciscoasa_local:
citrixhoneypot_local:
conpot_local_IEC104:
conpot_local_ipmi:
conpot_local_kamstrup_382:
cowrie_local:
dicompot_local:
dionaea_local:
elasticpot_local:
heralding_local:
ipphoney_local:
log4pot_local:
mailoney_local:
medpot_local:
redishoneypot_local:
sentrypeer_local:
tanner_local:
wordpot_local:
ewsposter_local:
services:
#########################################
#### DEV
#########################################
#### T-Pot Init - Never delete this!
#########################################
# T-Pot Init Service
tpotinit:
container_name: tpotinit
env_file:
- .env
restart: always
stop_grace_period: 60s
tmpfs:
- /tmp/etc:uid=2000,gid=2000
- /tmp/:uid=2000,gid=2000
network_mode: "host"
cap_add:
- NET_ADMIN
image: ${TPOT_REPO}/tpotinit:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
volumes:
- ${TPOT_DOCKER_COMPOSE}:/tmp/tpot/docker-compose.yml:ro
- ${TPOT_DATA_PATH}/blackhole:/etc/blackhole
- ${TPOT_DATA_PATH}:/data
- /var/run/docker.sock:/var/run/docker.sock:ro
##################
#### Honeypots
##################
# Ciscoasa service
ciscoasa:
container_name: ciscoasa
restart: always
depends_on:
logstash:
condition: service_healthy
tmpfs:
- /tmp/ciscoasa:uid=2000,gid=2000
networks:
- ciscoasa_local
ports:
- "5000:5000/udp"
- "8443:8443"
image: ${TPOT_REPO}/ciscoasa:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
volumes:
- ${TPOT_DATA_PATH}/ciscoasa/log:/var/log/ciscoasa
# CitrixHoneypot service
citrixhoneypot:
container_name: citrixhoneypot
restart: always
depends_on:
logstash:
condition: service_healthy
networks:
- citrixhoneypot_local
ports:
- "443:443"
image: ${TPOT_REPO}/citrixhoneypot:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
volumes:
- ${TPOT_DATA_PATH}/citrixhoneypot/log:/opt/citrixhoneypot/logs
# Conpot IEC104 service
conpot_IEC104:
container_name: conpot_iec104
restart: always
depends_on:
logstash:
condition: service_healthy
environment:
- CONPOT_CONFIG=/etc/conpot/conpot.cfg
- CONPOT_JSON_LOG=/var/log/conpot/conpot_IEC104.json
- CONPOT_LOG=/var/log/conpot/conpot_IEC104.log
- CONPOT_TEMPLATE=IEC104
- CONPOT_TMP=/tmp/conpot
tmpfs:
- /tmp/conpot:uid=2000,gid=2000
networks:
- conpot_local_IEC104
ports:
- "161:161/udp"
- "2404:2404"
image: ${TPOT_REPO}/conpot:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
volumes:
- ${TPOT_DATA_PATH}/conpot/log:/var/log/conpot
# Conpot ipmi
conpot_ipmi:
container_name: conpot_ipmi
restart: always
depends_on:
logstash:
condition: service_healthy
environment:
- CONPOT_CONFIG=/etc/conpot/conpot.cfg
- CONPOT_JSON_LOG=/var/log/conpot/conpot_ipmi.json
- CONPOT_LOG=/var/log/conpot/conpot_ipmi.log
- CONPOT_TEMPLATE=ipmi
- CONPOT_TMP=/tmp/conpot
tmpfs:
- /tmp/conpot:uid=2000,gid=2000
networks:
- conpot_local_ipmi
ports:
- "623:623/udp"
image: ${TPOT_REPO}/conpot:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
volumes:
- ${TPOT_DATA_PATH}/conpot/log:/var/log/conpot
# Conpot kamstrup_382
conpot_kamstrup_382:
container_name: conpot_kamstrup_382
restart: always
depends_on:
logstash:
condition: service_healthy
environment:
- CONPOT_CONFIG=/etc/conpot/conpot.cfg
- CONPOT_JSON_LOG=/var/log/conpot/conpot_kamstrup_382.json
- CONPOT_LOG=/var/log/conpot/conpot_kamstrup_382.log
- CONPOT_TEMPLATE=kamstrup_382
- CONPOT_TMP=/tmp/conpot
tmpfs:
- /tmp/conpot:uid=2000,gid=2000
networks:
- conpot_local_kamstrup_382
ports:
- "1025:1025"
- "50100:50100"
image: ${TPOT_REPO}/conpot:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
volumes:
- ${TPOT_DATA_PATH}/conpot/log:/var/log/conpot
# Cowrie service
cowrie:
container_name: cowrie
restart: always
depends_on:
logstash:
condition: service_healthy
tmpfs:
- /tmp/cowrie:uid=2000,gid=2000
- /tmp/cowrie/data:uid=2000,gid=2000
networks:
- cowrie_local
ports:
- "22:22"
- "23:23"
image: ${TPOT_REPO}/cowrie:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
volumes:
- ${TPOT_DATA_PATH}/cowrie/downloads:/home/cowrie/cowrie/dl
- ${TPOT_DATA_PATH}/cowrie/keys:/home/cowrie/cowrie/etc
- ${TPOT_DATA_PATH}/cowrie/log:/home/cowrie/cowrie/log
- ${TPOT_DATA_PATH}/cowrie/log/tty:/home/cowrie/cowrie/log/tty
# Dicompot service
# Get the Horos Client for testing: https://horosproject.org/
# Get Dicom images (CC BY 3.0): https://www.cancerimagingarchive.net/collections/
# Put images (which must be in Dicom DCM format or it will not work!) into /data/dicompot/images
dicompot:
container_name: dicompot
restart: always
depends_on:
logstash:
condition: service_healthy
networks:
- dicompot_local
ports:
- "11112:11112"
image: ${TPOT_REPO}/dicompot:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
volumes:
- ${TPOT_DATA_PATH}/dicompot/log:/var/log/dicompot
# - ${TPOT_DATA_PATH}/dicompot/images:/opt/dicompot/images
# Dionaea service
dionaea:
container_name: dionaea
stdin_open: true
tty: true
restart: always
depends_on:
logstash:
condition: service_healthy
networks:
- dionaea_local
ports:
- "20:20"
- "21:21"
- "42:42"
- "69:69/udp"
- "81:81"
- "135:135"
# - "443:443"
- "445:445"
- "1433:1433"
- "1723:1723"
- "1883:1883"
- "3306:3306"
# - "5060:5060"
# - "5060:5060/udp"
# - "5061:5061"
- "27017:27017"
image: ${TPOT_REPO}/dionaea:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
volumes:
- ${TPOT_DATA_PATH}/dionaea/roots/ftp:/opt/dionaea/var/dionaea/roots/ftp
- ${TPOT_DATA_PATH}/dionaea/roots/tftp:/opt/dionaea/var/dionaea/roots/tftp
- ${TPOT_DATA_PATH}/dionaea/roots/www:/opt/dionaea/var/dionaea/roots/www
- ${TPOT_DATA_PATH}/dionaea/roots/upnp:/opt/dionaea/var/dionaea/roots/upnp
- ${TPOT_DATA_PATH}/dionaea:/opt/dionaea/var/dionaea
- ${TPOT_DATA_PATH}/dionaea/binaries:/opt/dionaea/var/dionaea/binaries
- ${TPOT_DATA_PATH}/dionaea/log:/opt/dionaea/var/log
- ${TPOT_DATA_PATH}/dionaea/rtp:/opt/dionaea/var/dionaea/rtp
# ElasticPot service
elasticpot:
container_name: elasticpot
restart: always
depends_on:
logstash:
condition: service_healthy
networks:
- elasticpot_local
ports:
- "9200:9200"
image: ${TPOT_REPO}/elasticpot:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
volumes:
- ${TPOT_DATA_PATH}/elasticpot/log:/opt/elasticpot/log
# Heralding service
heralding:
container_name: heralding
restart: always
depends_on:
logstash:
condition: service_healthy
tmpfs:
- /tmp/heralding:uid=2000,gid=2000
networks:
- heralding_local
ports:
# - "21:21"
# - "22:22"
# - "23:23"
# - "25:25"
# - "80:80"
- "110:110"
- "143:143"
# - "443:443"
- "465:465"
- "993:993"
- "995:995"
# - "3306:3306"
# - "3389:3389"
- "1080:1080"
- "5432:5432"
- "5900:5900"
image: ${TPOT_REPO}/heralding:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
volumes:
- ${TPOT_DATA_PATH}/heralding/log:/var/log/heralding
# Honeytrap service
honeytrap:
container_name: honeytrap
restart: always
depends_on:
logstash:
condition: service_healthy
tmpfs:
- /tmp/honeytrap:uid=2000,gid=2000
network_mode: "host"
cap_add:
- NET_ADMIN
image: ${TPOT_REPO}/honeytrap:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
volumes:
- ${TPOT_DATA_PATH}/honeytrap/attacks:/opt/honeytrap/var/attacks
- ${TPOT_DATA_PATH}/honeytrap/downloads:/opt/honeytrap/var/downloads
- ${TPOT_DATA_PATH}/honeytrap/log:/opt/honeytrap/var/log
# Ipphoney service
ipphoney:
container_name: ipphoney
restart: always
depends_on:
logstash:
condition: service_healthy
networks:
- ipphoney_local
ports:
- "631:631"
image: ${TPOT_REPO}/ipphoney:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
volumes:
- ${TPOT_DATA_PATH}/ipphoney/log:/opt/ipphoney/log
# Mailoney service
mailoney:
container_name: mailoney
restart: always
depends_on:
logstash:
condition: service_healthy
environment:
- HPFEEDS_SERVER=
- HPFEEDS_IDENT=user
- HPFEEDS_SECRET=pass
- HPFEEDS_PORT=20000
- HPFEEDS_CHANNELPREFIX=prefix
networks:
- mailoney_local
ports:
- "25:25"
- "587:25"
image: ${TPOT_REPO}/mailoney:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
volumes:
- ${TPOT_DATA_PATH}/mailoney/log:/opt/mailoney/logs
# Log4pot service
log4pot:
container_name: log4pot
restart: always
depends_on:
logstash:
condition: service_healthy
tmpfs:
- /tmp:uid=2000,gid=2000
networks:
- log4pot_local
ports:
# - "80:8080"
# - "443:8080"
- "8080:8080"
# - "9200:8080"
- "25565:8080"
image: ${TPOT_REPO}/log4pot:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
volumes:
- ${TPOT_DATA_PATH}/log4pot/log:/var/log/log4pot/log
- ${TPOT_DATA_PATH}/log4pot/payloads:/var/log/log4pot/payloads
# Medpot service
medpot:
container_name: medpot
restart: always
depends_on:
logstash:
condition: service_healthy
networks:
- medpot_local
ports:
- "2575:2575"
image: ${TPOT_REPO}/medpot:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
volumes:
- ${TPOT_DATA_PATH}/medpot/log/:/var/log/medpot
# Redishoneypot service
redishoneypot:
container_name: redishoneypot
restart: always
depends_on:
logstash:
condition: service_healthy
networks:
- redishoneypot_local
ports:
- "6379:6379"
image: ${TPOT_REPO}/redishoneypot:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
volumes:
- ${TPOT_DATA_PATH}/redishoneypot/log:/var/log/redishoneypot
# SentryPeer service
sentrypeer:
container_name: sentrypeer
restart: always
depends_on:
logstash:
condition: service_healthy
# environment:
# - SENTRYPEER_PEER_TO_PEER=1
networks:
- sentrypeer_local
ports:
# - "4222:4222/udp"
- "5060:5060/tcp"
- "5060:5060/udp"
# - "127.0.0.1:8082:8082"
image: ${TPOT_REPO}/sentrypeer:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
volumes:
- ${TPOT_DATA_PATH}/sentrypeer/log:/var/log/sentrypeer
#### Snare / Tanner
## Tanner Redis Service
tanner_redis:
container_name: tanner_redis
restart: always
depends_on:
logstash:
condition: service_healthy
tty: true
networks:
- tanner_local
image: ${TPOT_REPO}/redis:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
## PHP Sandbox service
tanner_phpox:
container_name: tanner_phpox
restart: always
depends_on:
logstash:
condition: service_healthy
tty: true
networks:
- tanner_local
image: ${TPOT_REPO}/phpox:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
## Tanner API Service
tanner_api:
container_name: tanner_api
restart: always
depends_on:
- tanner_redis
tmpfs:
- /tmp/tanner:uid=2000,gid=2000
tty: true
networks:
- tanner_local
image: ${TPOT_REPO}/tanner:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
read_only: true
volumes:
- ${TPOT_DATA_PATH}/tanner/log:/var/log/tanner
command: tannerapi
## Tanner Service
tanner:
container_name: tanner
restart: always
depends_on:
- tanner_api
- tanner_phpox
tmpfs:
- /tmp/tanner:uid=2000,gid=2000
tty: true
networks:
- tanner_local
image: ${TPOT_REPO}/tanner:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
command: tanner
read_only: true
volumes:
- ${TPOT_DATA_PATH}/tanner/log:/var/log/tanner
- ${TPOT_DATA_PATH}/tanner/files:/opt/tanner/files
## Snare Service
snare:
container_name: snare
restart: always
depends_on:
- tanner
tty: true
networks:
- tanner_local
ports:
- "80:80"
image: ${TPOT_REPO}/snare:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
# Wordpot service
wordpot:
container_name: wordpot
restart: always
depends_on:
tpotinit:
condition: service_healthy
networks:
- wordpot_local
ports:
- "82:80"
image: ${TPOT_REPO}/wordpot:${TPOT_VERSION}
read_only: true
volumes:
- ${TPOT_DATA_PATH}/wordpot/log:/opt/wordpot/logs/
##################
#### Tools
##################
#### ELK
## Elasticsearch service
elasticsearch:
container_name: elasticsearch
restart: always
depends_on:
tpotinit:
condition: service_healthy
environment:
- bootstrap.memory_lock=true
- ES_JAVA_OPTS=-Xms2048m -Xmx2048m
- ES_TMPDIR=/tmp
cap_add:
- IPC_LOCK
ulimits:
memlock:
soft: -1
hard: -1
nofile:
soft: 65536
hard: 65536
mem_limit: 4g
ports:
- "127.0.0.1:64298:9200"
image: ${TPOT_REPO}/elasticsearch:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
volumes:
- ${TPOT_DATA_PATH}:/data
## Logstash service
logstash:
container_name: logstash
restart: always
depends_on:
elasticsearch:
condition: service_healthy
environment:
- LS_JAVA_OPTS=-Xms1024m -Xmx1024m
- TPOT_TYPE=${TPOT_TYPE:-HIVE}
- TPOT_HIVE_USER=${TPOT_HIVE_USER}
- TPOT_HIVE_IP=${TPOT_HIVE_IP}
ports:
- "127.0.0.1:64305:64305"
mem_limit: 2g
image: ${TPOT_REPO}/logstash:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
volumes:
- ${TPOT_DATA_PATH}:/data
#### /ELK
# Ewsposter service
ewsposter:
container_name: ewsposter
restart: always
depends_on:
logstash:
condition: service_healthy
networks:
- ewsposter_local
environment:
- EWS_HPFEEDS_ENABLE=false
- EWS_HPFEEDS_HOST=host
- EWS_HPFEEDS_PORT=port
- EWS_HPFEEDS_CHANNELS=channels
- EWS_HPFEEDS_IDENT=user
- EWS_HPFEEDS_SECRET=secret
- EWS_HPFEEDS_TLSCERT=false
- EWS_HPFEEDS_FORMAT=json
image: ${TPOT_REPO}/ewsposter:${TPOT_VERSION}
pull_policy: ${TPOT_PULL_POLICY}
volumes:
- ${TPOT_DATA_PATH}:/data
- ${TPOT_DATA_PATH}/ews/conf/ews.ip:/opt/ewsposter/ews.ip


@ -1,6 +1,4 @@
# T-Pot: SENSOR
version: '3.9'
networks:
adbhoney_local:
ciscoasa_local:
@ -664,6 +662,7 @@ services:
- TPOT_TYPE=${TPOT_TYPE:-HIVE}
- TPOT_HIVE_USER=${TPOT_HIVE_USER}
- TPOT_HIVE_IP=${TPOT_HIVE_IP}
- LS_SSL_VERIFICATION=${LS_SSL_VERIFICATION:-full}
ports:
- "127.0.0.1:64305:64305"
mem_limit: 2g


@ -1,6 +1,4 @@
# T-Pot: STANDARD
version: '3.9'
networks:
adbhoney_local:
ciscoasa_local:
@ -706,6 +704,7 @@ services:
- TPOT_TYPE=${TPOT_TYPE:-HIVE}
- TPOT_HIVE_USER=${TPOT_HIVE_USER}
- TPOT_HIVE_IP=${TPOT_HIVE_IP}
- LS_SSL_VERIFICATION=${LS_SSL_VERIFICATION:-full}
ports:
- "127.0.0.1:64305:64305"
mem_limit: 2g


@ -837,6 +837,7 @@ services:
- TPOT_TYPE=${TPOT_TYPE:-HIVE}
- TPOT_HIVE_USER=${TPOT_HIVE_USER}
- TPOT_HIVE_IP=${TPOT_HIVE_IP}
- LS_SSL_VERIFICATION=${LS_SSL_VERIFICATION:-full}
ports:
- "127.0.0.1:64305:64305"
mem_limit: 2g


@ -1,6 +1,4 @@
# T-Pot: STANDARD
version: '3.9'
networks:
adbhoney_local:
ciscoasa_local:
@ -706,6 +704,7 @@ services:
- TPOT_TYPE=${TPOT_TYPE:-HIVE}
- TPOT_HIVE_USER=${TPOT_HIVE_USER}
- TPOT_HIVE_IP=${TPOT_HIVE_IP}
- LS_SSL_VERIFICATION=${LS_SSL_VERIFICATION:-full}
ports:
- "127.0.0.1:64305:64305"
mem_limit: 2g


@ -1,5 +1,3 @@
version: '2.3'
networks:
adbhoney_local:


@ -7,7 +7,8 @@ myPLATFORMS="linux/amd64,linux/arm64"
myHUBORG_DOCKER="dtagdevsec"
myHUBORG_GITHUB="ghcr.io/telekom-security"
myTAG="24.04"
myIMAGESBASE="tpotinit adbhoney ciscoasa citrixhoneypot conpot cowrie ddospot dicompot dionaea elasticpot endlessh ewsposter fatt glutton hellpot heralding honeypots honeytrap ipphoney log4pot mailoney medpot nginx p0f redishoneypot sentrypeer spiderfoot suricata wordpot"
#myIMAGESBASE="tpotinit adbhoney ciscoasa citrixhoneypot conpot cowrie ddospot dicompot dionaea elasticpot endlessh ewsposter fatt glutton hellpot heralding honeypots honeytrap ipphoney log4pot mailoney medpot nginx p0f redishoneypot sentrypeer spiderfoot suricata wordpot"
myIMAGESBASE="tpotinit adbhoney ciscoasa citrixhoneypot conpot cowrie ddospot dicompot dionaea elasticpot endlessh ewsposter fatt hellpot heralding honeypots honeytrap ipphoney log4pot mailoney medpot nginx p0f redishoneypot sentrypeer spiderfoot suricata wordpot"
myIMAGESELK="elasticsearch kibana logstash map"
myIMAGESTANNER="phpox redis snare tanner"
myBUILDERLOG="builder.log"


@ -1,5 +1,3 @@
version: '2.3'
networks:
ciscoasa_local:


@ -1,5 +1,3 @@
version: '2.3'
networks:
citrixhoneypot_local:


@ -1,5 +1,5 @@
pysnmp-mibs
pysmi
pysmi==0.3.4
libtaxii>=1.1.0
crc16
scapy==2.4.3rc1
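Per the commit message, `pysmi` is pinned to 0.3.4 because the `FtpReader` function conpot relies on was removed in later releases. A quick sanity check after installing, e.g.:
```
pip install "pysmi==0.3.4"
pip show pysmi | grep '^Version'   # expect: Version: 0.3.4
```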


@ -1,6 +1,4 @@
# CONPOT TEMPLATE=[default, IEC104, guardian_ast, ipmi, kamstrup_382, proxy]
version: '2.3'
networks:
conpot_local_default:
conpot_local_IEC104:


@ -22,11 +22,11 @@ filesystem = share/cowrie/fs.pickle
processes = share/cowrie/cmdoutput.json
#arch = linux-x64-lsb
arch = bsd-aarch64-lsb, bsd-aarch64-msb, bsd-bfin-msb, bsd-mips-lsb, bsd-mips-msb, bsd-mips64-lsb, bsd-mips64-msb, bsd-powepc-msb, bsd-powepc64-lsb, bsd-riscv64-lsb, bsd-sparc-msb, bsd-sparc64-msb, bsd-x32-lsb, bsd-x64-lsb, linux-aarch64-lsb, linux-aarch64-msb, linux-alpha-lsb, linux-am33-lsb, linux-arc-lsb, linux-arc-msb, linux-arm-lsb, linux-arm-msb, linux-avr32-lsb, linux-bfin-lsb, linux-c6x-lsb, linux-c6x-msb, linux-cris-lsb, linux-frv-msb, linux-h8300-msb, linux-hppa-msb, linux-hppa64-msb, linux-ia64-lsb, linux-m32r-msb, linux-m68k-msb, linux-microblaze-msb, linux-mips-lsb, linux-mips-msb, linux-mips64-lsb, linux-mips64-msb, linux-mn10300-lsb, linux-nios-lsb, linux-nios-msb, linux-powerpc-lsb, linux-powerpc-msb, linux-powerpc64-lsb, linux-powerpc64-msb, linux-riscv64-lsb, linux-s390x-msb, linux-sh-lsb, linux-sh-msb, linux-sparc-msb, linux-sparc64-msb, linux-tilegx-lsb, linux-tilegx-msb, linux-tilegx64-lsb, linux-tilegx64-msb, linux-x64-lsb, linux-x86-lsb, linux-xtensa-msb, osx-x32-lsb, osx-x64-lsb
kernel_version = 3.2.0-4-amd64
kernel_build_string = #1 SMP Debian 3.2.68-1+deb7u1
kernel_version = 5.15.0-23-generic-amd64
kernel_build_string = #25~22.04-Ubuntu SMP
hardware_platform = x86_64
operating_system = GNU/Linux
ssh_version = OpenSSH_7.9p1, OpenSSL 1.1.1a 20 Nov 2018
ssh_version = OpenSSH_8.9p1, OpenSSL 3.0.2 15 Mar 2022
[ssh]
enabled = true
@ -39,8 +39,7 @@ ecdsa_private_key = etc/ssh_host_ecdsa_key
ed25519_public_key = etc/ssh_host_ed25519_key.pub
ed25519_private_key = etc/ssh_host_ed25519_key
public_key_auth = ssh-rsa,ssh-dss,ecdsa-sha2-nistp256,ssh-ed25519
#version = SSH-2.0-OpenSSH_7.2p2 Ubuntu-4ubuntu2.2
version = SSH-2.0-OpenSSH_7.9p1
version = SSH-2.0-OpenSSH_8.9p1 Ubuntu-3ubuntu0.10
ciphers = aes128-ctr,aes192-ctr,aes256-ctr,aes256-cbc,aes192-cbc,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc
macs = hmac-sha2-512,hmac-sha2-384,hmac-sha2-256,hmac-sha1,hmac-md5
compression = zlib@openssh.com,zlib,none
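Since scanners fingerprint the honeypot by its banner, it may be worth confirming the updated version string is actually served after redeploying; a minimal check, with 192.0.2.10 as a placeholder for the honeypot's address:

# Grab the first line the SSH service sends
echo | nc -w 3 192.0.2.10 22 | head -n 1
# Expected: SSH-2.0-OpenSSH_8.9p1 Ubuntu-3ubuntu0.10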

docker/cowrie/dist/cowrie_tpot.cfg (vendored, new file, +72 lines)
View File

@ -0,0 +1,72 @@
[honeypot]
hostname = ubuntu
log_path = log
download_path = dl
share_path= share/cowrie
state_path = /tmp/cowrie/data
etc_path = etc
contents_path = honeyfs
txtcmds_path = txtcmds
ttylog = true
ttylog_path = log/tty
interactive_timeout = 180
authentication_timeout = 120
backend = shell
timezone = UTC
auth_class = AuthRandom
auth_class_parameters = 2, 5, 10
data_path = /tmp/cowrie/data
[shell]
filesystem = share/cowrie/fs.pickle
processes = share/cowrie/cmdoutput.json
#arch = linux-x64-lsb
arch = bsd-aarch64-lsb, bsd-aarch64-msb, bsd-bfin-msb, bsd-mips-lsb, bsd-mips-msb, bsd-mips64-lsb, bsd-mips64-msb, bsd-powepc-msb, bsd-powepc64-lsb, bsd-riscv64-lsb, bsd-sparc-msb, bsd-sparc64-msb, bsd-x32-lsb, bsd-x64-lsb, linux-aarch64-lsb, linux-aarch64-msb, linux-alpha-lsb, linux-am33-lsb, linux-arc-lsb, linux-arc-msb, linux-arm-lsb, linux-arm-msb, linux-avr32-lsb, linux-bfin-lsb, linux-c6x-lsb, linux-c6x-msb, linux-cris-lsb, linux-frv-msb, linux-h8300-msb, linux-hppa-msb, linux-hppa64-msb, linux-ia64-lsb, linux-m32r-msb, linux-m68k-msb, linux-microblaze-msb, linux-mips-lsb, linux-mips-msb, linux-mips64-lsb, linux-mips64-msb, linux-mn10300-lsb, linux-nios-lsb, linux-nios-msb, linux-powerpc-lsb, linux-powerpc-msb, linux-powerpc64-lsb, linux-powerpc64-msb, linux-riscv64-lsb, linux-s390x-msb, linux-sh-lsb, linux-sh-msb, linux-sparc-msb, linux-sparc64-msb, linux-tilegx-lsb, linux-tilegx-msb, linux-tilegx64-lsb, linux-tilegx64-msb, linux-x64-lsb, linux-x86-lsb, linux-xtensa-msb, osx-x32-lsb, osx-x64-lsb
kernel_version = 3.2.0-4-amd64
kernel_build_string = #1 SMP Debian 3.2.68-1+deb7u1
hardware_platform = x86_64
operating_system = GNU/Linux
ssh_version = OpenSSH_7.9p1, OpenSSL 1.1.1a 20 Nov 2018
[ssh]
enabled = true
rsa_public_key = etc/ssh_host_rsa_key.pub
rsa_private_key = etc/ssh_host_rsa_key
dsa_public_key = etc/ssh_host_dsa_key.pub
dsa_private_key = etc/ssh_host_dsa_key
ecdsa_public_key = etc/ssh_host_ecdsa_key.pub
ecdsa_private_key = etc/ssh_host_ecdsa_key
ed25519_public_key = etc/ssh_host_ed25519_key.pub
ed25519_private_key = etc/ssh_host_ed25519_key
public_key_auth = ssh-rsa,ssh-dss,ecdsa-sha2-nistp256,ssh-ed25519
#version = SSH-2.0-OpenSSH_7.2p2 Ubuntu-4ubuntu2.2
version = SSH-2.0-OpenSSH_7.9p1
ciphers = aes128-ctr,aes192-ctr,aes256-ctr,aes256-cbc,aes192-cbc,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc
macs = hmac-sha2-512,hmac-sha2-384,hmac-sha2-256,hmac-sha1,hmac-md5
compression = zlib@openssh.com,zlib,none
listen_endpoints = tcp:22:interface=0.0.0.0
sftp_enabled = true
forwarding = true
forward_redirect = false
forward_tunnel = false
auth_none_enabled = false
auth_keyboard_interactive_enabled = true
[telnet]
enabled = true
listen_endpoints = tcp:23:interface=0.0.0.0
reported_port = 23
[output_jsonlog]
enabled = true
logfile = log/cowrie.json
epoch_timestamp = false
[output_textlog]
enabled = false
logfile = log/cowrie-textlog.log
format = text
[output_crashreporter]
enabled = false
debug = false
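With auth_class = AuthRandom, Cowrie accepts a login after a random number of attempts; per Cowrie's documentation the parameters 2, 5, 10 read as minimum tries, maximum tries and cache size. A hedged probe of that behavior, assuming sshpass is installed and 192.0.2.10 is a placeholder for the honeypot:

# Keep retrying with throwaway passwords until AuthRandom lets one through
for i in $(seq 1 6); do
  sshpass -p "wrong$i" ssh -o StrictHostKeyChecking=no \
      -o UserKnownHostsFile=/dev/null -o PreferredAuthentications=password \
      root@192.0.2.10 exit \
    && { echo "accepted on attempt $i"; break; }
done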

View File

@ -1,5 +1,3 @@
version: '2.3'
networks:
cowrie_local:

View File

@ -1,5 +1,3 @@
version: '2.3'
networks:
ddospot_local:

View File

@ -1,5 +1,3 @@
version: '2.3'
networks:
dicompot_local:

View File

@ -1,5 +1,3 @@
version: '2.3'
networks:
dionaea_local:

View File

@ -1,6 +1,4 @@
# T-Pot Image Builder (use only for building docker images)
version: '2.3'
services:
##################
@ -58,9 +56,9 @@ services:
image: "dtagdevsec/endlessh:24.04"
# Glutton service
glutton:
build: glutton/.
image: "dtagdevsec/glutton:24.04"
# glutton:
# build: glutton/.
# image: "dtagdevsec/glutton:24.04"
# Hellpot service
hellpot:
@ -163,6 +161,11 @@ services:
#### Tools
##################
# T-Pot Init Service
tpotinit:
build: tpotinit/.
image: "dtagdevsec/tpotinit:24.04"
#### ELK
## Elasticsearch service
elasticsearch:

View File

@ -1,5 +1,3 @@
version: '2.3'
networks:
elasticpot_local:

View File

@ -1,5 +1,3 @@
version: '2.3'
services:
# ELK services

View File

@ -1,7 +1,7 @@
FROM ubuntu:22.04
#
# VARS
ENV ES_VER=8.12.2
ENV ES_VER=8.14.2
#
# Include dist
COPY dist/ /root/dist/

View File

@ -1,5 +1,3 @@
version: '2.3'
services:
# ELK services

View File

@ -1,7 +1,7 @@
FROM ubuntu:22.04
#
# VARS
ENV KB_VER=8.12.2
ENV KB_VER=8.14.2
# Include dist
COPY dist/ /root/dist/
#

View File

@ -1,5 +1,3 @@
version: '2.3'
services:
## Kibana service

View File

@ -1,7 +1,7 @@
FROM ubuntu:22.04
#
# VARS
ENV LS_VER=8.12.2
ENV LS_VER=8.14.2
# Include dist
COPY dist/ /root/dist/
#

View File

@ -10,7 +10,10 @@ trap fuCLEANUP EXIT
if [ -f "/data/tpot/etc/compose/elk_environment" ];
then
echo "Found .env, now exporting ..."
set -o allexport && source "/data/tpot/etc/compose/elk_environment" && set +o allexport
set -o allexport
source "/data/tpot/etc/compose/elk_environment"
LS_SSL_VERIFICATION="${LS_SSL_VERIFICATION:-full}"
set +o allexport
fi
# Check internet availability
@ -50,6 +53,7 @@ if [ "$TPOT_TYPE" == "SENSOR" ];
echo
echo "T-Pot type: $TPOT_TYPE"
echo "Hive IP: $TPOT_HIVE_IP"
echo "SSL verification: $LS_SSL_VERIFICATION"
echo
# Ensure correct file permissions for private keyfile or SSH will ask for password
cp /usr/share/logstash/config/pipelines_sensor.yml /usr/share/logstash/config/pipelines.yml

View File

@ -723,7 +723,9 @@ output {
codec => "json"
format => "json_batch"
url => "https://${TPOT_HIVE_IP}:64294"
cacert => "/data/hive.crt"
# cacert => "/data/hive.crt"
ssl_verification_mode => "${LS_SSL_VERIFICATION}"
ssl_certificate_authorities => "/data/hive.crt"
headers => {
"Authorization" => "Basic ${TPOT_HIVE_USER}"
}
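The hardcoded cacert option is replaced by ssl_verification_mode wired to the LS_SSL_VERIFICATION variable, so a SENSOR can relax verification without editing the pipeline. A hedged, lab-only sketch, assuming the standard ~/tpotce layout with its .env file:

# Lab-only: accept the HIVE certificate without verification on this SENSOR
sed -i 's/^LS_SSL_VERIFICATION=.*/LS_SSL_VERIFICATION=none/' ~/tpotce/.env
cd ~/tpotce && sudo docker compose down && sudo docker compose up -d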

View File

@ -1,5 +1,3 @@
version: '2.3'
services:
## Logstash service

View File

@ -1,5 +1,3 @@
version: '2.3'
#networks:
# map_local:

View File

@ -1,5 +1,3 @@
version: '2.3'
networks:
endlessh_local:

View File

@ -1,5 +1,3 @@
version: '2.3'
networks:
ewsposter_local:

View File

@ -1,5 +1,3 @@
version: '2.3'
services:
# Fatt service

View File

@ -5,21 +5,19 @@ COPY dist/ /root/dist/
#
# Setup apk
RUN apk -U --no-cache add \
build-base \
git \
g++ \
make \
git \
g++ \
iptables-dev \
libpcap-dev && \
#
# Setup go, glutton
export GO111MODULE=on && \
mkdir -p /opt/ && \
cd /opt/ && \
git clone https://github.com/mushorg/glutton && \
cd /opt/glutton/ && \
git checkout c1204c65ce32bfdc0e08fb2a9abe89b3b8eeed62 && \
cp /root/dist/system.go . && \
go mod download && \
make build && \
mv /root/dist/config.yaml /opt/glutton/config/
#
@ -30,11 +28,8 @@ COPY --from=builder /opt/glutton/config /opt/glutton/config
COPY --from=builder /opt/glutton/rules /opt/glutton/rules
#
RUN apk -U --no-cache add \
iptables \
iptables-dev \
libnetfilter_queue-dev \
libcap \
libpcap-dev && \
setcap cap_net_admin,cap_net_raw=+ep /opt/glutton/bin/server && \
setcap cap_net_admin,cap_net_raw=+ep /sbin/xtables-nft-multi && \
mkdir -p /var/log/glutton \
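The build now pins glutton to a fixed upstream commit instead of tracking the default branch. A local rebuild can mirror the builder stanza shown (commented out) earlier; tag and context path are taken from that compose file:

# Run from the directory that contains the glutton/ build context
docker build -t dtagdevsec/glutton:24.04 glutton/.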

View File

@ -1,5 +1,3 @@
version: '2.3'
services:
# glutton service

View File

@ -1,5 +1,3 @@
version: '2.3'
networks:
hellpot_local:

View File

@ -1,5 +1,3 @@
version: '2.3'
networks:
heralding_local:

View File

@ -1,5 +1,3 @@
version: '2.3'
networks:
honeypots_local:

View File

@ -1,5 +1,3 @@
version: '2.3'
services:
# Honeytrap service

View File

@ -1,5 +1,3 @@
version: '2.3'
networks:
ipphoney_local:

View File

@ -1,5 +1,3 @@
version: '2.3'
networks:
log4pot_local:

View File

@ -1,5 +1,3 @@
version: '2.3'
networks:
mailoney_local:

View File

@ -1,5 +1,3 @@
version: '2.3'
networks:
medpot_local:

View File

@ -1,5 +1,3 @@
version: '2.3'
services:
# nginx service

View File

@ -1,5 +1,3 @@
version: '2.3'
services:
# P0f service

View File

@ -1,5 +1,3 @@
version: '2.3'
networks:
redishoneypot_local:

View File

@ -1,5 +1,3 @@
version: '2.3'
networks:
sentrypeer_local:

View File

@ -1,5 +1,3 @@
version: '2.3'
networks:
spiderfoot_local:

View File

@ -1,5 +1,3 @@
version: '2.3'
services:
# Suricata service

View File

@ -1,5 +1,3 @@
version: '2.3'
networks:
tanner_local:

View File

@ -30,9 +30,10 @@ RUN apk --no-cache -U add \
apk --no-cache -U add --repository=https://dl-cdn.alpinelinux.org/alpine/edge/community \
yq && \
#
# Setup user
# Setup user, logrotate permissions
addgroup -g 2000 tpot && \
adduser -S -s /bin/ash -u 2000 -D -g 2000 tpot && \
chmod 0600 /opt/tpot/etc/logrotate/logrotate.conf && \
#
# Clean up
apk del --purge git && \

View File

@ -1,23 +1,31 @@
#!/bin/bash
myHOST="$1"
myPACKAGES="nmap"
myDOCKERCOMPOSEYML="/opt/tpot/etc/tpot.yml"
function fuGOTROOT {
myWHOAMI=$(whoami)
if [ "$myWHOAMI" != "root" ]
then
echo "Need to run as root ..."
exit
fi
}
myPACKAGES="dcmtk ncat nmap yq"
myDOCKERCOMPOSEYML="$HOME/tpotce/docker-compose.yml"
myTIMEOUT=180
myMEDPOTPACKET="
MSH|^~\&|ADT1|MCM|LABADT|MCM|198808181126|SECURITY|ADT^A01|MSG00001-|P|2.6
EVN|A01|198808181123
PID|||PATID1234^5^M11^^AN||JONES^WILLIAM^A^III||19610615|M||2106-3|677 DELAWARE AVENUE^^EVERETT^MA^02149|GL|(919)379-1212|(919)271-3434~(919)277-3114||S||PATID12345001^2^M10^^ACSN|123456789|9-87654^NC
NK1|1|JONES^BARBARA^K|SPO|||||20011105
NK1|1|JONES^MICHAEL^A|FTH
PV1|1|I|2000^2012^01||||004777^LEBAUER^SIDNEY^J.|||SUR||-||ADM|A0
AL1|1||^PENICILLIN||CODE16~CODE17~CODE18
AL1|2||^CAT DANDER||CODE257
DG1|001|I9|1550|MAL NEO LIVER, PRIMARY|19880501103005|F
PR1|2234|M11|111^CODE151|COMMON PROCEDURES|198809081123
ROL|45^RECORDER^ROLE MASTER LIST|AD|RO|KATE^SMITH^ELLEN|199505011201
GT1|1122|1519|BILL^GATES^A
IN1|001|A357|1234|BCMD|||||132987
IN2|ID1551001|SSN12345678
ROL|45^RECORDER^ROLE MASTER LIST|AD|RO|KATE^ELLEN|199505011201"
function fuCHECKDEPS {
myINST=""
for myDEPS in $myPACKAGES;
do
myOK=$(dpkg -s $myDEPS | grep ok | awk '{ print $3 }');
myOK=$(sudo dpkg -s $myDEPS | grep ok | awk '{ print $3 }');
if [ "$myOK" != "ok" ]
then
myINST=$(echo $myINST $myDEPS)
@ -25,10 +33,10 @@ do
done
if [ "$myINST" != "" ]
then
apt-get update -y
sudo apt-get update -y
for myDEPS in $myINST;
do
apt-get install $myDEPS -y
sudo apt-get install $myDEPS -y
done
fi
}
@ -50,19 +58,35 @@ myDOCKERCOMPOSEUDPPORTS=$(cat $myDOCKERCOMPOSEYML | grep "udp" | tr -d '"\|#\-'
myDOCKERCOMPOSEPORTS=$(cat $myDOCKERCOMPOSEYML | yq -r '.services[].ports' | grep ':' | sed -e s/127.0.0.1// | tr -d '", ' | sed -e s/^:// | cut -f1 -d ':' | grep -v "6429\|6430" | sort -gu)
myUDPPORTS=$(for i in $myDOCKERCOMPOSEUDPPORTS; do echo -n "U:$i,"; done)
myPORTS=$(for i in $myDOCKERCOMPOSEPORTS; do echo -n "T:$i,"; done)
#echo ${myUDPPORTS}
#echo ${myPORTS}
}
# Main
fuGETPORTS
fuGOTROOT
fuCHECKDEPS
fuCHECKFORARGS
fuCHECKDEPS
fuGETPORTS
echo
echo "Starting scan on all UDP / TCP ports defined in /opt/tpot/etc/tpot.yml ..."
nmap -sV -sC -v -p $myPORTS $1 &
nmap -sU -sV -sC -v -p $myUDPPORTS $1 &
echo "Probing some services ..."
echo "$myMEDPOTPACKET" | nc "$myHOST" 2575 &
curl -XGET "http://$myHOST:9200/logstash-*/_search" &
curl -XPOST -H "Content-Type: application/json" -d '{"name":"test","email":"test@test.com"}' "http://$myHOST:9200/test" &
echo "I20100" | timeout --foreground 3 nc "$myHOST" 10001 &
findscu -P -k PatientName="*" $myHOST 11112 &
getscu -P -k PatientName="*" $myHOST 11112 &
telnet $myHOST 3299 &
echo
echo "Starting scan on all UDP / TCP ports defined in ${myDOCKERCOMPOSEYML} ..."
timeout --foreground ${myTIMEOUT} nmap -sV -sC -v -p $myPORTS $1 &
timeout --foreground ${myTIMEOUT} nmap -sU -sV -sC -v -p $myUDPPORTS $1 &
echo
wait
echo "Restarting some containers ..."
docker stop adbhoney conpot_guardian_ast conpot_kamstrup_382 dionaea
docker start adbhoney conpot_guardian_ast conpot_kamstrup_382 dionaea
echo
echo "Resetting terminal ..."
reset
echo
echo "Done."
echo
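Usage remains a single target argument, but the script now resolves ports from ~/tpotce/docker-compose.yml, elevates via sudo internally and times out each nmap run after 180 seconds. A hypothetical invocation (the script's filename is not shown in this diff):

./hptest.sh 192.0.2.10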

View File

@ -7,14 +7,17 @@ exec > >(tee /data/tpotinit.log) 2>&1
cleanup() {
echo "# SIGTERM received, cleaning up ..."
echo
echo "## ... removing firewall rules."
/opt/tpot/bin/rules.sh ${COMPOSE} unset
echo
if [ "${TPOT_BLACKHOLE}" == "ENABLED" ] && [ -f "/etc/blackhole/mass_scanner.txt" ];
if [ "${TPOT_OSTYPE}" = "linux" ];
then
echo "## ... removing Blackhole routes."
/opt/tpot/bin/blackhole.sh del
echo "## ... removing firewall rules."
/opt/tpot/bin/rules.sh ${COMPOSE} unset
echo
if [ "${TPOT_BLACKHOLE}" == "ENABLED" ] && [ -f "/etc/blackhole/mass_scanner.txt" ];
then
echo "## ... removing Blackhole routes."
/opt/tpot/bin/blackhole.sh del
echo
fi
fi
kill -TERM "$PID"
rm -f /tmp/success
@ -29,7 +32,7 @@ check_var() {
local var_value=$(eval echo \$$var_name)
# Check if variable is set and not empty
if [[ -z "$var_value" ]];
if [[ -z "$var_value" ]];
then
echo "# Error: $var_name is not set or empty. Please check T-Pot .env config."
echo
@ -44,7 +47,7 @@ check_safety() {
local var_value=$(eval echo \$$var_name)
# General safety check for most variables
if [[ $var_value =~ [^a-zA-Z0-9_/.:-] ]];
then
echo "# Error: Unsafe characters detected in $var_name. Please check T-Pot .env config."
echo
@ -78,7 +81,7 @@ validate_format() {
case "$var_name" in
TPOT_BLACKHOLE|TPOT_PERSISTENCE|TPOT_ATTACKMAP_TEXT)
if ! [[ $var_value =~ ^(ENABLED|DISABLED|on|off|true|false)$ ]];
then
echo "# Error: Invalid value for $var_name. Expected ENABLED/DISABLED, on/off, true/false. Please check T-Pot .env config."
echo
@ -94,7 +97,7 @@ validate_ip_or_domain() {
# Regular expression for validating IPv4 addresses
local ipv4Regex='^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])$'
# Regular expression for validating domain names (including subdomains)
local domainRegex='^(([a-zA-Z0-9]|[a-zA-Z0-9][a-zA-Z0-9\-]*[a-zA-Z0-9])\.)*([A-Za-z0-9]|[A-Za-z0-9][A-Za-z0-9\-]*[A-Za-z0-9])$'
@ -119,7 +122,7 @@ create_web_users() {
: > /data/nginx/conf/lswebpasswd
for i in ${WEB_USER};
do
if [[ -n $i ]];
then
# Need to control newlines as they kept coming up for some reason
echo -n "$i" | base64 -d -w0 | tr -d '\n' >> /data/nginx/conf/nginxpasswd
@ -127,9 +130,9 @@ create_web_users() {
fi
done
for i in ${LS_WEB_USER};
do
if [[ -n $i ]];
then
# Need to control newlines as they kept coming up for some reason
echo -n "$i" | base64 -d -w0 | tr -d '\n' >> /data/nginx/conf/lswebpasswd
@ -153,26 +156,43 @@ update_permissions
# Check for compatible OSType
echo
echo "# Checking if OSType is compatible."
echo "# Checking if OSType is set correctly."
echo
myOSTYPE=$(uname -a | grep -Eo "linuxkit")
if [ "${myOSTYPE}" == "linuxkit" ] && [ "${TPOT_OSTYPE}" == "linux" ];
myOSTYPE=$(uname -a | grep -Eo "microsoft|linuxkit")
if [ "${myOSTYPE}" == "microsoft" ] && [ "${TPOT_OSTYPE}" != "win" ];
then
echo "# Docker Desktop for macOS or Windows detected."
echo "# 1. You need to adjust the OSType the T-Pot .env config."
echo "# 2. You need to use the macos or win docker compose file."
echo "# Docker Desktop for Windows detected, but TPOT_OSTYPE is not set to win."
echo "# 1. You need to adjust the OSType in the T-Pot .env config."
echo "# 2. You need to copy compose/mac_win.yml to ./docker-compose.yml."
echo
echo "# Aborting."
echo
sleep 1
exit 1
fi
if [ "${myOSTYPE}" == "linuxkit" ] && [ "${TPOT_OSTYPE}" != "mac" ];
then
echo "# Docker Desktop for macOS detected, but TPOT_OSTYPE is not set to mac."
echo "# 1. You need to adjust the OSType in the T-Pot .env config."
echo "# 2. You need to copy compose/mac_win.yml to ./docker-compose.yml."
echo
echo "# Aborting."
echo
sleep 1
exit 1
fi
if [ "${myOSTYPE}" == "" ] && [ "${TPOT_OSTYPE}" != "linux" ];
then
echo "# Docker Engine detected, but TPOT_OSTYPE is not set to linux."
echo "# 1. You need to adjust the OSType in the T-Pot .env config."
echo "# 2. You need to copy compose/standard.yml to ./docker-compose.yml."
echo
echo "# Aborting."
echo
sleep 1
exit 1
else
if ! [ -S /var/run/docker.sock ];
then
echo "# Cannot access /var/run/docker.sock, check docker-compose.yml for proper volume definition."
echo
echo "# Aborting."
exit 1
fi
fi
# Validate environment variables
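For reference, the detection above keys off kernel markers in uname -a; illustrative outputs (the kernel strings are examples, not taken from this repo):

uname -a
# Docker Desktop for Windows (WSL2): ...5.15.133.1-microsoft-standard-WSL2... -> expects TPOT_OSTYPE=win
# Docker Desktop for macOS:          ...-linuxkit...                          -> expects TPOT_OSTYPE=mac
# Docker Engine on Linux:            neither marker                           -> expects TPOT_OSTYPE=linux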
@ -255,12 +275,8 @@ if [ -f "/data/uuid" ];
fi
# Check if TPOT_BLACKHOLE is enabled
if [ "${myOSTYPE}" == "linuxkit" ];
if [ "${TPOT_OSTYPE}" == "linux" ];
then
echo
echo "# Docker Desktop for macOS or Windows detected, Blackhole feature is not supported."
echo
else
if [ "${TPOT_BLACKHOLE}" == "ENABLED" ] && [ ! -f "/etc/blackhole/mass_scanner.txt" ];
then
echo
@ -278,6 +294,10 @@ if [ "${myOSTYPE}" == "linuxkit" ];
echo
echo "# Blackhole is not active."
fi
else
echo
echo "# T-Pot is configured for macOS / Windows. Blackhole is not supported."
echo
fi
# Get IP
@ -291,7 +311,7 @@ update_permissions
# Update interface settings (p0f and Suricata) and setup iptables to support NFQ based honeypots (glutton, honeytrap)
### This is currently not supported on Docker for Desktop, only on Docker Engine for Linux
if [ "${myOSTYPE}" != "linuxkit" ] && [ "${TPOT_OSTYPE}" == "linux" ];
if [ "${TPOT_OSTYPE}" == "linux" ];
then
echo
echo "# Get IF, disable offloading, enable promiscious mode for p0f and suricata ..."
@ -303,10 +323,14 @@ if [ "${myOSTYPE}" != "linuxkit" ] && [ "${TPOT_OSTYPE}" == "linux" ];
echo "# Adding firewall rules ..."
echo
/opt/tpot/bin/rules.sh ${COMPOSE} set
else
echo
echo "# T-Pot is configured for macOS / Windows. Setting up firewall rules on the host is not supported."
echo
fi
# Display open ports
if [ "${myOSTYPE}" != "linuxkit" ];
if [ "${TPOT_OSTYPE}" == "linux" ];
then
echo
echo "# This is a list of open ports on the host (netstat -tulpen)."
@ -317,9 +341,9 @@ if [ "${myOSTYPE}" != "linuxkit" ];
echo
else
echo
echo "# Docker Desktop for macOS or Windows detected, cannot show open ports on the host."
echo
fi
echo "# T-Pot is configured for macOS / Windows. Showing open ports from the host is not supported."
echo
fi
# Done
@ -331,24 +355,20 @@ touch /tmp/success
# We want to see true source for UDP packets in container (https://github.com/moby/libnetwork/issues/1994)
# Start autoheal if running on a supported os
if [ "${myOSTYPE}" != "linuxkit" ];
if [ "${TPOT_OSTYPE}" == "linux" ];
then
sleep 60
echo "# Dropping UDP connection tables to improve visibility of true source IPs."
/usr/sbin/conntrack -D -p udp
# Starting container health monitoring
echo
figlet "Starting ..."
figlet "Autoheal"
echo "# Now monitoring healthcheck enabled containers to automatically restart them when unhealthy."
echo
# exec /opt/tpot/autoheal.sh autoheal
/opt/tpot/autoheal.sh autoheal &
PID=$!
wait $PID
echo "# T-Pot Init and Autoheal were stopped. Exiting."
else
echo
echo "# Docker Desktop for macOS or Windows detected, Conntrack feature is not supported."
echo
fi
# Starting container health monitoring
echo
figlet "Starting ..."
figlet "Autoheal"
echo "# Now monitoring healthcheck enabled containers to automatically restart them when unhealthy."
echo
/opt/tpot/autoheal.sh autoheal &
PID=$!
wait $PID
echo "# T-Pot Init and Autoheal were stopped. Exiting."

View File

@ -1,5 +1,3 @@
version: '3.9'
services:
# T-Pot Init Service

View File

@ -1,5 +1,3 @@
version: '2.3'
networks:
wordpot_local:

dps.ps1 (new file, +20 lines)
View File

@ -0,0 +1,20 @@
# Format, colorize docker ps output
# Define a fixed width for the STATUS column
$statusWidth = 30
# Capture the Docker output into a variable
$dockerOutput = docker ps -f status=running -f status=exited --format "{{.Names}}`t{{.Status}}`t{{.Ports}}"
# Print header with colors
Write-Host ("NAME".PadRight(20) + "STATUS".PadRight($statusWidth) + "PORTS") -ForegroundColor Cyan -NoNewline
Write-Host ""
# Split the output into lines and loop over them
$dockerOutput -split '\r?\n' | ForEach-Object {
if ($_ -ne "") {
$fields = $_ -split "`t"
Write-Host ($fields[0].PadRight(20)) -NoNewline -ForegroundColor Yellow
Write-Host ($fields[1].PadRight($statusWidth)) -NoNewline -ForegroundColor Green
Write-Host ($fields[2]) -ForegroundColor Blue
}
}

View File

@ -51,6 +51,7 @@ TPOT_PERSISTENCE=on
# Create credentials with 'htpasswd ~/tpotce/data/nginx/conf/lswebpasswd <username>'
# 4. On SENSOR: Provide username / password from (3) for TPOT_HIVE_USER as base64 encoded string:
# "echo -n 'username:password' | base64 -w0"
# MOBILE: This will set the correct type for T-Pot Mobile (https://github.com/telekom-security/tpotmobile)
TPOT_TYPE=HIVE
# T-Pot Hive User (only relevant for SENSOR deployment)
@ -59,6 +60,18 @@ TPOT_TYPE=HIVE
# i.e. TPOT_HIVE_USER='dXNlcm5hbWU6cGFzc3dvcmQ='
TPOT_HIVE_USER=
# Logstash Sensor SSL verification (only relevant on SENSOR hosts)
# full: This is the default. Logstash verifies the complete certificate chain, including the FQDN and SANs.
#       By default T-Pot only generates a self-signed certificate that contains a SAN for the HIVE IP. In
#       scenarios where the HIVE is reached via the Internet, possibly through a different NAT address, a new
#       certificate that includes all relevant IPs and FQDNs as SANs must be generated before deployment so
#       Logstash can establish the connection to the HIVE for transmitting logs.
#       Details: https://github.com/telekom-security/tpotce?tab=readme-ov-file#distributed-deployment
# none: Disables Logstash's SSL verification and should only be used in testing environments where IPs change
#       often. It is not recommended for production, where trust between HIVE and SENSOR is established only
#       through the self-signed certificate.
LS_SSL_VERIFICATION=full
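# A hedged sketch for generating such a certificate with OpenSSL before deployment; the IP, FQDN and file
# names below are placeholders, and -addext requires OpenSSL 1.1.1 or later:
#   openssl req -x509 -newkey rsa:4096 -sha256 -days 365 -nodes \
#     -keyout hive.key -out hive.crt -subj "/CN=hive" \
#     -addext "subjectAltName = IP:203.0.113.10, DNS:hive.example.com"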
# T-Pot Hive IP (only relevant for SENSOR deployment)
# <empty>: This is empty by default.
# <IP, FQDN>: This can be either a IP (i.e. 192.168.1.1) or a FQDN (i.e. foo.bar.local)

genuserwin.ps1 (new file, +12 lines)
View File

@ -0,0 +1,12 @@
# Run genuser.sh within tpotinit, prepare path and file
# Define the volume paths
$homePath = $Env:USERPROFILE + "\tpotce"
$nginxpasswdPath = $homePath + "\data\nginx\conf\nginxpasswd"
# Ensure nginxpasswd file exists
if (-Not (Test-Path $nginxpasswdPath)) {
New-Item -ItemType File -Force -Path $nginxpasswdPath
}
# Run the Docker container without specifying UID / GID
docker run -v "${homePath}:/data" --entrypoint bash -it dtagdevsec/tpotinit:24.04 "/opt/tpot/bin/genuser.sh"

View File

@ -171,24 +171,38 @@ echo "### (H)ive - T-Pot Standard / HIVE installation."
echo "### Includes also everything you need for a distributed setup with sensors."
echo "### (S)ensor - T-Pot Sensor installation."
echo "### Optimized for a distributed installation, without WebUI, Elasticsearch and Kibana."
echo "### (M)obile - T-Pot Mobile installation."
echo "### Includes everything to run T-Pot Mobile (available separately)."
while true; do
read -p "### Install Type? (h/s) " myTPOT_TYPE
read -p "### Install Type? (h/s/m) " myTPOT_TYPE
case "${myTPOT_TYPE}" in
h|H)
echo
echo "### Installing T-Pot Standard / HIVE installation."
echo "### Installing T-Pot Standard / HIVE."
myTPOT_TYPE="HIVE"
cp ${HOME}/tpotce/compose/standard.yml ${HOME}/tpotce/docker-compose.yml
myINFO=""
break ;;
s|S)
echo
echo "### Installing T-Pot Sensor installation."
echo "### Installing T-Pot Sensor."
myTPOT_TYPE="SENSOR"
cp ${HOME}/tpotce/compose/sensor.yml ${HOME}/tpotce/docker-compose.yml
myINFO="### Make sure to deploy SSH keys to this SENSOR and disable SSH password authentication.
### On HIVE run the tpotce/deploy.sh script to join this SENSOR to the HIVE."
break ;;
m|M)
echo
echo "### Installing T-Pot Mobile."
myTPOT_TYPE="MOBILE"
cp ${HOME}/tpotce/compose/mobile.yml ${HOME}/tpotce/docker-compose.yml
myINFO=""
break ;;
esac
done
if [ "${myTPOT_TYPE}" == "HIVE" ];
# Install T-Pot Type HIVE and ask for WebUI username and password
# If T-Pot Type is HIVE ask for WebUI username and password
then
# Preparing web user for T-Pot
echo
@ -259,19 +273,6 @@ if [ "${myTPOT_TYPE}" == "HIVE" ];
echo
sed -i "s|^WEB_USER=.*|WEB_USER=${myWEB_USER_ENC_B64}|" ${myTPOT_CONF_FILE}
# Install T-Pot Type HIVE and use standard.yml for installation
cp ${HOME}/tpotce/compose/standard.yml ${HOME}/tpotce/docker-compose.yml
myINFO=""
fi
if [ "${myTPOT_TYPE}" == "SENSOR" ];
# Install T-Pot Type SENSOR and use sensor.yml for installation
then
cp ${HOME}/tpotce/compose/sensor.yml ${HOME}/tpotce/docker-compose.yml
myINFO="### Make sure to deploy SSH keys to this sensor and disable SSH password authentication.
### On hive run the tpotce/deploy.sh script to join this sensor to the hive."
fi
# Pull docker images

View File

@ -129,7 +129,6 @@
- cracklib-runtime
- cron
- curl
- exa
- git
- gnupg
- grc
@ -146,6 +145,32 @@
- "Raspbian"
- "Ubuntu"
- name: Install exa (Debian, Raspbian, Ubuntu)
package:
name:
- exa
state: latest
update_cache: yes
register: exa_install_result
ignore_errors: yes
when: ansible_distribution in ["Debian", "Raspbian", "Ubuntu"]
tags:
- "Debian"
- "Raspbian"
- "Ubuntu"
- name: Install eza (if exa failed)
package:
name:
- eza
state: latest
update_cache: yes
when: exa_install_result is failed
tags:
- "Debian"
- "Raspbian"
- "Ubuntu"
- name: Install grc from remote repo (AlmaLinux, Rocky)
ansible.builtin.dnf:
name: 'https://github.com/kriipke/grc/releases/download/1.13.8/grc-1.13.8-1.el7.noarch.rpm'
@ -175,6 +200,7 @@
- wget
state: latest
update_cache: yes
register: exa_install_result
when: ansible_distribution in ["AlmaLinux", "Rocky"]
tags:
- "AlmaLinux"
@ -210,6 +236,7 @@
- wget
state: latest
update_cache: yes
register: exa_install_result
when: ansible_distribution in ["Fedora"]
tags:
- "Fedora"
@ -222,7 +249,7 @@
- postfix
- yast2-auth-client
- yast2-auth-user
state: absent
update_cache: yes
when: ansible_distribution in ["openSUSE Tumbleweed"]
tags:
@ -245,6 +272,7 @@
- wget
state: latest
update_cache: yes
register: exa_install_result
when: ansible_distribution in ["openSUSE Tumbleweed"]
tags:
- "openSUSE Tumbleweed"
@ -259,14 +287,16 @@
become: true
tasks:
- name: Remove distribution based Docker packages (AlmaLinux, Debian, Fedora, Raspbian, Rocky, Ubuntu)
- name: Remove distribution based Docker packages and podman-docker (AlmaLinux, Debian, Fedora, Raspbian, Rocky, Ubuntu)
package:
name:
- docker
- docker-engine
- docker.io
- containerd
- runc
- podman-docker
- podman
state: absent
update_cache: yes
when: ansible_distribution in ["AlmaLinux", "Debian", "Fedora", "Raspbian", "Rocky", "Ubuntu"]
@ -559,6 +589,15 @@
- "Fedora"
- "Ubuntu"
- name: Copy resolved.conf to /etc/systemd (Fedora)
copy:
src: /usr/lib/systemd/resolved.conf
dest: /etc/systemd/resolved.conf
when: ansible_distribution in ["Fedora"]
ignore_errors: true
tags:
- "Fedora"
- name: Modify DNSStubListener in resolved.conf (Fedora, Ubuntu)
lineinfile:
path: /etc/systemd/resolved.conf
@ -674,7 +713,7 @@
when: ansible_distribution in ["AlmaLinux", "Debian", "Fedora", "openSUSE Tumbleweed", "Raspbian", "Rocky", "Ubuntu"]
failed_when: ansible_user_id == "root"
- name: Add aliases (All)
- name: Add aliases with exa (All)
blockinfile:
path: ~/.bashrc
block: |
@ -688,7 +727,35 @@
marker: "# {mark} ANSIBLE MANAGED BLOCK"
insertafter: EOF
state: present
when: ansible_distribution in ["AlmaLinux", "Debian", "Fedora", "openSUSE Tumbleweed", "Raspbian", "Rocky", "Ubuntu"]
when: exa_install_result is succeeded and ansible_distribution in ["AlmaLinux", "Debian", "Fedora", "openSUSE Tumbleweed", "Raspbian", "Rocky", "Ubuntu"]
tags:
- "AlmaLinux"
- "Debian"
- "Fedora"
- "openSUSE Tumbleweed"
- "Raspbian"
- "Rocky"
- "Ubuntu"
- name: Add aliases with eza (Debian, Raspbian, Ubuntu)
blockinfile:
path: ~/.bashrc
block: |
alias dps='grc --colour=on docker ps -f status=running -f status=exited --format "table {{'{{'}}.Names{{'}}'}}\\t{{'{{'}}.Status{{'}}'}}\\t{{'{{'}}.Ports{{'}}'}}" | sort'
alias dpsw='watch -c bash -ic dps'
alias mi='micro'
alias sudo='sudo '
alias ls='eza'
alias ll='eza -hlg'
alias la='eza -hlag'
marker: "# {mark} ANSIBLE MANAGED BLOCK"
insertafter: EOF
state: present
when: exa_install_result is failed and ansible_distribution in ["Debian", "Raspbian", "Ubuntu"]
tags:
- "Debian"
- "Raspbian"
- "Ubuntu"
- name: Clone / Update T-Pot repository (All)
git: