93 Commits

Author SHA1 Message Date
932ad6b27c Fix repack for AMD64 .iso (#1481) 2024-03-04 15:23:27 +01:00
02098f9b76 Update Citation 2023-08-28 10:29:24 +02:00
649163e06f Update Citation 2023-08-28 10:16:18 +02:00
9d66bcb7d3 Add Bibtex, closes #1398 2023-08-28 10:02:59 +02:00
dc4384d6ab Merge pull request #1369 from swiftsolves-msft/pr-azure
Azure Deployment via ARM template
2023-08-22 13:36:09 +02:00
1af7cdcaa1 Azure Deployment via ARM template
The following is an Azure deployment of T-Pot using an ARM template: it creates a Debian 11 VM, disks, NIC, NSG and PIP, and leverages cloud-init customData to pass a B64-encoded string of a cloud-init yaml file; an example is in the readme docs.
2023-07-02 00:56:38 -04:00
81fab84040 add bookworm check to updates
while bookworm is not supported, the update script will no longer break if it is found
2023-06-27 09:53:28 +00:00
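A minimal sketch of such a check (hypothetical, not the actual update.sh code):
```bash
# Hypothetical sketch: detect bookworm via /etc/os-release and warn instead of aborting.
myCODENAME=$(. /etc/os-release && echo "$VERSION_CODENAME")
if [ "$myCODENAME" = "bookworm" ]; then
  echo "Warning: Debian bookworm detected. Bookworm is not supported, continuing anyway."
fi
```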
a0c5a8c0e7 fix port definitions
- docker-compose no longer accepts ports definitions when network_mode: host is set
- previous versions simply ignored the ports definitions; the updated docker-compose, however, breaks with an error
2023-06-27 09:23:52 +00:00
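An illustrative compose snippet (image tag and ports are placeholders, not taken from the repo): with the updated docker-compose, a service running with `network_mode: host` must not declare a `ports:` section.
```yaml
services:
  honeytrap:
    image: dtagdevsec/honeytrap:2204   # placeholder image tag
    network_mode: host                 # binds directly to the host's interfaces
    # ports:                           # previously ignored, now a hard error
    #  - "2222:2222"
```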
c1808161e4 fixes #1346 2023-06-07 05:54:17 +00:00
bd12e1a4c0 Merge pull request #1338 from kauedg/dps-patch-1
call $0 instead of hardcoded script name
2023-06-01 13:28:04 +02:00
edda041093 call $0 instead of hardcoded script name
Allows the script to work when called from another directory or if the script name changes.
2023-05-31 14:47:15 -03:00
e3b1fd298a Prepare fix for #1336. 2023-05-31 17:21:15 +02:00
1a2d34c013 bump elk to 8.6.2, rebuild images 2023-05-30 14:35:45 +00:00
00d6d1b4c7 Add T-Pot Technical Preview 2023-05-30 12:22:10 +02:00
87ef005c17 tweaking for tpotlight 2023-05-27 14:49:20 +02:00
9941818a6e Create SECURITY.md 2023-05-12 18:37:04 +02:00
f438be7e27 Allow for automatic geoip db downloads 2023-05-07 18:10:23 +02:00
efd5f4c54c fixes #1320 2023-05-03 22:01:36 +00:00
35188ef28e add option to retrieve ENVs from file 2023-05-02 13:11:05 +02:00
e7963dbdaa update ddospot folders 2023-04-30 22:51:03 +02:00
918a408357 Merge branch 'master' of https://github.com/telekom-security/tpotce 2023-04-27 18:44:30 +02:00
5fd0d158e6 Add Nginx Cockpit Awareness 2023-04-27 18:42:38 +02:00
5265e3945a bump ewsposter to 1.25.0 2023-04-26 08:47:28 +00:00
a08a475f57 tweaking 2023-04-25 17:47:44 +00:00
ff7c368c7f update landing page
make relative links (T-Pot home) dynamic to display them only if services are available
adjust dimensions for link container
correct github link
place attack-map link in the home container
2023-04-25 15:03:26 +02:00
88ab453061 Merge pull request #1283 from tadashi-oya/fix-empty-myINSTALLPACKAGES
fix empty myINSTALLPACKAGES
2023-03-23 16:21:18 +01:00
4bae09e408 fix empty myINSTALLPACKAGES 2023-03-20 05:55:21 +00:00
668a4d91a7 bump ewsposter to 1.24.0 2023-02-24 14:34:49 +00:00
1a20de2f7f Merge pull request #1266 from kawaiipantsu/kawaiipantsu-request-uri-size
Fixing uri max size
2023-02-23 16:54:53 +01:00
350179fc89 Added detailed comment
Added a detailed comment on what the change is needed for and why it's there
2023-02-23 16:51:42 +01:00
f3a6461eaa Fixing uri max size
Changing URI max size from 1024 to 1280 bytes
2023-02-21 01:13:52 +01:00
fc17d850b5 bump t-pot-attack-map to v2.0.1 2023-02-14 17:41:02 +00:00
44c38d809b Merge pull request #1259 from kawaiipantsu/patch-1
Update updateip.sh
2023-02-10 14:52:40 +01:00
5eb9368064 Update updateip.sh
Make sure to target the root partition; Debian will often come with /boot/efi or similar. This little hack uses a regular expression to match the line starting with / followed by a blank, so only the root partition should match.
2023-02-09 13:31:08 +01:00
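A small illustration of the regex change (the sample lsblk output is hypothetical):
```bash
# lsblk -o MOUNTPOINT,UUID typically lists several mounted partitions, e.g.:
#   /boot/efi  ABCD-1234
#   /          0f3e2a9c-...
# grep "/" matches both lines; grep -e "^/ " matches only the root partition.
myUUID=$(lsblk -o MOUNTPOINT,UUID | grep -e "^/ " | awk '{ print $2 }')
echo "$myUUID"
```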
72a3b51bd4 bump t-pot-attack-map to 1.2.0 2023-02-04 00:29:26 +00:00
f786769527 bump t-pot-attack-map to 1.1.2 2023-02-03 20:37:27 +00:00
23934bc693 bump t-pot-attack-map to 1.1.1, add nginx cache header 2023-02-03 18:16:32 +00:00
7e60b46732 fixes #1254, fixes #1253
- #1254: new ELK images will be provided shortly
- #1253: documentation and updater will now reflect that an update from 20.06.x is no longer possible
2023-01-26 10:49:24 +00:00
c178d878ab bump ELK to 8.5.3 2023-01-23 16:33:09 +00:00
390390fd43 bump to alpine 3.17, tweaking, fixes for py 3.10 2023-01-23 15:42:59 +00:00
8119aca317 tweaking 2023-01-23 12:04:40 +00:00
2fd0f62484 bump to alpine 3.17 2023-01-20 17:48:46 +00:00
90eab744b1 bump cyberchef to 9.55.0, fix glitches 2023-01-20 17:42:17 +00:00
8547699061 bump cowrie to 2.5.0 2023-01-19 17:15:08 +00:00
2b5127fbdb update readme 2023-01-19 13:18:28 +00:00
4382413672 bump t-pot-attack-map to 1.1.0, buildx to 0.10.0 2023-01-19 11:42:25 +00:00
516bec1deb fixes #1241 2023-01-10 17:56:18 +00:00
ede61b81d9 update map to fix CVE 2023-01-06 19:53:05 +00:00
59cca98e7f update geoip map to latest release
update nginx to include brotli and gzip compression
improve load performance
2023-01-06 18:58:03 +00:00
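A hedged example of the kind of compression settings this refers to (not the repo's actual nginx config; the brotli directives assume the ngx_brotli module is compiled in):
```nginx
gzip on;
gzip_comp_level 6;
gzip_types text/plain text/css application/json application/javascript image/svg+xml;

brotli on;
brotli_comp_level 6;
brotli_types text/plain text/css application/json application/javascript image/svg+xml;
```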
2641d1e743 bump elastic stack to 8.4.3 2022-11-02 16:37:01 +00:00
3b2e8a4c70 tweaking 2022-11-02 07:54:42 +00:00
16fe4b1d28 bump sentrypeer to 2.0 2022-11-01 15:26:24 +00:00
b34644f1a8 add link for py3 2022-11-01 11:59:52 +00:00
7fa447943d bump medpot to latest fork master 2022-11-01 10:52:47 +00:00
c9b4bd27e6 bump buildx to 0.8.2 2022-11-01 10:46:24 +00:00
38edadb3da bump log4pot to latest master 2022-11-01 09:39:11 +00:00
5da8431e3a bump cyberchef, esvue to latest master 2022-10-31 17:01:04 +01:00
ccb94b1529 revert buildx to 0.8.1 2022-10-31 15:41:59 +00:00
e2cbd981ca bump hellpot to latest master 2022-10-14 14:55:28 +00:00
48f3c842b5 bump fatt to latest master 2022-10-13 14:06:09 +00:00
f9179e3e21 bump cowrie to 2.4.0 2022-10-13 08:44:55 +00:00
5c30a57280 Merge pull request #1173 from zambroid/patch-1
Corrected small typos
2022-10-12 13:54:49 +02:00
8410f84fe9 bump adbhoney to latest master 2022-10-12 11:52:17 +00:00
d9aa6bd525 Update README.md 2022-10-12 13:45:01 +02:00
ee547994dc Merge pull request #1187 from ctulio/url-fix
Update some url repos
2022-10-12 13:22:03 +02:00
0316bc7a2c bump buildx to 0.9.1 2022-10-12 09:50:10 +02:00
c9f6320446 Update some url repos 2022-10-11 22:39:55 -04:00
b8e3df97dc bump ewsposter to latest master, update packages 2022-10-11 15:13:47 +00:00
bac0d3c30c Update README.md 2022-09-02 17:30:04 +02:00
db1e65b968 Made small adjustments to the readme file
The readme file contained small typos; I tried to identify them and my proposed new version of the file is here.
2022-08-25 09:23:29 +02:00
1122d3728e Bump ELK Stack to 8.3.3 2022-08-17 16:34:53 +00:00
b696ec7b39 Merge pull request #1135 from cha147/patch-1 2022-07-14 00:06:23 +02:00
a22a7d98c4 fix typos in readme 2022-07-13 14:35:50 -07:00
a3bda5de8f bump Elastic stack to 8.2.3 2022-06-15 14:29:23 +00:00
5f0c337f09 bump elk, log4pot, honeytrap, dionaea to ubuntu 22.04 2022-06-14 10:47:11 +00:00
fc93db2bc4 fix cleanup medpot 2022-06-14 08:04:35 +00:00
421b3d3020 bump medpot to latest master 2022-06-14 07:51:14 +00:00
1eaec0036e prep for new medpot, honeypots and some tweaking 2022-06-13 11:59:40 +00:00
afb16dcc96 Fix typo, fixes #1111 2022-06-09 17:38:39 +02:00
15f7a17935 Comment ENV opt-in for SentryPeer 2022-06-08 11:09:29 +00:00
dcf15ca489 Opt-In for SentryPeer DHT mode, fixes #1110 2022-06-08 09:10:29 +00:00
a28dfec046 bump qHoneypots to latest master, adjust config for commands input 2022-06-07 11:19:34 +00:00
8993f59001 Bump Glutton to Alpine 3.16, decrease image size 2022-06-03 14:21:55 +00:00
09c682cd7b Bump to Alpine 3.16 for most of the images.
Glutton, Heralding, Mailoney and Snare/Tanner need work.
2022-06-02 15:47:17 +00:00
409e4bde3e Bump Cyberchef to 9.38.0, Elasticvue to 0.40.1
Bump Nginx, Spiderfoot to Alpine 3.16
2022-06-02 13:36:54 +00:00
aaef85c49d Bump SentryPeer to 1.4.1 2022-06-02 08:31:18 +00:00
73b54f5504 Bump Elastic Stack to 8.2.2 2022-06-01 10:26:49 +00:00
55da6a4841 Bump Elastic Stack to 8.2.0, update objects 2022-05-25 14:53:29 +00:00
153c11babd fix glances not showing docker containers 2022-05-24 14:58:45 +00:00
f13d08287f prep for elk 8.1.2 2022-04-15 13:11:25 +00:00
fc123d10f9 bump spiderfoot to 4.0 2022-04-14 17:15:43 +00:00
ded2124932 bump cyberchef, esvue to latest release 2022-04-14 16:52:48 +00:00
909ca358f0 Fix headings, links 2022-04-14 10:36:07 +02:00
124 changed files with 4823 additions and 585 deletions

CITATION.cff Normal file

@ -0,0 +1,43 @@
# This CITATION.cff file was generated with cffinit.
# Visit https://bit.ly/cffinit to generate yours today!

cff-version: 1.2.0
title: T-Pot
message: >-
  If you use this software, please cite it using the
  metadata from this file.
type: software
authors:
  - name: Deutsche Telekom Security GmbH
    address: Bonner Talweg 100
    city: Bonn
    country: DE
    post-code: '53113'
    website: 'https://github.com/telekom-security'
  - given-names: Marco
    family-names: Ochse
    affiliation: Deutsche Telekom Security GmbH
identifiers:
  - type: url
    value: >-
      https://github.com/telekom-security/tpotce/releases/tag/22.04.0
    description: T-Pot Release 22.04.0
repository-code: 'https://github.com/telekom-security/tpotce'
abstract: >-
  T-Pot is the all in one, optionally distributed, multiarch
  (amd64, arm64) honeypot plattform, supporting 20+
  honeypots and countless visualization options using the
  Elastic Stack, animated live attack maps and lots of
  security tools to further improve the deception
  experience.
keywords:
  - honeypot
  - deception
  - t-pot
  - telekom security
  - docker
  - elk
license: GPL-3.0
commit: af09aa96b184f873ec83da4e7380762a0a5ce416
version: 22.04.0
date-released: '2022-04-12'
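One way to derive a BibTeX entry from this file (cffconvert is an assumption, not necessarily the tool used for the "Add Bibtex" commit above):
```bash
pip install cffconvert
cffconvert --infile CITATION.cff --format bibtex
```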

README.md

@ -1,4 +1,4 @@
# T-Pot - The All In One Multi Honeypot Plattform
# T-Pot - The All In One Multi Honeypot Platform
![T-Pot](doc/tpotsocial.png)
@ -7,7 +7,7 @@ T-Pot is the all in one, optionally distributed, multiarch (amd64, arm64) honeyp
# TL;DR
1. Meet the [system requirements](#system-requirements). The T-Pot installation needs at least 8-16 GB RAM and 128 GB free disk space as well as a working (outgoing non-filtered) internet connection.
2. Download the T-Pot ISO from [GitHub](https://github.com/telekom-security/tpotce/releases) acording to your architecture (amd64, arm64) or [create it yourself](#create-your-own-iso-image).
2. Download the T-Pot ISO from [GitHub](https://github.com/telekom-security/tpotce/releases) according to your architecture (amd64, arm64) or [create it yourself](#create-your-own-iso-image).
3. Install the system in a [VM](#running-in-a-vm) or on [physical hardware](#running-on-hardware) with [internet access](#system-placement).
4. Enjoy your favorite beverage - [watch](https://sicherheitstacho.eu) and [analyze](#kibana-dashboard).
<br><br>
@ -25,29 +25,29 @@ T-Pot is the all in one, optionally distributed, multiarch (amd64, arm64) honeyp
- [Required Ports](#required-ports)
- [System Placement](#system-placement)
- [Installation](#installation)
- [ISO Based](#isoinstall)
- [Download ISO Image](#downloadiso)
- [Build your own ISO Image](#makeiso)
- [ISO Based](#iso-based)
- [Download ISO Image](#download-iso-image)
- [Create your own ISO Image](#create-your-own-iso-image)
- [Post Install](#post-install)
- [Download Debian Netinstall Image](#download-debian-netinstall-image)
- [User](#post-install-user-method)
- [Auto](#postauto)
- [T-Pot Installer](#tpotinstaller)
- [Installation Types](#installtypes)
- [Standalone](#standalonetype)
- [Distributed](#distributedtype)
- [Cloud Deployments](#cloud)
- [Ansible](#ansible-deployment)
- [Terraform](#terraform-configuration)
- [Post Install User Method](#post-install-user-method)
- [Post Install Auto Method](#post-install-auto-method)
- [T-Pot Installer](#t-pot-installer)
- [Installation Types](#installation-types)
- [Standalone](#standalone)
- [Distributed](#distributed)
- [Cloud Deployments](#cloud-deployments)
- [Ansible Deployment](#ansible-deployment)
- [Terraform Configuration](#terraform-configuration)
- [First Start](#first-start)
- [Standalone Start](#standalone-first-start)
- [Distributed Deployment](#distributed-deployment)
- [Community Data Submission](#community-data-submission)
- [Opt-In HPFEEDS Data Submission](#opt-in-hpfeeds-data-submission)
- [Remote Access and Tools](#remote-access-and-tools)
- [SSH and Cockpit](#ssh)
- [SSH and Cockpit](#ssh-and-cockpit)
- [T-Pot Landing Page](#t-pot-landing-page)
- [Kibana Dashboard](#kibana-dashboardibana)
- [Kibana Dashboard](#kibana-dashboard)
- [Attack Map](#attack-map)
- [Cyberchef](#cyberchef)
- [Elasticvue](#elasticvue)
@ -103,7 +103,7 @@ T-Pot offers docker images for the following honeypots ...
* [ciscoasa](https://github.com/Cymmetria/ciscoasa_honeypot),
* [citrixhoneypot](https://github.com/MalwareTech/CitrixHoneypot),
* [conpot](http://conpot.org/),
* [cowrie](http://www.micheloosterhof.com/cowrie/),
* [cowrie](https://github.com/cowrie/cowrie),
* [ddospot](https://github.com/aelth/ddospot),
* [dicompot](https://github.com/nsmfoo/dicompot),
* [dionaea](https://github.com/DinoTools/dionaea),
@ -127,11 +127,11 @@ T-Pot offers docker images for the following honeypots ...
* [Cockpit](https://cockpit-project.org/running) for a lightweight and secure WebManagement and WebTerminal.
* [Cyberchef](https://gchq.github.io/CyberChef/) a web app for encryption, encoding, compression and data analysis.
* [Elastic Stack](https://www.elastic.co/videos) to beautifully visualize all the events captured by T-Pot.
* [Elasticvue](https://github.com/cars10/elasticvue/) a web front end for browsing and interacting with an Elastic Search cluster.
* [Elasticvue](https://github.com/cars10/elasticvue/) a web front end for browsing and interacting with an Elasticsearch cluster.
* [Fatt](https://github.com/0x4D31/fatt) a pyshark based script for extracting network metadata and fingerprints from pcap files and live network traffic.
* [Geoip-Attack-Map](https://github.com/eddie4/geoip-attack-map) a beautifully animated attack map [optimized](https://github.com/t3chn0m4g3/geoip-attack-map) for T-Pot.
* [T-Pot-Attack-Map](https://github.com/t3chn0m4g3/t-pot-attack-map) a beautifully animated attack map for T-Pot.
* [P0f](https://lcamtuf.coredump.cx/p0f3/) is a tool for purely passive traffic fingerprinting.
* [Spiderfoot](https://github.com/smicallef/spiderfoot) a open source intelligence automation tool.
* [Spiderfoot](https://github.com/smicallef/spiderfoot) an open source intelligence automation tool.
* [Suricata](http://suricata-ids.org/) a Network Security Monitoring engine.
... to give you the best out-of-the-box experience possible and an easy-to-use multi-honeypot appliance.
@ -150,17 +150,17 @@ The individual Dockerfiles and configurations are located in the [docker folder]
T-Pot offers a number of services which are basically divided into five groups:
1. System services provided by the OS
* SSH for secure remote access.
* Cockpit for web based remote acccess, management and web terminal.
* Cockpit for web based remote access, management and web terminal.
2. Elastic Stack
* Elasticsearch for storing events.
* Logstash for ingesting, receiving and sending events to Elasticsearch.
* Kibana for displaying events on beautyfully rendered dashboards.
* Kibana for displaying events on beautifully rendered dashboards.
3. Tools
* NGINX for providing secure remote access (reverse proxy) to Kibana, CyberChef, Elasticvue, GeoIP AttackMap and Spiderfoot.
* NGINX provides secure remote access (reverse proxy) to Kibana, CyberChef, Elasticvue, GeoIP AttackMap and Spiderfoot.
* CyberChef a web app for encryption, encoding, compression and data analysis.
* Elasticvue a web front end for browsing and interacting with an Elastic Search cluster.
* Geoip Attack Map a beautifully animated attack map for T-Pot.
* Spiderfoot a open source intelligence automation tool.
* Elasticvue a web front end for browsing and interacting with an Elasticsearch cluster.
* T-Pot Attack Map a beautifully animated attack map for T-Pot.
* Spiderfoot an open source intelligence automation tool.
4. Honeypots
* A selection of the 22 available honeypots based on the selected edition and / or setup.
5. Network Security Monitoring (NSM)
@ -207,7 +207,7 @@ All T-Pot installations will require ...
<br><br>
## Running in a VM
T-Pot is reported to run with with the following hypervisors, however not each and every combination is tested.
T-Pot is reported to run with the following hypervisors, however not each and every combination is tested.
* [UTM (Intel & Apple Silicon)](https://mac.getutm.app/)
* [VirtualBox](https://www.virtualbox.org/)
* [VMWare vSphere / ESXi](https://kb.vmware.com/s/article/2107518)
@ -237,7 +237,7 @@ Some users report working installations on other clouds and hosters, i.e. Azure
<br><br>
## Required Ports
Besides the ports generally needed by the OS, i.e. obtaining a DHCP lease, DNS, etc. T-Pot will require the following ports for incomding / outgoing connections. Review the [T-Pot Architecure](#technical-architecture) for a visual representation. Also some ports will show up as duplicates, which is fine since used in different editions.
Besides the ports generally needed by the OS, i.e. obtaining a DHCP lease, DNS, etc. T-Pot will require the following ports for incoming / outgoing connections. Review the [T-Pot Architecture](#technical-architecture) for a visual representation. Also some ports will show up as duplicates, which is fine since used in different editions.
| Port | Protocol | Direction | Description |
| :--- | :--- | :--- | :--- |
| 80, 443 | tcp | outgoing | T-Pot Management: Install, Updates, Logs (i.e. Debian, GitHub, DockerHub, PyPi, Sicherheitstacho, etc. |
@ -269,20 +269,20 @@ Besides the ports generally needed by the OS, i.e. obtaining a DHCP lease, DNS,
| 80 | tcp | incoming | Honeypot: Snare (Tanner) |
Ports and availability of SaaS services may vary based on your geographical location. Also during first install outgoing ICMP / TRACEROUTE is required additionally to find the closest and fastest mirror to you.
Ports and availability of SaaS services may vary based on your geographical location. Also during the first install outgoing ICMP / TRACEROUTE is required additionally to find the closest and fastest mirror to you.
For some honeypots to reach full functionality (i.e. Cowrie or Log4Pot) outgoing connections are necessary as well, in order for them to download the attackers malware. Please see the individual honeypot's documentation to learn more by following the [links](#technical-concept) to their repositories.
<br><br>
# System Placement
It is recommended to get yourself familiar how T-Pot and the honeypots work before you start exposing towards the interet. For a quickstart run a T-Pot installation in a virtual machine.
It is recommended to get yourself familiar with how T-Pot and the honeypots work before you start exposing towards the internet. For a quickstart run a T-Pot installation in a virtual machine.
<br><br>
Once you are familiar how things work you should choose a network you suspect intruders in or from (i.e. the internet). Otherwise T-Pot will most likely not capture any attacks (unless you want to proof a point)! For starters it is recommended to put T-Pot in an unfiltered zone, where all TCP and UDP traffic is forwarded to T-Pot's network interface. To avoid probing for T-Pot's management ports you can put T-Pot behind a firewall and forward all TCP / UDP traffic in the port range of 1-64000 to T-Pot while allowing access to ports > 64000 only from trusted IPs and / or only expose the [ports](#required-ports) relevant to your use-case. If you wish to catch malware traffic on unknown ports you should not limit the ports you forward since glutton and honeytrap dynamically bind any TCP port that is not covered by other honeypot daemons and thus give you a better representation what risks your setup is exposed to.
Once you are familiar with how things work you should choose a network you suspect intruders in or from (i.e. the internet). Otherwise T-Pot will most likely not capture any attacks (unless you want to prove a point)! For starters it is recommended to put T-Pot in an unfiltered zone, where all TCP and UDP traffic is forwarded to T-Pot's network interface. To avoid probing for T-Pot's management ports you can put T-Pot behind a firewall and forward all TCP / UDP traffic in the port range of 1-64000 to T-Pot while allowing access to ports > 64000 only from trusted IPs and / or only expose the [ports](#required-ports) relevant to your use-case. If you wish to catch malware traffic on unknown ports you should not limit the ports you forward since glutton and honeytrap dynamically bind any TCP port that is not covered by other honeypot daemons and thus give you a better representation of the risks your setup is exposed to.
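A hypothetical firewall sketch of this recommendation (iptables syntax; interface names, IPs and the exact policy are assumptions, adapt them to your environment):
```bash
# Forward everything up to port 64000 to the T-Pot host (10.0.0.10 is a placeholder) ...
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 1:64000 -j DNAT --to-destination 10.0.0.10
iptables -t nat -A PREROUTING -i eth0 -p udp --dport 1:64000 -j DNAT --to-destination 10.0.0.10
# ... and allow the management ports (> 64000) only from a trusted IP.
iptables -A FORWARD -p tcp -s 198.51.100.7 -d 10.0.0.10 --dport 64001:65535 -j ACCEPT
iptables -A FORWARD -p tcp -d 10.0.0.10 --dport 64001:65535 -j DROP
```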
<br><br>
# Installation
The T-Pot installation is offered in different variations. While the overall installation of T-Pot is straight forward it heavily depends on a working, non-proxied (unless you made modifications) up and running internet connection (also see [required outgoing ports](#required-ports)). If these conditions are not met the installation **will fail!** either during the execution of the Debian Installer, after the first reboot before the T-Pot Installer is starting up or while the T-Pot installer is trying to download all the necessary dependencies.
The T-Pot installation is offered in different variations. While the overall installation of T-Pot is straightforward it heavily depends on a working, non-proxied (unless you made modifications) up and running internet connection (also see [required outgoing ports](#required-ports)). If these conditions are not met the installation **will fail!** either during the execution of the Debian Installer, after the first reboot before the T-Pot Installer is starting up or while the T-Pot installer is trying to download all the necessary dependencies.
<br><br>
## ISO Based
@ -337,7 +337,7 @@ cd tpotce/iso/installer/
The installation will now start, you can now move on to the [T-Pot Installer](#t-pot-installer) section.
<br><br>
### **Post-Install Auto**
### **Post-Install Auto Method**
You can also let the installer run automatically if you provide your own `tpot.conf`. An example is available in `tpotce/iso/installer/tpot.conf.dist`. This should make things easier in case you want to automate the installation i.e. with **Ansible**.
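For orientation, a hypothetical `tpot.conf` along the lines of `tpot.conf.dist` (keys as used in the Azure cloud-init example later in this change set; values are placeholders):
```
# tpot configuration file (placeholder values)
myCONF_TPOT_FLAVOR='STANDARD'
myCONF_WEB_USER='webuser'
myCONF_WEB_PW='<choose-a-strong-password>'
```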
Just follow these steps while adjusting `tpot.conf` to your needs:
@ -359,7 +359,7 @@ In the past T-Pot was only available as a [standalone](#standalone) solution wit
<br><br>
### **Standalone**
With T-Pot Standalone all services, tools, honeypots, etc. will be installed on to a single host. Make sure to meet the [system requirements](#system-requirements). You can choose from various pre-defined T-Pot editions (or flavors) depending on your personal use-case (you can always adjust `/opt/tpot/etc/tpot.yml` to your needs).
With T-Pot Standalone all services, tools, honeypots, etc. will be installed on to a single host. Make sure to meet the [system requirements](#system-requirements). You can choose from various predefined T-Pot editions (or flavors) depending on your personal use-case (you can always adjust `/opt/tpot/etc/tpot.yml` to your needs).
Once the installation is finished you can proceed to [First Start](#first-start).
<br><br>
@ -544,7 +544,7 @@ T-Pot is designed to be low maintenance. Basically there is nothing you have to
<br><br>
## Updates
While security update are installed automatically by the OS and docker images are pulled once per day (`/etc/crontab`) to check for updated images, T-Pot offers the option to be updated to the latest master and / or upgrade a previous version. Updating and upgrading always introduces the risk of loosing your data, so it is heavily encouraged you backup your machine before proceeding.
While security updates are installed automatically by the OS and docker images are pulled once per day (`/etc/crontab`) to check for updated images, T-Pot offers the option to be updated to the latest master and / or upgrade a previous version. Updating and upgrading always introduces the risk of losing your data, so it is heavily encouraged to backup your machine before proceeding.
<br><br>
Should an update fail, opening an issue or a discussion will help to improve things in the future, but the solution will always be to perform a ***fresh install*** as we simply ***cannot*** provide any support for lost data!
<br>
@ -561,13 +561,8 @@ The update script will ...
### **Update from 20.06.x**
If you are running T-Pot 20.06.x you simply follow these commands ***after you backed up any relevant data***:
```
sudo su -
cd /opt/tpot/
wget -O update.sh https://raw.githubusercontent.com/telekom-security/tpotce/master/update.sh
./update.sh
```
Due to massive changes in Elasticsearch automated updates from 20.06.x are no longer available. If you have not upgraded already a fresh install with 22.04.x is required.
### **Updates for 22.04.x**
If you are already running T-Pot 22.04.x you simply run the update script ***after you backed up any relevant data***:
@ -698,7 +693,7 @@ Some T-Pot updates will require you to update the Kibana objects. Either to supp
1. Go to Kibana
2. Click on "Stack Management"
3. Click on "Saved Objects"
4. Click on "Export <no.> objetcs"
4. Click on "Export <no.> objects"
5. Click on "Export all"
This will export a NDJSON file with all your objects. Always run a full export to make sure all references are included.
@ -728,7 +723,7 @@ reboot
<br><br>
## Adjust tpot.yml
Maybe the avaialble T-Pot editions do not apply to your use-case or you need a different set of honeypots. You can adjust `/opt/tpot/etc/tpot.yml` to your own preference. If you need examples how this works, just follow the configuration of the existing editions (docker-compose files) in `/opt/tpot/etc/compose` and follow the [Docker Compose Specification](https://docs.docker.com/compose/compose-file/).
Maybe the available T-Pot editions do not apply to your use-case or you need a different set of honeypots. You can adjust `/opt/tpot/etc/tpot.yml` to your own preference. If you need examples of how this works, just follow the configuration of the existing editions (docker-compose files) in `/opt/tpot/etc/compose` and follow the [Docker Compose Specification](https://docs.docker.com/compose/compose-file/).
```
sudo su -
systemctl stop tpot
@ -744,13 +739,13 @@ You can enable two-factor-authentication for Cockpit by running `2fa.sh`.
<br><br>
# Troubleshooting
Generally T-Pot is offered ***as is*** without any committment regarding support. Issues and discussions can opened, but be prepared to include basic necessary info, so the community is able to help.
Generally T-Pot is offered ***as is*** without any commitment regarding support. Issues and discussions can be opened, but be prepared to include basic necessary info, so the community is able to help.
<br><br>
## Logging
* Check if your containers are running correctly: `dps.sh`
* Check if your system ressources are not exhausted: `htop`, `glances`
* Check if your system resources are not exhausted: `htop`, `glances`
* Check if there is a port conflict:
```
@ -808,13 +803,13 @@ If there are any banned IPs you can unban these with `fail2ban-client unban --al
## RAM and Storage
The Elastic Stack is hungry for RAM, specifically `logstash` and `elasticsearch`. If the Elastic Stack is unavailable, does not receive any logs or simply keeps crashing it is most likely a RAM or Storage issue.
While T-Pot keeps trying to restart the services / containers run `docker logs -f <container_name>` (either `logstash` or `elasticsearch`) and check if there any warnings or failures involving RAM.
While T-Pot keeps trying to restart the services / containers run `docker logs -f <container_name>` (either `logstash` or `elasticsearch`) and check if there are any warnings or failures involving RAM.
Storage failures can be identified easier via `htop` or `glances`.
<br><br>
# Contact
T-Pot is provided ***as is*** open source ***without*** any committment regarding support ([see the disclaimer](#disclaimer)).
T-Pot is provided ***as is*** open source ***without*** any commitment regarding support ([see the disclaimer](#disclaimer)).
If you are a company or institution and wish a personal contact aside from [issues](#issues) and [discussions](#discussions) please get in contact with our [sales team](https://www.t-systems.com/de/en/security).
@ -824,7 +819,7 @@ If you are a security researcher and want to responsibly report an issue please
## Issues
Please report issues (errors) on our [GitHub Issues](https://github.com/telekom-security/tpotce/issues), but [troubleshoot](#troubleshooting) first. Issues not providing information to address the error will be closed or converted into [discussions](#discussions).
Feel free to use the search function, it is possible a similar issues has been adressed already, with the solution just a search away.
Feel free to use the search function, it is possible a similar issue has been addressed already, with the solution just a search away.
<br><br>
## Discussions
@ -840,7 +835,7 @@ The software that T-Pot is built on uses the following licenses.
<br>Apache 2 License: [cyberchef](https://github.com/gchq/CyberChef/blob/master/LICENSE), [dicompot](https://github.com/nsmfoo/dicompot/blob/master/LICENSE), [elasticsearch](https://github.com/elasticsearch/elasticsearch/blob/master/LICENSE.txt), [logstash](https://github.com/elasticsearch/logstash/blob/master/LICENSE), [kibana](https://github.com/elasticsearch/kibana/blob/master/LICENSE.md), [docker](https://github.com/docker/docker/blob/master/LICENSE)
<br>MIT license: [ciscoasa](https://github.com/Cymmetria/ciscoasa_honeypot/blob/master/LICENSE), [ddospot](https://github.com/aelth/ddospot/blob/master/LICENSE), [elasticvue](https://github.com/cars10/elasticvue/blob/master/LICENSE), [glutton](https://github.com/mushorg/glutton/blob/master/LICENSE), [hellpot](https://github.com/yunginnanet/HellPot/blob/master/LICENSE), [maltrail](https://github.com/stamparm/maltrail/blob/master/LICENSE)
<br> Unlicense: [endlessh](https://github.com/skeeto/endlessh/blob/master/UNLICENSE)
<br> Other: [citrixhoneypot](https://github.com/MalwareTech/CitrixHoneypot#licencing-agreement-malwaretech-public-licence), [cowrie](https://github.com/micheloosterhof/cowrie/blob/master/LICENSE.md), [mailoney](https://github.com/awhitehatter/mailoney), [Debian licensing](https://www.debian.org/legal/licenses/), [Elastic License](https://www.elastic.co/licensing/elastic-license)
<br> Other: [citrixhoneypot](https://github.com/MalwareTech/CitrixHoneypot#licencing-agreement-malwaretech-public-licence), [cowrie](https://github.com/cowrie/cowrie/blob/master/LICENSE.rst), [mailoney](https://github.com/awhitehatter/mailoney), [Debian licensing](https://www.debian.org/legal/licenses/), [Elastic License](https://www.elastic.co/licensing/elastic-license)
<br> AGPL-3.0: [honeypots](https://github.com/qeeqbox/honeypots/blob/main/LICENSE)
<br><br>
@ -856,7 +851,7 @@ Without open source and the fruitful development community (we are proud to be a
* [citrixhoneypot](https://github.com/MalwareTech/CitrixHoneypot/graphs/contributors)
* [cockpit](https://github.com/cockpit-project/cockpit/graphs/contributors)
* [conpot](https://github.com/mushorg/conpot/graphs/contributors)
* [cowrie](https://github.com/micheloosterhof/cowrie/graphs/contributors)
* [cowrie](https://github.com/cowrie/cowrie/graphs/contributors)
* [ddospot](https://github.com/aelth/ddospot/graphs/contributors)
* [debian](http://www.debian.org/)
* [dicompot](https://github.com/nsmfoo/dicompot/graphs/contributors)

SECURITY.md Normal file

@ -0,0 +1,20 @@
# Security Policy
## Supported Versions
| Version | Supported |
| ------- | ------------------ |
| 22.04.x | :white_check_mark: |
## Reporting a Vulnerability
We take security of T-Pot very seriously. If one of T-Pot's components is affected, it is most likely that an upstream component we rely on is involved, such as a honeypot, docker image, tool or package. Together we will find the best possible way to remedy the situation.
Before you submit a possible vulnerability, please ensure you have done the following:
1. You have checked the documentation, issues and discussions if the detected behavior is typical and does not revolve around other issues. I.e. Cowrie will be detected with outgoing connection requests, or T-Pot opening all possible TCP ports, which Honeytrap enabled install flavors will do as a feature.
2. You have identified the vulnerable component and isolated your finding (honeypot, docker image, tool, package, etc.).
3. You have a detailed description including log files, possibly debug files, with all steps necessary for us to reproduce / trigger the behaviour or vulnerability. At best you already have a possible solution, hotfix, fix or patch to remedy the situation and want to submit a PR.
4. You have checked if the possible vulnerability is known upstream. If a fix / patch is already available, please provide the necessary info.
We will get back to you as fast as possible. In case you think this is an emergency for the whole T-Pot community feel free to speed things up by **responsibly** informing our [CERT](https://www.telekom.com/en/corporate-responsibility/data-protection-data-security/security/details/introducing-deutsche-telekom-cert-358316).

View File

@ -117,7 +117,7 @@ fuCOWRIE () {
# Let's create a function to clean up and prepare ddospot data
fuDDOSPOT () {
if [ "$myPERSISTENCE" != "on" ]; then rm -rf /data/ddospot/log; fi
mkdir -p /data/ddospot/log
mkdir -p /data/ddospot/bl /data/ddospot/db /data/ddospot/log
chmod 770 /data/ddospot -R
chown tpot:tpot /data/ddospot -R
}

dps.sh

@ -11,7 +11,7 @@ fi
myPARAM="$1"
if [[ $myPARAM =~ ^([1-9]|[1-9][0-9]|[1-9][0-9][0-9])$ ]];
then
watch --color -n $myPARAM "dps.sh"
watch --color -n $myPARAM "$0"
exit
fi
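With this change dps.sh re-invokes itself via `$0`, so the refresh mode keeps working regardless of the script's name or the caller's working directory; usage (interval in seconds, 1-999; the /opt/tpot/bin location is the usual one and assumed here):
```bash
/opt/tpot/bin/dps.sh        # print the container status once
/opt/tpot/bin/dps.sh 5      # re-invoke itself via watch and refresh every 5 seconds
```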

rules.sh

@ -1,7 +1,7 @@
#!/bin/bash
### Vars, Ports for Standard services
myHOSTPORTS="7634 64294 64295"
myHOSTPORTS="7634 64294 64295 64297 64304"
myDOCKERCOMPOSEYML="$1"
myRULESFUNCTION="$2"

View File

@ -20,7 +20,7 @@ fi
# Main
mkdir -p /root/.docker/cli-plugins/
cd /root/.docker/cli-plugins/
wget https://github.com/docker/buildx/releases/download/v0.8.1/buildx-v0.8.1.linux-amd64 -O docker-buildx
wget https://github.com/docker/buildx/releases/download/v0.10.0/buildx-v0.10.0.linux-amd64 -O docker-buildx
chmod +x docker-buildx
docker buildx ls

updateip.sh

@ -3,7 +3,7 @@
# If the external IP cannot be detected, the internal IP will be inherited.
source /etc/environment
myCHECKIFSENSOR=$(head -n 1 /opt/tpot/etc/tpot.yml | grep "Sensor" | wc -l)
myUUID=$(lsblk -o MOUNTPOINT,UUID | grep "/" | awk '{ print $2 }')
myUUID=$(lsblk -o MOUNTPOINT,UUID | grep -e "^/ " | awk '{ print $2 }')
myLOCALIP=$(hostname -I | awk '{ print $1 }')
myEXTIP=$(/opt/tpot/bin/myip.sh)
if [ "$myEXTIP" = "" ];

cloud/azure/README.md Normal file

@ -0,0 +1,71 @@
# Azure T-Pot
The following deployment template will deploy a Standard T-Pot server on an Azure VM on a Network\Subnet of your choosing. [Click here to learn more on T-Pot](https://github.com/telekom-security/tpotce)
[![Deploy To Azure](https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/1-CONTRIBUTION-GUIDE/images/deploytoazure.svg?sanitize=true)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2Ftelekom-security%2Ftpotce%2Fmaster%2Fcloud%2Fazure%2Fazuredeploy.json)
[![Deploy To Azure US Gov](https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/1-CONTRIBUTION-GUIDE/images/deploytoazuregov.svg?sanitize=true)](https://portal.azure.us/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2Ftelekom-security%2Ftpotce%2Fmaster%2Fcloud%2Fazure%2Fazuredeploy.json)
[![Visualize](https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/1-CONTRIBUTION-GUIDE/images/visualizebutton.svg?sanitize=true)](http://armviz.io/#/?load=https%3A%2F%2Fraw.githubusercontent.com%2Ftelekom-security%2Ftpotce%2Fmaster%2Fcloud%2Fazure%2Fazuredeploy.json)
## Install Instructions
1. Update the VM Name to reflect your naming convention and taxonomy.
2. Place your Azure Virtual Network Resource Id *(placement recommendation depends on your goal: you may want to place it in a Hub Virtual Network to detect activity from on-premise or other virtual network spokes, or place it in a DMZ or isolated in a unique virtual network exposed directly to the internet.)*
3. My Connection IP: the public IP address you are coming from, used to access the dashboards and manage the VM.
4. Cloud Init B64 Encoded: write your cloud-init yaml contents and Base64-encode them into this string parameter.
Cloud-Init Yaml Example before B64 Encoding:
packages:
  - git

runcmd:
  - curl -sS --retry 5 https://github.com
  - git clone https://github.com/telekom-security/tpotce /root/tpot
  - /root/tpot/iso/installer/install.sh --type=auto --conf=/root/tpot.conf
  - rm /root/tpot.conf
  - /sbin/shutdown -r now

password: w3b$ecrets2!
chpasswd:
  expire: false

write_files:
  - content: |
      # tpot configuration file
      myCONF_TPOT_FLAVOR='STANDARD'
      myCONF_WEB_USER='webuser'
      myCONF_WEB_PW='w3b$ecrets2!'
    owner: root:root
    path: /root/tpot.conf
    permissions: '0600'
Be sure to copy and update values like:
- password:
- myCONF_TPOT_FLAVOR= (available flavors: [STANDARD, HIVE, HIVE_SENSOR, INDUSTRIAL, LOG4J, MEDICAL, MINI, SENSOR]; **STANDARD is recommended** if you are exploring for the first time)
- myCONF_WEB_USER=
- myCONF_WEB_PW=
Once you have updated the cloud-init yaml file locally, Base64-encode it and paste the resulting string into the securestring parameter.
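A minimal sketch for producing the securestring value (the local file name cloud-init.yaml is an assumption):
```bash
base64 -w0 cloud-init.yaml
# where base64 lacks -w (e.g. macOS): base64 cloud-init.yaml | tr -d '\n'
```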
B64 Example:
I2Nsb3VkLWNvbmZpZwp0aW1lem9uZTogVVMvRWFzdGVybgoKcGFja2FnZXM6CiAgLSBnaXQKCnJ1bmNtZDoKICAtIGN1cmwgLXNTIC0tcmV0cnkgNSBodHRwczovL2dpdGh1Yi5jb20KICAtIGdpdCBjbG9uZSBodHRwczovL2dpdGh1Yi5jb20vdGVsZWtvbS1zZWN1cml0eS90cG90Y2UgL3Jvb3QvdHBvdAogIC0gL3Jvb3QvdHBvdC9pc28vaW5zdGFsbGVyL2luc3RhbGwuc2ggLS10eXBlPWF1dG8gLS1jb25mPS9yb290L3Rwb3QuY29uZgogIC0gcm0gL3Jvb3QvdHBvdC5jb25mCiAgLSAvc2Jpbi9zaHV0ZG93biAtciBub3cKCnBhc3N3b3JkOiB3M2IkZWNyZXRzMiEKY2hwYXNzd2Q6CiAgZXhwaXJlOiBmYWxzZQoKd3JpdGVfZmlsZXM6CiAgLSBjb250ZW50OiB8CiAgICAgICMgdHBvdCBjb25maWd1cmF0aW9uIGZpbGUKICAgICAgbXlDT05GX1RQT1RfRkxBVk9SPSdTVEFOREFSRCcKICAgICAgbXlDT05GX1dFQl9VU0VSPSd3ZWJ1c2VyJwogICAgICBteUNPTkZfV0VCX1BXPSd3M2IkZWNyZXRzMiEnCiAgICBvd25lcjogcm9vdDpyb290CiAgICBwYXRoOiAvcm9vdC90cG90LmNvbmYKICAgIHBlcm1pc3Npb25zOiAnMDYwMCc=
Click review and create. Deployment of the VM should take less than 5 minutes; however, cloud-init will take some time, **typically 15 minutes**, before T-Pot services are up and running.
## Post Install Instructions
The install **may take around 15 minutes** for services to come up. Check that, from your public IP, you can connect to https://azurepublicip:64297; you will be prompted for the username and password supplied in the B64 cloud-init string via *myCONF_WEB_USER=* and *myCONF_WEB_PW=*.
Review the [available honeypots architecture section](https://raw.githubusercontent.com/telekom-security/tpotce/master/doc/architecture.png) and [available ports](https://github.com/telekom-security/tpotce#required-ports), then poke a hole in the Network Security Group to expose T-Pot to your on-premise network CIDR or other Azure virtual network CIDRs; finally, you can also expose a port to the public Internet for threat intelligence gathering.
## Network Security Group
Please study the rules carefully. You may need to make some additional rules or modifications based on your needs and considerations. For example, if this is for internal private IP range detection, you may want to remove rules and place a higher-priority DENY rule preventing all T-Pot ports and services from being exposed internally, and then place a few ALLOW rules for your on-premise private IP address CIDR, other hub private IPs, and some spoke private IPs.
![Network Security Group rules](https://raw.githubusercontent.com/telekom-security/tpotce/master/cloud/azure/images/nsg.png)

cloud/azure/azuredeploy.json Normal file

@ -0,0 +1,308 @@
{
"$schema": "http://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
"contentVersion": "1.0.0.0",
"parameters": {
"VMName": {
"type": "string",
"metadata": {
"description": "VM Name and convention your company uses, be sure to entice naming EX. vm-fileshares-prod-eastus-003"
},
"defaultValue": "vm-fileshares-prod-eastus-003"
},
"virtualNetworkId": {
"type": "string",
"metadata": {
"description": "Virtual Network Resource ID to Deploy Azure VM into"
},
"defaultValue": "/subscriptions/{SUBID}/resourceGroups/{RG NAME}/providers/Microsoft.Network/virtualNetworks/{VNET NAME}"
},
"subnetName": {
"type": "string",
"metadata": {
"description": "Virtual Network Subnet Name to Deploy Azure VM into"
}
},
"MyConnectionIP": {
"type": "string",
"minLength": 7,
"maxLength": 15,
"metadata": {
"description": "The Public IP I will be connecting from to administer and configure"
},
"defaultValue": "XXX.XXX.XXX.XXX"
},
"adminUsername": {
"type": "string",
"minLength": 1,
"defaultValue": "webuser",
"metadata": {
"description": "Admin user name for Linux VM"
}
},
"authenticationType": {
"type": "string",
"defaultValue": "password",
"allowedValues": [
"sshPublicKey",
"password"
],
"metadata": {
"description": "Type of authentication to use on the Virtual Machine. SSH key is recommended."
}
},
"adminPasswordOrKey": {
"type": "securestring",
"metadata": {
"description": "SSH Key or password for the Virtual Machine. SSH key is recommended."
}
},
"CloudInitB64Encoded": {
"type": "securestring",
"metadata": {
"description": "Cloud Init Configuration as a Base 64 encoded string, decode to examine a few variables to change and encode and submit"
},
"defaultValue": "I2Nsb3VkLWNvbmZpZwp0aW1lem9uZTogVVMvRWFzdGVybgoKcGFja2FnZXM6CiAgLSBnaXQKCnJ1bmNtZDoKICAtIGN1cmwgLXNTIC0tcmV0cnkgNSBodHRwczovL2dpdGh1Yi5jb20KICAtIGdpdCBjbG9uZSBodHRwczovL2dpdGh1Yi5jb20vdGVsZWtvbS1zZWN1cml0eS90cG90Y2UgL3Jvb3QvdHBvdAogIC0gL3Jvb3QvdHBvdC9pc28vaW5zdGFsbGVyL2luc3RhbGwuc2ggLS10eXBlPWF1dG8gLS1jb25mPS9yb290L3Rwb3QuY29uZgogIC0gcm0gL3Jvb3QvdHBvdC5jb25mCiAgLSAvc2Jpbi9zaHV0ZG93biAtciBub3cKCnBhc3N3b3JkOiB3M2IkZWNyZXRzMiEKY2hwYXNzd2Q6CiAgZXhwaXJlOiBmYWxzZQoKd3JpdGVfZmlsZXM6CiAgLSBjb250ZW50OiB8CiAgICAgICMgdHBvdCBjb25maWd1cmF0aW9uIGZpbGUKICAgICAgbXlDT05GX1RQT1RfRkxBVk9SPSdTVEFOREFSRCcKICAgICAgbXlDT05GX1dFQl9VU0VSPSd3ZWJ1c2VyJwogICAgICBteUNPTkZfV0VCX1BXPSd3M2IkZWNyZXRzMiEnCiAgICBvd25lcjogcm9vdDpyb290CiAgICBwYXRoOiAvcm9vdC90cG90LmNvbmYKICAgIHBlcm1pc3Npb25zOiAnMDYwMCc="
}
},
"variables": {
"vnetId": "[parameters('virtualNetworkId')]",
"subnetRef": "[concat(variables('vnetId'), '/subnets/', parameters('subnetName'))]",
"linuxConfiguration": {
"disablePasswordAuthentication": true,
"ssh": {
"publicKeys": [
{
"path": "[format('/home/{0}/.ssh/authorized_keys', parameters('adminUsername'))]",
"keyData": "[parameters('adminPasswordOrKey')]"
}
]
}
}
},
"resources": [
{
"name": "[concat(uniqueString(resourceGroup().id, deployment().name),'-nic')]",
"type": "Microsoft.Network/networkInterfaces",
"apiVersion": "2021-08-01",
"location": "[resourceGroup().location]",
"dependsOn": [
"[resourceId('Microsoft.Network/networkSecurityGroups/', concat(uniqueString(resourceGroup().id, deployment().name),'-nsg'))]",
"[resourceId('Microsoft.Network/publicIpAddresses', concat(uniqueString(resourceGroup().id, deployment().name),'-pip'))]"
],
"properties": {
"ipConfigurations": [
{
"name": "ipconfig1",
"properties": {
"subnet": {
"id": "[variables('subnetRef')]"
},
"privateIPAllocationMethod": "Dynamic",
"publicIpAddress": {
"id": "[resourceId(resourceGroup().name, 'Microsoft.Network/publicIpAddresses', concat(uniqueString(resourceGroup().id, deployment().name),'-pip'))]",
"properties": {
"deleteOption": "Detach"
}
}
}
}
],
"enableAcceleratedNetworking": true,
"networkSecurityGroup": {
"id": "[resourceId('Microsoft.Network/networkSecurityGroups/', concat(uniqueString(resourceGroup().id, deployment().name),'-nsg'))]"
}
}
},
{
"name": "[concat(uniqueString(resourceGroup().id, deployment().name),'-nsg')]",
"type": "Microsoft.Network/networkSecurityGroups",
"apiVersion": "2019-02-01",
"location": "[resourceGroup().location]",
"properties": {
"securityRules": [
{
"name": "AllowAzureCloud22Inbound",
"properties": {
"protocol": "*",
"sourcePortRange": "*",
"destinationPortRange": "22",
"sourceAddressPrefix": "AzureCloud",
"destinationAddressPrefix": "*",
"access": "Allow",
"priority": 1011,
"direction": "Inbound",
"sourcePortRanges": [],
"destinationPortRanges": [],
"sourceAddressPrefixes": [],
"destinationAddressPrefixes": []
}
},
{
"name": "AllowCustom64294Inbound",
"properties": {
"protocol": "*",
"sourcePortRange": "*",
"destinationPortRange": "64294",
"sourceAddressPrefix": "[parameters('MyConnectionIP')]",
"destinationAddressPrefix": "*",
"access": "Allow",
"priority": 1021,
"direction": "Inbound",
"sourcePortRanges": [],
"destinationPortRanges": [],
"sourceAddressPrefixes": [],
"destinationAddressPrefixes": []
}
},
{
"name": "AllowSSHCustom64295Inbound",
"properties": {
"protocol": "*",
"sourcePortRange": "*",
"destinationPortRange": "64295",
"sourceAddressPrefix": "[parameters('MyConnectionIP')]",
"destinationAddressPrefix": "*",
"access": "Allow",
"priority": 1031,
"direction": "Inbound",
"sourcePortRanges": [],
"destinationPortRanges": [],
"sourceAddressPrefixes": [],
"destinationAddressPrefixes": []
}
},
{
"name": "AllowAzureCloud64295Inbound",
"properties": {
"protocol": "*",
"sourcePortRange": "*",
"destinationPortRange": "64295",
"sourceAddressPrefix": "AzureCloud",
"destinationAddressPrefix": "*",
"access": "Allow",
"priority": 1041,
"direction": "Inbound",
"sourcePortRanges": [],
"destinationPortRanges": [],
"sourceAddressPrefixes": [],
"destinationAddressPrefixes": []
}
},
{
"name": "AllowCustom64297Inbound",
"properties": {
"protocol": "*",
"sourcePortRange": "*",
"destinationPortRange": "64297",
"sourceAddressPrefix": "[parameters('MyConnectionIP')]",
"destinationAddressPrefix": "*",
"access": "Allow",
"priority": 1051,
"direction": "Inbound",
"sourcePortRanges": [],
"destinationPortRanges": [],
"sourceAddressPrefixes": [],
"destinationAddressPrefixes": []
}
},
{
"name": "AllowAllHomeOfficeCustomAnyInbound",
"properties": {
"protocol": "*",
"sourcePortRange": "*",
"destinationPortRange": "*",
"sourceAddressPrefix": "[parameters('MyConnectionIP')]",
"destinationAddressPrefix": "*",
"access": "Allow",
"priority": 1061,
"direction": "Inbound",
"sourcePortRanges": [],
"destinationPortRanges": [],
"sourceAddressPrefixes": [],
"destinationAddressPrefixes": []
}
}
]
}
},
{
"name": "[concat(uniqueString(resourceGroup().id, deployment().name),'-pip')]",
"type": "Microsoft.Network/publicIpAddresses",
"apiVersion": "2020-08-01",
"location": "[resourceGroup().location]",
"properties": {
"publicIpAllocationMethod": "Static"
},
"sku": {
"name": "Standard"
},
"zones": [
"1"
]
},
{
"name": "[parameters('VMName')]",
"type": "Microsoft.Compute/virtualMachines",
"apiVersion": "2022-03-01",
"location": "[resourceGroup().location]",
"dependsOn": [
"[resourceId('Microsoft.Network/networkInterfaces', concat(uniqueString(resourceGroup().id, deployment().name),'-nic'))]"
],
"properties": {
"hardwareProfile": {
"vmSize": "Standard_D4s_v3"
},
"storageProfile": {
"osDisk": {
"createOption": "fromImage",
"managedDisk": {
"storageAccountType": "StandardSSD_LRS"
},
"deleteOption": "Delete"
},
"imageReference": {
"publisher": "debian",
"offer": "debian-11",
"sku": "11-gen2",
"version": "latest"
},
"dataDisks": [
{
"name": "[concat(parameters('VMName'),'-datadisk')]",
"diskSizeGB": 256,
"lun": 0,
"createOption": "Empty",
"caching": "ReadWrite"
}
]
},
"networkProfile": {
"networkInterfaces": [
{
"id": "[resourceId('Microsoft.Network/networkInterfaces', concat(uniqueString(resourceGroup().id, deployment().name),'-nic'))]",
"properties": {
"deleteOption": "Delete"
}
}
]
},
"osProfile": {
"computerName": "[parameters('VMName')]",
"adminUsername": "[parameters('adminUsername')]",
"adminPassword": "[parameters('adminPasswordOrKey')]",
"linuxConfiguration": "[if(equals(parameters('authenticationType'), 'password'), null(), variables('linuxConfiguration'))]",
"customData": "[parameters('CloudInitB64Encoded')]"
},
"diagnosticsProfile": {
"bootDiagnostics": {
"enabled": true
}
}
},
"zones": [
"1"
]
}
],
"outputs": {}
}
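Besides the portal buttons above, the template can also be deployed via the Azure CLI; a sketch with placeholder values (resource group, VNet ID and parameter values are assumptions):
```bash
az deployment group create \
  --resource-group rg-tpot \
  --template-file azuredeploy.json \
  --parameters VMName=vm-tpot-prod-001 \
               virtualNetworkId='<vnet-resource-id>' \
               subnetName=snet-tpot \
               MyConnectionIP=203.0.113.5 \
               adminPasswordOrKey='<ssh-key-or-password>' \
               CloudInitB64Encoded='<your-b64-string>'
```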

cloud/azure/images/nsg.png Normal file (binary, new, 49 KiB; not shown)

(Binary file updated, not shown: 380 KiB before, 620 KiB after.)

View File

@ -1,4 +1,4 @@
FROM alpine:3.15
FROM alpine:3.17
#
# Include dist
COPY dist/ /root/dist/
@ -7,13 +7,14 @@ COPY dist/ /root/dist/
RUN apk --no-cache -U add \
git \
procps \
py3-requests \
python3 && \
#
# Install adbhoney from git
git clone https://github.com/huuck/ADBHoney /opt/adbhoney && \
cd /opt/adbhoney && \
# git checkout ad7c17e78d01f6860d58ba826a4b6a4e4f83acbd && \
git checkout 2417a7a982f4fd527b3a048048df9a23178767ad && \
# git checkout 2417a7a982f4fd527b3a048048df9a23178767ad && \
git checkout 42afd98611724ca3d694a48b694c957e8d953db4 && \
cp /root/dist/adbhoney.cfg /opt/adbhoney && \
sed -i 's/dst_ip/dest_ip/' /opt/adbhoney/adbhoney/core.py && \
sed -i 's/dst_port/dest_port/' /opt/adbhoney/adbhoney/core.py && \

View File

@ -3,6 +3,8 @@ hostname = honeypot01
address = 0.0.0.0
port = 5555
http_download = true
http_timeout = 45
download_dir = dl/
log_dir = log/

View File

@ -1,4 +1,4 @@
FROM alpine:3.15
FROM alpine:3.17
#
# Include dist
COPY dist/ /root/dist/

View File

@ -1,5 +1,8 @@
version: '2.3'
networks:
ciscoasa_local:
services:
# Ciscoasa service

View File

@ -1,4 +1,4 @@
FROM alpine:3.15
FROM alpine:3.17
#
# Install packages
RUN apk --no-cache -U add \
@ -28,7 +28,7 @@ RUN apk --no-cache -U add \
addgroup -g 2000 citrixhoneypot && \
adduser -S -H -s /bin/ash -u 2000 -D -g 2000 citrixhoneypot && \
chown -R citrixhoneypot:citrixhoneypot /opt/citrixhoneypot && \
setcap cap_net_bind_service=+ep /usr/bin/python3.9 && \
setcap cap_net_bind_service=+ep /usr/bin/python3.10 && \
#
# Clean up
apk del --purge git \

View File

@ -1,4 +1,4 @@
FROM alpine:3.15
FROM alpine:3.17
#
# Include dist
COPY dist/ /root/dist/
@ -62,20 +62,19 @@ RUN apk --no-cache -U add \
pip3 install --no-cache-dir . && \
cd / && \
rm -rf /opt/conpot /tmp/* /var/tmp/* && \
setcap cap_net_bind_service=+ep /usr/bin/python3.9 && \
setcap cap_net_bind_service=+ep /usr/bin/python3.10 && \
#
# Get wireshark manuf db for scapy, setup configs, user, groups
mkdir -p /etc/conpot /var/log/conpot /usr/share/wireshark && \
wget https://github.com/wireshark/wireshark/raw/master/manuf -o /usr/share/wireshark/manuf && \
cp /root/dist/conpot.cfg /etc/conpot/conpot.cfg && \
cp -R /root/dist/templates /usr/lib/python3.9/site-packages/conpot/ && \
cp -R /root/dist/templates /usr/lib/python3.10/site-packages/conpot/ && \
addgroup -g 2000 conpot && \
adduser -S -s /bin/ash -u 2000 -D -g 2000 conpot && \
#
# Clean up
apk del --purge \
build-base \
cython-dev \
file \
git \
libev \

View File

@ -3,7 +3,7 @@ sensorid = conpot
[virtual_file_system]
data_fs_url = %(CONPOT_TMP)s
fs_url = tar:///usr/lib/python3.9/site-packages/conpot/data.tar
fs_url = tar:///usr/lib/python3.10/site-packages/conpot/data.tar
[session]
timeout = 30

View File

@ -1,4 +1,4 @@
FROM alpine:3.15
FROM alpine:3.17
#
# Include dist
COPY dist/ /root/dist/
@ -40,9 +40,10 @@ RUN apk --no-cache -U add \
# Install cowrie
mkdir -p /home/cowrie && \
cd /home/cowrie && \
git clone --depth=1 https://github.com/micheloosterhof/cowrie -b v2.3.0 && \
git clone --depth=1 https://github.com/cowrie/cowrie -b v2.5.0 && \
#git clone --depth=1 https://github.com/cowrie/cowrie && \
cd cowrie && \
# git checkout 6b1e82915478292f1e77ed776866771772b48f2e && \
#git checkout 8b1e1cf4db0d3b0e70b470cf40385bbbd3ed1733 && \
mkdir -p log && \
cp /root/dist/requirements.txt . && \
pip3 install --upgrade pip && \
@ -75,6 +76,7 @@ RUN apk --no-cache -U add \
rm -rf /var/cache/apk/* && \
rm -rf /home/cowrie/cowrie/cowrie.pid && \
rm -rf /home/cowrie/cowrie/.git && \
# ln -s /usr/bin/python3 /usr/bin/python && \
unset PYTHON_DIR
#
# Start cowrie

View File

@ -13,10 +13,8 @@ interactive_timeout = 180
authentication_timeout = 120
backend = shell
timezone = UTC
report_public_ip = true
auth_class = AuthRandom
auth_class_parameters = 2, 5, 10
reported_ssh_port = 22
data_path = /tmp/cowrie/data
[shell]

View File

@ -1,4 +1,4 @@
FROM alpine:3.15
FROM alpine:3.17
#
# Include dist
COPY dist/ /root/dist/
@ -41,7 +41,7 @@ RUN apk --no-cache -U add \
sed -i "s#rotate_size = 10#rotate_size = 9999#g" /opt/ddospot/ddospot/pots/ssdp/ssdpot.conf && \
cp /root/dist/requirements.txt . && \
pip3 install -r ddospot/requirements.txt && \
setcap cap_net_bind_service=+ep /usr/bin/python3.9 && \
setcap cap_net_bind_service=+ep /usr/bin/python3.10 && \
#
# Setup user, groups and configs
addgroup -g 2000 ddospot && \

View File

@ -1,4 +1,4 @@
FROM alpine:3.15
FROM alpine:3.17
#
# Setup apk
RUN apk -U add --no-cache \

View File

@ -1,4 +1,4 @@
FROM ubuntu:20.04
FROM ubuntu:22.04
ENV DEBIAN_FRONTEND noninteractive
#
# Include dist
@ -88,7 +88,7 @@ RUN ARCH=$(arch) && \
python3-bson \
python3-yaml \
wget && \
#
apt-get install -y \
ca-certificates \
python3 \
@ -102,7 +102,7 @@ RUN ARCH=$(arch) && \
libnetfilter-queue1 \
libnl-3-200 \
libpcap0.8 \
libpython3.8 \
libpython3.10 \
libudns0 && \
#
apt-get autoremove --purge -y && \

View File

@ -1,4 +1,4 @@
FROM alpine:3.15
FROM alpine:3.17
#
# Include dist
COPY dist/ /root/dist/

View File

@ -71,7 +71,7 @@ services:
# Map Web Service
map_web:
build: .
build: map/.
container_name: map_web
restart: always
environment:

View File

@ -1,7 +1,7 @@
FROM ubuntu:20.04
FROM ubuntu:22.04
#
# VARS
ENV ES_VER=8.0.1
ENV ES_VER=8.6.2
#
# Include dist
COPY dist/ /root/dist/

View File

@ -1,7 +1,7 @@
FROM ubuntu:20.04
FROM ubuntu:22.04
#
# VARS
ENV KB_VER=8.0.1
ENV KB_VER=8.6.2
# Include dist
COPY dist/ /root/dist/
#

View File

@ -1,7 +1,7 @@
FROM ubuntu:20.04
FROM ubuntu:22.04
#
# VARS
ENV LS_VER=8.0.1
ENV LS_VER=8.6.2
# Include dist
COPY dist/ /root/dist/
#
@ -13,6 +13,7 @@ RUN apt-get update -y && \
bash \
bzip2 \
curl \
# openjdk-11-jre \
openssh-client && \
#
# Determine arch, get and install packages
@ -41,6 +42,8 @@ RUN apt-get update -y && \
cp pipelines.yml /usr/share/logstash/config/pipelines.yml && \
cp pipelines_sensor.yml /usr/share/logstash/config/pipelines_sensor.yml && \
cp tpot-template.json /etc/logstash/ && \
cd /usr/share/logstash && \
bin/logstash-plugin update logstash-filter-translate && \
rm /etc/logstash/pipelines.yml && \
rm /etc/logstash/logstash.yml && \
#
@ -63,4 +66,4 @@ HEALTHCHECK --retries=10 CMD curl -s -XGET 'http://127.0.0.1:9600'
#
# Start logstash
USER logstash:logstash
CMD entrypoint.sh && exec /usr/share/logstash/bin/logstash --config.reload.automatic
CMD entrypoint.sh

View File

@ -6,6 +6,13 @@ function fuCLEANUP {
}
trap fuCLEANUP EXIT
# Source ENVs from file ...
if [ -f "/data/tpot/etc/compose/elk_environment" ];
then
echo "Found .env, now exporting ..."
set -o allexport && source "/data/tpot/etc/compose/elk_environment" && set +o allexport
fi
# Check internet availability
function fuCHECKINET () {
mySITES=$1
@ -50,38 +57,42 @@ if [ "$MY_TPOT_TYPE" == "SENSOR" ];
chmod 600 $MY_SENSOR_PRIVATEKEYFILE
cp /usr/share/logstash/config/pipelines_sensor.yml /usr/share/logstash/config/pipelines.yml
autossh -f -M 0 -4 -l $MY_HIVE_USERNAME -i $MY_SENSOR_PRIVATEKEYFILE -p 64295 -N -L64305:127.0.0.1:64305 $MY_HIVE_IP -o "ServerAliveInterval 30" -o "ServerAliveCountMax 3" -o "StrictHostKeyChecking=no" -o "UserKnownHostsFile=/dev/null"
exit 0
fi
# Index Management is happening through ILM, but we need to put T-Pot ILM setting on ES.
myTPOTILM=$(curl -s -XGET "http://elasticsearch:9200/_ilm/policy/tpot" | grep "Lifecycle policy not found: tpot" -c)
if [ "$myTPOTILM" == "1" ];
if [ "$MY_TPOT_TYPE" != "SENSOR" ];
then
echo "T-Pot ILM template not found on ES, putting it on ES now."
curl -XPUT "http://elasticsearch:9200/_ilm/policy/tpot" -H 'Content-Type: application/json' -d'
{
"policy": {
"phases": {
"hot": {
"min_age": "0ms",
"actions": {}
},
"delete": {
"min_age": "30d",
"actions": {
# Index Management is happening through ILM, but we need to put T-Pot ILM setting on ES.
myTPOTILM=$(curl -s -XGET "http://elasticsearch:9200/_ilm/policy/tpot" | grep "Lifecycle policy not found: tpot" -c)
if [ "$myTPOTILM" == "1" ];
then
echo "T-Pot ILM template not found on ES, putting it on ES now."
curl -XPUT "http://elasticsearch:9200/_ilm/policy/tpot" -H 'Content-Type: application/json' -d'
{
"policy": {
"phases": {
"hot": {
"min_age": "0ms",
"actions": {}
},
"delete": {
"delete_searchable_snapshot": true
"min_age": "30d",
"actions": {
"delete": {
"delete_searchable_snapshot": true
}
}
}
},
"_meta": {
"managed": true,
"description": "T-Pot ILM policy with a retention of 30 days"
}
}
},
"_meta": {
"managed": true,
"description": "T-Pot ILM policy with a retention of 30 days"
}
}
}'
else
echo "T-Pot ILM already configured or ES not available."
}'
else
echo "T-Pot ILM already configured or ES not available."
fi
fi
echo
exec /usr/share/logstash/bin/logstash --config.reload.automatic
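A quick way to confirm the entrypoint applied the policy is to query Elasticsearch for it (illustrative only, assuming the elasticsearch service name used above is resolvable from where the check is run):
# Should return the "tpot" policy with its 30d delete phase
curl -s -XGET 'http://elasticsearch:9200/_ilm/policy/tpot'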

View File

@ -638,12 +638,14 @@ if "_jsonparsefailure" in [tags] { drop {} }
geoip {
cache_size => 10000
source => "src_ip"
database => "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-filter-geoip-7.2.11-java/vendor/GeoLite2-City.mmdb"
default_database_type => "City"
# database => "/usr/share/logstash/vendor/bundle/jruby/2.6.0/gems/logstash-filter-geoip-7.2.12-java/vendor/GeoLite2-City.mmdb"
}
geoip {
cache_size => 10000
source => "src_ip"
database => "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-filter-geoip-7.2.11-java/vendor/GeoLite2-ASN.mmdb"
default_database_type => "ASN"
# database => "/usr/share/logstash/vendor/bundle/jruby/2.6.0/gems/logstash-filter-geoip-7.2.12-java/vendor/GeoLite2-ASN.mmdb"
}
translate {
refresh_interval => 86400
@ -657,13 +659,15 @@ if "_jsonparsefailure" in [tags] { drop {} }
cache_size => 10000
source => "t-pot_ip_ext"
target => "geoip_ext"
database => "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-filter-geoip-7.2.11-java/vendor/GeoLite2-City.mmdb"
default_database_type => "City"
# database => "/usr/share/logstash/vendor/bundle/jruby/2.6.0/gems/logstash-filter-geoip-7.2.12-java/vendor/GeoLite2-City.mmdb"
}
geoip {
cache_size => 10000
source => "t-pot_ip_ext"
target => "geoip_ext"
database => "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-filter-geoip-7.2.11-java/vendor/GeoLite2-ASN.mmdb"
default_database_type => "ASN"
# database => "/usr/share/logstash/vendor/bundle/jruby/2.6.0/gems/logstash-filter-geoip-7.2.12-java/vendor/GeoLite2-ASN.mmdb"
}
}

View File

@ -638,12 +638,14 @@ if "_jsonparsefailure" in [tags] { drop {} }
geoip {
cache_size => 10000
source => "src_ip"
database => "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-filter-geoip-7.2.11-java/vendor/GeoLite2-City.mmdb"
default_database_type => "City"
# database => "/usr/share/logstash/vendor/bundle/jruby/2.6.0/gems/logstash-filter-geoip-7.2.12-java/vendor/GeoLite2-City.mmdb"
}
geoip {
cache_size => 10000
source => "src_ip"
database => "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-filter-geoip-7.2.11-java/vendor/GeoLite2-ASN.mmdb"
default_database_type => "ASN"
# database => "/usr/share/logstash/vendor/bundle/jruby/2.6.0/gems/logstash-filter-geoip-7.2.12-java/vendor/GeoLite2-ASN.mmdb"
}
translate {
refresh_interval => 86400
@ -657,13 +659,15 @@ if "_jsonparsefailure" in [tags] { drop {} }
cache_size => 10000
source => "t-pot_ip_ext"
target => "geoip_ext"
database => "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-filter-geoip-7.2.11-java/vendor/GeoLite2-City.mmdb"
default_database_type => "City"
# database => "/usr/share/logstash/vendor/bundle/jruby/2.6.0/gems/logstash-filter-geoip-7.2.12-java/vendor/GeoLite2-City.mmdb"
}
geoip {
cache_size => 10000
source => "t-pot_ip_ext"
target => "geoip_ext"
database => "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-filter-geoip-7.2.11-java/vendor/GeoLite2-ASN.mmdb"
default_database_type => "ASN"
# database => "/usr/share/logstash/vendor/bundle/jruby/2.6.0/gems/logstash-filter-geoip-7.2.12-java/vendor/GeoLite2-ASN.mmdb"
}
}

View File

@ -1,7 +1,7 @@
FROM alpine:3.15
FROM alpine:3.17
#
# Include dist
COPY dist/ /root/dist/
#COPY dist/ /root/dist/
#
# Install packages
RUN apk -U --no-cache add \
@ -12,30 +12,31 @@ RUN apk -U --no-cache add \
python3 \
python3-dev && \
#
# Install Server from GitHub and setup
# Install from GitHub and setup
mkdir -p /opt && \
cd /opt/ && \
git clone https://github.com/t3chn0m4g3/geoip-attack-map && \
cd geoip-attack-map && \
# git checkout 4dae740178455f371b667ee095f824cb271f07e8 && \
cp /root/dist/* . && \
git clone https://github.com/t3chn0m4g3/t-pot-attack-map -b 2.0.1 && \
cd t-pot-attack-map && \
# git checkout eaf8d123d72a62e4c12093e4e8487e10e6ef60f3 && \
# git branch -a && \
# git checkout multi && \
pip3 install --upgrade pip && \
pip3 install -r requirements.txt && \
pip3 install flask && \
setcap cap_net_bind_service=+ep /usr/bin/python3.9 && \
setcap cap_net_bind_service=+ep /usr/bin/python3.10 && \
#
# Setup user, groups and configs
addgroup -g 2000 map && \
adduser -S -H -s /bin/ash -u 2000 -D -g 2000 map && \
chown map:map -R /opt/geoip-attack-map && \
chown map:map -R /opt/t-pot-attack-map && \
#
# Clean up
apk del --purge build-base \
git \
python3-dev && \
rm -rf /root/* /var/cache/apk/* /opt/geoip-attack-map/.git
rm -rf /root/* /var/cache/apk/* /opt/t-pot-attack-map/.git
#
# Start wordpot
# Start T-Pot-Attack-Map
STOPSIGNAL SIGINT
USER map:map
WORKDIR /opt/geoip-attack-map
CMD ./entrypoint.sh && exec /usr/bin/python3 $MAP_COMMAND
WORKDIR /opt/t-pot-attack-map
CMD /usr/bin/python3 $MAP_COMMAND

View File

@ -16,7 +16,7 @@ RUN apk -U add --no-cache \
make && \
mv /opt/endlessh/endlessh /root/dist
#
FROM alpine:3.15
FROM alpine:3.17
#
COPY --from=builder /root/dist/* /opt/endlessh/
#

View File

@ -1,4 +1,4 @@
FROM alpine:3.15
FROM alpine:3.17
#
# Include dist
COPY dist/ /root/dist/
@ -26,10 +26,7 @@ RUN apk -U --no-cache add \
pip3 install --no-cache-dir configparser hpfeeds3 influxdb influxdb-client xmljson && \
#
# Setup ewsposter
git clone https://github.com/telekom-security/ewsposter /opt/ewsposter && \
cd /opt/ewsposter && \
# git checkout 11ab4c8a0a1b63d4bca8c52c07f2eab520d0b257 && \
git checkout 17c08f3ae500d838c1528c9700e4430d5f6ad214 && \
git clone https://github.com/telekom-security/ewsposter -b v1.25.0 /opt/ewsposter && \
mkdir -p /opt/ewsposter/spool /opt/ewsposter/log && \
#
# Setup user and groups
@ -37,9 +34,10 @@ RUN apk -U --no-cache add \
adduser -S -H -u 2000 -D -g 2000 ews && \
chown -R ews:ews /opt/ewsposter && \
#
# Supply configs
# Supply config and entrypoint.sh
mv /root/dist/ews.cfg /opt/ewsposter/ && \
# mv /root/dist/*.pem /opt/ewsposter/ && \
mv /root/dist/entrypoint.sh /usr/bin/ && \
chmod 755 /usr/bin/entrypoint.sh && \
#
# Clean up
apk del build-base \
@ -52,4 +50,4 @@ RUN apk -U --no-cache add \
# Run ewsposter
STOPSIGNAL SIGINT
USER ews:ews
CMD sleep 10 && exec /usr/bin/python3 -u /opt/ewsposter/ews.py -l $(shuf -i 10-60 -n 1)
CMD /usr/bin/entrypoint.sh

docker/ewsposter/dist/entrypoint.sh (new vendored file, 10 lines)
View File

@ -0,0 +1,10 @@
#!/bin/ash
# Source ENVs from file ...
if [ -f "/data/tpot/etc/compose/elk_environment" ];
then
echo "Found .env, now exporting ..."
set -o allexport && source "/data/tpot/etc/compose/elk_environment" && set +o allexport
fi
exec /usr/bin/python3 -u /opt/ewsposter/ews.py -l $(shuf -i 10-60 -n 1)
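The allexport toggle is what turns every KEY=VALUE line of the sourced file into an exported environment variable; a minimal sketch of the same pattern, using a throwaway file and a variable name taken from the entrypoints above:
# Illustrative only, not part of the repo
echo 'MY_TPOT_TYPE=SENSOR' > /tmp/elk_environment_demo
set -o allexport && source /tmp/elk_environment_demo && set +o allexport
env | grep MY_TPOT_TYPE   # prints MY_TPOT_TYPE=SENSOR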
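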

View File

@ -94,7 +94,7 @@ nodeid = mailoney-community-01
logfile = /data/mailoney/log/commands.log
[RDPY]
rdpy = true
rdpy = false
nodeid = rdpy-community-01
logfile = /data/rdpy/log/rdpy.log
@ -124,7 +124,7 @@ nodeid = glutton-community-01
logfile = /data/glutton/log/glutton.log
[HONEYSAP]
honeysap = true
honeysap = false
nodeid = honeysap-community-01
logfile = /data/honeysap/log/honeysap-external.log
@ -132,6 +132,7 @@ logfile = /data/honeysap/log/honeysap-external.log
adbhoney = true
nodeid = adbhoney-community-01
logfile = /data/adbhoney/log/adbhoney.json
malwaredir = /data/adbhoney/downloads
[FATT]
fatt = false
@ -162,3 +163,23 @@ logfile = /data/honeypy/log/json.log
citrix = true
nodeid = citrix-community-01
logfile = /data/citrixhoneypot/logs/server.log
[REDISHONEYPOT]
redishoneypot = true
nodeid = redishoneypot-community-01
logfile = /data/redishoneypot/log/redishoneypot.log
[ENDLESSH]
endlessh = true
nodeid = endlessh-community-01
logfile = /data/endlessh/log/endlessh.log
[SENTRYPEER]
sentrypeer = true
nodeid = sentrypeer-community-01
logfile = /data/sentrypeer/log/sentrypeer.json
[LOG4POT]
log4pot = true
nodeid = log4pot-community-01
logfile = /data/log4pot/log/log4pot.log
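As a rough sanity check on a running T-Pot host, the log paths referenced by the newly added sections can be listed to see whether the corresponding honeypots are producing data (illustrative only, paths as configured above):
for f in /data/redishoneypot/log/redishoneypot.log /data/endlessh/log/endlessh.log /data/sentrypeer/log/sentrypeer.json /data/log4pot/log/log4pot.log; do
  ls -lh "$f" 2>/dev/null || echo "missing: $f"
done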

View File

@ -1,4 +1,4 @@
FROM alpine:3.15
FROM alpine:3.17
#
# Get and install dependencies & packages
RUN apk -U --no-cache add \
@ -20,7 +20,8 @@ RUN apk -U --no-cache add \
cd /opt && \
git clone https://github.com/0x4D31/fatt && \
cd fatt && \
git checkout 45cabf0b8b59162b99a1732d853efb01614563fe && \
git checkout c29e553514281e50781f86932b82337a5ada5640 && \
#git checkout 45cabf0b8b59162b99a1732d853efb01614563fe && \
#git checkout 314cd1ff7873b5a145a51ec4e85f6107828a2c79 && \
mkdir -p log && \
# pyshark >= 0.4.3 breaks fatt

View File

@ -1,4 +1,4 @@
FROM alpine:3.15
FROM alpine:3.15 as builder
#
# Include dist
COPY dist/ /root/dist/
@ -11,7 +11,6 @@ RUN apk -U --no-cache add \
g++ \
iptables-dev \
libnetfilter_queue-dev \
libcap \
libpcap-dev && \
#
# Setup go, glutton
@ -25,11 +24,19 @@ RUN apk -U --no-cache add \
mv /root/dist/system.go /opt/go/glutton/ && \
go mod download && \
make build && \
cd / && \
mkdir -p /opt/glutton && \
mv /opt/go/glutton/bin /opt/glutton/ && \
mv /opt/go/glutton/config /opt/glutton/ && \
mv /opt/go/glutton/rules /opt/glutton/ && \
mv /root/dist/rules.yaml /opt/go/glutton/rules/
#
FROM alpine:3.17
#
COPY --from=builder /opt/go/glutton/bin /opt/glutton/bin
COPY --from=builder /opt/go/glutton/config /opt/glutton/config
COPY --from=builder /opt/go/glutton/rules /opt/glutton/rules
#
RUN apk -U --no-cache add \
iptables-dev \
libnetfilter_queue-dev \
libcap \
libpcap-dev && \
ln -s /sbin/xtables-legacy-multi /sbin/xtables-multi && \
setcap cap_net_admin,cap_net_raw=+ep /opt/glutton/bin/server && \
setcap cap_net_admin,cap_net_raw=+ep /sbin/xtables-legacy-multi && \
@ -38,15 +45,9 @@ RUN apk -U --no-cache add \
addgroup -g 2000 glutton && \
adduser -S -s /bin/ash -u 2000 -D -g 2000 glutton && \
mkdir -p /var/log/glutton && \
mv /root/dist/rules.yaml /opt/glutton/rules/ && \
#
# Clean up
apk del --purge build-base \
git \
go \
g++ && \
rm -rf /var/cache/apk/* \
/opt/go \
/root/*
#
# Start glutton
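Whether the capabilities survived the switch to the runtime stage can be checked from the built image; an illustrative sketch, assuming getcap is available from the libcap package installed above and using a placeholder image tag:
# <glutton-image> is a placeholder for the locally built image
docker run --rm --entrypoint getcap <glutton-image> /opt/glutton/bin/server
# should list cap_net_admin,cap_net_raw on the server binary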

View File

@ -1,4 +1,4 @@
FROM alpine:3.15
FROM alpine:3.17
#
# Include dist
COPY dist/ /root/dist/
@ -17,11 +17,8 @@ RUN apk -U --no-cache add \
mkdir -p /opt/go && \
git clone https://github.com/yunginnanet/HellPot && \
cd HellPot && \
git checkout 1312f20e719223099af8aad80f316420ee3dfcb1 && \
# Hellpot ignores setting the logpath, need to do this hardcoded ...
sed -i 's#logDir = snek.GetString("logger.directory")#logDir = "/var/log/hellpot/"#g' config/logger.go && \
sed -i 's#tnow := "HellPot"#tnow := "hellpot"#g' config/logger.go && \
sed -i 's#logFileName := "HellPot"#logFileName := "hellpot"#g' config/logger.go && \
git checkout 49433bf499b6af314786cbbc3cb8566cdb18c40c && \
sed -i 's#logFileName := "HellPot"#logFileName := "hellpot"#g' internal/config/logger.go && \
go build cmd/HellPot/HellPot.go && \
mv /root/HellPot/HellPot /opt/hellpot/ && \
#

View File

@ -1,23 +1,42 @@
[deception]
# Used as "Server" HTTP header. Note that reverse proxies may hide this.
server_name = "nginx"
[http]
# TCP Listener (default)
bind_addr = "0.0.0.0"
bind_port = "8080"
paths = ["wp-login.php","wp-login","wp-json/omapp/v1/support"]
# this contains a list of blacklisted useragent strings. (case sensitive)
# clients with useragents containing any of these strings will receive "Not found" for any requests.
uagent_string_blacklist = ["Cloudflare-Traffic-Manager", "curl"]
# Unix Socket Listener (will override default)
unix_socket_path = "/var/run/hellpot"
unix_socket_permissions = "0666"
use_unix_socket = false
unix_socket = "/var/run/hellpot"
[http.router]
# Toggling this to true will cause all GET requests to match. Forces makerobots = false.
catchall = true
# Toggling this to false will prevent creation of robots.txt handler.
makerobots = true
# Handlers will be created for these paths, as well as robots.txt entries. Only valid if catchall = false.
paths = ["wp-json/omapp/v1/support", "wp-login.php", "wp-login"]
[logger]
# verbose (-v)
debug = true
log_directory = "/var/log/hellpot/"
nocolor = true
# extra verbose (-vv)
trace = false
# JSON log files will be stored in the directory below.
directory = "/var/log/hellpot/"
# disable all color in console output. when using Windows this will default to true.
nocolor = true
# toggles the use of the current date as the names for new log files.
use_date_filename = false
[performance]
# max_workers is only valid if restrict_concurrency is true
restrict_concurrency = false
max_workers = 256
[deception]
# Used as "Server: " header (if not proxied)
server_name = "nginx"
restrict_concurrency = false
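With catchall enabled and the listener bound to 0.0.0.0:8080 as above, any GET request should be trapped in the endless response; an illustrative local test (note --max-time, since the reply never ends):
curl -s --max-time 5 http://127.0.0.1:8080/wp-login.php | head -c 200; echo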

View File

@ -1,4 +1,4 @@
FROM alpine:3.15
FROM alpine:3.17
#
# Include dist
COPY dist/ /root/dist/
@ -54,11 +54,12 @@ RUN apk -U --no-cache add \
cd /opt/ && \
git clone https://github.com/qeeqbox/honeypots && \
cd honeypots && \
git checkout bee3147cf81837ba7639f1e27fe34d717ecccf29 && \
# git checkout bee3147cf81837ba7639f1e27fe34d717ecccf29 && \
git checkout 1ad37d7e07838e9ad18c5244d87b9e49d90c9bc3 && \
cp /root/dist/setup.py . && \
pip3 install --upgrade pip && \
pip3 install . && \
setcap cap_net_bind_service=+ep /usr/bin/python3.9 && \
setcap cap_net_bind_service=+ep /usr/bin/python3.10 && \
#
# Setup user, groups and configs
addgroup -g 2000 honeypots && \

View File

@ -4,9 +4,7 @@
"syslog_address":"",
"syslog_facility":0,
"postgres":"",
"db_options":[
],
"db_options":[],
"filter":"",
"interface":"",
"honeypots":{
@ -26,7 +24,8 @@
"password":"anonymous",
"log_file_name":"ftp.log",
"max_bytes":0,
"backup_count":10
"backup_count":10,
"options":["capture_commands"]
},
"httpproxy":{
"port":8080,
@ -35,7 +34,8 @@
"password":"admin",
"log_file_name":"httpproxy.log",
"max_bytes":0,
"backup_count":10
"backup_count":10,
"options":["capture_commands"]
},
"http":{
"port":80,
@ -45,7 +45,7 @@
"log_file_name":"http.log",
"max_bytes":0,
"backup_count":10,
"options":"fix_get_client_ip"
"options":["capture_commands","fix_get_client_ip"]
},
"https":{
"port":443,
@ -54,7 +54,8 @@
"password":"admin",
"log_file_name":"https.log",
"max_bytes":0,
"backup_count":10
"backup_count":10,
"options":["capture_commands","fix_get_client_ip"]
},
"imap":{
"port":143,
@ -63,7 +64,8 @@
"password":"123456",
"log_file_name":"imap.log",
"max_bytes":0,
"backup_count":10
"backup_count":10,
"options":["capture_commands"]
},
"mysql":{
"port":3306,
@ -72,7 +74,8 @@
"password":"123456",
"log_file_name":"mysql.log",
"max_bytes":0,
"backup_count":10
"backup_count":10,
"options":["capture_commands"]
},
"pop3":{
"port":110,
@ -81,7 +84,8 @@
"password":"123456",
"log_file_name":"pop3.log",
"max_bytes":0,
"backup_count":10
"backup_count":10,
"options":["capture_commands"]
},
"postgres":{
"port":5432,
@ -90,7 +94,8 @@
"password":"123456",
"log_file_name":"postgres.log",
"max_bytes":0,
"backup_count":10
"backup_count":10,
"options":["capture_commands"]
},
"redis":{
"port":6379,
@ -99,7 +104,8 @@
"password":"",
"log_file_name":"redis.log",
"max_bytes":0,
"backup_count":10
"backup_count":10,
"options":["capture_commands"]
},
"smb":{
"port":445,
@ -108,7 +114,8 @@
"password":"123456",
"log_file_name":"smb.log",
"max_bytes":0,
"backup_count":10
"backup_count":10,
"options":["capture_commands"]
},
"smtp":{
"port":25,
@ -116,8 +123,9 @@
"username":"root",
"password":"123456",
"log_file_name":"smtp.log",
"max_bytes":0,
"backup_count":10
"max_bytes":10000,
"backup_count":10,
"options":["capture_commands"]
},
"socks5":{
"port":1080,
@ -126,7 +134,8 @@
"password":"admin",
"log_file_name":"socks5.log",
"max_bytes":0,
"backup_count":10
"backup_count":10,
"options":["capture_commands"]
},
"ssh":{
"port":22,
@ -135,7 +144,8 @@
"password":"123456",
"log_file_name":"ssh.log",
"max_bytes":0,
"backup_count":10
"backup_count":10,
"options":["capture_commands", "interactive"]
},
"telnet":{
"port":23,
@ -144,7 +154,8 @@
"password":"123456",
"log_file_name":"telnet.log",
"max_bytes":0,
"backup_count":10
"backup_count":10,
"options":["capture_commands"]
},
"vnc":{
"port":5900,
@ -153,7 +164,8 @@
"password":"123456",
"log_file_name":"vnc.log",
"max_bytes":0,
"backup_count":10
"backup_count":10,
"options":["capture_commands"]
},
"elastic":{
"port":9200,
@ -162,7 +174,8 @@
"password":"123456",
"log_file_name":"elastic.log",
"max_bytes":0,
"backup_count":10
"backup_count":10,
"options":["capture_commands"]
},
"mssql":{
"port":1433,
@ -171,7 +184,8 @@
"password":"",
"log_file_name":"mssql.log",
"max_bytes":0,
"backup_count":10
"backup_count":10,
"options":["capture_commands"]
},
"ldap":{
"port":389,
@ -180,7 +194,8 @@
"password":"123456",
"log_file_name":"ldap.log",
"max_bytes":0,
"backup_count":10
"backup_count":10,
"options":["capture_commands"]
},
"ntp":{
"port":123,
@ -189,7 +204,8 @@
"password":"123456",
"log_file_name":"ntp.log",
"max_bytes":0,
"backup_count":10
"backup_count":10,
"options":["capture_commands"]
},
"memcache":{
"port":11211,
@ -198,7 +214,8 @@
"password":"123456",
"log_file_name":"memcache.log",
"max_bytes":0,
"backup_count":10
"backup_count":10,
"options":["capture_commands"]
},
"oracle":{
"port":1521,
@ -207,7 +224,8 @@
"password":"123456",
"log_file_name":"oracle.log",
"max_bytes":0,
"backup_count":10
"backup_count":10,
"options":["capture_commands"]
},
"snmp":{
"port":161,
@ -216,7 +234,28 @@
"password":"123456",
"log_file_name":"snmp.log",
"max_bytes":0,
"backup_count":10
"backup_count":10,
"options":["capture_commands"]
},
"sip":{
"port":5060,
"ip":"0.0.0.0",
"username":"",
"password":"",
"log_file_name":"sip.log",
"max_bytes":0,
"backup_count":10,
"options":["capture_commands"]
},
"irc":{
"port":6667,
"ip":"0.0.0.0",
"username":"",
"password":"",
"log_file_name":"irc.log",
"max_bytes":10000,
"backup_count":10,
"options":["capture_commands"]
}
},
"custom_filter":{

View File

@ -26,18 +26,24 @@ services:
- "53:53/udp"
- "80:80"
- "110:110"
- "123:123"
- "143:143"
- "161:161"
- "389:389"
- "443:443"
- "445:445"
- "1080:1080"
- "1433:1433"
- "1521:1521"
- "3306:3306"
- "5060:5060"
- "5432:5432"
- "5900:5900"
- "6379:6379"
- "6667:6667"
- "8080:8080"
- "9200:9200"
- "11211:11211"
image: "dtagdevsec/honeypots:2204"
read_only: true
volumes:

View File

@ -1,4 +1,4 @@
FROM ubuntu:20.04
FROM ubuntu:22.04
ENV DEBIAN_FRONTEND noninteractive
#
# Include dist

View File

@ -1,4 +1,4 @@
FROM alpine:3.15
FROM alpine:3.17
#
# Include dist
COPY dist/ /root/dist/
@ -37,7 +37,7 @@ RUN apk -U --no-cache add \
git checkout 7ab1cac437baba17cb2cd25d5bb1400327e1bb79 && \
cp /root/dist/requirements.txt . && \
pip3 install -r requirements.txt && \
setcap cap_net_bind_service=+ep /usr/bin/python3.9 && \
setcap cap_net_bind_service=+ep /usr/bin/python3.10 && \
#
# Setup user, groups and configs
addgroup -g 2000 ipphoney && \

View File

@ -1,4 +1,4 @@
FROM ubuntu:20.04
FROM ubuntu:22.04
ENV DEBIAN_FRONTEND noninteractive
#
# Install packages
@ -27,11 +27,11 @@ RUN apt-get update -y && \
cd /opt/ && \
git clone https://github.com/thomaspatzke/Log4Pot && \
cd Log4Pot && \
# git checkout b163858649801974e9b86cea397f5eb137c7c01b && \
git checkout fac539f470217347e51127c635f16749a887c0ac && \
# git checkout fac539f470217347e51127c635f16749a887c0ac && \
git checkout e224c0f786efb68b4aab892e69857e379b75b0c6 && \
sed -i 's#"type": logtype,#"reason": logtype,#g' log4pot-server.py && \
poetry install && \
setcap cap_net_bind_service=+ep /usr/bin/python3.8 && \
setcap cap_net_bind_service=+ep /usr/bin/python3.10 && \
#
# Setup user, groups and configs
addgroup --gid 2000 log4pot && \

View File

@ -1,4 +1,4 @@
FROM alpine:3.15
FROM alpine:3.17
#
# Setup apk
RUN apk -U --no-cache add \
@ -8,27 +8,29 @@ RUN apk -U --no-cache add \
g++ && \
#
# Setup go, build medpot
export GOPATH=/opt/go/ && \
export GOPATH=/tmp && \
export GO111MODULE=off && \
mkdir -p /opt/go/src && \
cd /opt/go/src && \
git clone https://github.com/schmalle/medpot && \
cd medpot && \
git checkout 75a2e6134cf926c35b6017d62542274434c87388 && \
cd .. && \
cd /tmp && \
go get -d -v github.com/davecgh/go-spew/spew && \
go get -d -v github.com/go-ini/ini && \
go get -d -v github.com/mozillazg/request && \
go get -d -v go.uber.org/zap && \
go get -d -v github.com/s9rA16Bf4/ArgumentParser/go/arguments && \
go get -d -v github.com/s9rA16Bf4/notify_handler/go/notify && \
# git clone https://github.com/schmalle/medpot && \
git clone https://github.com/s9rA16Bf4/medpot && \
cd medpot && \
cp dist/etc/ews.cfg /etc/ && \
go build medpot && \
# git checkout dd9833786bb56cd40c11dfb15e0dd57298e249e8 && \
git checkout 0feb786cd8a7616498ba9749dbfda24b5dbd363e && \
sed -i s/"ews = true"/"ews = false"/g template/ews.cfg && \
go build -o medpot go/medpot.go go/logo.go && \
#
# Setup medpot
mkdir -p /opt/medpot \
mkdir -p /etc/medpot \
/opt/medpot \
/var/log/medpot && \
cp medpot /opt/medpot && \
cp /opt/go/src/medpot/template/*.xml /opt/medpot/ && \
cp ./template/* /etc/medpot && \
#
# Setup user, groups and configs
addgroup -g 2000 medpot && \
@ -42,7 +44,8 @@ RUN apk -U --no-cache add \
g++ && \
rm -rf /var/cache/apk/* \
/opt/go \
/root/dist
/root/dist \
/tmp
#
# Start medpot
WORKDIR /opt/medpot

View File

@ -1,4 +1,4 @@
FROM alpine:3.15
FROM alpine:3.17
#
# Include dist
COPY dist/ /root/dist/
@ -6,7 +6,9 @@ COPY dist/ /root/dist/
# Get and install dependencies & packages
RUN apk -U --no-cache add \
nginx \
nginx-mod-http-headers-more && \
nginx-mod-http-brotli \
nginx-mod-http-headers-more \
nginx-mod-http-lua && \
#
## Setup T-Pot Landing Page, Elasticvue, Cyberchef
cp -R /root/dist/html/* /var/lib/nginx/html/ && \

View File

@ -1,17 +1,19 @@
FROM node:10.24.1-alpine3.11 as builder
#FROM node:17.9.0-alpine3.15 as builder
FROM node:18-alpine3.15 as builder
#
# Prep and build Cyberchef
RUN apk -U --no-cache add git && \
# Prep and build Cyberchef
ENV CY_VER=v9.55.0
RUN apk -U --no-cache add build-base git python3 && \
chown -R node:node /srv && \
npm install -g grunt-cli
WORKDIR /srv
USER node
RUN git clone https://github.com/gchq/cyberchef -b v9.32.3 . && \
RUN git clone https://github.com/gchq/cyberchef -b $CY_VER . && \
NODE_OPTIONS=--max_old_space_size=2048 && \
npm install && \
grunt prod && \
cd build/prod && \
rm CyberChef_v9.32.3.zip && \
rm CyberChef_$CY_VER.zip && \
tar cvfz cyberchef.tgz *
#
FROM scratch AS exporter

View File

@ -10,7 +10,7 @@ RUN apk -U --no-cache add git && \
cd /opt/app && \
cp /opt/src/package.json . && \
cp /opt/src/yarn.lock . && \
yarn install && \
yarn install --ignore-optional && \
cp -R /opt/src/* . && \
# We need to set this ENV so we can run Elasticvue in its own location rather than /
VUE_APP_PUBLIC_PATH=/elasticvue/ yarn build && \

View File

@ -2,6 +2,16 @@ user nginx;
worker_processes auto;
pid /run/nginx.pid;
load_module /usr/lib/nginx/modules/ngx_http_headers_more_filter_module.so;
load_module /usr/lib/nginx/modules/ngx_http_brotli_filter_module.so;
load_module /usr/lib/nginx/modules/ngx_http_brotli_static_module.so;
# OS ENV variables need to be defined here, so Lua can use them
env COCKPIT;
env TPOT_OSTYPE;
# Both modules are needed for Lua, in this exact order
load_module /usr/lib/nginx/modules/ndk_http_module.so;
load_module /usr/lib/nginx/modules/ngx_http_lua_module.so;
events {
worker_connections 768;
@ -9,11 +19,10 @@ events {
}
http {
##
# Basic Settings
##
resolver 127.0.0.11;
sendfile on;
tcp_nopush on;
tcp_nodelay on;
@ -27,6 +36,23 @@ http {
include /etc/nginx/mime.types;
default_type application/octet-stream;
##
# Compression
##
# gzip
gzip on;
gzip_vary on;
gzip_proxied any;
gzip_comp_level 6;
gzip_types text/plain text/css text/xml application/json application/javascript application/xml+rss application/atom+xml image/svg+xml;
# brotli
brotli on;
brotli_static on;
brotli_comp_level 6;
brotli_types text/xml image/svg+xml application/x-font-ttf image/vnd.microsoft.icon application/x-font-opentype application/json font/eot application/vnd.ms-fontobject application/javascript font/otf application/xml application/xhtml+xml text/javascript application/x-javascript text/plain application/x-font-truetype application/xml+rss image/x-icon font/opentype text/css image/x-win-bitmap;
##
# SSL Settings
##
@ -76,25 +102,11 @@ http {
'"server_protocol": "$server_protocol", ' # request protocol, like HTTP/1.1 or HTTP/2.0
'"pipe": "$pipe", ' # “p” if request was pipelined, “.” otherwise
'"gzip_ratio": "$gzip_ratio", '
'"http_cf_ray": "$http_cf_ray"'
'"http_cf_ray": "$http_cf_ray", '
'"proxy_host": "$proxy_host"'
'}';
access_log /var/log/nginx/access.log main_json;
error_log /var/log/nginx/error.log;
##
# Gzip Settings
##
gzip on;
gzip_disable "msie6";
# gzip_vary on;
# gzip_proxied any;
# gzip_comp_level 6;
# gzip_buffers 16 8k;
# gzip_http_version 1.1;
# gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;
##
# Virtual Host Configs
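Whether Brotli is actually negotiated can be verified from a client that advertises it; an illustrative check against T-Pot's default web port 64297 (host, port and the basic-auth credentials are placeholders for your own setup):
curl -sk -D- -o /dev/null -u '<webuser>:<password>' -H 'Accept-Encoding: br' https://127.0.0.1:64297/ | grep -i content-encoding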

View File

@ -13,13 +13,12 @@ server {
server_name example.com;
error_page 300 301 302 400 401 402 403 404 500 501 502 503 504 /error.html;
root /var/lib/nginx/html;
add_header Cache-Control "public, max-age=604800";
##############################################
### Remove version number, add a different Server header
##############################################
server_tokens off;
more_set_headers 'Server: apache';
##############################################
@ -45,7 +44,14 @@ server {
client_body_buffer_size 128k;
client_header_buffer_size 1k;
client_max_body_size 2M;
large_client_header_buffers 2 1k;
### Changed from the OWASP recommendation: "2 1k" to "2 1280" (i.e. 1.25k)
### When requests pass through another reverse proxy / load balancer
### in front of tpotce, more headers than usual can be introduced and
### the allowed 1k header buffer can be exceeded.
### An extra 280 bytes seems to work for most use cases
### while still keeping the value close to OWASP's recommendation.
large_client_header_buffers 2 1280;
### Mitigate Slow HTTP DoS attacks
### Timeouts definition ##
@ -84,83 +90,122 @@ server {
auth_basic "closed site";
auth_basic_user_file /etc/nginx/nginxpasswd;
#############################
### T-Pot Landing Page & Apps
#############################
location / {
set_by_lua_block $index_file {
local cockpit = os.getenv("COCKPIT")
if cockpit == "false" then
return "index_light.html"
else
return "index.html"
end
}
auth_basic "closed site";
auth_basic_user_file /etc/nginx/nginxpasswd;
try_files $uri $uri/ /index.html?$args;
index $index_file;
try_files $uri $uri/ /$index_file;
}
location ^~ /cyberchef {
location /elasticvue {
index index.html;
alias /var/lib/nginx/html/cyberchef;
try_files $uri $uri/ /index.html?$args;
alias /var/lib/nginx/html/esvue/;
try_files $uri $uri/ /elasticvue/index.html;
}
location ^~ /elasticvue {
location /cyberchef {
index index.html;
alias /var/lib/nginx/html/esvue;
try_files $uri $uri/ /index.html?$args;
alias /var/lib/nginx/html/cyberchef/;
try_files $uri $uri/ /cyberchef/index.html;
}
#################
### Proxied sites
#################
### Kibana
location /kibana/ {
proxy_pass http://127.0.0.1:64296;
set_by_lua_block $kibana {
local tpot_ostype = os.getenv("TPOT_OSTYPE")
if tpot_ostype == "mac" or tpot_ostype == "win" then
return "http://kibana:5601";
else
return "http://127.0.0.1:64296";
end
}
proxy_pass $kibana;
rewrite /kibana/(.*)$ /$1 break;
}
### ES
location /es/ {
proxy_pass http://127.0.0.1:64298/;
set_by_lua_block $elasticsearch {
local tpot_ostype = os.getenv("TPOT_OSTYPE")
if tpot_ostype == "mac" or tpot_ostype == "win" then
return "http://elasticsearch:9200";
else
return "http://127.0.0.1:64298";
end
}
proxy_pass $elasticsearch;
rewrite /es/(.*)$ /$1 break;
}
### Map
location /map/ {
proxy_pass http://127.0.0.1:64299/;
set_by_lua_block $map_web {
local tpot_ostype = os.getenv("TPOT_OSTYPE")
if tpot_ostype == "mac" or tpot_ostype == "win" then
return "http://map_web:64299";
else
return "http://127.0.0.1:64299";
end
}
proxy_pass $map_web;
rewrite /map/(.*)$ /$1 break;
proxy_read_timeout 7200s;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "Upgrade";
proxy_set_header Host $host;
proxy_set_header Host $http_host;
proxy_redirect http:// https://;
}
location /websocket {
proxy_pass http://127.0.0.1:64299;
proxy_read_timeout 3600s;
set_by_lua_block $map_web {
local tpot_ostype = os.getenv("TPOT_OSTYPE")
if tpot_ostype == "mac" or tpot_ostype == "win" then
return "http://map_web:64299";
else
return "http://127.0.0.1:64299";
end
}
proxy_pass $map_web;
proxy_read_timeout 7200s;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "Upgrade";
proxy_set_header Host $host;
proxy_set_header Host $http_host;
proxy_redirect http:// https://;
}
### spiderfoot
location /spiderfoot {
proxy_pass http://127.0.0.1:64303;
### Spiderfoot
set_by_lua_block $spiderfoot_backend {
local tpot_ostype = os.getenv("TPOT_OSTYPE")
if tpot_ostype == "mac" or tpot_ostype == "win" then
return "http://spiderfoot:8080";
else
return "http://127.0.0.1:64303";
end
}
location /spiderfoot/ {
proxy_pass $spiderfoot_backend;
proxy_set_header Host $http_host;
proxy_redirect http:// https://;
}
location ~ ^/(static|scanviz|scandelete|scaninfo) {
proxy_pass $spiderfoot_backend/spiderfoot/$1$is_args$args;
}
location /static {
proxy_pass http://127.0.0.1:64303/spiderfoot/static;
}
location /scanviz {
proxy_pass http://127.0.0.1:64303/spiderfoot/scanviz;
}
location /scandelete {
proxy_pass http://127.0.0.1:64303/spiderfoot/scandelete;
}
location /scaninfo {
proxy_pass http://127.0.0.1:64303/spiderfoot/scaninfo;
}
}
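The set_by_lua_block targets depend entirely on the COCKPIT and TPOT_OSTYPE variables exported into the nginx container; an illustrative way to see what the Lua code will read is to print them from the running container (container name nginx, as defined in the compose file further down):
docker exec nginx sh -c 'echo "COCKPIT=$COCKPIT TPOT_OSTYPE=$TPOT_OSTYPE"'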

View File

@ -4,7 +4,35 @@
// ╚═╝╚═╝╝╚╝ ╩ ╚═╝
*/
@import url('https://fonts.googleapis.com/css2?family=Open+Sans:wght@300;400;700&display=swap');
/* @import url('https://fonts.googleapis.com/css2?family=Open+Sans:wght@300;400;700&display=swap'); */
/* open-sans-300 - latin */
@font-face {
font-family: 'Open Sans';
font-style: normal;
font-weight: 300;
src: local(''),
url('assets/fonts/open-sans-v34-latin-300.woff2') format('woff2'), /* Chrome 26+, Opera 23+, Firefox 39+ */
url('assets/fonts/open-sans-v34-latin-300.woff') format('woff'); /* Chrome 6+, Firefox 3.6+, IE 9+, Safari 5.1+ */
}
/* open-sans-regular - latin */
@font-face {
font-family: 'Open Sans';
font-style: normal;
font-weight: 400;
src: local(''),
url('assets/fonts/open-sans-v34-latin-regular.woff2') format('woff2'), /* Chrome 26+, Opera 23+, Firefox 39+ */
url('assets/fonts/open-sans-v34-latin-regular.woff') format('woff'); /* Chrome 6+, Firefox 3.6+, IE 9+, Safari 5.1+ */
}
/* open-sans-700 - latin */
@font-face {
font-family: 'Open Sans';
font-style: normal;
font-weight: 700;
src: local(''),
url('assets/fonts/open-sans-v34-latin-700.woff2') format('woff2'), /* Chrome 26+, Opera 23+, Firefox 39+ */
url('assets/fonts/open-sans-v34-latin-700.woff') format('woff'); /* Chrome 6+, Firefox 3.6+, IE 9+, Safari 5.1+ */
}
/* V A R I A B L E S */
@ -66,7 +94,7 @@ body {
.container {
width: 145vh;
height: 85vh;
height: 90vh;
display: grid;
grid-template-columns: repeat(4, 1fr);
grid-template-rows: repeat(4, 1fr);

Six binary files changed (not shown); one asset grew from 9.0 KiB to 16 KiB.

View File

@ -3,12 +3,35 @@
// ┴─┘┴└─┘ ┴ └─┘
// Print the first List
const printFirstList = () => {
const isLinkAvailable = async (link) => {
try {
const response = await fetch(link, { method: 'HEAD', mode: 'no-cors' });
if (response.ok) {
// The link is available
return true;
} else if (response.status === 301 || response.status === 302) {
// The link is a redirect, follow the redirect and check the final location
const newLocation = response.headers.get('Location');
if (newLocation) {
const newResponse = await fetch(newLocation, { method: 'HEAD', mode: 'no-cors' });
if (newResponse.ok) {
// The final location is available
return true;
}
}
}
} catch (error) {
console.error('Link check failed: ', error);
}
// The link is not available
return false;
};
const printFirstList = async () => {
let icon = `<i class="list__head" icon-name="${CONFIG.firstListIcon}"></i>`;
const position = 'beforeend';
list_1.insertAdjacentHTML(position, icon);
for (const link of CONFIG.lists.firstList) {
// List item
let item = `
<a
target="${CONFIG.openInNewTab ? '_blank' : ''}"
@ -17,8 +40,10 @@ const printFirstList = () => {
>${link.name}</a
>
`;
const position = 'beforeend';
list_1.insertAdjacentHTML(position, item);
if (await isLinkAvailable(link.link)) {
const position = 'beforeend';
list_1.insertAdjacentHTML(position, item);
}
}
};
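The same availability logic can be reproduced outside the browser with a HEAD request; an illustrative shell equivalent (host, port and credentials are placeholders for your own T-Pot web login):
# 2xx/3xx means the link would be shown, anything else means it is hidden
curl -sk -o /dev/null -w '%{http_code}\n' -I -u '<webuser>:<password>' https://127.0.0.1:64297/map/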

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

View File

@ -1,36 +1,28 @@
// ┌┬┐┬┌┬┐┌─┐
// │ ││││├┤
// ┴ ┴┴ ┴└─┘
// Set time and Date
window.onload = displayClock();
// Clock function
function displayClock() {
const monthNames = [
'Jan',
'Feb',
'Mar',
'Apr',
'May',
'Jun',
'Jul',
'Aug',
'Sep',
'Oct',
'Nov',
'Dec',
];
const monthNames = ['Jan', 'Feb', 'Mar', 'Apr', 'May', 'Jun', 'Jul', 'Aug', 'Sep', 'Oct', 'Nov', 'Dec'];
// Get clock elements
var d = new Date();
var mm = monthNames[d.getMonth()];
var dd = d.getDate();
var min = (mins = ('0' + d.getMinutes()).slice(-2));
var hh = d.getHours();
var ampm = '';
var d = new Date();
var mm = monthNames[d.getMonth()];
var dd = d.getDate();
var min = (mins = ('0' + d.getMinutes()).slice(-2));
var hh = d.getHours();
var ampm = '';
// Display clock elements
document.getElementById('hour').innerText = hh;
document.getElementById('separator').innerHTML = ' : ';
document.getElementById('minutes').innerText = min + ampm;
setTimeout(displayClock, 1000);
}
if (CONFIG.twelveHourFormat) {
ampm = hh >= 12 ? ' pm' : ' am';
hh = hh % 12;
hh = hh ? hh : 12;
}
document.getElementById('hour').innerText = hh;
document.getElementById('separator').innerHTML = ' : ';
document.getElementById('minutes').innerText = min + ampm;
setTimeout(displayClock, 1000);
}

View File

@ -1,11 +1,13 @@
<!DOCTYPE HTML>
<html lang="en-US">
<head>
<meta charset="UTF-8">
<meta http-equiv="refresh">
<script type="text/javascript">
window.location.href = window.location.protocol + '//' + window.location.hostname + ':64294'
</script>
<title>Redirect to Cockpit</title>
</head>
<head>
<meta charset="UTF-8">
<title>Redirect to Cockpit</title>
</head>
<body>
<script type="text/javascript">
window.location.href = window.location.protocol + '//' + window.location.hostname + ':64294';
</script>
</body>
</html>

View File

@ -32,6 +32,10 @@ const CONFIG = {
// Links
lists: {
firstList: [
{
name: 'Attack Map',
link: '/map/',
},
{
name: 'Cockpit',
link: '/cockpit.html',
@ -54,17 +58,13 @@ const CONFIG = {
},
],
secondList: [
{
name: 'Attack Map',
link: '/map/',
},
{
name: 'SecurityMeter',
link: 'https://sicherheitstacho.eu',
},
{
name: 'T-Pot @ GitHub',
link: 'https://github.com/dtag-dev-sec/tpotce/',
link: 'https://github.com/telekom-security/tpotce/',
},
{
name: 'T-Pot ReadMe',

docker/nginx/dist/html/config_light.js (new vendored file, 71 lines)
View File

@ -0,0 +1,71 @@
// ╔╗ ╔═╗╔╗╔╔╦╗╔═╗
// ╠╩╗║╣ ║║║ ║ ║ ║
// ╚═╝╚═╝╝╚╝ ╩ ╚═╝
// ┌─┐┌─┐┌┐┌┌─┐┬┌─┐┬ ┬┬─┐┌─┐┌┬┐┬┌─┐┌┐┌
// │ │ ││││├┤ ││ ┬│ │├┬┘├─┤ │ ││ ││││
// └─┘└─┘┘└┘└ ┴└─┘└─┘┴└─┴ ┴ ┴ ┴└─┘┘└┘
const CONFIG = {
// ┌┐ ┌─┐┌─┐┬┌─┐┌─┐
// ├┴┐├─┤└─┐││ └─┐
// └─┘┴ ┴└─┘┴└─┘└─┘
// General
imageBackground: true,
openInNewTab: true,
twelveHourFormat: false,
// Greetings
greetingMorning: 'Good morning ☕',
greetingAfternoon: 'Good afternoon 🍯',
greetingEvening: 'Good evening 😁',
greetingNight: 'Go to Sleep 🥱',
// ┬ ┬┌─┐┌┬┐┌─┐
// │ │└─┐ │ └─┐
// ┴─┘┴└─┘ ┴ └─┘
//Icons
firstListIcon: 'home',
secondListIcon: 'external-link',
// Links
lists: {
firstList: [
{
name: 'Attack Map',
link: '/map/',
},
{
name: 'Cyberchef',
link: '/cyberchef/',
},
{
name: 'Elasticvue',
link: '/elasticvue/',
},
{
name: 'Kibana',
link: '/kibana/',
},
{
name: 'Spiderfoot',
link: '/spiderfoot/',
},
],
secondList: [
{
name: 'SecurityMeter',
link: 'https://sicherheitstacho.eu',
},
{
name: 'T-Pot @ GitHub',
link: 'https://github.com/telekom-security/tpotce/',
},
{
name: 'T-Pot ReadMe',
link: 'https://github.com/telekom-security/tpotce/blob/master/README.md',
},
],
},
};

Three binary files changed (not shown); one asset was 16 KiB before the change.

View File

@ -9,7 +9,7 @@
href="assets/icons/favicon.png"
/>
<link rel="stylesheet" href="app.css" />
<script src="https://unpkg.com/lucide@latest"></script>
<script src="assets/js/lucide.min.js"></script>
</head>
<!--
@ -49,7 +49,6 @@
<script src="assets/js/time.js"></script>
<script src="assets/js/theme.js"></script>
<script src="assets/js/greeting.js"></script>
<script src="assets/js/cards.js"></script>
<script src="assets/js/lists.js"></script>
<script>
lucide.createIcons();

docker/nginx/dist/html/index_light.html (new vendored file, 60 lines)
View File

@ -0,0 +1,60 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8" />
<title>T-Pot</title>
<link
rel=" shortcut icon"
type="image/png"
href="assets/icons/favicon.png"
/>
<link rel="stylesheet" href="app.css" />
<script src="assets/js/lucide.min.js"></script>
</head>
<!--
╔╗ ╔═╗╔╗╔╔╦╗╔═╗
╠╩╗║╣ ║║║ ║ ║ ║
╚═╝╚═╝╝╚╝ ╩ ╚═╝
-->
<body class="">
<div class="container">
<!-- Clock and Greetings -->
<div class="timeBlock">
<div class="clock">
<div id="hour" class=""></div>
<div id="separator" class=""></div>
<div id="minutes" class=""></div>
</div>
<div id="greetings"></div>
</div>
<!--
┬ ┬┌─┐┌┬┐┌─┐
│ │└─┐ │ └─┐
┴─┘┴└─┘ ┴ └─┘
-->
<div class="card list list__1" id="list_1"></div>
<div class="card list list__2" id="list_2"></div>
</div>
<!-- Config -->
<script src="config_light.js"></script>
<!-- Scripts -->
<script src="assets/js/time.js"></script>
<script src="assets/js/theme.js"></script>
<script src="assets/js/greeting.js"></script>
<script src="assets/js/lists.js"></script>
<script>
lucide.createIcons();
</script>
</body>
<!-- Developed and designed by Miguel R. Ávila: -->
<!-- https://github.com/migueravila -->
</html>

View File

@ -5,6 +5,8 @@ services:
# nginx service
nginx:
build: .
environment:
- COCKPIT=false
container_name: nginx
restart: always
tmpfs:
@ -18,9 +20,9 @@ services:
# cpu_count: 1
# cpus: 0.75
network_mode: "host"
ports:
- "64297:64297"
- "127.0.0.1:64304:64304"
# ports:
# - "64297:64297"
# - "127.0.0.1:64304:64304"
image: "dtagdevsec/nginx:2204"
read_only: true
volumes:

View File

@ -1,6 +1,6 @@
# In case of problems Alpine 3.13 needs to be used:
# https://wiki.alpinelinux.org/wiki/Release_Notes_for_Alpine_3.14.0#faccessat2
FROM alpine:3.15
FROM alpine:3.17
#
# Add source
COPY . /opt/p0f

View File

@ -1,4 +1,4 @@
FROM alpine:3.15
FROM alpine:3.17
#
# Include dist
COPY dist/ /root/dist/

View File

@ -1,59 +1,14 @@
FROM alpine:3.15 as builder
#
RUN apk -U add --no-cache \
autoconf \
automake \
autoconf-archive \
build-base \
curl-dev \
cmocka-dev \
git \
jansson-dev \
libmicrohttpd-dev \
pcre2-dev \
sqlite-dev \
util-linux-dev
#
RUN apk -U add --no-cache -X http://dl-cdn.alpinelinux.org/alpine/edge/testing \
libosip2-dev
#
# Download SentryPeer sources and build
RUN git clone https://github.com/SentryPeer/SentryPeer -b v1.2.0
#
WORKDIR /SentryPeer
#
RUN ./bootstrap.sh
RUN ./configure --disable-opendht --disable-zyre
RUN make
RUN make check
RUN make install
#RUN tar cvfz sp.tgz /SentryPeer/* && \
# mv sp.tgz /
#
FROM alpine:3.15
#
#COPY --from=builder /sp.tgz /root
COPY --from=builder /SentryPeer/sentrypeer /opt/sentrypeer/
FROM alpine:edge
#
# Install packages
RUN apk -U add --no-cache \
jansson \
libmicrohttpd \
libuuid \
pcre2 \
sqlite-libs && \
apk -U add --no-cache -X http://dl-cdn.alpinelinux.org/alpine/edge/testing \
libosip2 && \
#
# Extract from builder
# mkdir /opt/sentrypeer && \
# tar xvfz /root/sp.tgz --strip-components=1 -C /opt/sentrypeer/ && \
RUN apk -U add --no-cache -X http://dl-cdn.alpinelinux.org/alpine/edge/testing \
sentrypeer && \
#
# Setup user, groups and configs
mkdir -p /var/log/sentrypeer && \
addgroup -g 2000 sentrypeer && \
adduser -S -H -s /bin/ash -u 2000 -D -g 2000 sentrypeer && \
chown -R sentrypeer:sentrypeer /opt/sentrypeer && \
chown -R sentrypeer:sentrypeer /usr/bin/sentrypeer && \
#
# Clean up
rm -rf /root/* && \
@ -62,5 +17,4 @@ RUN apk -U add --no-cache \
# Set workdir and start sentrypeer
STOPSIGNAL SIGKILL
USER sentrypeer:sentrypeer
WORKDIR /opt/sentrypeer/
CMD ./sentrypeer -jar -f /var/log/sentrypeer/sentrypeer.db -l /var/log/sentrypeer/sentrypeer.json
CMD /usr/bin/sentrypeer -jar -f /var/log/sentrypeer/sentrypeer.db -l /var/log/sentrypeer/sentrypeer.json

View File

@ -1,66 +1,39 @@
FROM alpine:3.15 as builder
FROM alpine:3.16 as builder
#
RUN apk -U add --no-cache \
argon2-dev \
autoconf \
automake \
autoconf-archive \
build-base \
curl-dev \
cmocka-dev \
czmq-dev \
git \
jansson-dev \
libtool \
libmicrohttpd-dev \
pcre2-dev \
readline-dev \
sqlite-dev \
util-linux-dev \
zeromq-dev
util-linux-dev
#
RUN apk -U add --no-cache -X http://dl-cdn.alpinelinux.org/alpine/edge/testing \
libosip2-dev
RUN apk -U add --no-cache -X http://dl-cdn.alpinelinux.org/alpine/edge/community \
asio-dev \
msgpack-c-dev \
msgpack-cxx-dev
#
# Download and build Zyre
WORKDIR /tmp
RUN git clone https://github.com/savoirfairelinux/opendht dht
WORKDIR /tmp/dht
RUN ./autogen.sh
RUN ./configure
RUN make
RUN make install
RUN ldconfig /etc/ld.so.conf.d
#
WORKDIR /tmp
RUN git clone --quiet https://github.com/zeromq/zyre zyre
WORKDIR /tmp/zyre
RUN ./autogen.sh 2> /dev/null
RUN ./configure --quiet --without-docs
RUN make
RUN make install
RUN ldconfig /etc/ld.so.conf.d
libosip2-dev \
opendht-dev
#
# Download SentryPeer sources and build
WORKDIR /
RUN git clone https://github.com/SentryPeer/SentryPeer.git
RUN git clone https://github.com/SentryPeer/SentryPeer -b v1.4.1
#
WORKDIR /SentryPeer
#
RUN cp -R /tmp/dht/* .
RUN sed -i '/AM_LDFLAGS=/d' Makefile.am
RUN ./bootstrap.sh
#RUN ./configure --disable-opendht --disable-zyre
RUN ./configure
RUN make CPPFLAGS=-D_POSIX_C_SOURCE=199309L
RUN make
RUN make check
RUN make install
RUN tar cvfz sp.tgz /SentryPeer/* && \
mv sp.tgz /
#RUN tar cvfz sp.tgz /SentryPeer/* && \
# mv sp.tgz /
#
FROM alpine:3.15
FROM alpine:3.16
#
#COPY --from=builder /sp.tgz /root
COPY --from=builder /SentryPeer/sentrypeer /opt/sentrypeer/
@ -73,7 +46,8 @@ RUN apk -U add --no-cache \
pcre2 \
sqlite-libs && \
apk -U add --no-cache -X http://dl-cdn.alpinelinux.org/alpine/edge/testing \
libosip2 && \
libosip2 \
opendht-libs && \
#
# Extract from builder
# mkdir /opt/sentrypeer && \
@ -93,4 +67,4 @@ RUN apk -U add --no-cache \
STOPSIGNAL SIGKILL
USER sentrypeer:sentrypeer
WORKDIR /opt/sentrypeer/
CMD ./sentrypeer -draws
CMD ./sentrypeer -warpj -f /var/log/sentrypeer/sentrypeer.db -l /var/log/sentrypeer/sentrypeer.json

View File

@ -1,95 +0,0 @@
FROM debian:bullseye as builder
ENV DEBIAN_FRONTEND noninteractive
#
RUN apt-get update
RUN apt-get dist-upgrade -y \
autoconf \
automake \
autoconf-archive \
build-essential \
git \
libcmocka-dev \
libcurl4-gnutls-dev \
libczmq-dev \
libjansson-dev \
libmicrohttpd-dev \
libopendht-dev \
libosip2-dev \
libpcre2-dev \
libsqlite3-dev \
libtool
#
# Download and build OpenDHT
WORKDIR /tmp
RUN git clone https://github.com/savoirfairelinux/opendht opendht
WORKDIR /tmp/opendht
RUN ./autogen.sh
RUN ./configure
RUN make
RUN make install
RUN ldconfig
#
# Download and build Zyre
WORKDIR /tmp
RUN git clone https://github.com/zeromq/zyre -b v2.0.1 zyre
WORKDIR /tmp/zyre
RUN ./autogen.sh
RUN ./configure --without-docs
RUN make
RUN make install
RUN ldconfig
#
# Download and build SentryPeer
WORKDIR /
RUN git clone https://github.com/SentryPeer/SentryPeer -b v1.0.0
#
WORKDIR /SentryPeer
#
RUN cp -r /tmp/opendht .
RUN ./bootstrap.sh
RUN ./configure
RUN make
RUN make check
RUN make install
#RUN tar cvfz sp.tgz /SentryPeer/* && \
# mv sp.tgz /
#RUN exit 1
#
FROM debian:bullseye
#
#COPY --from=builder /sp.tgz /root
COPY --from=builder /SentryPeer/sentrypeer /opt/sentrypeer/
#
# Install packages
RUN apt-get update && \
apt-get dist-upgrade -y && \
apt-get install -y \
libcmocka0 \
libcurl4 \
libczmq4 \
libjansson4 \
libmicrohttpd12 \
libosip2-11 \
libsqlite3-0 \
pcre2-utils && \
#
# Extract from builder
# mkdir /opt/sentrypeer && \
# tar xvfz /root/sp.tgz --strip-components=1 -C /opt/sentrypeer/ && \
#
# Setup user, groups and configs
mkdir -p /var/log/sentrypeer && \
addgroup --gid 2000 sentrypeer && \
adduser --system --no-create-home --shell /bin/bash --uid 2000 --disabled-password --disabled-login --gid 2000 sentrypeer && \
chown -R sentrypeer:sentrypeer /opt/sentrypeer && \
#
# Clean up
rm -rf /root/* && \
apt-get autoremove -y --purge && \
apt-get clean && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
#
# Set workdir and start sentrypeer
STOPSIGNAL SIGKILL
USER sentrypeer:sentrypeer
WORKDIR /opt/sentrypeer/
CMD ./sentrypeer -draws

View File

@ -12,11 +12,18 @@ services:
restart: always
# cpu_count: 1
# cpus: 0.25
environment:
# - SENTRYPEER_WEB_GUI=0
# - SENTRYPEER_PEER_TO_PEER=0
# - SENTRYPEER_BOOTSTRAP_NODE=bootstrap.sentrypeer.org
- SENTRYPEER_VERBOSE=1
- SENTRYPEER_DEBUG=1
networks:
- sentrypeer_local
ports:
- "4222:4222/udp"
- "5060:5060/udp"
# - "127.0.0.1:8082:8082"
- "127.0.0.1:8082:8082"
image: "dtagdevsec/sentrypeer:2204"
read_only: true
volumes:
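With SENTRYPEER_VERBOSE and SENTRYPEER_DEBUG set as above, detected SIP probes should show up immediately; an illustrative way to watch them, assuming the container is named sentrypeer as elsewhere in this diff:
docker logs -f sentrypeer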

View File

@ -1,4 +1,4 @@
FROM alpine:3.15
FROM alpine:3.17
#
# Include dist
COPY dist/ /root/dist/
@ -46,6 +46,7 @@ RUN apk -U --no-cache add \
py3-tempora \
py3-wheel \
py3-xlsxwriter \
py3-yaml \
swig \
tinyxml \
tinyxml-dev \
@ -56,7 +57,7 @@ RUN apk -U --no-cache add \
adduser -S -s /bin/ash -u 2000 -D -g 2000 spiderfoot && \
#
# Install spiderfoot
git clone --depth=1 -b v3.5 https://github.com/smicallef/spiderfoot /home/spiderfoot && \
git clone --depth=1 -b v4.0 https://github.com/smicallef/spiderfoot /home/spiderfoot && \
cd /home/spiderfoot && \
pip3 install --upgrade pip && \
cp /root/dist/requirements.txt . && \
@ -69,7 +70,7 @@ RUN apk -U --no-cache add \
# Clean up
apk del --purge build-base \
gcc \
git \
git \
libffi-dev \
libxml2-dev \
libxslt-dev \
@ -86,4 +87,4 @@ HEALTHCHECK --retries=10 CMD curl -s -XGET 'http://127.0.0.1:8080/spiderfoot/'
# Set user, workdir and start spiderfoot
USER spiderfoot:spiderfoot
WORKDIR /home/spiderfoot
CMD echo -n >> /home/spiderfoot/.spiderfoot/spiderfoot.db && exec /usr/bin/python3.9 sf.py -l 0.0.0.0:8080
CMD echo -n >> /home/spiderfoot/.spiderfoot/spiderfoot.db && exec /usr/bin/python3 sf.py -l 0.0.0.0:8080

View File

@ -1,4 +1,4 @@
FROM alpine:3.15
FROM alpine:3.17
#
# Include dist
COPY dist/ /root/dist/

View File

@ -1,4 +1,4 @@
FROM alpine:3.15
FROM alpine:3.17
#
# Include dist
COPY dist/ /root/dist/

View File

@ -1,2 +1,2 @@
aiohttp_jinja2==1.1.0
aiohttp_jinja2==1.5.0
cssutils==1.0.2

View File

@ -1,4 +1,4 @@
FROM alpine:3.15
FROM alpine:3.17
#
# Include dist
COPY dist/ /root/dist/
@ -31,12 +31,13 @@ RUN apk -U --no-cache add \
python3-dev && \
#
# Setup Tanner
git clone https://github.com/mushorg/tanner /opt/tanner && \
# git clone https://github.com/mushorg/tanner /opt/tanner && \
git clone https://github.com/t3chn0m4g3/tanner /opt/tanner && \
cd /opt/tanner/ && \
# git fetch origin pull/364/head:test && \
# git checkout test && \
# git checkout 20dabcbccc50f8878525677b925a4c9abcaf9f54 && \
git checkout 2fdce2e2ad7e125012c7e6dcbfa02b50f73c128e && \
# git checkout 2fdce2e2ad7e125012c7e6dcbfa02b50f73c128e && \
# sed -i 's/aioredis/aioredis==1.3.1/g' requirements.txt && \
# sed -i 's/^aiohttp$/aiohttp==3.7.4/g' requirements.txt && \
cp /root/dist/config.yaml /opt/tanner/tanner/data && \
@ -68,7 +69,7 @@ RUN apk -U --no-cache add \
git \
libcap \
libffi-dev \
libressl-dev \
# libressl-dev \
linux-headers \
python3-dev && \
rm -rf /root/* && \

View File

@ -1,4 +1,4 @@
FROM alpine:3.15
FROM alpine:3.17
#
# Include dist
COPY dist/ /root/dist/
@ -27,7 +27,7 @@ RUN apk -U --no-cache add \
# cp /root/dist/views.py /opt/wordpot2/wordpot/views.py && \
cp /root/dist/requirements.txt . && \
pip3 install -r requirements.txt && \
setcap cap_net_bind_service=+ep /usr/bin/python3.9 && \
setcap cap_net_bind_service=+ep /usr/bin/python3.10 && \
#
# Setup user, groups and configs
addgroup -g 2000 wordpot && \

View File

@ -237,9 +237,9 @@ services:
- /run
- /var/lib/nginx/tmp:uid=100,gid=82
network_mode: "host"
ports:
- "64297:64297"
- "127.0.0.1:64304:64304"
# ports:
# - "64297:64297"
# - "127.0.0.1:64304:64304"
image: "dtagdevsec/nginx:2204"
read_only: true
volumes:

View File

@ -118,9 +118,9 @@ services:
- /run
- /var/lib/nginx/tmp:uid=100,gid=82
network_mode: "host"
ports:
- "64297:64297"
- "127.0.0.1:64304:64304"
# ports:
# - "64297:64297"
# - "127.0.0.1:64304:64304"
image: "dtagdevsec/nginx:2204"
read_only: true
volumes:

View File

@ -374,10 +374,17 @@ services:
sentrypeer:
container_name: sentrypeer
restart: always
# SentryPeer offers to exchange bad actor data via DHT / P2P mode by setting the ENV to true (1)
# In some cases (e.g. internally deployed T-Pots) this might be confusing, as SentryPeer will show
# the bad actors in its logs. Therefore this option is opt-in.
# environment:
# - SENTRYPEER_PEER_TO_PEER=0
networks:
- sentrypeer_local
ports:
# - "4222:4222/udp"
- "5060:5060/udp"
# - "127.0.0.1:8082:8082"
image: "dtagdevsec/sentrypeer:2204"
read_only: true
volumes:

View File

@ -408,9 +408,9 @@ services:
- /run
- /var/lib/nginx/tmp:uid=100,gid=82
network_mode: "host"
ports:
- "64297:64297"
- "127.0.0.1:64304:64304"
# ports:
# - "64297:64297"
# - "127.0.0.1:64304:64304"
image: "dtagdevsec/nginx:2204"
read_only: true
volumes:

View File

@ -227,9 +227,9 @@ services:
- /run
- /var/lib/nginx/tmp:uid=100,gid=82
network_mode: "host"
ports:
- "64297:64297"
- "127.0.0.1:64304:64304"
# ports:
# - "64297:64297"
# - "127.0.0.1:64304:64304"
image: "dtagdevsec/nginx:2204"
read_only: true
volumes:

View File

@ -221,9 +221,9 @@ services:
- /run
- /var/lib/nginx/tmp:uid=100,gid=82
network_mode: "host"
ports:
- "64297:64297"
- "127.0.0.1:64304:64304"
# ports:
# - "64297:64297"
# - "127.0.0.1:64304:64304"
image: "dtagdevsec/nginx:2204"
read_only: true
volumes:

View File

@ -31,9 +31,9 @@ services:
- "53:53/udp"
- "80:80"
- "110:110"
- "123:123/udp"
- "123:123"
- "143:143"
- "161:161/udp"
- "161:161"
- "389:389"
- "443:443"
- "445:445"
@ -41,9 +41,11 @@ services:
- "1433:1433"
- "1521:1521"
- "3306:3306"
- "5060:5060"
- "5432:5432"
- "5900:5900"
- "6379:6379"
- "6667:6667"
- "8080:8080"
- "9200:9200"
- "11211:11211"
@ -246,9 +248,9 @@ services:
- /run
- /var/lib/nginx/tmp:uid=100,gid=82
network_mode: "host"
ports:
- "64297:64297"
- "127.0.0.1:64304:64304"
# ports:
# - "64297:64297"
# - "127.0.0.1:64304:64304"
image: "dtagdevsec/nginx:2204"
read_only: true
volumes:

View File

@ -552,9 +552,9 @@ services:
- /run
- /var/lib/nginx/tmp:uid=100,gid=82
network_mode: "host"
ports:
- "64297:64297"
- "127.0.0.1:64304:64304"
# ports:
# - "64297:64297"
# - "127.0.0.1:64304:64304"
image: "dtagdevsec/nginx:2204"
read_only: true
volumes:

View File

@ -374,10 +374,17 @@ services:
sentrypeer:
container_name: sentrypeer
restart: always
# SentryPeer offers to exchange bad actor data via DHT / P2P mode by setting the ENV to true (1)
# In some cases (e.g. internally deployed T-Pots) this might be confusing, as SentryPeer will show
# the bad actors in its logs. Therefore this option is opt-in.
# environment:
# - SENTRYPEER_PEER_TO_PEER=0
networks:
- sentrypeer_local
ports:
# - "4222:4222/udp"
- "5060:5060/udp"
# - "127.0.0.1:8082:8082"
image: "dtagdevsec/sentrypeer:2204"
read_only: true
volumes:

View File

@ -374,10 +374,17 @@ services:
sentrypeer:
container_name: sentrypeer
restart: always
# SentryPeer offers to exchange bad actor data via DHT / P2P mode by setting the ENV to true (1)
# In some cases (e.g. internally deployed T-Pots) this might be confusing, as SentryPeer will show
# the bad actors in its logs. Therefore this option is opt-in.
# environment:
# - SENTRYPEER_PEER_TO_PEER=0
networks:
- sentrypeer_local
ports:
# - "4222:4222/udp"
- "5060:5060/udp"
# - "127.0.0.1:8082:8082"
image: "dtagdevsec/sentrypeer:2204"
read_only: true
volumes:
@ -632,9 +639,9 @@ services:
- /run
- /var/lib/nginx/tmp:uid=100,gid=82
network_mode: "host"
ports:
- "64297:64297"
- "127.0.0.1:64304:64304"
# ports:
# - "64297:64297"
# - "127.0.0.1:64304:64304"
image: "dtagdevsec/nginx:2204"
read_only: true
volumes:

View File

@ -264,9 +264,9 @@ services:
- /run
- /var/lib/nginx/tmp:uid=100,gid=82
network_mode: "host"
ports:
- "64297:64297"
- "127.0.0.1:64304:64304"
# ports:
# - "64297:64297"
# - "127.0.0.1:64304:64304"
image: "dtagdevsec/nginx:2204"
read_only: true
volumes:

Two binary files changed (not shown). Some files were omitted from this view because too many files changed in this diff.