1 Commit

Author SHA1 Message Date
9e880d14ed test backport fix for hostname 2017-06-29 08:04:49 +00:00
64 changed files with 1216 additions and 2296 deletions

View File

@ -23,7 +23,7 @@ Thank you :smiley:
<a name="info"></a>
### Basic support information
### Baisc support information
- What T-Pot version are you currently using?
- Are you running on an Intel NUC or a VM?

299
README.md
View File

@ -1,16 +1,16 @@
# T-Pot 17.10 (Alpha)
# T-Pot 16.10 Image Creator
This repository contains the necessary files to create the **[T-Pot](http://dtag-dev-sec.github.io/)** ISO image.
This repository contains the necessary files to create the **[T-Pot community honeypot](http://dtag-dev-sec.github.io/)** ISO image.
The image can then be used to install T-Pot on a physical or virtual machine.
In October 2016 we released
[T-Pot 16.10](http://dtag-dev-sec.github.io/mediator/feature/2016/10/31/t-pot-16.10.html)
In March 2016 we released
[T-Pot 16.03](http://dtag-dev-sec.github.io/mediator/feature/2016/03/11/t-pot-16.03.html)
# T-Pot 17.10 (Alpha - be careful there may be dragons!)
# T-Pot 16.10
T-Pot 17.10 uses the latest Ubuntu Server 16.04 LTS network installer image, is based on
T-Pot 16.10 now uses Ubuntu Server 16.04 LTS and is based on
[docker](https://www.docker.com/), [docker-compose](https://docs.docker.com/compose/)
[docker](https://www.docker.com/)
and includes dockerized versions of the following honeypots
@ -19,12 +19,8 @@ and includes dockerized versions of the following honeypots
* [dionaea](https://github.com/DinoTools/dionaea),
* [elasticpot](https://github.com/schmalle/ElasticPot),
* [emobility](https://github.com/dtag-dev-sec/emobility),
* [glastopf](http://glastopf.org/),
* [honeytrap](https://github.com/armedpot/honeytrap/),
* [mailoney](https://github.com/awhitehatter/mailoney),
* [rdpy](https://github.com/citronneur/rdpy) and
* [vnclowpot](https://github.com/magisterquis/vnclowpot)
* [glastopf](http://glastopf.org/) and
* [honeytrap](https://github.com/armedpot/honeytrap/)
Furthermore we use the following tools
@ -32,7 +28,6 @@ Furthermore we use the following tools
* [Elasticsearch Head](https://mobz.github.io/elasticsearch-head/) a web front end for browsing and interacting with an Elastic Search cluster.
* [Netdata](http://my-netdata.io/) for real-time performance monitoring.
* [Portainer](http://portainer.io/) a web based UI for docker.
* [Spiderfoot](https://github.com/smicallef/spiderfoot) an open source intelligence automation tool.
* [Suricata](http://suricata-ids.org/) a Network Security Monitoring engine.
* [Wetty](https://github.com/krishnasrinivas/wetty) a web based SSH client.
@ -40,7 +35,7 @@ Furthermore we use the following tools
# TL;DR
1. Meet the [system requirements](#requirements). The T-Pot installation needs at least 4 GB RAM and 64 GB free disk space as well as a working internet connection.
2. Download the T-Pot ISO from [GitHub](https://github.com/dtag-dev-sec/tpotce/releases) or [create it yourself](#createiso).
2. Download the [tpot.iso](http://community-honeypot.de/tpot.iso) or [create it yourself](#createiso).
3. Install the system in a [VM](#vm) or on [physical hardware](#hw) with [internet access](#placement).
4. Enjoy your favorite beverage - [watch](http://sicherheitstacho.eu/?peers=communityPeers) and [analyze](#kibana).
@ -77,57 +72,58 @@ Seeing is believing :bowtie:
<a name="background"></a>
# Changelog
- **Size still matters** 😅
- All docker images have been rebuilt as micro containers based on Alpine Linux to further reduce the image size, leading to compressed image sizes below the 50 MB mark. The uncompressed size of eMobility and the ELK stack could each be reduced by a whopping 600 MB!
- An "Everything" installation now has a download size of roughly 1.6 GB
- **docker-compose**
- T-Pot containers are now controlled and monitored through docker-compose and a single configuration file `/etc/tpot/tpot.yml`, allowing for greater flexibility and easier image management (e.g. updated images).
- As a benefit, only a single `systemd` script `/etc/systemd/system/tpot.service` is needed to start (`systemctl start tpot`) and stop (`systemctl stop tpot`) the T-Pot services.
- There are four pre-configured compose configurations in `/etc/tpot/compose` which reflect the T-Pot editions. Simply stop the T-Pot services, copy the desired configuration, e.g. `cp /etc/tpot/compose/all.yml /etc/tpot/tpot.yml`, and restart the T-Pot services; the selected edition will be running once the required docker images have been downloaded (see the sketch after this changelog).
- **Introducing** [Spiderfoot](https://github.com/smicallef/spiderfoot) an open source intelligence automation tool.
- **Installation** procedure simplified
- Within the Ubuntu Installer you only have to choose language settings
- After the first reboot the T-Pot installer checks if internet and required services are reachable before the installation procedure begins
- T-Pot Installer now uses a “dialog” which looks way better than the old text based installer
- `tsec` user & password dialog is now part of the T-Pot Installer
- The self-signed certificate is now created automatically to reduce unnecessary overhead for novice users
- New ASCII logo and login screen pointing to web and ssh logins
- Hostnames are now generated using an offline name generator, which still produces funny and collision free hostnames
- **CVE IDs for Suricata**
- Our very own [Listbot](https://github.com/dtag-dev-sec/listbot) builds translation maps for Logstash. If Logstash registers a match, the event's CVE ID will be stored alongside the event within Elasticsearch.
- **IP Reputations**
- [Listbot](https://github.com/dtag-dev-sec/listbot) also builds translation maps for blacklisted IPs
- Based upon 30+ publicly available IP blacklisting sources, Listbot creates a Logstash translation map matching an event's source IP address against the IP's reputation
- If the source IP is known to a blacklist service a corresponding tag will be stored with the event
- Updates occur on every logstash container start; by default every 24h
- **Ubuntu 16.04 LTS** is now being used as T-Pot's OS base
- **Size does matter** 😅
- `tpot.iso` is now based on **Ubuntu's** network installer reducing the image download size by 600MB from 650MB to only **50MB**
- All docker images have been rebuilt to reduce the image size at least by 50MB in some cases even 400-600MB
- A "Everything" installation takes roughly 2GB less download size (counting from initial image download)
- **Introducing** new tools making things a lot easier for new users
- [Elasticsearch Head](https://mobz.github.io/elasticsearch-head/) a web front end for browsing and interacting with an Elastic Search cluster.
- [Netdata](http://my-netdata.io/) for real-time performance monitoring.
- [Portainer](http://portainer.io/) a web based UI for docker.
- [Wetty](https://github.com/krishnasrinivas/wetty) a web based SSH client.
- **NGINX** implemented as HTTPS reverse proxy
- Access Kibana, ES Head plugin, UI-for-Docker, WebSSH and Netdata via browser!
- Two factor based SSH tunnel is no longer needed!
- **Installation** procedure improved
- Set your own password for the *tsec* user
- Choose your installation type without the need of building your own image
- Setup a remote user / password for secure web access including a self-signed-certificate
- Easy to remember hostnames
- **First login** easy and secure
- Access from console, ssh or web
- No two-factor-authentication needed for ssh when logging in from RFC1918 networks
- Enforcing public-key authentication for ssh connections other than RFC1918 networks
- **Systemd** now supersedes *upstart* as init system. All upstart scripts were ported to systemd along with the following improvements:
- Improved start / stop handling of containers
- Set persistence individually per container startup scripts (`/etc/systemd/system`)
- Set persistence globally (`/usr/bin/clean.sh`)
- **Honeypot updates and improvements**
- All honeypots were updated to their latest & stable versions.
- **New Honeypots** were added ...
* [mailoney](https://github.com/awhitehatter/mailoney)
- A low interaction SMTP honeypot
* [rdpy](https://github.com/citronneur/rdpy)
- A low interaction RDP honeypot
* [vnclowpot](https://github.com/magisterquis/vnclowpot)
- A low interaction VNC honeypot
- **Persistence** is now enabled by default and will keep honeypot logs and tool data in `/data/` and its sub-folders for 30 days. You may change that behavior in `/etc/tpot/logrotate/logrotate.conf`. ELK data, however, will be kept for 90 days by default. You may change that behavior in `/etc/tpot/curator/actions.yml`. Scripts will be triggered through `/etc/crontab`.
- **Conpot** now supports **JSON logging**, with many thanks for making this feature request possible going to:
- [Andrea Pasquale](https://github.com/adepasquale),
- [Danilo Massa](https://github.com/danilo-massa) &
- [Johnny Vestergaard](https://github.com/johnnykv)
- **Cowrie** now supports **telnet**, which is highly appreciated; thank you
- [Michel Oosterhof](https://github.com/micheloosterhof)
- **Dionaea** now supports **JSON logging**, with many thanks for making this feature request possible going to:
- [PhiBo](https://github.com/phibos)
- **Elasticpot** now supports **logging all queries and requests**, with many thanks for making this feature request possible going to:
- [Markus Schmall](https://github.com/schmalle)
- **Honeytrap** now supports **JSON logging**, with many thanks for making this feature request possible going to:
- [Andrea Pasquale](https://github.com/adepasquale)
- **Updates**
- **Docker** was updated to the latest **1.12.6** release within Ubuntu 16.04.x LTS
- **ELK** was updated to the latest **Kibana 5.6.1**, **Elasticsearch 5.6.1** and **Logstash 5.6.1** releases.
- **Suricata** was updated to the latest **4.0.0** version including the latest **Emerging Threats** community ruleset.
- **Dashboards Makeover**
- We now have **160+ Visualizations** pre-configured and compiled to 14 individual **Kibana Dashboards** for every honeypot. Monitor all *honeypot events* locally on your T-Pot installation. Aside from *honeypot events* you can also view *Suricata NSM, Syslog and NGINX* events for a quick overview of local host events.
- View available IP reputation of any source IP address
- View available CVE ID for events
- More **Smart links** are now included.
- **Docker** was updated to the latest **1.12.2** release
- **ELK** was updated to the latest **Kibana 4.6.2**, **Elasticsearch 2.4.1** and **Logstash 2.4.0** releases.
- **Suricata** was updated to the latest **3.1.2** version including the latest **Emerging Threats** community ruleset.
- We now have **150 Visualizations** pre-configured and compiled to 14 individual **Kibana Dashboards** for every honeypot. Monitor all *honeypot events* locally on your T-Pot installation. Aside from *honeypot events* you can also view *Suricata NSM, Syslog and NGINX* events for a quick overview of local host events.
- More **Smart links** are now included.
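As a quick illustration of the edition switch mentioned in the docker-compose changelog entry above (a minimal sketch; only `all.yml` is explicitly named in this document, the other edition file names under `/etc/tpot/compose` are not spelled out here):
```
# Sketch: switch the running T-Pot installation to the "Everything" edition
systemctl stop tpot                              # stop all T-Pot services
cp /etc/tpot/compose/all.yml /etc/tpot/tpot.yml  # select the desired compose configuration
systemctl start tpot                             # restart; missing docker images are pulled automatically
```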
<a name="concept"></a>
# Technical Concept
T-Pot is based on the network installer of Ubuntu Server 16.04.x LTS.
The honeypot daemons as well as the other support components in use have been containerized using [docker](http://docker.io).
This allows us to run multiple honeypot daemons on the same network interface while maintaining a small footprint and constraining each honeypot within its own environment.
T-Pot is based on the network installer of Ubuntu Server 16.04 LTS.
The honeypot daemons as well as other support components being used have been paravirtualized using [docker](http://docker.io).
This allows us to run multiple honeypot daemons on the same network interface without problems and thus making the entire system very low maintenance. <br>The encapsulation of the honeypot daemons in docker provides a good isolation of the runtime environments and easy update mechanisms.
In T-Pot we combine the dockerized honeypots
[conpot](http://conpot.org/),
@ -135,34 +131,27 @@ In T-Pot we combine the dockerized honeypots
[dionaea](https://github.com/DinoTools/dionaea),
[elasticpot](https://github.com/schmalle/ElasticPot),
[emobility](https://github.com/dtag-dev-sec/emobility),
[glastopf](http://glastopf.org/),
[honeytrap](https://github.com/armedpot/honeytrap/),
[mailoney](https://github.com/awhitehatter/mailoney),
[rdpy](https://github.com/citronneur/rdpy) and
[vnclowpot](https://github.com/magisterquis/vnclowpot) with
[ELK stack](https://www.elastic.co/videos) to beautifully visualize all the events captured by T-Pot,
[Elasticsearch Head](https://mobz.github.io/elasticsearch-head/) a web front end for browsing and interacting with an Elastic Search cluster,
[Netdata](http://my-netdata.io/) for real-time performance monitoring,
[Portainer](http://portainer.io/) a web based UI for docker,
[Spiderfoot](https://github.com/smicallef/spiderfoot) an open source intelligence automation tool,
[Suricata](http://suricata-ids.org/) a Network Security Monitoring engine and
[Wetty](https://github.com/krishnasrinivas/wetty) a web based SSH client.
[glastopf](http://glastopf.org/) and
[honeytrap](https://github.com/armedpot/honeytrap/) with
[suricata](http://suricata-ids.org/) a Network Security Monitoring engine and the
[ELK stack](https://www.elastic.co/videos) to beautifully visualize all the events captured by T-Pot. Events will be correlated by our own data submission tool [ewsposter](https://github.com/dtag-dev-sec/ews) which also supports Honeynet project hpfeeds honeypot data sharing.
![Architecture](https://raw.githubusercontent.com/dtag-dev-sec/tpotce/17.06/doc/architecture.png)
![Architecture](https://raw.githubusercontent.com/dtag-dev-sec/tpotce/master/doc/architecture.png)
While data within docker containers is volatile, we now ensure a default 30 day persistence of all relevant honeypot and tool data in the well known `/data` folder and its sub-folders. The persistence configuration may be adjusted in `/etc/tpot/logrotate/logrotate.conf`. Once a docker container crashes, all other data produced within its environment is erased and a fresh instance is started from the corresponding docker image.<br>
All data in docker is volatile. Once a docker container crashes, all data produced within its environment is gone and a fresh instance is restarted. Hence, for some data that needs to be persistent, i.e. config files, we have a persistent storage **`/data/`** on the host in order to make it available and persistent across container or system restarts.<br>
Important log data is now also stored outside the containers in `/data/<container-name>`, allowing easy access to the logs from within the host. The **systemd** scripts have been adjusted to support storing data on the host either volatile (*default*) or persistent (adjust individual systemd scripts in `/etc/systemd/system` or use a global setting in `/usr/bin/clean.sh`).
Basically, what happens when the system is booted up is the following:
- start host system
- start all the necessary services (i.e. docker-engine, reverse proxy, etc.)
- start all docker containers via docker-compose (honeypots, nms, elk)
- start all docker containers (honeypots, nms, elk)
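A quick way to verify that this boot sequence completed might look like the following (a sketch; both commands are standard and assume a shell on the T-Pot host):
```
systemctl status tpot   # the single tpot service should be reported as active
docker ps               # all containers of the chosen edition should show up as "Up"
```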
Within the T-Pot project, we provide all the tools and documentation necessary to build your own honeypot system and contribute to our [community data view](http://sicherheitstacho.eu/?peers=communityPeers), a separate channel on our [Sicherheitstacho](http://sicherheitstacho.eu) that is powered by T-Pot community data.
The source code and configuration files are stored in individual GitHub repositories, which are linked below. The docker images are pre-configured for the T-Pot environment. If you want to run the docker images separately, make sure you study the docker-compose configuration (`/etc/tpot/tpot.yml`) and the T-Pot systemd script (`/etc/systemd/system/tpot.service`), as they provide a good starting point for implementing changes.
The source code and configuration files are stored in individual GitHub repositories, which are linked below. The docker images are tailored to be run in this environment. If you want to run the docker images separately, make sure you study the upstart scripts, as they provide an insight on how we configured them.
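If you just want to experiment with a single service from the compose file, something along these lines might work; this is a sketch, not a documented workflow, and it assumes a service named `cowrie` is defined in `/etc/tpot/tpot.yml`:
```
# Bring up only the cowrie service from the T-Pot compose file (service name is an assumption)
docker-compose -f /etc/tpot/tpot.yml up -d cowrie

# Follow its logs, then tear everything down again
docker-compose -f /etc/tpot/tpot.yml logs -f cowrie
docker-compose -f /etc/tpot/tpot.yml down
```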
The individual docker configurations are located in the following GitHub repositories:
The individual docker configurations etc. we used can be found here:
- [conpot](https://github.com/dtag-dev-sec/conpot)
- [cowrie](https://github.com/dtag-dev-sec/cowrie)
@ -170,65 +159,63 @@ The individual docker configurations are located in the following GitHub reposit
- [elasticpot](https://github.com/dtag-dev-sec/elasticpot)
- [elk-stack](https://github.com/dtag-dev-sec/elk)
- [emobility](https://github.com/dtag-dev-sec/emobility)
- [ewsposter](https://github.com/dtag-dev-sec/ews)
- [glastopf](https://github.com/dtag-dev-sec/glastopf)
- [honeytrap](https://github.com/dtag-dev-sec/honeytrap)
- [mailoney](https://github.com/dtag-dev-sec/mailoney)
- [netdata](https://github.com/dtag-dev-sec/netdata)
- [portainer](https://github.com/dtag-dev-sec/ui-for-docker)
- [rdpy](https://github.com/dtag-dev-sec/rdpy)
- [spiderfoot](https://github.com/dtag-dev-sec/spiderfoot)
- [suricata & p0f](https://github.com/dtag-dev-sec/suricata)
- [vnclowpot](https://github.com/dtag-dev-sec/vnclowpot)
- [suricata](https://github.com/dtag-dev-sec/suricata)
<a name="requirements"></a>
# System Requirements
Depending on your installation type, whether you install on [real hardware](#hardware) or in a [virtual machine](#vm), make sure your designated T-Pot system meets the following requirements:
##### T-Pot Installation (Cowrie, Dionaea, ElasticPot, Glastopf, Honeytrap, Mailoney, Rdpy, Vnclowpot, ELK, Suricata+P0f & Tools)
##### T-Pot Installation (Cowrie, Dionaea, ElasticPot, Glastopf, Honeytrap, ELK, Suricata+P0f & Tools)
When installing the T-Pot ISO image, make sure the target system (physical/virtual) meets the following minimum requirements:
- 4 GB RAM (6-8 GB recommended)
- 64 GB SSD (128 GB SSD recommended)
- 64 GB disk (128 GB SSD recommended)
- Network via DHCP
- A working, non-proxied, internet connection
- A working internet connection
##### Honeypot Installation (Cowrie, Dionaea, ElasticPot, Glastopf, Honeytrap, Mailoney, Rdpy, Vnclowpot)
##### Sensor Installation (Cowrie, Dionaea, ElasticPot, Glastopf, Honeytrap)
When installing the T-Pot ISO image, make sure the target system (physical/virtual) meets the following minimum requirements:
- 3 GB RAM (4-6 GB recommended)
- 64 GB SSD (64 GB SSD recommended)
- 64 GB disk (64 GB SSD recommended)
- Network via DHCP
- A working, non-proxied, internet connection
- A working internet connection
##### Industrial Installation (ConPot, eMobility, ELK, Suricata+P0f & Tools)
When installing the T-Pot ISO image, make sure the target system (physical/virtual) meets the following minimum requirements:
- 4 GB RAM (8 GB recommended)
- 64 GB SSD (128 GB SSD recommended)
- 64 GB disk (128 GB SSD recommended)
- Network via DHCP
- A working, non-proxied, internet connection
- A working internet connection
##### Everything Installation (Everything, all of the above)
When installing the T-Pot ISO image, make sure the target system (physical/virtual) meets the following minimum requirements:
- 8+ GB RAM
- 128+ GB SSD
- 8 GB RAM
- 128 GB disk or larger (128 GB SSD or larger recommended)
- Network via DHCP
- A working, non-proxied, internet connection
- A working internet connection
<a name="installation"></a>
# Installation
The installation of T-Pot is straightforward and heavily depends on a working, transparent and non-proxied internet connection. Otherwise the installation **will fail!**
The installation of T-Pot is straightforward. Please be advised that you should have an internet connection up and running, as all the docker images for the chosen installation type need to be pulled from docker hub.
Firstly, decide if you want to download our prebuilt installation ISO image from [GitHub](https://github.com/dtag-dev-sec/tpotce/releases) ***or*** [create it yourself](#createiso).
Firstly, decide if you want to download our prebuilt installation ISO image [tpot.iso](http://community-honeypot.de/tpot.iso) ***or*** [create it yourself](#createiso).
Secondly, decide where you want to let the system run: [real hardware](#hardware) or in a [virtual machine](#vm)?
<a name="prebuilt"></a>
## Prebuilt ISO Image
We provide an installation ISO image for download (~50MB), which is created using the same [tool](https://github.com/dtag-dev-sec/tpotce) you can use yourself in order to create your own image. It will basically just save you some time downloading components and creating the ISO image.
You can download the prebuilt installation image from [GitHub](https://github.com/dtag-dev-sec/tpotce/releases) and jump to the [installation](#vm) section.
You can download the prebuilt installation image [here](http://community-honeypot.de/tpot.iso) and jump to the [installation](#vm) section. The ISO image is hosted by our friends from [Strato](http://www.strato.de) / [Cronon](http://www.cronon.de).
sha256sum tpot.iso
df6b1db24d0dcc421125dc973fbb2d17aa91cd9ff94607dde9d1b09a92bcbaf0 tpot.iso
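A minimal download-and-verify sketch; the exact release asset URL is an assumption, so adjust it to the release you pick from the GitHub releases page:
```
# Download the ISO (placeholder URL) and verify it against the checksum above
wget https://github.com/dtag-dev-sec/tpotce/releases/download/<release>/tpot.iso
echo "df6b1db24d0dcc421125dc973fbb2d17aa91cd9ff94607dde9d1b09a92bcbaf0  tpot.iso" | sha256sum -c -
```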
<a name="createiso"></a>
## Create your own ISO Image
@ -243,15 +230,15 @@ For transparency reasons and to give you the ability to customize your install,
**How to create the ISO image:**
1. Clone the repository and enter it.
```
git clone https://github.com/dtag-dev-sec/tpotce
cd tpotce
```
git clone https://github.com/dtag-dev-sec/tpotce.git
cd tpotce
2. Invoke the script that builds the ISO image.
The script will download and install dependencies necessary to build the image on the invoking machine. It will further download the Ubuntu network installer image (~50MB) which T-Pot is based on.
```
sudo ./makeiso.sh
```
sudo ./makeiso.sh
After a successful build, you will find the ISO image `tpot.iso` along with a SHA256 checksum `tpot.sha256` in your directory.
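To double-check the build result, you can compare the image against the generated checksum file (assuming `tpot.sha256` uses the standard `sha256sum` output format):
```
sha256sum -c tpot.sha256   # should report: tpot.iso: OK
```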
<a name="vm"></a>
@ -262,9 +249,9 @@ We successfully tested T-Pot with [VirtualBox](https://www.virtualbox.org) and [
It is important to make sure you meet the [system requirements](#requirements) and assign a virtual harddisk >=64 GB, >=4 GB RAM and bridged networking to T-Pot.
You need to enable promiscuous mode for the network interface for suricata and p0f to work properly. Make sure you enable it during configuration.
You need to enable promiscuous mode for the network interface for suricata to work properly. Make sure you enable it during configuration.
If you want to use a wifi card as primary NIC for T-Pot, please be aware of the fact that not all network interface drivers support all wireless cards. E.g. in VirtualBox, you then have to choose the *"MT SERVER"* model of the NIC.
If you want to use a wifi card as primary NIC for T-Pot, please remind that not all network interface drivers support all wireless cards. E.g. in VirtualBox, you then have to choose the *"MT SERVER"* model of the NIC.
Lastly, mount the `tpot.iso` ISO to the VM and continue with the installation.<br>
@ -282,9 +269,9 @@ Whereas most CD burning tools allow you to burn from ISO images, the procedure t
<a name="firstrun"></a>
## First Run
The installation requires very little interaction; only a locale and keyboard setting have to be answered for the basic Linux installation. The system will then reboot; please maintain an active internet connection. The T-Pot installer will start and ask you for an installation type, a password for the **tsec** user and credentials for a **web user**. Everything else will be configured automatically. All docker images and other components will be downloaded. Depending on your network connection and the chosen installation type, the installation may take some time. During our tests (50Mbit down, 10Mbit up), the installation is usually finished within a 30 minute timeframe.
The installation requires very little interaction, only some locales and keyboard settings have to be answered. Everything else will be configured automatically. The system will reboot two times. Make sure it can access the internet as it needs to download the updates and the dockerized honeypot components. Depending on your network connection and the chosen installation type, the installation may take some time. During our tests (50Mbit down, 10Mbit up), the installation is usually finished within <=30 minutes.
Once the installation is finished, the system will automatically reboot and you will be presented with the T-Pot login screen. On the console you may login with the **tsec** user:
Once the installation is finished, the system will automatically reboot and you will be presented with the T-Pot login screen. The user credentials for the first login are:
- user: **tsec**
- pass: **password you chose during the installation**
@ -299,9 +286,21 @@ You can also login from your browser: ``https://<your.ip>:64297``
<a name="placement"></a>
# System Placement
Make sure your system is reachable through the internet. Otherwise it will not capture any attacks, other than the ones from your internal network! We recommend you put it in an unfiltered zone, where all TCP and UDP traffic is forwarded to T-Pot's network interface.
Make sure your system is reachable through the internet. Otherwise it will not capture any attacks, other than the ones from your hostile internal network! We recommend you put it in an unfiltered zone, where all TCP and UDP traffic is forwarded to T-Pot's network interface.
If you are behind a NAT gateway (e.g. home router), here is a list of ports that should be forwarded to T-Pot.
| Honeypot|Transport|Forwarded ports|
|---|---|---|
| conpot | TCP | 1025, 50100 |
| cowrie | TCP | 22, 23 |
| dionaea | TCP | 21, 42, 135, 443, 445, 1433, 1723, 1883, 1900, 3306, 5060, 5061, 8081, 11211 |
| dionaea | UDP | 69, 5060 |
| elasticpot | TCP | 9200 |
| emobility | TCP | 8080 |
| glastopf | TCP | 80 |
| honeytrap | TCP | 25, 110, 139, 3389, 4444, 4899, 5900, 21000 |
A list of all relevant ports is available as part of the [Technical Concept](#concept)
<br>
Basically, you can forward as many TCP ports as you want, as honeytrap dynamically binds any TCP port that is not covered by the other honeypot daemons.
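On a Linux-based NAT gateway the forwarding could be expressed as iptables DNAT rules, shown here for the glastopf and cowrie ports from the table above. This is only a sketch: the WAN interface name and the T-Pot address are placeholders, and most home routers expose port forwarding through their web UI instead:
```
# Assumptions: eth0 is the WAN interface, 192.168.1.10 is the T-Pot host
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j DNAT --to-destination 192.168.1.10:80
iptables -t nat -A PREROUTING -i eth0 -p tcp -m multiport --dports 22,23 -j DNAT --to-destination 192.168.1.10
```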
@ -309,7 +308,8 @@ Basically, you can forward as many TCP ports as you want, as honeytrap dynamical
In case you need external SSH access, forward TCP port 64295 to T-Pot, see below.
In case you need external web access, forward TCP port 64297 to T-Pot, see below.
T-Pot requires outgoing git, http, https connections for updates (Ubuntu, Docker, GitHub, PyPi) and attack submission (ewsposter, hpfeeds). Ports and availability may vary based on your geographical location.
T-Pot requires outgoing http and https connections for updates (ubuntu, docker) and attack submission (ewsposter, hpfeeds).
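With TCP port 64295 forwarded as mentioned above, an external SSH login could look like this (replace the address with your own; note that, per the changelog, logins from outside RFC1918 networks may require public-key authentication):
```
ssh -l tsec -p 64295 <your.ip>
```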
<a name="options"></a>
# Options
@ -326,9 +326,9 @@ If you do not have a SSH client at hand and still want to access the machine via
- user: **user you chose during the installation**
- pass: **password you chose during the installation**
and choose **WebTTY** from the navigation bar. You will be prompted to allow access for this connection and enter the password for the user **tsec**.
and choose **WebSSH** from the navigation bar. You will be prompted to allow access for this connection and enter the password for the user **tsec**.
![WebTTY](https://raw.githubusercontent.com/dtag-dev-sec/tpotce/17.06/doc/webssh.png)
![WebSSH](https://raw.githubusercontent.com/dtag-dev-sec/tpotce/master/doc/webssh.png)
<a name="kibana"></a>
## Kibana Dashboard
@ -337,60 +337,47 @@ Just open a web browser and access and connect to `https://<your.ip>:64297`, ent
- user: **user you chose during the installation**
- pass: **password you chose during the installation**
and **Kibana** will automagically load. The Kibana dashboard can be customized to fit your needs. By default, we haven't added any filtering, because the filters depend on your setup. E.g. you might want to filter out your incoming administrative ssh connections and connections to update servers.
and the **Kibana dashboard** will automagically load. The Kibana dashboard can be customized to fit your needs. By default, we haven't added any filtering, because the filters depend on your setup. E.g. you might want to filter out your incoming administrative ssh connections and connections to update servers.
![Dashboard](https://raw.githubusercontent.com/dtag-dev-sec/tpotce/17.06/doc/dashboard.png)
![Dashboard](https://raw.githubusercontent.com/dtag-dev-sec/tpotce/master/doc/dashboard.png)
<a name="tools"></a>
## Tools
We included some web based management tools to improve and ease your daily tasks.
![ES Head Plugin](https://raw.githubusercontent.com/dtag-dev-sec/tpotce/17.06/doc/headplugin.png)
![Netdata](https://raw.githubusercontent.com/dtag-dev-sec/tpotce/17.06/doc/netdata.png)
![Portainer](https://raw.githubusercontent.com/dtag-dev-sec/tpotce/17.06/doc/dockerui.png)
![Spiderfoot](https://raw.githubusercontent.com/dtag-dev-sec/tpotce/17.06/doc/spiderfoot.png)
![ES Head Plugin](https://raw.githubusercontent.com/dtag-dev-sec/tpotce/master/doc/headplugin.png)
![UI-For-Docker](https://raw.githubusercontent.com/dtag-dev-sec/tpotce/master/doc/dockerui.png)
![Netdata](https://raw.githubusercontent.com/dtag-dev-sec/tpotce/master/doc/netdata.png)
<a name="maintenance"></a>
## Maintenance
As mentioned before, the system was designed to be low maintenance. Basically, there is nothing you have to do but let it run.
As mentioned before, the system was designed to be low maintenance. Basically, there is nothing you have to do but let it run. If one of the dockerized daemons fails, it will be restarted. If this fails, the corresponding upstart job will be restarted.
If you run into any problems, a reboot may fix it :bowtie:
If you run into any problems, a reboot may fix it. ;)
If new versions of the components involved appear, we will test them and build new docker images. Those new docker images will be pushed to docker hub and downloaded to T-Pot and activated accordingly.
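Image updates are normally picked up automatically as described. If you ever want to trigger a pull manually, something like the following should work, although it is a sketch and not the documented update path:
```
systemctl stop tpot
docker-compose -f /etc/tpot/tpot.yml pull   # fetch updated images referenced in the compose file
systemctl start tpot
```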
<a name="submission"></a>
## Community Data Submission
We provide T-Pot in order to make it accessible to all parties interested in honeypot deployment. By default, the data captured is submitted to a community backend. This community backend uses the data to feed a [community data view](http://sicherheitstacho.eu/?peers=communityPeers), a separate channel on our own [Sicherheitstacho](http://sicherheitstacho.eu), which is powered by our own set of honeypots.
You may opt out of the submission to our community server by removing the `# Ewsposter service` section from `/etc/tpot/tpot.yml`:
1. Stop T-Pot services: `systemctl stop tpot`
2. Remove Ewsposter service: `vi /etc/tpot/tpot.yml`
3. Remove the following lines, save and exit vi (`:x!`):<br>
```
# Ewsposter service
ewsposter:
container_name: ewsposter
restart: always
networks:
- ewsposter_local
image: "dtagdevsec/ewsposter:1710"
volumes:
- /data:/data
- /data/ews/conf/ews.ip:/opt/ewsposter/ews.ip
```
4. Start T-Pot services: `systemctl start tpot`
You may opt out of the submission to our community server by disabling it in the `[EWS]` section of the config file `/data/ews/conf/ews.cfg`.
Further we support [hpfeeds](https://github.com/rep/hpfeeds). It is disabled by default since you need to supply a channel you want to post to and enter your user credentials. To enable hpfeeds, edit the config file `/data/ews/conf/ews.cfg`, section `[HPFEED]` and set it to true.
Data is submitted in a structured ews-format, an XML structure. Hence, you can parse out the information that is relevant to you.
We encourage you not to disable the data submission as it is the main purpose of the community approach - as you all know **sharing is caring** 😍
The *`/data/ews/conf/ews.cfg`* file contains many configuration parameters required for the system to run. You can, if you want, add an email address that will be included with your submissions in order to be able to identify your requests later. Furthermore you can add a proxy.
Please do not change anything other than those settings and only if you absolutely need to. Otherwise, the system may not work as expected.
<a name="roadmap"></a>
# Roadmap
As with every development there is always room for improvements ...
- Bump ELK-stack to 6.x
- Introduce new honeypots
- Include automatic updates
- Bump ELK-stack to 5.0
- Move from Glastopf to SNARE
- Documentation 😎
Some features may be provided with updated docker images, while others may require some hands-on work from your side.
@ -403,6 +390,10 @@ You are always invited to participate in development on our [GitHub](https://git
- You install and you run within your responsibility. Choose your deployment wisely as a system compromise can never be ruled out.
- Honeypots should - by design - not host any sensitive data. Make sure you don't add any.
- By default, your data is submitted to the community dashboard. You can disable this in the config. But hey, wouldn't it be better to contribute to the community?
- By default, hpfeeds submission is disabled. You can enable it in the config section for hpfeeds. This is due to the nature of hpfeeds. We do not want to spam any channel, so you can choose where to post your data and who to share it with.
- Malware submission is enabled by default but malware is currently not processed on the submission backend. This may be added later, but can also be disabled in the `ews.cfg` config file.
- The system restarts the docker containers every night to avoid clutter and reduce disk consumption. *All data in the container is then reset.* The data displayed in kibana is kept for <=90 days.
<a name="faq"></a>
# FAQ
@ -417,14 +408,12 @@ For general feedback you can write to cert @ telekom.de.
<a name="licenses"></a>
# Licenses
The software that T-Pot is built on uses the following licenses.
The software that T-Pot is built on, uses the following licenses.
<br>GPLv2: [conpot (by Lukas Rist)](https://github.com/mushorg/conpot/blob/master/LICENSE.txt), [dionaea](https://github.com/DinoTools/dionaea/blob/master/LICENSE), [honeytrap (by Tillmann Werner)](https://github.com/armedpot/honeytrap/blob/master/LICENSE), [suricata](http://suricata-ids.org/about/open-source/)
<br>GPLv3: [elasticpot (by Markus Schmall)](https://github.com/schmalle/ElasticPot), [emobility (by Mohamad Sbeiti)](https://github.com/dtag-dev-sec/emobility/blob/master/LICENSE), [ewsposter (by Markus Schroer)](https://github.com/dtag-dev-sec/ews/), [glastopf (by Lukas Rist)](https://github.com/glastopf/glastopf/blob/master/GPL), [rdpy](https://github.com/citronneur/rdpy/blob/master/LICENSE), [netdata](https://github.com/firehol/netdata/blob/master/LICENSE.md)
<br>Apache 2 License: [elasticsearch](https://github.com/elasticsearch/elasticsearch/blob/master/LICENSE.txt), [logstash](https://github.com/elasticsearch/logstash/blob/master/LICENSE), [kibana](https://github.com/elasticsearch/kibana/blob/master/LICENSE.md), [docker](https://github.com/docker/docker/blob/master/LICENSE), [elasticsearch-head](https://github.com/mobz/elasticsearch-head/blob/master/LICENCE)
<br>MIT License: [ctop](https://github.com/bcicen/ctop/blob/master/LICENSE), [wetty](https://github.com/krishnasrinivas/wetty/blob/master/LICENSE)
<br>zlib License: [vnclowpot](https://github.com/magisterquis/vnclowpot/blob/master/LICENSE)
<br>GPLv3: [elasticpot (by Markus Schmall)](https://github.com/schmalle/ElasticPot), [emobility (by Mohamad Sbeiti)](https://github.com/dtag-dev-sec/emobility/blob/master/LICENSE), [ewsposter (by Markus Schroer)](https://github.com/dtag-dev-sec/ews/), [glastopf (by Lukas Rist)](https://github.com/glastopf/glastopf/blob/master/GPL), [netdata](https://github.com/firehol/netdata/blob/master/LICENSE.md)
<br>Apache 2 License: [elasticsearch](https://github.com/elasticsearch/elasticsearch/blob/master/LICENSE.txt), [logstash](https://github.com/elasticsearch/logstash/blob/master/LICENSE), [kibana](https://github.com/elasticsearch/kibana/blob/master/LICENSE.md), [docker] (https://github.com/docker/docker/blob/master/LICENSE), [elasticsearch-head](https://github.com/mobz/elasticsearch-head/blob/master/LICENCE)
<br>MIT License: [tagcloud (by Shelby Sturgis)](https://github.com/stormpython/tagcloud/blob/master/LICENSE), [heatmap (by Shelby Sturgis)](https://github.com/stormpython/heatmap/blob/master/LICENSE), [wetty](https://github.com/krishnasrinivas/wetty/blob/master/LICENSE)
<br>[cowrie (copyright disclaimer by Upi Tamminen)](https://github.com/micheloosterhof/cowrie/blob/master/doc/COPYRIGHT)
<br>[mailoney](https://github.com/awhitehatter/mailoney)
<br>[Ubuntu licensing](http://www.ubuntu.com/about/about-ubuntu/licensing)
<br>[Portainer](https://github.com/portainer/portainer/blob/develop/LICENSE)
@ -432,7 +421,7 @@ The software that T-Pot is built on uses the following licenses.
# Credits
Without open source and the fruitful development community we are proud to be a part of, T-Pot would not have been possible. Our thanks are extended to, but not limited to, the following people and organizations:
### The developers and development communities of
###The developers and development communities of
* [conpot](https://github.com/mushorg/conpot/graphs/contributors)
* [cowrie](https://github.com/micheloosterhof/cowrie/graphs/contributors)
@ -444,21 +433,19 @@ Without open source and the fruitful development community we are proud to be a
* [emobility](https://github.com/dtag-dev-sec/emobility/graphs/contributors)
* [ewsposter](https://github.com/armedpot/ewsposter/graphs/contributors)
* [glastopf](https://github.com/mushorg/glastopf/graphs/contributors)
* [heatmap](https://github.com/stormpython/heatmap/graphs/contributors)
* [honeytrap](https://github.com/armedpot/honeytrap/graphs/contributors)
* [kibana](https://github.com/elastic/kibana/graphs/contributors)
* [logstash](https://github.com/elastic/logstash/graphs/contributors)
* [mailoney](https://github.com/awhitehatter/mailoney)
* [netdata](https://github.com/firehol/netdata/graphs/contributors)
* [p0f](http://lcamtuf.coredump.cx/p0f3/)
* [portainer](https://github.com/portainer/portainer/graphs/contributors)
* [rdpy](https://github.com/citronneur/rdpy)
* [spiderfoot](https://github.com/smicallef/spiderfoot)
* [suricata](https://github.com/inliniac/suricata/graphs/contributors)
* [tagcloud](https://github.com/stormpython/tagcloud/graphs/contributors)
* [ubuntu](http://www.ubuntu.com/)
* [vnclowpot](https://github.com/magisterquis/vnclowpot)
* [wetty](https://github.com/krishnasrinivas/wetty/graphs/contributors)
### The following companies and organizations
###The following companies and organizations
* [canonical](http://www.canonical.com/)
* [docker](https://www.docker.com/)
* [elastic.io](https://www.elastic.co/)
@ -470,9 +457,9 @@ Without open source and the fruitful development community we are proud to be a
<a name="staytuned"></a>
# Stay tuned ...
We will be releasing a new version of T-Pot about every 6-12 months.
We will be releasing a new version of T-Pot about every 6 months.
<a name="funfact"></a>
# Fun Fact
Coffee just does not cut it anymore which is why we needed a different caffeine source and consumed *215* bottles of [Club Mate](https://de.wikipedia.org/wiki/Club-Mate) during the development of T-Pot 17.10 😇
Coffee just does not cut it anymore which is why we needed a different caffeine source and consumed *107* bottles of [Club Mate](https://de.wikipedia.org/wiki/Club-Mate) during the development of T-Pot 16.10 😇

7 binary image files changed (contents not shown in this diff).

44
getimages.sh Executable file
View File

@ -0,0 +1,44 @@
#!/bin/bash
########################################################
# T-Pot #
# Export docker images maker #
# #
# v16.03.1 by mo, DTAG, 2016-03-09 #
########################################################
# This feature is experimental and requires at least docker 1.7!
# Using any docker version < 1.7 may result in an unusable T-Pot installation
# This script will download the docker images and export them to the folder "images".
# When building the .iso image the preloaded docker images will be exported to the .iso which
# may be useful if you need to install more than one machine.
# Got root?
myWHOAMI=$(whoami)
if [ "$myWHOAMI" != "root" ]
then
echo "Please run as root ..."
exit
fi
if [ -z "$1" ]
then
echo "Please view the script for more details!"
exit
fi
if [ $1 == "now" ]
then
for name in $(cat installer/data/imgcfg/all_images.conf)
do
docker pull dtagdevsec/$name:latest1610
done
mkdir images
chmod 777 images
for name in $(cat installer/data/full_images.conf)
do
echo "Now exporting dtagdevsec/$name:latest1603"
docker save -o images/$name:latest1610.img dtagdevsec/$name:latest1610
done
chmod 777 images/*.img
fi
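Judging from the argument check above, the script is meant to be invoked with the literal argument `now`:
```
sudo ./getimages.sh now   # pulls all docker images and exports them to ./images/
```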

60
installer/bin/backup_elk.sh Executable file
View File

@ -0,0 +1,60 @@
#!/bin/bash
########################################################
# T-Pot #
# ELK DB backup script #
# #
# v16.10.0 by mo, DTAG, 2016-05-12 #
########################################################
myCOUNT=1
myDATE=$(date +%Y%m%d%H%M)
myELKPATH="/data/elk/"
myBACKUPPATH="/data/"
# Make sure not to interrupt a check
while true
do
if ! [ -a /var/run/check.lock ];
then break
fi
sleep 0.1
if [ "$myCOUNT" = "1" ];
then
echo -n "Waiting for services "
else echo -n .
fi
if [ "$myCOUNT" = "6000" ];
then
echo
echo "Overriding check.lock"
rm /var/run/check.lock
break
fi
myCOUNT=$[$myCOUNT +1]
done
# We do not want to get interrupted by a check
touch /var/run/check.lock
# Stop ELK to lift db lock
echo "Now stopping ELK ..."
systemctl stop elk
sleep 10
# Backup DB in 2 flavors
echo "Now backing up Elasticsearch data ..."
tar cvfz $myBACKUPPATH"$myDATE"_elkall.tgz $myELKPATH
rm -rf "$myELKPATH"log/*
rm -rf "$myELKPATH"data/tpotcluster/nodes/0/indices/logstash*
tar cvfz $myBACKUPPATH"$myDATE"_elkbase.tgz $myELKPATH
rm -rf $myELKPATH
tar xvfz $myBACKUPPATH"$myDATE"_elkall.tgz -C /
chmod 760 -R $myELKPATH
chown tpot:tpot -R $myELKPATH
# Start ELK
systemctl start elk
echo "Now starting up ELK ..."
# Allow checks to resume
rm /var/run/check.lock

View File

@ -1,38 +0,0 @@
#!/bin/bash
# Backup all ES relevant folders
# Make sure ES is available
myES="http://127.0.0.1:64298/"
myESSTATUS=$(curl -s -XGET ''$myES'_cluster/health' | jq '.' | grep -c green)
if ! [ "$myESSTATUS" = "1" ]
then
echo "### Elasticsearch is not available, try starting via 'systemctl start elk'."
exit
else
echo "### Elasticsearch is available, now continuing."
echo
fi
# Set vars
myCOUNT=1
myDATE=$(date +%Y%m%d%H%M)
myELKPATH="/data/elk/data"
myKIBANAINDEXNAME=$(curl -s -XGET ''$myES'_cat/indices/' | grep .kibana | awk '{ print $4 }')
myKIBANAINDEXPATH=$myELKPATH/nodes/0/indices/$myKIBANAINDEXNAME
# Let's ensure normal operation on exit or if interrupted ...
function fuCLEANUP {
### Start ELK
systemctl start tpot
echo "### Now starting T-Pot ..."
}
trap fuCLEANUP EXIT
# Stop T-Pot to lift db lock
echo "### Now stopping T-Pot"
systemctl stop tpot
sleep 2
# Backup DB in 2 flavors
echo "### Now backing up Elasticsearch folders ..."
tar cvfz "elkall_"$myDATE".tgz" $myELKPATH
tar cvfz "elkbase_"$myDATE".tgz" $myKIBANAINDEXPATH

41
installer/bin/check.sh Executable file
View File

@ -0,0 +1,41 @@
#!/bin/bash
########################################################
# T-Pot #
# Check container and services script #
# #
# v16.10.0 by mo, DTAG, 2016-05-12 #
########################################################
if [ -a /var/run/check.lock ];
then
echo "Lock exists. Exiting now."
exit
fi
myIMAGES=$(cat /data/images.conf)
touch /var/run/check.lock
myUPTIME=$(awk '{print int($1/60)}' /proc/uptime)
for i in $myIMAGES
do
if [ "$i" != "ui-for-docker" ] && [ "$i" != "netdata" ];
then
myCIDSTATUS=$(docker exec $i supervisorctl status)
if [ $? -ne 0 ];
then
myCIDSTATUS=1
else
myCIDSTATUS=$(echo $myCIDSTATUS | egrep -c "(STOPPED|FATAL)")
fi
if [ $myUPTIME -gt 4 ] && [ $myCIDSTATUS -gt 0 ];
then
echo "Restarting "$i"."
systemctl stop $i
sleep 5
systemctl start $i
fi
fi
done
rm /var/run/check.lock

View File

@ -1,72 +1,25 @@
#!/bin/bash
# T-Pot Container Data Cleaner & Log Rotator
# Set colors
myRED=""
myGREEN=""
myWHITE=""
########################################################
# T-Pot #
# Container Data Cleaner #
# #
# v16.10.0 by mo, DTAG, 2016-05-28 #
########################################################
# Set persistence
myPERSISTENCE=$1
myPERSISTENCE=$2
# Let's create a function to check if folder is empty
fuEMPTY () {
local myFOLDER=$1
echo $(ls $myFOLDER | wc -l)
}
# Let's create a function to rotate and compress logs
fuLOGROTATE () {
local mySTATUS="/etc/tpot/logrotate/status"
local myCONF="/etc/tpot/logrotate/logrotate.conf"
local myCOWRIETTYLOGS="/data/cowrie/log/tty/"
local myCOWRIETTYTGZ="/data/cowrie/log/ttylogs.tgz"
local myCOWRIEDL="/data/cowrie/downloads/"
local myCOWRIEDLTGZ="/data/cowrie/downloads.tgz"
local myDIONAEABI="/data/dionaea/bistreams/"
local myDIONAEABITGZ="/data/dionaea/bistreams.tgz"
local myDIONAEABIN="/data/dionaea/binaries/"
local myDIONAEABINTGZ="/data/dionaea/binaries.tgz"
local myHONEYTRAPATTACKS="/data/honeytrap/attacks/"
local myHONEYTRAPATTACKSTGZ="/data/honeytrap/attacks.tgz"
local myHONEYTRAPDL="/data/honeytrap/downloads/"
local myHONEYTRAPDLTGZ="/data/honeytrap/downloads.tgz"
# Ensure correct permissions and ownerships for logrotate to run without issues
chmod 760 /data/ -R
chown tpot:tpot /data -R
# Run logrotate with force (-f) first, so the status file can be written and race conditions (with tar) be avoided
logrotate -f -s $mySTATUS $myCONF
# Compressing some folders first and rotate them later
if [ "$(fuEMPTY $myCOWRIETTYLOGS)" != "0" ]; then tar cvfz $myCOWRIETTYTGZ $myCOWRIETTYLOGS; fi
if [ "$(fuEMPTY $myCOWRIEDL)" != "0" ]; then tar cvfz $myCOWRIEDLTGZ $myCOWRIEDL; fi
if [ "$(fuEMPTY $myDIONAEABI)" != "0" ]; then tar cvfz $myDIONAEABITGZ $myDIONAEABI; fi
if [ "$(fuEMPTY $myDIONAEABIN)" != "0" ]; then tar cvfz $myDIONAEABINTGZ $myDIONAEABIN; fi
if [ "$(fuEMPTY $myHONEYTRAPATTACKS)" != "0" ]; then tar cvfz $myHONEYTRAPATTACKSTGZ $myHONEYTRAPATTACKS; fi
if [ "$(fuEMPTY $myHONEYTRAPDL)" != "0" ]; then tar cvfz $myHONEYTRAPDLTGZ $myHONEYTRAPDL; fi
# Ensure correct permissions and ownership for previously created archives
chmod 760 $myCOWRIETTYTGZ $myCOWRIEDLTGZ $myDIONAEABITGZ $myDIONAEABINTGZ $myHONEYTRAPATTACKSTGZ $myHONEYTRAPDLTGZ
chown tpot:tpot $myCOWRIETTYTGZ $myCOWRIEDLTGZ $myDIONAEABITGZ $myDIONAEABINTGZ $myHONEYTRAPATTACKSTGZ $myHONEYTRAPDLTGZ
# Need to remove subfolders since too many files cause rm to exit with errors
rm -rf $myCOWRIETTYLOGS $myCOWRIEDL $myDIONAEABI $myDIONAEABIN $myHONEYTRAPATTACKS $myHONEYTRAPDL
# Recreate subfolders with correct permissions and ownership
mkdir -p $myCOWRIETTYLOGS $myCOWRIEDL $myDIONAEABI $myDIONAEABIN $myHONEYTRAPATTACKS $myHONEYTRAPDL
chmod 760 $myCOWRIETTYLOGS $myCOWRIEDL $myDIONAEABI $myDIONAEABIN $myHONEYTRAPATTACKS $myHONEYTRAPDL
chown tpot:tpot $myCOWRIETTYLOGS $myCOWRIEDL $myDIONAEABI $myDIONAEABIN $myHONEYTRAPATTACKS $myHONEYTRAPDL
# Run logrotate again to account for previously created archives - DO NOT FORCE HERE!
logrotate -s $mySTATUS $myCONF
}
# Check persistence
if [ "$myPERSISTENCE" = "on" ];
then
echo "### Persistence enabled, nothing to do."
exit
fi
# Let's create a function to clean up and prepare conpot data
fuCONPOT () {
if [ "$myPERSISTENCE" != "on" ]; then rm -rf /data/conpot/*; fi
rm -rf /data/conpot/*
mkdir -p /data/conpot/log
chmod 760 /data/conpot -R
chown tpot:tpot /data/conpot -R
@ -74,7 +27,7 @@ fuCONPOT () {
# Let's create a function to clean up and prepare cowrie data
fuCOWRIE () {
if [ "$myPERSISTENCE" != "on" ]; then rm -rf /data/cowrie/*; fi
rm -rf /data/cowrie/*
mkdir -p /data/cowrie/log/tty/ /data/cowrie/downloads/ /data/cowrie/keys/ /data/cowrie/misc/
chmod 760 /data/cowrie -R
chown tpot:tpot /data/cowrie -R
@ -82,7 +35,8 @@ fuCOWRIE () {
# Let's create a function to clean up and prepare dionaea data
fuDIONAEA () {
if [ "$myPERSISTENCE" != "on" ]; then rm -rf /data/dionaea/*; fi
rm -rf /data/dionaea/*
rm /data/ews/dionaea/ews.json
mkdir -p /data/dionaea/log /data/dionaea/bistreams /data/dionaea/binaries /data/dionaea/rtp /data/dionaea/roots/ftp /data/dionaea/roots/tftp /data/dionaea/roots/www /data/dionaea/roots/upnp
chmod 760 /data/dionaea -R
chown tpot:tpot /data/dionaea -R
@ -90,7 +44,7 @@ fuDIONAEA () {
# Let's create a function to clean up and prepare elasticpot data
fuELASTICPOT () {
if [ "$myPERSISTENCE" != "on" ]; then rm -rf /data/elasticpot/*; fi
rm -rf /data/elasticpot/*
mkdir -p /data/elasticpot/log
chmod 760 /data/elasticpot -R
chown tpot:tpot /data/elasticpot -R
@ -100,23 +54,24 @@ fuELASTICPOT () {
fuELK () {
# ELK data will be kept for <= 90 days, check /etc/crontab for curator modification
# ELK daemon log files will be removed
if [ "$myPERSISTENCE" != "on" ]; then rm -rf /data/elk/log/*; fi
mkdir -p /data/elk
rm -rf /data/elk/log/*
mkdir -p /data/elk/logstash/conf
chmod 760 /data/elk -R
chown tpot:tpot /data/elk -R
}
# Let's create a function to clean up and prepare emobility data
fuEMOBILITY () {
if [ "$myPERSISTENCE" != "on" ]; then rm -rf /data/emobility/*; fi
mkdir -p /data/emobility/log
rm -rf /data/emobility/*
rm /data/ews/emobility/ews.json
mkdir -p /data/emobility/log /data/ews/emobility
chmod 760 /data/emobility -R
chown tpot:tpot /data/emobility -R
}
# Let's create a function to clean up and prepare glastopf data
fuGLASTOPF () {
if [ "$myPERSISTENCE" != "on" ]; then rm -rf /data/glastopf/*; fi
rm -rf /data/glastopf/*
mkdir -p /data/glastopf
chmod 760 /data/glastopf -R
chown tpot:tpot /data/glastopf -R
@ -124,96 +79,46 @@ fuGLASTOPF () {
# Let's create a function to clean up and prepare honeytrap data
fuHONEYTRAP () {
if [ "$myPERSISTENCE" != "on" ]; then rm -rf /data/honeytrap/*; fi
rm -rf /data/honeytrap/*
mkdir -p /data/honeytrap/log/ /data/honeytrap/attacks/ /data/honeytrap/downloads/
chmod 760 /data/honeytrap/ -R
chown tpot:tpot /data/honeytrap/ -R
}
# Let's create a function to clean up and prepare mailoney data
fuMAILONEY () {
if [ "$myPERSISTENCE" != "on" ]; then rm -rf /data/mailoney/*; fi
mkdir -p /data/mailoney/log/
chmod 760 /data/mailoney/ -R
chown tpot:tpot /data/mailoney/ -R
}
# Let's create a function to clean up and prepare rdpy data
fuRDPY () {
if [ "$myPERSISTENCE" != "on" ]; then rm -rf /data/rdpy/*; fi
mkdir -p /data/rdpy/log/
chmod 760 /data/rdpy/ -R
chown tpot:tpot /data/rdpy/ -R
}
# Let's create a function to prepare spiderfoot db
fuSPIDERFOOT () {
mkdir -p /data/spiderfoot
touch /data/spiderfoot/spiderfoot.db
chmod 760 -R /data/spiderfoot
chown tpot:tpot -R /data/spiderfoot
}
# Let's create a function to clean up and prepare suricata data
fuSURICATA () {
if [ "$myPERSISTENCE" != "on" ]; then rm -rf /data/suricata/*; fi
rm -rf /data/suricata/*
mkdir -p /data/suricata/log
chmod 760 -R /data/suricata
chown tpot:tpot -R /data/suricata
}
# Let's create a function to clean up and prepare p0f data
fuP0F () {
if [ "$myPERSISTENCE" != "on" ]; then rm -rf /data/p0f/*; fi
mkdir -p /data/p0f/log
chmod 760 -R /data/p0f
chown tpot:tpot -R /data/p0f
}
# Let's create a function to clean up and prepare vnclowpot data
fuVNCLOWPOT () {
if [ "$myPERSISTENCE" != "on" ]; then rm -rf /data/vnclowpot/*; fi
mkdir -p /data/vnclowpot/log/
chmod 760 /data/vnclowpot/ -R
chown tpot:tpot /data/vnclowpot/ -R
}
# Avoid unwanted cleaning
if [ "$myPERSISTENCE" = "" ];
then
echo $myRED"!!! WARNING !!! - This will delete ALL honeypot logs. "$myWHITE
while [ "$myQST" != "y" ] && [ "$myQST" != "n" ];
do
read -p "Continue? (y/n) " myQST
done
if [ "$myQST" = "n" ];
then
echo $myGREEN"Puuh! That was close! Aborting!"$myWHITE
exit
fi
fi
# Check persistence, if enabled compress and rotate logs
if [ "$myPERSISTENCE" = "on" ];
then
echo "Persistence enabled, now rotating and compressing logs."
fuLOGROTATE
else
echo "Cleaning up and preparing data folders."
fuCONPOT
fuCOWRIE
fuDIONAEA
fuELASTICPOT
fuELK
fuEMOBILITY
fuGLASTOPF
fuHONEYTRAP
fuMAILONEY
fuRDPY
fuSPIDERFOOT
fuSURICATA
fuP0F
fuVNCLOWPOT
fi
case $1 in
conpot)
fuCONPOT $1
;;
cowrie)
fuCOWRIE $1
;;
dionaea)
fuDIONAEA $1
;;
elasticpot)
fuELASTICPOT $1
;;
elk)
fuELK $1
;;
emobility)
fuEMOBILITY $1
;;
glastopf)
fuGLASTOPF $1
;;
honeytrap)
fuHONEYTRAP $1
;;
suricata)
fuSURICATA $1
;;
esac

76
installer/bin/dcres.sh Executable file
View File

@ -0,0 +1,76 @@
#!/bin/bash
########################################################
# T-Pot #
# Container and services restart script #
# #
# v16.10.0 by mo, DTAG, 2016-05-12 #
########################################################
myCOUNT=1
while true
do
if ! [ -a /var/run/check.lock ];
then break
fi
sleep 0.1
if [ "$myCOUNT" = "1" ];
then
echo -n "Waiting for services "
else echo -n .
fi
if [ "$myCOUNT" = "6000" ];
then
echo
echo "Overriding check.lock"
rm /var/run/check.lock
break
fi
myCOUNT=$[$myCOUNT +1]
done
myIMAGES=$(cat /data/images.conf)
touch /var/run/check.lock
myUPTIME=$(awk '{print int($1/60)}' /proc/uptime)
if [ $myUPTIME -gt 4 ];
then
for i in $myIMAGES
do
systemctl stop $i
done
echo "### Waiting 10 seconds before restarting docker ..."
sleep 10
iptables -w -F
systemctl restart docker
while true
do
docker info > /dev/null
if [ $? -ne 0 ];
then
echo Docker daemon is still starting.
else
echo Docker daemon is now available.
break
fi
sleep 0.1
done
echo "### Docker is now up and running again."
echo "### Removing obsolete container data ..."
docker rm -v $(docker ps -aq)
echo "### Removing obsolete image data ..."
docker rmi $(docker images | grep "<none>" | awk '{print $3}')
echo "### Starting T-Pot services ..."
for i in $myIMAGES
do
systemctl start $i
done
sleep 5
else
echo "### T-Pot needs to be up and running for at least 5 minutes."
fi
rm /var/run/check.lock
/etc/rc.local

View File

@ -1,71 +1,32 @@
#!/bin/bash
# Show current status of all running containers
myPARAM="$1"
myIMAGES="$(cat /etc/tpot/tpot.yml | grep -v '#' | grep container_name | cut -d: -f2)"
myRED=""
myGREEN=""
myBLUE=""
myWHITE=""
myMAGENTA=""
function fuCONTAINERSTATUS {
local myNAME="$1"
local mySTATUS="$(/usr/bin/docker ps -f name=$myNAME --format "table {{.Status}}" -f status=running -f status=exited | tail -n 1)"
myDOWN="$(echo "$mySTATUS" | grep -o -E "(STATUS|NAMES|Exited)")"
case "$myDOWN" in
STATUS)
mySTATUS="$myRED"DOWN"$myWHITE"
;;
NAMES)
mySTATUS="$myRED"DOWN"$myWHITE"
;;
Exited)
mySTATUS="$myRED$mySTATUS$myWHITE"
;;
*)
mySTATUS="$myGREEN$mySTATUS$myWHITE"
;;
esac
printf "$mySTATUS"
}
function fuCONTAINERPORTS {
local myNAME="$1"
local myPORTS="$(/usr/bin/docker ps -f name=$myNAME --format "table {{.Ports}}" -f status=running -f status=exited | tail -n 1 | sed s/","/",\n\t\t\t\t\t\t\t"/g)"
if [ "$myPORTS" != "PORTS" ];
then
printf "$myBLUE$myPORTS$myWHITE"
fi
}
function fuGETSYS {
printf "========| System |========\n"
printf "%+10s %-20s\n" "Date: " "$(date)"
printf "%+10s %-20s\n" "Uptime: " "$(uptime | cut -b 2-)"
printf "%+10s %-20s\n" "CPU temp: " "$(sensors | grep 'Physical' | awk '{ print $4" " }' | tr -d [:cntrl:])"
echo
}
stty -echo -icanon time 0 min 0
myIMAGES=$(cat /data/images.conf)
while true
do
fuGETSYS
printf "%-19s %-36s %s\n" "NAME" "STATUS" "PORTS"
clear
echo "======| System |======"
echo Date:" "$(date)
echo Uptime:" "$(uptime)
echo CPU temp: $(sensors | grep "Physical" | awk '{ print $4 }')
echo
echo "NAME CREATED PORTS"
for i in $myIMAGES; do
myNAME="$myMAGENTA$i$myWHITE"
printf "%-32s %-49s %s" "$myNAME" "$(fuCONTAINERSTATUS $i)" "$(fuCONTAINERPORTS $i)"
echo
if [ "$myPARAM" = "vv" ];
/usr/bin/docker ps -f name=$i --format "table {{.Names}}\t{{.Status}}\t{{.Ports}}" -f status=running -f status=exited | GREP_COLORS='mt=01;35' /bin/egrep --color=always "(^[_a-z-]+ |$)|$" | GREP_COLORS='mt=01;32' /bin/egrep --color=always "(Up[ 0-9a-Z ]+ |$)|$" | GREP_COLORS='mt=01;31' /bin/egrep --color=always "(Exited[ \(0-9\) ]+ [0-9a-Z ]+ ago|$)|$" | tail -n 1
if [ "$1" = "vv" ];
then
/usr/bin/docker exec -t "$i" /bin/ps awfuwfxwf | egrep -v -E "awfuwfxwf|/bin/ps"
/usr/bin/docker exec -t $i /bin/ps -awfuwfxwf | egrep -v -E "awfuwfxwf|/bin/ps"
fi
done
if [[ $myPARAM =~ ^([1-9]|[1-9][0-9]|[1-9][0-9][0-9])$ ]];
if [[ $1 =~ ^([1-9]|[1-9][0-9]|[1-9][0-9][0-9])$ ]];
then
sleep "$myPARAM"
sleep $1
else
break
fi
read myKEY
if [ "$myKEY" == "q" ];
then
break;
fi
done
stty sane
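
The status loop above relies on docker ps filters and Go templates. For a single running container the call used by fuCONTAINERSTATUS behaves roughly like this (container name and uptime are illustrative):

  docker ps -f name=cowrie -f status=running -f status=exited --format "table {{.Status}}"
  # STATUS
  # Up 2 hours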

View File

@ -1,45 +0,0 @@
#!/bin/bash
# Dump all ES data
# Make sure ES is available
myES="http://127.0.0.1:64298/"
myESSTATUS=$(curl -s -XGET ''$myES'_cluster/health' | jq '.' | grep -c green)
if ! [ "$myESSTATUS" = "1" ]
then
echo "### Elasticsearch is not available, try starting via 'systemctl start elk'."
exit
else
echo "### Elasticsearch is available, now continuing."
echo
fi
# Let's ensure normal operation on exit or if interrupted ...
function fuCLEANUP {
rm -rf tmp
}
trap fuCLEANUP EXIT
# Set vars
myDATE=$(date +%Y%m%d%H%M)
myINDICES=$(curl -s -XGET ''$myES'_cat/indices/' | grep logstash | awk '{ print $3 }' | sort | grep -v 1970)
myES="http://127.0.0.1:64298/"
myCOL1=""
myCOL0=""
# Dumping all ES data
echo $myCOL1"### The following indices will be dumped: "$myCOL0
echo $myINDICES
echo
mkdir tmp
for i in $myINDICES;
do
echo $myCOL1"### Now dumping: "$i $myCOL0
elasticdump --input=$myES$i --output="tmp/"$i --limit 7500
echo $myCOL1"### Now compressing: tmp/$i" $myCOL0
gzip -f "tmp/"$i
done;
# Build tar archive
echo $myCOL1"### Now building tar archive: es_dump_"$myDATE".tar" $myCOL0
tar cvf es_dump_$myDATE.tar tmp/*
echo $myCOL1"### Done."$myCOL0
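
The dump script takes no arguments. Assuming it is installed as dump-es.sh (the file name is not shown in this diff), a run exports every logstash-* index via elasticdump, gzips the JSON files and leaves a dated archive in the working directory:

  ./dump-es.sh
  ls es_dump_*.tar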

View File

@ -1,77 +0,0 @@
#!/bin/bash
# Export all Kibana objects
# Make sure ES is available
myES="http://127.0.0.1:64298/"
myESSTATUS=$(curl -s -XGET ''$myES'_cluster/health' | jq '.' | grep -c green)
if ! [ "$myESSTATUS" = "1" ]
then
echo "### Elasticsearch is not available, try starting via 'systemctl start elk'."
exit
else
echo "### Elasticsearch is available, now continuing."
echo
fi
# Set vars
myDATE=$(date +%Y%m%d%H%M)
myINDEXCOUNT=$(curl -s -XGET ''$myES'.kibana/index-pattern/logstash-*' | tr '\\' '\n' | grep "scripted" | wc -w)
myDASHBOARDS=$(curl -s -XGET ''$myES'.kibana/dashboard/_search?filter_path=hits.hits._id&pretty&size=10000' | jq '.hits.hits[] | {_id}' | jq -r '._id')
myVISUALIZATIONS=$(curl -s -XGET ''$myES'.kibana/visualization/_search?filter_path=hits.hits._id&pretty&size=10000' | jq '.hits.hits[] | {_id}' | jq -r '._id')
mySEARCHES=$(curl -s -XGET ''$myES'.kibana/search/_search?filter_path=hits.hits._id&pretty&size=10000' | jq '.hits.hits[] | {_id}' | jq -r '._id')
myCOL1=""
myCOL0=""
# Let's ensure normal operation on exit or if interrupted ...
function fuCLEANUP {
rm -rf patterns/ dashboards/ visualizations/ searches/
}
trap fuCLEANUP EXIT
# Export index patterns
mkdir -p patterns
echo $myCOL1"### Now exporting"$myCOL0 $myINDEXCOUNT $myCOL1"index patterns." $myCOL0
curl -s -XGET ''$myES'.kibana/index-pattern/logstash-*?' | jq '._source' > patterns/index-patterns.json
echo
# Export dashboards
mkdir -p dashboards
echo $myCOL1"### Now exporting"$myCOL0 $(echo $myDASHBOARDS | wc -w) $myCOL1"dashboards." $myCOL0
for i in $myDASHBOARDS;
do
echo $myCOL1"###### "$i $myCOL0
curl -s -XGET ''$myES'.kibana/dashboard/'$i'' | jq '._source' > dashboards/$i.json
done;
echo
# Export visualizations
mkdir -p visualizations
echo $myCOL1"### Now exporting"$myCOL0 $(echo $myVISUALIZATIONS | wc -w) $myCOL1"visualizations." $myCOL0
for i in $myVISUALIZATIONS;
do
echo $myCOL1"###### "$i $myCOL0
curl -s -XGET ''$myES'.kibana/visualization/'$i'' | jq '._source' > visualizations/$i.json
done;
echo
# Export searches
mkdir -p searches
echo $myCOL1"### Now exporting"$myCOL0 $(echo $mySEARCHES | wc -w) $myCOL1"searches." $myCOL0
for i in $mySEARCHES;
do
echo $myCOL1"###### "$i $myCOL0
curl -s -XGET ''$myES'.kibana/search/'$i'' | jq '._source' > searches/$i.json
done;
echo
# Building tar archive
echo $myCOL1"### Now building archive"$myCOL0 "kibana-objects_"$myDATE".tgz"
tar cvfz kibana-objects_$myDATE.tgz patterns dashboards visualizations searches > /dev/null
# Stats
echo
echo $myCOL1"### Statistics"
echo $myCOL1"###### Exported"$myCOL0 $myINDEXCOUNT $myCOL1"index patterns." $myCOL0
echo $myCOL1"###### Exported"$myCOL0 $(echo $myDASHBOARDS | wc -w) $myCOL1"dashboards." $myCOL0
echo $myCOL1"###### Exported"$myCOL0 $(echo $myVISUALIZATIONS | wc -w) $myCOL1"visualizations." $myCOL0
echo $myCOL1"###### Exported"$myCOL0 $(echo $mySEARCHES | wc -w) $myCOL1"searches." $myCOL0
echo
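
For reference, each of the three _search calls above can be shortened to a single jq expression; the following sketch lists the dashboard IDs in one step:

  curl -s -XGET 'http://127.0.0.1:64298/.kibana/dashboard/_search?filter_path=hits.hits._id&size=10000' | jq -r '.hits.hits[]._id'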

View File

@ -1,91 +0,0 @@
#!/bin/bash
# Import Kibana objects
# Make sure ES is available
myES="http://127.0.0.1:64298/"
myESSTATUS=$(curl -s -XGET ''$myES'_cluster/health' | jq '.' | grep -c green)
if ! [ "$myESSTATUS" = "1" ]
then
echo "### Elasticsearch is not available, try starting via 'systemctl start elk'."
exit
else
echo "### Elasticsearch is available, now continuing."
echo
fi
# Set vars
myDUMP=$1
myCOL1=""
myCOL0=""
# Let's ensure normal operation on exit or if interrupted ...
function fuCLEANUP {
rm -rf patterns/ dashboards/ visualizations/ searches/
}
trap fuCLEANUP EXIT
# Check if parameter is given and file exists
if [ "$myDUMP" = "" ];
then
echo $myCOL1"### Please provide a backup file name."$myCOL0
echo $myCOL1"### restore-kibana-objects.sh <kibana-objects.tgz>"$myCOL0
echo
exit
fi
if ! [ -a $myDUMP ];
then
echo $myCOL1"### File not found."$myCOL0
exit
fi
# Unpack tar
tar xvfz $myDUMP > /dev/null
# Restore index patterns
myINDEXCOUNT=$(cat patterns/index-patterns.json | tr '\\' '\n' | grep "scripted" | wc -w)
echo $myCOL1"### Now importing"$myCOL0 $myINDEXCOUNT $myCOL1"index patterns." $myCOL0
curl -s -XDELETE ''$myES'.kibana/index-pattern/logstash-*' > /dev/null
curl -s -XPUT ''$myES'.kibana/index-pattern/logstash-*' -T patterns/index-patterns.json > /dev/null
echo
# Restore dashboards
myDASHBOARDS=$(ls dashboards/*.json | cut -c 12- | rev | cut -c 6- | rev)
echo $myCOL1"### Now importing "$myCOL0$(echo $myDASHBOARDS | wc -w)$myCOL1 "dashboards." $myCOL0
for i in $myDASHBOARDS;
do
echo $myCOL1"###### "$i $myCOL0
curl -s -XDELETE ''$myES'.kibana/dashboard/'$i'' > /dev/null
curl -s -XPUT ''$myES'.kibana/dashboard/'$i'' -T dashboards/$i.json > /dev/null
done;
echo
# Restore visualizations
myVISUALIZATIONS=$(ls visualizations/*.json | cut -c 16- | rev | cut -c 6- | rev)
echo $myCOL1"### Now importing "$myCOL0$(echo $myVISUALIZATIONS | wc -w)$myCOL1 "visualizations." $myCOL0
for i in $myVISUALIZATIONS;
do
echo $myCOL1"###### "$i $myCOL0
curl -s -XDELETE ''$myES'.kibana/visualization/'$i'' > /dev/null
curl -s -XPUT ''$myES'.kibana/visualization/'$i'' -T visualizations/$i.json > /dev/null
done;
echo
# Restore searches
mySEARCHES=$(ls searches/*.json | cut -c 10- | rev | cut -c 6- | rev)
echo $myCOL1"### Now importing "$myCOL0$(echo $mySEARCHES | wc -w)$myCOL1 "searches." $myCOL0
for i in $mySEARCHES;
do
echo $myCOL1"###### "$i $myCOL0
curl -s -XDELETE ''$myES'.kibana/search/'$i'' > /dev/null
curl -s -XPUT ''$myES'.kibana/search/'$i'' -T searches/$i.json > /dev/null
done;
echo
# Stats
echo
echo $myCOL1"### Statistics"
echo $myCOL1"###### Imported"$myCOL0 $myINDEXCOUNT $myCOL1"index patterns." $myCOL0
echo $myCOL1"###### Imported"$myCOL0 $(echo $myDASHBOARDS | wc -w) $myCOL1"dashboards." $myCOL0
echo $myCOL1"###### Imported"$myCOL0 $(echo $myVISUALIZATIONS | wc -w) $myCOL1"visualizations." $myCOL0
echo $myCOL1"###### Imported"$myCOL0 $(echo $mySEARCHES | wc -w) $myCOL1"searches." $myCOL0
echo
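
Usage follows the message printed by the script itself; the date stamp is just a placeholder:

  ./restore-kibana-objects.sh kibana-objects_<date>.tgz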

View File

@ -10,11 +10,13 @@ dnslist=(
"dig +short myip.opendns.com @resolver2.opendns.com"
"dig +short myip.opendns.com @resolver3.opendns.com"
"dig +short myip.opendns.com @resolver4.opendns.com"
"dig +short -t txt o-o.myaddr.l.google.com @ns1.google.com"
"dig +short -4 -t a whoami.akamai.net @ns1-1.akamaitech.net"
"dig +short whoami.akamai.net @ns1-1.akamaitech.net"
)
httplist=(
4.ifcfg.me
alma.ch/myip.cgi
api.infoip.io/ip
api.ipify.org
@ -30,10 +32,13 @@ httplist=(
ip.tyk.nu
l2.io/ip
smart-ip.net/myip
tnx.nl/ip
wgetip.com
whatismyip.akamai.com
)
# function to shuffle the global array "array"
shuffle() {
local i tmp size max rand
@ -46,6 +51,7 @@ shuffle() {
done
}
# if we have dig and a list of DNS methods, try that first
if hash dig 2>/dev/null && [ ${#dnslist[*]} -gt 0 ]; then
eval array=( \"\${dnslist[@]}\" )
@ -61,7 +67,9 @@ if hash dig 2>/dev/null && [ ${#dnslist[*]} -gt 0 ]; then
done
fi
# if we haven't succeeded with DNS, try HTTP
if [ ${#httplist[*]} == 0 ]; then
echo "No hosts in httplist array!" >&2
exit 1
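
The script prefers the DNS based lookups and only falls back to the HTTP list if dig is missing or all resolvers fail. Both methods can be tried by hand:

  dig +short myip.opendns.com @resolver1.opendns.com
  curl -s api.ipify.org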

View File

@ -1,61 +0,0 @@
#!/bin/bash
# Restore folder based ES backup
# Make sure ES is available
myES="http://127.0.0.1:64298/"
myESSTATUS=$(curl -s -XGET ''$myES'_cluster/health' | jq '.' | grep -c green)
if ! [ "$myESSTATUS" = "1" ]
then
echo "### Elasticsearch is not available, try starting via 'systemctl start elk'."
exit
else
echo "### Elasticsearch is available, now continuing."
fi
# Let's ensure normal operation on exit or if interrupted ...
function fuCLEANUP {
rm -rf tmp
}
trap fuCLEANUP EXIT
# Set vars
myDUMP=$1
myCOL1=""
myCOL0=""
# Check if parameter is given and file exists
if [ "$myDUMP" = "" ];
then
echo $myCOL1"### Please provide a backup file name."$myCOL0
echo $myCOL1"### restore-elk.sh <es_dump.tar>"$myCOL0
echo
exit
fi
if ! [ -a $myDUMP ];
then
echo $myCOL1"### File not found."$myCOL0
exit
fi
# Unpack tar archive
echo $myCOL1"### Now unpacking tar archive: "$myDUMP $myCOL0
tar xvf $myDUMP
# Build indices list
myINDICES=$(ls tmp/logstash*.gz | cut -c 5- | rev | cut -c 4- | rev)
echo $myCOL1"### The following indices will be restored: "$myCOL0
echo $myINDICES
echo
# Restore indices
for i in $myINDICES;
do
# Delete index if it already exists
curl -s -XDELETE $myES$i > /dev/null
echo $myCOL1"### Now uncompressing: tmp/$i.gz" $myCOL0
gunzip -f tmp/$i.gz
# Restore index to ES
echo $myCOL1"### Now restoring: "$i $myCOL0
elasticdump --input=tmp/$i --output=$myES$i --limit 7500
rm tmp/$i
done;
echo $myCOL1"### Done."$myCOL0

51
installer/bin/status.sh Executable file
View File

@ -0,0 +1,51 @@
#!/bin/bash
########################################################
# T-Pot #
# Container and services status script #
# #
# v16.10.0 by mo, DTAG, 2016-05-12 #
########################################################
myCOUNT=1
if [[ $1 == "" ]]
then
myIMAGES=$(cat /data/images.conf)
else myIMAGES=$1
fi
while true
do
if ! [ -a /var/run/check.lock ];
then break
fi
sleep 0.1
if [ $myCOUNT = 1 ];
then
echo -n "Waiting for services "
else echo -n .
fi
if [ $myCOUNT = 300 ];
then
echo
echo "Services are busy or not available. Please retry later."
exit 1
fi
myCOUNT=$[$myCOUNT +1]
done
echo
echo
echo "======| System |======"
echo Date:" "$(date)
echo Uptime:" "$(uptime)
echo CPU temp: $(sensors | grep "Physical" | awk '{ print $4 }')
echo
for i in $myIMAGES
do
if [ "$i" != "ui-for-docker" ] && [ "$i" != "netdata" ];
then
echo "======| Container:" $i "|======"
docker exec $i supervisorctl status | GREP_COLORS='mt=01;32' egrep --color=always "(RUNNING)|$" | GREP_COLORS='mt=01;31' egrep --color=always "(STOPPED|FATAL)|$"
echo
fi
done
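
Called without arguments status.sh reads the container list from /data/images.conf; a single container name can be passed instead:

  ./status.sh           # all containers from /data/images.conf
  ./status.sh cowrie    # status of the cowrie container only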

66
installer/bin/update-images.sh Executable file
View File

@ -0,0 +1,66 @@
#!/bin/bash
########################################################
# T-Pot #
# Pull updated images, enable configured services, reboot #
# #
# v16.10.0 by mo, DTAG, 2016-05-12 #
########################################################
# Make sure not to interrupt a check
while true
do
if ! [ -a /var/run/check.lock ];
then break
fi
sleep 0.1
if [ "$myCOUNT" = "1" ];
then
echo -n "Waiting for services "
else echo -n .
fi
if [ "$myCOUNT" = "6000" ];
then
echo
echo "Overriding check.lock"
rm /var/run/check.lock
break
fi
myCOUNT=$[$myCOUNT +1]
done
# We do not want to get interrupted by a check
touch /var/run/check.lock
# Stop T-Pot services and disable all T-Pot services
echo "### Stopping T-Pot services and cleaning up."
for i in $(cat /data/imgcfg/all_images.conf);
do
systemctl stop $i
sleep 2
systemctl disable $i;
done
# Restarting docker services
echo "### Restarting docker services ..."
systemctl stop docker
sleep 2
systemctl start docker
sleep 2
# Enable only T-Pot upstart scripts from images.conf and pull the images
for i in $(cat /data/images.conf);
do
docker pull dtagdevsec/$i:latest1610;
systemctl enable $i;
done
# Announce reboot
echo "### Rebooting in 60 seconds for the changes to take effect."
sleep 60
# Allow checks to resume
rm /var/run/check.lock
# Reboot
reboot

View File

@ -1,24 +0,0 @@
#!/bin/bash
# Let's add the first local IP to /etc/issue and the external IP to the ews.ip file
# If the external IP cannot be detected, the internal IP will be inherited.
source /etc/environment
myLOCALIP=$(hostname -I | awk '{ print $1 }')
myEXTIP=$(/usr/share/tpot/bin/myip.sh)
if [ "$myEXTIP" = "" ];
then
myEXTIP=$myLOCALIP
fi
sed -i "s#IP:.*#IP: $myLOCALIP ($myEXTIP)#" /etc/issue
sed -i "s#SSH:.*#SSH: ssh -l tsec -p 64295 $myLOCALIP#" /etc/issue
sed -i "s#WEB:.*#WEB: https://$myLOCALIP:64297#" /etc/issue
tee /data/ews/conf/ews.ip << EOF
[MAIN]
ip = $myEXTIP
EOF
tee /etc/tpot/elk/environment << EOF
MY_EXTIP=$myEXTIP
MY_INTIP=$myLOCALIP
MY_HOSTNAME=$HOSTNAME
EOF
chown tpot:tpot /data/ews/conf/ews.ip
chmod 760 /data/ews/conf/ews.ip

BIN
installer/data/elkbase.tgz Normal file

Binary file not shown.

View File

@ -0,0 +1,83 @@
[MAIN]
homedir = /opt/ewsposter/
spooldir = /opt/ewsposter/spool/
logdir = /opt/ewsposter/log/
del_malware_after_send = false
send_malware = true
sendlimit = 400
contact = your_email_address
proxy =
ip =
[EWS]
ews = true
username = community-01-user
token = foth{a5maiCee8fineu7
rhost_first = https://community.sicherheitstacho.eu/ews-0.1/alert/postSimpleMessage
rhost_second = https://community.sicherheitstacho.eu/ews-0.1/alert/postSimpleMessage
ignorecert = false
[HPFEED]
hpfeed = false
host = 0.0.0.0
port = 0
channels = 0
ident = 0
secret= 0
[EWSJSON]
json = false
jsondir = /data/ews/
[GLASTOPFV3]
glastopfv3 = true
nodeid = glastopfv3-community-01
sqlitedb = /data/glastopf/db/glastopf.db
malwaredir = /data/glastopf/data/files/
[GLASTOPFV2]
glastopfv2 = false
nodeid =
mysqlhost =
mysqldb =
mysqluser =
mysqlpw =
malwaredir =
[KIPPO]
kippo = true
nodeid = kippo-community-01
mysqlhost = localhost
mysqldb = cowrie
mysqluser = cowrie
mysqlpw = s0m3Secr3T!
malwaredir = /data/cowrie/downloads/
[DIONAEA]
dionaea = true
nodeid = dionaea-community-01
malwaredir = /data/dionaea/binaries/
sqlitedb = /data/dionaea/log/dionaea.sqlite
[HONEYTRAP]
honeytrap = true
nodeid = honeytrap-community-01
newversion = true
payloaddir = /data/honeytrap/attacks/
attackerfile = /data/honeytrap/log/attacker.log
[RDPDETECT]
rdpdetect = false
nodeid =
iptableslog =
targetip =
[EMOBILITY]
eMobility = true
nodeid = emobility-community-01
logfile = /data/eMobility/log/centralsystemEWS.log
[CONPOT]
conpot = true
nodeid = conpot-community-01
logfile = /data/conpot/log/conpot.json

View File

@ -0,0 +1,11 @@
conpot
cowrie
dionaea
elasticpot
elk
emobility
glastopf
honeytrap
suricata
netdata
ui-for-docker

View File

@ -0,0 +1,5 @@
cowrie
dionaea
elasticpot
glastopf
honeytrap

View File

@ -0,0 +1,6 @@
conpot
elk
emobility
suricata
netdata
ui-for-docker

View File

@ -0,0 +1,9 @@
cowrie
dionaea
elasticpot
elk
glastopf
honeytrap
suricata
netdata
ui-for-docker

View File

@ -0,0 +1,15 @@
[Unit]
Description=conpot
Requires=docker.service
After=docker.service
[Service]
Restart=always
ExecStartPre=-/usr/bin/docker stop conpot
ExecStartPre=-/usr/bin/docker rm -v conpot
ExecStartPre=/bin/bash -c '/usr/bin/clean.sh conpot off'
ExecStart=/usr/bin/docker run --name conpot --rm=true -v /data/conpot:/data/conpot -v /data/ews:/data/ews -p 1025:1025 -p 50100:50100 dtagdevsec/conpot:latest1610
ExecStop=/usr/bin/docker stop conpot
[Install]
WantedBy=multi-user.target
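
This and the following unit files all follow the same pattern: clean up any stale container, optionally prepare the host, run the container in the foreground and stop it on shutdown. Once placed in a systemd unit directory (e.g. /etc/systemd/system; the install path is not part of this diff) a honeypot is managed like any other service:

  systemctl daemon-reload
  systemctl enable conpot
  systemctl start conpot
  systemctl status conpot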

View File

@ -0,0 +1,15 @@
[Unit]
Description=cowrie
Requires=docker.service
After=docker.service
[Service]
Restart=always
ExecStartPre=-/usr/bin/docker stop cowrie
ExecStartPre=-/usr/bin/docker rm -v cowrie
ExecStartPre=/bin/bash -c '/usr/bin/clean.sh cowrie off'
ExecStart=/usr/bin/docker run --name cowrie --rm=true -p 22:2222 -p 23:2223 -v /data/cowrie:/data/cowrie -v /data/ews:/data/ews dtagdevsec/cowrie:latest1610
ExecStop=/usr/bin/docker stop cowrie
[Install]
WantedBy=multi-user.target

View File

@ -0,0 +1,15 @@
[Unit]
Description=dionaea
Requires=docker.service
After=docker.service
[Service]
Restart=always
ExecStartPre=-/usr/bin/docker stop dionaea
ExecStartPre=-/usr/bin/docker rm -v dionaea
ExecStartPre=/bin/bash -c '/usr/bin/clean.sh dionaea off'
ExecStart=/usr/bin/docker run --name dionaea --cap-add=NET_BIND_SERVICE --rm=true -p 21:21 -p 42:42 -p 69:69/udp -p 8081:80 -p 135:135 -p 443:443 -p 445:445 -p 1433:1433 -p 1723:1723 -p 1883:1883 -p 1900:1900 -p 3306:3306 -p 5060:5060 -p 5061:5061 -p 5060:5060/udp -p 11211:11211 -v /data/dionaea:/data/dionaea -v /data/ews:/data/ews dtagdevsec/dionaea:latest1610
ExecStop=/usr/bin/docker stop dionaea
[Install]
WantedBy=multi-user.target

View File

@ -0,0 +1,15 @@
[Unit]
Description=elasticpot
Requires=docker.service
After=docker.service
[Service]
Restart=always
ExecStartPre=-/usr/bin/docker stop elasticpot
ExecStartPre=-/usr/bin/docker rm -v elasticpot
ExecStartPre=/bin/bash -c '/usr/bin/clean.sh elasticpot off'
ExecStart=/usr/bin/docker run --name elasticpot --rm=true -v /data/elasticpot:/data/elasticpot -v /data/ews:/data/ews -p 9200:9200 dtagdevsec/elasticpot:latest1610
ExecStop=/usr/bin/docker stop elasticpot
[Install]
WantedBy=multi-user.target

View File

@ -0,0 +1,15 @@
[Unit]
Description=elk
Requires=docker.service
After=docker.service
[Service]
Restart=always
ExecStartPre=-/usr/bin/docker stop elk
ExecStartPre=-/usr/bin/docker rm -v elk
ExecStartPre=/bin/bash -c '/usr/bin/clean.sh elk'
ExecStart=/usr/bin/docker run --name=elk -v /data:/data -v /var/log:/data/host/log -p 127.0.0.1:64296:5601 -p 127.0.0.1:64298:9200 --rm=true dtagdevsec/elk:latest1610
ExecStop=/usr/bin/docker stop elk
[Install]
WantedBy=multi-user.target

View File

@ -0,0 +1,15 @@
[Unit]
Description=emobility
Requires=docker.service
After=docker.service
[Service]
Restart=always
ExecStartPre=-/usr/bin/docker stop emobility
ExecStartPre=-/usr/bin/docker rm -v emobility
ExecStartPre=/bin/bash -c '/usr/bin/clean.sh emobility off'
ExecStart=/usr/bin/docker run --name emobility --cap-add=NET_ADMIN -p 8080:8080 -v /data/emobility:/data/eMobility -v /data/ews:/data/ews --rm=true dtagdevsec/emobility:latest1610
ExecStop=/usr/bin/docker stop emobility
[Install]
WantedBy=multi-user.target

View File

@ -0,0 +1,15 @@
[Unit]
Description=glastopf
Requires=docker.service
After=docker.service
[Service]
Restart=always
ExecStartPre=-/usr/bin/docker stop glastopf
ExecStartPre=-/usr/bin/docker rm -v glastopf
ExecStartPre=/bin/bash -c '/usr/bin/clean.sh glastopf off'
ExecStart=/usr/bin/docker run --name glastopf --rm=true -v /data/glastopf:/data/glastopf -v /data/ews:/data/ews -p 80:80 dtagdevsec/glastopf:latest1610
ExecStop=/usr/bin/docker stop glastopf
[Install]
WantedBy=multi-user.target

View File

@ -0,0 +1,23 @@
[Unit]
Description=honeytrap
Requires=docker.service
After=docker.service
[Service]
Restart=always
ExecStartPre=-/usr/bin/docker stop honeytrap
ExecStartPre=-/usr/bin/docker rm -v honeytrap
ExecStartPre=/bin/bash -c '/usr/bin/clean.sh honeytrap off'
ExecStartPre=/sbin/iptables -w -A INPUT -p tcp --syn -m state --state NEW -m multiport ! --dports 21,22,23,42,69,80,135,443,445,1433,1723,1883,1900 -j NFQUEUE
ExecStartPre=/sbin/iptables -w -A INPUT -p tcp --syn -m state --state NEW -m multiport ! --dports 3306,5060,5061,5601,11211 -j NFQUEUE
ExecStartPre=/sbin/iptables -w -A INPUT -p tcp --syn -m state --state NEW -m multiport ! --dports 64295,64296,64297,64298,64299,64300,64301 -j NFQUEUE
ExecStartPre=/sbin/iptables -w -A INPUT -p tcp --syn -m state --state NEW -m multiport ! --dports 1025,50100,8080,8081,9200 -j NFQUEUE
ExecStart=/usr/bin/docker run --name honeytrap --cap-add=NET_ADMIN --net=host --rm=true -v /data/honeytrap:/data/honeytrap -v /data/ews:/data/ews dtagdevsec/honeytrap:latest1610
ExecStop=/usr/bin/docker stop honeytrap
ExecStopPost=/sbin/iptables -w -D INPUT -p tcp --syn -m state --state NEW -m multiport ! --dports 1025,50100,8080,8081,9200 -j NFQUEUE
ExecStopPost=/sbin/iptables -w -D INPUT -p tcp --syn -m state --state NEW -m multiport ! --dports 64295,64296,64297,64298,64299,64300,64301 -j NFQUEUE
ExecStopPost=/sbin/iptables -w -D INPUT -p tcp --syn -m state --state NEW -m multiport ! --dports 3306,5060,5061,5601,11211 -j NFQUEUE
ExecStopPost=/sbin/iptables -w -D INPUT -p tcp --syn -m state --state NEW -m multiport ! --dports 21,22,23,42,69,80,135,443,445,1433,1723,1883,1900 -j NFQUEUE
[Install]
WantedBy=multi-user.target
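
The ExecStartPre/ExecStopPost pairs install and remove the NFQUEUE rules while the service runs, handing matching TCP SYN packets to honeytrap, which runs with NET_ADMIN on the host network. The currently installed rules can be inspected with:

  iptables -w -S INPUT | grep NFQUEUE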

View File

@ -0,0 +1,15 @@
[Unit]
Description=netdata
Requires=docker.service
After=docker.service
[Service]
Restart=always
ExecStartPre=-/usr/bin/docker stop netdata
ExecStartPre=-/usr/bin/docker rm -v netdata
ExecStartPre=-/bin/chmod 666 /var/run/docker.sock
ExecStart=/usr/bin/docker run --name netdata --net=host --cap-add=SYS_PTRACE --rm=true -v /proc:/host/proc:ro -v /sys:/host/sys:ro -v /var/run/docker.sock:/var/run/docker.sock dtagdevsec/netdata:latest1610
ExecStop=/usr/bin/docker stop netdata
[Install]
WantedBy=multi-user.target

View File

@ -0,0 +1,19 @@
[Unit]
Description=suricata
Requires=docker.service
After=docker.service
[Service]
Restart=always
ExecStartPre=-/usr/bin/docker stop suricata
ExecStartPre=-/usr/bin/docker rm -v suricata
# Get IF, disable offloading, enable promiscuous mode
ExecStartPre=/bin/bash -c '/sbin/ethtool --offload $(/sbin/ip route | /bin/grep $(/bin/hostname -I | /usr/bin/awk \'{print $1 }\') | /usr/bin/awk \'{print $3 }\') rx off tx off'
ExecStartPre=/bin/bash -c '/sbin/ethtool -K $(/sbin/ip route | /bin/grep $(/bin/hostname -I | /usr/bin/awk \'{print $1 }\') | /usr/bin/awk \'{print $3 }\') gso off gro off'
ExecStartPre=/bin/bash -c '/sbin/ip link set $(/sbin/ip route | /bin/grep $(/bin/hostname -I | /usr/bin/awk \'{print $1 }\') | /usr/bin/awk \'{print $3 }\') promisc on'
ExecStartPre=/bin/bash -c '/usr/bin/clean.sh suricata off'
ExecStart=/usr/bin/docker run --name suricata --cap-add=NET_ADMIN --net=host --rm=true -v /data/suricata:/data/suricata dtagdevsec/suricata:latest1610
ExecStop=/usr/bin/docker stop suricata
[Install]
WantedBy=multi-user.target
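
Offloading (rx/tx, gso, gro) is switched off and promiscuous mode enabled so suricata sees packets as they arrive on the wire instead of merged segments assembled by the NIC. The result can be checked on the capture interface (eth0 is only a placeholder):

  ethtool -k eth0 | grep offload
  ip link show eth0 | grep PROMISC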

View File

@ -0,0 +1,14 @@
[Unit]
Description=ui-for-docker
Requires=docker.service
After=docker.service
[Service]
Restart=always
ExecStartPre=-/usr/bin/docker stop ui-for-docker
ExecStartPre=-/usr/bin/docker rm -v ui-for-docker
ExecStart=/usr/bin/docker run --name ui-for-docker --rm=true -v /var/run/docker.sock:/var/run/docker.sock -p 127.0.0.1:64299:9000 dtagdevsec/ui-for-docker:latest1610
ExecStop=/usr/bin/docker stop ui-for-docker
[Install]
WantedBy=multi-user.target

View File

@ -1,144 +0,0 @@
#
# Run-time configuration file for dialog
#
# Automatically generated by "dialog --create-rc <file>"
#
#
# Types of values:
#
# Number - <number>
# String - "string"
# Boolean - <ON|OFF>
# Attribute - (foreground,background,highlight?)
# Set aspect-ratio.
aspect = 0
# Set separator (for multiple widgets output).
separate_widget = ""
# Set tab-length (for textbox tab-conversion).
tab_len = 0
# Make tab-traversal for checklist, etc., include the list.
visit_items = OFF
# Shadow dialog boxes? This also turns on color.
use_shadow = ON
# Turn color support ON or OFF
use_colors = ON
# Screen color
screen_color = (WHITE,MAGENTA,ON)
# Shadow color
shadow_color = (BLACK,BLACK,ON)
# Dialog box color
dialog_color = (BLACK,WHITE,OFF)
# Dialog box title color
title_color = (MAGENTA,WHITE,OFF)
# Dialog box border color
border_color = (WHITE,WHITE,ON)
# Active button color
button_active_color = (WHITE,MAGENTA,OFF)
# Inactive button color
button_inactive_color = dialog_color
# Active button key color
button_key_active_color = button_active_color
# Inactive button key color
button_key_inactive_color = (RED,WHITE,OFF)
# Active button label color
button_label_active_color = (YELLOW,MAGENTA,ON)
# Inactive button label color
button_label_inactive_color = (BLACK,WHITE,OFF)
# Input box color
inputbox_color = dialog_color
# Input box border color
inputbox_border_color = dialog_color
# Search box color
searchbox_color = dialog_color
# Search box title color
searchbox_title_color = title_color
# Search box border color
searchbox_border_color = border_color
# File position indicator color
position_indicator_color = title_color
# Menu box color
menubox_color = dialog_color
# Menu box border color
menubox_border_color = border_color
# Item color
item_color = dialog_color
# Selected item color
item_selected_color = button_active_color
# Tag color
tag_color = title_color
# Selected tag color
tag_selected_color = button_label_active_color
# Tag key color
tag_key_color = button_key_inactive_color
# Selected tag key color
tag_key_selected_color = (RED,MAGENTA,ON)
# Check box color
check_color = dialog_color
# Selected check box color
check_selected_color = button_active_color
# Up arrow color
uarrow_color = (MAGENTA,WHITE,ON)
# Down arrow color
darrow_color = uarrow_color
# Item help-text color
itemhelp_color = (WHITE,BLACK,OFF)
# Active form text color
form_active_text_color = button_active_color
# Form text color
form_text_color = (WHITE,CYAN,ON)
# Readonly form item color
form_item_readonly_color = (CYAN,WHITE,ON)
# Dialog box gauge color
gauge_color = title_color
# Dialog box border2 color
border2_color = dialog_color
# Input box border2 color
inputbox_border2_color = dialog_color
# Search box border2 color
searchbox_border2_color = dialog_color
# Menu box border2 color
menubox_border2_color = dialog_color

View File

@ -1,20 +1,16 @@

┌──────────────────────────────────────────────┐
│ _____ ____ _ _ _____ _ ___ │
│|_ _| | _ \\ ___ | |_ / |___ / |/ _ \\ │
│ | |_____| |_) / _ \\| __| | | / /| | | | |│
│ | |_____| __/ (_) | |_ | | / /_| | |_| |│
│ |_| |_| \\___/ \\__| |_|/_/(_)_|\\___/ │
│ │
└──────────────────────────────────────────────┘
,---- [ \n ] [ \d ] [ \t ]
|
| IP:
| SSH:
| WEB:
|
`----
T-Pot 16.10
Hostname: \n
___________ _____________________________
\\__ ___/ \\______ \\_____ \\__ ___/
| | ______ | ___// | \\| |
| | /_____/ | | / | \\ |
|____| |____| \\_______ /____|
\\/
IP:
SSH:
WEB:

View File

@ -104,22 +104,24 @@ server {
### Kibana
location /kibana/ {
proxy_pass http://localhost:64296;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_set_header Host $host;
rewrite /kibana/(.*)$ /$1 break;
}
### ES
location /es/ {
proxy_pass http://localhost:64298/;
rewrite /es/(.*)$ /$1 break;
}
### head standalone
### Head plugin
location /myhead/ {
proxy_pass http://localhost:64302/;
proxy_pass http://localhost:64298/;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_set_header Host $host;
rewrite /myhead/(.*)$ /$1 break;
}
### portainer
### ui-for-docker
location /ui {
proxy_pass http://127.0.0.1:64299;
proxy_http_version 1.1;
@ -132,24 +134,28 @@ server {
### web tty
location /wetty {
proxy_pass http://127.0.0.1:64300/wetty;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_read_timeout 43200000;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_set_header X-NginX-Proxy true;
}
### netdata
location /netdata/ {
proxy_set_header X-Forwarded-Host $host;
proxy_set_header X-Forwarded-Server $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_pass http://localhost:64301;
proxy_http_version 1.1;
proxy_pass_request_headers on;
proxy_set_header Connection "keep-alive";
proxy_store off;
rewrite /netdata/(.*)$ /$1 break;
}
### spiderfoot
location /spiderfoot {
proxy_pass http://127.0.0.1:64303;
}
location /static {
proxy_pass http://127.0.0.1:64303/spiderfoot/static;
}
location /scanviz {
proxy_pass http://127.0.0.1:64303/spiderfoot/scanviz;
}
}

View File

@ -1,2 +1,17 @@
#!/bin/bash
exit 0
# Let's add the first local IP to /etc/issue and the external IP to the ews.ip file
source /etc/environment
myLOCALIP=$(hostname -I | awk '{ print $1 }')
myEXTIP=$(/usr/bin/myip.sh)
sed -i "s#IP:.*#IP: $myLOCALIP ($myEXTIP)#" /etc/issue
sed -i "s#SSH:.*#SSH: ssh -l tsec -p 64295 $myLOCALIP#" /etc/issue
sed -i "s#WEB:.*#WEB: https://$myLOCALIP:64297#" /etc/issue
tee /data/ews/conf/ews.ip << EOF
[MAIN]
ip = $myEXTIP
EOF
echo $myLOCALIP > /data/elk/logstash/mylocal.ip
chown tpot:tpot /data/ews/conf/ews.ip
if [ -f /var/run/check.lock ];
then rm /var/run/check.lock
fi

View File

@ -1,313 +0,0 @@
# T-Pot (Everything)
# For docker-compose ...
version: '2.1'
networks:
conpot_local:
cowrie_local:
dionaea_local:
elasticpot_local:
emobility_local:
ewsposter_local:
glastopf_local:
mailoney_local:
rdpy_local:
spiderfoot_local:
ui-for-docker_local:
vnclowpot_local:
services:
# Conpot service
conpot:
container_name: conpot
restart: always
networks:
- conpot_local
ports:
- "1025:1025"
- "50100:50100"
image: "dtagdevsec/conpot:1710"
volumes:
- /data/conpot/log:/var/log/conpot
# Cowrie service
cowrie:
container_name: cowrie
restart: always
networks:
- cowrie_local
cap_add:
- NET_BIND_SERVICE
ports:
- "22:2222"
- "23:2223"
image: "dtagdevsec/cowrie:1710"
volumes:
- /data/cowrie/downloads:/home/cowrie/cowrie/dl
- /data/cowrie/keys:/home/cowrie/cowrie/etc
- /data/cowrie/log:/home/cowrie/cowrie/log
- /data/cowrie/log/tty:/home/cowrie/cowrie/log/tty
# Dionaea service
dionaea:
container_name: dionaea
stdin_open: true
restart: always
networks:
- dionaea_local
cap_add:
- NET_BIND_SERVICE
ports:
- "20:20"
- "21:21"
- "42:42"
- "69:69/udp"
- "8081:80"
- "135:135"
- "443:443"
- "445:445"
- "1433:1433"
- "1723:1723"
- "1883:1883"
- "1900:1900/udp"
- "3306:3306"
- "5060:5060"
- "5060:5060/udp"
- "5061:5061"
- "27017:27017"
image: "dtagdevsec/dionaea:1710"
volumes:
- /data/dionaea/roots/ftp:/opt/dionaea/var/dionaea/roots/ftp
- /data/dionaea/roots/tftp:/opt/dionaea/var/dionaea/roots/tftp
- /data/dionaea/roots/www:/opt/dionaea/var/dionaea/roots/www
- /data/dionaea/roots/upnp:/opt/dionaea/var/dionaea/roots/upnp
- /data/dionaea:/opt/dionaea/var/dionaea
- /data/dionaea/binaries:/opt/dionaea/var/dionaea/binaries
- /data/dionaea/log:/opt/dionaea/var/log
- /data/dionaea/rtp:/opt/dionaea/var/dionaea/rtp
# Elasticpot service
elasticpot:
container_name: elasticpot
restart: always
networks:
- elasticpot_local
ports:
- "9200:9200"
image: "dtagdevsec/elasticpot:1710"
volumes:
- /data/elasticpot/log:/opt/ElasticpotPY/log
# ELK services
## Elasticsearch service
elasticsearch:
container_name: elasticsearch
restart: always
environment:
- bootstrap.memory_lock=true
# - "ES_JAVA_OPTS=-Xms1g -Xmx1g"
cap_add:
- IPC_LOCK
ulimits:
memlock:
soft: -1
hard: -1
nofile:
soft: 65536
hard: 65536
# mem_limit: 2g
ports:
- "127.0.0.1:64298:9200"
image: "dtagdevsec/elasticsearch:1710"
volumes:
- /data:/data
## Kibana service
kibana:
container_name: kibana
restart: always
depends_on:
elasticsearch:
condition: service_healthy
ports:
- "127.0.0.1:64296:5601"
image: "dtagdevsec/kibana:1710"
## Logstash service
logstash:
container_name: logstash
restart: always
depends_on:
elasticsearch:
condition: service_healthy
env_file:
- /etc/tpot/elk/environment
image: "dtagdevsec/logstash:1710"
volumes:
- /data:/data
- /var/log:/data/host/log
## Elasticsearch-head service
head:
container_name: head
restart: always
depends_on:
elasticsearch:
condition: service_healthy
ports:
- "127.0.0.1:64302:9100"
image: "dtagdevsec/head:1710"
# Emobility service
emobility:
container_name: emobility
restart: always
networks:
- emobility_local
cap_add:
- NET_ADMIN
ports:
- "8080:8080"
image: "dtagdevsec/emobility:1710"
volumes:
- /data/emobility:/data/eMobility
- /data/ews:/data/ews
# Ewsposter service
ewsposter:
container_name: ewsposter
restart: always
networks:
- ewsposter_local
image: "dtagdevsec/ewsposter:1710"
volumes:
- /data:/data
- /data/ews/conf/ews.ip:/opt/ewsposter/ews.ip
# Glastopf service
glastopf:
container_name: glastopf
restart: always
networks:
- glastopf_local
ports:
- "80:80"
image: "dtagdevsec/glastopf:1710"
volumes:
- /data/glastopf/db:/opt/glastopf/db
- /data/glastopf/log:/opt/glastopf/log
# Honeytrap service
honeytrap:
container_name: honeytrap
restart: always
network_mode: "host"
cap_add:
- NET_ADMIN
image: "dtagdevsec/honeytrap:1710"
volumes:
- /data/honeytrap/attacks:/opt/honeytrap/var/attacks
- /data/honeytrap/downloads:/opt/honeytrap/var/downloads
- /data/honeytrap/log:/opt/honeytrap/var/log
# Mailoney service
mailoney:
container_name: mailoney
restart: always
networks:
- mailoney_local
ports:
- "25:2525"
image: "dtagdevsec/mailoney:1710"
volumes:
- /data/mailoney/log:/opt/mailoney/logs
# Netdata service
netdata:
container_name: netdata
restart: always
network_mode: "host"
depends_on:
elasticsearch:
condition: service_healthy
cap_add:
- SYS_PTRACE
security_opt:
- apparmor=unconfined
image: "dtagdevsec/netdata:1710"
volumes:
- /proc:/host/proc:ro
- /sys:/host/sys:ro
- /var/run/docker.sock:/var/run/docker.sock
# Rdpy service
rdpy:
container_name: rdpy
restart: always
networks:
- rdpy_local
ports:
- "3389:3389"
image: "dtagdevsec/rdpy:1710"
volumes:
- /data/rdpy/log:/var/log/rdpy
# Spiderfoot service
spiderfoot:
container_name: spiderfoot
restart: always
networks:
- spiderfoot_local
ports:
- "127.0.0.1:64303:8080"
image: "dtagdevsec/spiderfoot:1710"
volumes:
- /data/spiderfoot/spiderfoot.db:/home/spiderfoot/spiderfoot.db
# Ui-for-docker service
ui-for-docker:
container_name: ui-for-docker
command: -H unix:///var/run/docker.sock --no-auth
restart: always
networks:
- ui-for-docker_local
ports:
- "127.0.0.1:64299:9000"
image: "dtagdevsec/ui-for-docker:1710"
volumes:
- /var/run/docker.sock:/var/run/docker.sock
# Suricata service
suricata:
container_name: suricata
restart: always
network_mode: "host"
cap_add:
- NET_ADMIN
- SYS_NICE
- NET_RAW
image: "dtagdevsec/suricata:1710"
volumes:
- /data/suricata/log:/var/log/suricata
# P0f service
p0f:
container_name: p0f
restart: always
network_mode: "host"
image: "dtagdevsec/p0f:1710"
volumes:
- /data/p0f/log:/var/log/p0f
# Vnclowpot service
vnclowpot:
container_name: vnclowpot
restart: always
networks:
- vnclowpot_local
ports:
- "5900:5900"
image: "dtagdevsec/vnclowpot:1710"
volumes:
- /data/vnclowpot/log:/var/log/vnclowpot
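
This is the compose file for the "Everything" flavor. The tpot.service shown later in this diff brings it up and tears it down via /etc/tpot/tpot.yml; the same can be done manually, assuming the chosen flavor's file is installed at that path:

  docker-compose -f /etc/tpot/tpot.yml up -d
  docker-compose -f /etc/tpot/tpot.yml logs -f cowrie
  docker-compose -f /etc/tpot/tpot.yml down -v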

View File

@ -1,156 +0,0 @@
# T-Pot (HP)
# For docker-compose ...
version: '2.1'
networks:
cowrie_local:
dionaea_local:
elasticpot_local:
ewsposter_local:
glastopf_local:
mailoney_local:
rdpy_local:
vnclowpot_local:
services:
# Cowrie service
cowrie:
container_name: cowrie
restart: always
networks:
- cowrie_local
cap_add:
- NET_BIND_SERVICE
ports:
- "22:2222"
- "23:2223"
image: "dtagdevsec/cowrie:1710"
volumes:
- /data/cowrie/downloads:/home/cowrie/cowrie/dl
- /data/cowrie/keys:/home/cowrie/cowrie/etc
- /data/cowrie/log:/home/cowrie/cowrie/log
- /data/cowrie/log/tty:/home/cowrie/cowrie/log/tty
# Dionaea service
dionaea:
container_name: dionaea
stdin_open: true
restart: always
networks:
- dionaea_local
cap_add:
- NET_BIND_SERVICE
ports:
- "20:20"
- "21:21"
- "42:42"
- "69:69/udp"
- "8081:80"
- "135:135"
- "443:443"
- "445:445"
- "1433:1433"
- "1723:1723"
- "1883:1883"
- "1900:1900/udp"
- "3306:3306"
- "5060:5060"
- "5060:5060/udp"
- "5061:5061"
- "27017:27017"
image: "dtagdevsec/dionaea:1710"
volumes:
- /data/dionaea/roots/ftp:/opt/dionaea/var/dionaea/roots/ftp
- /data/dionaea/roots/tftp:/opt/dionaea/var/dionaea/roots/tftp
- /data/dionaea/roots/www:/opt/dionaea/var/dionaea/roots/www
- /data/dionaea/roots/upnp:/opt/dionaea/var/dionaea/roots/upnp
- /data/dionaea:/opt/dionaea/var/dionaea
- /data/dionaea/binaries:/opt/dionaea/var/dionaea/binaries
- /data/dionaea/log:/opt/dionaea/var/log
- /data/dionaea/rtp:/opt/dionaea/var/dionaea/rtp
# Elasticpot service
elasticpot:
container_name: elasticpot
restart: always
networks:
- elasticpot_local
ports:
- "9200:9200"
image: "dtagdevsec/elasticpot:1710"
volumes:
- /data/elasticpot/log:/opt/ElasticpotPY/log
# Ewsposter service
ewsposter:
container_name: ewsposter
restart: always
networks:
- ewsposter_local
image: "dtagdevsec/ewsposter:1710"
volumes:
- /data:/data
- /data/ews/conf/ews.ip:/opt/ewsposter/ews.ip
# Glastopf service
glastopf:
container_name: glastopf
restart: always
networks:
- glastopf_local
ports:
- "80:80"
image: "dtagdevsec/glastopf:1710"
volumes:
- /data/glastopf/db:/opt/glastopf/db
- /data/glastopf/log:/opt/glastopf/log
# Honeytrap service
honeytrap:
container_name: honeytrap
restart: always
network_mode: "host"
cap_add:
- NET_ADMIN
image: "dtagdevsec/honeytrap:1710"
volumes:
- /data/honeytrap/attacks:/opt/honeytrap/var/attacks
- /data/honeytrap/downloads:/opt/honeytrap/var/downloads
- /data/honeytrap/log:/opt/honeytrap/var/log
# Mailoney service
mailoney:
container_name: mailoney
restart: always
networks:
- mailoney_local
ports:
- "25:2525"
image: "dtagdevsec/mailoney:1710"
volumes:
- /data/mailoney/log:/opt/mailoney/logs
# Rdpy service
rdpy:
container_name: rdpy
restart: always
networks:
- rdpy_local
ports:
- "3389:3389"
image: "dtagdevsec/rdpy:1710"
volumes:
- /data/rdpy/log:/var/log/rdpy
# Vnclowpot service
vnclowpot:
container_name: vnclowpot
restart: always
networks:
- vnclowpot_local
ports:
- "5900:5900"
image: "dtagdevsec/vnclowpot:1710"
volumes:
- /data/vnclowpot/log:/var/log/vnclowpot

View File

@ -1,176 +0,0 @@
# T-Pot (Industrial)
# For docker-compose ...
version: '2.1'
networks:
conpot_local:
emobility_local:
ewsposter_local:
spiderfoot_local:
ui-for-docker_local:
services:
# Conpot service
conpot:
container_name: conpot
restart: always
networks:
- conpot_local
ports:
- "1025:1025"
- "50100:50100"
image: "dtagdevsec/conpot:1710"
volumes:
- /data/conpot/log:/var/log/conpot
# ELK services
## Elasticsearch service
elasticsearch:
container_name: elasticsearch
restart: always
environment:
- bootstrap.memory_lock=true
# - "ES_JAVA_OPTS=-Xms1g -Xmx1g"
cap_add:
- IPC_LOCK
ulimits:
memlock:
soft: -1
hard: -1
nofile:
soft: 65536
hard: 65536
# mem_limit: 2g
ports:
- "127.0.0.1:64298:9200"
image: "dtagdevsec/elasticsearch:1710"
volumes:
- /data:/data
## Kibana service
kibana:
container_name: kibana
restart: always
depends_on:
elasticsearch:
condition: service_healthy
ports:
- "127.0.0.1:64296:5601"
image: "dtagdevsec/kibana:1710"
## Logstash service
logstash:
container_name: logstash
restart: always
depends_on:
elasticsearch:
condition: service_healthy
env_file:
- /etc/tpot/elk/environment
image: "dtagdevsec/logstash:1710"
volumes:
- /data:/data
- /var/log:/data/host/log
## Elasticsearch-head service
head:
container_name: head
restart: always
depends_on:
elasticsearch:
condition: service_healthy
ports:
- "127.0.0.1:64302:9100"
image: "dtagdevsec/head:1710"
# Emobility service
emobility:
container_name: emobility
restart: always
networks:
- emobility_local
cap_add:
- NET_ADMIN
ports:
- "8080:8080"
image: "dtagdevsec/emobility:1710"
volumes:
- /data/emobility:/data/eMobility
- /data/ews:/data/ews
# Ewsposter service
ewsposter:
container_name: ewsposter
restart: always
networks:
- ewsposter_local
image: "dtagdevsec/ewsposter:1710"
volumes:
- /data:/data
- /data/ews/conf/ews.ip:/opt/ewsposter/ews.ip
# Netdata service
netdata:
container_name: netdata
restart: always
network_mode: "host"
depends_on:
elasticsearch:
condition: service_healthy
cap_add:
- SYS_PTRACE
security_opt:
- apparmor=unconfined
image: "dtagdevsec/netdata:1710"
volumes:
- /proc:/host/proc:ro
- /sys:/host/sys:ro
- /var/run/docker.sock:/var/run/docker.sock
# Spiderfoot service
spiderfoot:
container_name: spiderfoot
restart: always
networks:
- spiderfoot_local
ports:
- "127.0.0.1:64303:8080"
image: "dtagdevsec/spiderfoot:1710"
volumes:
- /data/spiderfoot/spiderfoot.db:/home/spiderfoot/spiderfoot.db
# Ui-for-docker service
ui-for-docker:
container_name: ui-for-docker
command: -H unix:///var/run/docker.sock --no-auth
restart: always
networks:
- ui-for-docker_local
ports:
- "127.0.0.1:64299:9000"
image: "dtagdevsec/ui-for-docker:1710"
volumes:
- /var/run/docker.sock:/var/run/docker.sock
# Suricata service
suricata:
container_name: suricata
restart: always
network_mode: "host"
cap_add:
- NET_ADMIN
- SYS_NICE
- NET_RAW
image: "dtagdevsec/suricata:1710"
volumes:
- /data/suricata/log:/var/log/suricata
# P0f service
p0f:
container_name: p0f
restart: always
network_mode: "host"
image: "dtagdevsec/p0f:1710"
volumes:
- /data/p0f/log:/var/log/p0f

View File

@ -1,283 +0,0 @@
# T-Pot (Standard)
# For docker-compose ...
version: '2.1'
networks:
cowrie_local:
dionaea_local:
elasticpot_local:
ewsposter_local:
glastopf_local:
mailoney_local:
rdpy_local:
spiderfoot_local:
ui-for-docker_local:
vnclowpot_local:
services:
# Cowrie service
cowrie:
container_name: cowrie
restart: always
networks:
- cowrie_local
cap_add:
- NET_BIND_SERVICE
ports:
- "22:2222"
- "23:2223"
image: "dtagdevsec/cowrie:1710"
volumes:
- /data/cowrie/downloads:/home/cowrie/cowrie/dl
- /data/cowrie/keys:/home/cowrie/cowrie/etc
- /data/cowrie/log:/home/cowrie/cowrie/log
- /data/cowrie/log/tty:/home/cowrie/cowrie/log/tty
# Dionaea service
dionaea:
container_name: dionaea
stdin_open: true
restart: always
networks:
- dionaea_local
cap_add:
- NET_BIND_SERVICE
ports:
- "20:20"
- "21:21"
- "42:42"
- "69:69/udp"
- "8081:80"
- "135:135"
- "443:443"
- "445:445"
- "1433:1433"
- "1723:1723"
- "1883:1883"
- "1900:1900/udp"
- "3306:3306"
- "5060:5060"
- "5060:5060/udp"
- "5061:5061"
- "27017:27017"
image: "dtagdevsec/dionaea:1710"
volumes:
- /data/dionaea/roots/ftp:/opt/dionaea/var/dionaea/roots/ftp
- /data/dionaea/roots/tftp:/opt/dionaea/var/dionaea/roots/tftp
- /data/dionaea/roots/www:/opt/dionaea/var/dionaea/roots/www
- /data/dionaea/roots/upnp:/opt/dionaea/var/dionaea/roots/upnp
- /data/dionaea:/opt/dionaea/var/dionaea
- /data/dionaea/binaries:/opt/dionaea/var/dionaea/binaries
- /data/dionaea/log:/opt/dionaea/var/log
- /data/dionaea/rtp:/opt/dionaea/var/dionaea/rtp
# Elasticpot service
elasticpot:
container_name: elasticpot
restart: always
networks:
- elasticpot_local
ports:
- "9200:9200"
image: "dtagdevsec/elasticpot:1710"
volumes:
- /data/elasticpot/log:/opt/ElasticpotPY/log
# ELK services
## Elasticsearch service
elasticsearch:
container_name: elasticsearch
restart: always
environment:
- bootstrap.memory_lock=true
- "ES_JAVA_OPTS=-Xms512m -Xmx512m"
cap_add:
- IPC_LOCK
ulimits:
memlock:
soft: -1
hard: -1
nofile:
soft: 65536
hard: 65536
# mem_limit: 2g
ports:
- "127.0.0.1:64298:9200"
image: "dtagdevsec/elasticsearch:1710"
volumes:
- /data:/data
## Kibana service
kibana:
container_name: kibana
restart: always
depends_on:
elasticsearch:
condition: service_healthy
ports:
- "127.0.0.1:64296:5601"
image: "dtagdevsec/kibana:1710"
## Logstash service
logstash:
container_name: logstash
restart: always
depends_on:
elasticsearch:
condition: service_healthy
env_file:
- /etc/tpot/elk/environment
image: "dtagdevsec/logstash:1710"
volumes:
- /data:/data
- /var/log:/data/host/log
## Elasticsearch-head service
head:
container_name: head
restart: always
depends_on:
elasticsearch:
condition: service_healthy
ports:
- "127.0.0.1:64302:9100"
image: "dtagdevsec/head:1710"
# Ewsposter service
ewsposter:
container_name: ewsposter
restart: always
networks:
- ewsposter_local
image: "dtagdevsec/ewsposter:1710"
volumes:
- /data:/data
- /data/ews/conf/ews.ip:/opt/ewsposter/ews.ip
# Glastopf service
glastopf:
container_name: glastopf
restart: always
networks:
- glastopf_local
ports:
- "80:80"
image: "dtagdevsec/glastopf:1710"
volumes:
- /data/glastopf/db:/opt/glastopf/db
- /data/glastopf/log:/opt/glastopf/log
# Honeytrap service
honeytrap:
container_name: honeytrap
restart: always
network_mode: "host"
cap_add:
- NET_ADMIN
image: "dtagdevsec/honeytrap:1710"
volumes:
- /data/honeytrap/attacks:/opt/honeytrap/var/attacks
- /data/honeytrap/downloads:/opt/honeytrap/var/downloads
- /data/honeytrap/log:/opt/honeytrap/var/log
# Mailoney service
mailoney:
container_name: mailoney
restart: always
networks:
- mailoney_local
ports:
- "25:2525"
image: "dtagdevsec/mailoney:1710"
volumes:
- /data/mailoney/log:/opt/mailoney/logs
# Netdata service
netdata:
container_name: netdata
restart: always
network_mode: "host"
depends_on:
elasticsearch:
condition: service_healthy
cap_add:
- SYS_PTRACE
security_opt:
- apparmor=unconfined
image: "dtagdevsec/netdata:1710"
volumes:
- /proc:/host/proc:ro
- /sys:/host/sys:ro
- /var/run/docker.sock:/var/run/docker.sock
# Rdpy service
rdpy:
container_name: rdpy
restart: always
networks:
- rdpy_local
ports:
- "3389:3389"
image: "dtagdevsec/rdpy:1710"
volumes:
- /data/rdpy/log:/var/log/rdpy
# Spiderfoot service
spiderfoot:
container_name: spiderfoot
restart: always
networks:
- spiderfoot_local
ports:
- "127.0.0.1:64303:8080"
image: "dtagdevsec/spiderfoot:1710"
volumes:
- /data/spiderfoot/spiderfoot.db:/home/spiderfoot/spiderfoot.db
# Ui-for-docker service
ui-for-docker:
container_name: ui-for-docker
command: -H unix:///var/run/docker.sock --no-auth
restart: always
networks:
- ui-for-docker_local
ports:
- "127.0.0.1:64299:9000"
image: "dtagdevsec/ui-for-docker:1710"
volumes:
- /var/run/docker.sock:/var/run/docker.sock
# Suricata service
suricata:
container_name: suricata
restart: always
network_mode: "host"
cap_add:
- NET_ADMIN
- SYS_NICE
- NET_RAW
image: "dtagdevsec/suricata:1710"
volumes:
- /data/suricata/log:/var/log/suricata
# P0f service
p0f:
container_name: p0f
restart: always
network_mode: "host"
image: "dtagdevsec/p0f:1710"
volumes:
- /data/p0f/log:/var/log/p0f
# Vnclowpot service
vnclowpot:
container_name: vnclowpot
restart: always
networks:
- vnclowpot_local
ports:
- "5900:5900"
image: "dtagdevsec/vnclowpot:1710"
volumes:
- /data/vnclowpot/log:/var/log/vnclowpot

View File

@ -1,26 +0,0 @@
# Remember, leave a key empty if there is no value. None will be a string,
# not a Python "NoneType"
#
# Also remember that all examples have 'disable_action' set to True. If you
# want to use this action as a template, be sure to set this to False after
# copying it.
actions:
1:
action: delete_indices
description: >-
Delete indices older than 90 days (based on index name), for logstash-
prefixed indices. Ignore the error if the filter does not result in an
actionable list of indices (ignore_empty_list) and exit cleanly.
options:
ignore_empty_list: True
disable_action: False
filters:
- filtertype: pattern
kind: prefix
value: logstash-
- filtertype: age
source: name
direction: older
timestring: '%Y.%m.%d'
unit: days
unit_count: 90

View File

@ -1,21 +0,0 @@
# Remember, leave a key empty if there is no value. None will be a string,
# not a Python "NoneType"
client:
hosts:
- 127.0.0.1
port: 64298
url_prefix:
use_ssl: False
certificate:
client_cert:
client_key:
ssl_no_validate: False
http_auth:
timeout: 30
master_only: False
logging:
loglevel: INFO
logfile: /var/log/curator.log
logformat: default
blacklist: ['elasticsearch', 'urllib3']
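
Together these two files drive Elasticsearch Curator: the action file deletes logstash- indices older than 90 days, while the client config points Curator at the local Elasticsearch on port 64298. Assuming they are installed as actions.yml and curator.yml (paths are not shown in this diff), the cleanup can be rehearsed without deleting anything:

  curator --config curator.yml --dry-run actions.yml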

Binary file not shown.

View File

@ -1,38 +0,0 @@
/data/conpot/log/conpot.json
/data/conpot/log/conpot.log
/data/cowrie/log/cowrie.json
/data/cowrie/log/cowrie-textlog.log
/data/cowrie/log/lastlog.txt
/data/cowrie/log/ttylogs.tgz
/data/cowrie/downloads.tgz
/data/dionaea/log/dionaea.json
/data/dionaea/log/dionaea.sqlite
/data/dionaea/bistreams.tgz
/data/dionaea/binaries.tgz
/data/dionaea/dionaea-errors.log
/data/elasticpot/log/elasticpot.log
/data/elk/log/*.log
/data/emobility/log/centralsystem.log
/data/emobility/log/centralsystemEWS.log
/data/glastopf/log/glastopf.log
/data/glastopf/db/glastopf.db
/data/honeytrap/log/*.log
/data/honeytrap/log/*.json
/data/honeytrap/attacks.tgz
/data/honeytrap/downloads.tgz
/data/mailoney/log/commands.log
/data/p0f/log/p0f.json
/data/rdpy/log/rdpy.log
/data/suricata/log/*.log
/data/suricata/log/*.json
/data/vnclowpot/log/vnclowpot.log
{
su tpot tpot
copytruncate
create 760 tpot tpot
daily
missingok
notifempty
rotate 30
compress
}
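
Assuming the block above is installed as /etc/logrotate.d/tpot (the path is not shown in this diff), a dry run shows what would be rotated, and a forced run rotates immediately:

  logrotate -d /etc/logrotate.d/tpot
  logrotate -f /etc/logrotate.d/tpot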

View File

@ -1,57 +0,0 @@
[Unit]
Description=tpot
Requires=docker.service
After=docker.service
[Service]
Restart=always
RestartSec=5
# Get and set internal and external IP info, but ignore errors
ExecStartPre=-/usr/share/tpot/bin/updateip.sh
# Clear state or if persistence is enabled rotate and compress logs from /data
ExecStartPre=-/bin/bash -c '/usr/share/tpot/bin/clean.sh on'
# Remove old containers, images and volumes
ExecStartPre=-/usr/local/bin/docker-compose -f /etc/tpot/tpot.yml down -v
ExecStartPre=-/usr/local/bin/docker-compose -f /etc/tpot/tpot.yml rm -v
ExecStartPre=-/bin/bash -c 'docker volume rm $(docker volume ls -q)'
ExecStartPre=-/bin/bash -c 'docker rm -v $(docker ps -aq)'
ExecStartPre=-/bin/bash -c 'docker rmi $(docker images | grep "<none>" | awk \'{print $3}\')'
# Get IF, disable offloading, enable promiscuous mode for p0f and suricata
ExecStartPre=/bin/bash -c '/sbin/ethtool --offload $(/sbin/ip address | grep "^2: " | awk \'{ print $2 }\' | tr -d [:punct:]) rx off tx off'
ExecStartPre=/bin/bash -c '/sbin/ethtool -K $(/sbin/ip address | grep "^2: " | awk \'{ print $2 }\' | tr -d [:punct:]) gso off gro off'
ExecStartPre=/bin/bash -c '/sbin/ip link set $(/sbin/ip address | grep "^2: " | awk \'{ print $2 }\' | tr -d [:punct:]) promisc on'
# Modify access rights on docker.sock for netdata
ExecStartPre=-/bin/chmod 666 /var/run/docker.sock
# Set iptables accept rules to avoid forwarding to honeytrap / NFQUEUE
# Forward all other connections to honeytrap / NFQUEUE
ExecStartPre=/sbin/iptables -w -A INPUT -s 127.0.0.1 -j ACCEPT
ExecStartPre=/sbin/iptables -w -A INPUT -d 127.0.0.1 -j ACCEPT
ExecStartPre=/sbin/iptables -w -A INPUT -p tcp -m multiport --dports 64295:64303,7634 -j ACCEPT
ExecStartPre=/sbin/iptables -w -A INPUT -p tcp -m multiport --dports 20:23,25,42,69,80,135,443,445,1433,1723,1883,1900 -j ACCEPT
ExecStartPre=/sbin/iptables -w -A INPUT -p tcp -m multiport --dports 3306,3389,5060,5061,5601,5900,27017 -j ACCEPT
ExecStartPre=/sbin/iptables -w -A INPUT -p tcp -m multiport --dports 1025,50100,8080,8081,9200 -j ACCEPT
ExecStartPre=/sbin/iptables -w -A INPUT -p tcp --syn -m state --state NEW -j NFQUEUE
# Compose T-Pot up
ExecStart=/usr/local/bin/docker-compose -f /etc/tpot/tpot.yml up --no-color
# Compose T-Pot down, remove containers and volumes
ExecStop=/usr/local/bin/docker-compose -f /etc/tpot/tpot.yml down -v
# Remove only previously set iptables rules
ExecStopPost=/sbin/iptables -w -D INPUT -s 127.0.0.1 -j ACCEPT
ExecStopPost=/sbin/iptables -w -D INPUT -d 127.0.0.1 -j ACCEPT
ExecStopPost=/sbin/iptables -w -D INPUT -p tcp -m multiport --dports 64295:64303,7634 -j ACCEPT
ExecStopPost=/sbin/iptables -w -D INPUT -p tcp -m multiport --dports 20:23,25,42,69,80,135,443,445,1433,1723,1883,1900 -j ACCEPT
ExecStopPost=/sbin/iptables -w -D INPUT -p tcp -m multiport --dports 3306,3389,5060,5061,5601,5900,27017 -j ACCEPT
ExecStopPost=/sbin/iptables -w -D INPUT -p tcp -m multiport --dports 1025,50100,8080,8081,9200 -j ACCEPT
ExecStopPost=/sbin/iptables -w -D INPUT -p tcp --syn -m state --state NEW -j NFQUEUE
[Install]
WantedBy=multi-user.target

View File

@ -1,12 +1,10 @@
#!/bin/bash
# T-Pot post install script
# Set TERM, DIALOGRC
export TERM=linux
export DIALOGRC=/etc/dialogrc
# Let's load dialog color theme
cp /root/tpot/etc/dialogrc /etc/
########################################################
# T-Pot post install script #
# Ubuntu server 16.04.0, x64 #
# #
# v16.10.0 by mo, DTAG, 2016-12-03 #
########################################################
# Some global vars
myPROXYFILEPATH="/root/tpot/etc/proxy"
@ -14,9 +12,15 @@ myNTPCONFPATH="/root/tpot/etc/ntp"
myPFXPATH="/root/tpot/keys/8021x.pfx"
myPFXPWPATH="/root/tpot/keys/8021x.pw"
myPFXHOSTIDPATH="/root/tpot/keys/8021x.id"
myBACKTITLE="T-Pot-Installer"
mySITES="https://index.docker.io https://github.com https://pypi.python.org https://ubuntu.com"
myPROGRESSBOXCONF=" --backtitle "$myBACKTITLE" --progressbox 24 80"
# Let's create a function for colorful output
fuECHO () {
local myRED=1
local myWHT=7
tput setaf $myRED -T xterm
echo "$1" "$2"
tput setaf $myWHT -T xterm
}
fuRANDOMWORD () {
local myWORDFILE="$1"
@ -26,18 +30,124 @@ fuRANDOMWORD () {
echo -n $(sed -n "$myNUM p" $myWORDFILE | tr -d \' | tr A-Z a-z)
}
# Let's make sure there is a warning if running for a second time
if [ -f install.log ];
then fuECHO "### Running more than once may complicate things. Erase install.log if you are really sure."
exit 1;
fi
# Let's log for the beauty of it
set -e
exec 2> >(tee "install.err")
exec > >(tee "install.log")
# Let's remove NGINX default website
fuECHO "### Removing NGINX default website."
rm /etc/nginx/sites-enabled/default
rm /etc/nginx/sites-available/default
rm /usr/share/nginx/html/index.html
# Let's wait a few seconds to avoid interference with service messages
sleep 3
tput civis
dialog --no-ok --no-cancel --backtitle "$myBACKTITLE" --title "[ Wait to avoid interference with service messages ]" --pause "" 6 80 7
fuECHO "### Waiting a few seconds to avoid interference with service messages."
sleep 5
# Let's ask user for install type
# Install types are TPOT, HP, INDUSTRIAL, ALL
while [ 1 != 2 ]
do
fuECHO "### Please choose your install type and notice HW recommendation."
fuECHO
fuECHO " [T] - T-Pot Standard Installation"
fuECHO " - Cowrie, Dionaea, Elasticpot, Glastopf, Honeytrap, Suricata & ELK"
fuECHO " - 4 GB RAM (6-8 GB recommended)"
fuECHO " - 64GB disk (128 GB SSD recommended)"
fuECHO
fuECHO " [H] - Honeypots Only Installation"
fuECHO " - Cowrie, Dionaea, ElasticPot, Glastopf & Honeytrap"
fuECHO " - 3 GB RAM (4-6 GB recommended)"
fuECHO " - 64 GB disk (64 GB SSD recommended)"
fuECHO
fuECHO " [I] - Industrial"
fuECHO " - ConPot, eMobility, ELK & Suricata"
fuECHO " - 4 GB RAM (8 GB recommended)"
fuECHO " - 64 GB disk (128 GB SSD recommended)"
fuECHO
fuECHO " [E] - Everything"
fuECHO " - All of the above"
fuECHO " - 8 GB RAM"
fuECHO " - 128 GB disk or larger (128 GB SSD or larger recommended)"
fuECHO
read -p "Install Type: " myTYPE
case "$myTYPE" in
[t,T])
myFLAVOR="TPOT"
break
;;
[h,H])
myFLAVOR="HP"
break
;;
[i,I])
myFLAVOR="INDUSTRIAL"
break
;;
[e,E])
myFLAVOR="ALL"
break
;;
esac
done
fuECHO "### You chose: "$myFLAVOR
fuECHO
# Let's ask user for a web user and password
myOK="n"
myUSER="tsec"
while [ 1 != 2 ]
do
fuECHO "### Please enter a web user name and password."
read -p "Username (tsec not allowed): " myUSER
echo "Your username is: "$myUSER
fuECHO
read -p "OK (y/n)? " myOK
fuECHO
if [ "$myOK" = "y" ] && [ "$myUSER" != "tsec" ] && [ "$myUSER" != "" ];
then
break
fi
done
myPASS1="pass1"
myPASS2="pass2"
while [ "$myPASS1" != "$myPASS2" ]
do
while [ "$myPASS1" == "pass1" ] || [ "$myPASS1" == "" ]
do
read -s -p "Password: " myPASS1
fuECHO
done
read -s -p "Repeat password: " myPASS2
fuECHO
if [ "$myPASS1" != "$myPASS2" ];
then
fuECHO "### Passwords do not match."
myPASS1="pass1"
myPASS2="pass2"
fi
done
htpasswd -b -c /etc/nginx/nginxpasswd $myUSER $myPASS1
fuECHO
# Let's generate a SSL certificate
fuECHO "### Generating a self-signed-certificate for NGINX."
fuECHO "### If you are unsure you can use the default values."
mkdir -p /etc/nginx/ssl
openssl req -nodes -x509 -sha512 -newkey rsa:8192 -keyout "/etc/nginx/ssl/nginx.key" -out "/etc/nginx/ssl/nginx.crt" -days 3650
# Let's setup the proxy for env
if [ -f $myPROXYFILEPATH ];
then
dialog --title "[ Setting up the proxy ]" $myPROGRESSBOXCONF <<EOF
EOF
then fuECHO "### Setting up the proxy."
myPROXY=$(cat $myPROXYFILEPATH)
tee -a /etc/environment 2>&1>/dev/null <<EOF
tee -a /etc/environment <<EOF
export http_proxy=$myPROXY
export https_proxy=$myPROXY
export HTTP_PROXY=$myPROXY
@ -47,189 +157,31 @@ EOF
source /etc/environment
# Let's setup the proxy for apt
tee /etc/apt/apt.conf 2>&1>/dev/null <<EOF
tee /etc/apt/apt.conf <<EOF
Acquire::http::Proxy "$myPROXY";
Acquire::https::Proxy "$myPROXY";
EOF
# Let's add proxy settings to docker defaults
myPROXY=$(cat $myPROXYFILEPATH)
tee -a /etc/default/docker 2>&1>/dev/null <<EOF
http_proxy=$myPROXY
https_proxy=$myPROXY
HTTP_PROXY=$myPROXY
HTTPS_PROXY=$myPROXY
no_proxy=localhost,127.0.0.1,.sock
EOF
# Let's restart docker for proxy changes to take effect
systemctl stop docker 2>&1 | dialog --title "[ Stop docker service ]" $myPROGRESSBOXCONF
systemctl start docker 2>&1 | dialog --title "[ Start docker service ]" $myPROGRESSBOXCONF
fi
# Let's test the internet connection
mySITESCOUNT=$(echo $mySITES | wc -w)
j=0
for i in $mySITES;
do
dialog --title "[ Testing the internet connection ]" --backtitle "$myBACKTITLE" \
--gauge "\n Now checking: $i\n" 8 80 $(expr 100 \* $j / $mySITESCOUNT) <<EOF
EOF
curl --connect-timeout 5 -IsS $i 2>&1>/dev/null
if [ $? -ne 0 ];
then
dialog --backtitle "$myBACKTITLE" --title "[ Continue? ]" --yesno "\nInternet connection test failed. This might indicate some problems with your connection. You can continue, but the installation might fail." 10 50
if [ $? = 1 ];
then
dialog --backtitle "$myBACKTITLE" --title "[ Abort ]" --msgbox "\nInstallation aborted. Exiting the installer." 7 50
exit
else
break;
fi;
fi;
let j+=1
dialog --title "[ Testing the internet connection ]" --backtitle "$myBACKTITLE" \
--gauge "\n Now checking: $i\n" 8 80 $(expr 100 \* $j / $mySITESCOUNT) <<EOF
EOF
done;
# Let's remove NGINX default website
#fuECHO "### Removing NGINX default website."
rm -rf /etc/nginx/sites-enabled/default 2>&1 | dialog --title "[ Removing NGINX default website. ]" $myPROGRESSBOXCONF;
rm -rf /etc/nginx/sites-available/default 2>&1 | dialog --title "[ Removing NGINX default website. ]" $myPROGRESSBOXCONF;
rm -rf /usr/share/nginx/html/index.html 2>&1 | dialog --title "[ Removing NGINX default website. ]" $myPROGRESSBOXCONF;
# Let's ask user for install flavor
# Install types are TPOT, HP, INDUSTRIAL, ALL
tput cnorm
myFLAVOR=$(dialog --no-cancel --backtitle "$myBACKTITLE" --title "[ Choose your edition ]" --no-tags --menu \
"\nRequired: 4GB RAM, 64GB disk\nRecommended: 8GB RAM, 128GB SSD" 14 60 4 \
"TPOT" "Standard Honeypots, Suricata & ELK" \
"HP" "Honeypots only, w/o Suricata & ELK" \
"INDUSTRIAL" "Conpot, eMobility, Suricata & ELK" \
"EVERYTHING" "Everything" 3>&1 1>&2 2>&3 3>&-)
# Let's ask for a secure tsec password
myUSER="tsec"
myPASS1="pass1"
myPASS2="pass2"
mySECURE="0"
while [ "$myPASS1" != "$myPASS2" ] && [ "$mySECURE" == "0" ]
do
while [ "$myPASS1" == "pass1" ] || [ "$myPASS1" == "" ]
do
myPASS1=$(dialog --insecure --backtitle "$myBACKTITLE" \
--title "[ Enter password for console user (tsec) ]" \
--passwordbox "\nPassword" 9 60 3>&1 1>&2 2>&3 3>&-)
done
myPASS2=$(dialog --insecure --backtitle "$myBACKTITLE" \
--title "[ Repeat password for console user (tsec) ]" \
--passwordbox "\nPassword" 9 60 3>&1 1>&2 2>&3 3>&-)
if [ "$myPASS1" != "$myPASS2" ];
then
dialog --backtitle "$myBACKTITLE" --title "[ Passwords do not match. ]" \
--msgbox "\nPlease re-enter your password." 7 60
myPASS1="pass1"
myPASS2="pass2"
fi
mySECURE=$(printf "%s" "$myPASS1" | cracklib-check | grep -c "OK")
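  # cracklib-check echoes "<password>: OK" for passwords that pass its dictionary
  # checks, so a grep count of 1 marks the chosen password as reasonably secure.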
if [ "$mySECURE" == "0" ] && [ "$myPASS1" == "$myPASS2" ];
then
dialog --backtitle "$myBACKTITLE" --title "[ Password is not secure ]" --defaultno --yesno "\nKeep insecure password?" 7 50
myOK=$?
if [ "$myOK" == "1" ];
then
myPASS1="pass1"
myPASS2="pass2"
fi
fi
done
printf "%s" "$myUSER:$myPASS1" | chpasswd
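# chpasswd reads "user:password" pairs from stdin and sets the console password
# for the tsec user accordingly.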
# Let's ask for a web username with secure password
myOK="1"
myUSER="tsec"
myPASS1="pass1"
myPASS2="pass2"
mySECURE="0"
while [ 1 != 2 ]
do
myUSER=$(dialog --backtitle "$myBACKTITLE" --title "[ Enter your web user name ]" --inputbox "\nUsername (tsec not allowed)" 9 50 3>&1 1>&2 2>&3 3>&-)
myUSER=$(echo $myUSER | tr -cd "[:alnum:]_.-")
dialog --backtitle "$myBACKTITLE" --title "[ Your username is ]" --yesno "\n$myUSER" 7 50
myOK=$?
if [ "$myOK" = "0" ] && [ "$myUSER" != "tsec" ] && [ "$myUSER" != "" ];
then
break
fi
done
while [ "$myPASS1" != "$myPASS2" ] && [ "$mySECURE" == "0" ]
do
while [ "$myPASS1" == "pass1" ] || [ "$myPASS1" == "" ]
do
myPASS1=$(dialog --insecure --backtitle "$myBACKTITLE" \
--title "[ Enter password for your web user ]" \
--passwordbox "\nPassword" 9 60 3>&1 1>&2 2>&3 3>&-)
done
myPASS2=$(dialog --insecure --backtitle "$myBACKTITLE" \
--title "[ Repeat password for your web user ]" \
--passwordbox "\nPassword" 9 60 3>&1 1>&2 2>&3 3>&-)
if [ "$myPASS1" != "$myPASS2" ];
then
dialog --backtitle "$myBACKTITLE" --title "[ Passwords do not match. ]" \
--msgbox "\nPlease re-enter your password." 7 60
myPASS1="pass1"
myPASS2="pass2"
fi
mySECURE=$(printf "%s" "$myPASS1" | cracklib-check | grep -c "OK")
if [ "$mySECURE" == "0" ] && [ "$myPASS1" == "$myPASS2" ];
then
dialog --backtitle "$myBACKTITLE" --title "[ Password is not secure ]" --defaultno --yesno "\nKeep insecure password?" 7 50
myOK=$?
if [ "$myOK" == "1" ];
then
myPASS1="pass1"
myPASS2="pass2"
fi
fi
done
htpasswd -b -c /etc/nginx/nginxpasswd "$myUSER" "$myPASS1" 2>&1 | dialog --title "[ Setting up user and password ]" $myPROGRESSBOXCONF;
# Let's generate a self-signed SSL certificate without interaction (browsers will flag it as invalid anyway)
tput civis
mkdir -p /etc/nginx/ssl 2>&1 | dialog --title "[ Generating a self-signed-certificate for NGINX ]" $myPROGRESSBOXCONF;
openssl req \
-nodes \
-x509 \
-sha512 \
-newkey rsa:8192 \
-keyout "/etc/nginx/ssl/nginx.key" \
-out "/etc/nginx/ssl/nginx.crt" \
-days 3650 \
-subj '/C=AU/ST=Some-State/O=Internet Widgits Pty Ltd' 2>&1 | dialog --title "[ Generating a self-signed-certificate for NGINX ]" $myPROGRESSBOXCONF;
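# -subj supplies the certificate subject up front so openssl does not prompt for
# country, state or organisation during the unattended install.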
# Let's setup the ntp server
if [ -f $myNTPCONFPATH ];
then
dialog --title "[ Setting up the ntp server ]" $myPROGRESSBOXCONF <<EOF
EOF
cp $myNTPCONFPATH /etc/ntp.conf 2>&1 | dialog --title "[ Setting up the ntp server ]" $myPROGRESSBOXCONF
fuECHO "### Setting up the ntp server."
cp $myNTPCONFPATH /etc/ntp.conf
fi
# Let's setup 802.1x networking
if [ -f $myPFXPATH ];
then
dialog --title "[ Setting 802.1x networking ]" $myPROGRESSBOXCONF <<EOF
EOF
cp $myPFXPATH /etc/wpa_supplicant/ 2>&1 | dialog --title "[ Setting 802.1x networking ]" $myPROGRESSBOXCONF
fuECHO "### Setting up 802.1x networking."
cp $myPFXPATH /etc/wpa_supplicant/
if [ -f $myPFXPWPATH ];
then
dialog --title "[ Setting up 802.1x password ]" $myPROGRESSBOXCONF <<EOF
EOF
fuECHO "### Setting up 802.1x password."
myPFXPW=$(cat $myPFXPWPATH)
fi
myPFXHOSTID=$(cat $myPFXHOSTIDPATH)
tee -a /etc/network/interfaces 2>&1>/dev/null <<EOF
tee -a /etc/network/interfaces <<EOF
wpa-driver wired
wpa-conf /etc/wpa_supplicant/wired8021x.conf
@@ -245,7 +197,7 @@ tee -a /etc/network/interfaces 2>&1>/dev/null <<EOF
# wpa-conf /etc/wpa_supplicant/wireless8021x.conf
EOF
tee /etc/wpa_supplicant/wired8021x.conf 2>&1>/dev/null <<EOF
tee /etc/wpa_supplicant/wired8021x.conf <<EOF
ctrl_interface=/var/run/wpa_supplicant
ctrl_interface_group=root
eapol_version=1
@@ -259,7 +211,7 @@ network={
}
EOF
tee /etc/wpa_supplicant/wireless8021x.conf 2>&1>/dev/null <<EOF
tee /etc/wpa_supplicant/wireless8021x.conf <<EOF
ctrl_interface=/var/run/wpa_supplicant
ctrl_interface_group=root
eapol_version=1
@@ -278,20 +230,8 @@ EOF
fi
# Let's provide a wireless example config ...
fuECHO "### Providing static ip, wireless example config."
tee -a /etc/network/interfaces 2>&1>/dev/null <<EOF
### Example static ip config
### Replace <eth0> with the name of your physical interface name
#
#auto eth0
#iface eth0 inet static
# address 192.168.1.1
# netmask 255.255.255.0
# network 192.168.1.0
# broadcast 192.168.1.255
# gateway 192.168.1.1
# dns-nameservers 192.168.1.1
fuECHO "### Providing a wireless example config."
tee -a /etc/network/interfaces <<EOF
### Example wireless config without 802.1x
### This configuration was tested with the IntelNUC series
@@ -314,211 +254,237 @@ sed -i '/cdrom/d' /etc/apt/sources.list
# Let's make sure SSH roaming is turned off (CVE-2016-0777, CVE-2016-0778)
fuECHO "### Let's make sure SSH roaming is turned off."
tee -a /etc/ssh/ssh_config 2>&1>/dev/null <<EOF
tee -a /etc/ssh/ssh_config <<EOF
UseRoaming no
EOF
# Let's pull some updates
apt-get update -y 2>&1 | dialog --title "[ Pulling updates ]" $myPROGRESSBOXCONF
apt-get upgrade -y 2>&1 | dialog --title "[ Pulling updates ]" $myPROGRESSBOXCONF
fuECHO "### Pulling Updates."
apt-get update -y
apt-get upgrade -y
# Let's clean up apt
apt-get autoclean -y 2>&1 | dialog --title "[ Pulling updates ]" $myPROGRESSBOXCONF
apt-get autoremove -y 2>&1 | dialog --title "[ Pulling updates ]" $myPROGRESSBOXCONF
apt-get autoclean -y
apt-get autoremove -y
# Installing docker-compose, wetty, ctop, elasticdump
pip install --upgrade pip 2>&1 | dialog --title "[ Installing pip ]" $myPROGRESSBOXCONF
pip install docker-compose==1.12.0 2>&1 | dialog --title "[ Installing docker-compose ]" $myPROGRESSBOXCONF
pip install elasticsearch-curator==5.1.1 2>&1 | dialog --title "[ Installing elasticsearch-curator ]" $myPROGRESSBOXCONF
ln -s /usr/bin/nodejs /usr/bin/node 2>&1 | dialog --title "[ Installing wetty ]" $myPROGRESSBOXCONF
npm install https://github.com/t3chn0m4g3/wetty -g 2>&1 | dialog --title "[ Installing wetty ]" $myPROGRESSBOXCONF
npm install https://github.com/t3chn0m4g3/elasticsearch-dump -g 2>&1 | dialog --title "[ Installing elasticsearch-dump ]" $myPROGRESSBOXCONF
wget https://github.com/bcicen/ctop/releases/download/v0.6.1/ctop-0.6.1-linux-amd64 -O ctop 2>&1 | dialog --title "[ Installing ctop ]" $myPROGRESSBOXCONF
mv ctop /usr/bin/ 2>&1 | dialog --title "[ Installing ctop ]" $myPROGRESSBOXCONF
chmod +x /usr/bin/ctop 2>&1 | dialog --title "[ Installing ctop ]" $myPROGRESSBOXCONF
# Installing alerta-cli, wetty
fuECHO "### Installing alerta-cli."
pip install --upgrade pip
pip install alerta
fuECHO "### Installing wetty."
ln -s /usr/bin/nodejs /usr/bin/node
npm install https://github.com/t3chn0m4g3/wetty -g
# Let's add proxy settings to docker defaults
if [ -f $myPROXYFILEPATH ];
then fuECHO "### Setting up the proxy for docker."
myPROXY=$(cat $myPROXYFILEPATH)
tee -a /etc/default/docker <<EOF
http_proxy=$myPROXY
https_proxy=$myPROXY
HTTP_PROXY=$myPROXY
HTTPS_PROXY=$myPROXY
no_proxy=localhost,127.0.0.1,.sock
EOF
fi
# Let's add a new user
addgroup --gid 2000 tpot 2>&1 | dialog --title "[ Adding new user ]" $myPROGRESSBOXCONF
adduser --system --no-create-home --uid 2000 --disabled-password --disabled-login --gid 2000 tpot 2>&1 | dialog --title "[ Adding new user ]" $myPROGRESSBOXCONF
fuECHO "### Adding new user."
addgroup --gid 2000 tpot
adduser --system --no-create-home --uid 2000 --disabled-password --disabled-login --gid 2000 tpot
# Let's set the hostname
fuECHO "### Setting a new hostname."
a=$(fuRANDOMWORD /usr/share/dict/a.txt)
n=$(fuRANDOMWORD /usr/share/dict/n.txt)
myHOST=$a$n
hostnamectl set-hostname $myHOST 2>&1 | dialog --title "[ Setting new hostname ]" $myPROGRESSBOXCONF
sed -i 's#127.0.1.1.*#127.0.1.1\t'"$myHOST"'#g' /etc/hosts 2>&1 | dialog --title "[ Setting new hostname ]" $myPROGRESSBOXCONF
hostnamectl set-hostname $myHOST
sed -i 's#127.0.1.1.*#127.0.1.1\t'"$myHOST"'#g' /etc/hosts
# Let's patch sshd_config
sed -i 's#Port 22#Port 64295#' /etc/ssh/sshd_config 2>&1 | dialog --title "[ SSH listen on tcp/64295 ]" $myPROGRESSBOXCONF
sed -i 's#\#PasswordAuthentication yes#PasswordAuthentication no#' /etc/ssh/sshd_config 2>&1 | dialog --title "[ SSH password authentication only from RFC1918 networks ]" $myPROGRESSBOXCONF
tee -a /etc/ssh/sshd_config 2>&1>/dev/null <<EOF
fuECHO "### Patching sshd_config to listen on port 64295 and deny password authentication."
sed -i 's#Port 22#Port 64295#' /etc/ssh/sshd_config
sed -i 's#\#PasswordAuthentication yes#PasswordAuthentication no#' /etc/ssh/sshd_config
# Let's allow ssh password authentication from RFC1918 networks
fuECHO "### Allow SSH password authentication from RFC1918 networks"
tee -a /etc/ssh/sshd_config <<EOF
Match address 127.0.0.1,10.0.0.0/8,172.16.0.0/12,192.168.0.0/16
PasswordAuthentication yes
EOF
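# The Match block re-enables password logins only for loopback and RFC1918 source
# addresses; SSH connections from the internet remain limited to key-based authentication.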
# Let's patch docker defaults, so we can run images as service
fuECHO "### Patching docker defaults."
tee -a /etc/default/docker <<EOF
DOCKER_OPTS="-r=false"
EOF
# Let's restart docker for proxy changes to take effect
systemctl restart docker
sleep 5
# Let's make sure only myFLAVOR images will be downloaded and started
case $myFLAVOR in
HP)
echo "### Preparing HONEYPOT flavor installation."
cp /root/tpot/etc/tpot/compose/hp.yml /root/tpot/etc/tpot/tpot.yml 2>&1>/dev/null
cp /root/tpot/data/imgcfg/hp_images.conf /root/tpot/data/images.conf
;;
INDUSTRIAL)
echo "### Preparing INDUSTRIAL flavor installation."
cp /root/tpot/etc/tpot/compose/industrial.yml /root/tpot/etc/tpot/tpot.yml 2>&1>/dev/null
cp /root/tpot/data/imgcfg/industrial_images.conf /root/tpot/data/images.conf
;;
TPOT)
echo "### Preparing TPOT flavor installation."
cp /root/tpot/etc/tpot/compose/tpot.yml /root/tpot/etc/tpot/tpot.yml 2>&1>/dev/null
cp /root/tpot/data/imgcfg/tpot_images.conf /root/tpot/data/images.conf
;;
EVERYTHING)
ALL)
echo "### Preparing EVERYTHING flavor installation."
cp /root/tpot/etc/tpot/compose/all.yml /root/tpot/etc/tpot/tpot.yml 2>&1>/dev/null
cp /root/tpot/data/imgcfg/all_images.conf /root/tpot/data/images.conf
;;
esac
# Let's load docker images
myIMAGESCOUNT=$(cat /root/tpot/etc/tpot/tpot.yml | grep -v '#' | grep image | cut -d: -f2 | wc -l)
j=0
for name in $(cat /root/tpot/etc/tpot/tpot.yml | grep -v '#' | grep image | cut -d'"' -f2)
fuECHO "### Loading docker images. Please be patient, this may take a while."
for name in $(cat /root/tpot/data/images.conf)
do
dialog --title "[ Downloading docker images, please be patient ]" --backtitle "$myBACKTITLE" \
--gauge "\n Now downloading: $name\n" 8 80 $(expr 100 \* $j / $myIMAGESCOUNT) <<EOF
EOF
docker pull $name 2>&1>/dev/null
let j+=1
dialog --title "[ Downloading docker images, please be patient ]" --backtitle "$myBACKTITLE" \
--gauge "\n Now downloading: $name\n" 8 80 $(expr 100 \* $j / $myIMAGESCOUNT) <<EOF
EOF
docker pull dtagdevsec/$name:latest1610
done
# Let's add the daily update check with a weekly clean interval
dialog --title "[ Modifying update checks ]" $myPROGRESSBOXCONF <<EOF
EOF
tee /etc/apt/apt.conf.d/10periodic 2>&1>/dev/null <<EOF
fuECHO "### Modifying update checks."
tee /etc/apt/apt.conf.d/10periodic <<EOF
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Download-Upgradeable-Packages "0";
APT::Periodic::AutocleanInterval "7";
EOF
# Let's make sure to reboot the system after a kernel panic
dialog --title "[ Reboot after kernel panic ]" $myPROGRESSBOXCONF <<EOF
EOF
tee -a /etc/sysctl.conf 2>&1>/dev/null <<EOF
fuECHO "### Reboot after kernel panic."
tee -a /etc/sysctl.conf <<EOF
# Reboot after kernel panic, check via /proc/sys/kernel/panic[_on_oops]
# Set required map count for ELK
kernel.panic = 1
kernel.panic_on_oops = 1
vm.max_map_count = 262144
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
net.ipv6.conf.lo.disable_ipv6 = 1
EOF
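# Note: kernel.panic = 1 reboots the host one second after a panic, and
# vm.max_map_count = 262144 is the mmap limit Elasticsearch expects at startup.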
# Let's add some cronjobs
dialog --title "[ Adding cronjobs ]" $myPROGRESSBOXCONF <<EOF
EOF
tee -a /etc/crontab 2>&1>/dev/null <<EOF
fuECHO "### Adding cronjobs."
tee -a /etc/crontab <<EOF
# Show running containers every 60s via /dev/tty2
#*/2 * * * * root status.sh > /dev/tty2
# Check if containers and services are up
*/5 * * * * root check.sh
# Example for alerta-cli IP update
#*/5 * * * * root alerta --endpoint-url http://<ip>:<port>/api delete --filters resource=<host> && alerta --endpoint-url http://<ip>:<port>/api send -e IP -r <host> -E Production -s ok -S T-Pot -t \$(cat /data/elk/logstash/mylocal.ip) --status open
# Check if updated images are available and download them
27 1 * * * root /usr/bin/docker-compose -f /etc/tpot/tpot.yml pull
27 1 * * * root for i in \$(cat /data/images.conf); do docker pull dtagdevsec/\$i:latest1610; done
# Delete elasticsearch logstash indices older than 90 days
27 4 * * * root /usr/local/bin/curator --config /etc/tpot/curator/curator.yml /etc/tpot/curator/actions.yml
# Restart docker service and containers
27 3 * * * root dcres.sh
# Uploaded binaries are not supposed to be downloaded
*/1 * * * * root mv --backup=numbered /data/dionaea/roots/ftp/* /data/dionaea/binaries/
# Delete elastic indices older than 90 days (kibana index is omitted by default)
27 4 * * * root docker exec elk bash -c '/usr/local/bin/curator --host 127.0.0.1 delete indices --older-than 90 --time-unit days --timestring \%Y.\%m.\%d'
# Update IP and erase check.lock if it exists
27 15 * * * root /etc/rc.local
# Daily reboot
27 3 * * * root reboot
27 23 * * * root reboot
# Check for updated packages every sunday, upgrade and reboot
27 16 * * 0 root apt-get autoclean -y && apt-get autoremove -y && apt-get update -y && apt-get upgrade -y && sleep 10 && reboot
27 16 * * 0 root apt-get autoclean -y && apt-get autoremove -y && apt-get update -y && apt-get upgrade -y && sleep 10 && reboot
EOF
# Let's create some files and folders
fuECHO "### Creating some files and folders."
mkdir -p /data/conpot/log \
/data/cowrie/log/tty/ /data/cowrie/downloads/ /data/cowrie/keys/ /data/cowrie/misc/ \
/data/dionaea/log /data/dionaea/bistreams /data/dionaea/binaries /data/dionaea/rtp /data/dionaea/roots/ftp /data/dionaea/roots/tftp /data/dionaea/roots/www /data/dionaea/roots/upnp \
/data/elasticpot/log \
/data/elk/data /data/elk/log \
/data/elk/data /data/elk/log /data/elk/logstash/conf \
/data/glastopf /data/honeytrap/log/ /data/honeytrap/attacks/ /data/honeytrap/downloads/ \
/data/mailoney/log \
/data/emobility/log \
/data/ews/conf \
/data/rdpy/log \
/data/spiderfoot \
/data/suricata/log /home/tsec/.ssh/ \
/data/p0f/log \
/data/vnclowpot/log \
/etc/tpot/elk /etc/tpot/compose /etc/tpot/systemd \
/usr/share/tpot/bin 2>&1 | dialog --title "[ Creating some files and folders ]" $myPROGRESSBOXCONF
touch /data/spiderfoot/spiderfoot.db 2>&1 | dialog --title "[ Creating some files and folders ]" $myPROGRESSBOXCONF
/data/ews/log /data/ews/conf /data/ews/dionaea /data/ews/emobility \
/data/suricata/log /home/tsec/.ssh/
# Let's take care of some files and permissions before copying
chmod 500 /root/tpot/bin/* 2>&1 | dialog --title "[ Setting permissions ]" $myPROGRESSBOXCONF
chmod 600 -R /root/tpot/etc/tpot 2>&1 | dialog --title "[ Setting permissions ]" $myPROGRESSBOXCONF
chmod 644 /root/tpot/etc/issue 2>&1 | dialog --title "[ Setting permissions ]" $myPROGRESSBOXCONF
chmod 755 /root/tpot/etc/rc.local 2>&1 | dialog --title "[ Setting permissions ]" $myPROGRESSBOXCONF
chmod 644 /root/tpot/etc/tpot/systemd/* 2>&1 | dialog --title "[ Setting permissions ]" $myPROGRESSBOXCONF
chmod 500 /root/tpot/bin/*
chmod 600 /root/tpot/data/*
chmod 644 /root/tpot/etc/issue
chmod 755 /root/tpot/etc/rc.local
chmod 644 /root/tpot/data/systemd/*
# Let's copy some files
tar xvfz /root/tpot/etc/tpot/elkbase.tgz -C / 2>&1 | dialog --title "[ Extracting elkbase.tgz ]" $myPROGRESSBOXCONF
cp -R /root/tpot/bin/* /usr/share/tpot/bin/ 2>&1 | dialog --title "[ Copy configs ]" $myPROGRESSBOXCONF
cp -R /root/tpot/etc/tpot/* /etc/tpot/ 2>&1 | dialog --title "[ Copy configs ]" $myPROGRESSBOXCONF
cp /root/tpot/etc/tpot/systemd/* /etc/systemd/system/ 2>&1 | dialog --title "[ Copy configs ]" $myPROGRESSBOXCONF
cp /root/tpot/etc/issue /etc/ 2>&1 | dialog --title "[ Copy configs ]" $myPROGRESSBOXCONF
cp -R /root/tpot/etc/nginx/ssl /etc/nginx/ 2>&1 | dialog --title "[ Copy configs ]" $myPROGRESSBOXCONF
cp /root/tpot/etc/nginx/tpotweb.conf /etc/nginx/sites-available/ 2>&1 | dialog --title "[ Copy configs ]" $myPROGRESSBOXCONF
cp /root/tpot/etc/nginx/nginx.conf /etc/nginx/nginx.conf 2>&1 | dialog --title "[ Copy configs ]" $myPROGRESSBOXCONF
cp /root/tpot/keys/authorized_keys /home/tsec/.ssh/authorized_keys 2>&1 | dialog --title "[ Copy configs ]" $myPROGRESSBOXCONF
cp /root/tpot/usr/share/nginx/html/* /usr/share/nginx/html/ 2>&1 | dialog --title "[ Copy configs ]" $myPROGRESSBOXCONF
systemctl enable tpot 2>&1 | dialog --title "[ Enabling service for tpot ]" $myPROGRESSBOXCONF
systemctl enable wetty 2>&1 | dialog --title "[ Enabling service for wetty ]" $myPROGRESSBOXCONF
tar xvfz /root/tpot/data/elkbase.tgz -C /
cp /root/tpot/data/elkbase.tgz /data/
cp -R /root/tpot/bin/* /usr/bin/
cp -R /root/tpot/data/* /data/
cp /root/tpot/data/systemd/* /etc/systemd/system/
cp /root/tpot/etc/issue /etc/
cp -R /root/tpot/etc/nginx/ssl /etc/nginx/
cp /root/tpot/etc/nginx/tpotweb.conf /etc/nginx/sites-available/
cp /root/tpot/etc/nginx/nginx.conf /etc/nginx/nginx.conf
cp /root/tpot/keys/authorized_keys /home/tsec/.ssh/authorized_keys
cp /root/tpot/usr/share/nginx/html/* /usr/share/nginx/html/
for i in $(cat /data/images.conf);
do
systemctl enable $i;
done
systemctl enable wetty
# Let's enable T-Pot website
ln -s /etc/nginx/sites-available/tpotweb.conf /etc/nginx/sites-enabled/tpotweb.conf 2>&1 | dialog --title "[ Enabling T-Pot website ]" $myPROGRESSBOXCONF
fuECHO "### Enabling T-Pot website."
ln -s /etc/nginx/sites-available/tpotweb.conf /etc/nginx/sites-enabled/tpotweb.conf
# Let's take care of some files and permissions
chmod 760 -R /data 2>&1 | dialog --title "[ Set permissions and ownerships ]" $myPROGRESSBOXCONF
chown tpot:tpot -R /data 2>&1 | dialog --title "[ Set permissions and ownerships ]" $myPROGRESSBOXCONF
chmod 600 /home/tsec/.ssh/authorized_keys 2>&1 | dialog --title "[ Set permissions and ownerships ]" $myPROGRESSBOXCONF
chown tsec:tsec /home/tsec/.ssh /home/tsec/.ssh/authorized_keys 2>&1 | dialog --title "[ Set permissions and ownerships ]" $myPROGRESSBOXCONF
chmod 760 -R /data
chown tpot:tpot -R /data
chmod 600 /home/tsec/.ssh/authorized_keys
chown tsec:tsec /home/tsec/.ssh /home/tsec/.ssh/authorized_keys
# Let's replace "quiet splash" options, set a console font for more screen canvas and update grub
sed -i 's#GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"#GRUB_CMDLINE_LINUX_DEFAULT="consoleblank=0"#' /etc/default/grub 2>&1>/dev/null
sed -i 's#GRUB_CMDLINE_LINUX=""#GRUB_CMDLINE_LINUX="cgroup_enable=memory swapaccount=1"#' /etc/default/grub 2>&1>/dev/null
sed -i 's#GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"#GRUB_CMDLINE_LINUX_DEFAULT="consoleblank=0"#' /etc/default/grub
sed -i 's#GRUB_CMDLINE_LINUX=""#GRUB_CMDLINE_LINUX="cgroup_enable=memory swapaccount=1"#' /etc/default/grub
#sed -i 's#\#GRUB_GFXMODE=640x480#GRUB_GFXMODE=800x600x32#' /etc/default/grub
#tee -a /etc/default/grub <<EOF
#GRUB_GFXPAYLOAD=800x600x32
#GRUB_GFXPAYLOAD_LINUX=800x600x32
#EOF
update-grub 2>&1 | dialog --title "[ Update grub ]" $myPROGRESSBOXCONF
update-grub
cp /usr/share/consolefonts/Uni2-Terminus12x6.psf.gz /etc/console-setup/
gunzip /etc/console-setup/Uni2-Terminus12x6.psf.gz
sed -i 's#FONTFACE=".*#FONTFACE="Terminus"#' /etc/default/console-setup
sed -i 's#FONTSIZE=".*#FONTSIZE="12x6"#' /etc/default/console-setup
update-initramfs -u 2>&1 | dialog --title "[ Update initramfs ]" $myPROGRESSBOXCONF
update-initramfs -u
# Let's enable a color prompt and add /usr/share/tpot/bin to path
# Let's enable a color prompt
myROOTPROMPT='PS1="\[\033[38;5;8m\][\[$(tput sgr0)\]\[\033[38;5;1m\]\u\[$(tput sgr0)\]\[\033[38;5;6m\]@\[$(tput sgr0)\]\[\033[38;5;4m\]\h\[$(tput sgr0)\]\[\033[38;5;6m\]:\[$(tput sgr0)\]\[\033[38;5;5m\]\w\[$(tput sgr0)\]\[\033[38;5;8m\]]\[$(tput sgr0)\]\[\033[38;5;1m\]\\$\[$(tput sgr0)\]\[\033[38;5;15m\] \[$(tput sgr0)\]"'
myUSERPROMPT='PS1="\[\033[38;5;8m\][\[$(tput sgr0)\]\[\033[38;5;2m\]\u\[$(tput sgr0)\]\[\033[38;5;6m\]@\[$(tput sgr0)\]\[\033[38;5;4m\]\h\[$(tput sgr0)\]\[\033[38;5;6m\]:\[$(tput sgr0)\]\[\033[38;5;5m\]\w\[$(tput sgr0)\]\[\033[38;5;8m\]]\[$(tput sgr0)\]\[\033[38;5;2m\]\\$\[$(tput sgr0)\]\[\033[38;5;15m\] \[$(tput sgr0)\]"'
tee -a /root/.bashrc 2>&1>/dev/null <<EOF
tee -a /root/.bashrc << EOF
$myROOTPROMPT
PATH="$PATH:/usr/share/tpot/bin"
EOF
tee -a /home/tsec/.bashrc 2>&1>/dev/null <<EOF
tee -a /home/tsec/.bashrc << EOF
$myUSERPROMPT
PATH="$PATH:/usr/share/tpot/bin"
EOF
# Let's create ews.ip before reboot and prevent race condition for first start
/usr/share/tpot/bin/updateip.sh 2>&1>/dev/null
source /etc/environment
myLOCALIP=$(hostname -I | awk '{ print $1 }')
myEXTIP=$(curl -s myexternalip.com/raw)
sed -i "s#IP:.*#IP: $myLOCALIP ($myEXTIP)#" /etc/issue
sed -i "s#SSH:.*#SSH: ssh -l tsec -p 64295 $myLOCALIP#" /etc/issue
sed -i "s#WEB:.*#WEB: https://$myLOCALIP:64297#" /etc/issue
tee /data/ews/conf/ews.ip << EOF
[MAIN]
ip = $myEXTIP
EOF
echo $myLOCALIP > /data/elk/logstash/mylocal.ip
chown tpot:tpot /data/ews/conf/ews.ip
# Final steps
mv /root/tpot/etc/rc.local /etc/rc.local 2>&1>/dev/null && \
rm -rf /root/tpot/ 2>&1>/dev/null && \
dialog --no-ok --no-cancel --backtitle "$myBACKTITLE" --title "[ Thanks for your patience. Now rebooting. ]" --pause "" 6 80 2 && \
reboot
fuECHO "### Thanks for your patience. Now rebooting."
mv /root/tpot/etc/rc.local /etc/rc.local && rm -rf /root/tpot/ && sleep 2 && reboot

View File

@@ -1,7 +1,7 @@
average
big
colossal
fat
fat
giant
gigantic
great
@@ -19,7 +19,7 @@ short
small
tall
tiny
boiling
boiling
breezy
broken
bumpy
@@ -237,7 +237,7 @@ friendly
funny
gentle
glorious
good
good
happy
healthy
helpful
@@ -1317,7 +1317,7 @@ fragile
helpful
helpless
important
impossible
impossible
innocent
inquisitive
modern
@@ -1337,7 +1337,7 @@ uninterested
wandering
wild
wrong
adorable
adorable
alert
average
beautiful

View File

@@ -911,7 +911,7 @@ constellation
construction
consul
consulate
contactlens
contact lens
contagion
contest
context
@@ -998,7 +998,7 @@ credenza
credit
creditor
creek
cremebrulee
creme brulee
crest
crew
crib
@@ -1284,13 +1284,13 @@ duffel
dugout
dulcimer
dumbwaiter
dumptruck
dunebuggy
dump truck
dune buggy
dungarees
dungeon
duplexer
dust
duststorm
dust storm
duster
duty
dwarf
@@ -2623,8 +2623,8 @@ noodle
normal
norse
north
northamerica
northkorea
north america
north korea
nose
note
notebook
@@ -3367,7 +3367,7 @@ satire
satisfaction
saturday
sauce
saudiarabia
saudi arabia
sausage
save
saving
@@ -3634,9 +3634,9 @@ source
sourwood
sousaphone
south
southafrica
southamerica
southkorea
south africa
south america
south korea
sow
soy
soybean
@@ -4157,7 +4157,7 @@ unibody
uniform
union
unit
unitedkingdom
united kingdom
university
urn
use

View File

@@ -10,12 +10,12 @@
<body bgcolor="#E20074">
<center>
<a href="/tpotweb.html" target="_top" class="btn">Home</a>
<a href="/kibana" target="main" class="btn">Kibana</a>
<a href="/myhead/" target="main" class="btn">ES Head</a>
<a href="/kibana/" target="main" class="btn">Kibana</a>
<a href="/myhead/_plugin/head/" target="main" class="btn">ES Head Plugin</a>
<a href="/ui/" target="main" class="btn">UI-For-Docker</a>
<a href="/wetty/ssh/tsec" target="main" class="btn">WebSSH</a>
<a href="/netdata/" target="_blank" class="btn">Netdata</a>
<a href="/spiderfoot/" target="main" class="btn">Spiderfoot</a>
<a href="/ui/" target="main" class="btn">Portainer</a>
<a href="/wetty/ssh/tsec" target="main" class="btn">WebTTY</a>
</center>
</body>
</html>

View File

@@ -8,7 +8,7 @@
<frameset rows='20,*' border='0' frameborder='0' framespacing='0'>
<frame src='navbar.html' name='navbar' marginwidth='0' marginheight='0' scrolling='no' noresize>
<frame src='/kibana' name='main' marginwidth='0' marginheight='0' scrolling='auto' noresize>
<frame src='/kibana/' name='main' marginwidth='0' marginheight='0' scrolling='auto' noresize>
<noframes>
</noframes>
</frameset>

View File

@@ -1,6 +1,6 @@
default install
label install
menu label ^T-Pot 17.10 (Alpha)
menu label ^T-Pot 16.10
menu default
kernel linux
append vga=788 initrd=initrd.gz console-setup/ask_detect=true --

View File

@ -1,13 +1,14 @@
#!/bin/bash
# Set TERM, DIALOGRC
export DIALOGRC=/etc/dialogrc
export TERM=linux
########################################################
# T-Pot #
# .ISO creator #
# #
# v16.10.0 by mo, DTAG, 2016-10-28 #
########################################################
# Let's define some global vars
myBACKTITLE="T-Pot - ISO Creator"
# If you need latest hardware support, try using the hardware enablement (hwe) ISO
# myUBUNTULINK="http://archive.ubuntu.com/ubuntu/dists/xenial-updates/main/installer-amd64/current/images/hwe-netboot/mini.iso"
myUBUNTULINK="http://archive.ubuntu.com/ubuntu/dists/xenial-updates/main/installer-amd64/current/images/netboot/mini.iso"
myUBUNTUISO="mini.iso"
myTPOTISO="tpot.iso"
@@ -32,9 +33,6 @@ if [ "$myWHOAMI" != "root" ]
exit
fi
# Let's load dialog color theme
cp installer/etc/dialogrc /etc/
# Let's clean up at the end or if something goes wrong ...
function fuCLEANUP {
rm -rf $myTMP $myTPOTDIR $myPROXYCONFIG $myPFXPATH $myPFXPWPATH $myPFXHOSTIDPATH $myNTPCONFPATH

View File

@@ -63,7 +63,7 @@ d-i passwd/root-login boolean false
d-i passwd/make-user boolean true
d-i passwd/user-fullname string tsec
d-i passwd/username string tsec
d-i passwd/user-password-crypted password $1$jAw1TW8v$a2WFamxQJfpPYZmn4qJT71
#d-i passwd/user-password-crypted password $1$jAw1TW8v$a2WFamxQJfpPYZmn4qJT71
d-i user-setup/encrypt-home boolean false
########################################
@@ -100,7 +100,7 @@ tasksel tasksel/first multiselect ubuntu-server
########################
### Package Installation
########################
d-i pkgsel/include string apache2-utils apparmor apt-transport-https aufs-tools bash-completion build-essential ca-certificates cgroupfs-mount curl dialog dnsutils docker.io dstat ethtool genisoimage git glances html2text htop iptables iw jq libcrack2 libltdl7 lm-sensors man nginx-extras nodejs npm ntp openssh-server openssl prips syslinux psmisc pv python-pip unzip vim wireless-tools wpasupplicant
d-i pkgsel/include string apache2-utils apparmor apt-transport-https aufs-tools bash-completion build-essential ca-certificates cgroupfs-mount curl dialog dnsutils docker.io dstat ethtool genisoimage git glances html2text htop iptables iw libltdl7 lm-sensors man nginx-extras nodejs npm ntp openssh-server openssl syslinux psmisc pv python-pip vim wireless-tools wpasupplicant
#################
### Update Policy