93 Commits

Author SHA1 Message Date
8cb6457882 rename webs to correct name webtty 2017-09-20 15:12:10 +02:00
ed563f3d90 update readme 2017-09-20 14:02:17 +02:00
928c55af6b Update README.md 2017-09-19 09:29:08 +02:00
38beb25e76 update licenses 2017-09-19 09:26:41 +02:00
cf7b1c3e5d update some license information 2017-09-19 09:09:47 +02:00
972e5f5085 continue working on readme 2017-09-18 23:01:34 +02:00
0c8ea5576f update screenshots 2017-09-18 22:36:27 +02:00
a137e09dc6 17.10 dashboard 2017-09-18 22:23:44 +02:00
e3b112322a start working on updated readme 2017-09-06 17:51:18 +02:00
767943d5ce update architecture 2017-09-06 15:14:47 +02:00
0363b218ec update logo 2017-08-30 15:12:29 +00:00
ba56a6c923 fix install error regarding blanks in hostname 2017-08-30 11:20:09 +00:00
0a3b67e01c tweaking, t-pot docker tags to 1710 2017-08-28 20:03:46 +00:00
3ee9ad15d7 add mailoney, rdpy dashboards 2017-08-27 20:40:55 +00:00
56ebd9f05c include rdpy honeypot 2017-08-27 00:37:57 +00:00
46eea25f38 bump ctop version to 0.6.1 2017-08-24 22:43:57 +00:00
fc52474fa0 add glastopf.db to logrotate.conf 2017-08-23 10:02:00 +00:00
6ff5c6b94f all dashboards are now adapted to 17.x
will probably still need some finetuning
2017-08-20 21:12:46 +00:00
6d98aaf1bd tweaking, account for new elk versions 2017-08-18 22:54:01 +00:00
277f24e0ee prepare for vnclowpot tweaking 2017-08-18 22:05:30 +00:00
26f019c894 prepare for vnclowpot and more dashboards finished 2017-08-18 22:02:51 +00:00
93e6ce9712 re-enable ipv6 for docker 2017-08-14 22:40:51 +00:00
53f11c419c dashboards tweaking 2017-08-14 16:32:54 +00:00
796e74059e clean up 2017-08-14 15:10:21 +00:00
d1c167bd5f tweaking
allow for ftp data
forward ftp data into dionaea container
disable ipv6 since it messes up dionaea ip logging
2017-08-14 14:55:28 +00:00
adc8ddd090 tweaking
Update backup_es_folders to fit latest 17.x design
Include updated elkbase
Include updated kibana-objects
2017-08-11 20:27:20 +00:00
9e2313d7ca fix visual bug (sometimes only string PORTS is displayed) 2017-08-07 14:54:33 +00:00
8e8f94b1b4 fix curator
-the old curator does not support ES 5.x
-include curator 5.1.1 and pin version to exactly that to avoid surprises with disruptive updates
-configs reside in /etc/tpot/curator
-will be started daily through /etc/crontab
-by default all logstash indices older than 90 days will be deleted
2017-08-07 13:18:55 +00:00
b25caf6302 improve dps.sh output 2017-08-07 10:24:25 +00:00
36bb76d999 add dep for listbot (prips) 2017-07-23 22:56:50 +00:00
77a4635f59 maltrail is too far off scope 2017-07-23 10:25:40 +00:00
01d4ef2928 account for unresolved external ip address 2017-07-21 15:26:37 +00:00
07c3f48894 compress and rotate logs if persistence enabled
if persistence is enabled, log files, downloads, binaries, etc. will be compressed and rotated
each start / stop of the t-pot service will account for a full rotation cycle if files are not empty
basically the rotation will recycle logs after 30 days, unless the service is stopped / started manually which will cause for a shorter period
2017-07-20 20:25:49 +00:00
0dedd4a172 add unzip as dep for ip rep downloader 2017-07-13 17:24:13 +00:00
c8c3124f04 tweaking 2017-07-12 18:53:20 +00:00
022a48f1b8 tweaking 2017-07-12 18:51:20 +00:00
6549f8f582 nsa gen is no more, offline alternative 2017-06-21 22:46:12 +00:00
51e8dc1aca fix path 2017-06-21 19:34:08 +00:00
0e7563da17 prepare for honeypot changes 2017-06-21 19:26:42 +00:00
77e68f0e64 tweaking, add new honeypot
correct a typo in CONTRIBUTING.MD
prepare for and add mailoney honeypot
2017-06-15 22:08:56 +00:00
a1bc127698 consider commented config lines 2017-06-07 16:24:42 +00:00
66cdb0e60a modifications for conpot update 2017-06-07 15:51:42 +00:00
4e6f4fc9e8 finetuning
add p0f
change some defaults
2017-06-06 22:32:49 +00:00
48d36f999d finetuning suricata 2017-06-03 23:56:10 +00:00
aea18d5f92 squashing some bugs
do not forward tcp connections to or from 127.0.0.1 to NFQ (fixes strange netdata behaviour)
run netdata on network mode host again (update compose files) including host metrics
2017-05-30 19:07:43 +00:00
5d8ad0a623 add spiderfoot persistence 2017-05-25 21:59:26 +00:00
2bbafbc791 handle iptables differently 2017-05-23 23:32:07 +00:00
345df08941 improvements
use docker-compose from pypi with support for 2.1 compose file version
logstash, kibana, head & netdata are now depending on a healthy elasticsearch container before starting
remove alerta-cli
tweak installer
2017-05-22 19:36:41 +00:00
931ac2dd85 tweaking
update dps.sh
adjust docker-compose related tpot configs for dionaea (stdin_open: true)
adjust tpot.service (suricata / p0f prep) to be aware of a situation without local network route (fixes #99)
2017-05-11 17:01:21 +00:00
ce0e42e555 get latest ctop 2017-05-04 22:52:32 +00:00
b36c63962d tweaking, prepare for elk microservice 2017-05-03 20:55:18 +00:00
8c475544b3 Merge pull request #97 from dtag-dev-sec/17.06dc
17.06dc
2017-05-01 22:11:27 +02:00
3de02ee7b0 tweaking for docker-compose
get rid of self-check scripts, docker-compose takes care of that now
use tpot.yml config for tpot scripts
wipe crontab clean of legacy scripts
check.lock no longer needed (rc.local)
adjust installer (invisible cursor, get image info from tpot.yml, some tweaking)
2017-05-01 19:03:27 +00:00
365e1a1e5c prepare switch to docker-compose 2017-04-30 23:34:30 +00:00
291034d53e feed newlines when patching sshd config 2017-04-26 20:01:15 +02:00
dc30cd81c2 fix token for everything installation 2017-04-24 17:21:45 +02:00
0d684cc825 add pypi to list of internet checks 2017-04-24 16:57:58 +02:00
843ba30762 final touches on installer
move tsec password dialog from debian installer to t-pot-installer
check for secure password for tsec and web user
fix layout issue
2017-04-24 16:06:23 +02:00
50a93f5abf neatify two installer widgets 2017-04-22 20:05:12 +02:00
66dd2398e8 cleanup and prettify installer
reorganized installer
now using dialog throughout the whole installation
2017-04-21 01:11:10 +02:00
8417ed2fbd fix path 2017-04-19 15:48:27 +02:00
845a11e240 fix path 2017-04-19 15:39:34 +02:00
9fea0461fc Clean up, add Spiderfoot
tpot configs are now stored in /etc/tpot/
tpot related scripts are now stored /usr/share/tpot/bin
some scripts are improved
some scripts are cleaned of old comments
spiderfoot is now part of tpot
2017-04-19 12:22:51 +00:00
62ce12a8a9 disable logging for installer
1. improve performance
2. improve convenience, user sees progress
3. infos and errors are displayed
2017-04-17 00:53:47 +02:00
5b267b396f improve installer 2017-04-16 23:44:19 +02:00
c9827f0f03 manage kibana objects, ES dump and restore, ES folder backup 2017-04-14 22:08:35 +00:00
90592e7388 manage kibana objects, dump and restore 2017-04-12 20:46:12 +00:00
d54702ece8 include updates 2017-04-10 20:38:22 +00:00
1453e26f76 prepare for forward logs to cc 2017-04-07 15:20:56 +00:00
ff4a87ff42 set linux as term 2017-03-22 18:42:24 +00:00
9090b5cfd7 installer ui improvements 2017-03-22 18:27:43 +00:00
052a3489e9 fix typo 2017-03-17 23:49:29 +00:00
ffc0edd587 prepare for elk 5.x and improvements 2017-03-17 23:47:04 +00:00
a94b34c8a8 add some colors 2017-03-15 09:28:12 +00:00
71e1069dbe fix 2017-03-13 22:17:02 +00:00
412c7fa508 fix 2017-03-13 21:58:48 +00:00
fcbb2952d3 fixes and improvements 2017-03-13 21:19:28 +00:00
a556a193f7 fix netdata error 2017-03-13 19:44:02 +00:00
d3599bcc10 update ui-for-docker systemd 2017-03-13 16:29:51 +00:00
fddfc68ff3 improvements 2017-03-13 16:10:37 +00:00
b4f157d020 cleanup 2017-03-13 10:11:46 +00:00
ff75c6c588 modify installer for 17.06 2017-03-13 10:07:46 +00:00
a98e6bfc53 prepare for 17.06 dev env 2017-03-13 00:38:43 +00:00
4a67a47a04 remove some services from myip.sh 2017-03-12 23:50:27 +00:00
4a58f7488a fix bug myip.sh 2017-03-12 23:46:12 +00:00
c5de828d7e prepare for new ewsposter 2017-03-12 23:31:34 +00:00
fb02d41e57 add latest ctop 2017-03-12 20:57:56 +00:00
35700a731b update /etc/issue 2017-03-12 12:05:22 +00:00
26a9357d84 modify elk service 2017-03-08 17:06:13 +00:00
fab294bdda remove patching docker defaults
handled in systemd scripts for each container
2017-03-04 21:24:50 +01:00
9fbdcf80f5 add working solution for head 2017-02-27 17:42:34 +00:00
6298afae4a Update install.sh 2017-02-26 12:29:38 +01:00
20759a7c5c starting with elk5 2017-02-26 11:22:56 +00:00
64 changed files with 8151 additions and 1206 deletions


@@ -23,7 +23,7 @@ Thank you :smiley:
<a name="info"></a>
### Basic support information
- What T-Pot version are you currently using?
- Are you running on an Intel NUC or a VM?

README.md

@@ -1,16 +1,16 @@
# T-Pot 17.10 (Alpha)
This repository contains the necessary files to create the **[T-Pot](http://dtag-dev-sec.github.io/)** ISO image.
The image can then be used to install T-Pot on a physical or virtual machine.
In October 2016 we released
[T-Pot 16.10](http://dtag-dev-sec.github.io/mediator/feature/2016/10/31/t-pot-16.10.html)

# T-Pot 17.10 (Alpha - be careful, there may be dragons!)
T-Pot 17.10 uses the latest Ubuntu Server 16.04 LTS network installer image, is based on
[docker](https://www.docker.com/) and [docker-compose](https://docs.docker.com/compose/),
and includes dockerized versions of the following honeypots
@@ -19,8 +19,12 @@ and includes dockerized versions of the following honeypots
* [dionaea](https://github.com/DinoTools/dionaea),
* [elasticpot](https://github.com/schmalle/ElasticPot),
* [emobility](https://github.com/dtag-dev-sec/emobility),
* [glastopf](http://glastopf.org/),
* [honeytrap](https://github.com/armedpot/honeytrap/),
* [mailoney](https://github.com/awhitehatter/mailoney),
* [rdpy](https://github.com/citronneur/rdpy) and
* [vnclowpot](https://github.com/magisterquis/vnclowpot)

Furthermore we use the following tools
@@ -28,6 +32,7 @@ Furthermore we use the following tools
* [Elasticsearch Head](https://mobz.github.io/elasticsearch-head/) a web front end for browsing and interacting with an Elasticsearch cluster.
* [Netdata](http://my-netdata.io/) for real-time performance monitoring.
* [Portainer](http://portainer.io/) a web based UI for docker.
* [Spiderfoot](https://github.com/smicallef/spiderfoot) an open source intelligence automation tool.
* [Suricata](http://suricata-ids.org/) a Network Security Monitoring engine.
* [Wetty](https://github.com/krishnasrinivas/wetty) a web based SSH client.
@@ -35,7 +40,7 @@ Furthermore we use the following tools
# TL;DR
1. Meet the [system requirements](#requirements). The T-Pot installation needs at least 4 GB RAM and 64 GB free disk space as well as a working internet connection.
2. Download the T-Pot ISO from [GitHub](https://github.com/dtag-dev-sec/tpotce/releases) or [create it yourself](#createiso).
3. Install the system in a [VM](#vm) or on [physical hardware](#hw) with [internet access](#placement).
4. Enjoy your favorite beverage - [watch](http://sicherheitstacho.eu/?peers=communityPeers) and [analyze](#kibana).
@@ -72,58 +77,57 @@ Seeing is believing :bowtie:
<a name="background"></a>
# Changelog
- **Size still matters** 😅
  - All docker images have been rebuilt as micro containers based on Alpine Linux to further reduce the image size, leading to compressed image sizes below the 50 MB mark. The uncompressed size of eMobility and the ELK stack could each be reduced by a whopping 600 MB!
  - An "Everything" installation now takes roughly 1.6 GB of download size.
- **docker-compose**
  - T-Pot containers are now controlled and monitored through docker-compose and a single configuration file, `/etc/tpot/tpot.yml`, allowing for greater flexibility and easier image management (i.e. updated images).
  - As a benefit, only a single `systemd` script, `/etc/systemd/system/tpot.service`, is needed to start (`systemctl start tpot`) and stop (`systemctl stop tpot`) the T-Pot services.
  - There are four pre-configured compose configurations in `/etc/tpot/compose` which reflect the T-Pot editions. Simply stop the T-Pot services, copy one of them (i.e. `cp /etc/tpot/compose/all.yml /etc/tpot/tpot.yml`) and restart the T-Pot services; the selected edition will be running after the required docker images have been downloaded.
- **Introducing** [Spiderfoot](https://github.com/smicallef/spiderfoot) an open source intelligence automation tool.
- **Installation** procedure simplified
  - Within the Ubuntu installer you only have to choose language settings.
  - After the first reboot the T-Pot installer checks if internet and required services are reachable before the installation procedure begins.
  - The T-Pot installer now uses a "dialog" based UI which looks way better than the old text based installer.
  - The `tsec` user & password dialog is now part of the T-Pot installer.
  - The self-signed certificate is now created automatically to reduce unnecessary overhead for novice users.
  - New ASCII logo and login screen pointing to web and ssh logins.
  - Hostnames are now generated using an offline name generator, which still produces funny and collision free hostnames.
- **CVE IDs for Suricata**
  - Our very own [Listbot](https://github.com/dtag-dev-sec/listbot) builds translation maps for Logstash. If Logstash registers a match, the event's CVE ID will be stored alongside the event within Elasticsearch.
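The translation maps Listbot builds are the kind of dictionaries Logstash's `translate` filter consumes. A purely illustrative sketch follows; the field names, dictionary path and fallback value are assumptions for illustration, not taken from T-Pot's actual Logstash configuration:

```
filter {
  translate {
    # Hypothetical lookup: map a Suricata signature ID to a CVE ID using a
    # Listbot-built dictionary. Field names and path are assumptions.
    field            => "alert.signature_id"
    destination      => "alert.cve_id"
    dictionary_path  => "/etc/listbot/cve.yaml"
    fallback         => "none"
  }
}
```

On a dictionary match, the CVE ID is stored alongside the event before it is indexed into Elasticsearch.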
- **IP Reputations**
  - [Listbot](https://github.com/dtag-dev-sec/listbot) also builds translation maps for blacklisted IPs.
  - Based upon 30+ publicly available IP blacklisting sources, listbot creates a logstash translation map matching the events' source IP addresses against the IPs' reputation.
  - If the source IP is known to a blacklist service, a corresponding tag will be stored with the event.
  - Updates occur on every logstash container start; by default every 24h.
- **Honeypot updates and improvements**
  - All honeypots were updated to their latest & stable versions.
  - **New honeypots** were added:
    * [mailoney](https://github.com/awhitehatter/mailoney), a low interaction SMTP honeypot
    * [rdpy](https://github.com/citronneur/rdpy), a low interaction RDP honeypot
    * [vnclowpot](https://github.com/magisterquis/vnclowpot), a low interaction VNC honeypot
- **Persistence** is now enabled by default and will keep honeypot logs and tools data in `/data/` and its sub-folders for 30 days. You may change that behavior in `/etc/tpot/logrotate/logrotate.conf`. ELK data however will be kept for 90 days by default; you may change that behavior in `/etc/tpot/curator/actions.yml`. Scripts will be triggered through `/etc/crontab`.
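The 90 day ELK retention maps to an Elasticsearch Curator 5 action. A hedged sketch of what an action file under `/etc/tpot/curator` might look like; this is illustrative, not the shipped `actions.yml`:

```yaml
actions:
  1:
    action: delete_indices
    description: Delete logstash indices older than 90 days
    options:
      ignore_empty_list: True
      continue_if_exception: False
    filters:
      - filtertype: pattern
        kind: prefix
        value: logstash-
      - filtertype: age
        source: name
        direction: older
        timestring: '%Y.%m.%d'
        unit: days
        unit_count: 90
```

Run daily from `/etc/crontab`, this deletes any `logstash-YYYY.MM.DD` index whose name-encoded date is more than 90 days old.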
- **Updates**
  - **Docker** was updated to the latest **1.12.6** release within Ubuntu 16.04.x LTS.
  - **ELK** was updated to the latest **Kibana 5.6.1**, **Elasticsearch 5.6.1** and **Logstash 5.6.1** releases.
  - **Suricata** was updated to the latest **4.0.0** version including the latest **Emerging Threats** community ruleset.
- **Dashboards Makeover**
  - We now have **160+ Visualizations** pre-configured and compiled into 14 individual **Kibana Dashboards**, one for every honeypot. Monitor all *honeypot events* locally on your T-Pot installation. Aside from *honeypot events* you can also view *Suricata NSM, Syslog and NGINX* events for a quick overview of local host events.
  - View the available IP reputation of any source IP address.
  - View the available CVE ID for events.
  - More **Smart links** are now included.
<a name="concept"></a>
# Technical Concept
T-Pot is based on the network installer of Ubuntu Server 16.04.x LTS.
The honeypot daemons as well as other support components being used have been containerized using [docker](http://docker.io).
This allows us to run multiple honeypot daemons on the same network interface while maintaining a small footprint and constraining each honeypot within its own environment.

In T-Pot we combine the dockerized honeypots
[conpot](http://conpot.org/),
@@ -131,27 +135,34 @@ In T-Pot we combine the dockerized honeypots
[dionaea](https://github.com/DinoTools/dionaea),
[elasticpot](https://github.com/schmalle/ElasticPot),
[emobility](https://github.com/dtag-dev-sec/emobility),
[glastopf](http://glastopf.org/),
[honeytrap](https://github.com/armedpot/honeytrap/),
[mailoney](https://github.com/awhitehatter/mailoney),
[rdpy](https://github.com/citronneur/rdpy) and
[vnclowpot](https://github.com/magisterquis/vnclowpot) with
[ELK stack](https://www.elastic.co/videos) to beautifully visualize all the events captured by T-Pot,
[Elasticsearch Head](https://mobz.github.io/elasticsearch-head/) a web front end for browsing and interacting with an Elasticsearch cluster,
[Netdata](http://my-netdata.io/) for real-time performance monitoring,
[Portainer](http://portainer.io/) a web based UI for docker,
[Spiderfoot](https://github.com/smicallef/spiderfoot) an open source intelligence automation tool,
[Suricata](http://suricata-ids.org/) a Network Security Monitoring engine and
[Wetty](https://github.com/krishnasrinivas/wetty) a web based SSH client.

![Architecture](https://raw.githubusercontent.com/dtag-dev-sec/tpotce/17.06/doc/architecture.png)

While data within docker containers is volatile, we now ensure a default 30 day persistence of all relevant honeypot and tool data in the well known `/data` folder and its sub-folders. The persistence configuration may be adjusted in `/etc/tpot/logrotate/logrotate.conf`. Once a docker container crashes, all other data produced within its environment is erased and a fresh instance is started from the corresponding docker image.<br>
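In `logrotate` terms, the 30 day default could be expressed roughly as follows. This is a sketch with assumed paths and options, not an excerpt of the actual `/etc/tpot/logrotate/logrotate.conf`:

```
/data/cowrie/log/*.json {
  daily
  rotate 30
  compress
  missingok
  notifempty
  copytruncate
}
```

With `daily` and `rotate 30`, a log file cycles through 30 compressed generations before the oldest is discarded.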
Basically, what happens when the system is booted up is the following:

- start host system
- start all the necessary services (i.e. docker-engine, reverse proxy, etc.)
- start all docker containers via docker-compose (honeypots, nms, elk)

Within the T-Pot project, we provide all the tools and documentation necessary to build your own honeypot system and contribute to our [community data view](http://sicherheitstacho.eu/?peers=communityPeers), a separate channel on our [Sicherheitstacho](http://sicherheitstacho.eu) that is powered by T-Pot community data.

The source code and configuration files are stored in individual GitHub repositories, which are linked below. The docker images are pre-configured for the T-Pot environment. If you want to run the docker images separately, make sure you study the docker-compose configuration (`/etc/tpot/tpot.yml`) and the T-Pot systemd script (`/etc/systemd/system/tpot.service`), as they provide a good starting point for implementing changes.
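As a starting point for running a single image, a stripped-down service entry in the style of a docker-compose v2.1 file such as `/etc/tpot/tpot.yml` could look like the sketch below; the image tag, port mappings and volume paths are illustrative assumptions, not the project's actual configuration:

```yaml
version: '2.1'

services:
  cowrie:
    container_name: cowrie
    restart: always
    image: "dtagdevsec/cowrie:1710"         # hypothetical tag
    ports:
      - "22:2222"                            # expose the honeypot's SSH listener
    volumes:
      - /data/cowrie/log:/home/cowrie/log    # persist logs under /data
```

Bringing the service up with `docker-compose -f <file> up -d` would then mirror what the T-Pot systemd script does for the full edition.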
The individual docker configurations are located in the following GitHub repositories:

- [conpot](https://github.com/dtag-dev-sec/conpot)
- [cowrie](https://github.com/dtag-dev-sec/cowrie)
@@ -159,63 +170,65 @@ The individual docker configurations are located in the following GitHub repositories:
- [elasticpot](https://github.com/dtag-dev-sec/elasticpot)
- [elk-stack](https://github.com/dtag-dev-sec/elk)
- [emobility](https://github.com/dtag-dev-sec/emobility)
- [ewsposter](https://github.com/dtag-dev-sec/ews)
- [glastopf](https://github.com/dtag-dev-sec/glastopf)
- [honeytrap](https://github.com/dtag-dev-sec/honeytrap)
- [mailoney](https://github.com/dtag-dev-sec/mailoney)
- [netdata](https://github.com/dtag-dev-sec/netdata)
- [portainer](https://github.com/dtag-dev-sec/ui-for-docker)
- [rdpy](https://github.com/dtag-dev-sec/rdpy)
- [spiderfoot](https://github.com/dtag-dev-sec/spiderfoot)
- [suricata & p0f](https://github.com/dtag-dev-sec/suricata)
- [vnclowpot](https://github.com/dtag-dev-sec/vnclowpot)
<a name="requirements"></a>
# System Requirements
Depending on your installation type, whether you install on [real hardware](#hardware) or in a [virtual machine](#vm), make sure your designated T-Pot system meets the following requirements:

##### T-Pot Installation (Cowrie, Dionaea, ElasticPot, Glastopf, Honeytrap, Mailoney, Rdpy, Vnclowpot, ELK, Suricata+P0f & Tools)
When installing the T-Pot ISO image, make sure the target system (physical/virtual) meets the following minimum requirements:
- 4 GB RAM (6-8 GB recommended)
- 64 GB SSD (128 GB SSD recommended)
- Network via DHCP
- A working, non-proxied, internet connection

##### Honeypot Installation (Cowrie, Dionaea, ElasticPot, Glastopf, Honeytrap, Mailoney, Rdpy, Vnclowpot)
When installing the T-Pot ISO image, make sure the target system (physical/virtual) meets the following minimum requirements:
- 3 GB RAM (4-6 GB recommended)
- 64 GB SSD (64 GB SSD recommended)
- Network via DHCP
- A working, non-proxied, internet connection

##### Industrial Installation (ConPot, eMobility, ELK, Suricata+P0f & Tools)
When installing the T-Pot ISO image, make sure the target system (physical/virtual) meets the following minimum requirements:
- 4 GB RAM (8 GB recommended)
- 64 GB SSD (128 GB SSD recommended)
- Network via DHCP
- A working, non-proxied, internet connection

##### Everything Installation (Everything, all of the above)
When installing the T-Pot ISO image, make sure the target system (physical/virtual) meets the following minimum requirements:
- 8+ GB RAM
- 128+ GB SSD
- Network via DHCP
- A working, non-proxied, internet connection
<a name="installation"></a>
# Installation
The installation of T-Pot is straightforward, but it depends heavily on a working, transparent and non-proxied internet connection. Otherwise the installation **will fail!**

Firstly, decide if you want to download our prebuilt installation ISO image from [GitHub](https://github.com/dtag-dev-sec/tpotce/releases) ***or*** [create it yourself](#createiso).

Secondly, decide where you want to let the system run: [real hardware](#hardware) or in a [virtual machine](#vm)?
<a name="prebuilt"></a>
## Prebuilt ISO Image
We provide an installation ISO image for download (~50MB), which is created using the same [tool](https://github.com/dtag-dev-sec/tpotce) you can use yourself in order to create your own image. It will basically just save you some time downloading components and creating the ISO image.

You can download the prebuilt installation image from [GitHub](https://github.com/dtag-dev-sec/tpotce/releases) and jump to the [installation](#vm) section.
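Whichever way you obtain the image, it is good practice to verify it against its SHA256 checksum before use. A minimal sketch — the generated files below are stand-ins; substitute the real `tpot.iso` and the checksum file published with the release:

```shell
# Stand-in demo of checksum verification; replace the generated files with
# the real tpot.iso and its published tpot.sha256 from the release page.
printf 'demo image' > tpot.iso
sha256sum tpot.iso > tpot.sha256

# Verifies the image against the checksum file; prints "tpot.iso: OK" on success.
sha256sum -c tpot.sha256
```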
<a name="createiso"></a>
## Create your own ISO Image
**How to create the ISO image:**

1. Clone the repository and enter it.
```
git clone https://github.com/dtag-dev-sec/tpotce
cd tpotce
```
2. Invoke the script that builds the ISO image.
The script will download and install dependencies necessary to build the image on the invoking machine. It will further download the Ubuntu network installer image (~50MB) which T-Pot is based on.
```
sudo ./makeiso.sh
```
After a successful build, you will find the ISO image `tpot.iso` along with a SHA256 checksum `tpot.sha256` in your directory.
<a name="vm"></a>
It is important to make sure you meet the [system requirements](#requirements) and assign a virtual harddisk >=64 GB, >=4 GB RAM and bridged networking to T-Pot.

You need to enable promiscuous mode for the network interface for suricata and p0f to work properly. Make sure you enable it during configuration.
If you want to use a wifi card as the primary NIC for T-Pot, please be aware that not all network interface drivers support all wireless cards. E.g. in VirtualBox, you then have to choose the *"MT SERVER"* model of the NIC.

Lastly, mount the `tpot.iso` ISO to the VM and continue with the installation.<br>
<a name="firstrun"></a>
## First Run
The installation requires very little interaction; only a locale and keyboard setting have to be answered for the basic Linux installation. The system will reboot, so please maintain an active internet connection. The T-Pot installer will start and ask you for an installation type, a password for the **tsec** user and credentials for a **web user**. Everything else will be configured automatically. All docker images and other components will be downloaded. Depending on your network connection and the chosen installation type, the installation may take some time. During our tests (50Mbit down, 10Mbit up), the installation is usually finished within a 30 minute timeframe.

Once the installation is finished, the system will automatically reboot and you will be presented with the T-Pot login screen. On the console you may login with the **tsec** user:
- user: **tsec**
- pass: **password you chose during the installation**
You can also login from your browser: ``https://<your.ip>:64297``
<a name="placement"></a>
# System Placement
Make sure your system is reachable through the internet. Otherwise it will not capture any attacks, other than the ones from your internal network! We recommend you put it in an unfiltered zone, where all TCP and UDP traffic is forwarded to T-Pot's network interface.
A list of all relevant ports is available as part of the [Technical Concept](#concept).
<br>
Basically, you can forward as many TCP ports as you want, as honeytrap dynamically binds any TCP port that is not covered by the other honeypot daemons.
In case you need external SSH access, forward TCP port 64295 to T-Pot, see below.
In case you need external web access, forward TCP port 64297 to T-Pot, see below.

T-Pot requires outgoing git, http and https connections for updates (Ubuntu, Docker, GitHub, PyPi) and attack submission (ewsposter, hpfeeds). Ports and availability may vary based on your geographical location.
<a name="options"></a>
# Options
If you do not have a SSH client at hand and still want to access the machine via SSH, you can do so through the web interface. Connect to ``https://<your.ip>:64297``, enter
- user: **user you chose during the installation**
- pass: **password you chose during the installation**

and choose **WebTTY** from the navigation bar. You will be prompted to allow access for this connection and enter the password for the user **tsec**.

![WebTTY](https://raw.githubusercontent.com/dtag-dev-sec/tpotce/17.06/doc/webssh.png)
<a name="kibana"></a>
## Kibana Dashboard
Just open a web browser and connect to `https://<your.ip>:64297`, enter
- user: **user you chose during the installation**
- pass: **password you chose during the installation**

and **Kibana** will automagically load. The Kibana dashboard can be customized to fit your needs. By default, we haven't added any filtering, because the filters depend on your setup. E.g. you might want to filter out your incoming administrative ssh connections and connections to update servers.

![Dashboard](https://raw.githubusercontent.com/dtag-dev-sec/tpotce/17.06/doc/dashboard.png)
<a name="tools"></a>
## Tools
We included some web based management tools to improve and ease up on your daily tasks.

![ES Head Plugin](https://raw.githubusercontent.com/dtag-dev-sec/tpotce/17.06/doc/headplugin.png)
![Netdata](https://raw.githubusercontent.com/dtag-dev-sec/tpotce/17.06/doc/netdata.png)
![Portainer](https://raw.githubusercontent.com/dtag-dev-sec/tpotce/17.06/doc/dockerui.png)
![Spiderfoot](https://raw.githubusercontent.com/dtag-dev-sec/tpotce/17.06/doc/spiderfoot.png)
<a name="maintenance"></a>
## Maintenance
As mentioned before, the system was designed to be low maintenance. Basically, there is nothing you have to do but let it run.

If you run into any problems, a reboot may fix it :bowtie:

If new versions of the components involved appear, we will test them and build new docker images. Those new docker images will be pushed to docker hub and downloaded to T-Pot and activated accordingly.
<a name="submission"></a>
## Community Data Submission
We provide T-Pot in order to make it accessible to all parties interested in honeypot deployment. By default, the data captured is submitted to a community backend. This community backend uses the data to feed a [community data view](http://sicherheitstacho.eu/?peers=communityPeers), a separate channel on our own [Sicherheitstacho](http://sicherheitstacho.eu), which is powered by our own set of honeypots.

You may opt out of the submission to our community server by removing the `# Ewsposter service` section from `/etc/tpot/tpot.yml`:
1. Stop T-Pot services: `systemctl stop tpot`
2. Remove Ewsposter service: `vi /etc/tpot/tpot.yml`
3. Remove the following lines, save and exit vi (`:x!`):<br>
```
# Ewsposter service
ewsposter:
container_name: ewsposter
restart: always
networks:
- ewsposter_local
image: "dtagdevsec/ewsposter:1710"
volumes:
- /data:/data
- /data/ews/conf/ews.ip:/opt/ewsposter/ews.ip
```
4. Start T-Pot services: `systemctl start tpot`
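If you prefer to script step 3 instead of editing in vi, a range delete with `sed` can do it. Below is a sketch against a stand-in file; the real file is `/etc/tpot/tpot.yml`, and the `# Next service` end marker is hypothetical — check which comment line actually follows the Ewsposter block in your copy before running anything like this:

```shell
# Stand-in for /etc/tpot/tpot.yml; the service names below are illustrative.
cat > tpot.yml <<'EOF'
services:
# Ewsposter service
 ewsposter:
   container_name: ewsposter
# Next service
 other:
   container_name: other
EOF

# Delete everything from the Ewsposter marker up to (but not including)
# the next section marker, then show the result.
sed -i '/# Ewsposter service/,/^# Next service/{/^# Next service/!d}' tpot.yml
cat tpot.yml
```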
Data is submitted in a structured ews-format, an XML structure. Hence, you can parse out the information that is relevant to you.
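The exact EWS schema is not reproduced here, so the snippet below assumes a simplified, hypothetical alert layout purely to illustrate pulling fields out of the XML with GNU grep's Perl-compatible look-behind matches:

```shell
# Hypothetical EWS-style alert; the real schema may differ.
cat > alert.xml <<'EOF'
<EWS-SimpleMessage>
  <Alert>
    <Source>cowrie</Source>
    <Address>203.0.113.7</Address>
  </Alert>
</EWS-SimpleMessage>
EOF

# Extract element contents via look-behind matches (requires GNU grep -P).
grep -oP '(?<=<Source>)[^<]+' alert.xml    # -> cowrie
grep -oP '(?<=<Address>)[^<]+' alert.xml   # -> 203.0.113.7
```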
We encourage you not to disable the data submission as it is the main purpose of the community approach - as you all know **sharing is caring** 😍
<a name="roadmap"></a>
# Roadmap
As with every development there is always room for improvements ...

- Bump ELK-stack to 6.x
- Introduce new honeypots
- Include automatic updates
Some features may be provided with updated docker images, others may require some hands-on work from your side.
You are always invited to participate in development on our GitHub page.
- You install and you run within your responsibility. Choose your deployment wisely as a system compromise can never be ruled out.
- Honeypots should - by design - not host any sensitive data. Make sure you don't add any.
- By default, your data is submitted to the community dashboard. You can disable this in the config. But hey, wouldn't it be better to contribute to the community?
<a name="faq"></a>
# FAQ
For general feedback you can write to cert @ telekom.de.
<a name="licenses"></a>
# Licenses
The software that T-Pot is built on uses the following licenses.
<br>GPLv2: [conpot (by Lukas Rist)](https://github.com/mushorg/conpot/blob/master/LICENSE.txt), [dionaea](https://github.com/DinoTools/dionaea/blob/master/LICENSE), [honeytrap (by Tillmann Werner)](https://github.com/armedpot/honeytrap/blob/master/LICENSE), [suricata](http://suricata-ids.org/about/open-source/)
<br>GPLv3: [elasticpot (by Markus Schmall)](https://github.com/schmalle/ElasticPot), [emobility (by Mohamad Sbeiti)](https://github.com/dtag-dev-sec/emobility/blob/master/LICENSE), [ewsposter (by Markus Schroer)](https://github.com/dtag-dev-sec/ews/), [glastopf (by Lukas Rist)](https://github.com/glastopf/glastopf/blob/master/GPL), [rdpy](https://github.com/citronneur/rdpy/blob/master/LICENSE), [netdata](https://github.com/firehol/netdata/blob/master/LICENSE.md)
<br>Apache 2 License: [elasticsearch](https://github.com/elasticsearch/elasticsearch/blob/master/LICENSE.txt), [logstash](https://github.com/elasticsearch/logstash/blob/master/LICENSE), [kibana](https://github.com/elasticsearch/kibana/blob/master/LICENSE.md), [docker](https://github.com/docker/docker/blob/master/LICENSE), [elasticsearch-head](https://github.com/mobz/elasticsearch-head/blob/master/LICENCE)
<br>MIT License: [ctop](https://github.com/bcicen/ctop/blob/master/LICENSE), [wetty](https://github.com/krishnasrinivas/wetty/blob/master/LICENSE)
<br>zlib License: [vnclowpot](https://github.com/magisterquis/vnclowpot/blob/master/LICENSE)
<br>[cowrie (copyright disclaimer by Upi Tamminen)](https://github.com/micheloosterhof/cowrie/blob/master/doc/COPYRIGHT)
<br>[mailoney](https://github.com/awhitehatter/mailoney)
<br>[Ubuntu licensing](http://www.ubuntu.com/about/about-ubuntu/licensing)
<br>[Portainer](https://github.com/portainer/portainer/blob/develop/LICENSE)
# Credits
Without open source and the fruitful development community we are proud to be a part of, T-Pot would not have been possible. Our thanks are extended but not limited to the following people and organizations:

### The developers and development communities of

* [conpot](https://github.com/mushorg/conpot/graphs/contributors)
* [cowrie](https://github.com/micheloosterhof/cowrie/graphs/contributors)
* [emobility](https://github.com/dtag-dev-sec/emobility/graphs/contributors)
* [ewsposter](https://github.com/armedpot/ewsposter/graphs/contributors)
* [glastopf](https://github.com/mushorg/glastopf/graphs/contributors)
* [honeytrap](https://github.com/armedpot/honeytrap/graphs/contributors)
* [kibana](https://github.com/elastic/kibana/graphs/contributors)
* [logstash](https://github.com/elastic/logstash/graphs/contributors)
* [mailoney](https://github.com/awhitehatter/mailoney)
* [netdata](https://github.com/firehol/netdata/graphs/contributors)
* [p0f](http://lcamtuf.coredump.cx/p0f3/)
* [portainer](https://github.com/portainer/portainer/graphs/contributors)
* [rdpy](https://github.com/citronneur/rdpy)
* [spiderfoot](https://github.com/smicallef/spiderfoot)
* [suricata](https://github.com/inliniac/suricata/graphs/contributors)
* [ubuntu](http://www.ubuntu.com/)
* [vnclowpot](https://github.com/magisterquis/vnclowpot)
* [wetty](https://github.com/krishnasrinivas/wetty/graphs/contributors)
### The following companies and organizations

* [canonical](http://www.canonical.com/)
* [docker](https://www.docker.com/)
* [elastic.io](https://www.elastic.co/)
<a name="staytuned"></a>
# Stay tuned ...
We will be releasing a new version of T-Pot about every 6-12 months.
<a name="funfact"></a>
# Fun Fact
Coffee just does not cut it anymore, which is why we needed a different caffeine source and consumed *215* bottles of [Club Mate](https://de.wikipedia.org/wiki/Club-Mate) during the development of T-Pot 17.10 😇

#!/bin/bash
# Backup all ES relevant folders
# Make sure ES is available
myES="http://127.0.0.1:64298/"
myESSTATUS=$(curl -s -XGET ''$myES'_cluster/health' | jq '.' | grep -c green)
if ! [ "$myESSTATUS" = "1" ]
then
echo "### Elasticsearch is not available, try starting via 'systemctl start elk'."
exit
else
echo "### Elasticsearch is available, now continuing."
echo
fi
# Set vars
myCOUNT=1
myDATE=$(date +%Y%m%d%H%M)
myELKPATH="/data/elk/data"
myKIBANAINDEXNAME=$(curl -s -XGET ''$myES'_cat/indices/' | grep .kibana | awk '{ print $4 }')
myKIBANAINDEXPATH=$myELKPATH/nodes/0/indices/$myKIBANAINDEXNAME
# Let's ensure normal operation on exit or if interrupted ...
function fuCLEANUP {
### Start ELK
systemctl start tpot
echo "### Now starting T-Pot ..."
}
trap fuCLEANUP EXIT
# Stop T-Pot to lift db lock
echo "### Now stopping T-Pot"
systemctl stop tpot
sleep 2
# Backup DB in 2 flavors
echo "### Now backing up Elasticsearch folders ..."
tar cvfz "elkall_"$myDATE".tgz" $myELKPATH
tar cvfz "elkbase_"$myDATE".tgz" $myKIBANAINDEXPATH

#!/bin/bash
# T-Pot Container Data Cleaner & Log Rotator

# Set colors
myRED=""
myGREEN=""
myWHITE=""

# Set persistence
myPERSISTENCE=$1

# Let's create a function to check if folder is empty
fuEMPTY () {
  local myFOLDER=$1

  echo $(ls $myFOLDER | wc -l)
}

# Let's create a function to rotate and compress logs
fuLOGROTATE () {
  local mySTATUS="/etc/tpot/logrotate/status"
  local myCONF="/etc/tpot/logrotate/logrotate.conf"
  local myCOWRIETTYLOGS="/data/cowrie/log/tty/"
  local myCOWRIETTYTGZ="/data/cowrie/log/ttylogs.tgz"
  local myCOWRIEDL="/data/cowrie/downloads/"
  local myCOWRIEDLTGZ="/data/cowrie/downloads.tgz"
  local myDIONAEABI="/data/dionaea/bistreams/"
  local myDIONAEABITGZ="/data/dionaea/bistreams.tgz"
  local myDIONAEABIN="/data/dionaea/binaries/"
  local myDIONAEABINTGZ="/data/dionaea/binaries.tgz"
  local myHONEYTRAPATTACKS="/data/honeytrap/attacks/"
  local myHONEYTRAPATTACKSTGZ="/data/honeytrap/attacks.tgz"
  local myHONEYTRAPDL="/data/honeytrap/downloads/"
  local myHONEYTRAPDLTGZ="/data/honeytrap/downloads.tgz"

  # Ensure correct permissions and ownerships for logrotate to run without issues
  chmod 760 /data/ -R
  chown tpot:tpot /data -R

  # Run logrotate with force (-f) first, so the status file can be written and race conditions (with tar) be avoided
  logrotate -f -s $mySTATUS $myCONF

  # Compressing some folders first and rotate them later
  if [ "$(fuEMPTY $myCOWRIETTYLOGS)" != "0" ]; then tar cvfz $myCOWRIETTYTGZ $myCOWRIETTYLOGS; fi
  if [ "$(fuEMPTY $myCOWRIEDL)" != "0" ]; then tar cvfz $myCOWRIEDLTGZ $myCOWRIEDL; fi
  if [ "$(fuEMPTY $myDIONAEABI)" != "0" ]; then tar cvfz $myDIONAEABITGZ $myDIONAEABI; fi
  if [ "$(fuEMPTY $myDIONAEABIN)" != "0" ]; then tar cvfz $myDIONAEABINTGZ $myDIONAEABIN; fi
  if [ "$(fuEMPTY $myHONEYTRAPATTACKS)" != "0" ]; then tar cvfz $myHONEYTRAPATTACKSTGZ $myHONEYTRAPATTACKS; fi
  if [ "$(fuEMPTY $myHONEYTRAPDL)" != "0" ]; then tar cvfz $myHONEYTRAPDLTGZ $myHONEYTRAPDL; fi

  # Ensure correct permissions and ownership for previously created archives
  chmod 760 $myCOWRIETTYTGZ $myCOWRIEDLTGZ $myDIONAEABITGZ $myDIONAEABINTGZ $myHONEYTRAPATTACKSTGZ $myHONEYTRAPDLTGZ
  chown tpot:tpot $myCOWRIETTYTGZ $myCOWRIEDLTGZ $myDIONAEABITGZ $myDIONAEABINTGZ $myHONEYTRAPATTACKSTGZ $myHONEYTRAPDLTGZ

  # Need to remove subfolders since too many files cause rm to exit with errors
  rm -rf $myCOWRIETTYLOGS $myCOWRIEDL $myDIONAEABI $myDIONAEABIN $myHONEYTRAPATTACKS $myHONEYTRAPDL

  # Recreate subfolders with correct permissions and ownership
  mkdir -p $myCOWRIETTYLOGS $myCOWRIEDL $myDIONAEABI $myDIONAEABIN $myHONEYTRAPATTACKS $myHONEYTRAPDL
  chmod 760 $myCOWRIETTYLOGS $myCOWRIEDL $myDIONAEABI $myDIONAEABIN $myHONEYTRAPATTACKS $myHONEYTRAPDL
  chown tpot:tpot $myCOWRIETTYLOGS $myCOWRIEDL $myDIONAEABI $myDIONAEABIN $myHONEYTRAPATTACKS $myHONEYTRAPDL

  # Run logrotate again to account for previously created archives - DO NOT FORCE HERE!
  logrotate -s $mySTATUS $myCONF
}

# Let's create a function to clean up and prepare conpot data
fuCONPOT () {
  if [ "$myPERSISTENCE" != "on" ]; then rm -rf /data/conpot/*; fi
  mkdir -p /data/conpot/log
  chmod 760 /data/conpot -R
  chown tpot:tpot /data/conpot -R
}

# Let's create a function to clean up and prepare cowrie data
fuCOWRIE () {
  if [ "$myPERSISTENCE" != "on" ]; then rm -rf /data/cowrie/*; fi
  mkdir -p /data/cowrie/log/tty/ /data/cowrie/downloads/ /data/cowrie/keys/ /data/cowrie/misc/
chmod 760 /data/cowrie -R chmod 760 /data/cowrie -R
chown tpot:tpot /data/cowrie -R chown tpot:tpot /data/cowrie -R
@ -35,8 +82,7 @@ fuCOWRIE () {
# Let's create a function to clean up and prepare dionaea data # Let's create a function to clean up and prepare dionaea data
fuDIONAEA () { fuDIONAEA () {
rm -rf /data/dionaea/* if [ "$myPERSISTENCE" != "on" ]; then rm -rf /data/dionaea/*; fi
rm /data/ews/dionaea/ews.json
mkdir -p /data/dionaea/log /data/dionaea/bistreams /data/dionaea/binaries /data/dionaea/rtp /data/dionaea/roots/ftp /data/dionaea/roots/tftp /data/dionaea/roots/www /data/dionaea/roots/upnp mkdir -p /data/dionaea/log /data/dionaea/bistreams /data/dionaea/binaries /data/dionaea/rtp /data/dionaea/roots/ftp /data/dionaea/roots/tftp /data/dionaea/roots/www /data/dionaea/roots/upnp
chmod 760 /data/dionaea -R chmod 760 /data/dionaea -R
chown tpot:tpot /data/dionaea -R chown tpot:tpot /data/dionaea -R
@ -44,7 +90,7 @@ fuDIONAEA () {
# Let's create a function to clean up and prepare elasticpot data # Let's create a function to clean up and prepare elasticpot data
fuELASTICPOT () { fuELASTICPOT () {
rm -rf /data/elasticpot/* if [ "$myPERSISTENCE" != "on" ]; then rm -rf /data/elasticpot/*; fi
mkdir -p /data/elasticpot/log mkdir -p /data/elasticpot/log
chmod 760 /data/elasticpot -R chmod 760 /data/elasticpot -R
chown tpot:tpot /data/elasticpot -R chown tpot:tpot /data/elasticpot -R
@ -54,24 +100,23 @@ fuELASTICPOT () {
fuELK () { fuELK () {
# ELK data will be kept for <= 90 days, check /etc/crontab for curator modification # ELK data will be kept for <= 90 days, check /etc/crontab for curator modification
# ELK daemon log files will be removed # ELK daemon log files will be removed
rm -rf /data/elk/log/* if [ "$myPERSISTENCE" != "on" ]; then rm -rf /data/elk/log/*; fi
mkdir -p /data/elk/logstash/conf mkdir -p /data/elk
chmod 760 /data/elk -R chmod 760 /data/elk -R
chown tpot:tpot /data/elk -R chown tpot:tpot /data/elk -R
} }
# Let's create a function to clean up and prepare emobility data # Let's create a function to clean up and prepare emobility data
fuEMOBILITY () { fuEMOBILITY () {
rm -rf /data/emobility/* if [ "$myPERSISTENCE" != "on" ]; then rm -rf /data/emobility/*; fi
rm /data/ews/emobility/ews.json mkdir -p /data/emobility/log
mkdir -p /data/emobility/log /data/ews/emobility
chmod 760 /data/emobility -R chmod 760 /data/emobility -R
chown tpot:tpot /data/emobility -R chown tpot:tpot /data/emobility -R
} }
# Let's create a function to clean up and prepare glastopf data # Let's create a function to clean up and prepare glastopf data
fuGLASTOPF () { fuGLASTOPF () {
rm -rf /data/glastopf/* if [ "$myPERSISTENCE" != "on" ]; then rm -rf /data/glastopf/*; fi
mkdir -p /data/glastopf mkdir -p /data/glastopf
chmod 760 /data/glastopf -R chmod 760 /data/glastopf -R
chown tpot:tpot /data/glastopf -R chown tpot:tpot /data/glastopf -R
@ -79,46 +124,96 @@ fuGLASTOPF () {
# Let's create a function to clean up and prepare honeytrap data # Let's create a function to clean up and prepare honeytrap data
fuHONEYTRAP () { fuHONEYTRAP () {
rm -rf /data/honeytrap/* if [ "$myPERSISTENCE" != "on" ]; then rm -rf /data/honeytrap/*; fi
mkdir -p /data/honeytrap/log/ /data/honeytrap/attacks/ /data/honeytrap/downloads/ mkdir -p /data/honeytrap/log/ /data/honeytrap/attacks/ /data/honeytrap/downloads/
chmod 760 /data/honeytrap/ -R chmod 760 /data/honeytrap/ -R
chown tpot:tpot /data/honeytrap/ -R chown tpot:tpot /data/honeytrap/ -R
} }
# Let's create a function to clean up and prepare mailoney data
fuMAILONEY () {
if [ "$myPERSISTENCE" != "on" ]; then rm -rf /data/mailoney/*; fi
mkdir -p /data/mailoney/log/
chmod 760 /data/mailoney/ -R
chown tpot:tpot /data/mailoney/ -R
}
# Let's create a function to clean up and prepare rdpy data
fuRDPY () {
if [ "$myPERSISTENCE" != "on" ]; then rm -rf /data/rdpy/*; fi
mkdir -p /data/rdpy/log/
chmod 760 /data/rdpy/ -R
chown tpot:tpot /data/rdpy/ -R
}
# Let's create a function to prepare spiderfoot db
fuSPIDERFOOT () {
mkdir -p /data/spiderfoot
touch /data/spiderfoot/spiderfoot.db
chmod 760 -R /data/spiderfoot
chown tpot:tpot -R /data/spiderfoot
}
# Let's create a function to clean up and prepare suricata data # Let's create a function to clean up and prepare suricata data
fuSURICATA () { fuSURICATA () {
rm -rf /data/suricata/* if [ "$myPERSISTENCE" != "on" ]; then rm -rf /data/suricata/*; fi
mkdir -p /data/suricata/log mkdir -p /data/suricata/log
chmod 760 -R /data/suricata chmod 760 -R /data/suricata
chown tpot:tpot -R /data/suricata chown tpot:tpot -R /data/suricata
} }
case $1 in # Let's create a function to clean up and prepare p0f data
conpot) fuP0F () {
fuCONPOT $1 if [ "$myPERSISTENCE" != "on" ]; then rm -rf /data/p0f/*; fi
;; mkdir -p /data/p0f/log
cowrie) chmod 760 -R /data/p0f
fuCOWRIE $1 chown tpot:tpot -R /data/p0f
;; }
dionaea)
fuDIONAEA $1 # Let's create a function to clean up and prepare vnclowpot data
;; fuVNCLOWPOT () {
elasticpot) if [ "$myPERSISTENCE" != "on" ]; then rm -rf /data/vnclowpot/*; fi
fuELASTICPOT $1 mkdir -p /data/vnclowpot/log/
;; chmod 760 /data/vnclowpot/ -R
elk) chown tpot:tpot /data/vnclowpot/ -R
fuELK $1 }
;;
emobility)
fuEMOBILITY $1 # Avoid unwanted cleaning
;; if [ "$myPERSISTENCE" = "" ];
glastopf) then
fuGLASTOPF $1 echo $myRED"!!! WARNING !!! - This will delete ALL honeypot logs. "$myWHITE
;; while [ "$myQST" != "y" ] && [ "$myQST" != "n" ];
honeytrap) do
fuHONEYTRAP $1 read -p "Continue? (y/n) " myQST
;; done
suricata) if [ "$myQST" = "n" ];
fuSURICATA $1 then
;; echo $myGREEN"Puuh! That was close! Aborting!"$myWHITE
esac exit
fi
fi
# Check persistence, if enabled compress and rotate logs
if [ "$myPERSISTENCE" = "on" ];
then
echo "Persistence enabled, now rotating and compressing logs."
fuLOGROTATE
else
echo "Cleaning up and preparing data folders."
fuCONPOT
fuCOWRIE
fuDIONAEA
fuELASTICPOT
fuELK
fuEMOBILITY
fuGLASTOPF
fuHONEYTRAP
fuMAILONEY
fuRDPY
fuSPIDERFOOT
fuSURICATA
fuP0F
fuVNCLOWPOT
fi
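The rewritten clean.sh hinges on the small fuEMPTY helper: a folder is only archived when it actually contains files, so rotation never produces empty tarballs. A standalone sketch of that pattern, run against a throwaway mktemp folder instead of the real /data tree (the demo paths are made up):

```shell
#!/bin/bash
# fuEMPTY as in clean.sh: report the number of entries in a folder,
# so callers can skip archiving empty ones.
fuEMPTY () {
  local myFOLDER=$1
  echo $(ls "$myFOLDER" | wc -l)
}

myDEMO=$(mktemp -d)                    # throwaway stand-in for /data/<honeypot>
if [ "$(fuEMPTY $myDEMO)" != "0" ]; then
  tar cfz "$myDEMO.tgz" "$myDEMO"      # skipped: folder is still empty
fi
touch "$myDEMO/attack.log"
if [ "$(fuEMPTY $myDEMO)" != "0" ]; then
  tar cfz "$myDEMO.tgz" "$myDEMO"      # runs now: one file present
fi
```

In clean.sh the same guard sits in front of the cowrie, dionaea and honeytrap tarballs before the second, unforced logrotate run picks them up.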

@@ -1,76 +0,0 @@
#!/bin/bash
########################################################
# T-Pot #
# Container and services restart script #
# #
# v16.10.0 by mo, DTAG, 2016-05-12 #
########################################################
myCOUNT=1
while true
do
if ! [ -a /var/run/check.lock ];
then break
fi
sleep 0.1
if [ "$myCOUNT" = "1" ];
then
echo -n "Waiting for services "
else echo -n .
fi
if [ "$myCOUNT" = "6000" ];
then
echo
echo "Overriding check.lock"
rm /var/run/check.lock
break
fi
myCOUNT=$[$myCOUNT +1]
done
myIMAGES=$(cat /data/images.conf)
touch /var/run/check.lock
myUPTIME=$(awk '{print int($1/60)}' /proc/uptime)
if [ $myUPTIME -gt 4 ];
then
for i in $myIMAGES
do
systemctl stop $i
done
echo "### Waiting 10 seconds before restarting docker ..."
sleep 10
iptables -w -F
systemctl restart docker
while true
do
docker info > /dev/null
if [ $? -ne 0 ];
then
echo Docker daemon is still starting.
else
echo Docker daemon is now available.
break
fi
sleep 0.1
done
echo "### Docker is now up and running again."
echo "### Removing obsolete container data ..."
docker rm -v $(docker ps -aq)
echo "### Removing obsolete image data ..."
docker rmi $(docker images | grep "<none>" | awk '{print $3}')
echo "### Starting T-Pot services ..."
for i in $myIMAGES
do
systemctl start $i
done
sleep 5
else
echo "### T-Pot needs to be up and running for at least 5 minutes."
fi
rm /var/run/check.lock
/etc/rc.local
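The removed restart script waited on /var/run/check.lock with a counting poll loop and, after enough tries, overrode a stale lock. The pattern in isolation, shrunk to 3 polls instead of 6000 and wrapped in a hypothetical fuWAITLOCK helper:

```shell
#!/bin/bash
# Minimal sketch of the check.lock wait loop (helper name is made up):
# poll for the lock file, print a progress dot, remove a stale lock after
# 3 attempts instead of the original 6000.
fuWAITLOCK () {
  local myLOCK="$1"
  local myCOUNT=1
  while [ -e "$myLOCK" ]
  do
    echo -n .
    if [ "$myCOUNT" = "3" ];
      then
        rm -f "$myLOCK"        # override the stale lock, as the original did
        break
    fi
    myCOUNT=$((myCOUNT + 1))
    sleep 0.1
  done
  echo " lock released"
}

myLOCK=$(mktemp)               # simulate a stale lock that nobody releases
fuWAITLOCK "$myLOCK"
```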

@@ -1,32 +1,71 @@
 #/bin/bash
-stty -echo -icanon time 0 min 0
-myIMAGES=$(cat /data/images.conf)
+# Show current status of all running containers
+myPARAM="$1"
+myIMAGES="$(cat /etc/tpot/tpot.yml | grep -v '#' | grep container_name | cut -d: -f2)"
+myRED=""
+myGREEN=""
+myBLUE=""
+myWHITE=""
+myMAGENTA=""
+function fuCONTAINERSTATUS {
+  local myNAME="$1"
+  local mySTATUS="$(/usr/bin/docker ps -f name=$myNAME --format "table {{.Status}}" -f status=running -f status=exited | tail -n 1)"
+  myDOWN="$(echo "$mySTATUS" | grep -o -E "(STATUS|NAMES|Exited)")"
+  case "$myDOWN" in
+    STATUS)
+      mySTATUS="$myRED"DOWN"$myWHITE"
+      ;;
+    NAMES)
+      mySTATUS="$myRED"DOWN"$myWHITE"
+      ;;
+    Exited)
+      mySTATUS="$myRED$mySTATUS$myWHITE"
+      ;;
+    *)
+      mySTATUS="$myGREEN$mySTATUS$myWHITE"
+      ;;
+  esac
+  printf "$mySTATUS"
+}
+function fuCONTAINERPORTS {
+  local myNAME="$1"
+  local myPORTS="$(/usr/bin/docker ps -f name=$myNAME --format "table {{.Ports}}" -f status=running -f status=exited | tail -n 1 | sed s/","/",\n\t\t\t\t\t\t\t"/g)"
+  if [ "$myPORTS" != "PORTS" ];
+    then
+      printf "$myBLUE$myPORTS$myWHITE"
+  fi
+}
+function fuGETSYS {
+  printf "========| System |========\n"
+  printf "%+10s %-20s\n" "Date: " "$(date)"
+  printf "%+10s %-20s\n" "Uptime: " "$(uptime | cut -b 2-)"
+  printf "%+10s %-20s\n" "CPU temp: " "$(sensors | grep 'Physical' | awk '{ print $4" " }' | tr -d [:cntrl:])"
+  echo
+}
 while true
 do
-  clear
-  echo "======| System |======"
-  echo Date:" "$(date)
-  echo Uptime:" "$(uptime)
-  echo CPU temp: $(sensors | grep "Physical" | awk '{ print $4 }')
-  echo
-  echo "NAME CREATED PORTS"
+  fuGETSYS
+  printf "%-19s %-36s %s\n" "NAME" "STATUS" "PORTS"
   for i in $myIMAGES; do
-    /usr/bin/docker ps -f name=$i --format "table {{.Names}}\t{{.Status}}\t{{.Ports}}" -f status=running -f status=exited | GREP_COLORS='mt=01;35' /bin/egrep --color=always "(^[_a-z-]+ |$)|$" | GREP_COLORS='mt=01;32' /bin/egrep --color=always "(Up[ 0-9a-Z ]+ |$)|$" | GREP_COLORS='mt=01;31' /bin/egrep --color=always "(Exited[ \(0-9\) ]+ [0-9a-Z ]+ ago|$)|$" | tail -n 1
-    if [ "$1" = "vv" ];
+    myNAME="$myMAGENTA$i$myWHITE"
+    printf "%-32s %-49s %s" "$myNAME" "$(fuCONTAINERSTATUS $i)" "$(fuCONTAINERPORTS $i)"
+    echo
+    if [ "$myPARAM" = "vv" ];
       then
-        /usr/bin/docker exec -t $i /bin/ps -awfuwfxwf | egrep -v -E "awfuwfxwf|/bin/ps"
+        /usr/bin/docker exec -t "$i" /bin/ps awfuwfxwf | egrep -v -E "awfuwfxwf|/bin/ps"
     fi
   done
-  if [[ $1 =~ ^([1-9]|[1-9][0-9]|[1-9][0-9][0-9])$ ]];
+  if [[ $myPARAM =~ ^([1-9]|[1-9][0-9]|[1-9][0-9][0-9])$ ]];
     then
-      sleep $1
+      sleep "$myPARAM"
     else
       break
   fi
-  read myKEY
-  if [ "$myKEY" == "q" ];
-    then
-      break;
-  fi
 done
-stty sane
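The new fuCONTAINERSTATUS relies on a quirk of `docker ps` filtering: when the named container does not exist, only the table header survives `tail -n 1`, so seeing `STATUS` or `NAMES` means the container is down. The classification can be exercised without Docker by feeding canned status strings to a stand-in function (fuCLASSIFY is a hypothetical name, colors omitted):

```shell
#!/bin/bash
# Stand-in for fuCONTAINERSTATUS: classify a captured `docker ps` status line.
# "STATUS"/"NAMES" means only the table header survived tail -n 1, i.e. the
# container is absent; anything else is passed through (Exited/Up lines).
fuCLASSIFY () {
  local mySTATUS="$1"
  local myDOWN="$(echo "$mySTATUS" | grep -o -E "(STATUS|NAMES|Exited)")"
  case "$myDOWN" in
    STATUS|NAMES)
      echo "DOWN"            # dps.sh would print this in red
      ;;
    *)
      echo "$mySTATUS"       # red if Exited, green otherwise
      ;;
  esac
}

fuCLASSIFY "STATUS"                  # -> DOWN
fuCLASSIFY "Exited (0) 2 hours ago"  # -> Exited (0) 2 hours ago
fuCLASSIFY "Up 2 hours"              # -> Up 2 hours
```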

installer/bin/dump_es.sh (new executable file, 45 lines)

@@ -0,0 +1,45 @@
#/bin/bash
# Dump all ES data
# Make sure ES is available
myES="http://127.0.0.1:64298/"
myESSTATUS=$(curl -s -XGET ''$myES'_cluster/health' | jq '.' | grep -c green)
if ! [ "$myESSTATUS" = "1" ]
then
echo "### Elasticsearch is not available, try starting via 'systemctl start elk'."
exit
else
echo "### Elasticsearch is available, now continuing."
echo
fi
# Let's ensure normal operation on exit or if interrupted ...
function fuCLEANUP {
rm -rf tmp
}
trap fuCLEANUP EXIT
# Set vars
myDATE=$(date +%Y%m%d%H%M)
myINDICES=$(curl -s -XGET ''$myES'_cat/indices/' | grep logstash | awk '{ print $3 }' | sort | grep -v 1970)
myES="http://127.0.0.1:64298/"
myCOL1=""
myCOL0=""
# Dumping all ES data
echo $myCOL1"### The following indices will be dumped: "$myCOL0
echo $myINDICES
echo
mkdir tmp
for i in $myINDICES;
do
echo $myCOL1"### Now dumping: "$i $myCOL0
elasticdump --input=$myES$i --output="tmp/"$i --limit 7500
echo $myCOL1"### Now compressing: tmp/$i" $myCOL0
gzip -f "tmp/"$i
done;
# Build tar archive
echo $myCOL1"### Now building tar archive: es_dump_"$myDATE".tgz" $myCOL0
tar cvf es_dump_$myDATE.tar tmp/*
echo $myCOL1"### Done."$myCOL0

@@ -0,0 +1,77 @@
#!/bin/bash
# Export all Kibana objects
# Make sure ES is available
myES="http://127.0.0.1:64298/"
myESSTATUS=$(curl -s -XGET ''$myES'_cluster/health' | jq '.' | grep -c green)
if ! [ "$myESSTATUS" = "1" ]
then
echo "### Elasticsearch is not available, try starting via 'systemctl start elk'."
exit
else
echo "### Elasticsearch is available, now continuing."
echo
fi
# Set vars
myDATE=$(date +%Y%m%d%H%M)
myINDEXCOUNT=$(curl -s -XGET ''$myES'.kibana/index-pattern/logstash-*' | tr '\\' '\n' | grep "scripted" | wc -w)
myDASHBOARDS=$(curl -s -XGET ''$myES'.kibana/dashboard/_search?filter_path=hits.hits._id&pretty&size=10000' | jq '.hits.hits[] | {_id}' | jq -r '._id')
myVISUALIZATIONS=$(curl -s -XGET ''$myES'.kibana/visualization/_search?filter_path=hits.hits._id&pretty&size=10000' | jq '.hits.hits[] | {_id}' | jq -r '._id')
mySEARCHES=$(curl -s -XGET ''$myES'.kibana/search/_search?filter_path=hits.hits._id&pretty&size=10000' | jq '.hits.hits[] | {_id}' | jq -r '._id')
myCOL1=""
myCOL0=""
# Let's ensure normal operation on exit or if interrupted ...
function fuCLEANUP {
rm -rf patterns/ dashboards/ visualizations/ searches/
}
trap fuCLEANUP EXIT
# Export index patterns
mkdir -p patterns
echo $myCOL1"### Now exporting"$myCOL0 $myINDEXCOUNT $myCOL1"index patterns." $myCOL0
curl -s -XGET ''$myES'.kibana/index-pattern/logstash-*?' | jq '._source' > patterns/index-patterns.json
echo
# Export dashboards
mkdir -p dashboards
echo $myCOL1"### Now exporting"$myCOL0 $(echo $myDASHBOARDS | wc -w) $myCOL1"dashboards." $myCOL0
for i in $myDASHBOARDS;
do
echo $myCOL1"###### "$i $myCOL0
curl -s -XGET ''$myES'.kibana/dashboard/'$i'' | jq '._source' > dashboards/$i.json
done;
echo
# Export visualizations
mkdir -p visualizations
echo $myCOL1"### Now exporting"$myCOL0 $(echo $myVISUALIZATIONS | wc -w) $myCOL1"visualizations." $myCOL0
for i in $myVISUALIZATIONS;
do
echo $myCOL1"###### "$i $myCOL0
curl -s -XGET ''$myES'.kibana/visualization/'$i'' | jq '._source' > visualizations/$i.json
done;
echo
# Export searches
mkdir -p searches
echo $myCOL1"### Now exporting"$myCOL0 $(echo $mySEARCHES | wc -w) $myCOL1"searches." $myCOL0
for i in $mySEARCHES;
do
echo $myCOL1"###### "$i $myCOL0
curl -s -XGET ''$myES'.kibana/search/'$i'' | jq '._source' > searches/$i.json
done;
echo
# Building tar archive
echo $myCOL1"### Now building archive"$myCOL0 "kibana-objects_"$myDATE".tgz"
tar cvfz kibana-objects_$myDATE.tgz patterns dashboards visualizations searches > /dev/null
# Stats
echo
echo $myCOL1"### Statistics"
echo $myCOL1"###### Exported"$myCOL0 $myINDEXCOUNT $myCOL1"index patterns." $myCOL0
echo $myCOL1"###### Exported"$myCOL0 $(echo $myDASHBOARDS | wc -w) $myCOL1"dashboards." $myCOL0
echo $myCOL1"###### Exported"$myCOL0 $(echo $myVISUALIZATIONS | wc -w) $myCOL1"visualizations." $myCOL0
echo $myCOL1"###### Exported"$myCOL0 $(echo $mySEARCHES | wc -w) $myCOL1"searches." $myCOL0
echo

@@ -0,0 +1,91 @@
#!/bin/bash
# Import Kibana objects
# Make sure ES is available
myES="http://127.0.0.1:64298/"
myESSTATUS=$(curl -s -XGET ''$myES'_cluster/health' | jq '.' | grep -c green)
if ! [ "$myESSTATUS" = "1" ]
then
echo "### Elasticsearch is not available, try starting via 'systemctl start elk'."
exit
else
echo "### Elasticsearch is available, now continuing."
echo
fi
# Set vars
myDUMP=$1
myCOL1=""
myCOL0=""
# Let's ensure normal operation on exit or if interrupted ...
function fuCLEANUP {
rm -rf patterns/ dashboards/ visualizations/ searches/
}
trap fuCLEANUP EXIT
# Check if parameter is given and file exists
if [ "$myDUMP" = "" ];
then
echo $myCOL1"### Please provide a backup file name."$myCOL0
echo $myCOL1"### restore-kibana-objects.sh <kibana-objects.tgz>"$myCOL0
echo
exit
fi
if ! [ -a $myDUMP ];
then
echo $myCOL1"### File not found."$myCOL0
exit
fi
# Unpack tar
tar xvfz $myDUMP > /dev/null
# Restore index patterns
myINDEXCOUNT=$(cat patterns/index-patterns.json | tr '\\' '\n' | grep "scripted" | wc -w)
echo $myCOL1"### Now importing"$myCOL0 $myINDEXCOUNT $myCOL1"index patterns." $myCOL0
curl -s -XDELETE ''$myES'.kibana/index-pattern/logstash-*' > /dev/null
curl -s -XPUT ''$myES'.kibana/index-pattern/logstash-*' -T patterns/index-patterns.json > /dev/null
echo
# Restore dashboards
myDASHBOARDS=$(ls dashboards/*.json | cut -c 12- | rev | cut -c 6- | rev)
echo $myCOL1"### Now importing "$myCOL0$(echo $myDASHBOARDS | wc -w)$myCOL1 "dashboards." $myCOL0
for i in $myDASHBOARDS;
do
echo $myCOL1"###### "$i $myCOL0
curl -s -XDELETE ''$myES'.kibana/dashboard/'$i'' > /dev/null
curl -s -XPUT ''$myES'.kibana/dashboard/'$i'' -T dashboards/$i.json > /dev/null
done;
echo
# Restore visualizations
myVISUALIZATIONS=$(ls visualizations/*.json | cut -c 16- | rev | cut -c 6- | rev)
echo $myCOL1"### Now importing "$myCOL0$(echo $myVISUALIZATIONS | wc -w)$myCOL1 "visualizations." $myCOL0
for i in $myVISUALIZATIONS;
do
echo $myCOL1"###### "$i $myCOL0
curl -s -XDELETE ''$myES'.kibana/visualization/'$i'' > /dev/null
curl -s -XPUT ''$myES'.kibana/visualization/'$i'' -T visualizations/$i.json > /dev/null
done;
echo
# Restore searches
mySEARCHES=$(ls searches/*.json | cut -c 10- | rev | cut -c 6- | rev)
echo $myCOL1"### Now importing "$myCOL0$(echo $mySEARCHES | wc -w)$myCOL1 "searches." $myCOL0
for i in $mySEARCHES;
do
echo $myCOL1"###### "$i $myCOL0
curl -s -XDELETE ''$myES'.kibana/search/'$i'' > /dev/null
curl -s -XPUT ''$myES'.kibana/search/'$i'' -T searches/$i.json > /dev/null
done;
echo
# Stats
echo
echo $myCOL1"### Statistics"
echo $myCOL1"###### Imported"$myCOL0 $myINDEXCOUNT $myCOL1"index patterns." $myCOL0
echo $myCOL1"###### Imported"$myCOL0 $(echo $myDASHBOARDS | wc -w) $myCOL1"dashboards." $myCOL0
echo $myCOL1"###### Imported"$myCOL0 $(echo $myVISUALIZATIONS | wc -w) $myCOL1"visualizations." $myCOL0
echo $myCOL1"###### Imported"$myCOL0 $(echo $mySEARCHES | wc -w) $myCOL1"searches." $myCOL0
echo
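The import script recovers each object ID from its dump filename with a `cut`/`rev` dance: `cut -c 12-` strips the 11-character `dashboards/` prefix, and `rev | cut -c 6- | rev` strips the trailing `.json`. An offline check with a dummy export file (the dashboard name is made up):

```shell
#!/bin/bash
# Recover a Kibana object ID from its dumped filename, as the import script
# does: drop the "dashboards/" prefix (11 chars) and the ".json" suffix.
mkdir -p dashboards
touch dashboards/Cowrie-Dashboard.json           # dummy export file
myDASHBOARDS=$(ls dashboards/*.json | cut -c 12- | rev | cut -c 6- | rev)
echo "$myDASHBOARDS"                             # -> Cowrie-Dashboard
rm -rf dashboards                                # clean up the dummy folder
```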

@@ -10,13 +10,11 @@ dnslist=(
     "dig +short myip.opendns.com @resolver2.opendns.com"
     "dig +short myip.opendns.com @resolver3.opendns.com"
     "dig +short myip.opendns.com @resolver4.opendns.com"
-    "dig +short -t txt o-o.myaddr.l.google.com @ns1.google.com"
     "dig +short -4 -t a whoami.akamai.net @ns1-1.akamaitech.net"
     "dig +short whoami.akamai.net @ns1-1.akamaitech.net"
 )
 httplist=(
-    4.ifcfg.me
     alma.ch/myip.cgi
     api.infoip.io/ip
     api.ipify.org
@@ -32,13 +30,10 @@ httplist=(
     ip.tyk.nu
     l2.io/ip
     smart-ip.net/myip
-    tnx.nl/ip
     wgetip.com
     whatismyip.akamai.com
 )
 # function to shuffle the global array "array"
 shuffle() {
     local i tmp size max rand
@@ -51,7 +46,6 @@ shuffle() {
     done
 }
 # if we have dig and a list of dns methods, try that first
 if hash dig 2>/dev/null && [ ${#dnslist[*]} -gt 0 ]; then
     eval array=( \"\${dnslist[@]}\" )
@@ -67,9 +61,7 @@ if hash dig 2>/dev/null && [ ${#dnslist[*]} -gt 0 ]; then
     done
 fi
 # if we haven't succeeded with DNS, try HTTP
 if [ ${#httplist[*]} == 0 ]; then
     echo "No hosts in httplist array!" >&2
     exit 1
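myip.sh walks a shuffled resolver list and stops at the first answer, falling back from DNS to HTTP. The first-hit pattern in isolation (fuFIRSTHIT is a hypothetical name, and the commands below are stand-ins for the real `dig`/`curl` probes):

```shell
#!/bin/bash
# Try each command in turn and return the first non-empty answer, the same
# first-hit fallback myip.sh uses across its dns/http lists.
fuFIRSTHIT () {
  local myCMD myANSWER
  for myCMD in "$@"
  do
    myANSWER="$($myCMD 2>/dev/null || true)"   # a failing probe just moves on
    if [ -n "$myANSWER" ];
      then
        echo "$myANSWER"
        return 0
    fi
  done
  return 1
}

fuFIRSTHIT "false" "echo 203.0.113.7"   # -> 203.0.113.7 (first probe fails)
```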

installer/bin/restore_es.sh (new executable file, 61 lines)

@@ -0,0 +1,61 @@
#/bin/bash
# Restore folder based ES backup
# Make sure ES is available
myES="http://127.0.0.1:64298/"
myESSTATUS=$(curl -s -XGET ''$myES'_cluster/health' | jq '.' | grep -c green)
if ! [ "$myESSTATUS" = "1" ]
then
echo "### Elasticsearch is not available, try starting via 'systemctl start elk'."
exit
else
echo "### Elasticsearch is available, now continuing."
fi
# Let's ensure normal operation on exit or if interrupted ...
function fuCLEANUP {
rm -rf tmp
}
trap fuCLEANUP EXIT
# Set vars
myDUMP=$1
myCOL1=""
myCOL0=""
# Check if parameter is given and file exists
if [ "$myDUMP" = "" ];
then
echo $myCOL1"### Please provide a backup file name."$myCOL0
echo $myCOL1"### restore-elk.sh <es_dump.tar>"$myCOL0
echo
exit
fi
if ! [ -a $myDUMP ];
then
echo $myCOL1"### File not found."$myCOL0
exit
fi
# Unpack tar archive
echo $myCOL1"### Now unpacking tar archive: "$myDUMP $myCOL0
tar xvf $myDUMP
# Build indices list
myINDICES=$(ls tmp/logstash*.gz | cut -c 5- | rev | cut -c 4- | rev)
echo $myCOL1"### The following indices will be restored: "$myCOL0
echo $myINDICES
echo
# Restore indices
for i in $myINDICES;
do
# Delete index if it already exists
curl -s -XDELETE $myES$i > /dev/null
echo $myCOL1"### Now uncompressing: tmp/$i.gz" $myCOL0
gunzip -f tmp/$i.gz
# Restore index to ES
echo $myCOL1"### Now restoring: "$i $myCOL0
elasticdump --input=tmp/$i --output=$myES$i --limit 7500
rm tmp/$i
done;
echo $myCOL1"### Done."$myCOL0
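restore_es.sh derives the index names from the dump filenames the same way: `cut -c 5-` drops the `tmp/` prefix and `rev | cut -c 4- | rev` the `.gz` suffix. An offline check with a dummy dump file:

```shell
#!/bin/bash
# Derive restorable index names from dump filenames, as restore_es.sh does:
# strip the "tmp/" prefix (4 chars) and the ".gz" suffix.
mkdir -p tmp
touch tmp/logstash-2017.09.20.gz                 # dummy dump file
myINDICES=$(ls tmp/logstash*.gz | cut -c 5- | rev | cut -c 4- | rev)
echo "$myINDICES"                                # -> logstash-2017.09.20
rm -rf tmp                                       # clean up the dummy folder
```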

@@ -1,51 +0,0 @@
#!/bin/bash
########################################################
# T-Pot #
# Container and services status script #
# #
# v16.10.0 by mo, DTAG, 2016-05-12 #
########################################################
myCOUNT=1
if [[ $1 == "" ]]
then
myIMAGES=$(cat /data/images.conf)
else myIMAGES=$1
fi
while true
do
if ! [ -a /var/run/check.lock ];
then break
fi
sleep 0.1
if [ $myCOUNT = 1 ];
then
echo -n "Waiting for services "
else echo -n .
fi
if [ $myCOUNT = 300 ];
then
echo
echo "Services are busy or not available. Please retry later."
exit 1
fi
myCOUNT=$[$myCOUNT +1]
done
echo
echo
echo "======| System |======"
echo Date:" "$(date)
echo Uptime:" "$(uptime)
echo CPU temp: $(sensors | grep "Physical" | awk '{ print $4 }')
echo
for i in $myIMAGES
do
if [ "$i" != "ui-for-docker" ] && [ "$i" != "netdata" ];
then
echo "======| Container:" $i "|======"
docker exec $i supervisorctl status | GREP_COLORS='mt=01;32' egrep --color=always "(RUNNING)|$" | GREP_COLORS='mt=01;31' egrep --color=always "(STOPPED|FATAL)|$"
echo
fi
done

@@ -1,66 +0,0 @@
#!/bin/bash
########################################################
# T-Pot #
# Only start the containers found in /etc/init/ #
# #
# v16.10.0 by mo, DTAG, 2016-05-12 #
########################################################
# Make sure not to interrupt a check
while true
do
if ! [ -a /var/run/check.lock ];
then break
fi
sleep 0.1
if [ "$myCOUNT" = "1" ];
then
echo -n "Waiting for services "
else echo -n .
fi
if [ "$myCOUNT" = "6000" ];
then
echo
echo "Overriding check.lock"
rm /var/run/check.lock
break
fi
myCOUNT=$[$myCOUNT +1]
done
# We do not want to get interrupted by a check
touch /var/run/check.lock
# Stop T-Pot services and disable all T-Pot services
echo "### Stopping T-Pot services and cleaning up."
for i in $(cat /data/imgcfg/all_images.conf);
do
systemctl stop $i
sleep 2
systemctl disable $i;
done
# Restarting docker services
echo "### Restarting docker services ..."
systemctl stop docker
sleep 2
systemctl start docker
sleep 2
# Enable only T-Pot upstart scripts from images.conf and pull the images
for i in $(cat /data/images.conf);
do
docker pull dtagdevsec/$i:latest1610;
systemctl enable $i;
done
# Announce reboot
echo "### Rebooting in 60 seconds for the changes to take effect."
sleep 60
# Allow checks to resume
rm /var/run/check.lock
# Reboot
reboot

installer/bin/updateip.sh (new executable file, 24 lines)

@@ -0,0 +1,24 @@
#!/bin/bash
# Let's add the first local ip to the /etc/issue and external ip to ews.ip file
# If the external IP cannot be detected, the internal IP will be inherited.
source /etc/environment
myLOCALIP=$(hostname -I | awk '{ print $1 }')
myEXTIP=$(/usr/share/tpot/bin/myip.sh)
if [ "$myEXTIP" = "" ];
then
myEXTIP=$myLOCALIP
fi
sed -i "s#IP:.*#IP: $myLOCALIP ($myEXTIP)#" /etc/issue
sed -i "s#SSH:.*#SSH: ssh -l tsec -p 64295 $myLOCALIP#" /etc/issue
sed -i "s#WEB:.*#WEB: https://$myLOCALIP:64297#" /etc/issue
tee /data/ews/conf/ews.ip << EOF
[MAIN]
ip = $myEXTIP
EOF
tee /etc/tpot/elk/environment << EOF
MY_EXTIP=$myEXTIP
MY_INTIP=$myLOCALIP
MY_HOSTNAME=$HOSTNAME
EOF
chown tpot:tpot /data/ews/conf/ews.ip
chmod 760 /data/ews/conf/ews.ip

Binary file not shown.

@@ -1,83 +0,0 @@
[MAIN]
homedir = /opt/ewsposter/
spooldir = /opt/ewsposter/spool/
logdir = /opt/ewsposter/log/
del_malware_after_send = false
send_malware = true
sendlimit = 400
contact = your_email_address
proxy =
ip =
[EWS]
ews = true
username = community-01-user
token = foth{a5maiCee8fineu7
rhost_first = https://community.sicherheitstacho.eu/ews-0.1/alert/postSimpleMessage
rhost_second = https://community.sicherheitstacho.eu/ews-0.1/alert/postSimpleMessage
ignorecert = false
[HPFEED]
hpfeed = false
host = 0.0.0.0
port = 0
channels = 0
ident = 0
secret= 0
[EWSJSON]
json = false
jsondir = /data/ews/
[GLASTOPFV3]
glastopfv3 = true
nodeid = glastopfv3-community-01
sqlitedb = /data/glastopf/db/glastopf.db
malwaredir = /data/glastopf/data/files/
[GLASTOPFV2]
glastopfv2 = false
nodeid =
mysqlhost =
mysqldb =
mysqluser =
mysqlpw =
malwaredir =
[KIPPO]
kippo = true
nodeid = kippo-community-01
mysqlhost = localhost
mysqldb = cowrie
mysqluser = cowrie
mysqlpw = s0m3Secr3T!
malwaredir = /data/cowrie/downloads/
[DIONAEA]
dionaea = true
nodeid = dionaea-community-01
malwaredir = /data/dionaea/binaries/
sqlitedb = /data/dionaea/log/dionaea.sqlite
[HONEYTRAP]
honeytrap = true
nodeid = honeytrap-community-01
newversion = true
payloaddir = /data/honeytrap/attacks/
attackerfile = /data/honeytrap/log/attacker.log
[RDPDETECT]
rdpdetect = false
nodeid =
iptableslog =
targetip =
[EMOBILITY]
eMobility = true
nodeid = emobility-community-01
logfile = /data/eMobility/log/centralsystemEWS.log
[CONPOT]
conpot = true
nodeid = conpot-community-01
logfile = /data/conpot/log/conpot.json

@@ -1,11 +0,0 @@
conpot
cowrie
dionaea
elasticpot
elk
emobility
glastopf
honeytrap
suricata
netdata
ui-for-docker

@@ -1,5 +0,0 @@
cowrie
dionaea
elasticpot
glastopf
honeytrap

@@ -1,6 +0,0 @@
conpot
elk
emobility
suricata
netdata
ui-for-docker

@@ -1,9 +0,0 @@
cowrie
dionaea
elasticpot
elk
glastopf
honeytrap
suricata
netdata
ui-for-docker

@@ -1,15 +0,0 @@
[Unit]
Description=conpot
Requires=docker.service
After=docker.service
[Service]
Restart=always
ExecStartPre=-/usr/bin/docker stop conpot
ExecStartPre=-/usr/bin/docker rm -v conpot
ExecStartPre=/bin/bash -c '/usr/bin/clean.sh conpot off'
ExecStart=/usr/bin/docker run --name conpot --rm=true -v /data/conpot:/data/conpot -v /data/ews:/data/ews -p 1025:1025 -p 50100:50100 dtagdevsec/conpot:latest1610
ExecStop=/usr/bin/docker stop conpot
[Install]
WantedBy=multi-user.target

@@ -1,15 +0,0 @@
[Unit]
Description=cowrie
Requires=docker.service
After=docker.service
[Service]
Restart=always
ExecStartPre=-/usr/bin/docker stop cowrie
ExecStartPre=-/usr/bin/docker rm -v cowrie
ExecStartPre=/bin/bash -c '/usr/bin/clean.sh cowrie off'
ExecStart=/usr/bin/docker run --name cowrie --rm=true -p 22:2222 -p 23:2223 -v /data/cowrie:/data/cowrie -v /data/ews:/data/ews dtagdevsec/cowrie:latest1610
ExecStop=/usr/bin/docker stop cowrie
[Install]
WantedBy=multi-user.target

@@ -1,15 +0,0 @@
[Unit]
Description=dionaea
Requires=docker.service
After=docker.service
[Service]
Restart=always
ExecStartPre=-/usr/bin/docker stop dionaea
ExecStartPre=-/usr/bin/docker rm -v dionaea
ExecStartPre=/bin/bash -c '/usr/bin/clean.sh dionaea off'
ExecStart=/usr/bin/docker run --name dionaea --cap-add=NET_BIND_SERVICE --rm=true -p 21:21 -p 42:42 -p 69:69/udp -p 8081:80 -p 135:135 -p 443:443 -p 445:445 -p 1433:1433 -p 1723:1723 -p 1883:1883 -p 1900:1900 -p 3306:3306 -p 5060:5060 -p 5061:5061 -p 5060:5060/udp -p 11211:11211 -v /data/dionaea:/data/dionaea -v /data/ews:/data/ews dtagdevsec/dionaea:latest1610
ExecStop=/usr/bin/docker stop dionaea
[Install]
WantedBy=multi-user.target

@@ -1,15 +0,0 @@
[Unit]
Description=elasticpot
Requires=docker.service
After=docker.service
[Service]
Restart=always
ExecStartPre=-/usr/bin/docker stop elasticpot
ExecStartPre=-/usr/bin/docker rm -v elasticpot
ExecStartPre=/bin/bash -c '/usr/bin/clean.sh elasticpot off'
ExecStart=/usr/bin/docker run --name elasticpot --rm=true -v /data/elasticpot:/data/elasticpot -v /data/ews:/data/ews -p 9200:9200 dtagdevsec/elasticpot:latest1610
ExecStop=/usr/bin/docker stop elasticpot
[Install]
WantedBy=multi-user.target

@@ -1,15 +0,0 @@
[Unit]
Description=elk
Requires=docker.service
After=docker.service
[Service]
Restart=always
ExecStartPre=-/usr/bin/docker stop elk
ExecStartPre=-/usr/bin/docker rm -v elk
ExecStartPre=/bin/bash -c '/usr/bin/clean.sh elk'
ExecStart=/usr/bin/docker run --name=elk -v /data:/data -v /var/log:/data/host/log -p 127.0.0.1:64296:5601 -p 127.0.0.1:64298:9200 --rm=true dtagdevsec/elk:latest1610
ExecStop=/usr/bin/docker stop elk
[Install]
WantedBy=multi-user.target

@@ -1,15 +0,0 @@
[Unit]
Description=emobility
Requires=docker.service
After=docker.service
[Service]
Restart=always
ExecStartPre=-/usr/bin/docker stop emobility
ExecStartPre=-/usr/bin/docker rm -v emobility
ExecStartPre=/bin/bash -c '/usr/bin/clean.sh emobility off'
ExecStart=/usr/bin/docker run --name emobility --cap-add=NET_ADMIN -p 8080:8080 -v /data/emobility:/data/eMobility -v /data/ews:/data/ews --rm=true dtagdevsec/emobility:latest1610
ExecStop=/usr/bin/docker stop emobility
[Install]
WantedBy=multi-user.target

@@ -1,15 +0,0 @@
[Unit]
Description=glastopf
Requires=docker.service
After=docker.service
[Service]
Restart=always
ExecStartPre=-/usr/bin/docker stop glastopf
ExecStartPre=-/usr/bin/docker rm -v glastopf
ExecStartPre=/bin/bash -c '/usr/bin/clean.sh glastopf off'
ExecStart=/usr/bin/docker run --name glastopf --rm=true -v /data/glastopf:/data/glastopf -v /data/ews:/data/ews -p 80:80 dtagdevsec/glastopf:latest1610
ExecStop=/usr/bin/docker stop glastopf
[Install]
WantedBy=multi-user.target

@@ -1,23 +0,0 @@
[Unit]
Description=honeytrap
Requires=docker.service
After=docker.service
[Service]
Restart=always
ExecStartPre=-/usr/bin/docker stop honeytrap
ExecStartPre=-/usr/bin/docker rm -v honeytrap
ExecStartPre=/bin/bash -c '/usr/bin/clean.sh honeytrap off'
ExecStartPre=/sbin/iptables -w -A INPUT -p tcp --syn -m state --state NEW -m multiport ! --dports 21,22,23,42,69,80,135,443,445,1433,1723,1883,1900 -j NFQUEUE
ExecStartPre=/sbin/iptables -w -A INPUT -p tcp --syn -m state --state NEW -m multiport ! --dports 3306,5060,5061,5601,11211 -j NFQUEUE
ExecStartPre=/sbin/iptables -w -A INPUT -p tcp --syn -m state --state NEW -m multiport ! --dports 64295,64296,64297,64298,64299,64300,64301 -j NFQUEUE
ExecStartPre=/sbin/iptables -w -A INPUT -p tcp --syn -m state --state NEW -m multiport ! --dports 1025,50100,8080,8081,9200 -j NFQUEUE
ExecStart=/usr/bin/docker run --name honeytrap --cap-add=NET_ADMIN --net=host --rm=true -v /data/honeytrap:/data/honeytrap -v /data/ews:/data/ews dtagdevsec/honeytrap:latest1610
ExecStop=/usr/bin/docker stop honeytrap
ExecStopPost=/sbin/iptables -w -D INPUT -p tcp --syn -m state --state NEW -m multiport ! --dports 1025,50100,8080,8081,9200 -j NFQUEUE
ExecStopPost=/sbin/iptables -w -D INPUT -p tcp --syn -m state --state NEW -m multiport ! --dports 64295,64296,64297,64298,64299,64300,64301 -j NFQUEUE
ExecStopPost=/sbin/iptables -w -D INPUT -p tcp --syn -m state --state NEW -m multiport ! --dports 3306,5060,5061,5601,11211 -j NFQUEUE
ExecStopPost=/sbin/iptables -w -D INPUT -p tcp --syn -m state --state NEW -m multiport ! --dports 21,22,23,42,69,80,135,443,445,1433,1723,1883,1900 -j NFQUEUE
[Install]
WantedBy=multi-user.target
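The four stacked NFQUEUE rules in this unit exist because iptables' multiport match accepts at most 15 ports per rule, so the exclusion list is chunked. A hedged sketch of that chunking (the `emit_nfqueue_rules` helper is illustrative and only prints the commands; note that negating each subset in a separate rule is not logically equivalent to negating one combined set — honeytrap itself decides what happens to queued packets):

```shell
#!/bin/bash
# Sketch: split a long port list into iptables multiport rules of at most
# 15 ports each, mirroring the chunking in the honeytrap unit file.
emit_nfqueue_rules() {
  local -a ports=("$@")
  local -a chunk=()
  local p
  for p in "${ports[@]}"; do
    chunk+=("$p")
    if [ "${#chunk[@]}" -eq 15 ]; then
      echo "iptables -w -A INPUT -p tcp --syn -m state --state NEW -m multiport ! --dports $(IFS=,; echo "${chunk[*]}") -j NFQUEUE"
      chunk=()
    fi
  done
  if [ "${#chunk[@]}" -gt 0 ]; then
    echo "iptables -w -A INPUT -p tcp --syn -m state --state NEW -m multiport ! --dports $(IFS=,; echo "${chunk[*]}") -j NFQUEUE"
  fi
}
emit_nfqueue_rules 21 22 23 42 69 80 135 443 445 1433 1723 1883 1900 3306 5060 5061 5601 11211
```

For 18 excluded ports this prints two rules (15 + 3), matching the split seen in the unit file above.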

@@ -1,15 +0,0 @@
[Unit]
Description=netdata
Requires=docker.service
After=docker.service
[Service]
Restart=always
ExecStartPre=-/usr/bin/docker stop netdata
ExecStartPre=-/usr/bin/docker rm -v netdata
ExecStartPre=-/bin/chmod 666 /var/run/docker.sock
ExecStart=/usr/bin/docker run --name netdata --net=host --cap-add=SYS_PTRACE --rm=true -v /proc:/host/proc:ro -v /sys:/host/sys:ro -v /var/run/docker.sock:/var/run/docker.sock dtagdevsec/netdata:latest1610
ExecStop=/usr/bin/docker stop netdata
[Install]
WantedBy=multi-user.target

@@ -1,19 +0,0 @@
[Unit]
Description=suricata
Requires=docker.service
After=docker.service
[Service]
Restart=always
ExecStartPre=-/usr/bin/docker stop suricata
ExecStartPre=-/usr/bin/docker rm -v suricata
# Get IF, disable offloading, enable promiscuous mode
ExecStartPre=/bin/bash -c '/sbin/ethtool --offload $(/sbin/ip route | /bin/grep $(/bin/hostname -I | /usr/bin/awk \'{print $1 }\') | /usr/bin/awk \'{print $3 }\') rx off tx off'
ExecStartPre=/bin/bash -c '/sbin/ethtool -K $(/sbin/ip route | /bin/grep $(/bin/hostname -I | /usr/bin/awk \'{print $1 }\') | /usr/bin/awk \'{print $3 }\') gso off gro off'
ExecStartPre=/bin/bash -c '/sbin/ip link set $(/sbin/ip route | /bin/grep $(/bin/hostname -I | /usr/bin/awk \'{print $1 }\') | /usr/bin/awk \'{print $3 }\') promisc on'
ExecStartPre=/bin/bash -c '/usr/bin/clean.sh suricata off'
ExecStart=/usr/bin/docker run --name suricata --cap-add=NET_ADMIN --net=host --rm=true -v /data/suricata:/data/suricata dtagdevsec/suricata:latest1610
ExecStop=/usr/bin/docker stop suricata
[Install]
WantedBy=multi-user.target
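The repeated `$( ... )` pipeline in this unit resolves the capture interface by grepping the routing table for the host's primary IP and taking the third field of the matching "<net> dev <if> ... src <ip>" line. A minimal sketch of that lookup against canned `ip route` output (the sample routes and the `iface_for_ip` helper are illustrative; the approach can misfire if the IP string matches more than one route line):

```shell
#!/bin/bash
# Sketch: derive the capture interface the way the suricata unit does,
# but from a fixed routing-table snippet instead of the live system.
iface_for_ip() {
  local routes="$1" ip="$2"
  printf '%s\n' "$routes" | grep "$ip" | awk '{ print $3 }'
}
myROUTES='default via 192.168.1.1 dev eth0
192.168.1.0/24 dev eth0 proto kernel scope link src 192.168.1.100'
iface_for_ip "$myROUTES" 192.168.1.100
```

On the sample data this prints `eth0`, the interface then handed to ethtool and `ip link set ... promisc on`.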

@@ -1,14 +0,0 @@
[Unit]
Description=ui-for-docker
Requires=docker.service
After=docker.service
[Service]
Restart=always
ExecStartPre=-/usr/bin/docker stop ui-for-docker
ExecStartPre=-/usr/bin/docker rm -v ui-for-docker
ExecStart=/usr/bin/docker run --name ui-for-docker --rm=true -v /var/run/docker.sock:/var/run/docker.sock -p 127.0.0.1:64299:9000 dtagdevsec/ui-for-docker:latest1610
ExecStop=/usr/bin/docker stop ui-for-docker
[Install]
WantedBy=multi-user.target

installer/etc/dialogrc Normal file
@@ -0,0 +1,144 @@
#
# Run-time configuration file for dialog
#
# Automatically generated by "dialog --create-rc <file>"
#
#
# Types of values:
#
# Number - <number>
# String - "string"
# Boolean - <ON|OFF>
# Attribute - (foreground,background,highlight?)
# Set aspect-ratio.
aspect = 0
# Set separator (for multiple widgets output).
separate_widget = ""
# Set tab-length (for textbox tab-conversion).
tab_len = 0
# Make tab-traversal for checklist, etc., include the list.
visit_items = OFF
# Shadow dialog boxes? This also turns on color.
use_shadow = ON
# Turn color support ON or OFF
use_colors = ON
# Screen color
screen_color = (WHITE,MAGENTA,ON)
# Shadow color
shadow_color = (BLACK,BLACK,ON)
# Dialog box color
dialog_color = (BLACK,WHITE,OFF)
# Dialog box title color
title_color = (MAGENTA,WHITE,OFF)
# Dialog box border color
border_color = (WHITE,WHITE,ON)
# Active button color
button_active_color = (WHITE,MAGENTA,OFF)
# Inactive button color
button_inactive_color = dialog_color
# Active button key color
button_key_active_color = button_active_color
# Inactive button key color
button_key_inactive_color = (RED,WHITE,OFF)
# Active button label color
button_label_active_color = (YELLOW,MAGENTA,ON)
# Inactive button label color
button_label_inactive_color = (BLACK,WHITE,OFF)
# Input box color
inputbox_color = dialog_color
# Input box border color
inputbox_border_color = dialog_color
# Search box color
searchbox_color = dialog_color
# Search box title color
searchbox_title_color = title_color
# Search box border color
searchbox_border_color = border_color
# File position indicator color
position_indicator_color = title_color
# Menu box color
menubox_color = dialog_color
# Menu box border color
menubox_border_color = border_color
# Item color
item_color = dialog_color
# Selected item color
item_selected_color = button_active_color
# Tag color
tag_color = title_color
# Selected tag color
tag_selected_color = button_label_active_color
# Tag key color
tag_key_color = button_key_inactive_color
# Selected tag key color
tag_key_selected_color = (RED,MAGENTA,ON)
# Check box color
check_color = dialog_color
# Selected check box color
check_selected_color = button_active_color
# Up arrow color
uarrow_color = (MAGENTA,WHITE,ON)
# Down arrow color
darrow_color = uarrow_color
# Item help-text color
itemhelp_color = (WHITE,BLACK,OFF)
# Active form text color
form_active_text_color = button_active_color
# Form text color
form_text_color = (WHITE,CYAN,ON)
# Readonly form item color
form_item_readonly_color = (CYAN,WHITE,ON)
# Dialog box gauge color
gauge_color = title_color
# Dialog box border2 color
border2_color = dialog_color
# Input box border2 color
inputbox_border2_color = dialog_color
# Search box border2 color
searchbox_border2_color = dialog_color
# Menu box border2 color
menubox_border2_color = dialog_color

@@ -1,16 +1,20 @@
(Flattened two-column diff; the side-by-side layout of the /etc/issue banner did not survive extraction. Old banner: a "T-Pot 16.10" figlet logo, a "Hostname: \n" line, and bare "IP:", "SSH:" and "WEB:" lines. New banner: a boxed "T-Pot 17.10" figlet logo, followed by a ",---- [ \n ] [ \d ] [ \t ]" header and the "| IP:", "| SSH:" and "| WEB:" lines closed by "`----".)

@@ -104,24 +104,22 @@ server {
     ### Kibana
     location /kibana/ {
       proxy_pass http://localhost:64296;
-      proxy_http_version 1.1;
-      proxy_set_header Upgrade $http_upgrade;
-      proxy_set_header Connection "upgrade";
-      proxy_set_header Host $host;
       rewrite /kibana/(.*)$ /$1 break;
     }
-    ### Head plugin
+    ### ES
+    location /es/ {
+      proxy_pass http://localhost:64298/;
+      rewrite /es/(.*)$ /$1 break;
+    }
+    ### head standalone
     location /myhead/ {
-      proxy_pass http://localhost:64298/;
-      proxy_http_version 1.1;
-      proxy_set_header Upgrade $http_upgrade;
-      proxy_set_header Connection "upgrade";
-      proxy_set_header Host $host;
+      proxy_pass http://localhost:64302/;
       rewrite /myhead/(.*)$ /$1 break;
     }
-    ### ui-for-docker
+    ### portainer
     location /ui {
       proxy_pass http://127.0.0.1:64299;
       proxy_http_version 1.1;
@@ -134,28 +132,24 @@ server {
     ### web tty
     location /wetty {
       proxy_pass http://127.0.0.1:64300/wetty;
-      proxy_http_version 1.1;
-      proxy_set_header Upgrade $http_upgrade;
-      proxy_set_header Connection "upgrade";
-      proxy_read_timeout 43200000;
-      proxy_set_header X-Real-IP $remote_addr;
-      proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
-      proxy_set_header Host $http_host;
-      proxy_set_header X-NginX-Proxy true;
     }
     ### netdata
     location /netdata/ {
-      proxy_set_header X-Forwarded-Host $host;
-      proxy_set_header X-Forwarded-Server $host;
-      proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
       proxy_pass http://localhost:64301;
-      proxy_http_version 1.1;
-      proxy_pass_request_headers on;
-      proxy_set_header Connection "keep-alive";
-      proxy_store off;
       rewrite /netdata/(.*)$ /$1 break;
     }
+    ### spiderfoot
+    location /spiderfoot {
+      proxy_pass http://127.0.0.1:64303;
+    }
+    location /static {
+      proxy_pass http://127.0.0.1:64303/spiderfoot/static;
+    }
+    location /scanviz {
+      proxy_pass http://127.0.0.1:64303/spiderfoot/scanviz;
+    }
 }

@@ -1,17 +1,2 @@
 #!/bin/bash
-# Let's add the first local ip to the /etc/issue and external ip to ews.ip file
-source /etc/environment
-myLOCALIP=$(hostname -I | awk '{ print $1 }')
-myEXTIP=$(/usr/bin/myip.sh)
-sed -i "s#IP:.*#IP: $myLOCALIP ($myEXTIP)#" /etc/issue
-sed -i "s#SSH:.*#SSH: ssh -l tsec -p 64295 $myLOCALIP#" /etc/issue
-sed -i "s#WEB:.*#WEB: https://$myLOCALIP:64297#" /etc/issue
-tee /data/ews/conf/ews.ip << EOF
-[MAIN]
-ip = $myEXTIP
-EOF
-echo $myLOCALIP > /data/elk/logstash/mylocal.ip
-chown tpot:tpot /data/ews/conf/ews.ip
-if [ -f /var/run/check.lock ];
-  then rm /var/run/check.lock
-fi
+exit 0
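The removed script's job was to patch the IP/SSH/WEB lines of /etc/issue in place (the logic now lives in updateip.sh under /usr/share/tpot/bin). A sketch of those sed substitutions run against a temp file instead of /etc/issue (the `update_issue` helper and the sample addresses are illustrative; GNU sed's `-i` is assumed):

```shell
#!/bin/bash
# Sketch: the in-place substitutions the old updateip.sh applied.
update_issue() {
  local file="$1" lip="$2" eip="$3"
  sed -i "s#IP:.*#IP: $lip ($eip)#" "$file"
  sed -i "s#SSH:.*#SSH: ssh -l tsec -p 64295 $lip#" "$file"
  sed -i "s#WEB:.*#WEB: https://$lip:64297#" "$file"
}
myISSUE="$(mktemp)"
printf 'IP:\nSSH:\nWEB:\n' > "$myISSUE"
update_issue "$myISSUE" 192.168.1.100 203.0.113.7
cat "$myISSUE"
rm -f "$myISSUE"
```

Because each pattern matches from the keyword to end of line, re-running the script simply overwrites the previous values.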

@@ -0,0 +1,313 @@
# T-Pot (Everything)
# For docker-compose ...
version: '2.1'
networks:
conpot_local:
cowrie_local:
dionaea_local:
elasticpot_local:
emobility_local:
ewsposter_local:
glastopf_local:
mailoney_local:
rdpy_local:
spiderfoot_local:
ui-for-docker_local:
vnclowpot_local:
services:
# Conpot service
conpot:
container_name: conpot
restart: always
networks:
- conpot_local
ports:
- "1025:1025"
- "50100:50100"
image: "dtagdevsec/conpot:1710"
volumes:
- /data/conpot/log:/var/log/conpot
# Cowrie service
cowrie:
container_name: cowrie
restart: always
networks:
- cowrie_local
cap_add:
- NET_BIND_SERVICE
ports:
- "22:2222"
- "23:2223"
image: "dtagdevsec/cowrie:1710"
volumes:
- /data/cowrie/downloads:/home/cowrie/cowrie/dl
- /data/cowrie/keys:/home/cowrie/cowrie/etc
- /data/cowrie/log:/home/cowrie/cowrie/log
- /data/cowrie/log/tty:/home/cowrie/cowrie/log/tty
# Dionaea service
dionaea:
container_name: dionaea
stdin_open: true
restart: always
networks:
- dionaea_local
cap_add:
- NET_BIND_SERVICE
ports:
- "20:20"
- "21:21"
- "42:42"
- "69:69/udp"
- "8081:80"
- "135:135"
- "443:443"
- "445:445"
- "1433:1433"
- "1723:1723"
- "1883:1883"
- "1900:1900/udp"
- "3306:3306"
- "5060:5060"
- "5060:5060/udp"
- "5061:5061"
- "27017:27017"
image: "dtagdevsec/dionaea:1710"
volumes:
- /data/dionaea/roots/ftp:/opt/dionaea/var/dionaea/roots/ftp
- /data/dionaea/roots/tftp:/opt/dionaea/var/dionaea/roots/tftp
- /data/dionaea/roots/www:/opt/dionaea/var/dionaea/roots/www
- /data/dionaea/roots/upnp:/opt/dionaea/var/dionaea/roots/upnp
- /data/dionaea:/opt/dionaea/var/dionaea
- /data/dionaea/binaries:/opt/dionaea/var/dionaea/binaries
- /data/dionaea/log:/opt/dionaea/var/log
- /data/dionaea/rtp:/opt/dionaea/var/dionaea/rtp
# Elasticpot service
elasticpot:
container_name: elasticpot
restart: always
networks:
- elasticpot_local
ports:
- "9200:9200"
image: "dtagdevsec/elasticpot:1710"
volumes:
- /data/elasticpot/log:/opt/ElasticpotPY/log
# ELK services
## Elasticsearch service
elasticsearch:
container_name: elasticsearch
restart: always
environment:
- bootstrap.memory_lock=true
# - "ES_JAVA_OPTS=-Xms1g -Xmx1g"
cap_add:
- IPC_LOCK
ulimits:
memlock:
soft: -1
hard: -1
nofile:
soft: 65536
hard: 65536
# mem_limit: 2g
ports:
- "127.0.0.1:64298:9200"
image: "dtagdevsec/elasticsearch:1710"
volumes:
- /data:/data
## Kibana service
kibana:
container_name: kibana
restart: always
depends_on:
elasticsearch:
condition: service_healthy
ports:
- "127.0.0.1:64296:5601"
image: "dtagdevsec/kibana:1710"
## Logstash service
logstash:
container_name: logstash
restart: always
depends_on:
elasticsearch:
condition: service_healthy
env_file:
- /etc/tpot/elk/environment
image: "dtagdevsec/logstash:1710"
volumes:
- /data:/data
- /var/log:/data/host/log
## Elasticsearch-head service
head:
container_name: head
restart: always
depends_on:
elasticsearch:
condition: service_healthy
ports:
- "127.0.0.1:64302:9100"
image: "dtagdevsec/head:1710"
# Emobility service
emobility:
container_name: emobility
restart: always
networks:
- emobility_local
cap_add:
- NET_ADMIN
ports:
- "8080:8080"
image: "dtagdevsec/emobility:1710"
volumes:
- /data/emobility:/data/eMobility
- /data/ews:/data/ews
# Ewsposter service
ewsposter:
container_name: ewsposter
restart: always
networks:
- ewsposter_local
image: "dtagdevsec/ewsposter:1710"
volumes:
- /data:/data
- /data/ews/conf/ews.ip:/opt/ewsposter/ews.ip
# Glastopf service
glastopf:
container_name: glastopf
restart: always
networks:
- glastopf_local
ports:
- "80:80"
image: "dtagdevsec/glastopf:1710"
volumes:
- /data/glastopf/db:/opt/glastopf/db
- /data/glastopf/log:/opt/glastopf/log
# Honeytrap service
honeytrap:
container_name: honeytrap
restart: always
network_mode: "host"
cap_add:
- NET_ADMIN
image: "dtagdevsec/honeytrap:1710"
volumes:
- /data/honeytrap/attacks:/opt/honeytrap/var/attacks
- /data/honeytrap/downloads:/opt/honeytrap/var/downloads
- /data/honeytrap/log:/opt/honeytrap/var/log
# Mailoney service
mailoney:
container_name: mailoney
restart: always
networks:
- mailoney_local
ports:
- "25:2525"
image: "dtagdevsec/mailoney:1710"
volumes:
- /data/mailoney/log:/opt/mailoney/logs
# Netdata service
netdata:
container_name: netdata
restart: always
network_mode: "host"
depends_on:
elasticsearch:
condition: service_healthy
cap_add:
- SYS_PTRACE
security_opt:
- apparmor=unconfined
image: "dtagdevsec/netdata:1710"
volumes:
- /proc:/host/proc:ro
- /sys:/host/sys:ro
- /var/run/docker.sock:/var/run/docker.sock
# Rdpy service
rdpy:
container_name: rdpy
restart: always
networks:
- rdpy_local
ports:
- "3389:3389"
image: "dtagdevsec/rdpy:1710"
volumes:
- /data/rdpy/log:/var/log/rdpy
# Spiderfoot service
spiderfoot:
container_name: spiderfoot
restart: always
networks:
- spiderfoot_local
ports:
- "127.0.0.1:64303:8080"
image: "dtagdevsec/spiderfoot:1710"
volumes:
- /data/spiderfoot/spiderfoot.db:/home/spiderfoot/spiderfoot.db
# Ui-for-docker service
ui-for-docker:
container_name: ui-for-docker
command: -H unix:///var/run/docker.sock --no-auth
restart: always
networks:
- ui-for-docker_local
ports:
- "127.0.0.1:64299:9000"
image: "dtagdevsec/ui-for-docker:1710"
volumes:
- /var/run/docker.sock:/var/run/docker.sock
# Suricata service
suricata:
container_name: suricata
restart: always
network_mode: "host"
cap_add:
- NET_ADMIN
- SYS_NICE
- NET_RAW
image: "dtagdevsec/suricata:1710"
volumes:
- /data/suricata/log:/var/log/suricata
# P0f service
p0f:
container_name: p0f
restart: always
network_mode: "host"
image: "dtagdevsec/p0f:1710"
volumes:
- /data/p0f/log:/var/log/p0f
# Vnclowpot service
vnclowpot:
container_name: vnclowpot
restart: always
networks:
- vnclowpot_local
ports:
- "5900:5900"
image: "dtagdevsec/vnclowpot:1710"
volumes:
- /data/vnclowpot/log:/var/log/vnclowpot
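Several services above declare `depends_on: ... condition: service_healthy` on elasticsearch, so Compose starts them only once the Elasticsearch container's healthcheck passes. A rough stand-alone equivalent as a wait loop (the `wait_for_es` helper is illustrative; the URL matches the ES port published in this file):

```shell
#!/bin/bash
# Sketch: poll Elasticsearch until it answers, roughly what the
# service_healthy condition waits for before starting kibana/logstash/head.
wait_for_es() {
  local url="$1" tries="${2:-30}"
  local i
  for i in $(seq "$tries"); do
    if curl -sf "$url/_cluster/health" > /dev/null; then
      echo "up"
      return 0
    fi
    sleep 2
  done
  echo "down"
  return 1
}
# wait_for_es http://127.0.0.1:64298
```

With no ES listening the loop exhausts its tries and reports `down`, which is exactly the state in which Compose keeps the dependent services from starting.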

@@ -0,0 +1,156 @@
# T-Pot (HP)
# For docker-compose ...
version: '2.1'
networks:
cowrie_local:
dionaea_local:
elasticpot_local:
ewsposter_local:
glastopf_local:
mailoney_local:
rdpy_local:
vnclowpot_local:
services:
# Cowrie service
cowrie:
container_name: cowrie
restart: always
networks:
- cowrie_local
cap_add:
- NET_BIND_SERVICE
ports:
- "22:2222"
- "23:2223"
image: "dtagdevsec/cowrie:1710"
volumes:
- /data/cowrie/downloads:/home/cowrie/cowrie/dl
- /data/cowrie/keys:/home/cowrie/cowrie/etc
- /data/cowrie/log:/home/cowrie/cowrie/log
- /data/cowrie/log/tty:/home/cowrie/cowrie/log/tty
# Dionaea service
dionaea:
container_name: dionaea
stdin_open: true
restart: always
networks:
- dionaea_local
cap_add:
- NET_BIND_SERVICE
ports:
- "20:20"
- "21:21"
- "42:42"
- "69:69/udp"
- "8081:80"
- "135:135"
- "443:443"
- "445:445"
- "1433:1433"
- "1723:1723"
- "1883:1883"
- "1900:1900/udp"
- "3306:3306"
- "5060:5060"
- "5060:5060/udp"
- "5061:5061"
- "27017:27017"
image: "dtagdevsec/dionaea:1710"
volumes:
- /data/dionaea/roots/ftp:/opt/dionaea/var/dionaea/roots/ftp
- /data/dionaea/roots/tftp:/opt/dionaea/var/dionaea/roots/tftp
- /data/dionaea/roots/www:/opt/dionaea/var/dionaea/roots/www
- /data/dionaea/roots/upnp:/opt/dionaea/var/dionaea/roots/upnp
- /data/dionaea:/opt/dionaea/var/dionaea
- /data/dionaea/binaries:/opt/dionaea/var/dionaea/binaries
- /data/dionaea/log:/opt/dionaea/var/log
- /data/dionaea/rtp:/opt/dionaea/var/dionaea/rtp
# Elasticpot service
elasticpot:
container_name: elasticpot
restart: always
networks:
- elasticpot_local
ports:
- "9200:9200"
image: "dtagdevsec/elasticpot:1710"
volumes:
- /data/elasticpot/log:/opt/ElasticpotPY/log
# Ewsposter service
ewsposter:
container_name: ewsposter
restart: always
networks:
- ewsposter_local
image: "dtagdevsec/ewsposter:1710"
volumes:
- /data:/data
- /data/ews/conf/ews.ip:/opt/ewsposter/ews.ip
# Glastopf service
glastopf:
container_name: glastopf
restart: always
networks:
- glastopf_local
ports:
- "80:80"
image: "dtagdevsec/glastopf:1710"
volumes:
- /data/glastopf/db:/opt/glastopf/db
- /data/glastopf/log:/opt/glastopf/log
# Honeytrap service
honeytrap:
container_name: honeytrap
restart: always
network_mode: "host"
cap_add:
- NET_ADMIN
image: "dtagdevsec/honeytrap:1710"
volumes:
- /data/honeytrap/attacks:/opt/honeytrap/var/attacks
- /data/honeytrap/downloads:/opt/honeytrap/var/downloads
- /data/honeytrap/log:/opt/honeytrap/var/log
# Mailoney service
mailoney:
container_name: mailoney
restart: always
networks:
- mailoney_local
ports:
- "25:2525"
image: "dtagdevsec/mailoney:1710"
volumes:
- /data/mailoney/log:/opt/mailoney/logs
# Rdpy service
rdpy:
container_name: rdpy
restart: always
networks:
- rdpy_local
ports:
- "3389:3389"
image: "dtagdevsec/rdpy:1710"
volumes:
- /data/rdpy/log:/var/log/rdpy
# Vnclowpot service
vnclowpot:
container_name: vnclowpot
restart: always
networks:
- vnclowpot_local
ports:
- "5900:5900"
image: "dtagdevsec/vnclowpot:1710"
volumes:
- /data/vnclowpot/log:/var/log/vnclowpot

@@ -0,0 +1,176 @@
# T-Pot (Industrial)
# For docker-compose ...
version: '2.1'
networks:
conpot_local:
emobility_local:
ewsposter_local:
spiderfoot_local:
ui-for-docker_local:
services:
# Conpot service
conpot:
container_name: conpot
restart: always
networks:
- conpot_local
ports:
- "1025:1025"
- "50100:50100"
image: "dtagdevsec/conpot:1710"
volumes:
- /data/conpot/log:/var/log/conpot
# ELK services
## Elasticsearch service
elasticsearch:
container_name: elasticsearch
restart: always
environment:
- bootstrap.memory_lock=true
# - "ES_JAVA_OPTS=-Xms1g -Xmx1g"
cap_add:
- IPC_LOCK
ulimits:
memlock:
soft: -1
hard: -1
nofile:
soft: 65536
hard: 65536
# mem_limit: 2g
ports:
- "127.0.0.1:64298:9200"
image: "dtagdevsec/elasticsearch:1710"
volumes:
- /data:/data
## Kibana service
kibana:
container_name: kibana
restart: always
depends_on:
elasticsearch:
condition: service_healthy
ports:
- "127.0.0.1:64296:5601"
image: "dtagdevsec/kibana:1710"
## Logstash service
logstash:
container_name: logstash
restart: always
depends_on:
elasticsearch:
condition: service_healthy
env_file:
- /etc/tpot/elk/environment
image: "dtagdevsec/logstash:1710"
volumes:
- /data:/data
- /var/log:/data/host/log
## Elasticsearch-head service
head:
container_name: head
restart: always
depends_on:
elasticsearch:
condition: service_healthy
ports:
- "127.0.0.1:64302:9100"
image: "dtagdevsec/head:1710"
# Emobility service
emobility:
container_name: emobility
restart: always
networks:
- emobility_local
cap_add:
- NET_ADMIN
ports:
- "8080:8080"
image: "dtagdevsec/emobility:1710"
volumes:
- /data/emobility:/data/eMobility
- /data/ews:/data/ews
# Ewsposter service
ewsposter:
container_name: ewsposter
restart: always
networks:
- ewsposter_local
image: "dtagdevsec/ewsposter:1710"
volumes:
- /data:/data
- /data/ews/conf/ews.ip:/opt/ewsposter/ews.ip
# Netdata service
netdata:
container_name: netdata
restart: always
network_mode: "host"
depends_on:
elasticsearch:
condition: service_healthy
cap_add:
- SYS_PTRACE
security_opt:
- apparmor=unconfined
image: "dtagdevsec/netdata:1710"
volumes:
- /proc:/host/proc:ro
- /sys:/host/sys:ro
- /var/run/docker.sock:/var/run/docker.sock
# Spiderfoot service
spiderfoot:
container_name: spiderfoot
restart: always
networks:
- spiderfoot_local
ports:
- "127.0.0.1:64303:8080"
image: "dtagdevsec/spiderfoot:1710"
volumes:
- /data/spiderfoot/spiderfoot.db:/home/spiderfoot/spiderfoot.db
# Ui-for-docker service
ui-for-docker:
container_name: ui-for-docker
command: -H unix:///var/run/docker.sock --no-auth
restart: always
networks:
- ui-for-docker_local
ports:
- "127.0.0.1:64299:9000"
image: "dtagdevsec/ui-for-docker:1710"
volumes:
- /var/run/docker.sock:/var/run/docker.sock
# Suricata service
suricata:
container_name: suricata
restart: always
network_mode: "host"
cap_add:
- NET_ADMIN
- SYS_NICE
- NET_RAW
image: "dtagdevsec/suricata:1710"
volumes:
- /data/suricata/log:/var/log/suricata
# P0f service
p0f:
container_name: p0f
restart: always
network_mode: "host"
image: "dtagdevsec/p0f:1710"
volumes:
- /data/p0f/log:/var/log/p0f

@@ -0,0 +1,283 @@
# T-Pot (Standard)
# For docker-compose ...
version: '2.1'
networks:
cowrie_local:
dionaea_local:
elasticpot_local:
ewsposter_local:
glastopf_local:
mailoney_local:
rdpy_local:
spiderfoot_local:
ui-for-docker_local:
vnclowpot_local:
services:
# Cowrie service
cowrie:
container_name: cowrie
restart: always
networks:
- cowrie_local
cap_add:
- NET_BIND_SERVICE
ports:
- "22:2222"
- "23:2223"
image: "dtagdevsec/cowrie:1710"
volumes:
- /data/cowrie/downloads:/home/cowrie/cowrie/dl
- /data/cowrie/keys:/home/cowrie/cowrie/etc
- /data/cowrie/log:/home/cowrie/cowrie/log
- /data/cowrie/log/tty:/home/cowrie/cowrie/log/tty
# Dionaea service
dionaea:
container_name: dionaea
stdin_open: true
restart: always
networks:
- dionaea_local
cap_add:
- NET_BIND_SERVICE
ports:
- "20:20"
- "21:21"
- "42:42"
- "69:69/udp"
- "8081:80"
- "135:135"
- "443:443"
- "445:445"
- "1433:1433"
- "1723:1723"
- "1883:1883"
- "1900:1900/udp"
- "3306:3306"
- "5060:5060"
- "5060:5060/udp"
- "5061:5061"
- "27017:27017"
image: "dtagdevsec/dionaea:1710"
volumes:
- /data/dionaea/roots/ftp:/opt/dionaea/var/dionaea/roots/ftp
- /data/dionaea/roots/tftp:/opt/dionaea/var/dionaea/roots/tftp
- /data/dionaea/roots/www:/opt/dionaea/var/dionaea/roots/www
- /data/dionaea/roots/upnp:/opt/dionaea/var/dionaea/roots/upnp
- /data/dionaea:/opt/dionaea/var/dionaea
- /data/dionaea/binaries:/opt/dionaea/var/dionaea/binaries
- /data/dionaea/log:/opt/dionaea/var/log
- /data/dionaea/rtp:/opt/dionaea/var/dionaea/rtp
# Elasticpot service
elasticpot:
container_name: elasticpot
restart: always
networks:
- elasticpot_local
ports:
- "9200:9200"
image: "dtagdevsec/elasticpot:1710"
volumes:
- /data/elasticpot/log:/opt/ElasticpotPY/log
# ELK services
## Elasticsearch service
elasticsearch:
container_name: elasticsearch
restart: always
environment:
- bootstrap.memory_lock=true
- "ES_JAVA_OPTS=-Xms512m -Xmx512m"
cap_add:
- IPC_LOCK
ulimits:
memlock:
soft: -1
hard: -1
nofile:
soft: 65536
hard: 65536
# mem_limit: 2g
ports:
- "127.0.0.1:64298:9200"
image: "dtagdevsec/elasticsearch:1710"
volumes:
- /data:/data
## Kibana service
kibana:
container_name: kibana
restart: always
depends_on:
elasticsearch:
condition: service_healthy
ports:
- "127.0.0.1:64296:5601"
image: "dtagdevsec/kibana:1710"
## Logstash service
logstash:
container_name: logstash
restart: always
depends_on:
elasticsearch:
condition: service_healthy
env_file:
- /etc/tpot/elk/environment
image: "dtagdevsec/logstash:1710"
volumes:
- /data:/data
- /var/log:/data/host/log
## Elasticsearch-head service
head:
container_name: head
restart: always
depends_on:
elasticsearch:
condition: service_healthy
ports:
- "127.0.0.1:64302:9100"
image: "dtagdevsec/head:1710"
# Ewsposter service
ewsposter:
container_name: ewsposter
restart: always
networks:
- ewsposter_local
image: "dtagdevsec/ewsposter:1710"
volumes:
- /data:/data
- /data/ews/conf/ews.ip:/opt/ewsposter/ews.ip
# Glastopf service
glastopf:
container_name: glastopf
restart: always
networks:
- glastopf_local
ports:
- "80:80"
image: "dtagdevsec/glastopf:1710"
volumes:
- /data/glastopf/db:/opt/glastopf/db
- /data/glastopf/log:/opt/glastopf/log
# Honeytrap service
honeytrap:
container_name: honeytrap
restart: always
network_mode: "host"
cap_add:
- NET_ADMIN
image: "dtagdevsec/honeytrap:1710"
volumes:
- /data/honeytrap/attacks:/opt/honeytrap/var/attacks
- /data/honeytrap/downloads:/opt/honeytrap/var/downloads
- /data/honeytrap/log:/opt/honeytrap/var/log
# Mailoney service
mailoney:
container_name: mailoney
restart: always
networks:
- mailoney_local
ports:
- "25:2525"
image: "dtagdevsec/mailoney:1710"
volumes:
- /data/mailoney/log:/opt/mailoney/logs
# Netdata service
netdata:
container_name: netdata
restart: always
network_mode: "host"
depends_on:
elasticsearch:
condition: service_healthy
cap_add:
- SYS_PTRACE
security_opt:
- apparmor=unconfined
image: "dtagdevsec/netdata:1710"
volumes:
- /proc:/host/proc:ro
- /sys:/host/sys:ro
- /var/run/docker.sock:/var/run/docker.sock
# Rdpy service
rdpy:
container_name: rdpy
restart: always
networks:
- rdpy_local
ports:
- "3389:3389"
image: "dtagdevsec/rdpy:1710"
volumes:
- /data/rdpy/log:/var/log/rdpy
# Spiderfoot service
spiderfoot:
container_name: spiderfoot
restart: always
networks:
- spiderfoot_local
ports:
- "127.0.0.1:64303:8080"
image: "dtagdevsec/spiderfoot:1710"
volumes:
- /data/spiderfoot/spiderfoot.db:/home/spiderfoot/spiderfoot.db
# Ui-for-docker service
ui-for-docker:
container_name: ui-for-docker
command: -H unix:///var/run/docker.sock --no-auth
restart: always
networks:
- ui-for-docker_local
ports:
- "127.0.0.1:64299:9000"
image: "dtagdevsec/ui-for-docker:1710"
volumes:
- /var/run/docker.sock:/var/run/docker.sock
# Suricata service
suricata:
container_name: suricata
restart: always
network_mode: "host"
cap_add:
- NET_ADMIN
- SYS_NICE
- NET_RAW
image: "dtagdevsec/suricata:1710"
volumes:
- /data/suricata/log:/var/log/suricata
# P0f service
p0f:
container_name: p0f
restart: always
network_mode: "host"
image: "dtagdevsec/p0f:1710"
volumes:
- /data/p0f/log:/var/log/p0f
# Vnclowpot service
vnclowpot:
container_name: vnclowpot
restart: always
networks:
- vnclowpot_local
ports:
- "5900:5900"
image: "dtagdevsec/vnclowpot:1710"
volumes:
- /data/vnclowpot/log:/var/log/vnclowpot

@@ -0,0 +1,26 @@
# Remember, leave a key empty if there is no value. None will be a string,
# not a Python "NoneType"
#
# Also remember that all examples have 'disable_action' set to True. If you
# want to use this action as a template, be sure to set this to False after
# copying it.
actions:
1:
action: delete_indices
description: >-
Delete indices older than 90 days (based on index name), for logstash-
prefixed indices. Ignore the error if the filter does not result in an
actionable list of indices (ignore_empty_list) and exit cleanly.
options:
ignore_empty_list: True
disable_action: False
filters:
- filtertype: pattern
kind: prefix
value: logstash-
- filtertype: age
source: name
direction: older
timestring: '%Y.%m.%d'
unit: days
unit_count: 90
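The age filter above compares the `%Y.%m.%d` timestring embedded in each `logstash-` index name against a 90-day cutoff. A sketch of computing that cutoff name (the `cutoff_index` helper is illustrative; GNU date is assumed):

```shell
#!/bin/bash
# Sketch: the newest index name the delete_indices action would touch,
# given "unit: days, unit_count: 90" and timestring %Y.%m.%d.
cutoff_index() {
  date -u -d "$1 days ago" "+logstash-%Y.%m.%d"
}
cutoff_index 90
```

Any index whose name-encoded date is older than the printed one matches both filters and is deleted (unless the list is empty, which `ignore_empty_list` turns into a clean exit).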

@@ -0,0 +1,21 @@
# Remember, leave a key empty if there is no value. None will be a string,
# not a Python "NoneType"
client:
hosts:
- 127.0.0.1
port: 64298
url_prefix:
use_ssl: False
certificate:
client_cert:
client_key:
ssl_no_validate: False
http_auth:
timeout: 30
master_only: False
logging:
loglevel: INFO
logfile: /var/log/curator.log
logformat: default
blacklist: ['elasticsearch', 'urllib3']

Binary file not shown.

Binary file not shown.

@@ -0,0 +1,38 @@
/data/conpot/log/conpot.json
/data/conpot/log/conpot.log
/data/cowrie/log/cowrie.json
/data/cowrie/log/cowrie-textlog.log
/data/cowrie/log/lastlog.txt
/data/cowrie/log/ttylogs.tgz
/data/cowrie/downloads.tgz
/data/dionaea/log/dionaea.json
/data/dionaea/log/dionaea.sqlite
/data/dionaea/bistreams.tgz
/data/dionaea/binaries.tgz
/data/dionaea/dionaea-errors.log
/data/elasticpot/log/elasticpot.log
/data/elk/log/*.log
/data/emobility/log/centralsystem.log
/data/emobility/log/centralsystemEWS.log
/data/glastopf/log/glastopf.log
/data/glastopf/db/glastopf.db
/data/honeytrap/log/*.log
/data/honeytrap/log/*.json
/data/honeytrap/attacks.tgz
/data/honeytrap/downloads.tgz
/data/mailoney/log/commands.log
/data/p0f/log/p0f.json
/data/rdpy/log/rdpy.log
/data/suricata/log/*.log
/data/suricata/log/*.json
/data/vnclowpot/log/vnclowpot.log
{
su tpot tpot
copytruncate
create 760 tpot tpot
daily
missingok
notifempty
rotate 30
compress
}
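`copytruncate` matters here because the honeypot containers keep their log files open: logrotate copies the live file and truncates it in place instead of renaming it, so the writing process keeps a valid file handle. The same two steps by hand, on a temp file (paths are purely illustrative):

```shell
#!/bin/bash
# Sketch: what logrotate's copytruncate does. A plain mv would leave the
# honeypot writing to a renamed/deleted inode until restarted.
myLOG="$(mktemp)"
echo "attack logged" > "$myLOG"
cp "$myLOG" "$myLOG.1"   # 1. copy the live log to the rotated name
: > "$myLOG"             # 2. truncate the original in place
cat "$myLOG.1"           # rotated copy keeps the data
wc -c < "$myLOG"         # live file is now empty
rm -f "$myLOG" "$myLOG.1"
```

The trade-off is a small window between copy and truncate in which log lines can be lost, which is acceptable for honeypot telemetry.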

@@ -0,0 +1,57 @@
[Unit]
Description=tpot
Requires=docker.service
After=docker.service
[Service]
Restart=always
RestartSec=5
# Get and set internal and external IP info, but ignore errors
ExecStartPre=-/usr/share/tpot/bin/updateip.sh
# Clear state or, if persistence is enabled, rotate and compress logs from /data
ExecStartPre=-/bin/bash -c '/usr/share/tpot/bin/clean.sh on'
# Remove old containers, images and volumes
ExecStartPre=-/usr/local/bin/docker-compose -f /etc/tpot/tpot.yml down -v
ExecStartPre=-/usr/local/bin/docker-compose -f /etc/tpot/tpot.yml rm -v
ExecStartPre=-/bin/bash -c 'docker volume rm $(docker volume ls -q)'
ExecStartPre=-/bin/bash -c 'docker rm -v $(docker ps -aq)'
ExecStartPre=-/bin/bash -c 'docker rmi $(docker images | grep "<none>" | awk \'{print $3}\')'
# Get IF, disable offloading, enable promiscuous mode for p0f and suricata
ExecStartPre=/bin/bash -c '/sbin/ethtool --offload $(/sbin/ip address | grep "^2: " | awk \'{ print $2 }\' | tr -d [:punct:]) rx off tx off'
ExecStartPre=/bin/bash -c '/sbin/ethtool -K $(/sbin/ip address | grep "^2: " | awk \'{ print $2 }\' | tr -d [:punct:]) gso off gro off'
ExecStartPre=/bin/bash -c '/sbin/ip link set $(/sbin/ip address | grep "^2: " | awk \'{ print $2 }\' | tr -d [:punct:]) promisc on'
# Modify access rights on docker.sock for netdata
ExecStartPre=-/bin/chmod 666 /var/run/docker.sock
# Set iptables accept rules to avoid forwarding to honeytrap / NFQUEUE
# Forward all other connections to honeytrap / NFQUEUE
ExecStartPre=/sbin/iptables -w -A INPUT -s 127.0.0.1 -j ACCEPT
ExecStartPre=/sbin/iptables -w -A INPUT -d 127.0.0.1 -j ACCEPT
ExecStartPre=/sbin/iptables -w -A INPUT -p tcp -m multiport --dports 64295:64303,7634 -j ACCEPT
ExecStartPre=/sbin/iptables -w -A INPUT -p tcp -m multiport --dports 20:23,25,42,69,80,135,443,445,1433,1723,1883,1900 -j ACCEPT
ExecStartPre=/sbin/iptables -w -A INPUT -p tcp -m multiport --dports 3306,3389,5060,5061,5601,5900,27017 -j ACCEPT
ExecStartPre=/sbin/iptables -w -A INPUT -p tcp -m multiport --dports 1025,50100,8080,8081,9200 -j ACCEPT
ExecStartPre=/sbin/iptables -w -A INPUT -p tcp --syn -m state --state NEW -j NFQUEUE
# Compose T-Pot up
ExecStart=/usr/local/bin/docker-compose -f /etc/tpot/tpot.yml up --no-color
# Compose T-Pot down, remove containers and volumes
ExecStop=/usr/local/bin/docker-compose -f /etc/tpot/tpot.yml down -v
# Remove only previously set iptables rules
ExecStopPost=/sbin/iptables -w -D INPUT -s 127.0.0.1 -j ACCEPT
ExecStopPost=/sbin/iptables -w -D INPUT -d 127.0.0.1 -j ACCEPT
ExecStopPost=/sbin/iptables -w -D INPUT -p tcp -m multiport --dports 64295:64303,7634 -j ACCEPT
ExecStopPost=/sbin/iptables -w -D INPUT -p tcp -m multiport --dports 20:23,25,42,69,80,135,443,445,1433,1723,1883,1900 -j ACCEPT
ExecStopPost=/sbin/iptables -w -D INPUT -p tcp -m multiport --dports 3306,3389,5060,5061,5601,5900,27017 -j ACCEPT
ExecStopPost=/sbin/iptables -w -D INPUT -p tcp -m multiport --dports 1025,50100,8080,8081,9200 -j ACCEPT
ExecStopPost=/sbin/iptables -w -D INPUT -p tcp --syn -m state --state NEW -j NFQUEUE
[Install]
WantedBy=multi-user.target
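One of the cleanup lines above removes dangling images via `docker rmi $(docker images | grep "<none>" | awk '{print $3}')`. A sketch of just that parsing step, run against canned `docker images` output so it can be tried without a Docker daemon (the `dangling_ids` helper and the sample listing are illustrative):

```shell
#!/bin/bash
# Sketch: extract the IMAGE ID column for untagged ("<none>") images,
# the same filter the tpot.service cleanup line feeds to docker rmi.
dangling_ids() {
  grep "<none>" | awk '{ print $3 }'
}
mySAMPLE='REPOSITORY          TAG      IMAGE ID       CREATED        SIZE
dtagdevsec/cowrie   1710     1111aaaa2222   2 days ago     90MB
<none>              <none>   3333bbbb4444   5 weeks ago    1.2GB'
printf '%s\n' "$mySAMPLE" | dangling_ids   # -> 3333bbbb4444
```

On newer Docker versions `docker image prune` does the same job directly, but the grep/awk pipeline works on the versions T-Pot 17.10 targets.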

@@ -1,10 +1,12 @@
 #!/bin/bash
-########################################################
-# T-Pot post install script                            #
-# Ubuntu server 16.04.0, x64                           #
-#                                                      #
-# v16.10.0 by mo, DTAG, 2016-12-03                     #
-########################################################
-
-# Let's load dialog color theme
-cp /root/tpot/etc/dialogrc /etc/
+# T-Pot post install script
+
+# Set TERM, DIALOGRC
+export TERM=linux
+export DIALOGRC=/etc/dialogrc
+
 # Some global vars
 myPROXYFILEPATH="/root/tpot/etc/proxy"
@@ -12,142 +14,30 @@ myNTPCONFPATH="/root/tpot/etc/ntp"
 myPFXPATH="/root/tpot/keys/8021x.pfx"
 myPFXPWPATH="/root/tpot/keys/8021x.pw"
 myPFXHOSTIDPATH="/root/tpot/keys/8021x.id"
+myBACKTITLE="T-Pot-Installer"
+mySITES="https://index.docker.io https://github.com https://pypi.python.org https://ubuntu.com"
+myPROGRESSBOXCONF=" --backtitle "$myBACKTITLE" --progressbox 24 80"
 
-# Let's create a function for colorful output
-fuECHO () {
-local myRED=1
-local myWHT=7
-tput setaf $myRED -T xterm
-echo "$1" "$2"
-tput setaf $myWHT -T xterm
-}
-
 fuRANDOMWORD () {
-local myWORDFILE=/usr/share/dict/names
+local myWORDFILE="$1"
 local myLINES=$(cat $myWORDFILE | wc -l)
 local myRANDOM=$((RANDOM % $myLINES))
 local myNUM=$((myRANDOM * myRANDOM % $myLINES + 1))
 echo -n $(sed -n "$myNUM p" $myWORDFILE | tr -d \' | tr A-Z a-z)
 }
-
-# Let's make sure there is a warning if running for a second time
-if [ -f install.log ];
-then fuECHO "### Running more than once may complicate things. Erase install.log if you are really sure."
-exit 1;
-fi
-
-# Let's log for the beauty of it
-set -e
-exec 2> >(tee "install.err")
-exec > >(tee "install.log")
-
-# Let's remove NGINX default website
-fuECHO "### Removing NGINX default website."
-rm /etc/nginx/sites-enabled/default
-rm /etc/nginx/sites-available/default
-rm /usr/share/nginx/html/index.html
 
 # Let's wait a few seconds to avoid interference with service messages
-fuECHO "### Waiting a few seconds to avoid interference with service messages."
-sleep 5
+sleep 3
+tput civis
+dialog --no-ok --no-cancel --backtitle "$myBACKTITLE" --title "[ Wait to avoid interference with service messages ]" --pause "" 6 80 7
 
-# Let's ask user for install type
-# Install types are TPOT, HP, INDUSTRIAL, ALL
-while [ 1 != 2 ]
-do
-fuECHO "### Please choose your install type and notice HW recommendation."
-fuECHO
-fuECHO " [T] - T-Pot Standard Installation"
-fuECHO "       - Cowrie, Dionaea, Elasticpot, Glastopf, Honeytrap, Suricata & ELK"
-fuECHO "       - 4 GB RAM (6-8 GB recommended)"
-fuECHO "       - 64GB disk (128 GB SSD recommended)"
-fuECHO
-fuECHO " [H] - Honeypots Only Installation"
-fuECHO "       - Cowrie, Dionaea, ElasticPot, Glastopf & Honeytrap"
-fuECHO "       - 3 GB RAM (4-6 GB recommended)"
-fuECHO "       - 64 GB disk (64 GB SSD recommended)"
-fuECHO
-fuECHO " [I] - Industrial"
-fuECHO "       - ConPot, eMobility, ELK & Suricata"
-fuECHO "       - 4 GB RAM (8 GB recommended)"
-fuECHO "       - 64 GB disk (128 GB SSD recommended)"
-fuECHO
-fuECHO " [E] - Everything"
-fuECHO "       - All of the above"
-fuECHO "       - 8 GB RAM"
-fuECHO "       - 128 GB disk or larger (128 GB SSD or larger recommended)"
-fuECHO
-read -p "Install Type: " myTYPE
-case "$myTYPE" in
-[t,T])
-myFLAVOR="TPOT"
-break
-;;
-[h,H])
-myFLAVOR="HP"
-break
-;;
-[i,I])
-myFLAVOR="INDUSTRIAL"
-break
-;;
-[e,E])
-myFLAVOR="ALL"
-break
-;;
-esac
-done
-fuECHO "### You chose: "$myFLAVOR
-fuECHO
-
-# Let's ask user for a web user and password
-myOK="n"
-myUSER="tsec"
-while [ 1 != 2 ]
-do
-fuECHO "### Please enter a web user name and password."
-read -p "Username (tsec not allowed): " myUSER
-echo "Your username is: "$myUSER
-fuECHO
-read -p "OK (y/n)? " myOK
-fuECHO
-if [ "$myOK" = "y" ] && [ "$myUSER" != "tsec" ] && [ "$myUSER" != "" ];
-then
-break
-fi
-done
-myPASS1="pass1"
-myPASS2="pass2"
-while [ "$myPASS1" != "$myPASS2" ]
-do
-while [ "$myPASS1" == "pass1" ] || [ "$myPASS1" == "" ]
-do
-read -s -p "Password: " myPASS1
-fuECHO
-done
-read -s -p "Repeat password: " myPASS2
-fuECHO
-if [ "$myPASS1" != "$myPASS2" ];
-then
-fuECHO "### Passwords do not match."
-myPASS1="pass1"
-myPASS2="pass2"
-fi
-done
-htpasswd -b -c /etc/nginx/nginxpasswd $myUSER $myPASS1
-fuECHO
-
-# Let's generate a SSL certificate
-fuECHO "### Generating a self-signed-certificate for NGINX."
-fuECHO "### If you are unsure you can use the default values."
-mkdir -p /etc/nginx/ssl
-openssl req -nodes -x509 -sha512 -newkey rsa:8192 -keyout "/etc/nginx/ssl/nginx.key" -out "/etc/nginx/ssl/nginx.crt" -days 3650
-
 # Let's setup the proxy for env
 if [ -f $myPROXYFILEPATH ];
-then fuECHO "### Setting up the proxy."
+then
+dialog --title "[ Setting up the proxy ]" $myPROGRESSBOXCONF <<EOF
+EOF
 myPROXY=$(cat $myPROXYFILEPATH)
-tee -a /etc/environment <<EOF
+tee -a /etc/environment 2>&1>/dev/null <<EOF
 export http_proxy=$myPROXY
 export https_proxy=$myPROXY
 export HTTP_PROXY=$myPROXY
@@ -157,31 +47,189 @@ EOF
 source /etc/environment
 
 # Let's setup the proxy for apt
-tee /etc/apt/apt.conf <<EOF
+tee /etc/apt/apt.conf 2>&1>/dev/null <<EOF
 Acquire::http::Proxy "$myPROXY";
 Acquire::https::Proxy "$myPROXY";
 EOF
+
+# Let's add proxy settings to docker defaults
+myPROXY=$(cat $myPROXYFILEPATH)
+tee -a /etc/default/docker 2>&1>/dev/null <<EOF
+http_proxy=$myPROXY
+https_proxy=$myPROXY
+HTTP_PROXY=$myPROXY
+HTTPS_PROXY=$myPROXY
+no_proxy=localhost,127.0.0.1,.sock
+EOF
+
+# Let's restart docker for proxy changes to take effect
+systemctl stop docker 2>&1 | dialog --title "[ Stop docker service ]" $myPROGRESSBOXCONF
+systemctl start docker 2>&1 | dialog --title "[ Start docker service ]" $myPROGRESSBOXCONF
 fi
+
+# Let's test the internet connection
+mySITESCOUNT=$(echo $mySITES | wc -w)
+j=0
+for i in $mySITES;
+do
+dialog --title "[ Testing the internet connection ]" --backtitle "$myBACKTITLE" \
+--gauge "\n Now checking: $i\n" 8 80 $(expr 100 \* $j / $mySITESCOUNT) <<EOF
+EOF
+curl --connect-timeout 5 -IsS $i 2>&1>/dev/null
+if [ $? -ne 0 ];
+then
+dialog --backtitle "$myBACKTITLE" --title "[ Continue? ]" --yesno "\nInternet connection test failed. This might indicate some problems with your connection. You can continue, but the installation might fail." 10 50
+if [ $? = 1 ];
+then
+dialog --backtitle "$myBACKTITLE" --title "[ Abort ]" --msgbox "\nInstallation aborted. Exiting the installer." 7 50
+exit
+else
+break;
+fi;
+fi;
+let j+=1
+dialog --title "[ Testing the internet connection ]" --backtitle "$myBACKTITLE" \
+--gauge "\n Now checking: $i\n" 8 80 $(expr 100 \* $j / $mySITESCOUNT) <<EOF
+EOF
+done;
+
+# Let's remove NGINX default website
+#fuECHO "### Removing NGINX default website."
+rm -rf /etc/nginx/sites-enabled/default 2>&1 | dialog --title "[ Removing NGINX default website. ]" $myPROGRESSBOXCONF;
+rm -rf /etc/nginx/sites-available/default 2>&1 | dialog --title "[ Removing NGINX default website. ]" $myPROGRESSBOXCONF;
+rm -rf /usr/share/nginx/html/index.html 2>&1 | dialog --title "[ Removing NGINX default website. ]" $myPROGRESSBOXCONF;
+
+# Let's ask user for install flavor
+# Install types are TPOT, HP, INDUSTRIAL, ALL
+tput cnorm
+myFLAVOR=$(dialog --no-cancel --backtitle "$myBACKTITLE" --title "[ Choose your edition ]" --no-tags --menu \
+"\nRequired: 4GB RAM, 64GB disk\nRecommended: 8GB RAM, 128GB SSD" 14 60 4 \
+"TPOT" "Standard Honeypots, Suricata & ELK" \
+"HP" "Honeypots only, w/o Suricata & ELK" \
+"INDUSTRIAL" "Conpot, eMobility, Suricata & ELK" \
+"EVERYTHING" "Everything" 3>&1 1>&2 2>&3 3>&-)
+
+# Let's ask for a secure tsec password
+myUSER="tsec"
+myPASS1="pass1"
+myPASS2="pass2"
+mySECURE="0"
+while [ "$myPASS1" != "$myPASS2" ] && [ "$mySECURE" == "0" ]
+do
+while [ "$myPASS1" == "pass1" ] || [ "$myPASS1" == "" ]
+do
+myPASS1=$(dialog --insecure --backtitle "$myBACKTITLE" \
+--title "[ Enter password for console user (tsec) ]" \
+--passwordbox "\nPassword" 9 60 3>&1 1>&2 2>&3 3>&-)
+done
+myPASS2=$(dialog --insecure --backtitle "$myBACKTITLE" \
+--title "[ Repeat password for console user (tsec) ]" \
+--passwordbox "\nPassword" 9 60 3>&1 1>&2 2>&3 3>&-)
+if [ "$myPASS1" != "$myPASS2" ];
+then
+dialog --backtitle "$myBACKTITLE" --title "[ Passwords do not match. ]" \
+--msgbox "\nPlease re-enter your password." 7 60
+myPASS1="pass1"
+myPASS2="pass2"
+fi
+mySECURE=$(printf "%s" "$myPASS1" | cracklib-check | grep -c "OK")
+if [ "$mySECURE" == "0" ] && [ "$myPASS1" == "$myPASS2" ];
+then
+dialog --backtitle "$myBACKTITLE" --title "[ Password is not secure ]" --defaultno --yesno "\nKeep insecure password?" 7 50
+myOK=$?
+if [ "$myOK" == "1" ];
+then
+myPASS1="pass1"
+myPASS2="pass2"
+fi
+fi
+done
+printf "%s" "$myUSER:$myPASS1" | chpasswd
+
+# Let's ask for a web username with secure password
+myOK="1"
+myUSER="tsec"
+myPASS1="pass1"
+myPASS2="pass2"
+mySECURE="0"
+while [ 1 != 2 ]
+do
+myUSER=$(dialog --backtitle "$myBACKTITLE" --title "[ Enter your web user name ]" --inputbox "\nUsername (tsec not allowed)" 9 50 3>&1 1>&2 2>&3 3>&-)
+myUSER=$(echo $myUSER | tr -cd "[:alnum:]_.-")
+dialog --backtitle "$myBACKTITLE" --title "[ Your username is ]" --yesno "\n$myUSER" 7 50
+myOK=$?
+if [ "$myOK" = "0" ] && [ "$myUSER" != "tsec" ] && [ "$myUSER" != "" ];
+then
+break
+fi
+done
+while [ "$myPASS1" != "$myPASS2" ] && [ "$mySECURE" == "0" ]
+do
+while [ "$myPASS1" == "pass1" ] || [ "$myPASS1" == "" ]
+do
+myPASS1=$(dialog --insecure --backtitle "$myBACKTITLE" \
+--title "[ Enter password for your web user ]" \
+--passwordbox "\nPassword" 9 60 3>&1 1>&2 2>&3 3>&-)
+done
+myPASS2=$(dialog --insecure --backtitle "$myBACKTITLE" \
+--title "[ Repeat password for your web user ]" \
+--passwordbox "\nPassword" 9 60 3>&1 1>&2 2>&3 3>&-)
+if [ "$myPASS1" != "$myPASS2" ];
+then
+dialog --backtitle "$myBACKTITLE" --title "[ Passwords do not match. ]" \
+--msgbox "\nPlease re-enter your password." 7 60
+myPASS1="pass1"
+myPASS2="pass2"
+fi
+mySECURE=$(printf "%s" "$myPASS1" | cracklib-check | grep -c "OK")
+if [ "$mySECURE" == "0" ] && [ "$myPASS1" == "$myPASS2" ];
+then
+dialog --backtitle "$myBACKTITLE" --title "[ Password is not secure ]" --defaultno --yesno "\nKeep insecure password?" 7 50
+myOK=$?
+if [ "$myOK" == "1" ];
+then
+myPASS1="pass1"
+myPASS2="pass2"
+fi
+fi
+done
+htpasswd -b -c /etc/nginx/nginxpasswd "$myUSER" "$myPASS1" 2>&1 | dialog --title "[ Setting up user and password ]" $myPROGRESSBOXCONF;
+
+# Let's generate a SSL self-signed certificate without interaction (browsers will see it invalid anyway)
+tput civis
+mkdir -p /etc/nginx/ssl 2>&1 | dialog --title "[ Generating a self-signed-certificate for NGINX ]" $myPROGRESSBOXCONF;
+openssl req \
+-nodes \
+-x509 \
+-sha512 \
+-newkey rsa:8192 \
+-keyout "/etc/nginx/ssl/nginx.key" \
+-out "/etc/nginx/ssl/nginx.crt" \
+-days 3650 \
+-subj '/C=AU/ST=Some-State/O=Internet Widgits Pty Ltd' 2>&1 | dialog --title "[ Generating a self-signed-certificate for NGINX ]" $myPROGRESSBOXCONF;
 
 # Let's setup the ntp server
 if [ -f $myNTPCONFPATH ];
 then
-fuECHO "### Setting up the ntp server."
-cp $myNTPCONFPATH /etc/ntp.conf
+dialog --title "[ Setting up the ntp server ]" $myPROGRESSBOXCONF <<EOF
+EOF
+cp $myNTPCONFPATH /etc/ntp.conf 2>&1 | dialog --title "[ Setting up the ntp server ]" $myPROGRESSBOXCONF
 fi
 
 # Let's setup 802.1x networking
 if [ -f $myPFXPATH ];
 then
-fuECHO "### Setting up 802.1x networking."
-cp $myPFXPATH /etc/wpa_supplicant/
+dialog --title "[ Setting 802.1x networking ]" $myPROGRESSBOXCONF <<EOF
+EOF
+cp $myPFXPATH /etc/wpa_supplicant/ 2>&1 | dialog --title "[ Setting 802.1x networking ]" $myPROGRESSBOXCONF
 if [ -f $myPFXPWPATH ];
 then
-fuECHO "### Setting up 802.1x password."
+dialog --title "[ Setting up 802.1x password ]" $myPROGRESSBOXCONF <<EOF
+EOF
 myPFXPW=$(cat $myPFXPWPATH)
 fi
 myPFXHOSTID=$(cat $myPFXHOSTIDPATH)
-tee -a /etc/network/interfaces <<EOF
+tee -a /etc/network/interfaces 2>&1>/dev/null <<EOF
 wpa-driver wired
 wpa-conf /etc/wpa_supplicant/wired8021x.conf
@@ -197,7 +245,7 @@ tee -a /etc/network/interfaces <<EOF
 # wpa-conf /etc/wpa_supplicant/wireless8021x.conf
 EOF
 
-tee /etc/wpa_supplicant/wired8021x.conf <<EOF
+tee /etc/wpa_supplicant/wired8021x.conf 2>&1>/dev/null <<EOF
 ctrl_interface=/var/run/wpa_supplicant
 ctrl_interface_group=root
 eapol_version=1
@@ -211,7 +259,7 @@ network={
 }
 EOF
 
-tee /etc/wpa_supplicant/wireless8021x.conf <<EOF
+tee /etc/wpa_supplicant/wireless8021x.conf 2>&1>/dev/null <<EOF
 ctrl_interface=/var/run/wpa_supplicant
 ctrl_interface_group=root
 eapol_version=1
@@ -230,8 +278,20 @@ EOF
 fi
 
 # Let's provide a wireless example config ...
-fuECHO "### Providing a wireless example config."
-tee -a /etc/network/interfaces <<EOF
+fuECHO "### Providing static ip, wireless example config."
+tee -a /etc/network/interfaces 2>&1>/dev/null <<EOF
+
+### Example static ip config
+### Replace <eth0> with the name of your physical interface name
+#
+#auto eth0
+#iface eth0 inet static
+# address 192.168.1.1
+# netmask 255.255.255.0
+# network 192.168.1.0
+# broadcast 192.168.1.255
+# gateway 192.168.1.1
+# dns-nameservers 192.168.1.1
 
 ### Example wireless config without 802.1x
 ### This configuration was tested with the IntelNUC series
@ -254,239 +314,211 @@ sed -i '/cdrom/d' /etc/apt/sources.list
# Let's make sure SSH roaming is turned off (CVE-2016-0777, CVE-2016-0778) # Let's make sure SSH roaming is turned off (CVE-2016-0777, CVE-2016-0778)
fuECHO "### Let's make sure SSH roaming is turned off." fuECHO "### Let's make sure SSH roaming is turned off."
tee -a /etc/ssh/ssh_config <<EOF tee -a /etc/ssh/ssh_config 2>&1>/dev/null <<EOF
UseRoaming no UseRoaming no
EOF EOF
# Let's pull some updates # Let's pull some updates
fuECHO "### Pulling Updates." apt-get update -y 2>&1 | dialog --title "[ Pulling updates ]" $myPROGRESSBOXCONF
apt-get update -y apt-get upgrade -y 2>&1 | dialog --title "[ Pulling updates ]" $myPROGRESSBOXCONF
apt-get upgrade -y
# Let's clean up apt # Let's clean up apt
apt-get autoclean -y apt-get autoclean -y 2>&1 | dialog --title "[ Pulling updates ]" $myPROGRESSBOXCONF
apt-get autoremove -y apt-get autoremove -y 2>&1 | dialog --title "[ Pulling updates ]" $myPROGRESSBOXCONF
# Installing alerta-cli, wetty # Installing docker-compose, wetty, ctop, elasticdump
fuECHO "### Installing alerta-cli." pip install --upgrade pip 2>&1 | dialog --title "[ Installing pip ]" $myPROGRESSBOXCONF
pip install --upgrade pip pip install docker-compose==1.12.0 2>&1 | dialog --title "[ Installing docker-compose ]" $myPROGRESSBOXCONF
pip install alerta pip install elasticsearch-curator==5.1.1 2>&1 | dialog --title "[ Installing elasticsearch-curator ]" $myPROGRESSBOXCONF
fuECHO "### Installing wetty." ln -s /usr/bin/nodejs /usr/bin/node 2>&1 | dialog --title "[ Installing wetty ]" $myPROGRESSBOXCONF
ln -s /usr/bin/nodejs /usr/bin/node npm install https://github.com/t3chn0m4g3/wetty -g 2>&1 | dialog --title "[ Installing wetty ]" $myPROGRESSBOXCONF
npm install https://github.com/t3chn0m4g3/wetty -g npm install https://github.com/t3chn0m4g3/elasticsearch-dump -g 2>&1 | dialog --title "[ Installing elasticsearch-dump ]" $myPROGRESSBOXCONF
wget https://github.com/bcicen/ctop/releases/download/v0.6.1/ctop-0.6.1-linux-amd64 -O ctop 2>&1 | dialog --title "[ Installing ctop ]" $myPROGRESSBOXCONF
# Let's add proxy settings to docker defaults mv ctop /usr/bin/ 2>&1 | dialog --title "[ Installing ctop ]" $myPROGRESSBOXCONF
if [ -f $myPROXYFILEPATH ]; chmod +x /usr/bin/ctop 2>&1 | dialog --title "[ Installing ctop ]" $myPROGRESSBOXCONF
then fuECHO "### Setting up the proxy for docker."
myPROXY=$(cat $myPROXYFILEPATH)
tee -a /etc/default/docker <<EOF
http_proxy=$myPROXY
https_proxy=$myPROXY
HTTP_PROXY=$myPROXY
HTTPS_PROXY=$myPROXY
no_proxy=localhost,127.0.0.1,.sock
EOF
fi
# Let's add a new user # Let's add a new user
fuECHO "### Adding new user." addgroup --gid 2000 tpot 2>&1 | dialog --title "[ Adding new user ]" $myPROGRESSBOXCONF
addgroup --gid 2000 tpot adduser --system --no-create-home --uid 2000 --disabled-password --disabled-login --gid 2000 tpot 2>&1 | dialog --title "[ Adding new user ]" $myPROGRESSBOXCONF
adduser --system --no-create-home --uid 2000 --disabled-password --disabled-login --gid 2000 tpot
# Let's set the hostname # Let's set the hostname
fuECHO "### Setting a new hostname." a=$(fuRANDOMWORD /usr/share/dict/a.txt)
myHOST=$(curl -s -f www.nsanamegenerator.com | html2text | tr A-Z a-z | awk '{print $1}') n=$(fuRANDOMWORD /usr/share/dict/n.txt)
if [ "$myHOST" = "" ]; then myHOST=$a$n
fuECHO "### Failed to fetch name from remote, using local cache." hostnamectl set-hostname $myHOST 2>&1 | dialog --title "[ Setting new hostname ]" $myPROGRESSBOXCONF
myHOST=$(fuRANDOMWORD) sed -i 's#127.0.1.1.*#127.0.1.1\t'"$myHOST"'#g' /etc/hosts 2>&1 | dialog --title "[ Setting new hostname ]" $myPROGRESSBOXCONF
fi
hostnamectl set-hostname $myHOST
sed -i 's#127.0.1.1.*#127.0.1.1\t'"$myHOST"'#g' /etc/hosts
# Let's patch sshd_config # Let's patch sshd_config
fuECHO "### Patching sshd_config to listen on port 64295 and deny password authentication." sed -i 's#Port 22#Port 64295#' /etc/ssh/sshd_config 2>&1 | dialog --title "[ SSH listen on tcp/64295 ]" $myPROGRESSBOXCONF
sed -i 's#Port 22#Port 64295#' /etc/ssh/sshd_config sed -i 's#\#PasswordAuthentication yes#PasswordAuthentication no#' /etc/ssh/sshd_config 2>&1 | dialog --title "[ SSH password authentication only from RFC1918 networks ]" $myPROGRESSBOXCONF
sed -i 's#\#PasswordAuthentication yes#PasswordAuthentication no#' /etc/ssh/sshd_config tee -a /etc/ssh/sshd_config 2>&1>/dev/null <<EOF
# Let's allow ssh password authentication from RFC1918 networks
fuECHO "### Allow SSH password authentication from RFC1918 networks"
tee -a /etc/ssh/sshd_config <<EOF
Match address 127.0.0.1,10.0.0.0/8,172.16.0.0/12,192.168.0.0/16 Match address 127.0.0.1,10.0.0.0/8,172.16.0.0/12,192.168.0.0/16
PasswordAuthentication yes PasswordAuthentication yes
EOF EOF
# Let's patch docker defaults, so we can run images as service
fuECHO "### Patching docker defaults."
tee -a /etc/default/docker <<EOF
DOCKER_OPTS="-r=false"
EOF
# Let's restart docker for proxy changes to take effect
systemctl restart docker
sleep 5
# Let's make sure only myFLAVOR images will be downloaded and started # Let's make sure only myFLAVOR images will be downloaded and started
case $myFLAVOR in case $myFLAVOR in
HP) HP)
echo "### Preparing HONEYPOT flavor installation." echo "### Preparing HONEYPOT flavor installation."
cp /root/tpot/data/imgcfg/hp_images.conf /root/tpot/data/images.conf cp /root/tpot/etc/tpot/compose/hp.yml /root/tpot/etc/tpot/tpot.yml 2>&1>/dev/null
;; ;;
INDUSTRIAL) INDUSTRIAL)
echo "### Preparing INDUSTRIAL flavor installation." echo "### Preparing INDUSTRIAL flavor installation."
cp /root/tpot/data/imgcfg/industrial_images.conf /root/tpot/data/images.conf cp /root/tpot/etc/tpot/compose/industrial.yml /root/tpot/etc/tpot/tpot.yml 2>&1>/dev/null
;; ;;
TPOT) TPOT)
echo "### Preparing TPOT flavor installation." echo "### Preparing TPOT flavor installation."
cp /root/tpot/data/imgcfg/tpot_images.conf /root/tpot/data/images.conf cp /root/tpot/etc/tpot/compose/tpot.yml /root/tpot/etc/tpot/tpot.yml 2>&1>/dev/null
;; ;;
ALL) EVERYTHING)
echo "### Preparing EVERYTHING flavor installation." echo "### Preparing EVERYTHING flavor installation."
cp /root/tpot/data/imgcfg/all_images.conf /root/tpot/data/images.conf cp /root/tpot/etc/tpot/compose/all.yml /root/tpot/etc/tpot/tpot.yml 2>&1>/dev/null
;; ;;
esac esac
# Let's load docker images # Let's load docker images
fuECHO "### Loading docker images. Please be patient, this may take a while." myIMAGESCOUNT=$(cat /root/tpot/etc/tpot/tpot.yml | grep -v '#' | grep image | cut -d: -f2 | wc -l)
for name in $(cat /root/tpot/data/images.conf) j=0
for name in $(cat /root/tpot/etc/tpot/tpot.yml | grep -v '#' | grep image | cut -d'"' -f2)
do do
docker pull dtagdevsec/$name:latest1610 dialog --title "[ Downloading docker images, please be patient ]" --backtitle "$myBACKTITLE" \
--gauge "\n Now downloading: $name\n" 8 80 $(expr 100 \* $j / $myIMAGESCOUNT) <<EOF
EOF
docker pull $name 2>&1>/dev/null
let j+=1
dialog --title "[ Downloading docker images, please be patient ]" --backtitle "$myBACKTITLE" \
--gauge "\n Now downloading: $name\n" 8 80 $(expr 100 \* $j / $myIMAGESCOUNT) <<EOF
EOF
done done
# Let's add the daily update check with a weekly clean interval # Let's add the daily update check with a weekly clean interval
fuECHO "### Modifying update checks." dialog --title "[ Modifying update checks ]" $myPROGRESSBOXCONF <<EOF
tee /etc/apt/apt.conf.d/10periodic <<EOF EOF
tee /etc/apt/apt.conf.d/10periodic 2>&1>/dev/null <<EOF
APT::Periodic::Update-Package-Lists "1"; APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Download-Upgradeable-Packages "0"; APT::Periodic::Download-Upgradeable-Packages "0";
APT::Periodic::AutocleanInterval "7"; APT::Periodic::AutocleanInterval "7";
EOF EOF
# Let's make sure to reboot the system after a kernel panic # Let's make sure to reboot the system after a kernel panic
fuECHO "### Reboot after kernel panic." dialog --title "[ Reboot after kernel panic ]" $myPROGRESSBOXCONF <<EOF
tee -a /etc/sysctl.conf <<EOF EOF
tee -a /etc/sysctl.conf 2>&1>/dev/null <<EOF
# Reboot after kernel panic, check via /proc/sys/kernel/panic[_on_oops] # Reboot after kernel panic, check via /proc/sys/kernel/panic[_on_oops]
# Set required map count for ELK
kernel.panic = 1 kernel.panic = 1
kernel.panic_on_oops = 1 kernel.panic_on_oops = 1
vm.max_map_count = 262144
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
net.ipv6.conf.lo.disable_ipv6 = 1
EOF EOF
# Let's add some cronjobs # Let's add some cronjobs
fuECHO "### Adding cronjobs." dialog --title "[ Adding cronjobs ]" $myPROGRESSBOXCONF <<EOF
tee -a /etc/crontab <<EOF EOF
tee -a /etc/crontab 2>&1>/dev/null <<EOF
# Show running containers every 60s via /dev/tty2
#*/2 * * * * root status.sh > /dev/tty2
# Check if containers and services are up
*/5 * * * * root check.sh
# Example for alerta-cli IP update
#*/5 * * * * root alerta --endpoint-url http://<ip>:<port>/api delete --filters resource=<host> && alerta --endpoint-url http://<ip>:<port>/api send -e IP -r <host> -E Production -s ok -S T-Pot -t \$(cat /data/elk/logstash/mylocal.ip) --status open
# Check if updated images are available and download them # Check if updated images are available and download them
27 1 * * * root for i in \$(cat /data/images.conf); do docker pull dtagdevsec/\$i:latest1610; done 27 1 * * * root /usr/bin/docker-compose -f /etc/tpot/tpot.yml pull
# Restart docker service and containers # Delete elasticsearch logstash indices older than 90 days
27 3 * * * root dcres.sh 27 4 * * * root /usr/local/bin/curator --config /etc/tpot/curator/curator.yml /etc/tpot/curator/actions.yml
# Delete elastic indices older than 90 days (kibana index is omitted by default) # Uploaded binaries are not supposed to be downloaded
27 4 * * * root docker exec elk bash -c '/usr/local/bin/curator --host 127.0.0.1 delete indices --older-than 90 --time-unit days --timestring \%Y.\%m.\%d' */1 * * * * root mv --backup=numbered /data/dionaea/roots/ftp/* /data/dionaea/binaries/
# Update IP and erase check.lock if it exists
27 15 * * * root /etc/rc.local
# Daily reboot # Daily reboot
27 23 * * * root reboot 27 3 * * * root reboot
# Check for updated packages every sunday, upgrade and reboot # Check for updated packages every sunday, upgrade and reboot
27 16 * * 0 root apt-get autoclean -y && apt-get autoremove -y && apt-get update -y && apt-get upgrade -y && sleep 10 && reboot 27 16 * * 0 root apt-get autoclean -y && apt-get autoremove -y && apt-get update -y && apt-get upgrade -y && sleep 10 && reboot
EOF EOF
# Let's create some files and folders # Let's create some files and folders
fuECHO "### Creating some files and folders."
mkdir -p /data/conpot/log \ mkdir -p /data/conpot/log \
/data/cowrie/log/tty/ /data/cowrie/downloads/ /data/cowrie/keys/ /data/cowrie/misc/ \ /data/cowrie/log/tty/ /data/cowrie/downloads/ /data/cowrie/keys/ /data/cowrie/misc/ \
/data/dionaea/log /data/dionaea/bistreams /data/dionaea/binaries /data/dionaea/rtp /data/dionaea/roots/ftp /data/dionaea/roots/tftp /data/dionaea/roots/www /data/dionaea/roots/upnp \ /data/dionaea/log /data/dionaea/bistreams /data/dionaea/binaries /data/dionaea/rtp /data/dionaea/roots/ftp /data/dionaea/roots/tftp /data/dionaea/roots/www /data/dionaea/roots/upnp \
/data/elasticpot/log \ /data/elasticpot/log \
/data/elk/data /data/elk/log /data/elk/logstash/conf \ /data/elk/data /data/elk/log \
/data/glastopf /data/honeytrap/log/ /data/honeytrap/attacks/ /data/honeytrap/downloads/ \ /data/glastopf /data/honeytrap/log/ /data/honeytrap/attacks/ /data/honeytrap/downloads/ \
/data/mailoney/log \
/data/emobility/log \ /data/emobility/log \
/data/ews/log /data/ews/conf /data/ews/dionaea /data/ews/emobility \ /data/ews/conf \
/data/suricata/log /home/tsec/.ssh/ /data/rdpy/log \
/data/spiderfoot \
/data/suricata/log /home/tsec/.ssh/ \
/data/p0f/log \
/data/vnclowpot/log \
/etc/tpot/elk /etc/tpot/compose /etc/tpot/systemd \
/usr/share/tpot/bin 2>&1 | dialog --title "[ Creating some files and folders ]" $myPROGRESSBOXCONF
touch /data/spiderfoot/spiderfoot.db 2>&1 | dialog --title "[ Creating some files and folders ]" $myPROGRESSBOXCONF
# Let's take care of some files and permissions before copying # Let's take care of some files and permissions before copying
chmod 500 /root/tpot/bin/* chmod 500 /root/tpot/bin/* 2>&1 | dialog --title "[ Setting permissions ]" $myPROGRESSBOXCONF
chmod 600 /root/tpot/data/* chmod 600 -R /root/tpot/etc/tpot 2>&1 | dialog --title "[ Setting permissions ]" $myPROGRESSBOXCONF
chmod 644 /root/tpot/etc/issue chmod 644 /root/tpot/etc/issue 2>&1 | dialog --title "[ Setting permissions ]" $myPROGRESSBOXCONF
chmod 755 /root/tpot/etc/rc.local chmod 755 /root/tpot/etc/rc.local 2>&1 | dialog --title "[ Setting permissions ]" $myPROGRESSBOXCONF
chmod 644 /root/tpot/data/systemd/* chmod 644 /root/tpot/etc/tpot/systemd/* 2>&1 | dialog --title "[ Setting permissions ]" $myPROGRESSBOXCONF
# Let's copy some files # Let's copy some files
tar xvfz /root/tpot/data/elkbase.tgz -C / tar xvfz /root/tpot/etc/tpot/elkbase.tgz -C / 2>&1 | dialog --title "[ Extracting elkbase.tgz ]" $myPROGRESSBOXCONF
cp /root/tpot/data/elkbase.tgz /data/ cp -R /root/tpot/bin/* /usr/share/tpot/bin/ 2>&1 | dialog --title "[ Copy configs ]" $myPROGRESSBOXCONF
cp -R /root/tpot/bin/* /usr/bin/ cp -R /root/tpot/etc/tpot/* /etc/tpot/ 2>&1 | dialog --title "[ Copy configs ]" $myPROGRESSBOXCONF
cp -R /root/tpot/data/* /data/ cp /root/tpot/etc/tpot/systemd/* /etc/systemd/system/ 2>&1 | dialog --title "[ Copy configs ]" $myPROGRESSBOXCONF
cp /root/tpot/data/systemd/* /etc/systemd/system/ cp /root/tpot/etc/issue /etc/ 2>&1 | dialog --title "[ Copy configs ]" $myPROGRESSBOXCONF
cp /root/tpot/etc/issue /etc/ cp -R /root/tpot/etc/nginx/ssl /etc/nginx/ 2>&1 | dialog --title "[ Copy configs ]" $myPROGRESSBOXCONF
cp -R /root/tpot/etc/nginx/ssl /etc/nginx/ cp /root/tpot/etc/nginx/tpotweb.conf /etc/nginx/sites-available/ 2>&1 | dialog --title "[ Copy configs ]" $myPROGRESSBOXCONF
cp /root/tpot/etc/nginx/tpotweb.conf /etc/nginx/sites-available/ cp /root/tpot/etc/nginx/nginx.conf /etc/nginx/nginx.conf 2>&1 | dialog --title "[ Copy configs ]" $myPROGRESSBOXCONF
cp /root/tpot/etc/nginx/nginx.conf /etc/nginx/nginx.conf cp /root/tpot/keys/authorized_keys /home/tsec/.ssh/authorized_keys 2>&1 | dialog --title "[ Copy configs ]" $myPROGRESSBOXCONF
cp /root/tpot/keys/authorized_keys /home/tsec/.ssh/authorized_keys cp /root/tpot/usr/share/nginx/html/* /usr/share/nginx/html/ 2>&1 | dialog --title "[ Copy configs ]" $myPROGRESSBOXCONF
cp /root/tpot/usr/share/nginx/html/* /usr/share/nginx/html/ systemctl enable tpot 2>&1 | dialog --title "[ Enabling service for tpot ]" $myPROGRESSBOXCONF
for i in $(cat /data/images.conf); systemctl enable wetty 2>&1 | dialog --title "[ Enabling service for wetty ]" $myPROGRESSBOXCONF
do
systemctl enable $i;
done
systemctl enable wetty
# Let's enable T-Pot website # Let's enable T-Pot website
fuECHO "### Enabling T-Pot website." ln -s /etc/nginx/sites-available/tpotweb.conf /etc/nginx/sites-enabled/tpotweb.conf 2>&1 | dialog --title "[ Enabling T-Pot website ]" $myPROGRESSBOXCONF
ln -s /etc/nginx/sites-available/tpotweb.conf /etc/nginx/sites-enabled/tpotweb.conf
# Let's take care of some files and permissions # Let's take care of some files and permissions
chmod 760 -R /data chmod 760 -R /data 2>&1 | dialog --title "[ Set permissions and ownerships ]" $myPROGRESSBOXCONF
chown tpot:tpot -R /data chown tpot:tpot -R /data 2>&1 | dialog --title "[ Set permissions and ownerships ]" $myPROGRESSBOXCONF
chmod 600 /home/tsec/.ssh/authorized_keys chmod 600 /home/tsec/.ssh/authorized_keys 2>&1 | dialog --title "[ Set permissions and ownerships ]" $myPROGRESSBOXCONF
chown tsec:tsec /home/tsec/.ssh /home/tsec/.ssh/authorized_keys chown tsec:tsec /home/tsec/.ssh /home/tsec/.ssh/authorized_keys 2>&1 | dialog --title "[ Set permissions and ownerships ]" $myPROGRESSBOXCONF
# Let's replace "quiet splash" options, set a console font for more screen canvas and update grub # Let's replace "quiet splash" options, set a console font for more screen canvas and update grub
sed -i 's#GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"#GRUB_CMDLINE_LINUX_DEFAULT="consoleblank=0"#' /etc/default/grub sed -i 's#GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"#GRUB_CMDLINE_LINUX_DEFAULT="consoleblank=0"#' /etc/default/grub 2>&1>/dev/null
-sed -i 's#GRUB_CMDLINE_LINUX=""#GRUB_CMDLINE_LINUX="cgroup_enable=memory swapaccount=1"#' /etc/default/grub
+sed -i 's#GRUB_CMDLINE_LINUX=""#GRUB_CMDLINE_LINUX="cgroup_enable=memory swapaccount=1"#' /etc/default/grub 2>&1>/dev/null
 #sed -i 's#\#GRUB_GFXMODE=640x480#GRUB_GFXMODE=800x600x32#' /etc/default/grub
 #tee -a /etc/default/grub <<EOF
 #GRUB_GFXPAYLOAD=800x600x32
 #GRUB_GFXPAYLOAD_LINUX=800x600x32
 #EOF
-update-grub
+update-grub 2>&1 | dialog --title "[ Update grub ]" $myPROGRESSBOXCONF
 cp /usr/share/consolefonts/Uni2-Terminus12x6.psf.gz /etc/console-setup/
 gunzip /etc/console-setup/Uni2-Terminus12x6.psf.gz
 sed -i 's#FONTFACE=".*#FONTFACE="Terminus"#' /etc/default/console-setup
 sed -i 's#FONTSIZE=".*#FONTSIZE="12x6"#' /etc/default/console-setup
-update-initramfs -u
+update-initramfs -u 2>&1 | dialog --title "[ Update initramfs ]" $myPROGRESSBOXCONF
-# Let's enable a color prompt
+# Let's enable a color prompt and add /usr/share/tpot/bin to path
 myROOTPROMPT='PS1="\[\033[38;5;8m\][\[$(tput sgr0)\]\[\033[38;5;1m\]\u\[$(tput sgr0)\]\[\033[38;5;6m\]@\[$(tput sgr0)\]\[\033[38;5;4m\]\h\[$(tput sgr0)\]\[\033[38;5;6m\]:\[$(tput sgr0)\]\[\033[38;5;5m\]\w\[$(tput sgr0)\]\[\033[38;5;8m\]]\[$(tput sgr0)\]\[\033[38;5;1m\]\\$\[$(tput sgr0)\]\[\033[38;5;15m\] \[$(tput sgr0)\]"'
 myUSERPROMPT='PS1="\[\033[38;5;8m\][\[$(tput sgr0)\]\[\033[38;5;2m\]\u\[$(tput sgr0)\]\[\033[38;5;6m\]@\[$(tput sgr0)\]\[\033[38;5;4m\]\h\[$(tput sgr0)\]\[\033[38;5;6m\]:\[$(tput sgr0)\]\[\033[38;5;5m\]\w\[$(tput sgr0)\]\[\033[38;5;8m\]]\[$(tput sgr0)\]\[\033[38;5;2m\]\\$\[$(tput sgr0)\]\[\033[38;5;15m\] \[$(tput sgr0)\]"'
-tee -a /root/.bashrc << EOF
+tee -a /root/.bashrc 2>&1>/dev/null <<EOF
 $myROOTPROMPT
+PATH="$PATH:/usr/share/tpot/bin"
 EOF
-tee -a /home/tsec/.bashrc << EOF
+tee -a /home/tsec/.bashrc 2>&1>/dev/null <<EOF
 $myUSERPROMPT
+PATH="$PATH:/usr/share/tpot/bin"
 EOF
 # Let's create ews.ip before reboot and prevent race condition for first start
-source /etc/environment
-myLOCALIP=$(hostname -I | awk '{ print $1 }')
-myEXTIP=$(curl -s myexternalip.com/raw)
-sed -i "s#IP:.*#IP: $myLOCALIP ($myEXTIP)#" /etc/issue
-sed -i "s#SSH:.*#SSH: ssh -l tsec -p 64295 $myLOCALIP#" /etc/issue
-sed -i "s#WEB:.*#WEB: https://$myLOCALIP:64297#" /etc/issue
-tee /data/ews/conf/ews.ip << EOF
-[MAIN]
-ip = $myEXTIP
-EOF
-echo $myLOCALIP > /data/elk/logstash/mylocal.ip
-chown tpot:tpot /data/ews/conf/ews.ip
+/usr/share/tpot/bin/updateip.sh 2>&1>/dev/null
 # Final steps
-fuECHO "### Thanks for your patience. Now rebooting."
-mv /root/tpot/etc/rc.local /etc/rc.local && rm -rf /root/tpot/ && sleep 2 && reboot
+mv /root/tpot/etc/rc.local /etc/rc.local 2>&1>/dev/null && \
+rm -rf /root/tpot/ 2>&1>/dev/null && \
+dialog --no-ok --no-cancel --backtitle "$myBACKTITLE" --title "[ Thanks for your patience. Now rebooting. ]" --pause "" 6 80 2 && \
+reboot
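The two `tee -a ... <<EOF` appends above work because the heredoc delimiter is unquoted, so `$myROOTPROMPT`/`$myUSERPROMPT` and `$PATH` are expanded at the moment the file is written, not when the shell later sources `.bashrc`. A minimal standalone sketch of the same pattern, writing to a throwaway temp file instead of a real `.bashrc` and using a simplified stand-in prompt:

```shell
#!/bin/sh
# Simplified stand-in for the real color prompt (illustrative only)
myUSERPROMPT='PS1="\u@\h:\w\$ "'
# Write to a temp file instead of a user's .bashrc for this demo
demoRC=$(mktemp)
# Unquoted EOF: $myUSERPROMPT and $PATH expand now, so the file
# receives the literal expanded values
tee -a "$demoRC" >/dev/null <<EOF
$myUSERPROMPT
PATH="$PATH:/usr/share/tpot/bin"
EOF
# Show what actually landed in the file
grep '/usr/share/tpot/bin' "$demoRC"
```

Quoting the delimiter (`<<'EOF'`) would instead write the variable names verbatim, deferring expansion to login time; the installer deliberately bakes in the expanded prompt string.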

File diff suppressed because it is too large

File diff suppressed because it is too large


@@ -10,12 +10,12 @@
 <body bgcolor="#E20074">
 <center>
 <a href="/tpotweb.html" target="_top" class="btn">Home</a>
-<a href="/kibana/" target="main" class="btn">Kibana</a>
-<a href="/myhead/_plugin/head/" target="main" class="btn">ES Head Plugin</a>
-<a href="/ui/" target="main" class="btn">UI-For-Docker</a>
-<a href="/wetty/ssh/tsec" target="main" class="btn">WebSSH</a>
+<a href="/kibana" target="main" class="btn">Kibana</a>
+<a href="/myhead/" target="main" class="btn">ES Head</a>
 <a href="/netdata/" target="_blank" class="btn">Netdata</a>
+<a href="/spiderfoot/" target="main" class="btn">Spiderfoot</a>
+<a href="/ui/" target="main" class="btn">Portainer</a>
+<a href="/wetty/ssh/tsec" target="main" class="btn">WebTTY</a>
 </center>
 </body>
 </html>
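As a quick sanity check that the rewritten navbar points only at the intended reverse-proxy paths, the `href` targets can be extracted with standard tools. A self-contained sketch with the new markup inlined (the navbar's actual location in the T-Pot web root is not shown here):

```shell
#!/bin/sh
# Inline copy of the updated navbar links for the demo;
# the quoted 'EOF' delimiter prevents any shell expansion
navHTML=$(cat <<'EOF'
<a href="/tpotweb.html" target="_top" class="btn">Home</a>
<a href="/kibana" target="main" class="btn">Kibana</a>
<a href="/myhead/" target="main" class="btn">ES Head</a>
<a href="/netdata/" target="_blank" class="btn">Netdata</a>
<a href="/spiderfoot/" target="main" class="btn">Spiderfoot</a>
<a href="/ui/" target="main" class="btn">Portainer</a>
<a href="/wetty/ssh/tsec" target="main" class="btn">WebTTY</a>
EOF
)
# Pull out just the href values, one path per line
echo "$navHTML" | grep -o 'href="[^"]*"' | cut -d'"' -f2
```

Running this lists the seven proxied paths, which is a cheap way to diff the navbar against the nginx locations actually configured.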


@@ -8,7 +8,7 @@
 <frameset rows='20,*' border='0' frameborder='0' framespacing='0'>
 <frame src='navbar.html' name='navbar' marginwidth='0' marginheight='0' scrolling='no' noresize>
-<frame src='/kibana/' name='main' marginwidth='0' marginheight='0' scrolling='auto' noresize>
+<frame src='/kibana' name='main' marginwidth='0' marginheight='0' scrolling='auto' noresize>
 <noframes>
 </noframes>
 </frameset>


@@ -1,6 +1,6 @@
 default install
 label install
-menu label ^T-Pot 16.10
+menu label ^T-Pot 17.10 (Alpha)
 menu default
 kernel linux
 append vga=788 initrd=initrd.gz console-setup/ask_detect=true --


@@ -1,14 +1,13 @@
 #!/bin/bash
-########################################################
-# T-Pot                                                #
-# .ISO creator                                         #
-#                                                      #
-# v16.10.0 by mo, DTAG, 2016-10-28                     #
-########################################################
+# Set TERM, DIALOGRC
+export DIALOGRC=/etc/dialogrc
+export TERM=linux
 # Let's define some global vars
 myBACKTITLE="T-Pot - ISO Creator"
+# If you need latest hardware support, try using the hardware enablement (hwe) ISO
+# myUBUNTULINK="http://archive.ubuntu.com/ubuntu/dists/xenial-updates/main/installer-amd64/current/images/hwe-netboot/mini.iso"
 myUBUNTULINK="http://archive.ubuntu.com/ubuntu/dists/xenial-updates/main/installer-amd64/current/images/netboot/mini.iso"
 myUBUNTUISO="mini.iso"
 myTPOTISO="tpot.iso"
@@ -33,6 +32,9 @@ if [ "$myWHOAMI" != "root" ]
 exit
 fi
+# Let's load dialog color theme
+cp installer/etc/dialogrc /etc/
 # Let's clean up at the end or if something goes wrong ...
 function fuCLEANUP {
 rm -rf $myTMP $myTPOTDIR $myPROXYCONFIG $myPFXPATH $myPFXPWPATH $myPFXHOSTIDPATH $myNTPCONFPATH


@@ -63,7 +63,7 @@ d-i passwd/root-login boolean false
 d-i passwd/make-user boolean true
 d-i passwd/user-fullname string tsec
 d-i passwd/username string tsec
-#d-i passwd/user-password-crypted password $1$jAw1TW8v$a2WFamxQJfpPYZmn4qJT71
+d-i passwd/user-password-crypted password $1$jAw1TW8v$a2WFamxQJfpPYZmn4qJT71
 d-i user-setup/encrypt-home boolean false
 ########################################
@@ -100,7 +100,7 @@ tasksel tasksel/first multiselect ubuntu-server
 ########################
 ### Package Installation
 ########################
-d-i pkgsel/include string apache2-utils apparmor apt-transport-https aufs-tools bash-completion build-essential ca-certificates cgroupfs-mount curl dialog dnsutils docker.io dstat ethtool genisoimage git glances html2text htop iptables iw libltdl7 lm-sensors man nginx-extras nodejs npm ntp openssh-server openssl syslinux psmisc pv python-pip vim wireless-tools wpasupplicant
+d-i pkgsel/include string apache2-utils apparmor apt-transport-https aufs-tools bash-completion build-essential ca-certificates cgroupfs-mount curl dialog dnsutils docker.io dstat ethtool genisoimage git glances html2text htop iptables iw jq libcrack2 libltdl7 lm-sensors man nginx-extras nodejs npm ntp openssh-server openssl prips syslinux psmisc pv python-pip unzip vim wireless-tools wpasupplicant
 #################
 ### Update Policy
@@ -116,7 +116,7 @@ in-target grub-install --force $(debconf-get partman-auto/disk); \
 in-target update-grub; \
 cp /opt/tpot/rc.local.install /target/etc/rc.local; \
 cp -r /opt/tpot/ /target/root/; \
-cp /opt/tpot/usr/share/dict/names /target/usr/share/dict/names
+cp /opt/tpot/usr/share/dict/* /target/usr/share/dict/
 ##########
 ### Reboot
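With the `passwd/user-password-crypted` line now active, the preseed expects an MD5-crypt (`$1$salt$hash`) value. A hedged sketch of how such a hash can be generated with OpenSSL; the password `s3cret` and salt `ab12cd34` below are illustrative placeholders, not T-Pot's actual credentials:

```shell
#!/bin/sh
# Generate an MD5-crypt hash suitable for d-i passwd/user-password-crypted.
# -1 selects the MD5-based crypt scheme; the salt may be up to 8 characters.
# "s3cret" / "ab12cd34" are placeholders for this demo only.
myHASH=$(openssl passwd -1 -salt ab12cd34 s3cret)
# Emit the preseed line as it would appear in the file
echo "d-i passwd/user-password-crypted password $myHASH"
```

`mkpasswd` (from the `whois` package) offers stronger schemes such as SHA-512 crypt, which debian-installer also accepts, but the shipped preseed uses the MD5 form shown above.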