
## Description

Display support for the following screens:

- [Pimoroni GFX Hat](https://shop.pimoroni.com/products/gfx-hat?variant=12828343631955): `ui.display.type = "gfxhat"`.
  Contrast and backlight color can be set in config.toml:
  `ui.display.contrast = 40`
  `ui.display.blcolor = "olive"`
  Available backlight colors: white, grey, maroon, red, purple, fuchsia, green, lime, olive, yellow, navy, blue, teal, aqua.
- [Adafruit miniPiTFT](https://www.adafruit.com/product/4484): `ui.display.type = "minipitft"`
- [Adafruit miniPiTFT2](https://www.adafruit.com/product/4393): `ui.display.type = "minipitft2"`
- [ArgonPod](https://argon40.com/products/pod-display-2-8inch): `ui.display.type = "argonpod"`
- [DisplayHatMini](https://shop.pimoroni.com/products/display-hat-mini?variant=39496084717651): driver updated to fix issues.
- I2C OLED: the default I2C address is now 0x3C, since most boards ship with it by default. It can be changed in config:
  `ui.display.type = "i2coled"`
  `ui.display.i2c_addr = 0x3C`
  `ui.display.width = 128`
  `ui.display.height = 64`

## Motivation and Context

Future plan for LCD and OLED screens: switch the display drivers from the pwnagotchi hardware libraries to the Luma LCD and OLED packages.

- Luma Core: https://github.com/rm-hull/luma.core
- Luma LCD: https://github.com/rm-hull/luma.lcd
- Luma OLED: https://github.com/rm-hull/luma.oled

Luma ships ready-made drivers for the most widely used LCD and OLED screens, which should make adding new screens easier in the future.

## How Has This Been Tested?

Except for the ArgonPod and miniPiTFT2, all screens were tested on previous builds and should work in 2.9.2, but I would like to test with an image before release.

## Types of changes

- [x] Bug fix (non-breaking change which fixes an issue)
- [x] New feature (non-breaking change which adds functionality)
- [ ] Breaking change (fix or feature that would cause existing functionality to change)

## Checklist:

- [x] My code follows the code style of this project.
- [ ] My change requires a change to the documentation.
- [x] I have updated the documentation accordingly.
- [x] I've read the [CONTRIBUTION](https://github.com/evilsocket/pwnagotchi/blob/master/CONTRIBUTING.md) guide.
- [x] I have signed-off my commits with `git commit -s`.
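For convenience, here is a minimal config.toml sketch combining the options quoted above. The keys and example values (contrast 40, backlight color "olive", I2C address 0x3C, 128x64 resolution) come straight from the description; only one `ui.display.type` should be active at a time, so pick the entry that matches your hardware.

```toml
# Sketch of a config.toml fragment for the new displays (values as quoted above).

# Pimoroni GFX Hat
ui.display.type = "gfxhat"
ui.display.contrast = 40       # example contrast from the description
ui.display.blcolor = "olive"   # any of the listed backlight colors

# I2C OLED (commented out: only one ui.display.type may be set)
# ui.display.type = "i2coled"
# ui.display.i2c_addr = 0x3C   # new default address
# ui.display.width = 128
# ui.display.height = 64
```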
# Pwnagotchi
This is the main source for all forks:
- RPiZeroW (32bit)
- RPiZero2W, RPi3, RPi4, RPi5 (64bit)
For installation docs check out the wiki!
Proudly partnering with PiSugar!!
Pwnagotchi is an A2C-based "AI" leveraging bettercap that learns from its surrounding Wi-Fi environment to maximize the crackable WPA key material it captures (either passively, or by performing deauthentication and association attacks). This material is collected as PCAP files containing any form of handshake supported by hashcat, including PMKIDs, full and half WPA handshakes.
Instead of merely playing Super Mario or Atari games like most reinforcement learning-based "AI" (yawn), Pwnagotchi tunes its parameters over time to get better at pwning Wi-Fi things in the environments you expose it to.
More specifically, Pwnagotchi uses an LSTM with an MLP feature extractor as the policy network for its A2C agent. If you're unfamiliar with A2C, here is a very good introductory explanation (in comic form!) of the basic principles behind how Pwnagotchi learns. (You can read more about how Pwnagotchi learns in the Usage doc.)
Keep in mind: unlike the usual RL simulations, Pwnagotchi learns over time. Time for a Pwnagotchi is measured in epochs; a single epoch can last from a few seconds to minutes, depending on how many access points and client stations are visible. Do not expect your Pwnagotchi to perform amazingly well at the very beginning, as during its first epochs it will be exploring several combinations of key parameters to determine the ideal adjustments for pwning the particular environment you expose it to ... but **listen to your Pwnagotchi when it tells you it's bored!** Bring it into novel Wi-Fi environments with you and have it observe new networks and capture new handshakes, and you'll see. :)
Multiple units within close physical proximity can "talk" to each other, advertising their presence to each other by broadcasting custom information elements using a parasite protocol I've built on top of the existing dot11 standard. Over time, two or more units trained together will learn to cooperate upon detecting each other's presence by dividing the available channels among them for optimal pwnage.
## Documentation

- https://github.com/jayofelony/pwnagotchi/wiki
- https://www.pwnagotchi.org
## Links

| Official Links | |
|---|---|
| Website | pwnagotchi.org |
| Forum | discord.gg |
| Subreddit | r/pwnagotchi |
## License

Pwnagotchi was created by @evilsocket and is updated by us. It is released under the GPL3 license.