* --prompt-file to specify a file from which to read the prompt
* --prompt to specify a prompt string on the command line
* --config to specify an alternate config file
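For example, assuming the server entry point is named `ssh_server.py` (the script and file names here are illustrative):

```
# Read the emulation prompt from a file:
python3 ssh_server.py --config ./config.ini --prompt-file ./prompt.txt

# Or pass a short prompt string directly:
python3 ssh_server.py --prompt "Emulate a lightly-used Ubuntu web server"
```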
The config file now contains a new "system_prompt" value in the [llm] section. This value configures how the emulation itself behaves and should generally be the same across all DECEIVE instances; honeypot administrators should mostly leave it intact. The prompt.txt file now focuses on what type of system to emulate, plus optional details such as valid users, content to stage on the system, etc.
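As a sketch, the new section might look like this (only the `[llm]` section and the `system_prompt` key come from the change above; the value shown is illustrative):

```ini
[llm]
# Controls how the emulation itself behaves; administrators should
# usually leave this intact.
system_prompt = You are a Linux server. Respond to each command exactly
    as a real system would, and print nothing else.
```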
* Project is now called DECEIVE, so the README.md has been updated to reflect this
* Added more details about installation, configuration, host platform support, and logging to the README.md
* All logging is now in JSON lines format!
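Since each record is a single JSON object on its own line, the logs are straightforward to post-process. A minimal sketch in Python, assuming a log file named `deceive.log` (both the file name and the field names are hypothetical, not the actual schema):

```python
import json

# Print the summary of every session recorded in the log.
with open("deceive.log", encoding="utf-8") as log:
    for line in log:
        if not line.strip():
            continue  # skip blank lines
        record = json.loads(line)  # one JSON object per line
        if record.get("event") == "session_summary":  # hypothetical field
            print(record["summary"])  # hypothetical field
```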
* Fixed a bug where the session summary was generated twice for the same session
* Fixed a regression in the exit handling when the user logged out gracefully.
* Session summaries are now generated both at normal session termination (e.g., when the user gracefully logs out) and at abnormal termination (e.g., when the client disconnects unexpectedly).
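Together with the duplicate-summary fix above, the behavior amounts to generating exactly one summary per session no matter how it ends. A minimal sketch of that pattern (an illustration only, not DECEIVE's actual code):

```python
import asyncio

async def handle_session(session_id: str) -> None:
    summary_written = False

    def write_summary_once() -> None:
        nonlocal summary_written
        if not summary_written:  # guard: never generate the summary twice
            summary_written = True
            print(f"session summary for {session_id}")  # stand-in for real logging

    try:
        # Stand-in for the interactive shell loop; in practice this may
        # raise if the client disconnects abruptly.
        await asyncio.sleep(0)
    finally:
        # Runs on graceful logout and on abrupt disconnect alike.
        write_summary_once()

asyncio.run(handle_session("demo"))
```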
* AI results are now encoded as UTF-8 instead of ASCII; the model occasionally returned non-ASCII characters, which caused the server to throw errors.
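The failure mode is easy to reproduce: encoding to ASCII raises on any non-ASCII character the model happens to emit, while UTF-8 accepts it:

```python
reply = "total 4\n-rw-r--r-- 1 root root 42 Jan  1 00:00 café.txt\n"

# reply.encode("ascii")   # raises UnicodeEncodeError on the 'é'
data = reply.encode("utf-8")  # UTF-8 can represent anything the model returns
```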
* Rather than explicitly checking whether the user typed a shell exit command, the LLM is now instructed to emit a specific token ("XXX-END-OF-SESSION-XXX") to indicate that the session should be closed. This allows users to exit the shell however they see fit, and the LLM will still know when to end the session. It also means that typing 'exit' or similar commands in subshells or other command interpreters (e.g., Python) is less likely to end the honeypot session prematurely. A sketch of the detection logic appears below.
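A minimal sketch of that detection, assuming the LLM's reply arrives as a plain string (the token is the one described above; the function and variable names are illustrative):

```python
END_TOKEN = "XXX-END-OF-SESSION-XXX"

def process_llm_reply(reply: str) -> tuple[str, bool]:
    """Return the output to show the client, and whether to close the session."""
    should_close = END_TOKEN in reply
    # The sentinel is a control signal; never echo it to the attacker.
    visible = reply.replace(END_TOKEN, "")
    return visible, should_close

output, close_now = process_llm_reply("logout\nXXX-END-OF-SESSION-XXX")
print(repr(output), close_now)  # 'logout\n' True
```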
* Added LangChain support (using OpenAI's gpt-4o model); a sketch of the integration appears after this list
* Created a system prompt that gives functional results
* Initial integration of logging for LLM responses (needs improvement)
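As referenced above, a minimal sketch of a LangChain call against gpt-4o, assuming the `langchain-openai` package and an `OPENAI_API_KEY` environment variable (the prompt text and chain structure are illustrative, not DECEIVE's actual code):

```python
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o")

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are emulating a Linux server's SSH session."),  # illustrative
    ("human", "{command}"),
])

chain = prompt | llm  # LCEL: pipe the rendered prompt into the model
response = chain.invoke({"command": "uname -a"})
print(response.content)
```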