mirror of
https://github.com/splunk/DECEIVE.git
synced 2025-07-01 16:47:28 -04:00
Streamline the prompting
The config file now contains a new "system_prompt" value in the [llm] section. It is the same for all DECEIVE instances and configures how the emulation itself will act; the honeypot administrator should mostly keep it intact. The prompt.txt file now focuses on what type of system to emulate, plus optional details such as valid users and content to stage on the system.
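Concretely, the split means the shared behavior lives in config.ini while prompt.txt only describes the deployment. A minimal sketch of how the two pieces are read (key names match the commit; the prompt text here is illustrative, and the file read is replaced by in-memory strings):

```python
# Sketch of the new prompt split: [llm] system_prompt holds behavior shared
# by every DECEIVE instance; prompt.txt describes the specific system to
# emulate. Values below are illustrative, not the full shipped prompts.
import configparser

CONFIG_TEXT = """
[llm]
system_prompt = Interpret all inputs as though they were SSH commands
    and provide a realistic output.
"""

# Stands in for the contents of prompt.txt in this sketch.
PROMPT_TXT = "You are a video game developer's system.\n"

config = configparser.ConfigParser()
config.read_string(CONFIG_TEXT)

def get_prompts() -> dict:
    # Mirrors the get_prompts() added in this commit, with the prompt.txt
    # read swapped for the in-memory string above.
    return {
        "system_prompt": config["llm"]["system_prompt"],
        "user_prompt": PROMPT_TXT,
    }

prompts = get_prompts()
```

Note that configparser joins the indented continuation lines into a single multi-line value, so the long system prompt can stay readable in the .ini file.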
@@ -2,7 +2,6 @@
# We provide reasonable defaults for most configuration items, but you should
# review this before using it in production.

[honeypot]

# The name of the file you wish to write the honeypot log to.
log_file = ssh_log.log
@@ -37,17 +36,41 @@ model_name = gpt-4o
#llm_provider = gemini
#model_name = gemini-1.5-pro

# The plain text file containing the LLM system prompt.
# This is required, no matter which LLM you use.
system_prompt_file = prompt.txt

# The maximum number of tokens to send to the LLM backend in a single
# request. This includes the message history for the session, so should
# be fairly high. Not all models support large token counts, so be sure
# to test this with your model.
trimmer_max_tokens = 64000

# System prompt for the LLM. This will be the same for any running instance.
# Change this if you need to tweak behaviors of the simulation itself.
system_prompt = Interpret all inputs as though they were SSH commands and provide a realistic
    output. Unless otherwise specified, assume you are emulating a Linux operating
    system, so be sure to include realistic users, processes, and files. Do not
    include extraneous formatting in your responses. Also make sure that your
    responses do not indicate that the user is on a virtual machine, container,
    or emulation at all.

    On the first call, be sure to include a realistic MOTD.

    End all responses with a realistic shell prompt to display to the user,
    including a space at the end.

    Include ANSI color codes for the terminal with the output of ls commands
    (including any flags), or in any other situation where it is appropriate, but
    do not include the ``` code formatting around those blocks.

    Make sure all user and host names conform to some reasonable corporate naming
    standard. Never use obviously fake names like "Jane Doe" or just Alice, Bob, and Charlie.

    If at any time the user's input would cause the SSH session to close (e.g., if
    they exited the login shell), your only answer should be "XXX-END-OF-SESSION-XXX"
    with no additional output before or after. Remember that the user could start up
    subshells or other command interpreters, and exiting those subprocesses should not
    end the SSH session.

    Assume the username is {username}.

# The valid user accounts and passwords for the SSH server, in the
# form "username = password". Note that you can enable login without
# a password by leaving that field blank (e.g., "guest =" on a line by
@@ -56,3 +79,4 @@ trimmer_max_tokens = 64000
guest =
user1 = secretpw
user2 = password123
@@ -1,27 +1,2 @@
You are a linux operating system accepting commands from a user via SSH.
Interpret all inputs as though they were SSH commands and provide a realistic
output. You are emulating a video game developer's system, so be sure to
include realistic users, processes, and files, especially video game source
and asset files. Do not include extraneous formatting in your responses.
Also make sure that your responses do not indicate that the user is on a
virtual machine, container, or emulation at all.

On the first call, be sure to include a realistic MOTD.

End all responses with a realistic shell prompt to display to the user,
including a space at the end.

Include ANSI color codes for the terminal with the output of ls commands
(including any flags), or in any other situation where it is appropriate, but
do not include the ``` code formatting around those blocks.

Make sure all user and host names conform to some reasonable corporate naming
standard. Never use obviously fake names like "Jane Doe" or just Alice, Bob, and Charlie.

If at any time the user's input would cause the SSH session to close (e.g., if
they exited the login shell), your only answer should be "XXX-END-OF-SESSION-XXX"
with no additional output before or after. Remember that the user could start up
subshells or other command interpreters, and exiting those subprocesses should not
end the SSH session.

Assume the username is {username}.
You are a video game developer's system. Include realistic video game source
and asset files.
@@ -285,6 +285,15 @@ def choose_llm():

    return llm_model

def get_prompts() -> dict:
    system_prompt = config['llm']['system_prompt']
    with open("prompt.txt", "r") as f:
        user_prompt = f.read()
    return {
        "system_prompt": system_prompt,
        "user_prompt": user_prompt
    }

#### MAIN ####

# Always use UTC for logging
@@ -311,9 +320,9 @@ logger.addFilter(f)

# Now get access to the LLM

prompt_file = config['llm'].get("system_prompt_file", "prompt.txt")
with open(prompt_file, "r") as f:
    llm_system_prompt = f.read()
prompts = get_prompts()
llm_system_prompt = prompts["system_prompt"]
llm_user_prompt = prompts["user_prompt"]

llm = choose_llm()
@@ -334,6 +343,10 @@ llm_prompt = ChatPromptTemplate.from_messages(
            "system",
            llm_system_prompt
        ),
        (
            "system",
            llm_user_prompt
        ),
        MessagesPlaceholder(variable_name="messages"),
    ]
)
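The template change above stacks two system messages ahead of the session history: the shared emulation prompt first, then the per-deployment prompt from prompt.txt. The resulting message ordering can be sketched without LangChain installed (names here are illustrative, not the project's API):

```python
# Sketch of the message ordering the updated ChatPromptTemplate produces:
# shared system prompt, then deployment prompt, then session history.
def build_messages(system_prompt, user_prompt, history):
    # Two "system" entries are intentional: generic behavior plus the
    # scenario description are kept as separate messages.
    return [("system", system_prompt), ("system", user_prompt)] + list(history)

msgs = build_messages(
    "Interpret all inputs as SSH commands.",
    "You are a video game developer's system.",
    [("human", "ls -la")],
)
```

Keeping the two prompts as separate messages means prompt.txt can be swapped per honeypot without touching the shared emulation instructions.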