CobaltSplunk

Images are currently broken; I will fix them when I have time.

TL;DR: use Splunk as a central log database and analysis system for offensive infrastructure logs. In many engagements you will want accurate logging across multiple RAT systems, phishing web servers, mail systems, and more. CobaltSplunk currently only supports Cobalt Strike, but support for Empire, Pupy, Metasploit, Apache, Nginx, and more is planned!

Introduction

How many times have you led or operated on an attack simulation when the point of contact calls you at the most unexpected moment and says "We are experiencing disruptions on X service, is it you?" Being able to respond accurately and swiftly is key to differentiating your services from the competition in the marketplace. Good project management, organisation, and the ability to answer questions as they come all help towards building a stronger relationship.

This blog post will highlight a project that I've been working on called CobaltSplunk (to be renamed later), which is essentially a dashboard and a set of pre-defined, packaged queries for querying Cobalt Strike logs.

Setup

Installing Splunk Forwarder

In my scenario I am using a Cobalt Strike server hosted in the cloud (as it's a test environment). There have been plenty of arguments as to whether this is good or bad practice; I'm not going to get into that here. That's a choice for your team and your company's data protection policy.

If your Cobalt Strike server isn't in the cloud, or you are already punching a hole out to it, you can skip the hole-punch piece and just set the forwarder to connect to a local Splunk server.

First, install the Splunk Universal Forwarder on the Cobalt Strike server (the SplunkForwarder DEB file can be downloaded from the Splunk website after logging in): dpkg -i splunkforwarder.deb

Then configure the forwarder to start on boot: cd /opt/splunkforwarder/bin; ./splunk enable boot-start

It will ask you for an indexer IP; set it to your Splunk instance's IP. If you are punching a hole out from the Splunk server to your cloud-based server, such as ssh TARGET -R 127.0.0.1:9997:127.0.0.1:9997, then you can set the indexer address to 127.0.0.1:9997.
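
As a rough sketch, assuming the indexer is listening on Splunk's default receiving port 9997 (adjust host, port, and paths to your environment):

# On the Splunk server: reverse tunnel so the cloud team server can reach
# the indexer via its own loopback interface
ssh TARGET -R 127.0.0.1:9997:127.0.0.1:9997

# On the Cobalt Strike team server: point the forwarder at the tunnelled port
cd /opt/splunkforwarder/bin
./splunk add forward-server 127.0.0.1:9997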

Once that has been configured, you can check your settings with ./splunk list forward-server, which will show the forwarder as inactive. Type ./splunk start to activate it, then run ./splunk list forward-server again to confirm the forwarder is now indeed active.

Once your forwarder is active, you are ready to proceed and start adding logs to be monitored and ingested into Splunk.

Getting logs into Splunk

I began by creating indexes called ssh and cobalt through the Splunk GUI, then ran the following commands on the forwarder.

SSH logs: ./splunk add monitor /var/log/auth.log -index ssh -sourcetype %APP%

Cobalt Strike web logs: ./splunk add monitor /root/cobaltstrike/logs/.../weblog.log -index cobalt -sourcetype weblog

Cobalt Strike Beacon logs: ./splunk add monitor /root/cobaltstrike/logs/.../beacon_*.log -index cobalt -sourcetype beacon_log
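
If you prefer editing configuration files over the CLI, the add monitor commands above end up as stanzas roughly like the following in an inputs.conf on the forwarder (for example /opt/splunkforwarder/etc/system/local/inputs.conf). This is a sketch only; the ... stands for the dated log directories as in the paths above and should not be copied literally:

# SSH authentication logs
[monitor:///var/log/auth.log]
index = ssh
sourcetype = %APP%

# Cobalt Strike web logs
[monitor:///root/cobaltstrike/logs/.../weblog.log]
index = cobalt
sourcetype = weblog

# Cobalt Strike Beacon session logs
[monitor:///root/cobaltstrike/logs/.../beacon_*.log]
index = cobalt
sourcetype = beacon_log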

Analysing Web Logs

Analysing web logs allows the operator to see who's communicating with the teamserver from an external perspective. This section goes over some of the analysis we can perform and leverage from web logs.

Parsing Logs

Before we can utilise the web logs effectively, we have to parse each of the log types we ingest. As Splunk does not know how to "understand" web and beacon logs out of the box, we have to teach it. I used the field extractor tool with some manual regular expressions to parse all of the log types; this is included in the CobaltSplunk application I am releasing with this post.
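
As a minimal sketch of the kind of extraction involved (the regular expressions shipped in the app differ, and the exact weblog format is an assumption here), pulling a source IP and request out of the raw events inline with rex looks something like this:

index=cobalt sourcetype=weblog
| rex field=_raw "(?<ip>\d{1,3}(?:\.\d{1,3}){3})"
| rex field=_raw "(?<method>GET|POST|HEAD|PUT|DELETE)\s+(?<request>\S+)"
| table _time ip method request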

Request URIs and Status Codes by IP

Discover web log requests grouped by IP:

index=cobalt sourcetype=weblog | stats values(request) as request values(status_code) as status_code by ip

Web Log Requests on a Map

Discover where requests are being made geographically by visualising them on a map. If you're looking at a tracker that you may have used in an e-mail or phishing campaign, you can easily visualise where the trackers have been triggered from.
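
A cluster map like this can be driven by Splunk's built-in iplocation and geostats commands; a minimal version, assuming the extracted source field is called ip as in the query above:

index=cobalt sourcetype=weblog
| iplocation ip
| geostats count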

The following uses the CentralOps WHOIS plugin, which allows us to quickly WHOIS the IPs behind the likely IR user agents and discover who they might be.

Analysing Beacon Session Logs

With the Beacon session logs at our disposal, we can see every command that was run, and also visualise a few things such as the external IP address where applicable. Once again, all parsing is already complete and readily available in the released CobaltSplunk Splunk app.

A quick look at when I was debugging the parsing of logs using rex:

We can see all of the logs obtained, grouped by beacon session:

We can also search for commands run by a specific operator across all servers.

We can grab all of the commands run on a specific beacon session:
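
Both of those searches are just filters over the parsed beacon_log events. A rough sketch follows, where the field names (operator, beacon_id, command) reflect the app's parsing and may differ in your build, and the values are placeholders.

Commands run by a specific operator across all servers:

index=cobalt sourcetype=beacon_log operator="OPERATOR_NAME" | table _time host beacon_id command

All commands run in a specific beacon session:

index=cobalt sourcetype=beacon_log beacon_id="SESSION_ID" | sort _time | table _time operator command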

Dashboards

ATT&CK Dashboard

Why ATT&CK?

Now that that's out of the way, we can go into some of the more useful stuff. ATT&CK has pretty much become the standard for classifying adversary actions, allowing organisations to better understand the TTPs of threat actors. When we need to provide evidence of which techniques were run that map back to ATT&CK TTPs, we can do that quickly in the CobaltSplunk dashboard, in real time. I know Cobalt Strike already has its own reporting features, but those require manually clicking through and exporting a report; this gives you the data in real time as new commands are executed. See below:
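
One simple way to drive a dashboard like this (not necessarily exactly how the app implements it) is a CSV lookup that maps Beacon commands to ATT&CK technique IDs, joined against the beacon logs; the lookup name and fields below are assumptions:

index=cobalt sourcetype=beacon_log
| lookup attack_mapping command OUTPUT technique_id technique_name
| stats count values(command) as commands by technique_id technique_name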

Map Overview Dashboard

Maps are always fun, though probably not the most informative view. However, I'd still prefer a map overview rather than going to IP location websites to try and figure out where a connection is coming from. Being able to do this at scale across all received logs and display it on a map is good value, and Splunk makes it easy, as we can see below:

The map above shows the connections received over HTTP/S by Cobalt Strike across the world, and where the concentration points are. For example, if someone has been brute-forcing my server, we will see a larger spike.

For example, you could track phishing campaign clicks by using the input field:

At some point I will work on incorporating Apache and Nginx logs, so if you choose to host your phishing campaigns elsewhere, you can still track it all in one central location. You can then construct queries such as "how many people clicked on the link across the globe, grouped by country", or you could even check for code execution if your payloads have trackers that fire before the implant triggers. A sketch of such a query follows below.
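
A query in that spirit might look like the following, where the tracker URI is a placeholder for whatever path your campaign actually serves:

index=cobalt sourcetype=weblog request="/track/*"
| iplocation ip
| stats dc(ip) as unique_clicks by Country
| sort - unique_clicks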

Many people like to use GoPhish for mass phishing campaigns; its dashboard is pretty useful, and so is its output. I'm hoping to incorporate all of that into Splunk so that you can easily track all of the obtained credentials per campaign.

Back on topic, I've also whipped up the same thing for more likely IR user agents such as curl and Wget. This way you can immediately see whether it's likely that your target's IR teams have spotted you, or whether it's just an automated scanner / sandbox:
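
Filtering for those user agents is a straightforward search; assuming the parsed field is called user_agent, something like:

index=cobalt sourcetype=weblog (user_agent="*curl*" OR user_agent="*Wget*")
| stats count values(request) as requests by ip user_agent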

Compromise Overview Dashboard

How many times have you been on an engagement and a client asks you on a call "we've seen activity on computer XLAP005211, is that you?" or "we spotted a phishing attack from <disgustingdomain.com>"? Your client expects you to have consolidated information across your team of maybe 5 operators on a long-term gig that's been running for 6 months. What if you could just search for it in Splunk?

I whipped up a quick dashboard that allows me to search through compromised assets by hostname, IP, or external IP. This currently only supports Cobalt Strike logs, but will be expanded to other logs in the future. Due to Cobalt Strike's discoverability these days, many operators on long gigs will issue many commands out of band; one solution is to monitor SOCKS proxy logs and the commands issued through them. I will go into this in a separate post later and centralise all of that logging to make it easier.
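
The search behind that dashboard is essentially a fielded match across the beacon logs; a sketch, where the field names come from the app's parsing and the values are placeholders (XLAP005211 being the hostname from the example above):

index=cobalt sourcetype=beacon_log (hostname="XLAP005211" OR internal_ip="10.1.2.3" OR external_ip="203.0.113.10")
| table _time hostname internal_ip external_ip operator command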

For now, let's have a look at the following dashboard:

GitHub

You can download the CobaltSplunk application here: https://github.com/vysecurity/CobaltSplunk

Conclusion

Splunk log aggregation and collection can help your team better manage what's happening across many assets and servers. If you have an operation that spans 50 different servers, this will help you answer questions quickly, especially for larger teams. If the point of contact calls you at 3am, your team needs to be able to answer questions quickly and precisely; this helps achieve that goal while also providing your team with centralised logging.

What's coming next?

  • Apache / Nginx logs

  • Bash history per server

    • Log and track use of tools such as proxychains to make it easier to discover further touched assets.

  • Further operational security

    • Incorporate VT hash checking and alerting

    • Alerting on SSH logins to servers

  • GoPhish logs (maybe)

    • More useful for mass phishing campaigns, not a priority.
