Hunting with Splunk
Splunk's creators describe it as a solution to aggregate, analyze and get answers from machine data. Splunk can be used for Application Management, Operations Management, Security & Compliance, etc.
When it comes to security, Splunk can be used as a log management solution but most importantly as an analytics-driven SIEM. Splunk can fortify investigations of dynamic, multi-step attacks with detailed visualizations and even enhance an organization's detection capabilities through User Behavior Analytics.
Splunk can ingest almost any data from almost any source, through both an agent-less and a forwarder approach.
Splunk Architecture Overview:
Splunk's architecture (at a high level) consists of the:
Forwarder component
Universal Forwarders collect data from remote sources and send them to one or more Splunk Indexers. Universal Forwarders are separate downloads that can be installed on any remote source, with little impact on network or host performance.
Heavy Forwarders also collect data from remote sources, but they are typically used for heavy data aggregation tasks, from sources like firewalls or data routing/filtering passing points. According to Splunk, unlike other forwarder types, heavy forwarders parse data before forwarding them and can route data based on criteria such as source or type of event. They can also index data locally while forwarding the data to another indexer. Heavy Forwarders are usually run as "data collection nodes" for API/scripted data access, and they are only compatible with Splunk Enterprise.
Note: HTTP Event Collectors (HECs) also exist to collect data directly from applications, at scale, via a token-based API that accepts JSON or raw payloads. Data is sent directly to the Indexer level.
Indexer Component The Indexer processes machine data, storing the results in indexes as events, enabling fast search and analysis. As the indexer indexes data, it creates a number of files organized in sets of directories by age. Each directory contains raw data (compressed) and indexes (points to the raw data).
Search Head Component The Search Head component allows users to use the Search language to search for indexed data. It distributes user search requests to the Indexers and consolidates the results as well as extracts field value pairs from the events to the user. Knowledge Objects on the Search Heads can be created to extract additional fields and transform the data without changing the underlying index data. It should be noted that Search Heads also provide tools to enhance the search experience such as reports, dashboards, and visualizations.
Splunk Apps and Technology Add-ons (TAs):
Splunk Apps are designed to address a wide variety of use cases and to extend the power of Splunk. Essentially, they are collections of files containing data inputs, UI elements, and/or knowledge objects. Splunk Apps also allow multiple workspaces for different use cases/user roles to co-exist on a single Splunk instance. Ready-made apps are available on Splunkbase (splunkbase.splunk.com).
Splunk Technology Add-ons abstract the collection methodology and typically include relevant field extractions (schema-on-the-fly). They also include relevant configuration files (props/transforms) and ancillary scripts/binaries.
You can think of a Splunk App as a complete solution, that typically uses one or more Technology Add-ons.
Splunk Users and Roles:
Splunk users are assigned roles which determine their capabilities and data access. Out of the box, there are three main roles:
admin: This role has the most capabilities assigned to it.
power: This role can edit all shared objects (saved searches, etc.) and alerts, tag events, and other similar tasks.
user: This role can create and edit its own saved searches, run searches, edit its own preferences, create and edit event types, and other similar tasks.
Splunk's Search & Reporting App:
You will spend most of your time inside Splunk's Search & Reporting App.
Data Summary can provide you with hosts, sources or sourcetypes on separate tabs.
Finally, this is what events will look like.
Splunk's Search Processing Language (SPL):
According to Splunk, SPL combines the best capabilities of SQL with the Unix pipeline syntax allowing you to:
Access all data in its original format
Optimize for time-series events
Use the same language for visualizations
SPL provides over 140 commands that allow you to search, correlate, analyze and visualize any data.
The diagram below represents a search, broken down into its syntax components.
Searches are made up of five basic components.
Search terms, where you specify what you are looking for. Search terms contain keywords, phrases, Booleans, etc.
Commands, where you specify how you want to manipulate the results. For example, create a chart, compute statistics, etc.
Functions, where you specify exactly how you want to chart, compute, or evaluate the results.
Arguments, in case there are variables you want to apply to a function.
Clauses, where you specify exactly how you want to group or rename the fields in the results.
As you write searches, you will notice that some parts of the search string are automatically colored. The color is based on the search syntax. Example:
Something else to consider while submitting searches is Splunk's search modes.
There are three search modes:
Fast Mode: Field discovery off for event searches. No event or field data for stats searches.
Smart Mode: Field discovery on for event searches. No event or field data for stats searches.
Verbose Mode: All event & field data.
It is recommended to start searching in Smart mode and then adjust from there.
We strongly suggest you spend time studying the Exploring Splunk (https://www.splunk.com/goto/book) e-book before proceeding to the lab's tasks. Especially Chapter 4, as that covers the most commonly used search commands.
Various search aspects are also nicely documented in the Splunk Search Manual.
Let's Begin Hunting
Once you are logged into Splunk's web management interface, click the Search & Reporting application that resides on the Apps column on your left.
In order to test if Splunk can successfully access the ingested/loaded data, first change the time range picker to All time and then, submit the following search.
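As a minimal sanity-check sketch (assuming the lab data lives in an index named botsv2, as in the searches later in this lab):

```spl
index=botsv2
| stats count
```

A non-zero count confirms that Splunk can access the ingested data.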
The first thing we should do is determine the available sourcetypes. Specifically, we should first determine the sourcetypes that are associated with 192.168.250.70.
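A sourcetype-discovery search along these lines should work (the index name botsv2 matches the later searches in this lab; the bare IP term avoids assuming any particular field name):

```spl
index=botsv2 192.168.250.70
| stats count by sourcetype
| sort - count
```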
To identify the average password length used in the attack, we can use the more complex query below.
eval lenpword=len(userpassword) <- Calculate a length for the userpassword string and store the value in lenpword
stats avg(lenpword) AS avgPword <- Calculate the average of all lenpword and rename it avgPword
eval avgPword=round(avgPword,0) <- Round the avgPword field to 0 decimal places and put it into the avgPword field
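Assembled, the fragments above form one pipeline. The base search below (the sourcetype and the userpassword filter) is an assumption; substitute whichever base search yields events carrying the userpassword field in your dataset:

```spl
index=botsv2 sourcetype=stream:http userpassword=*
| eval lenpword=len(userpassword)
| stats avg(lenpword) AS avgPword
| eval avgPword=round(avgPword,0)
```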
The timechart command's syntax is similar to that of stats.
We also receive a visualization by clicking on Visualization
List all process execution activity and all executed commands
All commands being executed can be listed through the Splunk search below.
If you want to list all commands being executed by a specific (usually abused) process, you can do so as follows.
If you want to identify, for example, the longest cmd.exe command that was executed (overly long commands are suspicious), you can do so as follows.
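One possible sketch, assuming Sysmon process-creation events (EventCode 1) are indexed under a sourcetype like XmlWinEventLog:Microsoft-Windows-Sysmon/Operational (the sourcetype name is an assumption and may differ in your environment):

```spl
index=botsv2 sourcetype="XmlWinEventLog:Microsoft-Windows-Sysmon/Operational" EventCode=1 Image="*\\cmd.exe"
| eval cmdlen=len(CommandLine)
| sort - cmdlen
| head 1
| table host, cmdlen, CommandLine
```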
--
All hunts of this lab build mostly on hypotheses derived from the MITRE ATT&CK framework.
Specifically, you will hunt for:
When looking at https://github.com/EmpireProject/Empire/blob/master/setup/cert.sh, we can see the default key that is created uses C=US as the subject of the certificate.
(for advanced hunting): You may have noticed that the SPL search above took some time to complete. We could have accelerated our search by using the tstats command.
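To illustrate why tstats is faster: it operates on indexed fields (or accelerated datamodels) instead of scanning raw events. A minimal sketch over indexed fields only:

```spl
| tstats count WHERE index=botsv2 BY sourcetype
```

Compare its runtime against the equivalent `index=botsv2 | stats count by sourcetype`.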
Goals of a Datamodel:
Make it easy to share/reuse domain knowledge [Admins/power users build data models]
Acceleration of large datasets to make searching more efficient
Provide an interface for 'non-technical users' to interact with data via the Pivot UI [note: non-technical == analyst with less "domain" knowledge]
To list all available datamodels in your environment, execute the search below.
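The datamodel command with no arguments returns all datamodels defined in the environment:

```spl
| datamodel
```

Passing a datamodel name as an argument (e.g. `| datamodel Network_Resolution`) returns that model's structure.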
Let's see what data we have for the common destination we noticed, 45.77.65.211.
We notice that we have web and Sysmon logs containing that indicator/IP. This means that this IP isn't being blocked at the network level and that it has reached the system level.
We obtain a similar view of the situation through Suricata logs
Let's check Suricata's inbound alerts, as follows.
Once the search is completed click on 32 more fields and select the alert.signature field.
Let's focus on another available sourcetype stream:http, to identify communication flows with the IP indicator (45.77.65.211).
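A sketch of such a flow search (src_ip and dest_ip are the field names used by the stream:dns searches later in this lab; their presence in stream:http is an assumption):

```spl
index=botsv2 sourcetype=stream:http 45.77.65.211
| stats count by src_ip, dest_ip
| sort - count
```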
Let's investigate the identified IPs above one by one.
172.31.4.249 The indicator IP (45.77.65.211) has a tremendous amount of events generated with a destination of 172.31.4.249. Note that 172.31.4.249 is a Linux server belonging to our organization.
Let's drill into the events where the source address is our indicator and the destination is our web server 172.31.4.249.
By searching online for w3af.org, we see that it is an open source web application security scanner. It looks like the IP indicator has also been scanning a web server of ours in addition to being a destination for some of our workstations/servers. Note that our web server that was scanned didn't appear in the outbound communications.
71.39.18.125
71.39.18.125 is the IP address of the external interface of our firewall. We may want to investigate this further later. Let's move on to the last IP we identified, 10.0.2.109.
10.0.2.109
Nothing that can advance our hypothesis about PowerShell Empire being executed comes up. We may come back to investigate this further later.
It is about time we move our attention to host-centric data through Microsoft Sysmon logs, as follows.
We see a large number of events coming from a single server, Venus (10.0.1.101), communicating with our indicator.
Let's use a wildcard on dest to ensure we get all of the values we saw in the previous search.
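A sketch of this wildcarded search, assuming Sysmon network-connection events (EventCode 3) under a sourcetype like XmlWinEventLog:Microsoft-Windows-Sysmon/Operational and a dest field that can carry hostname or port variants alongside the IP (both names are assumptions):

```spl
index=botsv2 sourcetype="XmlWinEventLog:Microsoft-Windows-Sysmon/Operational" EventCode=3 dest=45.77.65.211*
| stats count by host, Image, dest
```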
Let's analyze these Network Connection Sysmon events related to powershell.exe.
The users executing these events are either the SYSTEM account or a Frothly domain account called service3.
By using the timechart command we can see that this network activity took place between 8/23 and 8/26. We are focusing on the service3 user.
Let's dig deeper into what has been executed through the service3 account.
Not only can we see malicious PowerShell commands being executed, but we also notice whoami.exe and ftp.exe being executed by PowerShell (which is suspicious). These three suspicious processes appear on both venus.frothly.local and wrk-klagerf.frothly.local.
How do we know that these PowerShell commands are malicious, you may ask? Let's Base64-decode them.
The output also contains /admin/get.php. Asking for this file is characteristic of PowerShell Empire! Other strings may also exist that can indicate the presence of PowerShell Empire in our environment.
To go the extra mile, try to identify similar PowerShell commands on other workstations/servers. We focused on the service3 account only; maybe another account executed similar commands.
or
We notice two internal systems speaking to the same external destination address, 160.153.91.7.
If we look at Palo Alto data, we notice we have two sourcetypes, pan:traffic and pan:threat. The traffic sourcetype has by far the greater volume of traffic, but interestingly, we see the vast majority of traffic heading to the external interface of our firewall. So additional inspection of this data to learn more is warranted.
Let's step away from the network level for a bit and focus on the endpoint level. First, Sysmon events related to FTP by host.
Then, Sysmon events related to FTP by CommandLine.
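The pair of searches above could be sketched as follows (the Sysmon sourcetype name is an assumption; the bare term ftp matches it anywhere in the event). By host:

```spl
index=botsv2 sourcetype="XmlWinEventLog:Microsoft-Windows-Sysmon/Operational" ftp
| stats count by host
```

and by CommandLine, narrowed to process-creation events:

```spl
index=botsv2 sourcetype="XmlWinEventLog:Microsoft-Windows-Sysmon/Operational" EventCode=1 ftp
| stats count by CommandLine
| sort - count
```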
We notice some curious-looking ftp activity! We will get back to this activity and investigate further.
Back to the network now, let's try to see inside the FTP traffic.
The interesting fields to browse are:
flow_id
reply_content
method
method_parameter
filename
So, let's refine our search as follows.
Looking at the results, you will notice the exact same sequence of actions taking place on both 10.0.2.107 and 10.0.2.109. We can see that we have three executables, a DLL, an MSI installer for Python, a Python script, and a file with an .hwp extension. If we aren't sure what an .hwp file is, we can Google the extension and find out that it belongs to a Korean word processing application.
Also make sure you give reply_content a look for interesting information.
It is worth looking into each separate transmission. You can do that through the query below, clicking into each green bar of the histogram and then pressing +Zoom to Selection.
Finally, we should focus our attention on the DLL files we spotted. To identify the sourcetypes that include information about these DLLs execute the search below.
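A generic sketch of such a pivot, where <dll_filename> is a placeholder for the actual DLL name observed in the FTP transfers above:

```spl
index=botsv2 "<dll_filename>"
| stats count by sourcetype
```

Searching on the bare filename term avoids assuming which field each sourcetype stores it in.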
Let's also not forget to search for the other files that were downloaded.
To look for events related to these files on
--
With the IP address 160.153.91.7, we can leverage this indicator to determine if any exfiltration was being attempted using DNS. stream:dns provides us with all DNS queries and responses occurring within Frothly (our domain).
Let's start investigating 10.0.2.107, as follows.
If we click the name field we notice a curious-looking domain.
Let's investigate further as follows.
Let's look into the queries (since we are hunting for DNS exfiltration).
Let's leverage URL Toolbox to refine our search. We will need to reference the mozilla list (part of URL Toolbox) and then run the ut_parse_extended macro against the query field. This will chop the query into finer pieces to analyze. We can then calculate the Shannon entropy score of the extracted subdomain and table the output.
index=botsv2 sourcetype=stream:dns hildegardsfarm.com "query{}"="*" query *.hildegardsfarm.com | eval query{}=mvdedup(query) | eval list="mozilla" | ut_parse_extended(query{},list) | ut_shannon(ut_subdomain) | table src_ip dest_ip query{} ut_subdomain ut_shannon
index=botsv2 sourcetype=stream:dns hildegardsfarm.com "query{}"="*" query *.hildegardsfarm.com | eval query{}=mvdedup(query) | eval list="mozilla" | ut_parse_extended(query{},list)
| ut_shannon(ut_subdomain)
| eval sublen = length(ut_subdomain) | table ut_domain ut_subdomain ut_shannon sublen | stats count avg(ut_shannon) as avg_entropy avg(sublen) as avg_sublen stdev(sublen) as stdev_sublen by ut_domain
You should now have a better idea of what DNS exfiltration looks like and how it can be proactively detected in your environment.
We already know that our 10.0.2.107 endpoint communicated with the 45.77.65.211 IP indicator. We also know that the communication was based on an SSL certificate with a specific configuration. Let's extract the certificate's SHA256 hash and hunt for it in the wild, as follows.
If we submit this hash to Censys.io, we receive no results.
However, let's submit the IP indicator to virustotal.com and see what the results are. The initial results show as clean, but let's look at the relations. Under Communicating Files, you can see three malicious files associated with the IP address. By clicking on one or more of these files, you now have an opportunity to expand your indicators-of-compromise list and look at other associated IP addresses and domains.
Based on the guidance provided in the JPCERT doc, we can craft a search using Sysmon, and we find 11 events, all associated with wrk-klagerf. Recall that we need to escape the "." characters.
We didn't have any luck with the Network Connection event in Sysmon.
The below is what we should see in Remote Execution via WMI at the destination side, but we don't have all the needed data in our Sysmon logs.
Let's focus on Teymur Kheirkhabarov's advice on hunting for remote execution via WMI (systems with Network 4624 Events followed by Sysmon Process Creations).
To better see all the events that occurred and all processes that were running during the identified sessions (LogonId) above on the venus and wrk-klagerf hosts, use the searches below.
You should now have a better idea of how to hunt for remote execution through WMI.
--
For this task you will have to identify the events/artifacts/traces related to various brute force attacks inside an Active Directory environment. Specifically, hunt for:
S01D01 - Logon attempts using a non-existing account (Kerberos)
Look for events 4768 auditing Kerberos authentication ticket (TGT) requests and filter them on the Status field equal to code 0x6. This value means that the submitted username does not exist [3].
S01D02 - Logon attempts using a non-existing account (NTLM)
Look for events 4776 auditing NTLM authentication and filter them on the Status field equal to code 0xC0000064. This value means that the submitted username does not exist [4].
S01D03 - Excessive failed password attempts from one source (Kerberos)
Filter events 4771, which are logged with several status codes [3]. Among them is code 0x18, which means invalid pre-authentication information, usually a wrong password. The rest of the search is very similar to the previous searches; see S01D01 or S01D02 for reference.
S01D04 - Excessive failed password attempts from one source (NTLM)
Filter events 4776, which audit NTLM authentication. Status code 0xC000006A means a logon with a misspelled or wrong password [4]. The rest of the search is very similar to the previous searches; see S01D01, S01D02, or S01D03 for reference.
S01D05 - Excessive failed password attempts towards one account
Filter both Kerberos and NTLM authentication events for status codes meaning a bad password. That is status code 0xC000006A for event 4776 and status code 0x18 for event 4771.
S01D06 - Multiple locked accounts from one source
Look into events 4740 - A user account was locked out [5].
S01D08 - Logon attempts towards disabled accounts (Kerberos)
Focus on events 4768 with status code 0x12, which means that the account is disabled [3].
S01D01 - Logon attempts using a non-existing account (Kerberos)
This search looks for events 4768 auditing Kerberos authentication ticket (TGT) requests and filters them on the Status field equal to code 0x6. This value means that the submitted username does not exist [3]. The transaction command is used to group subsequent attempts coming from the same IP address with a maximal pause of 5 minutes between them. Only transactions containing more than five events are preserved. These are the conditions that define parameters of a brute force attack. If there is an IP address of localhost in the event, the computer name is taken as a source. The number of different usernames is calculated with the mvcount function of the eval command. The rule triggers only in the case there are attempts towards more than two different accounts.
How to Implement: Logging on domain controllers needs to be set up to log TGT requests. It can be done by configuring Audit Kerberos Authentication Service auditing setting [3]. The constants in the conditions that appear in the search, such as eventcount, maxpause, and accounts, need to reflect Account Lockout Policy settings of the domain environment.
Known False Positives: Common false positives are attempts with misspelled usernames. These can be limited by adjusting the search constants precisely for the particular environment.
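A hedged SPL sketch of S01D01 (the index name and exact XML field names are assumptions for your environment; tune the eventcount and accounts constants to your Account Lockout Policy):

```spl
index=wineventlog EventCode=4768 Status=0x6
| transaction IpAddress maxpause=5m
| where eventcount > 5
| eval accounts=mvcount(mvdedup(TargetUserName))
| where accounts > 2
| table _time, IpAddress, accounts, eventcount
```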
S01D02 - Logon attempts using a non-existing account (NTLM)
This search looks for events 4776 auditing NTLM authentication and filters them on the Status field equal to code 0xC0000064. This value means that the submitted username does not exist [4]. The transaction command is used to group subsequent attempts coming from the same source workstation with a maximal pause of 5 minutes between them. Only transactions containing more than five events are preserved. These are the conditions that define parameters of a brute force attack. The number of different usernames is calculated with mvcount function of the eval command. The rule triggers only in the case there are attempts towards more than two different accounts.
How to Implement: Credential Validation auditing policy needs to be enabled on the monitored systems. The constants in the conditions that appear in the search, such as eventcount, maxpause, and accounts, need to reflect Account Lockout Policy settings of the domain environment.
Known False Positives: Common false positives are attempts with misspelled usernames. These can be limited by adjusting the search constants precisely for the particular environment.
S01D03 - Excessive failed password attempts from one source (Kerberos)
This search detects password brute force attempts coming from one source. Those may be targeting one or multiple accounts, which means this search is also able to identify a password spraying activity. It searches for attempts where Kerberos authentication was used. The rule can be useful even if the lockout policy is applied in the domain because it can detect attempts from adversaries that are aware of the policy. The search filters events 4771 that are logged for several status codes [3]. Among them, there is the code 0x18 which means invalid pre-authentication information, usually a wrong password.
How to Implement: Logging on domain controllers needs to be set up to log TGT requests. It can be done by configuring Audit Kerberos Authentication Service auditing setting [3]. The constants in the conditions that appear in the search, such as eventcount, maxpause, and accounts, need to reflect Account Lockout Policy settings of the domain environment.
Known False Positives: If there are lots of false positive detections, optionally a condition where accounts > 1 can be added to the search. However, the condition should be added only in the case that the Account Lockout Policy is set up in the environment. If there are multiple IP addresses in use for the same host, the rule can trigger more times for events which should be grouped together in a single result. Users using multiple accounts on one computer may trigger this rule by accidentally typing the wrong password. This behavior can be limited by setting the search constants precisely for the needs of a particular environment.
S01D04 - Excessive failed password attempts from one source (NTLM)
This search detects password brute force attempts coming from one source. Those may be targeting one or multiple accounts, which means this search is also able to identify password spraying activity. It searches for attempts where NTLM authentication was used. The rule can be useful even if the lockout policy is applied in the domain because it can detect attempts from adversaries that are aware of the policy. The search filters events 4776, which audit NTLM authentication. Status code 0xC000006A means a logon with a misspelled or wrong password [4].
How to Implement: Credential Validation auditing policy needs to be enabled on the monitored systems. The constants in the conditions that appear in the search, such as eventcount, maxpause, and accounts, need to reflect Account Lockout Policy settings of the domain environment.
Known False Positives: If there are lots of false positive detections, optionally a condition where accounts > 1 can be added to the search. However, the condition should be added only in the case that the Account Lockout Policy is set up in the environment. If there are multiple IP addresses in use for the same host, the rule can trigger more times for events which should be grouped together in a single result. Users using multiple accounts on one computer may trigger this rule by accidentally typing the wrong password. This behavior can be limited by setting the search constants precisely for the needs of a particular environment.
S01D05 - Excessive failed password attempts towards one account
This search detects password brute force attempts towards one target account. Unlike previous searches, this rule is set up to capture trials coming from several sources. Both Kerberos and NTLM authentication events are filtered for status code meaning bad password. That is the status code 0xC000006A for event 4776 and the status code 0x18 for event 4771. Transactions are made on TargetUserName field with the maxpause parameter set to 5 minutes between single events. Only transactions that contain more than five events in total and events from more than two different sources are preserved as results. These constants should be changed to values that fit the particular environment.
How to Implement: Both Credential Validation and Kerberos Authentication Service auditing policies need to be enabled on the monitored systems. The constants in the conditions that appear in the search, such as eventcount, maxpause, and accounts, need to reflect Account Lockout Policy settings of the domain environment.
Known False Positives: If some hosts use multiple IP addresses, the number of source hosts may be evaluated wrongly. If the condition for sources is sources > 1, the rule can trigger false-positively for the same source host, showing as two events. This happens because the event 4776 logs hostname, while event 4771 logs IP addresses only. It can be fixed by using lookup. A single user logging on multiple computers in a short time may trigger this rule by accidentally typing the wrong password. This behavior can be limited by setting the search constants precisely.
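A hedged sketch of S01D05 (index and field names are assumptions; as noted above, event 4776 logs a workstation name while 4771 logs an IP address, so the coalesced src field mixes the two unless a lookup normalizes them):

```spl
index=wineventlog ((EventCode=4776 Status=0xC000006A) OR (EventCode=4771 Status=0x18))
| eval src=coalesce(Workstation, IpAddress)
| transaction TargetUserName maxpause=5m
| where eventcount > 5
| eval sources=mvcount(mvdedup(src))
| where sources > 2
| table _time, TargetUserName, sources, eventcount
```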
S01D06 - Multiple locked accounts from one source
This search detects several account lockouts made by a single source in a short time. This anomaly can be caused by adversaries trying to guess passwords on multiple accounts in case they are not aware of the Account Lockout Policy in the domain. The search looks into events 4740 - A user account was locked out [5]. The transaction command is used with the TargetDomainName field. This is the field containing the name of the computer account from which the logon attempt was received. This applies to the XML event format; the field is called Caller Computer Name in the standard event format. The condition triggers when there is more than one account locked from the same source computer.
How to Implement: User Account Management auditing setting enables logging for account lockout events (4740). The constants in the conditions that appear in the search, such as maxpause, and accounts, need to reflect Account Lockout Policy settings of the domain environment. Replacing the condition accounts > 1 by eventcount > 1 returns also repetitive lockout attempts for the same account. It can be useful in the case that accounts are being unlocked automatically after some time.
Known False Positives: A user may lock his own multiple accounts by accidentally typing a wrong password. If some computers are shared among several users, these users can lock their accounts in a short time from a single computer.
S01D08 - Logon attempts towards disabled accounts (Kerberos)
This search identifies repetitive logon attempts with the use of disabled accounts. This rule detects attempts where Kerberos authentication was used. An attacker may want to exploit forgotten accounts that are no longer used in the environment, or eventually also default Windows built-in accounts in case they are disabled. The search focuses on events 4768 with status code 0x12, which means that the account is disabled [3]. Transactions are made on the IpAddress field, and only those containing more than five matches are kept in the results. This condition detects signs of brute force activity. The number of target accounts is evaluated by using the mvcount function on the TargetUserName field. Only transactions with more than two accounts are displayed.
How to Implement: Logging on domain controllers needs to be configured to audit Kerberos Authentication Service. Note that event 4768 with status code 0x12 means a disabled account, whereas the event 4771 with status code 0x12 means that the account is locked.
Known False Positives: Some logon attempts can occur shortly after the account is disabled by an administrator.
S02D01 - Possible Kerberoasting activity
This search finds sources that requested service tickets with weak cipher suites. These encryption types should no longer be used by modern operating systems in the domain; therefore, they are likely signs of possible Kerberoasting activity. The search looks at events 4769 auditing service ticket requests. It filters for any ticket requests with encryption type constants equal to the values of vulnerable cipher suites. A list of all encryption types can be found at [8]. All requests for tickets with these encryption types are displayed.
How to Implement: Domain controllers can log Kerberos TGS service ticket requests (event 4769) by configuring Audit Kerberos Service Ticket Operations [8]. As the amount of these events can be quite high, filtering for particular encryption types may also be applied to log forwarding.
Known False Positives: Using older operating systems or services that do not support AES encryption types may create false positives. In such a case, the search needs to be modified to allow the particular encryption type to the specific service.
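A hedged sketch of S02D01 (index name is an assumption; the listed TicketEncryptionType constants cover the DES and RC4 suites, which I take to be the "vulnerable cipher suites" referenced above, per the encryption type list in [8]):

```spl
index=wineventlog EventCode=4769 (TicketEncryptionType=0x1 OR TicketEncryptionType=0x3 OR TicketEncryptionType=0x17 OR TicketEncryptionType=0x18)
| table _time, IpAddress, TargetUserName, ServiceName, TicketEncryptionType
```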
S02D02 - Excessive service ticket requests from one source
Requests for several different service names (not related to each other) within a short time period from a single account are suspicious, even more so if weak encryption was used in the service tickets. This search may help to reveal such activities. Service ticket requests for the krbtgt service and computer account service names (those ending with $) are filtered out of the results. The search focuses on service accounts that were created for specific services. Subsequent events are grouped on the IpAddress field by the transaction command. The number of services in each transaction is calculated to display only results where the number is higher than the one specified in the condition.
How to Implement: Domain controllers can log Kerberos TGS service ticket requests (event 4769) by configuring Audit Kerberos Service Ticket Operations [8]. The search constant indicating the number of services needs to be tuned to the needs of a particular domain environment.
Known False Positives: The assessment of whether services are related or not needs to be added to the search if possible. Otherwise it needs to be done manually by a security analyst. Narrowing the search for particular ticket encryption types may prevent false positives. This search may also be combined with detection search S02D01 which could help to prevent false positive detections.
S02D03 - Suspicious external service ticket requests
The above is due to the Source being logged improperly. There is no such activity in this lab's dataset.
This search tracks service requests by examining the IP address and port number. Unusual values indicate the use of outbound connection for the service request, which is suspicious. The search examines the IpPort and IpAddress fields in events 4769. Port values under 1024 and any non-private IP addresses are those of interest. The search displays results whenever such values appear in the request, together with details about the requestor.
How to Implement: Domain controllers can log Kerberos TGS service ticket requests (event 4769) by configuring Audit Kerberos Service Ticket Operations [8]. If there is a legitimate service using a port below 1024 or an external IP address, it needs to be whitelisted in the search.
Known False Positives: None identified if whitelisting is done correctly.
S02D04 - Detecting Kerberoasting with a honeypot
This search uses a honeypot-based detection method for Kerberoasting. A honeypot is a fake service account that is never actually used in the environment but is set up to look like a legitimate service account with high privileges assigned. Service ticket requests for this account will only ever be made by an adversary, and this search will reveal them. The search filters all service request audit events (4769) for a ServiceName equal to the honeypot service account (Honeypot01), directly producing results that detect malicious TGS requests.
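Because the honeypot account filters out all legitimate traffic by design, the search itself can stay very simple (index name assumed):

```
index=wineventlog EventCode=4769 ServiceName="Honeypot01"
| table _time, IpAddress, TargetUserName, ServiceName, TicketEncryptionType
```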
How to Implement: Domain controllers can log Kerberos TGS service ticket requests (event 4769) by configuring Audit Kerberos Service Ticket Operations [8]. A fake account (named Honeypot01 in the search) needs to be created, and a fake SPN must be registered for it. Preferably, both the account name and the SPN should look as legitimate as possible. The account can also be made part of a fake administrator group (set the AdminCount attribute to 1) to increase its attractiveness to an attacker. More details about the implementation can be found at [9].
Known False Positives: There should not be any false positives, as there is no reason for a legitimate user to request a service ticket for the honeypot service.
S02D05 - Detecting Kerberoasting via PowerShell
This search is provided for completeness' sake. It won't produce any results.
This search detects attempts to manipulate service accounts via PowerShell, a tool commonly used by attackers. It may catch SPN scanning activity or the successful acquisition of a service ticket hash. Computers that appear in the results with an extensive number of events are worth investigating. The search is built on events from the PowerShell Operational log. A transaction is created for all subsequent PowerShell events coming from a single workstation. A new field named raw is created and assigned the entire raw event, in preparation for the full-text search used in the next step. The search then looks for the occurrence of any service account in the raw data of the PowerShell events; the list of service accounts is provided by a lookup file, service_accounts.csv. Only transactions containing more suspicious events than the specified threshold are displayed in the results.
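One possible shape for such a search is sketched below. The lookup column name (account) and the threshold are assumptions; the subsearch's rename-to-query trick substitutes the lookup values as OR'ed full-text search terms against the raw events:

```
index=wineventlog source="*Microsoft-Windows-PowerShell/Operational*"
    [| inputlookup service_accounts.csv | rename account AS query | fields query]
| eval raw=_raw
| transaction host maxspan=15m
| where eventcount > 5
| table _time, host, eventcount, raw
```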
After researching the provided technique and some of the references, including the proposed Sigma detection rule, it appears that WMI persistence can be discovered by looking at the WMI events that Sysmon captures in Event IDs 19, 20, and 21. With that in mind, we can search for those events and, if any occurred, display concrete information about them, such as which file is to be started.
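A sketch of such a search (index name and the exact Sysmon WmiEvent field names are assumptions; fields not present on a given event ID will simply be blank):

```
index=sysmon source="*Sysmon/Operational*" EventCode IN (19, 20, 21)
| table _time, host, EventCode, User, Operation, Name, Type, Destination, Consumer, Filter
```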
It appears that a new activity, named "Updater", was observed; it triggers on any user's logon and executes the command "cmd.exe /c C:/programdata/updater.bat".
a. Hunting for logon scripts through process creation
After researching the provided resources, we discover that logon scripts are started by "userinit.exe". A detection rule is also provided in Sigma.
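Following that logic, the hunt looks for uncommon children of userinit.exe; explorer.exe is its normal child and is excluded (index name assumed):

```
index=sysmon EventCode=1 ParentImage="*\\userinit.exe" NOT Image="*\\explorer.exe"
| stats count by host, Image, CommandLine
```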
A logon script was executed on one host; the running image and its command line argument reveal that a batch script was executed in %ProgramData%.
b. On the second hunt, utilizing the previously mentioned Sigma rule, we observe that Sysmon Event IDs 11 (file creation) and 12, 13, and 14 (registry activity) can be monitored for any captured activity targeting the key UserInitMprLogonScript.
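A sketch covering both the file and registry variants (registry events populate TargetObject, file creation populates TargetFilename):

```
index=sysmon EventCode IN (11, 12, 13, 14)
    (TargetObject="*UserInitMprLogonScript*" OR TargetFilename="*UserInitMprLogonScript*")
| table _time, host, EventCode, Image, TargetObject, TargetFilename, Details
```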
After reviewing the MITRE documentation and the following Sigma detection rule, we'll base our hunt on any started process that has the description "Windows PowerShell" and is not powershell.exe or powershell_ise.exe.
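In SPL terms (index name assumed; Description is the Sysmon Event ID 1 field taken from the image's version resource):

```
index=sysmon EventCode=1 Description="Windows PowerShell"
    NOT (Image="*\\powershell.exe" OR Image="*\\powershell_ise.exe")
| stats count by host, Image, CommandLine
```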
A similar search also provides a high-level overview of the environment: listing all processes with the description "Windows PowerShell" shows the general usage of different PowerShell versions across the organization.
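For the overview, grouping on the binary's FileVersion field gives a quick picture of which PowerShell builds are in use (index name assumed):

```
index=sysmon EventCode=1 Description="Windows PowerShell"
| stats count by Image, FileVersion
| sort - count
```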
As the name suggests, PowerShell Empire uses PowerShell on victim machines to run malicious activity. Since we have PowerShell logs available to us (Script Block logging), we can use them to detect malicious usage. Tom Ueltschi (a renowned security researcher) proposed a detection rule that looks for any of the following 3 strings:
$psversiontable.psversion.major
system.management.automation.utils
system.management.automation.amsiutils
Then, it performs simple deobfuscation on the captured command and looks for the occurrence of any of the following 5 strings:
EnableScriptBlockLogging
Enablescriptblockinvocationlogging
cachedgrouppolicysettings
ServerCertificateValidationCallback
expect100continue
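Put together, this two-stage logic might look something like the following in SPL. The index, sourcetype, and field names are assumptions; the replace() call strips common obfuscation characters (backticks, quotes, plus signs, whitespace) before the second-stage match:

```
index=wineventlog source="*Microsoft-Windows-PowerShell/Operational*" EventCode=4104
    ("$psversiontable.psversion.major" OR "system.management.automation.utils"
     OR "system.management.automation.amsiutils")
| eval deobf=replace(lower(ScriptBlockText), "[`'\"+\s]", "")
| where like(deobf, "%enablescriptblocklogging%")
    OR like(deobf, "%enablescriptblockinvocationlogging%")
    OR like(deobf, "%cachedgrouppolicysettings%")
    OR like(deobf, "%servercertificatevalidationcallback%")
    OR like(deobf, "%expect100continue%")
| table _time, host, ScriptBlockText
```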
Let's look closer at the first event:
As described in the course materials, Unmanaged PowerShell can be detected by looking at the Host Application when PowerShell starts and filtering out the "known goods" such as PowerShell.exe.
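Classic PowerShell engine-lifecycle events (Event ID 400 in the Windows PowerShell log) carry a HostApplication field, so one possible sketch, assuming that log is indexed:

```
index=wineventlog source="WinEventLog:Windows PowerShell" EventCode=400
| where NOT like(lower(HostApplication), "%powershell.exe%")
| stats count by host, HostApplication
```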
For this part, we'll use Sysmon Event ID 1, looking for command line arguments that match those of an encoded command. The parameter we are looking for is "-encodedcommand", but the bare minimum PowerShell needs in its arguments to treat the command as encoded is simply "-e". So, let's filter for Event ID 1, look for all PowerShell processes (powershell.exe, or "Windows PowerShell" in the description), and a parameter that contains "-e" (although we will tune this down slightly to avoid potential false positives). Initially, we'll list information related to the parent processes observed to be associated with this activity --
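A starting point for this hunt (index name assumed; note that "* -e*" will also match benign switches such as -ExecutionPolicy, which is exactly the tuning mentioned above):

```
index=sysmon EventCode=1
    (Image="*\\powershell.exe" OR Description="Windows PowerShell")
    CommandLine="* -e*"
| stats count by ParentImage, Image, CommandLine
```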
Immediately, we'll note the entry executing "c:\ProgramData\UserInitMprLogonScript.bat". The other entries appear suspicious for many reasons, but for now, we'll focus on reviewing the actual command line that was flagged.
After researching the provided technique and some of the references, the shares we are interested in are C$, ADMIN$, and IPC$. Common tools that abuse these shares are some of PsExec's implementations, such as the one in Cobalt Strike. A distinctive characteristic is the connection back to the ADMIN$ share on localhost at 127.0.0.1 to execute a binary file which runs rundll32.exe. Taking that into account, we can build our search around this localhost artifact.
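One way to spot that artifact is Sysmon Event ID 3 network connections to 127.0.0.1 on port 445, a pattern normal SMB clients rarely produce (index name assumed; results can be correlated with rundll32.exe executions that have no command line arguments):

```
index=sysmon EventCode=3 DestinationIp="127.0.0.1" DestinationPort=445
| table _time, host, Image, User, SourceIp, SourcePort
```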
To identify downloaded files, we'll utilize the fact that Windows sets the alternate data stream "Zone.Identifier" to recognize web content and alert the user about potentially dangerous files. To detect this, we'll use Sysmon Event ID 15 -- FileCreateStreamHash, specifically looking for files whose names contain ".doc".
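In SPL, this becomes (index name assumed):

```
index=sysmon EventCode=15 TargetFilename="*.doc*"
| table _time, host, Image, TargetFilename, Hash
```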
Hunt for malicious Word document
After reviewing the MITRE documentation and linking it to the title of the task, we must be hunting for a malicious macro. Our query will simply list all processes spawned by winword.exe.
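The corresponding search is straightforward (index name assumed):

```
index=sysmon EventCode=1 ParentImage="*\\winword.exe"
| stats count by host, Image, CommandLine
```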
After reviewing the MITRE documentation, our hunt will focus on Sysmon Event ID 13 and on activity associated with the registry path "\Windows\CurrentVersion\Run".
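A sketch of that hunt, sorted so that rare entries surface first (index name assumed):

```
index=sysmon EventCode=13 TargetObject="*\\Windows\\CurrentVersion\\Run*"
| stats count by Image, TargetObject, Details
| sort count
```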
Immediately an entry stands out, which has occurred only once and executes a file in "ProgramData".
After reviewing the MITRE documentation, the hunt we are performing focuses on processes started from the Startup location.
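One way to express this, matching the Startup folder path in either the image or the command line (index name assumed):

```
index=sysmon EventCode=1
    (Image="*\\Start Menu\\Programs\\StartUp\\*" OR CommandLine="*\\Start Menu\\Programs\\StartUp\\*")
| stats count by host, Image, CommandLine
```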
We discover that a batch script has been executed multiple times from that location.
After reviewing the provided resource, we discover a long list of binary files that are commonly used to perform internal reconnaissance and gather information. Our hunt will focus on whether 2 or more of them were executed within a time frame of 15 minutes.
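A sketch with a small, assumed subset of the recon binary list; bucketing into 15-minute windows and counting distinct tools per host implements the threshold:

```
index=sysmon EventCode=1
    (Image="*\\whoami.exe" OR Image="*\\ipconfig.exe" OR Image="*\\net.exe"
     OR Image="*\\netstat.exe" OR Image="*\\systeminfo.exe" OR Image="*\\nltest.exe")
| bucket _time span=15m
| stats dc(Image) AS tool_count values(Image) AS tools by _time, host
| where tool_count >= 2
```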
After reviewing the provided resource and doing additional research, we discover that VBS scripts are executed by cscript.exe or wscript.exe. Using that with Sysmon Event ID 1, we'll search whether those binaries have executed and, if so, which scripts they executed.
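The executed script will appear in the command line (index name assumed):

```
index=sysmon EventCode=1 (Image="*\\cscript.exe" OR Image="*\\wscript.exe")
| table _time, host, Image, CommandLine, ParentImage
```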
---
To identify a certain type of network traffic, we'll utilize Zeek's logs. The ones we are interested in for this task have a sourcetype of "zeek_files". Zeek logs provide hashes for transferred files, so we should take this into account during our hunt as well.
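A simple tabular view over Zeek's files.log fields, including the hashes (field names follow Zeek's files.log schema):

```
sourcetype=zeek_files
| table _time, fuid, tx_hosts, rx_hosts, source, mime_type, filename, md5, sha1
```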
We can further enrich this by looking at Zeek logs with a sourcetype of "zeek_smb_files", and specifically those with an action of "SMB::FILE_OPEN".
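For example (field names follow Zeek's smb_files.log schema):

```
sourcetype=zeek_smb_files action="SMB::FILE_OPEN"
| table _time, id.orig_h, id.resp_h, action, path, name, size
```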
Now we can see that those files were accessed remotely. From this, we can suspect that 192.168.1.32 is the victim, to which 192.168.1.34 connects on port 445.
The randomly generated, 6-character share filenames are one of the distinguishing artefacts linked to CrackMapExec v4.0 and v5.0.
For this task, we'll use Zeek logs again, specifically those with the sourcetype "zeek_ntlm". We will simply list all logs in a defined table output format with the query:
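A sketch of such a table (field names follow Zeek's ntlm.log schema):

```
sourcetype=zeek_ntlm
| table _time, id.orig_h, id.resp_h, username, hostname, domainname, success
```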
We can see that 192.168.1.34 has connected successfully to 192.168.1.32 with the account "Administrator" (NTLM authentication, which is often generated by connecting remotely over SMB with a plain-text password or by performing pass-the-hash).
For this task, we'll look at Zeek logs again, specifically those with the sourcetype "zeek_smb_mapping". We will simply list all logs in a defined table output format with the query:
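For example (field names follow Zeek's smb_mapping.log schema):

```
sourcetype=zeek_smb_mapping
| table _time, id.orig_h, id.resp_h, path, share_type
```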
It appears that 192.168.1.34 has created multiple connections to the IPC$ share of 192.168.1.32. This is expected behavior when executing commands over SMB on a remote machine.
For this task, we'll utilize Sysmon logs, checking whether a process was started with a suspicious command line argument containing "powershell", which Empire agents generate by default. We will use the following query to search for "powershell":
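A sketch of that query (index name assumed; grouping by parent process helps spot unusual launchers):

```
index=sysmon EventCode=1 CommandLine="*powershell*"
| stats count by host, ParentImage, CommandLine
```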
We could also look into PowerShell Script Block logs with Event ID 4104, which display captured PowerShell activity. A simple query looking for events that contain "Warning" already provides a wealth of potentially malicious obfuscated commands:
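For example (index and source names assumed; 4104 events that Windows flags as suspicious are logged at the Warning level):

```
index=wineventlog source="*Microsoft-Windows-PowerShell/Operational*" EventCode=4104 "Warning"
| table _time, host, ScriptBlockText
```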
An interesting indicator of compromise is the existence of self-signed SSL certificates in your environment (of course, your organization will have to adopt certificates signed by trusted entities for this hunting technique to be meaningful).
The below Splunk search can reveal self-signed SSL certificates or certificates with multiple empty fields. Both can be used as an alarm.
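Since the search itself is not reproduced here, a sketch of what it might look like against Zeek ssl.log / x509 data (the sourcetype and field names are assumptions; a self-signed certificate typically has subject equal to issuer):

```
sourcetype=zeek_ssl
| where validation_status="self signed certificate" OR (subject=issuer AND isnotnull(subject))
| table _time, id.orig_h, id.resp_h, server_name, subject, issuer, validation_status
```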
If you look carefully enough, you will identify that the self-signed certificate is related to the attacking host we found during the previous tasks!
--