
What is Threat Hunting?



Threat Hunting is a creative process. An analyst who thinks abstractly, challenges ideas, and is unafraid of failure will produce more knowledge and breakthroughs than one who does everything the same way every time. Creativity becomes especially important when applying hunting techniques, because there are so many to choose from: text-based searching, dozens of visualizations, and endless permutations of machine learning algorithms.

What is Hunting?

Before we can talk about hunting maturity, though, we need to discuss what exactly we mean when we say “hunting”.

We define hunting as the process of proactively and iteratively searching through network, system, cloud, application, and hardware assets to detect and isolate advanced threats that evade automated, rule- and signature-based security systems. There are many different techniques hunters might use to find the bad guys, and no single one of them is always "right"; the best one often depends on the type of activity you are trying to find.

Hunting is often machine-assisted but is always driven by an analyst; it can never be fully automated. Automated alerting is important, but it cannot be the only thing your detection program relies on.

In fact, one of the chief goals of hunting should be to improve your automated detection capabilities by prototyping new ways to detect malicious activity and turning those prototypes into production detection capabilities.

The Hunting Maturity Model

With that definition of hunting in mind, let’s consider what makes a good hunting program. There are three factors to consider when judging an organization’s hunting ability:

1. the quality and quantity of the data they collect for hunting,

2. the tools they provide to access and analyze the data, and

3. the skills of the analysts who actually use the data and the tools to find security incidents.

The Hunting Maturity Model (HMM)

The Hunting Maturity Model, developed by security technologist and hunter David Bianco, describes five levels of organizational hunting capability, ranging from HM0 (the least capable) to HM4 (the most). Let’s examine each level in detail.

HM0 – Initial

At HM0, an organization relies primarily on automated alerting tools such as IDS, SIEM or antivirus to detect malicious activity across the enterprise. They may incorporate feeds of signature updates or threat intelligence indicators, and they may even create their own signatures or indicators, but these are fed directly into the monitoring systems. The human effort at HM0 is directed primarily toward alert resolution. HM0 organizations also do not collect much information from their IT systems, so their ability to proactively find threats is severely limited.

Organizations at HM0 are not considered to be capable of hunting.

HM1 – Minimal

An organization at HM1 still relies primarily on automated alerting to drive their incident response process, but they are actually doing at least some routine collection of IT data. These organizations often aspire to intel-driven detection (that is, they base their detection decisions in large part upon their available threat intelligence).

They often track the latest threat reports from a combination of open and closed sources.

HM1 organizations routinely collect at least a few types of data from around their enterprise into a central location such as a SIEM or log management product. Some may actually collect a lot of information. Thus, when new threats come to their attention, analysts are able to extract the key indicators from these reports and search historical data to find out if they have been seen in at least the recent past.
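As a rough illustration of that retrospective search, the sketch below scans an exported log set for indicators pulled from a threat report. The file name, column names, and indicator values are all invented for this example; a real HM1 team would run the equivalent query in their SIEM or log management product.

```python
# Hypothetical sketch: retro-search exported proxy logs for indicators
# taken from a threat report. File path, field names, and IOC values are invented.
import pandas as pd

# Indicators extracted manually from a threat intelligence report
iocs = {
    "domains": {"update-check.example-bad.com", "cdn.malicious.example"},
    "sha256":  {"9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08"},
}

# Historical data previously collected into a central location (e.g. a SIEM export)
logs = pd.read_csv("proxy_logs_last_90_days.csv")  # columns assumed: timestamp, src_ip, dest_domain, file_sha256

hits = logs[
    logs["dest_domain"].isin(iocs["domains"])
    | logs["file_sha256"].isin(iocs["sha256"])
]

print(f"{len(hits)} historical events matched report indicators")
print(hits[["timestamp", "src_ip", "dest_domain"]].head())
```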

Because of this search capability, HM1 is the first level in which any type of hunting occurs, even though it is minimal.

HM2 – Procedural

If you search the Internet for hunting procedures, you will find several great ones. These procedures most often combine an expected type of input data with a specific analysis technique to discover a single type of malicious activity (e.g., detecting malware by gathering data about which programs are set to automatically start on hosts).

Organizations at HM2 are able to learn and apply procedures developed by others on a somewhat regular basis, and may make minor changes, but are not yet capable of creating wholly new procedures themselves.

Because most of the commonly available procedures rely in some way on least-frequency analysis (as of this writing, anyway), HM2 organizations usually collect a large (sometimes very large) amount of data from across the enterprise.
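To make the least-frequency idea concrete, here is a minimal sketch of "stacking" autostart entries collected from many hosts, so that the rarest entries float to the top for review. The file name and column names are assumptions; the technique itself is the point.

```python
# Hypothetical sketch of least-frequency analysis ("stacking") over autostart
# entries collected from across the enterprise. Rare entries are reviewed first.
from collections import Counter
import csv

autoruns = []  # (hostname, autostart image path) pairs
with open("autoruns_all_hosts.csv", newline="") as f:    # assumed export
    for row in csv.DictReader(f):                        # columns assumed: hostname, image_path
        autoruns.append((row["hostname"], row["image_path"].lower()))

# Count how many distinct hosts run each autostart entry
hosts_per_entry = Counter()
for host, image_path in set(autoruns):
    hosts_per_entry[image_path] += 1

# Entries present on only a handful of hosts are the most interesting to a hunter
for image_path, host_count in sorted(hosts_per_entry.items(), key=lambda kv: kv[1])[:20]:
    print(f"{host_count:>4} hosts  {image_path}")
```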

HM2 is the most common level of capability among organizations that have active hunting programs.

HM3 – Innovative

HM3 organizations have at least a few hunters who understand a variety of different types of data analysis techniques and are able to apply them to identify malicious activity. Instead of relying on procedures developed by others (as is the case with HM2), these organizations are usually the ones who are creating and publishing the procedures.

Analytic skills may be as simple as basic statistics or involve more advanced topics such as linked data analysis, data visualization or machine learning.

The key at this stage is for analysts to apply these techniques to create repeatable procedures, which are documented and performed on a frequent basis.
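Even "basic statistics" can carry a useful hunting procedure. The sketch below flags hosts whose daily outbound traffic sits far from the population mean using a simple z-score; the file name, column names, and the cutoff of 3 are illustrative assumptions, not a prescription.

```python
# Hypothetical sketch: flag hosts whose daily outbound traffic is a statistical
# outlier relative to their peers. Column names and the z-score cutoff are assumptions.
import pandas as pd

flows = pd.read_csv("daily_bytes_out_per_host.csv")   # columns assumed: hostname, bytes_out

mean = flows["bytes_out"].mean()
std = flows["bytes_out"].std()

flows["z_score"] = (flows["bytes_out"] - mean) / std
outliers = flows[flows["z_score"] > 3].sort_values("z_score", ascending=False)

print(outliers[["hostname", "bytes_out", "z_score"]])
```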

Data collection at HM3 is at least as extensive as it is at HM2, if not more advanced. HM3 organizations can be quite effective at finding and combating threat actor activity.

However, as the number of hunting processes they develop increases over time, they may face scalability problems trying to perform them all on a reasonable schedule unless they increase the number of available analysts to match.

HM4 – Leading

An HM4 organization is essentially the same as one at HM3, with one important difference: automation.

At HM4, any successful hunting process will be operationalized and turned into automated detection.

This frees the analysts from the burden of running the same processes over and over, and allows them instead to concentrate on improving existing processes or creating new ones.
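The skeleton below illustrates, in the loosest terms, what "operationalizing" a hunt can look like: a query that a hunter used to run by hand is wrapped in a scheduled job that raises alerts on its own. Both the query and the alert sink are placeholders; in practice this logic would live in the SIEM, SOAR platform, or detection pipeline rather than a standalone script.

```python
# Hypothetical sketch: a previously manual hunt wrapped as a recurring job so
# analysts no longer have to run it by hand. Query and alert sink are placeholders.
import time

def run_hunt_query():
    """Stand-in for the saved search a hunter used to run manually (e.g. a SIEM query)."""
    return []  # would return matching events

def raise_alert(event):
    """Stand-in for whatever the detection program uses (ticket, SOAR case, email)."""
    print(f"ALERT: {event}")

def main():
    while True:
        for event in run_hunt_query():
            raise_alert(event)
        time.sleep(60 * 60)  # re-run hourly; in practice a scheduler would drive this

if __name__ == "__main__":
    main()
```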

HM4 organizations are extremely effective at resisting adversary actions. The high level of automation allows them to focus their efforts on creating a stream of new hunting processes, which results in constant improvement to the detection program as a whole.

The Threat Hunting Process (Threat Hunting Loop)

The Threat Hunting process is meant to be iterative. You will never be able to fully secure your network after just a single hunt. To avoid one-off, potentially ineffective hunting trips, it’s important for your team to implement a formal cyber hunting process.

The following four stages make up a model process for successful hunting (Threat Hunting Loop):

1. Hypotheses

A hunt starts with creating a hypothesis, or an educated guess, about some type of activity that might be going on in your IT environment. An example of a hypothesis could be that users who have recently traveled abroad are at elevated risk of being targeted by state-sponsored threat actors, so you might begin your hunt by planning to look for signs of new malware on their laptops or assuming that their accounts are being misused around your network. Hypotheses are typically formulated by analysts based on any number of factors, including friendly and threat intelligence. There are various ways that a hunter might form a hypothesis. Often this involves laying out attack models and the possible tactics a threat might use, determining what would already be covered by automated alerting systems, and then formulating a hunting investigation of what else might be happening.

Another example: after detecting a high number of brute-force attacks against specific services (such as email accounts), you might hypothesize that a breach is likely and that compromised accounts are being abused for mass mailing or delivery of malicious content (malware, phishing, etc.) targeting internal and external recipients.
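A hunter testing that brute-force hypothesis might start by quantifying the signal, for instance by counting failed mail-login attempts per account. The sketch below assumes a JSON-lines export of authentication events with invented field names.

```python
# Hypothetical sketch: count failed mail-login attempts per account to see which
# accounts the brute-force activity is concentrating on. Log format is assumed.
from collections import Counter
import json

failed_by_account = Counter()
with open("mail_auth_events.jsonl") as f:           # assumed export of auth logs
    for line in f:
        event = json.loads(line)
        if event.get("action") == "login_failure":  # field names are assumptions
            failed_by_account[event["account"]] += 1

# Accounts seeing the most failures are candidates for deeper investigation
for account, failures in failed_by_account.most_common(10):
    print(f"{failures:>6} failed logins  {account}")
```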

2. Investigation

A hunter follows up on hypotheses by investigating via various tools and techniques, including Linked Data Search and visualization. Effective tools will leverage both raw and linked data analysis techniques such as visualization, statistical analysis or machine learning to fuse disparate cybersecurity datasets. Linked Data Analysis is particularly effective at laying out the data necessary to address the hypotheses in an understandable way, and so is a critical component for a hunting platform. Linked data can even add weights and directionality to visualizations, making it easier to search large data sets and use more powerful analytics.

Many other complementary techniques exist, including row-oriented techniques such as stack counting and datapoint clustering. Analysts can use these techniques to discover new malicious patterns in their data and reconstruct complex attack paths to reveal an attacker's Tactics, Techniques, and Procedures (TTPs).

Continuing the earlier example, a review of email server logs and of traffic originating from internal endpoints (on email ports) might surface indicators of compromise.
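A toy version of linked data analysis for that investigation might build a small graph connecting sender accounts to recipient domains and then look at which accounts fan out to unusually many external domains. The sketch below uses networkx; the file name and column names are assumptions, and a real hunting platform would do this at far larger scale.

```python
# Hypothetical sketch of linked data analysis: connect sender accounts to
# recipient domains from mail logs, then review accounts with unusual fan-out.
import csv
import networkx as nx

graph = nx.Graph()
with open("outbound_mail_log.csv", newline="") as f:   # assumed export
    for row in csv.DictReader(f):                      # columns assumed: sender, recipient_domain
        graph.add_edge(f"acct:{row['sender']}", f"dom:{row['recipient_domain']}")

# Accounts linked to the most distinct external domains may indicate abuse
accounts = [n for n in graph.nodes if n.startswith("acct:")]
for account in sorted(accounts, key=graph.degree, reverse=True)[:10]:
    print(f"{graph.degree(account):>5} recipient domains  {account}")
```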

3. Uncover New Patterns & TTPs

Various tools and techniques are used in uncovering new malicious patterns of behavior and adversary TTPs. This step is the definitive success criterion for a hunt. An example of this process could be that a previous investigation revealed that a user account has been behaving anomalously, with the account sending an unusually high amount of outbound traffic.

After conducting a Linked Data investigation, it is discovered that the user’s account was initially compromised via an exploit targeting a third party service provider of the organization. New hypotheses and analytics are developed to specifically discover other user accounts affiliated with similar third party service providers.

For example, if increased CPU usage is traced to the deployment of a Bitcoin miner, create a SIEM alert that fires when CPU exceeds a defined threshold on instances or hosts running similar workloads or providing similar services.
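The sketch below expresses that CPU-threshold idea as a simple check over sustained samples; in practice the same logic would be encoded as a SIEM or monitoring rule rather than a script. The metric source, field layout, threshold, and window size are all assumptions for illustration.

```python
# Hypothetical sketch of the CPU-threshold detection described above. Metric
# source, file format, threshold, and window size are assumptions.
import json

CPU_THRESHOLD_PCT = 85   # sustained CPU above this triggers review
SUSTAINED_SAMPLES = 6    # e.g. six consecutive 5-minute samples

def sustained_high_cpu(samples, threshold=CPU_THRESHOLD_PCT, window=SUSTAINED_SAMPLES):
    """True if the most recent `window` samples are all above `threshold`."""
    recent = samples[-window:]
    return len(recent) == window and all(s > threshold for s in recent)

with open("cpu_metrics_by_host.json") as f:   # assumed export: {hostname: [pct, ...]}
    metrics = json.load(f)

for hostname, samples in metrics.items():
    if sustained_high_cpu(samples):
        print(f"ALERT: sustained high CPU on {hostname} - review for unauthorized miner")
```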

4. Inform & Enrich

Finally, successful hunts form the basis for informing and enriching automated analytics. Don't waste your team's time doing the same hunts over and over. Once you find a technique that works to bring threats to light, automate it so that your team can continue to focus on the next new hunt. Information from hunts can be used to improve existing detection mechanisms, which might include updating SIEM rules or detection signatures. For example, you may uncover information that leads to new threat intelligence or indicators of compromise. You might even create some friendly intelligence, that is, information about your own environment and how it is meant to operate, such as network maps, software inventories, lists of authorized web servers, etc. The more you know about your own network, the better you can defend it, so it makes sense to try to record and leverage new findings as you encounter them on your hunts.
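Friendly intelligence can be operationalized just like threat intelligence. As a minimal sketch, assuming you maintain a list of authorized web servers and can export the set of hosts observed serving HTTP (from scans or flow data), the comparison below flags anything unexpected; both file names and formats are hypothetical.

```python
# Hypothetical sketch: use "friendly intelligence" (an inventory of authorized
# web servers) to flag hosts observed serving HTTP that are not on the list.
def load_lines(path):
    with open(path) as f:
        return {line.strip() for line in f if line.strip()}

authorized = load_lines("authorized_web_servers.txt")   # maintained inventory
observed = load_lines("observed_http_listeners.txt")    # e.g. from scans or flow data

for host in sorted(observed - authorized):
    print(f"Unauthorized web server observed: {host}")
```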

Now what?

Ending a hunt is the epitome of "simple to understand, difficult to execute." The goal is to expand threat detection coverage, either by identifying new detection opportunities or by improving existing ones; without this, hunting efforts are unlikely to have any lasting impact on the organization. The goal is difficult because the hunting technique you used almost never provides suitable precision on its own (by design, they rarely do). Generally, you have to deconstruct the results, generalize the procedure, and automate it into new or existing tools. Sounds hard, right? Not if you start with what you know and increase complexity as needed. Hunts that lead to daily reports of vulnerable systems, identification of indicators of compromise, or new or improved intrusion detection signatures are just as valid as hunts that lead to the creation of innovative threat detection tools, so long as the outcome provides value to the organization.
