The NSIS has a mandate to
identify threats against the security of Kenya, collect and analyze
intelligence on these threats, and advise the Government accordingly.
In my 34-year career, I have been privileged to serve our
government in various capacities: an infantry officer; Military Intelligence
Officer; Aide-de-Camp to a President; Peace Missionary; Director of Intelligence
and Director General of NSIS. In all these assignments and specifically in the
field of intelligence, I have come to realize that the following five
attributes are of great significance in managing, upholding and sustaining a
robust National Intelligence Service:
a) The government needs to continuously invest in the character of its
"gatekeepers and watchdogs".
b) The Director General of the Service should have direct and
unfettered access to the Head of State and Government. In order to earn trust,
he has to do things right, and do the right thing, without fear, favour or ill will.
In so doing, he must be efficient, loyal and balanced.
c) All men and women of the Service must direct all their time and
energy towards promoting and projecting that which only serves and informs the
National Interest.
d) The Service should operate within the law.
e) The Intelligence Service is a form of national insurance for counter-intelligence.
Yet a balance has to be struck between national security
interests and international threats and challenges. Information sharing with
other nation-states has been the practice from time immemorial. This
partnership will need to be maintained, taking into consideration mutual
respect, national interests, international law, and the nature of power and its
influence in the globalized environment.
What is
the difference between the NSIS and the CID?
The Criminal Investigation
Department (CID) is the branch of the Kenya Police charged with the
prevention, detection, investigation, and prosecution of serious crime in
Kenya. This role differs from that of the NSIS, which is a civilian agency without
police powers of search, arrest, and prosecution.
http://computer-forensics.sans.org/blog/2009/10/14/security-intelligence-attacking-the-kill-chain/
Posted by mikecloppert. Filed under Computer Forensics, Incident Response, Network Forensics.
As the focus on information
security by the US Government heats up, you will likely see a lot of
professionals writing more about topics that touch on information warfare. The
same day I began writing this, I also found myself reading some of GreyLogic's excellent
analysis on some current events, for example. And as I was enjoying a beer in
between completing the outline of this series and beginning this entry, I was
both encouraged and disappointed to see Richard
Bejtlich writing
on part of the subject I plan to cover in Part 2: encouraged that other thought
leaders were very much in line with our approach, and disappointed that Richard
once again beat me to the punch! Richard is a professional whom I respect greatly;
I think you'll find our opinions on this topic very much complementary. Just as
is the case for so many aspects of our young industry, many of the writings on
these subjects form a Venn diagram interesting for both their similarities and
differences. As students of this subject - and we are all students, whether
formally or otherwise - I encourage you to read everything you can and form
your own conclusions. As for
why you are reading this in a forensics and incident response blog, while many
have written about information warfare theory, I have seen scant information on
how to apply it in a practical sense to computer security. While this series is
not 'information warfare' per se, it is most certainly a derivative domain. As
you either know or will soon discover through your experiences in the field,
intrusion cases can proceed quite differently based on the nature of the
problem. Legal investigations, intrusions of opportunity, worms/viruses, and
sophisticated adversaries all quickly fork in their progression, although the
tools used by investigators may remain the same. This is my effort to infuse my
research and experience (and that of my team) in combating sophisticated
adversaries into the SANS educational framework. I hope you enjoy and learn from
it.
What is security intelligence? SI is a recognition of the evolution of
sophisticated adversaries, the study of that evolution, and the application of
this information in an actionable way to the defense of systems, networks, and
data. In short, it is threat-focused defense, or as I occasionally refer to it, intelligence-driven
response. You will see in the coming installments how this is
manifested in a practical way, and how some specific examples can be applied.
Definitions
are important, and terminology in such a young field can vary. For the
purposes of this series, I use the definitions given below. Whatever vernacular
you choose to use in your professional career, the most important thing to
remember is that you should use the terms consistently.
CND - Computer Network Defense. The
act of defending computers and data from compromises of confidentiality,
integrity, or availability, facilitated by other networked computers, and the
subsequent response when such a compromise occurs.
CNE - Computer Network Exploitation, or alternately,
Computer Network Espionage. The act of compromising computers for the purposes
of gaining access to or modifying data, facilitated by remote access via
computer networks. In short, compromises of integrity or confidentiality.
CNA - Computer Network Attack. The
act of adversely impacting the availability of data or functionality of
networked computer systems.
APT - Advanced, Persistent Threat. I
first heard this term used by the USAF's 8th Air Force in a small meeting in
2006. Unless contradicting evidence is brought to bear on the subject, I give
them credit for coining this term, which is any sophisticated adversary engaged
in information warfare in support of long-term strategic goals.
TTP - Tactics, Techniques, and Procedures. Methodology, in this case of CNE, CND, or
CNA.
In order to address the risks posed by APT actors, an evolution in thought on
both net defense and security writ large is needed. Security
Intelligence is an effort by my team and me to that end. Please keep in mind
that this is a recursive process of definition and implementation, trial and
error, discussion and peer review. I feel that this series brings us close to
some answers, but many open questions on this topic will remain, and this
exploration still needs broader peer review.
Apart
from the simple lack of brain cycles that have been applied to combating APT
actors, another major obstacle to success is refocusing what is an increasingly
myopic and misinformed vision of information warfare and security. If I hear
"cyber 9/11," "cyber Pearl Harbor," "cyber Katrina," or "cyber"-anything
one more time, I'm going to
scream. Besides the word "cyber" having no real meaning, these terms
are pure hyperbole and do not map in any way, shape, or form to the exigent
sophisticated threat environment. I know this not just from my own personal
experience, but by looking at the world around me with a trained eye. By far
and away, the goals of the most sophisticated adversaries in 2009 are focused
on the surreptitious acquisition of sensitive information for the purposes of
competitive economic advantage, or to counter, kill, or clone the technologies
of one's nation-state adversaries. No doubt, there are exceptions. While I
believe the Estonian incident and recent spate of DDoS's received
far more press than they deserved, they were nevertheless notable in
highlighting CNA's emergent role in open, potentially armed conflict. But I
would argue that their overall economic and long-term impact has likely been
dwarfed by the continuous deluge of CNE operations impacting organizations.
Yet,
the focus of the media (and to a degree our profession) has been on CNA -
"the power grid," SCADA systems, Wall Street... yes, these are juicy
targets for an adversary bent on open conflict, but the impact of such CNA
operations would almost certainly lead to some sort of 'kinetic' response.
While worthy of attention, these are nevertheless movie-plot threats that take a backseat to the chronic issues
we are dealing with - nearly all of which are CNE in nature. To underscore my
point, I took a highly scientific poll (*cough*): articles on "cyber
attack" outnumber "cyber
espionage" better than 6-to-1.
As
a colleague remarked to me the other day, we do not refer to 9/11 as
"plane terrorism," nor do we refer to the Oklahoma City bombing as
"truck terrorism." Yet this is how many people think about CNA and
CNE. Why? Any CNA or CNE operation is part of a broader effort to achieve some
strategic, competitive goal. It is a tool, just as a truck bomb is a tool to
instill terror. And, while we're here, the last thing I want to hear is
anything about "cyber terrorism." There is simply no way that a
simultaneous failure of all DNS root name servers, for instance, could evoke
the same kind of fear as watching two of the world's largest buildings
collapse. The goals achieved by CNA are many, but instilling fear in a whole
population is not one of them. But I digress...
The
bottom line is that in order for progress to be made, there needs to be an
evolution in thought on APT actors by the media and, most importantly, the
information security industry. First, the line between CNE and CNA is often
blurred. Read any recent article about hackers stealing data, and you'll see an
immediate tendency on the part of writers and interviewees to slide to these
unrelated Hollywood CNA scenarios that do not at all map to the goals of the
adversaries discussed therein. And please people, the answers lie not in
patching systems, anti-virus, or user education. These strategies are
necessary, but insufficient as they do not always map directly to the threat
environment. Compromises by APT actors often do not happen because of some
security failure that can be addressed with an easily-branded compliance
strategy. They happen because adversaries are sophisticated, have extensive
knowledge of their target, and are not discouraged by failure. Compromises,
even in properly-secured environments, are inevitable - and the blame lies not
with the victim. We must therefore focus efforts on raising the bar,
introducing friction into the attack progression, detecting attacks earlier,
and improving the ensuing response.
Based on feedback, I have split this introduction into 2 parts.
The next part, which will be posted tomorrow, will discuss risk, where to apply
SI techniques, and outline the rest of the series.
Posted by mikecloppert. Filed under Computer Forensics, Incident Response, Network Forensics.
Yesterday, I introduced Security Intelligence in the first part
of the introduction with some definitions and a rough problem statement.
Today, I will get into more details of this domain, beginning with
understanding risk and when to apply SI techniques.
As
I like to say, we are in the business of risk management. In order to
understand security intelligence, it is imperative that we properly scope and
carefully define this concept. Different fields define risk in different
terms, but in security, Risk is the product of three primary components: Vulnerability,
Impact, and Threat.
Figure 1: Information Security Risk Components.
Vulnerability -
Vulnerability is sometimes replaced with "exposure." I would
argue that they are represented together as one component. Vulnerability
is both mutable and ephemeral. This is good, because it means this
component of risk can be affected by individuals and organizations.
Applying the principle of least privilege, network segmentation, robust system
management, and adherence to software development and life-cycle best practices
are but a few high-level examples of how vulnerability (or exposure) can be
reduced, with a proportional reduction of Risk. The operative word here
is reduced - not eliminated. Again, vulnerability reduction, as you will
see, is necessary but
insufficient.
Impact - Impact
is immutable and changes are either slow or non-existent. This is what
happens when security systems fail and the confidentiality, integrity, or
availability (but mostly the first two) of data or systems are
compromised. This is largely a property of your organization and its
operational context - physical, industrial, and what have you. There is
typically not much you can do to influence impact.
Threat - Threat
is the most important Risk component in intelligence-driven response. In
fact, one could say that security intelligence is threat-driven security.
To understand, differentiate, and properly respond to threats, it is helpful to
divide this concept into a further three components: Intent, Opportunity, and
Capability (IOC). These terms are the MMO (Means, Motive, Opportunity) of security
intelligence - in fact, they map nicely to one another, but I feel IOC
encourages more clarity of thought on Threat. (A toy code sketch combining all
of these risk components follows the list below.)
Intent - Intent stems in a way from
impact. It is immutable, and driven by the industry you are in just as
Impact is. Typically, at a high level, the intent of adversaries to whom
security intelligence techniques are applied is data theft - CNE, if you
will. Of course, for each intrusion, each compromise, or each actor, the
intent will most likely be slightly different. Is the goal of the
adversary to compromise operational details of a campaign, or technical details
of a widget? There is nothing that can be done to influence intent.
Opportunity - Opportunity is about timing
and knowledge of the target space. In some cases it pairs with
vulnerability, but not always. It is one thing to be using a product with
a 0-day vulnerability in it, but quite another when your adversary knows
this. In other respects, however, opportunity is less related. For instance,
wouldn't a company's benefits open enrollment period be a great time for a
targeted attack on users using socially-engineered, topically-relevant email as
a delivery vector?
Capability - Put simply, capability is the
ability of adversaries to successfully achieve their intended goal and leverage
opportunity. It is influenced by things such as the skills of the
adversaries and the resources (financial, human, and technical) available to
them. To extend the 0-day example, a target may be vulnerable, the
adversary may intend to steal data by exploiting this 0-day, but if he or she
cannot write or obtain the exploit, then the risk is lower.
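To make the decomposition above concrete, here is a toy Python sketch. The 0-to-1 scores and the multiplicative combination are illustrative assumptions of mine, not a formula from this post; the point is simply that Threat factors into Intent, Opportunity, and Capability, and that a zero in any factor zeroes the product.

    from dataclasses import dataclass

    @dataclass
    class Threat:
        intent: float       # 0..1: adversary motivation, driven by industry/impact
        opportunity: float  # 0..1: timing and knowledge of the target space
        capability: float   # 0..1: skills and resources available to the adversary

        def score(self):
            # Any factor at zero removes the threat: an adversary with no
            # capability poses no risk, however strong the intent.
            return self.intent * self.opportunity * self.capability

    @dataclass
    class Risk:
        vulnerability: float  # mutable and ephemeral: the component you can reduce
        impact: float         # largely fixed by your organizational context
        threat: Threat

        def score(self):
            return self.vulnerability * self.impact * self.threat.score()

    # A capable, motivated adversary against a partially hardened target:
    apt = Threat(intent=0.9, opportunity=0.6, capability=0.8)
    print(Risk(vulnerability=0.4, impact=0.9, threat=apt).score())  # ~0.16

Note how reducing vulnerability lowers the product proportionally but never to zero, which is exactly the "necessary but insufficient" point made above.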
The
"intelligence" in intelligence-driven response is the information
acquired about one's adversaries, or collectively the threat landscape.
Each industry has a different threat landscape, and each organization in each
industry has a different risk profile, even to the same adversary.
Understanding one's threat environment means collecting actionable information on
known threat actors for CND, whether that action is purely detection or
detection with prevention. Now is the time to mention that there is no
such thing as protection without detection, or protection without reaction, in
this environment. This will be discussed in more detail in Part 3.
By
combining information on a threat with observations of activity, one can more
effectively and in some cases heuristically defend one's data and
systems. Perhaps a heuristic or anomalous event indicative of malicious
activity occurs too frequently across your enterprise to respond to it every
time it happens. If this maps directly to the TTP of a particular
adversary, and you know this adversary's intent is to acquire data which is
concentrated in a particular portion of your network, you can investigate the
heuristic with this scoping that would otherwise be unreasonable to leverage.
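As a purely hypothetical sketch of that scoping: suppose the noisy heuristic matches a known actor's TTP, and that actor's intent is theft of data concentrated in one subnet. The alert format, signature name, and subnet below are all invented.

    import ipaddress

    # Invented profile: this actor targets data housed in 10.20.0.0/16, and
    # the beaconing heuristic maps to one of their known TTPs.
    TARGETED_SEGMENT = ipaddress.ip_network("10.20.0.0/16")

    def should_investigate(alert):
        # Escalate only heuristic hits consistent with the adversary profile;
        # everything else is deferred rather than investigated every time.
        if alert["signature"] != "anomalous-beaconing-heuristic":
            return False
        return ipaddress.ip_address(alert["src_ip"]) in TARGETED_SEGMENT

    alerts = [
        {"signature": "anomalous-beaconing-heuristic", "src_ip": "10.20.4.7"},
        {"signature": "anomalous-beaconing-heuristic", "src_ip": "10.99.1.2"},
    ]
    for a in alerts:
        print(a["src_ip"], "->", "investigate" if should_investigate(a) else "defer")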
More
discretely, discovering the infrastructure, tools, and preferred techniques of
each particular adversary, and having processes in place to leverage the data,
allows you to detect hostile activity even if all but one minor aspect of an
adversary's attempt to break in has changed. Let's take an easy
example. If an adversary uses an IP address in an attack, you don't just
want to block it at your firewall. You want to detect when it is used in
the future, and also not reveal to the adversary that you discovered the attack
- otherwise, they'll just switch IPs. You want to let them think
subsequent attacks were successful, and then research these attacks for
"new" (or "different") techniques, which can then in turn
be pivoted on for further defense in case the adversary does ever switch to a
new IP.
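A minimal sketch of that detect-but-don't-block posture follows; the watchlist contents and flow format are assumptions (the address is from the reserved TEST-NET range).

    # Known-adversary infrastructure observed in a prior attack (illustrative).
    ADVERSARY_IPS = {"203.0.113.7"}

    def check_flow(flow):
        if flow["src_ip"] in ADVERSARY_IPS or flow["dst_ip"] in ADVERSARY_IPS:
            # Alert analysts quietly; do NOT block or reset the connection,
            # so the adversary believes the attempt went unnoticed.
            print("[watchlist] known-adversary IP observed:", flow)

    check_flow({"src_ip": "10.0.0.5", "dst_ip": "203.0.113.7", "dport": 443})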
In
this threat environment, you cannot rely on traditional tools like firewalls,
IDS, and (especially) anti-virus. These tools can sometimes be leveraged
to achieve detection or protection goals, but it will be you that is defining
those conditions, based on your security intelligence - not your vendor.
These vendors have by and large failed to adapt to targeted attacks, and most
are only interested in protecting against the broader, easier problems.
This isn't easy, folks, but trust me when I say it's pretty effective.
Key
to the success of security intelligence is mapping intent to impact. If
your research and compromise response investigations reveal that adversaries
are intent on stealing data, then there is little reason to be concerned about
denial-of-service attacks from those actors, as the impact of such an activity
is completely orthogonal to the goal of a confidentiality breach, and the
ancillary goal that is often paired with it, invisibility.
It
is also important to understand the threat which is likely behind certain hostile
activities. These techniques are not wisely applied to commodity viruses
or massive worms - such rigor provides little ROI from an analytical
perspective, and tends to waste resources on a problem which can be adequately
addressed with existing security tools and infrastructure. Only APT
actors should be subject to such scrutiny. Naturally, this creates a
derivative challenge: not only must you now identify hostile versus benign
activity, but further which of that hostile activity corresponds to APT
actors! This needle-in-a-needlestack challenge is at times very
difficult, but as you wrap your head around these techniques it becomes easier
in some cases. Unfortunately, our adversaries know all too well that they
can hide in the cruft, and can (and do) exploit this.
One
way to think about this is by answering the question of whether an attack or
intrusion is one of opportunity, or intent. Opportunistic intrusions are
generally a problem solved by existing best practices (architecture, AV,
patching, classic IR model, etc), rather than this analytical offshoot we're
calling SI. As that last sentence suggests, it is not the end-all, be-all
to CND, but rather one component of a large and complicated affair in
information security.
I'm
going to take a WAG at how long I'll need to transcribe the large jumble of
thoughts in my head onto the computer screen. When it's all said and
done, we'll see just how good a guess that is. While in 2 parts, I
consider the Introduction to be "Part 1" in aggregate.
Part 2 - Attacking The Kill Chain: Understanding attack
progression in the context of incident response. Expect this entry in the next 2 weeks.
Part 3 - Campaign Response: Why your IR model is broken. Expect this entry in the next 3-4 weeks.
Part 4 - User modeling. Expect this entry in 4-5 weeks.
Posted by mikecloppert. Filed under Incident Response.
Coming in much later than I'd hoped, this is the second
installment in a series of four discussing security intelligence principles in
computer network defense. If you missed the introduction (parts 1 and 2), I
highly recommend you read it before this article, as it sets the stage and
vernacular for intelligence-driven response necessary to follow what will be
discussed throughout the series. Once again, and as often is the case, the
knowledge conveyed herein is that of my associates and me, learned through many
man-years attending the School of Hard Knocks (TM), and the credit belongs to
all of those involved in the evolution of this material.
In this segment, we will introduce the attack progression (aka
"kill chain") and briefly describe its intersection with indicators.
The next segment will go into more detail about how to use the attack
progression model for more effective analysis and defense, including a few
contrived examples based on real attacks.
Just like you or I, adversaries have various computer resources at their
disposal. They have favorite computers, applications, techniques, websites,
etc. It is these fundamentally human tendencies and technical limitations that
we exploit by collecting information on our adversaries. No person acts truly
at random, and no person has truly infinite resources at their disposal. Thus, it
behooves us in CND to record, track, and group information on our sophisticated
adversaries to develop profiles. With these profiles, we can draw inferences,
and with those inferences, we can be more adaptive and effectively defend our
data. After all, that's what intelligence-driven response is all about:
defending data that sophisticated adversaries want. It's not about the
computers. It's not about the networks. It's about the data. We have it, and
they want it.
Indicators
can be classified a number of ways. Over the years, I and my colleagues have
wrestled with the most effective way to break them down. Currently, I am of the
mind that indicators fall into one of three types: atomic, computed, and
behavioral (or TTP's).
Atomic indicators
are pieces of data that are indicators of adversary activity on their own.
Examples include IP addresses, email addresses, a static string in a covert
command-and-control (C2) channel, or fully-qualified domain names (FQDN's).
Atomic indicators can be problematic, as they may or may not exclusively
represent activity by an adversary. For instance, an IP address from whence an
attack is launched could very likely be an otherwise-legitimate site. Atomic
indicators often need vetting through analysis of available historical data to
determine whether they exclusively represent hostile intent.
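A hypothetical vetting pass over historical data might look like the following; the log format, benign/hostile labels, and the 50% threshold are all invented for illustration.

    def vet_ip(ip, history):
        hits = [r for r in history if r["dst_ip"] == ip]
        if not hits:
            return "no history: deploy at low confidence and watch"
        benign = sum(1 for r in hits if r["label"] == "benign")
        if benign / len(hits) > 0.5:
            return "mostly legitimate traffic: weak alone, use in combination"
        return "predominantly hostile: strong indicator"

    history = [
        {"dst_ip": "198.51.100.9", "label": "benign"},
        {"dst_ip": "198.51.100.9", "label": "hostile"},
        {"dst_ip": "198.51.100.9", "label": "benign"},
    ]
    print(vet_ip("198.51.100.9", history))  # a shared, mostly-legitimate host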
Computed indicators
are those which are, well, computed. The most common amongst these indicators
are hashes of malicious files, but can also include specific data in decoded
custom C2 protocols, etc. Your more complicated IDS signatures may fall into
this category.
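For completeness, the canonical computed indicator, a hash of a suspect payload, is a few lines of Python (MD5 chosen only because it is the common choice in malware reporting; the file name is hypothetical).

    import hashlib

    def md5_of(path):
        # Compare the result against hashes of tools from prior attacks.
        with open(path, "rb") as f:
            return hashlib.md5(f.read()).hexdigest()

    # e.g. md5_of("suspect.doc") matching a known sample links the attacks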
Behavioral indicators
are those which combine other indicators - including other behaviors - to form
a profile. Here is an example: 'Bad guy 1 likes to use IP addresses in West
Hackistan to relay email through East Hackistan and target our sales folks with
trojaned word documents that discuss our upcoming benefits enrollment, which
drops backdoors that communicate to A.B.C.D.' Here we see a combination of
computed indicators (geolocation of IP addresses, MS Word attachments
determined by magic number, base64-encoded in email attachments), behaviors
(targets sales force), and atomic indicators (A.B.C.D C2). To borrow some
parlance, these are also referred to as Tactics, Techniques, and Procedures
(TTP's). Already you can probably see where we're going with
intelligence-driven response... what if we can detect, or at least investigate,
behavior that matches that which I describe above?
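Here is one hypothetical way to encode the 'Bad guy 1' profile as a single behavioral check built from smaller indicators. Every field name and value is invented, and a real implementation would sit on top of your mail, geo-IP, and sandbox pipelines.

    def matches_bad_guy_1(msg):
        return all([
            msg["origin_region"] == "West Hackistan",  # atomic: source IPs
            msg["relay_region"] == "East Hackistan",   # atomic: relay infrastructure
            msg["recipient_dept"] == "sales",          # behavioral: targeting
            msg["attachment_kind"] == "ms-word",       # computed: magic number
            msg["dropped_c2"] == "A.B.C.D",            # atomic: known C2 address
        ])

    msg = {"origin_region": "West Hackistan", "relay_region": "East Hackistan",
           "recipient_dept": "sales", "attachment_kind": "ms-word",
           "dropped_c2": "A.B.C.D"}
    print(matches_bad_guy_1(msg))  # True: investigate as this actor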
One
likes to think of indicators as conceptually straightforward, but the truth is
that proper classification and storage has been elusive. I'll save the
intricacies of indicator difficulties for a later discussion.
The behavioral aspect of indicators deserves its own section. Indeed, most of
what we discuss in this installment centers on understanding behavior.
The best way to behaviorally describe an adversary is by how he or she does his
job — after all, this is the only discoverable part for an organization that is
strictly CND (some of our friends in the USG likely have better ways of
understanding adversaries). That "job" is compromising data, and
therefore we describe our attacker in terms of the anatomy of their attacks.
Ideally,
if we could attach a human being to each and every observed activity on our
network and hosts, we could easily identify our attackers, and respond
appropriately every time. At this point in history, that sort of capability
passes beyond 'pipe dream' into 'ludicrous.' However mad this goal is, it
provides a target for our analysis: we need to push our detection
"closer" to the adversary. If all we know is the forged email address
an adversary tends to use in delivering hostile email, assuming this is
uniquely linked to malicious behavior, we have a mutable and temporal indicator
upon which to detect. Sure, we can easily discover when it's used in the
future, and we are obliged to do so as part of our due diligence. The problem
is this can be changed at any time, on a whim. If, however, the adversary has
found an open mail relay that no one else uses, then we have found an indicator
"closer" to the adversary. It's much more difficult (though, in the
scheme of things, still somewhat easy) to find a new open mail relay to use
than it is to change the forged sending address. Thus, we have pushed our
detection "closer" to the adversary. Atomic, computed, and behavioral
indicators can describe more or less mutable/temporal indicators in a
hierarchy. We as analysts seek the most static of all indicators, at the top of
this list, but often must settle for indicators further from the adversary
until those key elements reveal themselves. The figure below shows some common
indicators of an attack, and where we've seen them fall in terms of proximity
to the adversary and variability, and, inversely, mutability and temporality.
Fig 1: Indicator Hierarchy
That
this analysis begins with the adversary and then dovetails into defense makes
it very much a security intelligence technique as we've defined the term.
Following a sophisticated actor over time is analogous to watching someone's
shadow. Many factors influence what you see, such as the time of day, angle of
sun, etc. After you account for these variables, you begin to notice nuances in
how the person moves, observations that make the shadow distinct from others.
Eventually, you know so much about how the person moves that you can pick them
out of a crowd of shadows. However, you never know for sure if you're looking
at the same person. At that point, for our purposes, it doesn't matter. If it looks
like a duck, and sounds like a duck... it hacks like a duck. Whether the same
person (or even group) is truly at the other end of behavior every time is
immaterial if the profile you build facilitates predicting future activity and
detecting it.
We have found that the phases of an attack can be described by 6 sequential
stages. Once again loosely borrowing vernacular, the phases of an operation can
be described as a "kill
chain." The importance here is not that this is a linear flow -
some phases may occur in parallel, and the order of earlier phases can be
interchanged - but rather how far along an adversary has progressed in his or
her attack, the corresponding damage, and investigation that must be performed.
Fig. 2: The Attack Progression
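One minimal way to encode the progression is as an ordered enumeration, so that each observed attempt can be recorded by the furthest phase it reached. The phase names follow this post; the rest is an illustrative assumption.

    from enum import IntEnum

    class Phase(IntEnum):
        RECONNAISSANCE = 1
        WEAPONIZATION = 2        # may occur before or in parallel with recon
        DELIVERY = 3
        COMPROMISE = 4           # aka exploitation
        COMMAND_AND_CONTROL = 5
        EXFILTRATION = 6

    def furthest_phase(observed):
        # How deep the adversary got drives the damage done and the
        # investigation that must be performed.
        return max(observed)

    # An emailed payload caught at the perimeter still reached DELIVERY:
    print(furthest_phase({Phase.RECONNAISSANCE, Phase.DELIVERY}).name)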
The reconnaissance phase is straightforward. However, in security intelligence,
oftentimes this is manifested not in portscans, system enumeration, or the
like. It is the data equivalent: browsing websites, pulling down PDF's,
learning the internal structure of the target organization. A few years ago I
never would've believed that people went to this level of effort to target an
organization, but after witnessing it happen, I can say with confidence that it
does. The problem with activity in this phase is that it is often
indistinguishable from normal activity. There are precious few cases where one
can collect information here and find associated behavior in the delivery phase
matching an adversary's behavioral profile with high confidence and a low false
positive rate. These cases are truly gems — when they can be identified, they
link what is often two normal-looking events in a way that greatly enhances
detection.
The weaponization phase may or may not happen after reconnaissance; it is
placed here merely for convenience. This is the one phase that the victim
doesn't see happen, but can very much detect. Weaponization is the act of
placing malicious payload into a delivery vehicle. It's the difference in how a
Soviet warhead is wired to the detonator versus how a US warhead is wired in.
For us, it is the technique used to obfuscate shellcode, the way an executable
is packed into a trojaned document, etc. Detection of this is not always
possible, nor is it always predictable, but when it can be done it is a highly
effective technique. Only by reverse engineering of delivered payloads is an
understanding of an adversary's weaponization achieved. This is distinctly
separate and often persistent across the subsequent stages.
Delivery is rather straightforward. Whether it is an HTTP request containing
SQL injection code or an email with a hyperlink to a compromised website, this
is the critical phase where the payload is delivered to its target. I heard a
term just the other day that I really like: "warheads on foreheads"
(courtesy US Army).
The
compromise phase will possibly have elements of a software vulnerability, a
human vulnerability aka "social engineering," or a hardware
vulnerability. While the latter are quite rare by comparison, I include
hardware vulnerabilities for the sake of completeness.
The
compromise of the target may itself be multi-phase, or more straightforward. As
a result, we sometimes have the tendency to pull apart this phase into separate
sub-phases, or peel out "Compromise" and "Exploit" as
wholly separate. For simplicity's sake, we'll keep this as a single phase. A
single-phase exploit results in the compromised host behaving according to the
attacker's wishes directly as a result of the successful execution of the
delivered payload. For example, an attacker may coax a user into running an
EXE email attachment which contains the desired backdoor code. A
multi-phase exploit typically involves delivery of shellcode whose sole
function, once executed, is to pull down and run more capable code.
Shellcode often needs to be portable for a variety of reasons, necessitating
such an approach. We have seen other cases where, possibly through sheer
laziness, adversaries end up delivering exploits whose downloaders download
other downloaders before finally installing the desired code. As you can
imagine, the more phases involved, the lower an adversary's probability for
success.
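That last claim is easy to make concrete: if each stage succeeds independently with probability p, an n-stage chain succeeds with probability p**n. The numbers below are purely illustrative.

    p = 0.9  # assumed per-stage success probability
    for n in (1, 2, 3):
        print(n, "stage(s): success probability", round(p ** n, 2))
    # 1 -> 0.9, 2 -> 0.81, 3 -> 0.73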
This is the pivotal phase of the attack. If
this phase completes successfully, what we as security analysts have
classically called "incident response" is initiated: code is present
on a machine that should not be there. However, as will be discussed later, the
notion of "incident response" is so different in intelligence-driven
response (and the classic model so inapplicable) that we have started to move
away from using the term altogether. The better term for security intelligence
is "compromise response,"
as it removes ambiguity from the term "incident."
The command-and-control phase of the attack represents the period during which
adversaries leverage the exploit of a system. A compromise does not necessarily
mean C2, just as C2 doesn't necessarily mean exfiltration. In fact, we will
discuss how this can be exploited in CND, but recognize that successful
communications back to the adversary often must be made before any potential for
impact to data can be realized. This can be prevented intentionally by
identifying C2 in unsuccessful past attacks by the same adversary resulting in
network mitigations, or fortuitously when adversaries drop malware that is
somehow incompatible with your network infrastructure, to give but two
examples.
In
addition to the phone call going through, someone has to be present at the
other end to receive it. Your adversaries take time off, too... but not all of
them. In fact, a few groups have been observed to be so responsive that it
suggests a mature organization with shifts and procedures behind the attack
more refined than that of many incident response organizations.
We
will also lump lateral movement with compromised credentials, file system
enumeration, and additional tool dropping by adversaries broadly into this
phase of the attack. While an argument can be made that situational awareness
of the compromised environment is technically "exfiltration," the
intention of the next phase is somewhat different.
The exfiltration phase is conceptually very simple: this is when the data,
which has been the ultimate target all along, is taken. Previously I mentioned
that gathering information about the environment of the compromised machine
doesn't fall into the exfiltration phase. The reason for this is that such data
is being gathered to serve but one purpose, either immediately or longer-term:
to facilitate the gathering of sensitive information. The source code for the new O/S.
The new widget that cost billions to develop. Access to the credit cards, or
PII.
As
we analyze attacks, we begin to see that different indicators map to the phases
above. While an adversary may attempt to use the exploit du
jour to compromise
target systems, the backdoor (C2) may be the same as past attacks by the same
actor. Different proxy IP addresses may be used to relay an attack, but the
weaponization may not change between them. These immutable, or
infrequently-changing properties of attacks by an adversary make up
his/her/their behavioral profile as we discussed in moving detection closer to
the adversary. It's capturing, knowing, and detecting this modus
operandi that
facilitates our discovery of other attacks by the same adversary, even if many
other aspects of the attack change.
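A hypothetical sketch of that correlation: weight the slow-changing indicators (weaponization, C2) and ignore the exploit du jour. All records and field names below are invented.

    attacks = [
        {"id": 1, "exploit": "CVE-2009-0001", "packer": "custom-upx-variant", "c2": "c2.example.net"},
        {"id": 2, "exploit": "CVE-2009-1234", "packer": "custom-upx-variant", "c2": "c2.example.net"},
        {"id": 3, "exploit": "CVE-2009-1234", "packer": "off-the-shelf", "c2": "cdn.example.org"},
    ]

    def likely_same_actor(a, b):
        # Match on what the adversary rarely changes, not what changes weekly.
        return a["packer"] == b["packer"] and a["c2"] == b["c2"]

    print(likely_same_actor(attacks[0], attacks[1]))  # True: new exploit, same actor
    print(likely_same_actor(attacks[0], attacks[2]))  # False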
This
need for the accumulation of indicators for detection means that analysis of
unsuccessful attacks is important, to the extent that the attack is believed to
be related to an APT adversary. A detection of malware in email by perimeter
anti-virus, for instance, is only the beginning when the weaponization is one
commonly used by a persistent adversary. The backdoor that would have been
dropped may contain a new C2 location, or even a whole new backdoor altogether.
Learning this detail, and adjusting sensors accordingly, can permit future
detection when that tool or infrastructure is reused, even if detection at the
attack phase fails. Discovery of new indicators also means historical searches may
reveal past undetected attacks, possibly more successful than the latest one.
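The historical-search half of this is conceptually simple; here is a minimal retro-hunt sketch with an assumed DNS log format.

    new_indicator = "c2.example.net"  # revealed by analysis of a blocked attack
    historical_dns = [
        {"ts": "2009-08-01", "host": "ws-114", "query": "c2.example.net"},
        {"ts": "2009-09-12", "host": "ws-031", "query": "www.example.com"},
    ]

    for rec in historical_dns:
        if rec["query"] == new_indicator:
            print("possible past compromise:", rec["host"], "on", rec["ts"])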
Analysis
of attacks quickly becomes complicated, and will be further explored in future
entries culminating with a new model for incident response.
As
a derivative (literary, not mathematical) of the analysis of attack
progression, we have the indicator lifecycle. The indicator lifecycle is
cyclical, with the discovery of known indicators begetting the revelation of
new ones. This lifecycle further emphasizes why the analysis of attacks that
never progress past the compromise phase is important.
Fig. 3: The Indicator Lifecycle State Diagram
The
revelation of indicators comes from many places - internal investigations,
intelligence passed on by partners, etc. This represents the moment that an
indicator is revealed to be significant and related to a known-hostile actor.
Maturation
is the point where the correct way to leverage the indicator is identified. Sensors
are updated, signatures written, detection tools put in the correct place,
development of a new tool makes observation of the indicator possible, etc.
Utility
is the point at which the indicator's potential is realized: when hostile
activity at some point of the kill chain is detected thanks to knowledge of the
indicator and correct tuning of detection devices, or data mining/trend
analysis revealing a behavioral indicator, for example. And of course, this
detection and the subsequent analysis likely reveals more indicators. Lather,
rinse, repeat.
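The cycle is easy to caricature in code: each indicator is deployed (maturation), its detections analyzed (utility), and the analysis reveals follow-on indicators (revelation). In the sketch below, the follow-on map stands in for real analysis and is entirely invented.

    from collections import deque

    FOLLOW_ON = {
        "ip:203.0.113.7": ["domain:c2.example.net"],
        "domain:c2.example.net": ["md5:9e107d9d372bb6826bd81d3542a419d6"],
    }

    def run_lifecycle(seed):
        queue, known = deque([seed]), {seed}
        while queue:
            ind = queue.popleft()
            print("mature: deploy", ind, "to sensors")      # maturation
            for new in FOLLOW_ON.get(ind, []):              # utility -> analysis
                if new not in known:
                    print("reveal:", ind, "led to", new)    # revelation, again
                    known.add(new)
                    queue.append(new)

    run_lifecycle("ip:203.0.113.7")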
In the next section, I will walk through a few examples and
illustrate how following the attack progression forward and backward leads to a
complete picture of the attack, as well as how attacks can be represented
graphically. Following that will be our new model of network defense which
brings all of these ideas together. You can expect amplifying entries
thereafter to further enhance detection using security intelligence principles,
starting with user modeling.
Michael is a senior member of Lockheed
Martin's Computer Incident Response Team. He has lectured for various audiences
including SANS, IEEE, and the annual DC3 CyberCrime Convention, and teaches an
introductory class on cryptography. His current work consists of security
intelligence analysis and development of new tools and techniques for incident
response. Michael holds a BS in computer engineering, has earned GCIA (#592) and
GCFA (#711) gold
certifications alongside various others, and is a professional member of ACM and IEEE.