
A Security Analyst’s Guide to Identity Threats

by Ted Kietzman and Jennifer Golden

00. Introduction

The concept of “workforce identity” has been around for a long time. At any given employer, your human identity is mapped to a variety of company credentials and attributes allowing you to do things for work. In days of yore (say, the eighties), these “credentials” could have been a key to the office or a combination to a file cabinet for important documents.

In a modern workplace, these credentials and attributes are often digital (e.g. your corporate email address, but also your role and permissions when interacting with specific software). If you’d like a comprehensive and well-researched history of workforce identity, Rak Garg did an excellent job in his piece “Identity Crisis: The Biggest Prize in Security.” Yet, as that piece and many other pieces of research and commentary point out, identity-based threats are on the rise.

There are many possible explanations for this trend. On the one hand, identity infrastructure keeps getting more complicated (i.e. the number of digital identities, the complexity of attributes associated with those identities, and the challenge in maintaining an up-to-date record of “current” identities are all increasing rapidly). On the other hand, identity-based attacks are gaining sophistication. Attackers often understand the security controls and develop interesting and creative ways to subvert them – especially by chaining techniques together.

However, the goal of this piece is not to attempt an in-depth explanation of why identity-based attacks are on the rise. Instead, it will be an in-depth look at the identity-based threat landscape as it stands today in early 2024. In other words, the goal of the piece is taxonomical – to illustrate and define current identity-based threats and then discuss mechanisms to both prevent and detect them. One important goal of the piece is to get “nigh comprehensive” – that is, to cover as much ground as possible when it comes to identity threats and how to address them. A reason for doing so is that many articles or blogs that discuss identity-based attacks (a bunch were read while researching this piece!) do so for a specific threat or technique or use case. Therefore, to limit the tabs a security analyst might need to open when researching identity security, this post will attempt to centralize the lion’s share of context for anyone looking to understand the identity threat landscape.

That said, the post will be scoped to workforce identity. This means it will cover attacks on both individual worker identity and identity infrastructure like user directory or access management tools. However, the post will dodge (for now) some of the nuances associated with customer identity. The post will also focus on identities mapped to humans. There is a growing concern around machine and workload identity – but again, those concerns will remain outside the scope of this particular effort.

With those scoping pieces out of the way, let’s move on to how the post will be structured. There will be three parts:

I. Attackers, Attacks, and Techniques
II. Prevention
III. Detection

Attackers, Attacks, and Techniques will start with an overview of archetypal attackers and a few example attacks. As a note, the point here of highlighting attackers or attacks is not to lionize attackers or shame any particular security team associated with a breach, but to use these cases as learning examples. The section will then cover various techniques used by the attackers. In other words, this first section will cover the who, why, and how associated with identity-based attacks.

The Prevention section will cover security controls that help assuage or prevent identity-based attacks. The section is a superset of controls derived from recommendations across security professionals, threat research organizations, and in some cases – security vendors. The point of the prevention section is not to be an advertisement for any particular software or service, but to display general security tooling or controls that can help with identity security. Finally, the Detection section will cover how to detect identity-based attacks. To do so, the section will first overview common out-of-the-box identity-based detection logic provided by identity tools. These types of analytics typically contain if-then logic looking out for known attack patterns and signatures. Then, the section will discuss identity-based detection more generally – showing which logs are effective for use in identity detection and some strategic thoughts on how to improve and tune detection logic.

Without further ado, let’s get into it. We hope you find this overview and discussion helpful. If we’ve missed anything glaring or obvious, know that we too are only human. Please X at us on X (eyeroll) @duolabs or send us an email directly at tkietzman@duosecurity.com or jgolden@duosecurity.com. We’re happy to update the piece to include new components.

01. Attackers, Attacks, and Techniques


Attackers

Let’s start with the attackers, or the “who.” To be clear, there are many types of attackers out there. There are amateur assailants – looking to steal personal crypto wallets from the average bitcoin enthusiast. There are also nation-state actors attempting to commit espionage and cyberwarfare. Both types of attackers, and many variants in-between, could fill a post with their associated attacks and techniques (and some of that information might overlap with this piece). However, the focus of this section is to profile the professional groups leveraging identity-based techniques on companies - with the direct or indirect goal of making money. The word “professional” here does not necessarily connote anything other than coordinated, sophisticated, and dangerous.

A few of the major breaches in the last 18 months have been attributed to ambitious groups with attack patterns that employ identity-based tactics. The groups obviously do not have a home office – and naming attacker groups is more than a bit murky. Each threat intelligence organization tends to create a unique name for each criminal organization. This leads not only to confusing names, but also to multiple names referring to the same group of bad actors.

For example, the group known as Scattered Spider was introduced by Crowdstrike in 2022 – and CISA also tracks the group under the moniker Scattered Spider. However, the group (or at least a group that shares many similarities) is also tracked under the name Octo Tempest by Microsoft Threat Intelligence. They are also potentially tracked as UNC3944 by the Mandiant Threat Research team. In all honesty, the naming and tracking of attack groups almost seems purposefully confusing. And, it leads to an unneeded sense of fear and paranoia (i.e. look how many names are out there!).

At the end of the day, regardless of naming structure or whether the names even correspond to the same individuals, the most important thing is to look across names to evaluate what connects the various groups. In the case of Scattered Spider, Octo Tempest, and UNC3944, they are all bound by their attack types, goals, and the signature techniques they use to execute these attacks. Before diving deeper into their attacks and techniques, let’s do a quick overview of the attacker characteristics across these different names.

According to Microsoft, “Octo Tempest is a financially motivated collective of native English-speaking threat actors known for launching wide-ranging campaigns that prominently feature adversary-in-the-middle (AiTM) techniques, social engineering, and SIM swapping capabilities.”

CISA highlights that Scattered Spider threat actors “are considered experts in social engineering and use multiple social engineering techniques, especially phishing, push bombing, and subscriber identity module (SIM) swap attacks, to obtain credentials, install remote access tools, and/or bypass multi-factor authentication (MFA).”

Finally, Mandiant reports, “UNC3944 relies heavily on social engineering to obtain initial access to its victims. They frequently use SMS phishing campaigns and calls to victim help desks to attempt to obtain password resets or multifactor bypass codes.”

The above quotes all highlight social engineering and SIM swapping, but by consolidating additional elements from the separate reports, we can construct a portrait of the identity-based attacker as:

  • Financially motivated, most likely English-speaking, based in the Western hemisphere
  • Adept at both basic and advanced social engineering techniques
    • Willing to employ threats of violence to coerce
  • Familiar with common identity and IT infrastructure and associated workflows
  • Technically conversant to proficient in identity and access management protocols (ex: SAML, OAuth)

Now, with our courtroom sketch of an identity-based attacker in hand, let us move on to the type of attack this persona / group deploys.

Attacks

This section will highlight three attacks: MGM, Okta, and Microsoft. To reiterate the caveat mentioned briefly in the introduction, analyzing or re-hashing a breach or attack for this piece is in no way a slight to these brands or their security teams – it is meant as a learning exercise. Moreover, the examples are necessarily approximate because we don’t know exactly what happened. Data on breaches comes from companies sharing the details required by law (sometimes more if they are proactive) and from the attackers themselves claiming credit – not ideal source material.

With those notes, one reason to look at specific breaches is to understand “why” attackers are attacking in the first place. At this point, it would be easy for you to raise your hand and say - “you just stated a few sentences ago that these attackers are financially motivated, so isn’t the why = money?”

Well, that’s part of the reason to choose these three attacks as illustrative. Let’s start with the case of MGM. In this example, the purported attack chain is as follows: attackers researched MGM employees on LinkedIn, chose specific targets whose roles and titles suggested high privilege, and contacted the help desk impersonating an employee to ask for an authentication reset (e.g. password and MFA factors). After successfully gaining entry, the attackers used the compromised super admin account to deploy ransomware, causing outages across the MGM environment – forcing MGM to cooperate and pay the attackers.

Obviously, this is an oversimplification, and the next section will dig further into the attacker techniques – but the point here is that the attackers were interested in direct financial compensation. The “why” of the attack was to gain administrative privilege at a level that would enable the deployment of meaningful ransomware – and to get paid directly from the coffers of MGM to make the ransomware stop.

This type of attack stands in contrast to the recent Okta and Microsoft breaches. In the case of Okta, a “threat actor gained unauthorized access to files inside Okta’s customer support system,” with some of the files being “HAR files that contained session tokens which could in turn be used for session hijacking attacks.” Okta believes the initial unauthorized access was caused by the compromise of some service account credentials.

For Microsoft, “a threat actor used a password spray attack to compromise a legacy non-production test tenant account and gain a foothold, and then used the account’s permissions to access a very small percentage of Microsoft corporate email accounts, including members of our senior leadership team and employees in our cybersecurity, legal, and other functions, and exfiltrated some emails and attached documents.”

In both the Okta and Microsoft attacks, there was no direct call for money. No ransomware or extortion. These attacks were about indirect financial compensation. The attackers were searching for information to be leveraged later. Session tokens for Okta customers, for example, are not as liquid as cash, but they can enable targeted exploits in the future. The same goes for compromising the emails of a leadership team member – this type of access unlocks many other attacks.

What is the point in drawing the distinction between direct and indirect financial compensation? By understanding the “why” of an attack, we can be better equipped to prevent and detect it. More specifically, it’s often easier to believe an attacker always wants direct compensation – we secure the figurative bank vault or lock boxes. However, by focusing narrowly on ways attackers might steal directly, it can be difficult to map resources to how they might steal indirectly. If we only secure super administrator accounts with the power to invoke ransomware, we might miss the support ticket system that holds credentials for other customers.

In any case, now that we’ve covered the who and the why, let’s move on to a much meatier topic: the how.

Techniques

There are a lot of attack techniques. So many in fact, that it’s no small task even to label them all. There are well-researched and maintained attack technique glossaries like MITRE ATT&CK or Push Security’s SaaS Attack Matrix. MITRE ATT&CK is a particularly broad, yet granular, repository. But, comprehensive glossaries can be hard to parse. Not all techniques are relevant to every given situation and reading through the whole framework is daunting.

Therefore, the following section will not try to recreate a complete list of all attack techniques; instead, it will take a slightly different approach to categorization. First, the scope of the section will again be identity-based methods, so it will exclude many other attack mechanisms. Second, the research for this post included many different sources on identity-based attackers, their attacks, and their techniques. The following is a compilation of “greatest hits” across those sources. Each of the following attack techniques is regularly and effectively deployed by identity-based attackers and should be considered by any security team looking to defend the identity perimeter.

Moreover, before we begin, let’s not forget that “identity is the new perimeter.” Why reference the old phrase now? Because many of these techniques are used to gain initial access to an organization’s environment. Identity, authentication, and access management are key checkpoints that attackers need to bypass to “get in.” Therefore, the techniques that follow are often mechanisms for obtaining initial access. However, they won’t be labeled “initial access” explicitly because many of these techniques can be used again to escalate privileges, move laterally, or even establish persistence.

a. Social Engineering

Many cyberattacks have some element of social engineering. The 2023 Verizon Data Breach Investigations Report finds that “74% of all breaches include the human element, with people being involved either via error, privilege misuse, use of stolen credentials or social engineering.” While the term sounds daunting, the basic definition is simple: an attacker impersonates someone with the goal of manipulating trusted users (typically employees) into knowingly or unknowingly granting access to their organization. In the real world, this might look like someone pretending to be a police officer warning families about an increase in robberies in their neighborhood over the holidays. In the cyber-world, this can look like an attacker pretending to be an employee asking the IT team for help after getting locked out of their account. Or an attacker pretending to be a member of the IT team to trick an employee into clicking on a malicious link.

As with most cyberattacks, social engineering has evolved from immature attempts to trick users (think Nigerian prince) to sophisticated, well-researched, and convincing attacks. If an employee gets an email from their IT team that says, “Resut your PassWord NOW!”, this will raise red flags. But if that same employee gets an email from their IT Director, who knows their name, their title, and the application they are using, and that email says: “Hi there, sorry to bother you, but there was an error with your account when you attempted to access Salesforce earlier this week. Can you reset your password here? Thanks!” – now that employee might not think twice about clicking on the link.

And attackers know how to do their research. They understand the tone corporate employees use in emails. They might use phrases like “pain point,” “pivot,” or “touch base.” They are proficient English speakers, so their messages sound professionally written, and they can sound convincing on the phone. They can use LinkedIn to reference people’s roles, responsibilities, and colleagues. It would take a cybersecurity professional to spot the red flags in many of these socially engineered emails. In addition to manual research and development of social engineering content, the new era of generative AI only enables attackers to scale their operations more quickly. Attackers can leverage Large Language Models (LLMs) like those behind ChatGPT and comparable offerings from Google and Meta to research firms and scrape social media more efficiently, and to craft more professionally-toned content than ever before.

b. Phishing

Phishing is probably the most well-known of the attack techniques on the internet. In Talos’ 2023 Year in Review, they found that phishing and compromised credentials made up 42% of attackers’ initial access. CISA defines phishing as a form of social engineering used by an attacker to get an individual to reveal login credentials or deploy malware. Social engineering is incorporated in a phishing attack when an attacker impersonates someone the victim knows to make the message look like it comes from a trusted source. Spear phishing takes this a step further: a specific individual or organization is targeted to give the attacker more credibility in tricking the victim. In real life, this might look like someone saying, “Hey, nice to meet you! I got your information from your brother, Henry.” You might think that since this person knows your brother Henry, they are harmless. But what if they only know you have a brother named Henry from Instagram? Now the situation is sketchy.

Like social engineering, attackers using phishing techniques have discovered new insights about user behavior to improve the sophistication and success of these attacks.

Phishing doesn’t always have to target corporate accounts to get access to corporate resources. A common practice with the rise of hybrid work is for workers to treat their work computers like their personal computers. That might mean they sync their iCloud account to their work computer to send personal text messages or log into their personal Gmail account in the same Chrome browser as their work Gmail. In these cases, phishing for personal Gmail credentials can provide similar value to corporate ones. When an attacker comes across a personal account that then opens up the keys to an enterprise, it is like striking gold. For a concrete example of attackers leveraging personal Gmail to pivot into workforce access, Cisco Talos did a nice write-up of a 2022 breach at Cisco itself.

c. Password Spray, Brute Force, and MFA Fatigue

Ultimately, the main goal of many social engineering and phishing attacks is to get a user to give up their password, and therefore access to their account. Attackers can also employ other, less subtle ways to gain access. One way they might try to force their way in is password spraying, in which they take the most commonly used passwords and try them across an entire company’s users to see if one sticks. If they try repeatedly, they’ll eventually find someone with the password “password.” Attackers might also try the brute force method, in which they try different combinations of usernames and passwords against an account – “password1” next, then “1234” after that. And attackers have seen success through these methods.
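The distinction between the two is worth making concrete, since defenders see them differently in logs: spraying produces few failures per user across many accounts, while brute force produces many failures against one account. A minimal sketch in Python, where the password list, usernames, and in-memory `credential_store` are all illustrative stand-ins for a real login endpoint:

```python
# Illustrative sketch: password spray vs. brute force.
# In a real attack the check would be a login endpoint, which is
# exactly what per-account lockout policies watch.

COMMON_PASSWORDS = ["password", "password1", "1234", "Summer2024!"]

def password_spray(users, credential_store):
    """Try each COMMON password across MANY users (few failures per user)."""
    hits = []
    for password in COMMON_PASSWORDS:
        for user in users:
            if credential_store.get(user) == password:
                hits.append((user, password))
    return hits

def brute_force(user, credential_store):
    """Try MANY passwords against ONE user (many failures on that account)."""
    for password in COMMON_PASSWORDS:
        if credential_store.get(user) == password:
            return password
    return None

store = {"alice": "Summer2024!", "bob": "correct-horse-battery-staple"}
print(password_spray(["alice", "bob"], store))  # [('alice', 'Summer2024!')]
print(brute_force("bob", store))                # None
```

This shape is why attackers often prefer spraying: one guess per user stays under per-account failure thresholds, while brute force trips lockout policies quickly.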

However, we don’t have a lot of insight into which types of attacks are used the most frequently or gaining the most traction. From the Verizon Report:

“We might have a good idea in terms of the different ways that one would be capable of getting credentials, such as buying them from password stealers who are nabbing them through social engineering or even spraying them in a brute force attack. What we don’t have is the exact breakdown of how many of our breaches and incidents are caused by each.”

But as attackers find success, organizations put new barriers in place, including multi-factor authentication. MFA using traditional push notifications has been a simple solution to protect against compromised passwords, regardless of the method the attacker used to get that password. So just like security has evolved, attackers evolve right along with it. In the past few years, we have seen a rise in MFA bombing, or MFA fatigue attacks. This is where an attacker will send one push request that a user might absentmindedly accept (especially when timed purposefully to coincide with the beginning of that user's workday) or send many MFA push requests until the user gives in and accepts it just to make their phone stop buzzing. Beyond Identity found in a survey of 1,000 users that 62% had experienced an MFA fatigue attack. And in HYPR’s The State of Passwordless Security in 2022 they found a 33% rise in these attacks from the prior year, and in the 2023 report, they found a 133% increase.

Microsoft also found that about 1% of their users will accept a simple push request on the first try. And if an attacker has a username and password, there is nothing to stop them from hitting “login” over and over, sending MFA requests to the correct user again and again. While not the most technologically sophisticated, these MFA attacks take advantage of workers’ familiarity with MFA or harass them into accepting a fraudulent MFA request.
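Defenders can often spot the fatigue pattern in authentication logs by counting push requests per user within a short window. A minimal detection sketch; the event format, window, and threshold are illustrative assumptions, not any vendor's actual schema:

```python
from collections import defaultdict

# Hypothetical push-notification events: (username, unix_timestamp).
# Real data would come from the MFA provider's audit logs.
PUSH_EVENTS = [
    ("alice", 100), ("alice", 104), ("alice", 108), ("alice", 111),
    ("alice", 115), ("bob", 200),
]

def fatigue_suspects(events, window_seconds=60, threshold=4):
    """Flag users who received >= threshold pushes inside any one window."""
    by_user = defaultdict(list)
    for user, ts in events:
        by_user[user].append(ts)
    suspects = set()
    for user, stamps in by_user.items():
        stamps.sort()
        for start in stamps:
            # Count events falling in the window beginning at this event.
            in_window = [t for t in stamps if start <= t < start + window_seconds]
            if len(in_window) >= threshold:
                suspects.add(user)
                break
    return suspects

print(fatigue_suspects(PUSH_EVENTS))  # {'alice'}
```

Tuning the window and threshold to an organization's normal push volume is the hard part; a burst of five pushes in fifteen seconds, as with "alice" above, is rarely legitimate.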

d. Session Hijacking

Session Hijacking is another form of “compromising” an identity’s “credentials.” Traditionally, credential compromise is associated with the stealing of a username and password – or, potentially even an MFA one-time passcode as well. These types of “credentials” are presented at authentication to prove an identity can be trusted. However, the authentication process for web applications typically mints another type of credential: a session token.

Session tokens are granted after an identity has been authenticated so that user can stay signed in across different parts of a web application, or – if the organization is using a Single Sign-On solution – across different applications all together. The session token is stored by the web browser and referenced at subsequent waypoints as a credential proving the user has already authenticated. Just like other forms of credentials, they aren’t inherently bad - they help make navigating web applications much more convenient by reducing the number of times users are expected to present credentials manually.
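Server-side, the token lifecycle is simple, and that simplicity is part of the risk: whoever presents a valid token is treated as the authenticated user. A minimal sketch, with an in-memory store and hypothetical function names standing in for a real web framework:

```python
import secrets
import time

# token -> (username, expiry). Real apps use server-side or signed stores.
SESSIONS = {}

def mint_session(username, ttl_seconds=3600):
    """Issue an opaque session token after authentication has succeeded."""
    token = secrets.token_urlsafe(32)
    SESSIONS[token] = (username, time.time() + ttl_seconds)
    return token

def whoami(token):
    """The server checks only the token -- never who is presenting it."""
    record = SESSIONS.get(token)
    if record is None or record[1] < time.time():
        return None
    return record[0]

token = mint_session("alice")
print(whoami(token))  # 'alice' -- for the legitimate user,
                      # and equally 'alice' for anyone holding a stolen copy
```

Because `whoami` never re-verifies the presenter, a stolen token is as good as the original authentication, which is exactly what makes session hijacking attractive.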

However, a motivated attacker can target session tokens as an effective means to masquerade as an authenticated end user. By stealing a valid session token, attackers can access the same application as the original user by providing the stolen token – effectively “hijacking” the session.

There are currently two common ways for an attacker to steal a token. One is using Info Stealer malware delivered via phishing email or an attacker-controlled website. The use of malware for stealing access credentials is an unsettling trend – so much so that SpyCloud listed it as the number one trend in their Annual Identity Exposure Report for 2023. Info Stealer malware can target credentials of any type (e.g. passwords in a password manager), but it is often used specifically to steal session tokens from the browser.

The second common method to steal a session token is by phishing a user to walk through an adversary-in-the-middle workflow as opposed to the valid authentication process. An adversary-in-the-middle attack directs an end user to an attacker-controlled domain with prompts to authenticate to an application the worker uses. In many cases, the attacker domain is proxied to the valid application and is veneered to look exactly like the normal login page or process.

When the unsuspecting user enters their credentials into the attacker-controlled page, the malicious domain proxies the entered credentials to the valid site. In cases where multi-factor authentication is required, the end user is then prompted with a regular MFA challenge, in whatever methods they normally have enabled for the application. The adversary-in-the-middle domain then forwards the MFA response to the valid application and receives the session token alongside the user. And, voila, the attacker is now the proud owner of a valid session token.

e. Business Email Compromise (BEC) & Instant Message Compromise

Business Email Compromise (BEC) is when an attacker leverages the control of a corporate-owned email account to fraudulently interact with employees, customers, or external vendors. Since control of an email account is required, BEC is typically attempted after an attacker has gained initial access. BEC is a common way for attackers to realize their direct or indirect financial goals.

The most common example of directly realizing financial fraud is the attacker searching the email inbox for conversations related to payments. After researching the process for completing a legitimate transaction within the organization, the attacker will initiate a fraudulent transaction via proper channels and contacts – using their control over a trusted email to diminish suspicion.

However, attackers can also leverage BEC to satisfy indirect financial goals. An indirect goal might be researching the credentials and behaviors of more powerful individuals in the company before attempting to compromise their inbox. It can also mean posing as a trusted business partner to gather information about external vendors or customers that might be worth exploiting. Business Email Compromise has been an effective technique for attackers for some time now. In 2022, the FBI reported $2.7 billion in organizational losses associated with BEC.

However, it is important to note that attackers are also constantly evolving their techniques. The value of BEC for an attacker is leveraging an organization’s trusted communication channel for fraudulent purposes. Therefore, now that many companies use instant messaging applications like Slack or Microsoft Teams alongside their corporate email, attackers have a new avenue for the same play. Let’s call it Instant Message Compromise.

Luke Jennings and the Push Security team have done some fantastic research on the rise of Slack and Teams compromise. The concept is similar – attackers can now look to compromise the messaging application within an organization and leverage that trust into direct or indirect compensation. The worrisome element in this case is the relative immaturity of security protocols around messaging. For example, many companies use Slack to chat with customers and third parties via a feature called Slack Connect. Attackers can leverage Slack Connect to gain initial access and then fade into the Slack environment by customizing profiles and editing old message content. Instant Message Compromise is another example of how any given technique can be adapted to the new normal – as business email has transitioned to business chat, attackers have evolved as well.

f. Compromised Identity Provider

One of the more powerful techniques an attacker can perform is to compromise the identity infrastructure itself. Compromising an identity provider is a bit like using a genie to wish for more wishes: when done successfully, an attacker’s privileges and power within the organizational environment become extremely formidable. To compromise the identity provider at all, an attacker must gain super administrative privileges to the relevant identity infrastructure. This typically means gaining access to Microsoft Entra or Okta as an administrator. Therefore, this technique usually comes later in the attack chain, after an attacker has already gained initial access.

Once an attacker has obtained administrative privileges to identity infrastructure – there are several key actions they can take to deepen their footprint and advance their attack. For one, they can change authentication requirements. It would be easy for them to remove an MFA requirement – or even just weaken the additional factor needed to authenticate. This would let them exploit other high-profile targets like a CEO or CFO more easily.

Another powerful way to use administrative access to an identity provider is to create additional administrators or modify existing administrators to use credentials the attacker controls. This helps both to expand their footprint and to cover their tracks as they move on with the exploit. After creating the new administrator or modifying an old one, they can usually delete the associated logs. Moreover, a newly minted administrator typically won’t trigger detection logic that requires a baseline of behavioral data.
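These administrative actions do leave traces before the logs are scrubbed. A detection sketch that flags privilege grants and audit-log tampering in IdP events; the event schema below is invented for illustration and does not match any specific vendor's log format:

```python
# Hypothetical IdP audit events. Real formats vary by vendor, but most
# expose some equivalent of action / actor / target fields.
AUDIT_EVENTS = [
    {"action": "user.login", "actor": "alice", "target": "app"},
    {"action": "role.grant", "actor": "helpdesk-svc",
     "target": "new-user-42", "role": "super_admin"},
    {"action": "log.delete", "actor": "helpdesk-svc", "target": "audit-trail"},
]

def admin_grant_alerts(events):
    """Return events that grant high-privilege roles or touch the audit trail."""
    alerts = []
    for event in events:
        if event["action"] == "role.grant" and event.get("role") == "super_admin":
            alerts.append(event)  # new or elevated administrator
        if event["action"] == "log.delete":
            alerts.append(event)  # possible track-covering
    return alerts

for alert in admin_grant_alerts(AUDIT_EVENTS):
    print(alert["action"], "by", alert["actor"])
```

The key design choice is to alert on the action itself rather than on behavioral baselines, since a brand-new administrator has no baseline to deviate from.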

Finally, and most dramatically, an identity administrator can start to modify the identity infrastructure itself. Okta’s security team recently highlighted an exploit where, after gaining administrative access via social engineering, attackers configured a new Identity Provider (IdP) and linked this IdP to the compromised organization’s original Identity Provider via an inbound federation relationship. This inbound federation meant the attackers could effectively pass authentication from a “poisoned” IdP under attacker control to any application that trusted the organization’s original IdP. This is an incredibly powerful way to move laterally within an organization into any application protected by the original IdP.

Well, there you have it, we’ve covered quite a bit of ground on techniques. As a reminder as we conclude the section, these techniques are not the only ones employed by identity-based attackers. However, they would most certainly be on the greatest hits album – noteworthy for both their consistent presence in identity-based attack chains and their continuing effectiveness. Before moving on to Part II, it’s important to highlight that this section aimed to vividly illustrate identity-based attackers, the types of motivations and attacks they commit, and the techniques they use to do so. By providing an accurate picture of these attackers, it becomes easier to strategize about how to defend against their attacks & techniques both by preventing them outright and detecting them quickly and effectively when they do strike. In the next section, the piece will discuss both concepts and concrete measures organizations can employ to prevent identity-based attacks – or at least dramatically bolster their defenses.

02. Prevention

The prevention section has two goals. The first is to discuss theoretical or philosophical approaches to improving posture against identity-based threats. The second is to explore concrete security controls that can be put in place to aid in preventing identity attacks, or at least limiting their blast radius. Both are important considerations when developing an effective security program. On the one hand, theory provides guiding principles for how to approach security strategically. Yet, one criticism of many theories is that they are idealized or impossible to put in place optimally. That may be true, but by using them as north stars – it becomes easier to know which direction to go. On the other hand, tactical security controls are incredibly helpful in the concrete sense that they can be put in place and monitored. But, by the same token, without a grander vision for how the tactics fit together – any given security control will be insufficient.

Security Philosophy: Zero Trust, Least Privilege, Zero Standing Privilege

To be a successful security organization, the team must align on a strategy. As mentioned, good security philosophy can guide good security practices. Zero Trust is a profoundly useful theory. As a set of guidelines, it provides strong value for security practitioners looking to address identity-based attacks. Unfortunately, over the last 10 to 15 years it has also become an adjective to be placed in front of any security tool for sale. To take a step back, let’s do a quick recap of Zero Trust, because at its heart – it's an incredibly simple philosophy.

After the analyst firm Forrester coined the term in 2009, “Zero Trust” became popular in the early 2010s in response to the fact that network-based security was becoming less reliable and effective. Trends like the migration to cloud-based computing, the increase in remote work, and the use of personal devices for work all decreased the potency of basing trust on whether a user was on a company-owned network. The core tenet of Zero Trust is relatively straightforward: instead of inherently trusting any user or device based on its connection to a trusted network – start every interaction as if the user or device was untrustworthy. This statement manifests itself in a few key principles:

  1. Never assume an entity is trusted
  2. Verify trust in an entity whenever possible
  3. Only give as much trust as required

Just like the Golden Rule tends to show up across religions and moral frameworks, these Zero Trust tenets tend to apply regardless of technology stack or control point. This means, as mentioned, basically any vendor or tool can argue that they provide some flavor of Zero Trust. This isn’t necessarily bad – it's just important to recognize the tool’s respective scope. To use another moral analogy, most vendors are saying the equivalent of “we’ll help you be a good person at ... the grocery store” (i.e. morality scoped to one location).

Another issue to look out for is the evolution in approach to any of the foundational tenets. The last decade or so sparked an ongoing discussion of ways to make Zero Trust principles more concrete and actionable. For example, the third tenet has carved out its own niche known as Least Privilege – or the idea that any user should only be provisioned the level of entitlement and privilege associated with their explicit job function. Least Privilege is often associated with the Identity Governance market space and specifically the functions of resource provisioning, role-based access control, and access control lists.

However, just like any philosophical debate, the foundations may stay the same, but the approaches may change over time. Recently, the security startup SGNL posted a challenge to the traditional Least Privilege model and called for a move to Zero Standing Privilege. The core qualm posed by Ian Glazer and the SGNL team is that under the Least Privilege model, users are still granted “birthright” privileges associated with their roles, but they don’t need these privileges on say, nights and weekends. The concept of Zero Standing Privilege argues that any given account should only be provisioned entitlements during an active session – when that session terminates any privileges associated with the account should be revoked. Zero Standing Privilege is associated with granular authorization policy and just-in-time access controls.

What’s the point of running through this? Well, there are three. For starters, security philosophies like Zero Trust are often very simple and provide nice guideposts for security strategy. However, it’s always up to the practitioner to choose how to concretely implement the theoretical framework (i.e. picking approaches, tools, and processes to realize the strategy). Second, when a philosophy becomes popular – security vendors of all stripes will claim to “do” the philosophy. It’s important to understand a vendor’s scope and how they will fit (or not) into an implemented strategy. Finally, approaches to strategic implementation will change (or at least be debated) over time – it's good to follow new approaches to any security framework to assess when a new way of doing things might significantly improve the current configuration.

However, at the end of the day, picking a strategy is only one portion of the puzzle. To realize any strategy, the security team must implement controls, tools and processes. The next section will highlight a compilation of concrete preventative measures for identity-based attacks. Again, these controls represent an amalgamation of best practices across a wide variety of sources. Hopefully, this will help security teams build out an identity security program, improve the one they have, or at least compare notes.

Preventative Measures

In this section, let’s move on from theory to practice. Given the profile of identity-based attackers and their techniques, what can be done specifically to deter and prevent their efforts? This section will cover measures that enhance and harden security posture against identity-based threats. Let’s start with a control that will span both prevention and detection and that’s identity visibility.

a. Visibility

The old adage “you can’t protect what you can’t see” also holds true for identity-based attacks. For any security team looking to prevent or detect identity-based attacks, a first step is understanding the current identity infrastructure. This means understanding the lifecycle of any given identity in the corporate environment and the macroscopic trends of identities on a daily basis.

Let’s zoom in on the lifecycle of any given identity. The security team should know each step in the process of a new employee getting their corporate accounts set up and provisioned with privileges. For example, is an employee record created in the HR system first and then synced to a corporate directory system? Are there multiple corporate directories? Does an employee's identity live in multiple sources? Which is the source of truth? What happens when an employee changes departments or leaves the company? How are privileges modified or deleted? By understanding this workflow end-to-end, the security team gains visibility both into the posture of the identity infrastructure (i.e. is the process secure?) but also into potential entry and lateral movement points in the identity lifecycle.

Regarding the macroscopic trends in the environment, a security team should understand the corporate access happening daily – especially for powerful accounts. The Microsoft Threat Intelligence organization explicitly recommends both understanding authentication flows in the environment and centralizing visibility of administrative activity into an easy-to-use report. For authentication flows, the types of authentication in use are particularly valuable to track – especially which identities are not performing stronger forms of authentication and why. On administrator activity, keeping logs of administrative creation, modification, and deletion actions will provide data on an “average day” for these powerful users.

By gaining a baseline of identity activity in an environment, it becomes easier to both maintain a stronger identity posture and detect suspicious or even malicious activity. A fair question to ask at this point is: “okay, but how?” As we want this section to provide concrete recommendations (not just theory), let’s discuss a couple methods for gaining identity infrastructure visibility.

To start, most Identity tooling will have a dashboard for activity within that platform. If the organization only uses one identity provider, this might prove the most useful place to gather information. However, if several tools are in place (as is the case in many larger enterprises), security teams may want to collect all relevant identity information into one place. Most Identity and Access Management tools have decent logging capabilities. If there is too much identity software in the environment to check each dashboard sustainably – centralizing relevant identity logs is the correct course of action. There are a variety of mechanisms to do this effectively pending organizational size and budget, but the most common ways are to centralize logs in:

  1. A Security Information and Event Management (SIEM) tool
  2. A custom, open-source logging stack (ex: Elasticsearch, Logstash, Kibana)
  3. A dedicated Identity Threat Detection & Response tool

It’s beyond the scope of this piece to recommend which method will work best for any given organization – but the key point remains, security teams need visibility into their identity infrastructure and activity if they are going to prevent identity-based attacks.

b. Authentication

The next control to consider is authentication. Getting back to basics, authentication simply means proving you are who you say you are. On the internet, this is slightly more complicated than a handshake. As discussed, attackers are consistently and effectively impersonating corporate identities by leveraging potent techniques. Any time an attacker successfully utilizes an identity linked to a legitimate employee for nefarious purposes it is at least partly a problem of authentication. The problem of authentication is an old one on the internet – and many solutions have been put in place only to repeatedly be subverted by attackers. The first and most longstanding is passwords.

    i. Password Complexity

Most people understand the problems with passwords at this point (i.e. easy to steal, simple to guess, easy to spray – to name a few). But their use is still widespread as a first line of defense against identity-based attacks. The rise of MFA and now passwordless authentication is mitigating some of the password risk, but until the password passes into antiquity – it's still important to have good password hygiene wherever passwords exist in the environment. The two traditional mechanisms to increase the efficacy of passwords were to establish complexity requirements and to frequently rotate passwords. Notably, NIST’s current guidance (SP 800-63B) removed periodic rotation as a recommendation and also discourages arbitrary composition rules – favoring longer passwords screened against lists of known-breached credentials. In any case, the password remains a relatively weak preventative measure. This is why adding an additional factor to authentication has been the standard recommendation for many years.
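As a sketch of what a modern password screen might look like in practice – note that the breached-password set below is a tiny illustrative stand-in for a real feed of compromised credentials, and the minimum length is an assumption:

```python
# Minimal NIST-style password screen (sketch): favor length and a
# breached-password blocklist over arbitrary composition rules.
COMMONLY_BREACHED = {"password", "123456", "qwerty", "letmein"}  # illustrative only

def password_acceptable(pw: str) -> bool:
    """Accept passwords that are long enough and not on the blocklist."""
    return len(pw) >= 8 and pw.lower() not in COMMONLY_BREACHED
```

In a real deployment, the blocklist check would call out to a service or a locally synced corpus of breached credentials rather than a hard-coded set.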

    ii. MFA and the MFA Spectrum

Most people understand the concept of multi-factor authentication at this point. The idea is that instead of just providing a single factor to authenticate, a user must also provide a second factor of assurance. In most workflows, the first factor is a password (something they know) and the second factor has traditionally been tied to a device (something they have). In many consumer use cases, the second factor is a text message sent to a phone the user (in theory) controls.

However, MFA is more nuanced than that. And to dig in, it’s helpful to break down the relative strength of MFA factors and the use cases they can help prevent. Phillip Schafer, a Senior Manager of Security Data Science at Duo, recently published a blog on the different types of MFA and how they hold up against identity attacks.

  MFA type        | Physical compromise | Logical compromise | Phishing & MFA fatigue | Social Engineering | Adversary in the Middle
  WebAuthn-based  | Varied              | Strong             | Strong                 | Varied             | Strong
  Push-based      | Varied              | Strong             | Varied                 | Varied             | Weak
  Token-based     | Varied              | Strong             | Weak                   | Weak               | Weak
  Telephony-based | Weak                | Weak               | Weak                   | Weak               | Weak

Most people are familiar with push-based or telephony methods in their personal, corporate, or student life. Telephony is when an SMS (Short Message Service) passcode or a phone call allows users to authenticate on their phones. Many individuals will use this type of authentication when logging into their personal accounts, like social media or their bank account. This is also a preferred method when an organization’s users do not have access to smartphones but need another factor to authenticate. However, it’s also the easiest for attackers to take advantage of, as they can intercept the code and use it to login themselves.

Push-based authentication is when a user will accept a login request on an authentication application. This can either be a simple, “approve” or “deny,” or the user could be required to input a code from their access device (like their laptop) into the application. For the simple push-based requests, users run the risk of accepting a fraudulent request as we saw in the MFA fatigue/push bombing scenario. However, if trusted users are required to input a code every time they login, that can also lead to a decline in productivity and an increase in frustration with the security measures.

Token-based authentication options are a little less common, as they require a hardware device or application to generate a single-use passcode. These passcodes expire either after they are used (HOTP, or HMAC-based one-time password) or after a set amount of time (TOTP, or time-based one-time password). Token-based codes share a similar weakness with telephony, as both are vulnerable to passcode phishing. TOTPs have a slight advantage since they expire after a time limit, unlike HOTPs, which can be used any time after they are generated.

Ultimately, the gold standard in MFA options, that can also be the most difficult to implement, is WebAuthn, or the Web Authentication API, backed by FIDO2 standards. A user registers their device and a biometric or code to receive credentials from the application without using a password to login. A typical WebAuthn login flow would include the user putting in their email, using TouchID to confirm their fingerprint (something you are) on their device (something you have), and that’s it. It’s easier for the end user and it’s phishing-resistant because those credentials can only be used on the actual website, and not a fake phishing page.

For employees that are used to painstakingly long and tedious login procedures, this sounds like a dream. Anyone who unlocks their phone with their face or their computer with their fingerprint knows how much time – and brainpower spent remembering passwords – it can save. You would think that anything that saves employees time (and therefore productivity and therefore money) and is also the most secure method would be an easy win for everyone. But when are things ever that easy?

    iii. Challenges with MFA

Given the spectrum above, it seems like the correct answer is to use the strongest form of authentication in all cases. In all the analyses of identity-based attacks researched for this piece, the first recommendation was to put phishing-resistant MFA in place as widely as possible. It came up every time. So why don’t teams do this currently? The answer is: it’s complicated!

CISA and the NSA had a task force look into this issue, and they found that adoption and employment of secure MFA was a key challenge with the technology that is currently available. One reason is that organizations working, or mandated, to deploy MFA do not know the different types of authentication methods available through their IAM provider. They also do not know if the MFA solution works with their infrastructure, or what other options exist in the market if their current provider cannot meet their needs. Confusing and competing terminology across vendors makes this even more difficult for organizations.

There is also a lack of clarity on the MFA spectrum. Organizations need to be informed about the potential vulnerabilities of the less secure authentication options and be encouraged to adopt phishing-resistant MFA. On the vendor side, IAM vendors might support FIDO2 authenticators but have restrictions on how they can be used which can limit adoption. These restrictions can be around the need for a biometric on the device, or certain modernization requirements to utilize WebAuthn credentials.

Another key challenge with MFA can be the governance around enrollment, modification, and deletion of MFA factors. Organizations must manage the life cycle of employees. All authentication credentials are associated with user identities and must be managed just like the identity itself as employees onboard and offboard into the organization.

So what does the MFA lifecycle look like for an end user? When an employee is hired, they will be given their new corporate email and will have to self-enroll in MFA. Once enrolled, the user might also have privileges to enroll a new device if they lost theirs or upgraded their laptop. In this scenario, the user should be able to manage their devices in the MFA portal, but there should be requirements around only secure factors on trusted devices so attackers cannot register their attacker device on a trusted user’s MFA account. Finally, organizations should have policies around factor reset, or resetting or modifying MFA factors for existing users. This can include policies around when to provide a bypass code if a user is locked out of an account, and when users can use weaker factors (or only strong factors) to access specific applications.

c. Authorization

Authentication and Authorization (or, AuthN and AuthZ for the hip) are often confused, but their distinction is incredibly important for preventing identity-based attacks. As mentioned, authentication is confirming an identity is who they claim to be – authorization, on the other hand, is the level of privilege associated with a given identity in any particular use case. The most common example is that a guest may be authorized to come into your home, but not necessarily certain rooms. Or, your dog may be authorized to be in the house, but not on the couch. This problem is separate from whether the guest is the person you know, or that the dog is really your dog.

In the workforce identity space, authorization is often designated a component of corporate operations (i.e. this new worker needs access to applications to do their job). As such, the process of provisioning application access and related privileges typically lives under the umbrella of IT. However, as mentioned in the visibility section, it is important for security professionals to have insight into this process and to collaborate with IT for mapping which types of employees get which types of access.

For example, many organizations leverage user groups and roles in their identity infrastructure to control access and privilege. The security team should understand these groups and help designate their relative privileges. The third Zero Trust principle should be invoked at this point: whenever possible, identities should only be granted the level of privilege needed to do their jobs. Another piece of the puzzle is controlling privileges within applications and resources. Many applications include some form of role-based access control, designating specific roles with the power to read, write, edit, and delete etc. For example, an administrator may be able to do all four actions, while an analyst can only “read.” Making sure that employee identities have the proper power within applications is also key to limiting the “blast radius” of any given identity compromise.
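The read/write/edit/delete example above can be sketched as a simple role-to-permission mapping with default deny – the role names and permission sets here are hypothetical, and a real system would source them from the identity provider or application:

```python
# Hypothetical role-to-permission mapping illustrating role-based access control.
ROLE_PERMISSIONS = {
    "administrator": {"read", "write", "edit", "delete"},
    "analyst": {"read"},
}

def is_authorized(role: str, action: str) -> bool:
    """Allow an action only if the role explicitly grants it (default deny)."""
    return action in ROLE_PERMISSIONS.get(role, set())
```

The important design choice is the default: an unknown role or unlisted action gets nothing, which is the third Zero Trust tenet expressed in two lines.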

One concrete example of improving authorization posture is the problem of dormant or inactive accounts. Unused accounts still retain their privileges! Therefore, they are a compelling avenue for identity-based attackers to probe for entry. By making sure that unused accounts are deactivated, security teams can reduce a segment of their identity perimeter.
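A periodic sweep for dormant accounts can be as simple as comparing last-login timestamps against an inactivity threshold – the 90-day limit and the record shape below are assumptions for illustration:

```python
from datetime import datetime, timedelta

INACTIVITY_LIMIT = timedelta(days=90)  # assumption: 90 days of inactivity = dormant

def dormant_accounts(accounts: list[dict], now: datetime) -> list[str]:
    """Flag accounts whose last login is older than the inactivity limit."""
    return [a["user"] for a in accounts if now - a["last_login"] > INACTIVITY_LIMIT]
```

Flagged accounts would then feed a deactivation (or at least review) workflow rather than being deleted outright, since some service accounts legitimately log in rarely.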

When it comes to administrative activity specifically, the security team should be intimately involved in monitoring and protecting these accounts. For organizations at a larger scale, this can often mean directly investing in a Privileged Access Management (PAM) solution dedicated to securing powerful super administrators. But in general, there are a few key controls to think about when approaching administrative accounts. The first is to limit the number of administrator accounts to the lowest possible number. The second is to be strict about granting new administrative accounts and be extremely proactive in disabling or deactivating unused administrators. Finally, monitor administrative activity closely for anomalous or suspicious activity. The piece will cover things to monitor more thoroughly in the detection section, but understanding administrator activity and actions is critical to identity security.

d. Access Policy

Access Policy is another key element in the prevention of identity-based attacks. The idea behind access policy is relatively straightforward: deciding when, how, and where user identities are allowed to access corporate applications and resources. Microsoft’s Conditional Access feature is probably the most ubiquitous mechanism to implement access policy today – but all identity and access management tools have some way to implement policy (for example, here’s documentation for setting policy in Okta and Duo).

While the stated goal of access policy is simple, implementing effective policy is incredibly difficult. There are a few reasons for this, but for starters – nested if-then logic is always a pain. And access policy dealing with any sort of user group or per-application components tends to become a tangled mess of if-then properties very quickly. A second complicating component is whether there is policy from other security tools overlapping or overriding the policy from the IAM tool. Sure, it’s possible to set policy in the identity provider, but that policy is meaningless if the endpoint detection and response (EDR) software or the Zero Trust Network Access (ZTNA) tool overrules it. Finally, policy is often a blunt tool. Instead of providing granular options for what happens when a policy state is triggered, many tools will offer simple allow/block functionality. This can lead to either blocking users too often (inciting the gnashing of teeth) or letting many “close calls” pass through.

All that to say, policy is not simple in implementation. It takes effort and planning to make sure access policy is doing the correct thing in the correct moment. But, taking that into account – there are some core components that should be added to access policy to strengthen it against identity-based attacks.

    i. Device Trust Policy

Requiring that all access to corporate resources come from a trusted device is a strong deterrent against compromised credentials. If an attacker manages to steal a username, password, and even their MFA code – they will be blocked if they aren’t attempting access from the correct device. This type of policy may seem hard to put in place without managing all the devices in an environment – and for the extremely security conscious only allowing managed devices might be the right path to take. However, many identity providers now offer a slightly lighter variant of device management where end users can “register” their device in the user directory. When they do this, the directory can link their identity to a trusted device. From here, a policy can be set to enforce only previously registered devices can be used to login.

Device Trust policy can be expanded to include an assessment of the device at authentication as well. Many tools can check if a device is running up to date software or if the right security software is running. By including posture assessment alongside the trusted device check, defenders can narrow the window for attackers even further.

    ii. Location or IP Policy

Restricting access to locations or IP ranges where the organization does business can be another useful way to prevent unwanted access. If the business only has employees in the United States and Canada, then access to corporate resources should be limited to those two countries. If an attacker impersonating an employee attempts access from outside the trusted geographic region – they will be blocked. A second component of this type of policy is keeping a list of “known bad” IPs (i.e. IP addresses associated with bad actors and attack techniques) and then implementing a block on this list. This does create a relatively proactive chore of keeping the known bad IP list up to date, but keeping abreast of the recent hacks and exploits is time well spent anyway.

However, this type of policy has gotten more difficult in recent years due to the rise of remote work and the prevalence of personal Virtual Private Networks (VPNs). Many legitimate users work from all over the globe and many also leverage a personal VPN (often to watch entertainment from other countries). Considering this, it might be helpful to keep an up-to-date directory of user locations to understand where access should be expected from – then update policy accordingly. In addition, it might be worth banning personal VPN use on computers accessing company resources. These aren’t perfect solutions and may not even be viable (especially if unmanaged devices can access corporate resources).

That said, creating a policy that blocks locations where there is no business being done – or known bad IP address ranges – is always a good first step.
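A minimal sketch of such a policy check, using the standard library – the allowed countries and the known-bad network (drawn from a documentation IP range) are illustrative assumptions, and real blocklists would come from a threat intelligence feed:

```python
import ipaddress

ALLOWED_COUNTRIES = {"US", "CA"}  # assumption: business only operates in the US and Canada
KNOWN_BAD_NETWORKS = [ipaddress.ip_network("203.0.113.0/24")]  # illustrative bad range

def allow_access(country: str, ip: str) -> bool:
    """Deny known-bad networks first, then enforce the geographic allowlist."""
    addr = ipaddress.ip_address(ip)
    if any(addr in net for net in KNOWN_BAD_NETWORKS):
        return False
    return country in ALLOWED_COUNTRIES
```

Ordering matters: the known-bad check runs first so a malicious IP inside an allowed country is still blocked.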

    iii. Session Length Policy

Most identity providers allow administrators to designate how long a session will last post-authentication. There is typically an arbitrary starting value (ex: 12 hours) and it is set across all access policies by default. By shortening session length, security teams can tighten the time attackers have before needing to prove their impersonation again (hopefully, this is hard to keep doing!)

However, limiting session length is a double-edged sword, as shorter sessions mean more time spent re-authenticating for legitimate end users (again, potential gnashing of teeth). The compromise here is typically to focus on shortening sessions for highly sensitive applications and highly privileged users – something about with great power comes great responsibility (to authenticate often).

    iv. Risk-Based Policy

Many IAM vendors today offer risk-based policy, which can be more adaptive than static policy. In most cases, risk-based policy evaluates the current login context against that same user’s history of access attempts. Common attributes to consider are operating system, browser, and location (based on IP address). For example, it’s very common to check whether the user is accessing from a new or anomalous location compared to their usual workday.

The cool thing about risk-based policy is that it can detect new attributes and adapt the authentication experience accordingly. This can be very helpful in catching or preventing an identity-based attacker attempting to login with stolen credentials because they will have a hard time mimicking all of the attributes associated with a historical login from the true identity.
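A stripped-down version of this comparison can be sketched as counting never-before-seen attributes in a login against the user’s history – the attribute names and the records are hypothetical, and real products weight attributes rather than counting them equally:

```python
RISK_ATTRIBUTES = ("os", "browser", "country")  # assumption: attributes available per login

def risk_score(login: dict, history: list[dict]) -> int:
    """Count how many of the login's attributes have never been seen for this user."""
    return sum(
        1 for attr in RISK_ATTRIBUTES
        if login[attr] not in {h[attr] for h in history}
    )
```

A score of zero looks like a routine login; a higher score is exactly the situation described above, where a stolen-credential attacker struggles to mimic every attribute of the true identity at once.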

However, there are some factors to consider with risk-based policy. For one, risk-based calculations tend to vary from vendor to vendor – and there is a lot of marketing around the functionality. When evaluating risk-based policy, ask which attributes are considered at authentication and what they mean. As an example, be skeptical of a long list of “location” variables (i.e. country, state, city, etc.) – as they are most likely all derived from IP address.

It is also completely fair to ask how risk is calculated – and be wary of hand-wavy answers about AI or machine learning. Even if there is some sort of machine learning going on, the behavior it is detecting should be simple to explain. Finally, it’s good to ask about when the risk calculation takes place. It is one thing to calculate risk and adapt access policy in real-time. It’s a very different thing to calculate risk up to 24 hours later.

    v. Role & Application-Based Policy

As a final thought, it’s worth pointing out that access policy can and should vary by user group and application. As noted above, it can drastically increase the complexity of a policy stack – but, targeting even basic grouping can be very helpful. For highly sensitive applications, it is totally acceptable to increase the authentication requirements to the strongest forms of MFA and restrict access to only trusted devices and locations. The same goes for privileged user groups – they can and should expect to face more stringent access policy, especially when performing powerful actions.

e. User Education

The last preventative measure we’ll discuss is end user education. As noted, most people know not to send money to Nigerian princes. However, as social engineering attempts get more sophisticated, putting in place an annual (if not more frequent) security training program is fundamental. KnowBe4’s Phishing By Industry Benchmarking Report found that undergoing even basic training reduces a user’s likelihood of opening a phishing email by ~50%.

The reason user education is so important is that the mechanisms employed by attackers keep evolving and improving. For example, the urgent request from a stranger may now look like a relatively calm request from the CFO. Knowing that sensitive requests should always get third-party validation is a helpful reminder for employees.

As another example, simple phishing emails with somewhat obvious nefarious links have been replaced by sophisticated adversary-in-the-middle workflows that look and feel just like the typical login experience. However, one extremely helpful tidbit for end users is to review the URLs in a login flow. Even advanced phishing techniques that create aesthetically identical webpages will still have to use an illegitimate domain they control to proxy the information.

These are just two simple use cases, but they highlight the need for end users to have up to date context on what types of techniques are out there and how they can help defend against them.

In this section, we’ve covered approaches to preventing identity-based attacks at both the strategic and tactical level. Hopefully the section highlights that aligning the security team on a strategic north star is critical – and provides some key prevention areas to consider and implement.

In the final section, the piece will dive into the world of detecting identity-based attacks. While prevention is about reducing attack surface and increasing the defenses along the identity perimeter – it’s important to assume attackers will still make it past the front gates. When they do, detecting them quickly and effectively helps minimize their impact.

03. Detection

The structure of the detection section will consist of two parts. In the first part, we’ll overview commonly recommended detection logic for identity. This is the type of logic that often comes out of the box from an IAM vendor or threat detection tool like a SIEM. If it doesn’t come out of the box, it’s often relatively easy to implement as it relies on basic logic and limited calculation. The second part of the section will cover the prospect of writing original detection logic for identity-based attacks. This will cover examples of relevant logs to use in writing identity-based detections – and thoughts on iterating and improving said detections.

As a note, before diving in, we understand that teams actually do identity threat detection in a few different places. It can be in a Security Information and Event Management (SIEM) tool, a general-purpose logging tool modified to ingest relevant security logs, the IAM vendor’s threat detection feature, or even a dedicated Identity Threat Detection and Response (ITDR) tool. Or an organization might not be doing dedicated identity-based threat detection at all currently. The point of this section is to be generic enough to be relevant to all types of security teams – regardless of current tooling.

Recommended Detection Logic

To start, let’s break down commonly recommended detection logic into a few categories: known suspicious attributes, known suspicious patterns, unfamiliar or new attributes, and identity administrator actions.

a. Known Suspicious Attributes or Indicators of Compromise (IOCs)

This type of detection is based on lists of known bad things. The format is traditionally, “if I see this known bad thing attempt an action, alert me.” The logic can take many forms from there, but an example related to identity-based attacks might be something like: If user credentials attempt access from this known bad IP, alert me.
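To make the “known bad thing” logic concrete, here is a minimal sketch in Python. The event shape (`{"user": ..., "ip": ...}`) and the hard-coded IOC set are illustrative assumptions, not any vendor’s schema – in practice the list would be fed by threat intelligence rather than maintained by hand.

```python
# Minimal sketch of IOC-based detection: alert when an access attempt
# originates from an IP on a known-bad list. The IOC set and event
# shape are illustrative only.

KNOWN_BAD_IPS = {"203.0.113.7", "198.51.100.23"}  # RFC 5737 example addresses

def check_ioc(event):
    """Return an alert string if the event's source IP is a known IOC, else None."""
    ip = event.get("ip")
    if ip in KNOWN_BAD_IPS:
        return f"ALERT: access attempt by {event.get('user')} from known bad IP {ip}"
    return None
```

The logic is trivially easy to implement – which matches the point above: the hard part is not the check, it’s keeping `KNOWN_BAD_IPS` current.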

The nice thing about this form of detection is it can be relatively easy to implement. However, the challenging piece can be maintaining the list of known bad things. Attackers tend to learn quickly and move on from IOCs before they become too well-worn.

One way to address this challenge is to partner with third-party threat intelligence feeds to inform the detection logic. An example would be calling services like IPInfo or Greynoise to enrich the IP data in any given detection. This can be helpful as the impetus to maintain and refine a list of bad things is outsourced to the third-party. Another way to understand known bad indicators would be to use vendor-proprietary services. Okta has a tool called ThreatInsight that “aggregates data about sign-in activity across the Okta customer base to analyze and detect potentially malicious IP addresses.” Given that Okta has thousands of customers, the IPs highlighted by ThreatInsight are likely to have significant signal. And, once again, organizations aren’t forced to constantly do malicious IP homework.

b. Known Suspicious Patterns

Known suspicious patterns are like known suspicious attributes but slightly more complicated to calculate. While a suspicious attribute is typically a single variable to be evaluated as good or bad, suspicious patterns are composed of multiple actions or events. They can be more difficult to detect because the actions must be accurately linked together for the pattern to be visible.

A simple example is the brute force technique highlighted in Part I of the piece. One failed login probably doesn’t indicate suspicious behavior (who hasn’t mistyped that complicated password?). However, if the same credentials fail many times in quick succession, that pattern starts to become suspicious. The logic for this type of detection is something like: if user credentials fail to log in five or more times within a certain window, alert me. The same logic can be used with push-based MFA to address MFA bombing attempts – just swap “fail to log in” for “receive an MFA push.”
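The sliding-window rule described above can be sketched as follows. The threshold, window, and event fields are illustrative assumptions; swapping the observed event from “failed login” to “MFA push received” yields the MFA-bombing variant.

```python
from collections import deque
from datetime import datetime, timedelta

# Sketch of the brute-force rule: alert when the same credentials fail
# five or more times inside a sliding time window. Thresholds and the
# event shape are illustrative.

class BruteForceDetector:
    def __init__(self, threshold=5, window=timedelta(minutes=5)):
        self.threshold = threshold
        self.window = window
        self.failures = {}  # user -> deque of recent failure timestamps

    def observe(self, user, status, ts):
        """Feed one login event; return True when the threshold is crossed."""
        if status != "FAILURE":
            return False
        q = self.failures.setdefault(user, deque())
        q.append(ts)
        while q and ts - q[0] > self.window:  # expire failures outside the window
            q.popleft()
        return len(q) >= self.threshold

det = BruteForceDetector()
start = datetime(2024, 1, 1, 9, 0)
alerts = [det.observe("jdoe", "FAILURE", start + timedelta(seconds=10 * i))
          for i in range(5)]
# Only the fifth failure inside the window crosses the threshold.
```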

Another attacker technique that involves a suspicious pattern is password spray. Instead of targeting a single account, the attacker attempts a stolen password across many identities in an environment. To detect this type of pattern, the logic is: if five or more distinct identities fail to log in from the same IP address within a certain window, alert me.
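The spray rule pivots the brute-force logic from one user to one source IP across many users. A sketch, with an illustrative event shape and thresholds:

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Sketch of the password-spray rule: flag a source IP when five or more
# distinct identities fail to log in from it within a window.

def detect_spray(events, threshold=5, window=timedelta(minutes=10)):
    """events: list of (timestamp, user, ip, status) sorted by timestamp.
    Returns the set of IPs that look like spray sources."""
    recent = defaultdict(list)  # ip -> [(ts, user), ...] failures only
    suspicious = set()
    for ts, user, ip, status in events:
        if status != "FAILURE":
            continue
        hits = [(t, u) for t, u in recent[ip] if ts - t <= window]
        hits.append((ts, user))
        recent[ip] = hits
        if len({u for _, u in hits}) >= threshold:
            suspicious.add(ip)
    return suspicious

t0 = datetime(2024, 1, 1, 9, 0)
events = [(t0 + timedelta(seconds=30 * i), f"user{i}", "198.51.100.9", "FAILURE")
          for i in range(5)]
# Five distinct users failing from one IP inside ten minutes.
```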

While brute force and password spray are relatively straightforward examples, suspicious patterns can get complicated when attempting to replicate nuanced attacker behaviors. However, it can be useful to think like an attacker when coming up with patterns for detection. What actions might they take, and in what order, to accomplish an identity-based attack? Suppose an identity’s password is reset, and then that identity attempts access using SMS as a second factor instead of the more common push. Is that pattern strong enough to trigger an alert? What could be added to scope the pattern more firmly into the “suspicious” camp?

c. Unfamiliar or New Attributes

Unfamiliar or new attributes are also more complicated to detect than checking a list for good and bad. This is because knowing if something is new or unfamiliar requires keeping a log of historical activity. Unfamiliar is also slightly more complicated than new. New just means never-before-seen, but how many times can something be seen and still be “unfamiliar,” and when does “unfamiliar” become “familiar?”

That said, unfamiliar or new attributes can be very effective in helping detect identity-based attacks. This is because employees tend to do the same things from the same places (well, most do – the sales department is another story). Overall, most employees log in from the same device, IP, and browser every day – probably using the same MFA method as well. The regularity of these attributes makes it more difficult for attackers to impersonate any given individual accurately – because to “be” someone there are more variables to copy than just a username and password.

Therefore, alerting on unfamiliarity or novelty, especially multiple counts of novelty within a single access attempt, can help illuminate suspicious behavior. Here are traditional forms of detections that fall into this category:

  • An authentication from a new or unfamiliar IP address

  • An authentication from a new or unfamiliar device

  • An authentication from a downgraded MFA method (i.e. SMS)

  • An authentication attempt from an inactive account
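The first two bullets above can be sketched as a simple per-user history check. This only covers “new” (never-before-seen); handling “unfamiliar” would layer counts and a threshold on top. The tracked attribute names and event shape are illustrative assumptions.

```python
from collections import defaultdict

# Sketch of new-attribute detection: remember each value previously seen
# per user per attribute, and flag sign-ins that introduce a
# never-before-seen value. Attribute names are illustrative.

class NoveltyDetector:
    def __init__(self, tracked=("ip", "device", "mfa_method")):
        self.tracked = tracked
        self.seen = defaultdict(set)  # (user, attribute) -> set of values

    def observe(self, user, event):
        """Return the attributes whose value is new for this user."""
        novel = []
        for attr in self.tracked:
            value = event.get(attr)
            if value is None:
                continue
            key = (user, attr)
            # Don't alert on a user's very first sign-in (empty history).
            if self.seen[key] and value not in self.seen[key]:
                novel.append(attr)
            self.seen[key].add(value)
        return novel

det = NoveltyDetector()
det.observe("jdoe", {"ip": "192.0.2.1", "device": "laptop-1", "mfa_method": "push"})
# Same device and MFA method, but a never-before-seen IP:
novel = det.observe("jdoe", {"ip": "203.0.113.5", "device": "laptop-1", "mfa_method": "push"})
```

Multiple entries in the returned list (several novel attributes in one attempt) would be the stronger signal described above.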

Many IAM vendors include this type of logic in their detection engines. In some cases, it is evaluated in real time and can be invoked in access policy (as discussed in Part II). In other cases, it is calculated offline and can be used for threat detection after the fact. For example, Microsoft Entra has a variety of detection logic – some of it can be invoked in access policy, while the rest is evaluated asynchronously.

d. Identity Administrator Actions

Perhaps the most important subset of detection logic is monitoring for and alerting on suspicious actions by identity infrastructure administrators. As noted in Part I, administrator accounts have the power to do serious damage in the wrong hands. Therefore, it’s very important to both log their actions broadly and detect risky actions quickly. The most important actions to alert on are the ones an attacker would use to cover their tracks, enable further exploitation, or deepen their presence in the environment.

Here are some thoughts on administrator actions to monitor and alert on:

  • An administrator revoking privileges to other administrators or users

  • An administrator changing authentication requirements or access policy

  • An administrator modifying roles or groups

  • An administrator adding or integrating a new identity provider

There are many more examples, but these provide a nice illustration of the types of actions a compromised identity administrator account might take to progress a breach. The tricky part is, of course, that legitimate administrators take these actions regularly. To limit false positives, correlating administrator actions with unfamiliar or new attributes will help scope the detections more precisely. For example, an administrator account modifying access policy from a new IP address is a very suspicious action and should trigger an alert.
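That correlation can be sketched as a single guard: alert only when a sensitive administrative action arrives from an IP not previously associated with that administrator. The action names and event shape below are illustrative, not any vendor’s schema.

```python
# Sketch of scoping admin-action alerts with novelty. Action names and
# the event shape are illustrative assumptions.

SENSITIVE_ACTIONS = {
    "admin.privilege.revoke",
    "policy.modify",
    "group.modify",
    "idp.add",
}

def admin_alert(event, known_ips):
    """event: dict with 'actor', 'action', 'ip'.
    known_ips: IPs previously seen for this administrator."""
    if event["action"] in SENSITIVE_ACTIONS and event["ip"] not in known_ips:
        return (f"ALERT: {event['actor']} performed {event['action']} "
                f"from new IP {event['ip']}")
    return None
```

The same sensitive action from a familiar IP returns `None` – which is exactly the false-positive reduction described above.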

This subsection has covered a variety of recommended detection logic – hopefully it provides a baseline for the types of detection work required for identity-based attacks. In the next section, let’s discuss writing custom detection logic for an organizational environment.

Writing Original Detection Logic for Identity Threats

This section may seem to require security tooling that enables writing detection logic (i.e. a SIEM) to be useful or interesting. That might be true – and there’s no requirement to read any section of this piece! However, we would argue there is value in understanding the basics of threat detection logic, regardless of security software stack. At best, this section may help readers write their own logic; at worst, it should help security professionals ask their vendors about the logic they’re creating and implementing.

That being said, the following section will cover the basic building blocks required for identity-based detections and a few strategies for improving detection logic.

a. Logs: The Building Blocks of Detection

To write an original detection, we’ll need the proper ingredients. In the previous subsection, we provided examples of detection logic like: if user credentials fail to log in five or more times within a certain window, alert me. To provide a primer on how to write a detection from scratch, let’s make this logic more concrete. How does that if-then statement become a detection? Well, the first requirement is data to query – and in the case of most security detections, that data is logs.

For identity-based detections, the most relevant logs will come from identity infrastructure. It’s true that more advanced detection logic will almost certainly correlate identity logs with other types of log data (i.e. from the network or endpoint). However, in this simple example, the logic requires only one source – the identity provider. The most common identity providers are Microsoft and Okta, and each provides logs that can be used in detection logic.

    i. Microsoft Logs

Microsoft’s Entra ID product has two relevant log sources. Sign-In Logs capture login information like successful and failed attempts, sign-in locations, device information, and auth methods used. Directory Audit Logs retain information about administrative activities like changes to user accounts, groups, and access policy.

As an example, each interactive Sign-In log provides the following data:

  • Date and Timestamp
  • Unique Request ID
  • User
  • Application
  • Status
  • IP Address
  • Location

    ii. Okta Logs

Okta’s product includes a System Log that tracks many events in an Okta customer environment. The System Log is a relatively robust logging tool that includes a variety of useful objects to unpack a given event. As an example, each event log will include components like:

  • Date and Timestamp
  • Unique Event ID
  • Actor
  • Client
  • Event Type
  • Event Status

To recreate the brute force detection, either of these logs could be used as the baseline ingredient. For example, with the Microsoft Sign-In Log the logic takes a form something like:

If <User>'s <Status> = fail over the last five <Date & Timestamp> Sign-Ins, alert me.

The same is true of Okta's log data, although the log mapping would look more like this:

If <Event Type= user.session.start> for <Actor=John Doe> = FAILURE over the last five <Date & Timestamp> attempts, alert me.

For this to function fully, the logic will have to track the last five logins – but conceptually, this maps the required log data onto the pseudo-logic of the behavior.
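As a sketch of that mapping in code, here is the Okta-flavored pseudo-logic turned into a runnable function. The dicts below are simplified from Okta’s real System Log schema (which nests `eventType`, `actor.alternateId`, `outcome.result`, and `published`); the threshold and window values are illustrative.

```python
from collections import defaultdict, deque
from datetime import datetime, timedelta

# Sketch: map the pseudo-logic onto simplified Okta-style System Log
# events. Event shape is simplified from Okta's real schema; thresholds
# are illustrative.

def brute_force_alerts(events, threshold=5, window=timedelta(minutes=5)):
    """events: iterable of simplified System Log dicts sorted by 'published'."""
    failures = defaultdict(deque)  # actor -> recent failure timestamps
    alerts = []
    for e in events:
        if e["eventType"] != "user.session.start":
            continue
        if e["outcome"]["result"] != "FAILURE":
            continue
        ts = datetime.fromisoformat(e["published"])
        actor = e["actor"]["alternateId"]
        q = failures[actor]
        q.append(ts)
        while q and ts - q[0] > window:  # expire failures outside the window
            q.popleft()
        if len(q) >= threshold:
            alerts.append((actor, ts))
    return alerts

events = [{"eventType": "user.session.start",
           "actor": {"alternateId": "john.doe@example.com"},
           "outcome": {"result": "FAILURE"},
           "published": f"2024-01-01T09:00:{i:02d}"}
          for i in range(5)]
```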

A more formal deep dive into writing detection logic for Okta and Microsoft is beyond the scope of this piece, but there are many resources that can take up the mantle from here. In particular, the Rezonate team has a great series on writing detection logic using both Microsoft Entra and Okta log data. For those underwhelmed by the lack of depth here, we recommend those posts as next steps!

b. Improving Detection Logic

Once detection logic is written, it’s bound to produce false positives. False positives have a relatively negative connotation – yet there will almost always be benign human actions that trigger suspicious alerts. Workers travel, they get new devices, and they ask for MFA resets. Perfect can’t be the enemy of good. However, there are some strategies and frameworks that help improve identity-based threat detection in an environment. For this section, there are two components we’ll review: improving detection efficacy and improving detection context.

    i. Improving Detection Efficacy

To start, by improving detection efficacy we mean scoping and tuning logic so that it more accurately produces the desired effect. One way to improve efficacy is to improve the signals or attributes involved in a detection. What does this mean? Well, for example, IP address is frequently used in detection logic – it’s a readily available signal and has been used in detection for a long time. However, as Martin Connarty points out in his piece on IP-based detection rules, the IP address has some serious issues as a signal. One is that VPN usage can obfuscate a user’s true IP and cause a variety of false positives for location-based detections that rely on IP as a stand-in for geolocation. To improve upon IP address, it can be useful to find new or additional signals (where possible) that can identify a user. Associating a user with a particular device ID might be one signal, or perhaps a browser-based signal like the user-agent string. None of these will be perfect on their own, but by adopting new signals when relevant, detection efficacy can hopefully increase.

A second method for improving detection efficacy is aggregating signals across different logs and actions – commonly referred to as correlating. Sean Hutchinson points out in his piece on dealing with noisy behavioral analytics that “sometimes an identified behavior just isn’t a strong enough signal in isolation; it may only become a strong signal in relation to other behaviors, identified by other detections.” But, how does one do this in practice? The answer is it’s often complicated – correlation can require the concept of an “entity” that the detections all point to (i.e. a consistent “user” across all log types). Many threat detection platforms sell their services solely on the prospect that they “correlate” logs better than others.

However, to provide a concrete, real-world example of correlation, the Expel security team recently showcased behavior correlation in a detection developed in response to the Okta breach. The post is worth a read, but in summary they tag a variety of events in Okta as potential components of a cross-tenant impersonation attack. Many of the tagged events are things we’ve discussed already in this post: MFA factor resets, suspicious administrator access, and creation of new admins. From there, the Expel team aggregates these components into a single detection that fires only if each of the components fires. This way the team can be more confident the alert triggers during suspicious activity, as opposed to when an administrator goes about their daily work.
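A minimal sketch of that aggregate-then-fire pattern follows. This is written in the spirit of the approach described above, not Expel’s actual implementation; the component names, entity model, and window are illustrative assumptions.

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Sketch of correlating component detections into one composite alert:
# fire for an entity only when every component detection has fired for
# it within the window. Component names are illustrative.

COMPONENTS = {"mfa_factor_reset", "suspicious_admin_access", "new_admin_created"}

def composite_alerts(detections, window=timedelta(hours=24)):
    """detections: list of (timestamp, entity, component) sorted by timestamp.
    Returns the set of entities for which all components fired in-window."""
    fired = defaultdict(dict)  # entity -> component -> last firing time
    alerts = set()
    for ts, entity, component in detections:
        if component not in COMPONENTS:
            continue
        fired[entity][component] = ts
        recent = {c for c, t in fired[entity].items() if ts - t <= window}
        if recent == COMPONENTS:
            alerts.add(entity)
    return alerts

t0 = datetime(2024, 1, 1, 9, 0)
detections = [
    (t0, "tenant-a", "mfa_factor_reset"),
    (t0 + timedelta(hours=1), "tenant-a", "suspicious_admin_access"),
    (t0 + timedelta(hours=2), "tenant-a", "new_admin_created"),
    (t0, "tenant-b", "mfa_factor_reset"),  # only one component: no alert
]
```

Requiring every component keeps an administrator’s routine work – which may individually trip any one of these detections – from generating an alert.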

There are other ways to improve detection logic, but by looking to improve the signal inputs to individual detections and then correlate detection outputs into broader attack stories – security teams can increase their detection efficacy and reduce unwanted false positives.

    ii. Improving Detection Context

Another way to improve detection logic is to continuously improve detection context. By improving context, we mean understanding how the threat detection landscape changes over time. As discussed in Part I, attacker techniques tend to evolve. They change in response to defenses put in place by security teams and to changes in the technological ecosystem. As MFA became more prevalent, attacks that bypass MFA – or at least expect it to be in place – grew too. As organizations adopted cloud infrastructure, attackers looked to exploit new cloud-based controls and systems. The point of this section is simply: it’s important to keep track of new attackers, attacks, and techniques as they arise. Then, after learning about a new attack, think about how to detect its techniques with the tools available.

There are many resources for staying up-to-date with new identity-based threats and the work of detecting them.

04. Conclusion

This piece turned out to be a bit longer than expected. And, yet, it still feels like there’s more to say. By the authors’ own admission – this survey of identity-based threats is not “deep” but “wide.” The goal was to consolidate a lot of information in one place to help a security analyst looking to brush up on the identity threat landscape and some key prevention and detection mechanisms. Hopefully, this piece has accomplished that – and will provide a nice jumping off point for deeper research and analysis.

Again, we are an open book. If identity trends, prevention, or detection techniques were obviously neglected, missed or incorrectly stated – let us know. Contact us at the Duo Labs X account or even shoot us an email at either tkietzman@duosecurity.com or jgolden@duosecurity.com.

Good luck out there!