The State of the Union of Authorization

by David Brossard, Sr. Director Identity, Product Management — Salesforce

Background

A few months ago, just as summer was getting underway, I had the privilege and honor to speak at Identiverse on the latest trends in authorization (you can watch a recording here and you can glance through the slides here). The talk served as an update to an article I had written in 2018 on the state of authorization. While there is a clear consensus on what authentication is and does, the waters are a little muddier when it comes to authorization. What is it? Where does it begin? End? And why is it so hard to pin down? What are the current efforts in the standards community and the industry to address authorization? This article aims to address these points.

Defining Authorization

One of the problems with authorization is that it can be used to describe many needs and technologies. The term itself is abused. Think for instance about HTTP 401 Unauthorized. Does it mean you didn’t have the right authorization? No, it means your authentication failed. So, why is it called unauthorized? Just a misnomer. Similarly, OAuth stands for Open Authorization. Yet OAuth is mainly about access delegation (I grant a given service X controlled access to my data on another service Y) and addressing the password anti-pattern. So let’s level-set here and define what authorization truly is in the broadest possible sense:

“Authorization is about granting or denying an entity access to another entity.”

In the presentation I gave in June, I wrote that:

  • Authentication confirms that users are who they say they are. 
  • Authorization gives those users permission to access a resource.

More broadly, authentication is about proving a claim about someone or something. Usually it’s a person’s identity. But it could also be an attribute of that person e.g. their date of birth.

Authorization is the process of granting (or denying) someone or something access to something else. Authorization needs to consider what we know about the requestor and the requested item before granting access.

The challenge is that over the past 15 years there hasn't been one major model, standard, or framework to address authorization. Unlike SSO, which SAML tackled, authorization has yet to get a turnkey solution. But The Times They Are a-Changin', and it's time to revisit the space.

Requirements for an Authorization Framework

Before we look into existing solutions, let’s look into requirements. Basic ones aside (e.g. consistency or performance), there are three fundamental requirements any authorization model or framework should address:

  1. Configurable: you need to be able to change your authorization without having to code your application from scratch. It should be possible to reuse the same framework from one application to another.
  2. Future-proof: your authorization framework should be able to adapt to new requirements, whether driven by new legislation (e.g. GDPR) or by new business needs.
  3. Auditable: your authorization configuration (artifacts) should be understandable by the audit teams. Humans should be able to ask both (a) what did happen and (b) what can or may happen.

In addition, authorization needs to reach beyond the identity realm to other dimensions. One issue with models like RBAC or standards like OAuth is that they are identity-centric. Yet an effective authorization solution needs to consider not only the requestor's identity but also the targeted resource and its attributes, as well as the context of the interaction. This leads us to the Venn diagram of Authorization:

Authorization, properly defined, should reside at the intersection of all three dimensions: identity, entity, and context.
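
As a minimal, illustrative sketch (in Python, with made-up attribute names), a decision function that honors all three dimensions might look like this:

from datetime import datetime, timezone

def is_authorized(subject: dict, resource: dict, context: dict) -> bool:
    """Toy ABAC check combining identity, entity, and context."""
    is_owner = subject.get("id") == resource.get("owner")   # identity + entity
    in_business_hours = 9 <= context.get("hour", -1) < 17   # context
    return is_owner and in_business_hours

# Example: Alice reads her own record during business hours.
now = datetime.now(timezone.utc)
print(is_authorized({"id": "alice"}, {"owner": "alice"}, {"hour": now.hour}))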

Types of Authorization

I’m often asked what the different kinds of authorization are. I usually break things down into three categories:

  • Functional authorization: you could ask whether a user, as a whole, can print. This is fundamental when generating scopes, claims, or maybe just rendering a UI.
  • Transactional authorization: can a specific user do a specific action on a specific item e.g. can Alice view account 123? This is nearly always a yes/no (binary) question. Transactional authorization typically occurs in API calls or business processes.
  • Data-centric authorization: sometimes you want to select an entire swath of data (an unknown number of items), and asking yes/no questions is either not possible (we don't know what the items are) or doesn't scale (there are millions of records). In that case, we need an authorization framework capable of reversing queries, flipping the question from "can Alice view this record?" to "which records can Alice view?". This is data-centric authorization. It occurs when retrieving data from databases or Big Data systems (anything from traditional RDBMSs to systems like Snowflake). To do so, your authorization language's syntax needs to be 'reversible'. Lea Kissner of Twitter calls it reverse-indexable. Axiomatics' Chief Product Officer, Pablo Giambiagi, calls it Reverse Query. And OPA's co-founder, Torin Sandall, calls it partial evaluation. (A sketch follows this list.)
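
To make the distinction concrete, here is a minimal sketch in Python (table and attribute names are made up, and a real system would use bind parameters rather than string interpolation): a transactional check answers yes/no about one known item, while a data-centric check 'reverses' the same rule into a filter the database can apply.

def can_view(user: dict, account: dict) -> bool:
    """Transactional: a yes/no answer about one known item."""
    return account["owner"] == user["id"] or "auditor" in user["roles"]

def view_filter(user: dict) -> str:
    """Data-centric: the same rule reversed into a SQL WHERE fragment."""
    if "auditor" in user["roles"]:
        return "TRUE"  # auditors may see every account
    return f"owner = '{user['id']}'"

alice = {"id": "alice", "roles": []}
print(can_view(alice, {"owner": "alice"}))                   # True
print(f"SELECT * FROM accounts WHERE {view_filter(alice)}")  # reversed query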

Who Defines Authorization?

There are two main sources for authorization:

  1. The resource or data owner: I, the owner of the data, decide who can access my data. This is exactly the sort of authorization that takes place when I share a Google Doc with a coworker.
  2. The enterprise: as the data steward and the owner of the services, the enterprise can determine who can access which services and data. In the previous example, my employer can choose to add authorization checks. For instance, the employee can only share with other employees but not outside the enterprise. Note that the enterprise might have different drivers such as legal, compliance, or merely business.

When do we Enforce Authorization?

There are two ways authorization can be enforced:

  1. At design-time: in traditional identity-centric authorization models, authorization is usually defined when the user is defined, created, or updated. We assign those users groups, roles, and permissions either directly or via profiles. The rest of the authorization is devolved to the application code.
  2. At runtime: if we are to consider contextual information (e.g. time of day or a requestor's geolocation), and if the authorization is to be decoupled from the application, then it is necessary to enforce authorization at runtime.

Now that we’ve looked at the requirements and traits of a comprehensive authorization framework, let’s discuss existing approaches.

The Current State of the Art

Models, Standards, and Frameworks

When authorization is being discussed, things like RBAC, ABAC, XACML, and OPA come to mind. These are not all equal, and it's worth grouping them into one of three categories:

  • Model: abstract approach to implementing authorization e.g. ACL, RBAC, and ABAC
  • Standard: a formally approved set of specifications that define how to address authorization e.g. SAML, XACML, OAuth…
  • Framework: a technical implementation that handles authorization without being a standard itself e.g. Ruby cancancan, OPA, or Oso’s Polar.

With this in mind, let's chart the land of authorization. In my Identiverse presentation, I came up with the following (incomplete) diagram and timeline.

Models

In the first category, we have three main models: ACL, RBAC, and ABAC. Other acronyms occasionally pop up, e.g. PBAC, GBAC, or ReBAC (policy-based, graph-based, and relationship-based, respectively), but these tend to be analogous to ABAC.

While ACLs can be extremely fine-grained (to the point of assigning a specific user to a specific item), they are usually too tightly coupled with the applications you want to protect, and their management does not scale well.

RBAC addresses the management scale issues of ACLs but, given it is identity-centric, falls short of allowing for contextual, runtime authorization.

ABAC addresses the limitations of RBAC and introduces policies, rather than roles, as the means to define authorization. However, governance becomes more arduous. Most of the latest innovations in authorization (Authzed, Oso HQ, Open Policy Agent, and ALFA) are to some degree ABAC. They use a policy language (or, in Authzed's case, a schema language on top of Zanzibar's configuration language) to express authorization.

ABAC Architecture

The following diagram summarizes ABAC’s architecture and flow.

  • PEP (Policy Enforcement Point): how applications are integrated into the ABAC architecture. Example PEPs include API gateways, annotations, and proxies.
  • PDP (Policy Decision Point, the "engine"): evaluates policies and generates decisions.
  • PIP (Policy Information Point): lets the PDP query data sources for attribute values.
  • PAP (Policy Administration Point): where policies are managed (defined, governed, audited…).
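
To illustrate the flow, here is a minimal sketch of a PEP asking a PDP for a decision over HTTP. The endpoint, payload shape, and response format below are hypothetical; real products (the XACML JSON profile, OPA's REST API, etc.) each define their own.

import json
from urllib import request

def pep_check(subject_id: str, action: str, resource_id: str) -> bool:
    """PEP: gather attributes, send them to the PDP, enforce its decision."""
    payload = {
        "subject": {"id": subject_id},
        "action": {"id": action},
        "resource": {"id": resource_id},
    }
    req = request.Request(
        "http://localhost:8181/v1/decision",  # hypothetical PDP endpoint
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        decision = json.load(resp)
    return decision.get("decision") == "Permit"  # hypothetical response shape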

Standards

Identity-derived Standards (RBAC)

SAML, OAuth, and OAuth's derivatives such as UMA, GNAP, Rich Authorization Requests (RAR), and JWT access tokens do tackle authorization to some degree. But, as noted previously, they are identity-centric and leave out the context part. Additionally, they are not policy-driven, which means they can be harder to manage.

That being said, User-Managed Access (UMA) does fill the authorization gap around consent management. UMA can be used to collect a data owner’s wishes. That information can then be used in other standards and frameworks (e.g. XACML or ALFA).

Rich Authorization Requests (RAR) are also a bridge between the identity side and the authorization side. With RAR, full authorization decisions can be delegated to a decision engine (e.g. a Policy Decision Point).

JWT access tokens (JWT AT) can also be used as the bearer of authorization decisions inside the access token and as the source of attributes or claims to be used in an authorization decision.
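
For illustration, the payload of such an access token might look like the following (all values are made up; authorization_details is the RAR claim, sitting alongside standard JWT claims):

{
  "iss": "https://as.example.com",
  "sub": "alice",
  "aud": "https://api.example.com",
  "exp": 1735689600,
  "client_id": "s6BhdRkqt3",
  "scope": "accounts:read",
  "authorization_details": [
    {
      "type": "account_information",
      "actions": ["read"],
      "locations": ["https://api.example.com/accounts/123"]
    }
  ]
}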

Alone, these standards will suffer from the same woes RBAC suffers from. But used in conjunction with ABAC approaches, they become invaluable.

Standards that Implement ABAC

There are two official standards that implement ABAC today: XACML and ALFA. XACML (the eXtensible Access Control Markup Language) has been around since 2001 and is to authorization what SAML is to federation and SSO. They are both part of OASIS' family of standards. XACML has had some success, especially in the era of XML gateways (in the pre-API days), but its XML-based syntax has made it hard on developers.

ALFA (the Abbreviated Language for Authorization) addresses the syntax aspect of XACML. It is easier for developers to write in ALFA.
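
For a flavor of the syntax, here is a small, illustrative ALFA policy (attribute declarations are omitted and the names are made up):

namespace example {
    policy viewAccount {
        target clause action.actionId == "view"
        apply firstApplicable
        rule allowOwner {
            permit
            condition account.owner == user.id
        }
        rule denyOthers {
            deny
        }
    }
}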

XACML and ALFA still lack massive adoption and part of the challenge is that both languages (which are interoperable) are too generic, too broad. This makes it harder on developers and business users. Additionally, XACML and ALFA require that attributes be defined. This is true of any ABAC-like system and it adds an extra burden on the framework adopters. What are those attributes? Who defines them? Who governs them? How are their values retrieved? This is where RBAC is simpler: all attributes are identity attributes and are stored in the “one and only” identity store.

Beyond Standards, Authorization Frameworks Abound

I’d like to start this section with this little xkcd cartoon. When there are too many standards, define a new one. This is, in a way, what’s happened in the past 5 years. In spite of XACML and ALFA, we have seen the rise of Open Policy Agent and its language Rego as well as Oso HQ and its language Polar. Google released Zanzibar (a framework to manage permissions at scale) and several companies (Authzed and Auth0 to name a couple) are building management solutions on top of Zanzibar.

Open Policy Agent

Although I peg OPA as a framework, it could be considered a standard given it is part of the CNCF family. OPA follows the same architecture as XACML and ALFA. OPA has a large ecosystem of enforcement points, making it easy to adopt, especially in the Kubernetes and microservices world. The language, Rego, is based on Datalog. Its syntax is simple enough that it can be handwritten by a developer. However, it lacks the scaffolding and structure that come with ALFA or XACML. In addition, Rego can do data manipulation, which can be both a good thing and a bad one, as it blurs the lines between what should be in a policy (the rules) and what shouldn't (the data massaging). Several companies, such as Styra, aim to address the management aspect of OPA and Rego.
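
For a flavor of Rego, here is a small, illustrative policy (the shape of the input document is made up):

package authz

# Deny by default; permit owners to read their own documents.
default allow = false

allow {
    input.action == "read"
    input.resource.owner == input.subject.id
}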

Oso’s Polar

The Oso authorization library uses the Polar programming language to express authorization logic and policies. Like XACML and ALFA, it is a purely declarative language. Polar combines the definition of facts, e.g.:

# father(x, y) ⇒ y is the father of x.

father("Artemis", "Zeus");

with rules, e.g.:

allow("Zora", "read", "document-1") if 1 = 0;

Because Polar mixes both the definition of the model through relationships and the definition of the authorization rules, it steps a little bit outside the boundaries of ABAC and into “ReBAC”. Graph-based authorization models go down that path as well. In the long run, it might make it harder to manage such policies.

Zanzibar

Zanzibar provides "a uniform data model and configuration language for expressing a wide range of access control policies from hundreds of client services at Google, including Calendar, Cloud, Drive, Maps, Photos, and YouTube" (source). Zanzibar is still ultimately ACL-based: it stores access control lists (ACLs) and performs authorization checks against them, acting as a framework on top of ACLs across all of Google's services (originally). Zanzibar's original goal was not fine-grained authorization as defined in NIST's ABAC; rather, it was to make sure authorization is consistently defined and coherently enforced across all of Google's distributed services at scale.

From a ‘language’ perspective, Zanzibar uses namespaces, relations, usersets, and tuples. It also muddies the waters between what is truly policy (A user can do X) and definitions (A is the father of B).
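
For illustration, Zanzibar relation tuples take the rough form object#relation@user, where the user can itself be a userset (the names below are made up):

doc:readme#owner@alice
doc:readme#viewer@group:eng#member

The second tuple grants viewer rights on doc:readme to every member of group:eng, which is where the relationship (graph) flavor comes from.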

Conclusion

The authorization world is growing faster than ever: the number of new startups is a testament to its vibrancy. Yes, it pains me to write this, but XACML's XML syntax is dead. However, in good old Monty Python "I'm not dead yet" fashion, XACML itself isn't. In fact, XACML, ALFA, and OPA are essentially variations of the same model, ABAC. Graph-based approaches such as Nulli's, Oso's Polar, or Authzed are worth keeping an eye on. I like to keep things clean, and as such I prefer ALFA (but I'm also biased). Application and infrastructure vendors (Azure, AWS, SaaS, app frameworks) will keep offering their own approaches. AWS's IAM is a great example of ABAC using tags and policies (attached to users and objects).

What matters above all else is the ability to easily author, manage, enforce, and audit your policies. This means the true answer to authorization may not reside so much in a single standard or language as in the governance layer. And in that space, it may be worth looking into the Identity Query Language (IDQL), a new standard for identity and access policy orchestration across distributed and multi-cloud environments. My former colleague and friend Gerry Gebel (Strata) spoke on that very topic at the European Identity Conference this past September.

And since you made it this far…

Does it have to be a standard? That's a good question, and I think the short answer is no. There are always benefits to standards (avoiding vendor lock-in, for one). In IAM, the biggest benefit has been interoperability. In my role at Salesforce, I am grateful for SAML and OpenID Connect, as they make it easy to integrate Salesforce with many identity providers. But when it comes to authorization, the need for interoperability is not as pressing. What the policies are written in does not impact how an app is integrated with the policy engine; the need for interoperability is therefore on the request/response side only. Make sure you choose a framework or standard that has a rich ecosystem of integrations. OPA and ALFA both provide integrations.

IDPro Newsletter – Feb 2020

Don’t Launch the ABAC Ship Without Stewards Onboard

The promise of attribute-based access control (ABAC) is positively mesmerizing. Most IAM products can assign access based on roles and rules built on user data. Training usually provides simplistic, easy-to-follow use cases. Business analysts can quickly analyze data and sort out use cases for automation, which should improve security, lower overhead, and enable the business. While this all sounds great, and is great, a lot of online and product documentation leaves out a key component: data stewardship. If the departments that own the data don't know how it is being used, and haven't agreed to that use, side effects of automation may ensue, leaving users without the access they need and the IAM, security admin, and access admin groups in SOS mode. Here is a cheat sheet that lays out definitions, benefits, and potential "gotchas" organizations should be aware of before launching their ABAC initiative.

Data Requirements for Implementing Attribute Based Access Control (ABAC)

  • Data Stewards – responsible for each data element used in ABAC
  • Data Integrity – with established accuracy and completeness thresholds
  • Understanding of Use – and acceptance of the use of the data by the data owner and provider
  • Data Protection – changes to data objects and available data values must be governed, controlled, documented, and communicated

ABAC Benefits

  • Automation for access decisions and provisioning based on data – business enablement
  • Ability to map data to business roles, and roles to access in systems and applications
  • Improved security posture and better housekeeping
  • Bundling of access into roles

ABAC Concerns

  • Missing or incomplete data requires fallback to default logic for error handling (see the sketch after this list)
  • Timing – data may change for a worker before or after the true date when action should be applied to worker accounts
  • Point of failure when logic depends on specific data values that can change based on Finance, HR, or Org changes – such as Cost Center or Org Name changes
  • Potential for changes to large numbers of worker records simultaneously
  • Retesting – upstream changes require updates to the IAM system and retesting
  • Finance and organization data changes may not be communicated to IT and identity teams in advance, resulting in downtime or fallback to default logic
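
As a minimal sketch of that first concern (Python, with made-up attribute names), logic that derives access from data should fail safe when an attribute is missing:

def access_level(worker: dict) -> str:
    """Derive access from worker data; fail safe when data is missing."""
    dept = worker.get("department")
    if dept is None:
        # Missing attribute: don't guess - default deny and flag for the data steward.
        return "none"
    if dept == "Finance" and worker.get("employment_status") == "active":
        return "finance-apps"
    return "baseline"

print(access_level({"department": "Finance", "employment_status": "active"}))
print(access_level({}))  # falls back to the safe default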

Funnily enough, a quick Internet search for “RBAC is dead” will reveal a trove of articles on the rise of ABAC.

James Dodds

IDPro Editorial Committee


A Look Back at GDPR and A Look Forward to CCPA and LGPD 

What can we learn from GDPR (the General Data Protection Regulation) about how to manage privacy legislation? What does impending privacy regulation like CCPA (the California Consumer Privacy Act) or LGPD (Lei Geral de Proteção de Dados, Brazil's personal data protection law) mean for the privacy landscape in general? How can we future-proof our privacy practices to move beyond prepping for the next set of rules?

GDPR is now a year and a half old! I will always remember the day GDPR was born, as it was preceded by 800 companies I don't remember ever interacting with sending me emails asking me to approve their updated privacy policies. It also marks the era of cookie notifications on every website, each with an "accept" button but no "decline" button. It's the same behavior as when I'm offered cookies at my mom's house, so I'm actually pretty used to it. You're gonna accept these cookies and you're going to love them!

Seriously, though, GDPR is a positive step. It is a major piece of legislation that tackles the issues of data rights and consent for a large group of people, and it shows that people are demanding more control over their data. This can feel daunting for marketing teams, who have to make sense of all of these rules and may feel like they are losing their ability to generate leads. And that can be true in the short term – but the effect in the long term is that companies will start to have more genuine relationships with those who remain, and the ability to really develop a trusted relationship with those customers results in greater customer loyalty and a higher lifetime value. But, before we get into that, let's take a look back at GDPR over the last year and a half.

So what’s happened?

In the first year:

  • 280,000+ cases
  • 144,000+ complaints
  • 89,000+ data breach notifications
  • 90+ fines
  • 56,000,000 Euros in fines

To date:

  • 200+ fines
  • 460,000,000 Euros in fines

(Reference: https://enforcementtracker.com/)

As a reminder, the GDPR gives national watchdogs extensive powers to investigate privacy breaches and to hand down fines of up to 20 million euros (around $22.4 million USD) or 4 percent of a company’s global annual turnover, whichever is greater.

More than 280,000 "cases" were reported in 27 European Economic Area countries in the first year of the GDPR. Of these, around 144,000 were "complaints" (e.g. improper data processing), as opposed to 89,000 that were data breach notifications (i.e. insufficient measures to secure data).

As of this writing, the top complaint category is insufficient legal basis for data processing, by almost twice the number of fines of any other category. In total, 460 million euros in fines have been levied (which is more than four times the amount of fines just six months ago!)

This sounds like a significant number, but it turns out that more than 400 million of that is British Airways, Marriott, and Google. British Airways and Marriott were fined around 200 million and 110 million, respectively, for insufficient technical and organisational measures to ensure information security, and Google was fined 50 million by CNIL (France's privacy regulatory body, the Commission Nationale de l'Informatique et des Libertés) for failing to inform users adequately about its use of their personal data and failing to seek "valid legal consent" from users to personalize ads.

Now, that leaves 60 million euros across 200 fines levied, and some of you might be thinking to yourself: okay, that's 300,000 euros per fine on average, and only 200 organizations have been fined. Maybe there's a risk discussion we need to have before investing any further in privacy. But don't get too comfortable, because it turns out regulators aren't letting things slide; they're just really, really busy. As DLA Piper research puts it:

“Regulators are stretched and have a large backlog of notified breaches in their inboxes. [T]he larger headline grabbing breaches have taken priority …, so many organizations are still waiting to hear from regulators whether any action will be taken against them …”

which means that, as Giles Watkins, IAPP Country Leader for the UK explains:

“… I sense that there is only a limited time for organizations to put their houses in order before the commissioner does revert to the enhanced penalty regime, with potential enforcement actions perhaps being even more significant to businesses than the monetary fines” – Giles Watkins, IAPP Country Leader, UK

Okay, great, so we really do have to care about this. But that's not all. We don't just have GDPR to worry about: every country and their mom are coming out with a privacy regulation. Are we going to be living a GDPR Groundhog Day for the rest of our lives?

Not necessarily – but first, let’s talk a little bit about what’s definitely maybe coming and how these regulations overlap.

The next big regulation to hit the scene this year is the California Consumer Privacy Act, the CCPA. This law has been called GDPR-lite or the California GDPR, which I think just means it’s privacy regulations with some avocados on it? I kid. In all seriousness, though, there are some differences between the two, which we can take a quick pass through.

Both regulations require transparency and auditability of operations. Both require maintaining a data privacy notice and policies and procedures for obtaining consent. However, the CCPA notice requirements on personal information disclosed or sold to third parties only cover the 12 months preceding the request.

CCPA is specific about the ability to opt out of the sale of personal information to third parties, as well as protecting those users from price or usability discrimination, while GDPR is not as explicit about that particular scenario.

CCPA 1798.115 (d) A third party shall not sell personal information about a consumer that has been sold to the third party by a business unless the consumer has received explicit notice and is provided an opportunity to exercise the right to opt-out pursuant to Section 1798.120.

The high level take away from this comparison is that we should expect to have to grant users the right to manage their data in a variety of capacities. So, what does this look like when we add in LGPD, another major privacy legislation to appear post-GDPR?

The LGPD is very similar to GDPR in terms of personal data rights and protections – in each of the major categories we examined for GDPR, LGPD follows suit exactly. There are some minor differences with the LGPD. For example, LGPD does not differentiate anonymous data from pseudonymous data. Since the difference between those categories relates to the risk of re-identification of the data subject, the Brazilian law, unlike the EU regulation, does not relax legal obligations for controllers that employ pseudonymisation techniques. But, on the whole, similar data strategies and rights considerations can be employed between the two.

So what are we going to do? To hit the overarching themes and impetus behind the regulation, every organization should do three things:

  1. Understand your customer data
  2. Make it easy (ish) to manage
  3. Give control to your customers

In order to understand your data, you need to:

  • Catalog your data: What data do you have? Where is it stored? Why do you have it/how is it used?
  • Know who can see your data: Who can access your data? To which third parties do you share/sell data and what data do you share/sell?
  • Assess your risk: What is the sensitivity of each piece of data?

In order to manage your data you need to:

  • Minimize Data: Determine what data is needed and what isn’t – delete data you don’t need, anonymize data you don’t need to tie back to an individual
  • Data Retention: Based on your catalog, decide how long you need each piece of data and implement processes around retention and disposal of data
  • Data subject processes: Make it as easy as possible to respond to requests like: portability, vendor sharing/selling, deletion, processing by having APIs or processes in place with each data owner
  • Review/Enhance Security: Protect data from unauthorized access, use classification to drive access

Finally, giving control to your customers means:

  • Transparency: Show your customers how you’re using their data and with whom it’s shared/sold. Give them the ability to revoke those purposes or third party access
  • Data Access Rights: Give your customers the ability to exercise data rights like portability, restriction of processing, right to be forgotten. The more automated, the better.
  • Consent and Preferences: Give your customers the ability to opt into or out of data uses (as much as is possible) and establish their own preferences for communication and data use.

Doing this will not only keep regulators happy and your organization off the fine list, it will also create a trusted, transparent relationship between organizations and the data subjects whose data they are stewarding. This means these regulations, if handled well, can be a win-win for organizations and for all data subjects – customers and consumers alike.

Marla Hay

Sr. Director

Product Management – Privacy & Data Governance

Salesforce


Evaluating 2FA in the Era of Security Panic Theater

It seems like today's world offers constant reminders of how insecure our digital lives can be. As a security professional, part of my job is to monitor for threats to my company and the organizations with which I have a relationship. A significant part of that effort lies in assessing how likely or realistic those threats are. If you believed every infosec vulnerability headline you see come across Twitter, it would be easy to feel somewhat like Chicken Little, with the sky ever falling. I've coined a term for this phenomenon (though I'm not sure I actually originated it, Google seems to think so): Security Panic Theater.

If this term sounds mildly familiar, it is because of its proximity to the phrase 'security theater'. We experience this pretty regularly whenever we attend a major sporting event like the World Series and have to go through long lines where people wave a wand over us to ensure our keychain knives don't get admitted to the stadium. This takes place even though the track record of seizing weapons that would matter is pretty poor. But the mere act of this experience makes patrons feel safer. This is even worse when we travel and pass through TSA's gauntlet of screeners. Consistent penetration tests reveal a woeful rate of actually detecting items that could cause us harm while we are in flight. To add insult to the process, there is a comic reality to what actually is seized. I'll let comedian Steve Hofstetter explain:

If you bring too much liquid, the TSA confiscates it and throws it away, in case it’s a bomb. So they throw it away. In case it’s a bomb. In the garbage can, right next to them. With all the other possible bombs. In the area with the most amount of people.

In case it’s a bomb.

Security Panic Theater (SPT) is a bit of a different experience. The process for SPT goes something like this:

Vulnerability/breach announced regarding a product or control (x) [Security]

+ Inflammatory internet headline(s) regarding (x) [Panic], which leads to the conclusion:

Product or Control (x) is useless/defeated [Theater]

A relatively recent example of this was the release of a penetration testing toolkit by Polish researcher Piotr Duszyński named Modlishka, which loosely translates in English to Mantis. The central feature of this toolkit was the use of a reverse proxy that could accelerate a phishing flow by sending a user to a spoofed URL, but the rest of the web experience was as the user expected. This enabled a man-in-the-middle (MITM) attack to capture both the credential and the SMS code being used by the user.

The significance of this new framework didn’t lie with the fact that you could now phish any 2FA method that used OTPs. What made this release notable was that it was now significantly easier to accelerate the phishing flow because you didn’t have to spin up a fake site. A reverse proxy would do the work for you. To be clear, that is certainly noteworthy, but also not new.

However, to hear the twitterverse and online media outlets talk about it, you’d think all our credentials, even if protected by 2FA, were suddenly moments away from being captured by hackers. Now, to be fair, there are some responsible journalists who try to treat these topics fairly, but even a sane article can often be overridden by a clickbait title like “Is 2FA Dead?”

Let’s get a few basics clear for the sake of sanity & clarity:

  1. 2FA can't be killed. It is a combination of factors for authentication, not a single technology or pattern. The last few years alone have had a litany of episodes where a particular technology may be at risk (often temporarily, or misleadingly so), such as:
    1. RSA tokens were allegedly cracked (mostly not true)
    2. SS7 flaw will drain all your bank accounts (true, but hard to implement)
    3. NIST Killed SMS 2FA (sort of, but not really)
    4. Modlishka makes SMS useless (sort of, but not really) 
    5. Google Security keys have Bluetooth flaw (recall for some, not all)
    6. Yubikey FIPS keys flawed (recall for some, not all) 
    7. Apple promoted modifications to SMS 2FA for improved anti-phishing strength & joined FIDO’s board. 
    8. 2FA implementation in Iowa Caucus renders app nearly unusable 

Notice the trend here? While there is some truth for most of these from a vulnerability perspective, the reality is that these technologies still work to protect your credentials. Apple’s recent announcement has its own debate worth talking about (and has been on IDPro’s Slack site) and the debacle in Iowa shows that any technology is a dumpster fire waiting to happen if its implementation is designed poorly.

  2. The diversity of the 2FA landscape makes it stronger, not more vulnerable.

Let’s take a look at the following categories of authentication: 

Pretty diverse to be killed with a single vulnerability, I would think! Now let’s overlay which ones have at least one known vulnerability:

If we look at all the ones in red, that would be pretty disheartening to the casual observer. That’s where journalists and analysts need to take special care in talking about vulnerabilities. The real story doesn’t fit neatly into a simple headline regarding the vitality of the authentication landscape.

  3. All methods of 2FA are still incredibly effective (some more than others)

Google published a study of some internal findings on various methods used to secure their users' credentials. Yes, SMS may well be the low-hanging fruit of 2FA, but guess what: even this well-beaten piñata of 2FA stopped 76% of targeted attacks and nearly 100% of automated & bulk phishing attacks!

Microsoft recently published some numbers to similar effect, that the risk of account compromise is reduced by 99% using multi-factor authentication (MFA). I’d say 2FA is far from dead in that context.

  4. Yes, we should get rid of the 2 in 2FA; long live MFA

The biggest reason for this is that users can be more secure, and less inconvenienced when they have access to multiple ways of authenticating instead of one token combined with a password that can be lost, or a phone that can be upgraded and lock a user out. Without promoting one vendor, I can say thoughtfully that I have several methods to secure my key accounts and that diversity of options, I believe, is the key to giving our users the power of choice as to how they want to login. That power is how we eventually do reduce passwords to an edge use case. The key is that more sites need to support those methods to incentivize adoption. We’re not there yet, but the last few years show a lot of promise in eventually achieving that goal.

The reality is that even the coolest methods of authentication will eventually have a vulnerability found in them. History proves this. But we don't throw the baby out with the bathwater when those are discovered. We fix it, learn from it, and stay secure. Let's leave the theater to the actors, where it belongs.

Lance Peterman

IDPro Board
