The Password Isn’t Dead…But It’s Quite Ill
https://idpro.org/the-password-isnt-dead-but-its-quite-ill/
Wed, 26 Jan 2022

by Simon Moffatt

Well, as we enter 2022 – and a good way into 60 years of using commercial computer technology of some sort – the password is very much alive and kicking. For example:

  • This article is being written in Google Docs, which requires my username, password + MFA.  
  • It will be promoted on Twitter: Username, password + MFA.
  • Shared on LinkedIn. Username, password + MFA.  

Note the pattern?  Yes, MFA is absolutely in the mix for me personally, but a) that isn’t necessarily true for all users, and b) the underlying requirement for a shared secret still exists.

The “cost” to a service provider or application developer of reaching for the username-and-password pattern is very low.  Libraries abound, and most password storage approaches now rely heavily on well-understood salting and hashing techniques.  Choosing something different has some pretty big impacts – namely changes to usability, and hoops to jump through in security change management if some new and funky passwordless approach is selected.
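The salt-and-hash pattern those libraries implement can be sketched with nothing more than the standard library. PBKDF2 stands in here for whatever scheme a real deployment would pick (argon2 or bcrypt are common choices), and the iteration count is purely illustrative:

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None, iterations=600_000):
    """Derive a salted hash; returns (salt, digest). Iteration count is illustrative."""
    salt = salt if salt is not None else os.urandom(16)  # unique random salt per user
    digest = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, iterations)
    return salt, digest

def verify_password(password, salt, digest, iterations=600_000):
    """Recompute the hash with the stored salt and compare in constant time."""
    _, candidate = hash_password(password, salt, iterations)
    return hmac.compare_digest(candidate, digest)
```

Because each stored record carries its own salt alongside the digest, an attacker who steals the database must brute-force every account individually rather than using one precomputed table.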

Drivers Towards Passwordless

However, there are emerging shoots of hope for those who wish to see a password-free world. A quick Crunchbase search reveals a tasty $700+ million has been poured into startups with the word “passwordless” in their description in the last 36 months.  A chunk of change (admittedly heavily influenced by Transmit Security’s $543 million last summer) that is funding new approaches to the age-old problem of authentication.

The interesting aspect is that authentication is the main pinch-point of both B2E and B2C interactions.  B2E identity has to contend with distributed working, migrations to zero trust, secure service edges, and data security, whilst B2C consumer identity needs secure yet usable user verification, driven by retail and financial services and the increasing need for secure PII sharing.

All in all, user interruptions during the authentication process are increasing hugely.  Volumes are rising, and the context surrounding each transaction is becoming more complex and subtle, too.  Usernames and passwords just won’t cut it, even with a decent MFA overlay leveraging one-time passwords (generated client side, of course, not sent via SMS or email…) or push notifications.
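Those client-side one-time passwords are typically TOTP (RFC 6238): an HMAC over the current 30-second time step, truncated to six digits, computed independently by the authenticator and the server from a shared secret. A minimal sketch:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, interval=30, digits=6, now=None):
    """RFC 6238 TOTP: HMAC-SHA1 over the current time-step counter, truncated."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((time.time() if now is None else now) // interval)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

Because both sides derive the code from the shared secret and the clock, nothing travels over SMS or email; the server simply recomputes the code for the current (and usually the adjacent) time step and compares.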

Passwordless Requirements

Passwordless adoption requirements for B2C and B2E will be subtly different.  It can be quite instructive to analyze passwordless requirements just as you would any other credential: via a life cycle model.

A basic example would see steps such as enroll, use, add, migrate, reset, and remove.

Each step in the life cycle can then be broken down into the capabilities needed.  A consistent theme would seem to be a need for increased end user self-sufficiency – especially around enrollment and reset, where the dreaded call to the helpdesk instantly increases cost and reduces end user happiness.  (Obligatory sales nudge, I worked on a buyer guide for passwordless in 2021…)

B2E

From a B2E perspective, concerns for a passwordless model seem to focus upon replacing existing MFA components.  Many organisations have numerous disconnected MFA deployments, perhaps focused on specific user communities or applications.  Any consolidated passwordless approach must provide a range of application integration options: SDKs, standards-based integration, or out-of-the-box native integrations.  It would also be worth considering orthogonal authentication use cases such as PAM and even physical building access.  Can those be integrated into a mobile-centric passwordless approach?  The buzzwords of zero trust and contextual, adaptive access need to be shoe-horned into this landscape too, likely with an approach that decouples authentication from the identity provider and the network infrastructure plumbing.

B2C

Consumers are a different beast.  The focus is often upon rapid user onboarding, with transparency and usability being important.  Can KYC and identity proofing be folded into the credential issuance process?  Can those processes also be used during any reset activities?  Clearly fraud – I’m thinking ATO, phishing, credential stuffing, and basic brute force attacks – is a huge issue for any Internet-facing service, so a passwordless service needs to be resilient against it.  Compliance initiatives such as the Strong Customer Authentication aspect of PSD2 are also driving the need for an authentication method that is secure yet can be operated at high scale by the end user.

What Are The Options?

So we all hate passwords. Service providers are getting hacked daily – the HaveIBeenPwned site lists nearly 12 billion breached accounts – and end users pick easy-to-break passwords that they re-use.  But numerous startups are coming to the rescue – typically with a local, mobile-focused biometric (FaceID or a fingerprint) that unlocks a private key on the device in order to respond to a challenge set by a service that requires an authentication result.  Many do this in a proprietary way, and many now leverage the W3C WebAuthn approach as a standards-based model.
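That unlock-a-key-and-answer-a-challenge pattern boils down to a challenge-response flow. Real schemes such as WebAuthn sign the challenge with an asymmetric key pair held on the device; the sketch below substitutes an HMAC over a shared key purely so it stays runnable with the standard library, so treat it as an illustration of the flow, not of WebAuthn itself:

```python
import hashlib
import hmac
import os

# Stand-in for a per-device key; in practice it would be unlocked locally
# by a biometric or PIN, and WebAuthn would use an asymmetric pair instead.
DEVICE_KEY = os.urandom(32)

def server_issue_challenge():
    """Server generates a fresh random nonce so responses cannot be replayed."""
    return os.urandom(16)

def device_sign(challenge):
    """Device proves possession of the key by keying a MAC over the challenge."""
    return hmac.new(DEVICE_KEY, challenge, hashlib.sha256).digest()

def server_verify(challenge, response):
    """Server checks the response against the exact challenge it issued."""
    expected = hmac.new(DEVICE_KEY, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)
```

The important property is that no reusable secret crosses the wire: each login answers a one-time challenge, so a captured response is useless against the next one.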

A few other subtleties start to emerge.  How is the private key stored?  If on device, does it leverage the trusted execution environment or secure enclave?  If off device, is it stored in a distributed manner, so that no single point of failure exists?  If on device, what happens if the device is lost or stolen?  Does the end user have to re-enroll?  These are questions that all emerge once roll-out starts to hit big numbers.

Another aspect to consider, away from the pure technicalities, is end user training and awareness.  Whilst many service providers aim for “frictionless” experiences and transparency, a user journey that is too seamless may actually make the end user suspicious – they want to see some aspect of security.  The classic “security theatre” scenario.  As with any mass rollout, not all users are the same: behaviour, geographical differences, device preferences, and the like will result in the need for a broad array of usage options and coverage. Can the new passwordless models cope with this?

Summary

Passwords aren’t dead, but they’re definitely quite ill.  The options for moving to something new are becoming broad and numerous.  However, authentication doesn’t exist in a silo, and on its own it carries little use.  It would seem that before-authentication (think proofing) and after-authentication (think session integration coverage) use cases will likely emerge as the biggest competitive battlegrounds over the next 24 months.  Those suppliers that can create authentication ecosystems integrating a range of different devices, users, and systems will likely see success.


Simon Moffatt

Founder & Industry Analyst, The Cyber Hut

Simon Moffatt is Founder & Industry Analyst at The Cyber Hut. He is a published author with over 20 years of experience within the cyber security and identity and access management sectors. His most recent book, “Consumer Identity & Access Management: Design Fundamentals”, is available on Amazon. He is a CISSP, CCSP, CEH, and CISA. He is also a part-time postgraduate student on the GCHQ-certified MSc in Information Security at Royal Holloway, University of London, UK. His 2022 research diary focuses upon “How To Kill The Password”, “Next Generation Authorization Technology”, and “Identity for Hybrid Cloud”.

IDPro Newsletter – July 2020
https://idpro.org/idpro-newsletter-july-2020/
Wed, 03 Feb 2021

EKYC & Identity Assurance Working Group

OpenID Connect is used in a number of places for strong identity assurance, i.e. the Relying Party uses the end-user claims provided by an OP to verify the user’s identity in order to fulfil regulatory or legal requirements, such as anti-money laundering, or in the context of fraud prevention.

One fundamental challenge is that OpenID Connect (and other standards in this field) neither reveals what trust framework the OP complies with for collection, verification, and maintenance of particular end-user claims, nor communicates to the Relying Party important metadata about the verification process, such as when the verification took place, what evidence was checked, and what methods were used.

This information is essential for a Relying Party seeking to use OpenID Connect for strong identity assurance, both to fully document, for auditing purposes, the assurance level and circumstances under which data was obtained, and to map the assurance level of the OP (or, generally speaking, the claim source) to the expected trust framework and assurance level of the Relying Party. For example, an RP could intend to use data verified and maintained under anti-money laundering law in the context of local telecommunications law. Whether this is possible might depend on the verification method or evidence employed for a particular user, as some methods allowed in the anti-money laundering context might not be allowed in the telecommunications context.
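Such a mapping can be modelled as a simple RP-side policy lookup over the verification metadata. The sketch below is hypothetical: the “ACCEPTED_METHODS” table and the context keys are invented for illustration, although the evidence method identifiers (such as “pipp”, physical in-person proofing) follow the predefined values in the specification.

```python
# Hypothetical RP-side policy check. ACCEPTED_METHODS and the context keys are
# invented for illustration; the evidence "method" identifiers follow the
# predefined values in OpenID Connect for Identity Assurance.
ACCEPTED_METHODS = {
    "de_aml": {"pipp", "sripp", "eid"},  # methods this RP accepts in an AML context
    "de_tkg": {"pipp", "eid"},           # telecoms context: supervised-remote not accepted
}

def claims_usable(target_context, verification):
    """Accept verified claims only if every evidence method is allowed in the target context."""
    methods = {ev.get("method") for ev in verification.get("evidence", [])}
    allowed = ACCEPTED_METHODS.get(target_context, set())
    return bool(methods) and methods <= allowed
```

A user proofed via supervised remote identification would then pass the AML check but fail the (stricter, hypothetical) telecoms check, which is exactly the cross-context mismatch described above.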

The eKYC & Identity Assurance Working Group at the OpenID Foundation is working towards OpenID Connect extensions for supporting strong identification use cases. The working group started in January 2020 and took over and continues the previous work on OpenID Connect for Identity Assurance (https://openid.net/specs/openid-connect-4-identity-assurance-1_0.html), which started in the AB/Connect Working Group in early 2019.

OpenID Connect for Identity Assurance introduces the “verified_claims” structure, used as a container to convey a set of end-user claims along with the related metadata about trust framework, time, evidence, and methods.

The following example shows a userinfo response containing, alongside other claims, verified claims maintained by the OP in accordance with the German Anti-Money Laundering law, indicated by the trust_framework value “de_aml”.
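A sketch of what such a userinfo response could look like, reconstructed from the structure the specification defines (rendered here as a Python literal; all claim values are illustrative):

```python
# Illustrative userinfo response; the structure follows the "verified_claims"
# container from OpenID Connect for Identity Assurance, values are made up.
userinfo = {
    "sub": "248289761001",
    "email": "max.meier@example.com",        # an ordinary, unverified claim
    "verified_claims": {
        "verification": {
            "trust_framework": "de_aml",     # German Anti-Money Laundering law
            "time": "2020-03-11T14:32:00Z",  # when the verification took place
            "evidence": [
                {
                    "type": "id_document",
                    "method": "pipp",        # physical in-person proofing
                    "document": {"type": "idcard", "date_of_expiry": "2029-03-10"},
                }
            ],
        },
        "claims": {                          # the verified end-user claims
            "given_name": "Max",
            "family_name": "Meier",
            "birthdate": "1956-01-28",
        },
    },
}
```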

As illustrated by the example, verification data and end-user claims are conveyed in separate sub-container elements, “verification” and “claims”. The example also illustrates that the concept allows verified and other claims to be mixed in the same assertion while retaining a clear boundary between them.

Verified claims can also be provided through aggregated and distributed claims, making OpenID Connect for Identity Assurance a suitable tool for combining verified claims from different sources while keeping the clear relationship between the end-user claims and the assurance levels and metadata.

OpenID Connect for Identity Assurance recently passed its second Implementer’s Draft vote and is already implemented in a number of products and services. It has been tested against the requirements of different jurisdictions by the working group’s broad membership from Asia, Europe, and North America.

As the current specification has now stabilized, the working group is looking into further topics, e.g. identity assurance for legal entities, and intends to work towards conformance testing for OpenID Connect for Identity Assurance.

Anyone interested in the topic of strong identity assurance and wanting to contribute is very welcome. The working group holds a weekly call on Wednesdays at 3 pm UTC. More information can be found on the working group page: https://openid.net/wg/ekyc-ida/.

Torsten Lodderstedt

CTO, yes.com


When Web Browsers Attack – Browsers, Privacy Preservation, and Identity Flows

The world of web browsers is grappling with a deceptively simple mandate: protect users from third-party tracking. It’s a motherhood-and-apple-pie statement: having third parties track individual behavior is widely agreed to be a significant problem. Legislation around the world agrees that third-party tracking is a Bad Thing.

But what happens when the technology used by advertisers for third-party tracking is the same technology used by enterprise and academic identity federations to support SSO? Suddenly, that simple mandate of “protect against third-party tracking” can potentially disrupt scholarship and business in significant ways. During Identiverse 2020, Vittorio Bertocci presented “Browser Features vs Identity Protocols: An Arms Race?” If you’re not familiar with how third-party tracking works, and how it is indistinguishable (as far as the web browser is concerned) from identity flows, this 30-minute session is something you need to view.

The good news is that even the browser vendors are still in the early stages of figuring out exactly what they want to do. That allows the broader IAM community to engage in the conversation and ensure that all the major use cases are considered. Discussion on this topic has, at least in part, moved into the W3C’s Web Incubator Community Group through Google’s webID project (https://github.com/WICG/WebID). While the webID developers have, to date, focused solely on the consumer space, issues have been raised to highlight enterprise SSO and academic federation requirements. The upside is that, now that this discussion is happening in a public forum, more people can get involved. The downside is that WICG mostly attracts web API developers; additional expertise will almost certainly be needed in privacy and standards development.

The browser vendors are expected to be responsive to the issue of third-party tracking. Given that they are still very early in the game of figuring out exactly what they want to do, now is the time for interested parties to get involved and be part of figuring out a solution that will work for more than just one use case. IAM practitioners, particularly those who support an enterprise SSO environment or who are engaged in supporting academic research and scholarship, should get involved now to help build a robust and implementable solution for all.

Heather Flanagan

Translator of Geek to Human

Spherical Cow Consulting, LLC


News from the Amsterdam Digital Identity Meetup

During the spring and summer the Amsterdam Digital Identity meetup has been running a series of talks around modern authentication and how the different authentication options relate to each other.

We started in January with multi-factor authentication, where Brian Kloof shared his experience of rolling out Azure AD MFA at a global retailer. In March we learned more about Windows Hello for Business, where Pim Jacobs talked about how to implement WHFB in enterprise environments. Finally, we got a run-through of the FIDO2 standard and how to roll out YubiKeys in an Azure AD-centric enterprise from Per Erngard.

The main conclusion from these talks is that each technology is one piece of the puzzle of minimizing password usage within an enterprise. MFA adds an additional layer of security on top of the password. The main challenge with MFA is the rollout, especially in Corona times, as the usual approach of requiring enrollment on corporate premises or via VPN is a lot harder to implement when the majority of staff is working from home.

Some of our members have chosen to soften the enrollment requirements to get staff onboarded, despite the downside that you lose the strong onboarding. Many enterprises are taking advantage of the integration with self-service password reset functions that more and more MFA vendors are offering, to improve the user experience and lower help desk password-reset call volumes.

Windows Hello For Business is getting increasingly popular and the consensus from our members is that it is a very good option for staff that are provided with company managed Windows 10 laptops that have Windows Hello compatible cameras or fingerprint readers. Some of our members have been trying to roll this out using PIN codes, but that has not been successful as the user experience improvement simply is not big enough.

FIDO2 offers an interesting way of providing very strong authentication for users who do not have personal laptops, but share kiosk style machines. This approach is especially interesting for retailers, hospitality, and healthcare companies that have a lot of staff who are not assigned a personal laptop and who need to be able to quickly log in and log out. If you have an Azure Active Directory centric environment the FIDO2 integration offers an attractive way to increase the authentication strength for key identities and applications.

In the fall we are planning talks on identity analytics in cooperation with ForgeRock, as well as external identities in AAD with Microsoft. We are also hoping to be able to get back to physical meetings, but the online meetings have been very well attended and have facilitated some very good discussions. If you are interested and want to learn more, visit https://www.meetup.com/Amsterdam-Digital-Identity-Meetup-Group/ and sign up.

Martin Sandren

Domain Architect IAM at Ahold Delhaize

So You Think You Can Two-Factor?
https://idpro.org/so-you-think-you-can-two-factor/
Sun, 10 Nov 2019

If every year is The Year of PKI, then when exactly was The Year of Two-Factor Authentication? Was it 2012, when the epic hacking of Mat Honan highlighted just how vulnerable all of our digital lives are? Was it 2014, when the even higher profile iCloud leaks of celebrity photos pushed various consumer services to hastily make two factor authentication an option available to users? Or did it really arrive in 2018, at least for financial institutions, when PSD2 delivered a regulation with some real teeth?

The Struggle is Real

Two-factor authentication (or 2FA as the cool kids call it) isn’t really new. We’ve all experienced it during the course of our professional lives, but organizations still struggle with rolling out 2FA to customers. Why? The simple reason is that while employees are a captive audience that will submit to whatever painful, inconvenient mechanism you force them to adopt (ok, except for MDM on their personal phones), customers are a whole different ballgame. The customer experience matters, and if you don’t do it right then people are either going to not enable it (when you make it optional), work their way around it, or not engage at all.

For any organization starting down the path of implementing 2FA, it can be confusing and challenging. They find a large list of factors spread across the “something you ___” categories, but little guidance on how to put a good 2FA scheme in place. Most organizations end up picking an additional factor that they can simply tack on to the end of their password authentication step, and call it a day. Unfortunately, that simplified approach falls far short of successfully addressing the problem.

A Framework for Designing Your 2FA Schema

I first started going down the 2FA rabbit hole when I wrote a blog post analyzing the Mat Honan hack. Since then, I’ve had the benefit/privilege/misfortune (which one depends on the kind of day I’m having) of working on strong authentication models quite a few times. More recently, in my current role at Uniken, our efforts to create a multi-factor authentication model that is focused on the customer experience and can work across industries, organization sizes, user bases, and threat models have yielded some deep insights into what works and what doesn’t when it comes to 2FA. The effort to distill these learnings into something that can be explained to our product team and our customers has resulted in a basic framework for how organizations should go about implementing 2FA for their customers, built on six pillars.

Viability

The first pillar of that framework is Viability. When going through the long list of factors possible, you have to assess which of those factors is viable for your 2FA scheme. Assessing viability has multiple considerations:

  • You have to think of the people that make up your user base, and what factors they’d be willing to accept and use.
  • You also have to think about the cost of the factor, and whether that is a cost the business will bear or the customer will bear. Hardware tokens are great, but expensive. Is the business buying them for its customers, or is it expecting customers to buy their own?
  • You have to carefully consider the threat model associated with the factor. The YubiKey is a really secure authentication factor, where the user has to plug the key into a port on their desktop in order to authenticate. But research studies have shown that people will often leave them plugged in even when they leave the office, virtually negating their assurance as a possession factor.
  • You obviously have to consider the effectiveness of the factor. See: security questions.
  • In many cases, regulatory compliance enters the equation, since regulators are increasingly rendering opinions on which factors are acceptable for your business.

Multimodal

The second pillar of the framework is Multimodal. When implementing two-factor authentication, the goal is to have each user employ at least two factors when authenticating (obviously). However, that does not mean that the business is only going to support two factors. Not all factors work for all users, and when you’re trying to increase the number of customers turning on 2FA, you have to offer options that work with your vast and diverse user base. The idea that you can find two factors that work for everyone leads you to a least common denominator approach, and that’s how we got SMS OTP as the de facto “standard” in 2FA, and a weakening of the security model. Offering choice allows you to address the varying capabilities, preferences and circumstances of your end-users, and avoid a “one size fits all” approach that alienates customers and often weakens your security.

Adoption

I’ve alluded to the third pillar, which is the one that is the most misunderstood – Adoption. The reality is that unlike enterprise environments where you can mandate 2FA, the customer environment requires you to actually convince your end-users to start using 2FA. There’s a wonderful research paper called “Why Johnny Doesn’t Use Two Factor” that I highly encourage everyone to read. While there are many important takeaways in the paper, one overarching lesson from the paper is that organizations need to make UX research a core element of their IAM program, especially as they design their 2FA scheme. It’s a critical and foundational element to creating the right set of messaging, training, and incentive components that you will have to incorporate into your roll out plan to drive adoption.

Omnichannel

An overlooked pillar is Omnichannel. Businesses have often failed to recognize that 2FA shouldn’t apply just to their web or mobile channels, but must be deployed across all their customer-facing channels. Businesses are engaging with customers and partners across many channels – web, mobile, call center, in-person, chat, smart home assistants, and more – and each channel usually brings a completely different way of authenticating the end-user. That inconsistency frustrates your end-users, creates a headache for your customer-facing staff and IT staff, and delights bad actors. Attackers look for the weakest link across those channels and go after that one, exploiting not only the weakness of the channel, but also the frustration that your customers and employees feel. The result is rampant account takeover attacks and fraud. Businesses have an imperative to transition away from an inconsistent hodge-podge of varying authentication models, and bring consistency and parity of security levels across their various channels.

Processes

The fifth pillar of the framework is the one that most organizations don’t pay enough attention to – Processes. Enabling and maintaining 2FA for individual customers involves many different processes, each of which needs to be properly designed:

  • Enrollment: If the enrollment process is flawed, the assurance of your 2FA is suspect from the very beginning. Many organizations will allow users to set up their second factor after they’ve authenticated solely using their first, and that is a massive vulnerability point in your scheme.
  • Backup: No authentication factor is immune from loss or destruction, so you have to think about ways to not only allow, but proactively encourage, your customers to set up additional authenticators as backups. And those backups better have the same strength as the primary, otherwise you’re creating a backdoor for attackers.
  • Escape Paths: Not all authentication factors are always available for use. Consider what happens to push notification based authentication for someone working in a part of the building, or on a plane, where they get no signal. Locking them out under those circumstances can be hugely problematic.
  • Recovery: Consider how you will support an end-user that has lost their authentication factor(s), so that they aren’t faced with the dire consequence of being permanently locked out (think of all the horror stories of bitcoin wallets irrecoverably locked up because their owner lost the hardware token containing their private key). Recovery paths must also be designed properly to avoid having them turn into backdoors for bad actors. And for heaven’s sake, never use an authentication factor as the verification factor for also doing recovery. I’m looking at every service that uses SMS OTP as a second factor of authentication, and also as a way of resetting a forgotten password. You’ve effectively created a backdoor that turns your two factor authentication scheme into a one factor authentication scheme.
  • Deprovisioning: Of course, you have to consider how one can go about invalidating a factor that is no longer available to the customer, or is no longer acceptable to the business because of vulnerabilities or issues discovered in it (whether it be at an individual level or system wide).

Importantly, escape paths and recovery flows need to be treated as exceptions with higher risks associated with them. That often implies increasing the risk evaluation and security of those flows, which often means adding friction. However, customers are frequently understanding of the increased scrutiny in those paths (provided you’re explaining it to them).

Trusted Environment

The sixth and final pillar of the framework is establishing a Trusted Environment within which to execute 2FA. It won’t really matter how good or strong your factors of authentication are if the environment within which those factors are being accepted, stored, transmitted, and evaluated is compromised, allowing them to be stolen, manipulated or replayed. Keyloggers that capture secrets, malware apps that intercept SMS codes or steal keys, malicious WiFi, reverse proxies, and rogue cell towers that capture and replay credentials or tokens – threats like these reduce the effectiveness of 2FA and degrade organizational trust in those factors. Your two-factor authentication project has to be part of a larger security program that enforces defense-in-depth (or, to use the industry term du jour, zero trust security) to not only leverage the factors of authentication, but also look at the health of the devices and hardware being used and the networks being relied upon, as well as other signals of risk, in order to build trust in (hopefully) the simple act of authenticating your customer.

So, there you have it. A simple framework to apply while designing, building and rolling out your two-factor authentication program. May all your authentications be strong, and all your customers be happy, engaged and protected.

[This article is adapted from my talk at EIC, Identiverse and Identity Week. You can watch the Identiverse talk here.]

Nishant Kaushik
CTO, Uniken Inc.
