NHIs are here, performing countless actions on our behalf. Understanding them begins with seeing that every non-human identity represents accountability, not just automation, and that NHI management is more than just a technical discipline. It is about ownership and governance across change and access processes. To set up that governance, organisations must recognize where NHIs live in their architecture and assign accountability accordingly.

This article continues the NHI discussion started in Part 1 by examining how non-human identities come into existence, operate, and are decommissioned. Just like human users follow Joiner-Mover-Leaver (JML) logic, NHIs follow their own pattern — one rooted in change management.

Lifecycle of NHIs

Just like human identities, NHIs have a lifecycle. Somehow, they come to exist, they have a life, and then they are destroyed.

For human identities, the human resource management (HRM) processes are the governing processes. Joiner–Mover–Leaver (JML) is the core model. Every Join, Move, or Leave event will be evaluated for the identity and access management consequences. If an actor joins an organization, a digital identity is created, and one or more accounts or usernames are assigned to this actor. Moving within the organization (to a new department or manager) will result in a reevaluation of permissions. And when leaving, all permissions will have to be revoked and licenses will be terminated, all to prevent the abuse of identities and identity theft.

For non-human identities, a different process governs the lifecycle: not JML, and not HR. NHIs don't apply for a job, nor do they drop out of the sky.

Before NHIs gain access to the network or a building, they must be onboarded, meaning they should be identifiable as 'trusted' components that may get access for a defined purpose. A component has a purpose, a goal in non-human life, be it a service, a server, a robot, Robotic Process Automation (RPA), or a machine interface. The component is implemented and configured for that purpose. The governing process is a change management process, and registration occurs in a configuration management database (CMDB). Reconfiguring the component to serve a different purpose or work in a different environment is possible, but that also requires a change: a financial reporting RPA will not become an invoicing RPA without reconfiguration, and not without a change request. And removal of the component, again, takes a change.

So instead of JML, this is Create, Adjust, and Remove. We could refer to this as the CAR processes or, less specifically, change management.

Change Management

A change management process is more than just the Information Technology Infrastructure Library (ITIL) definition; it covers every change in an infrastructure or application landscape that delivers needed functionality or features.

There is always a stakeholder who requests functionality: a tool, a service, or whatever. If the stakeholder has the mandate to do so, a change will be implemented, resulting in a component, service, or thing that can be used by or on behalf of the stakeholder. And, this is key: a change not only results in the component, but also in a governance item, typically a registration in the CMDB. It is important that the CMDB item affected by the change has an owner and some further documentation, such as the permissions needed by the component. We then know who requested the component, we know its whereabouts, and we know its permissions.
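
To make this concrete, here is a minimal sketch of the kind of governance data such a CMDB registration might carry for an NHI. The field names and example values are illustrative assumptions, not a reference to any particular CMDB product.

```python
# Minimal sketch of a CMDB configuration item for an NHI.
# Field names and example values are illustrative assumptions only.
from dataclasses import dataclass, field
from datetime import date


@dataclass
class NhiConfigurationItem:
    ci_id: str                  # unique identifier of the item in the CMDB
    component_type: str         # e.g., "RPA", "service", "API client"
    purpose: str                # why the component exists
    owner: str                  # accountable business owner
    requested_by: str           # stakeholder who requested the functionality
    change_request: str         # the change that created or last altered it
    permissions: list[str] = field(default_factory=list)  # least-privilege set
    registered_on: date = field(default_factory=date.today)
    status: str = "active"      # e.g., "active", "changed", "decommissioned"


# Example registration resulting from a change request:
sales_report_rpa = NhiConfigurationItem(
    ci_id="CI-0042",
    component_type="RPA",
    purpose="Nightly sales reporting",
    owner="Head of Sales Operations",
    requested_by="Head of Sales",
    change_request="CHG-2024-118",
    permissions=["read:sales_figures", "create:report", "send:report"],
)
```

Whether this record lives in a dedicated CMDB, a service management tool, or an NHI inventory matters less than the fact that the owner, purpose, and permissions are recorded as part of the change.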

Changing the functionality or location of the component is possible, but that, again, requires a change request and documentation. This holds for all components, including those built dynamically, such as services created in a CI/CD pipeline. The continuous process is a configured process; it has an owner, a build requester. So even those services and APIs are created in a structured and governed manner, and that again is a change management process. These processes can also be embedded in automation (in CI/CD DevOps environments) or in procurement processes, and in highly automated build processes the changes (such as those to source code and config files) may even be documented in a version control system.

When the component is no longer in use, it will be decommissioned: it must be disabled, removed, or destroyed. When the component has no purpose anymore, the final change is the removal of the component and the registration of that change in the CMDB. The component must not be removed from the CMDB itself, because it once had its own identity, and for logging, monitoring, and forensics purposes we need to know its history.

Remember that maintaining the CMDB is an ongoing process; this precondition will not be discussed further.

As a further clarification, the digital identity lifecycle of non-human accounts, as defined by IDPro (see References), is shown below:

Figure 1: NHI lifecycle

IGA and Authorization

To automate the human JML process, the supporting tool is an Identity Governance and Administration (IGA) solution. It evaluates events in the HR system and, based on rules and roles, provisions accounts and authorizations to connected target systems. After provisioning, it can also reconcile the target systems to validate the accounts and permissions they contain.

In this process, the IGA system will encounter accounts and authorizations that have not been generated by the IGA system. IGA will see the admin and root accounts that belong to the target system. These accounts are NHIs, and they exist as a part of the target system. If you implement a new Linux or Windows server, the root or admin account already exists. And, as you will understand, these new servers are the result of a change management process, not of a JML event.

After reconciliation (the process of reading back the identity and authorization repositories of a target system), an IGA system will see the root or admin account in the target system, and that's it. There is no need to manage admin or root, as these accounts have all permissions, as they should. They will not be related to a human identity. The account belongs to the component, and it also has an owner: the owner of the component. IGA will see the account and report it as 'not being managed by IGA', but that's okay. In most IGA solutions, this type of account can be classified as a system account or service account. In fact, IGA solutions should make it possible to classify such accounts as NHIs.
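
As a rough illustration of that classification step, the sketch below shows how a reconciliation routine might label accounts it reads back from a target system. The function, labels, and list of built-in account names are assumptions for this example, not the behaviour of any specific IGA product.

```python
# Illustrative only: labelling accounts found during reconciliation.
# The built-in account names and labels are assumptions for this sketch.
BUILT_IN_ACCOUNTS = {"root", "admin", "administrator"}


def classify_account(account_name: str, provisioned_by_iga: bool) -> str:
    """Return a classification label for an account read back from a target system."""
    if provisioned_by_iga:
        return "human-managed"        # created and governed via the JML process
    if account_name.lower() in BUILT_IN_ACCOUNTS:
        return "nhi-system-account"   # belongs to the component; owned by the component owner
    return "nhi-unclassified"         # needs review: assign an owner via change management


# Example: accounts read back from a freshly installed server
for name, managed_by_iga in [("root", False), ("j.doe", True), ("svc-backup", False)]:
    print(name, "->", classify_account(name, managed_by_iga))
```

The point is not the code but the outcome: accounts that IGA did not create are recognized, labelled, and routed to an owner rather than silently absorbed into the human JML process.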

In short, IGA takes care of human access as a result of the human JML process. But where does that leave NHIs, and how should we manage their lifecycles?

For human identities, the lifecycle management process is well defined, and IGA systems are well equipped to support it with account management and role-based access control, provisioning and deprovisioning accounts and authorizations. Can the same system also support NHIs? In my opinion, IGA systems should not be the solution for managing the lifecycle of NHIs, and there are multiple reasons for that. First, there is not just one process responsible for managing NHIs.

If you treat an NHI as a human identity, some additional controls are inevitable. If you were to manage the lifecycle and authorizations of an NHI in an IGA solution, the IGA solution would cause the following effects:

  • An account is created in the target system through the provisioning process;
  • The NHI would have to be placed somewhere in the organizational structure, at the top level or a sub-level, leaving open who is responsible for it: the owner, a manager, or an operator;
  • The birthright authorizations will be granted for the org top level;
  • The line manager will see the NHI and can grant authorizations to it by assigning a role.

But these provisioning effects have to be undone, because other controls and measures are already in place:

  • Account creation has already been done by onboarding the NHI in the network;
  • There is no need to assign the NHI to an organizational level: the whereabouts of the component are already known from the CMDB or the change request;
  • The NHI shall not have default birthright authorizations or roles; components don't need birthright authorizations, they only need 'least privilege' access;
  • The NHI doesn't need a role; it has all the required permissions defined and described in the change. An RPA cannot simply get new authorizations by changing a business role in IGA: it only gets new permissions when a change gives it new functionality.

And the same will happen for other NHIs: say we configured an RPA. Every day after office hours the RPA will read the sales figures, analyse the data, create a report and send the report to the head of sales. This RPA needs an account, authorisations, and resources to perform these actions. All of this will be configured while creating the NHI. 

The NHI will work on behalf of the Head of Sales, but it should not have the authorizations of the Head of Sales. Nor shall it have the business role of the Head of Sales; it only needs the permissions required to read, analyze, create, and distribute the report. Least privilege. The permissions are very specific. Any other RPA will have different authorizations, so there is no need to make a role for it. Its authorizations are a dedicated, non-reusable set of permissions. And these permissions will not change unless the functionality of the RPA must change, in which case it will be newly developed. RBAC? No way! In fact, if you try to give a role to an NHI, you misunderstand the concept of RBAC. There are better solutions for managing access for NHIs, such as Policy-Based Access Control or Relationship-Based Access Control, which add the dynamics required. We will get to that in the next articles in this series.
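
As a small sketch of what that looks like in practice, the record below grants the RPA a dedicated, non-reusable permission set tied to its change request instead of a business role. The identifiers and permission names are invented for illustration.

```python
# Sketch: a dedicated, non-reusable permission set for the reporting RPA.
# Identifiers and permission names are invented for illustration.
rpa_sales_report = {
    "identity": "rpa-sales-report-01",
    "acts_on_behalf_of": "Head of Sales",   # works for, but does not impersonate
    "business_role": None,                  # deliberately no RBAC role
    "permissions": {                        # least privilege, defined in the change
        "read:sales_figures",
        "analyze:sales_data",
        "create:sales_report",
        "send:report_to_head_of_sales",
    },
    "change_request": "CHG-2024-119",       # changing these permissions requires a new change
}
```

The set is deliberately tied to the change request: a different RPA gets a different set, and a change to the set is a change to the component.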

Life Cycle Conclusion

NHIs don't join the organization; they are managed through a change management process. This means that an IGA solution does not fit the NHI lifecycle management process. IGA vendors may tempt you into managing NHIs in IGA, but that's not a sustainable solution.

NHIs must be managed in a change management process, and they should be registered in a CMDB and assigned to an owner.

Reality check

No, this hardly ever happens. In most organizations the CMDB is not reliable, and the registration is therefore unreliable too. But that should not keep you from managing NHIs in a controllable way. This is a call for fine-tuning the change management and asset management processes.

The organization may decide to implement specific tooling for managing NHIs. That's fine, but it does not mean the governance problem can be ignored: there must still be a business owner who is accountable for the lifecycle and the authorizations granted. Simply adding tools next to the CMDB and service management solutions already in place may only obscure the underlying lack of governance.

Conclusion


This lifecycle perspective underlines one essential truth: NHIs are governed through change, not employment. The next article in this series, on access, examines how these identities interact with systems: specifically, how access "to" and "by" NHIs should be understood and controlled.

References

There are great resources that cover NHIs, but the lifecycle covered in this article is not clearly identified in them. In any case, please study these articles in the IDPro Body of Knowledge:

The CAR case was invented by my colleague Henk Marsman, feel free to use CAR 🙂

Disclaimer: The views expressed in the content are solely those of the author and do not necessarily reflect the views of the IDPro organization.

About the author:

Headshot - Andre Koot

André Koot is principal IAM consultant at SonicBee, a Dutch IAM consultancy and managed services company (and an IDPro partner), and a member of the Advisory Board of IdNext.eu. He has over 30 years of infosec experience and over 20 years of experience as an IAM expert, acting as architect, auditor, and program lead. For the last nine years he has taught a 4-day IAM training course. André contributes to the IDPro BoK as a committee member, author, and reviewer.

Ideas for IAM work show up in all the usual ways:

  • An executive reads something or talks to a colleague about where the industry is heading.
  • A leader comes back from a conference energized about “the next big thing.”
  • The business needs to streamline how they work or unlock revenue.
  • An IAM team member calls out something that obviously needs improvement.

I won’t say the list is in priority order… but it’s close.

And more importantly: that’s just how the idea arrives—not how it gets funded, approved, or actually done.

So how does something move from idea into reality?

IAM Project Momentum Model — Reference Card

1. Policy

Does policy already support the need?

If yes → you have built-in leadership intent.

If no → clarify policy language so the need becomes undeniable.

Momentum Source: documented executive stance + audit alignment.


2. Process

Where is the process failing, outdated, or not meeting policy intent?

Look for gaps caused by new technology, business changes, or legacy workflows.

Momentum Source: operational inefficiency + risk mitigation.


3. People

Do you have the skills today to execute the project?

If not, can training or restructuring solve it?

Leadership rarely hires for future problems—scope realistically.

Momentum Source: team capability + future-proofing.


4. Tools

Tools come last—after policy, process, and people are aligned.

Ask: does the tool meet policy needs, fix process gaps, and elevate skills?

Momentum Source: accelerators, not the foundation.


“The strongest IAM projects build momentum in this order:

Policy → Process → People → Tools.

Skip the order, and momentum breaks down.”


Momentum: The First Gate


A project needs momentum. It has to catch a wave, or roll downhill like a snowball.

Your position in the hierarchy determines how easily you can create momentum, what tools you have, and how much help you need.

Executives and senior leaders have experience (also called trial and error). They know the fastest path, they know who to talk to, and they’ve built trust. They still have to check the boxes to activate a project, but they get flexibility because leadership assumes they’ll finish the details. It’s basically a good line of credit.

The business or IAM staff?

We have more obstacles. Our job is to earn momentum. One way is getting buy-in from those executives and leaders—turning your idea into their idea.

But how?

By showing your work clearly.


1. Policy: The First Source of Natural Momentum

Ask first: does policy already support this?

If yes, you have documented leadership intent behind you.

If not, your first job is clarifying the policy so the need becomes unavoidable. If audits or controls testers are already raising concerns, momentum is forming all by itself.


2. Process: The Second Momentum Check

If policy is solid, look at where the process fails:

  • Outdated workflows
  • Gaps caused by evolving tech
  • Steps stuck in “the way we’ve always done it”

If the policy is still valid but the process is behind, you have a legitimate need to drive change.


3. People: The Hardest Momentum to Build

Skill gaps matter. You can’t build a project the team has no ability to execute.

Sometimes you can train.

Sometimes you need restructuring.

Sometimes you have to scope the project down until hiring or attrition lets you rebuild the skills you’ll need later.

Leadership rarely hires for a future problem—so break the work into achievable phases and plan ahead.


4. Tools: Everyone’s Favorite Step (Which Should Be Last)

Tools get the attention because they’re exciting and vendors are loud. But tools must match the needs you discover in the first three steps, not the other way around.

When evaluating tools, ask:

  • Does it meet policy intent?
  • Does it modernize or fix process gaps?
  • Does it bridge skills or provide a path to grow them?
  • Is the vendor stable, innovative, and able to deliver on time?

Tool selection should be the result of the earlier steps—not the beginning.


The Real Momentum Curve

Working from Policy → Process → People → Tools gives you the strongest possible foundation.

The common trap is gaining just enough momentum to start and then stopping. That’s how projects lose funding, lose leadership attention, get paused, get mothballed, or get handed to people who don’t share the original vision.

A year later you find yourself asking, “How did this happen?”

Disclaimer: The views expressed in the content are solely those of the author and do not necessarily reflect the views of the IDPro organization.

Chris Power is an IT leader with over 25 years of experience across infrastructure, application delivery, and enterprise systems, with the last five years focused on Identity and Access Management. He currently serves as Senior Manager of IAM Operations at Sallie Mae, where he leads teams responsible for delivering and governing workforce identity services in a highly regulated financial environment. Chris focuses on building IAM programs that work at scale—balancing control, usability, and operational sustainability. His leadership perspective centers on daily workforce provisioning, access governance, and the automation required to support growing organizations without increasing risk or operational drag. He is particularly interested in how clear ownership, decision rights, and accountability models shape successful IAM outcomes. He writes and speaks from the perspective of a leader who has spent decades running systems and teams, and now applies those lessons to building resilient, auditable, and people-centered identity operations.

This autumn, IDPro launched a four-part series on Continuous Identity—a term that’s quickly becoming central to how modern IAM teams think about trust, automation, and risk. Our first session focused on the part that quietly makes everything else possible: Data.

You can’t manage IAM at the speeds required today without good data. Sean O’Dell and Andrew Cameron kicked off the series by making it very clear that if your data is a mess, nothing else will work.

Why data comes first

Identity teams today ingest an overwhelming number of signals, including authentication events, device posture, network behavior, location anomalies, entitlement usage, and more. In theory, this should enable adaptive, continuous identity management. Alas, in practice, it’s often noise.

Session 1 focused on how to turn that noise into a living, contextual identity profile that organizations can trust. The more data you pull in, the more effort it takes to clean, transform, normalize, and verify it. Without a structured approach, real-time signals (and we do love Shared Signals!) can’t be used meaningfully for security or access decisions.

This isn't a tooling problem as much as it is a data hygiene problem. Continuous identity, according to both Sean and Andrew, only works when the inputs are clean.

The identity fabric: a layered model

To frame the conversation, Andrew and Sean described a fabric-based model for workforce identity, one that organizes real-time signals into a structure IAM teams can build on.

The architecture discussed follows five major layers:

  1. Workforce Identity Data Platform (Core) – Aggregates identity signals and applies transformations, normalization, and verification.
  2. Identity Fabric / Continuous Ingest & Analysis – Ensures signals are captured as they happen.
  3. Functional Plane – Implements standards such as Shared Signals and identity verification workflows.
  4. Orchestration Layer (Signal Plane) – Translates, routes, and processes signals for policy-driven decisions.
  5. Action Plane – Executes access changes, risk responses, and lifecycle updates.

If verification at the ingest stage is weak, every downstream decision is compromised.
Garbage in, garbage out.

Data models and identity graphs

One of the biggest takeaways from the session for me was the shift from making decisions only at admin-time or run-time to using analytical and operational graphs that inform decisions continuously.

That requires structuring identity data into interconnected models, such as:

  • Enterprise Graphs — capturing relationships, teams, and reporting structures
  • Identity Graphs — tracking authentication events, account lifecycles, and risk
  • Entitlements Graphs — mapping access rights and whether they’re used
  • Shared Signals Graphs — enabling bidirectional trust updates between systems

These graphs form the context layer that continuous identity depends on.
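
As a rough sketch of what such interconnected graphs can look like, the example below models a person, an account, an entitlement, and a device as nodes with typed edges, and walks the graph to answer a simple access question. The node and edge labels are invented for illustration and do not reflect the speakers' specific data model.

```python
# Illustrative sketch: identity context as a graph of typed edges.
# Node and edge labels are invented for this example.
identity_graph = {
    "nodes": {
        "person:alice":        {"type": "person", "department": "Finance"},
        "account:alice@corp":  {"type": "account", "status": "active"},
        "entitlement:gl-read": {"type": "entitlement", "last_used": "2025-10-02"},
        "device:laptop-123":   {"type": "device", "posture": "compliant"},
    },
    "edges": [
        ("person:alice", "owns", "account:alice@corp"),                    # enterprise graph
        ("account:alice@corp", "granted", "entitlement:gl-read"),          # entitlements graph
        ("account:alice@corp", "authenticated_from", "device:laptop-123"), # identity graph
    ],
}


def entitlements_for(person: str) -> set[str]:
    """Walk person -> account -> entitlement edges to list effective access."""
    accounts = {dst for src, rel, dst in identity_graph["edges"]
                if src == person and rel == "owns"}
    return {dst for src, rel, dst in identity_graph["edges"]
            if src in accounts and rel == "granted"}


print(entitlements_for("person:alice"))  # {'entitlement:gl-read'}
```

Walking edges like these is what lets admin-time, run-time, and continuous decisions share one context.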

While I’m sure we’re going to discuss this in more detail in the next few webinars, here’s a sneak preview of what we’ll hear: The orchestration hub and the data platform should remain separate. Blending them increases the risk of operational bottlenecks and muddies the clarity of responsibility between “understanding identity” and “acting on identity.”

The road ahead

Session 1 set the stage for what’s coming in the next three parts of the series: orchestration, decisioning, and automation. But the foundation—data—cannot be skipped or assumed.

Clean data isn’t glamorous work, but it is the most strategic investment an identity program can make. Without it, continuous identity is just an aspiration.

With it, continuous identity becomes the natural next step in modern IAM.

Stay tuned for Part 2 on Orchestration.

Disclaimer: The views expressed in the content are solely those of the author and do not necessarily reflect the views of the IDPro organization.

Author:

Heather Flanagan is the Principal at Spherical Cow Consulting, where she helps organizations navigate the fast-moving world of digital identity and Internet standards. With more than 15 years of experience translating complex technical concepts into clear, actionable strategy, Heather is known for her ability to bridge communities, guide collaborative work, and make standards development a little less intimidating.

Named to the 2025 Okta Identity 25 as one of the top thought leaders in digital identity, Heather is also a regular speaker and writer, focusing on standards, governance, and the real-world challenges of identity implementation. If there’s work underway to shape the future of identity or rethink how the Internet functions, she’s probably in the middle of it.

This article is about “identity.”

However, this is explicitly not about user accounts and what some may call “digital identities”. It’s also not about non-human identities (NHIs), workload, service, machine-to-machine, or customer accounts. 

There are a lot of great articles already written on each and every one of these identity types by thought leaders, so I’d like to address the neglected others.

So, if this article is about identities, but none of the above, then what’s this article about? This is about other constructs that are fundamental to all Identity and Access Management programs, and to their related tools and applications. I’m referring to the identities of constructs like groups, applications, policies, networks, etc.

Identity Constitution

Allow me to simplify the constitution of ‘Identity’ into having three parts: 

  1. An identifier (as unique as possible)
  2. Attributes, which provide further differentiation, context, etc.
  3. Relationships (e.g., “belongs to”), which can be documented as part of #2

“My dog’s name is Lola” ← These five words already encompass the three parts above:

  1. Her identifier: Lola
  2. Attributes: type: Dog
  3. Relationships: owner: Me (although, if Lola could talk, she’d tell you her human is my wife)

An example of a non-living object is “my lucky t-shirt”. I’ve had this t-shirt for years, and it’s green, and it has a print of mountains with “Colo ‘rad’ o” written above (I’m a dad, I love it). At home, I may say, “have you seen my lucky t-shirt?”, and in the context of my family, chances are they’d know which one I’m talking about. If my daughter is not sure which t-shirt I’m talking about, she may ask, “what color is it?” (It’s green, an attribute). Life gives us an extensible schema to define any number of attributes to identify objects.

In the examples above, I shared the ‘Identities’ of two objects. The point is to ‘identify’ them.
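
Expressed as data, that three-part constitution is tiny; here is Lola again, with field names that are mine and purely illustrative.

```python
# The three-part constitution of an identity, using Lola as the example.
# Field names are illustrative, not a formal schema.
lola = {
    "identifier": "Lola",                 # 1. an identifier (as unique as possible)
    "attributes": {"type": "Dog"},        # 2. attributes providing context
    "relationships": {"owner": "Me"},     # 3. relationships ("belongs to")
}
```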

If we turn to IAM-related objects, we can look at groups as being in immediate need of proper identification. A group's system identifier may be "xyz123", its attributes may include Group Name = "App X Users" (this may be considered the identifier, to human eyes at least) and Group Description = "Accounts with access to App X". Is this sufficient? Perhaps initially you'll think "absolutely". I'd argue that there's a rich group identity hidden behind the ID, Name, and Description of this group.

The IAM systems I'm most familiar with allow me to define a rich, extensible schema for accounts, with many different attributes and even different attribute types (string, Boolean, array, etc.). This is excellent and much needed. In the last few years, a 'group schema' became available, so I may now define Boolean values like 'For SSO', 'For SCIM Provisioning', or 'For Policy'. In addition, I want to define 'Pushed to App' as a Boolean value and, if TRUE, an 'App' attribute (string type, as I can't define a relationship to an App object).
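
Pulling that together, a richer group identity might look something like the sketch below. The attribute names mirror the examples above but are illustrative, not any vendor's actual group schema.

```python
# Sketch of a richer, extensible group identity.
# Attribute names are illustrative, not a vendor's actual schema.
group = {
    "id": "xyz123",                                   # system identifier
    "name": "App X Users",                            # identifier to human eyes
    "description": "Accounts with access to App X",
    "custom_attributes": {
        "for_sso": True,
        "for_scim_provisioning": False,
        "for_policy": False,
        "pushed_to_app": True,
        "app": "App X",   # string today; ideally a relationship to an App object
    },
}
```

A policy engine, a provisioning job, or an auditor can then act on these attributes instead of parsing the group name.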

But, there’s no extensible schema for ‘Apps’, or for ‘Group Rules’, or ‘Policies’, or ‘Networks’, etc. Lots of opportunities here to elevate the schemas of other objects to a whole new level. 

The CMDB is an Identity Management system

It follows that the system of record for constructs such as applications, systems, and perhaps groups, typically the CMDB, is actually an identity management system, just one for constructs other than accounts.

A proper CMDB will contain the creation date for any of its configuration items (CIs), its reason for being, its location, and, importantly, its relationships to other CIs.

A Source of Truth

One way to make your IAM system compliant and elevate its security is to delegate account creation to the correct source of truth. HR-driven provisioning is one example of this. If the IAM system delegates employee account creation to a correlated HR record, and the permissions to create accounts are removed from humans, a bad actor would have to shift their tactics to the HR system in order to create an account, which would likely require creating a role requisition, an applicant account, and then a hire/onboarding process.

Similarly, if the base attributes for a group, application, or other IAM construct are established and properly governed by the right source of truth, then the entire identity fabric will not only be more secure and compliant, it will also act like a self-maintaining organism, keeping the parts that are needed and auto-shedding those that have come to the end of their useful existence.

Naming Conventions Don’t Work

You’ve likely implemented or have seen many naming conventions implemented to address this very topic. In my experience, a naming convention typically encodes attributes into the name (perhaps into a `Description`) with the intent to give more context to the object. This may work in some situations and it may help humans visually inspect the object. The problem begins when these existing encoded dimensions change or no longer capture the entirety of the object’s schema. When faced with this challenge, proper hygiene means renaming all existing objects, or, in the more common scenario, breaking the naming convention altogether. The end result is heterogeneous names and paralysis due to confusion and the need to research.

Suggested Actions

If you have access to an extensible schema for your objects, use it. Give those objects a rich identity that empowers a complete lifecycle of the object, from creation to decommissioning.

In the case of our Lola, she has her tag on her collar with her name and our cell phone numbers. However, she also has a microchip that extends the schema of her attributes to include our details, her vaccinations, etc. in case she gets lost and loses her collar.

If you're building or managing IAM software, expand the universe to enable rich schemas in the system. Some of us may want to have a "lucky" group/policy/agent, and we certainly want better ways to identify and protect our Lolas.

Disclaimer: The views expressed in the content are solely those of the author and do not necessarily reflect the views of the IDPro organization.

About the author

Pablo Valarezo is an Identity practitioner who has been building and modernizing secure IAM programs over the last decade. His primary focus has been the workforce side of IAM. He came to Information Security via system administration, project management, and audit and compliance.

Introduction: Setting the Scene

Every identity professional knows this story: the IAM team is the guardian of security, yet often the last in line when it comes to funding. Budgets are locked to “security outcomes,” and anything that strays into “customer experience” or “digital enablement” is out of scope. You’re expected to protect millions of customer accounts, deliver seamless experiences, and stay ahead of threats—while working with a budget focused solely on security. That’s exactly the challenge we face when trying to evolve our Customer Identity and Access Management (CIAM) platform.

The IAM Team Context and Funding Challenges

In most enterprises, the IAM team's funding comes from the security bucket. Consequently, budget allocations are directed toward risk mitigation, compliance, and protection, rather than initiatives focused on improving customer experience or encouraging innovation. When your team sees an opportunity to deliver a capability like User-Managed Access (as an example), there's often no clear financial pathway. The result? Good ideas stall, and security alone becomes the narrow lens for investment.

Platforms and Historical Practices

Our journey began with a heavily customised, on-premises CIAM platform. Every upgrade was a major event. Adding new features, like 2FA or social login, involved coordination between multiple teams and—inevitably—long delays. The platform renewal cycle was budgeted based on the current needs, not customer needs. 

Efforts to speed up feature rollouts or enhance the customer journey were consistently blocked by numerous dependencies between teams. Even when the value was clear, the process felt like a lost battle. The IAM team’s own funding couldn’t stretch to cover the broader investments required, and other business units had their own priorities.

Challenging the Status Quo

Faced with mounting frustration, our team decided to challenge the historical approach. Instead of waiting for the next renewal cycle or hoping for a change in budget structure, we asked a simple question: “Why are we still doing things this way?” That question, as it turned out, was the spark we needed.

We proposed a small-scale proof of concept (PoC). Using modern CIAM tools, we demonstrated how quickly out-of-the-box (OOTB) capabilities could enable modern features, like passwordless login and adaptive authentication, without massive infrastructure changes—just enough to show what was possible. The first step was to present it to the end-to-end architects for initial feedback. It is important to acknowledge and be upfront that with OOTB capabilities, you do lose some control over customisation. With their blessing, we moved on to the next step.

Discovering Broader Value

While we understood that the capability was feasible and could be rolled out, we still lacked the business value. Any business will make an investment when there is something in return. It is hard to quantify a reduction in hours to deliver a capability, since that is tied to full-time employee (FTE) reduction, and every team in the organisation is stretched. We started interacting with the Fraud team and found some savings, but that alone would not have made the business case.

While we were interacting with various sections of the business, we met with our frontline support lead (Consumer Channels team), whose customers were calling in and facing long wait times to verify themselves. They mentioned that the issue had already been presented to leadership and that they were planning to build a custom solution.

A detailed meeting gave us the following problem statement:

It takes anywhere from one to four minutes to verify a customer when they call our support centre. Every call. Reducing this to 90 seconds or less through automation prior to the call connecting with our team could result in a $900k p.a. saving in the call centre alone.

Building the Business Case

That particular statement carried multiple implications. What followed was a series of meetings to unpack and understand the real problem. Newer start-ups might not have these problems, but when you work in a telco shaped by historical mergers, acquisitions, and brand changes, solutions get developed in silos, not because teams don't want to collaborate, but because they simply did not know about each other.

What started as a journey to shorten the development cycle became a full-fledged program touching consumer channels, digital experience, fraud reduction, and, of course, the security posture: a project where security also becomes a valuable customer experience.

Lessons Learned

Looking back, several lessons stand out. First, it's vital to challenge historical practices—even those that seem set in stone. Second, a well-run PoC can be the conversation starter, as it makes the concept real; it breaks down silos and creates new alliances. Third, don't be fixated on the PoC and the problem statement you started with. Success comes from speaking the language of the business, not just security. By being open to broader problems and looking for opportunities to deliver value, IAM teams can drive lasting change—even when they don't hold the purse strings.

Most importantly, collaboration is the key. No team succeeds in isolation. By involving other departments, sharing ownership of outcomes, and being transparent about challenges, it’s possible to turn CIAM from a cost centre into a business enabler.

Conclusion: Encouragement for the Journey

For IAM leaders and architects facing similar challenges, know this: you don’t need to control the budget to control your destiny. Start with curiosity, challenge the status quo, and focus on broader business value. Build relationships, share success, and invite others to the table. CIAM success is a team sport—and with the right approach, you can lead the change your enterprise needs.

Disclaimer: The views expressed in the content are solely those of the author and do not necessarily reflect the views of the IDPro organization.

Author Bio:

Abhi Bandopadhyay: I manage the outcomes for identity and access management across various workstreams, including workforce, customer, and IoT identities at Spark NZ. My primary responsibility is to enable the business to understand the complexities of digital identity and make informed decisions.

My core competencies are IAM strategy, leadership, security, and delivery. I am responsible for defining the IAM vision and roadmap for Spark NZ. I also champion the principles of modern IAM, security by design, and a zero trust mindset, and empower internal teams to leverage the platform for their needs. My mission is to enable Spark NZ to provide secure and seamless digital experiences for its customers and employees.

If you’ve spent any time designing secure systems, you’ve explored the wonderful world of authorization acronyms: RBAC, ABAC, ReBAC, and… PBAC. For a long time, I tried to line them up neatly in my head.

For Role-Based Access Control (RBAC), Attribute-Based Access Control (ABAC), and Relationship-Based Access Control (ReBAC), I can find a common thread. I can consider them part of the same family. They represent distinct, abstract ways of thinking about permissions.

But Policy-Based Access Control (PBAC) always felt like the odd one out. Actually, it is not the only odd one out: Mandatory Access Control (MAC) and Discretionary Access Control (DAC), usually included in the family of authorization models, seem like intruders to me as well, but that is another story.

Including PBAC in the same list as RBAC, ABAC, and ReBAC is misleading, in my opinion. Especially if you want to understand what the best authorization model is for your scenario. Comparing PBAC with RBAC, ABAC, and ReBAC sounds like comparing apples and oranges.

This confusion led me down a rabbit hole, and I’ve come to a conclusion: Policy-Based Access Control is not an authorization model. At least, not in the same sense as RBAC, ABAC, or ReBAC. It’s something different, and the distinction is crucial for a clear comparison between these things.

The Common Thread

Before we tackle the outlier, let’s look at what RBAC, ABAC, and ReBAC have in common. While they function differently, they are all built on the same fundamental premise: they make authorization decisions by evaluating a specific type of data. This data is the core input that defines the model itself.

RBAC relies on one primary piece of data: the user’s assigned role(s). The entire decision-making process boils down to the question: “Does the role assigned to this user grant the permission to perform this action?” 

ABAC utilizes a much richer set of data: attributes. These aren't limited to the user's role; they can describe the user (department: 'Finance'), the resource (sensitivity: 'Confidential'), the environment (time_of_day: '09:15'), and the action itself (type: 'read'). The decision is made by evaluating these attributes against a set of rules and answering the question: "Do these attributes grant the permission to perform this action?"

ReBAC, as the first name in the acronym implies, relies on data describing the relationships between the user and the resource. It answers questions like, "Is this user the owner of this document?" or "Is this user a member of the team that has editing access to this project?" or, more generally, "Is there any relationship between the user and the resource that grants permission to perform this action?"

The common thread is evident: each model is fundamentally defined by the kind of information it considers. They are abstract blueprints for permission logic, distinguished by their primary data input: roles, attributes, or relationships.
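
To make the contrast concrete, here is a toy sketch of three decision functions that differ only in the kind of data they consume. The function names, rules, and data shapes are all invented for illustration.

```python
# Toy sketch: three decision functions distinguished only by their input data.
# Names, rules, and data shapes are invented for illustration.

def rbac_decision(user_roles: set[str], required_role: str) -> bool:
    # RBAC: the only input that matters is the user's assigned role(s).
    return required_role in user_roles


def abac_decision(attributes: dict) -> bool:
    # ABAC: evaluate attributes of the user, resource, environment, and action.
    return (attributes.get("department") == "Finance"
            and attributes.get("type") == "read")


def rebac_decision(relationships: set[tuple[str, str, str]],
                   user: str, resource: str) -> bool:
    # ReBAC: look for a relationship between user and resource that implies access.
    return (user, "owner", resource) in relationships or \
           (user, "editor", resource) in relationships
```

Each function is deliberately a caricature, but the shape of its inputs is exactly what identifies the model it implements.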

What Is Policy-Based Access Control?

Now let’s turn to PBAC. If I summarize the definitions of PBAC I have encountered, I can condense them into the following one:

PBAC is an authorization model that grants or denies permissions to resources based on a set of rules, or policies, evaluated in real-time.

If we attempt to analyze it through the same lens used for RBAC, ABAC, and ReBAC, we immediately run into a problem. What specific type of data does PBAC rely on? As far as I understand, the answer is: none in particular.

PBAC does not prescribe what data you should use to make an authorization decision. Instead, it describes an architectural pattern for how to structure and execute your authorization logic. The core principle of PBAC is the externalization of authorization logic from the application’s business code into a centralized “policy engine.”

The crucial point is that the rules within the policy engine can be based on anything! You could write a policy that enforces RBAC, ABAC, ReBAC, or any hybrid model you can imagine.

PBAC is the machinery, not the logic upon which to make authorization decisions. It is fundamentally unconcerned with the type of data being evaluated, which makes it structurally different from the other models.
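
By contrast, a PBAC-style sketch looks architectural rather than data-driven: the application hands a request context to a policy engine, and the policies themselves may encode any of the models above. The engine interface below is invented for illustration and is not a real policy engine's API.

```python
# Sketch of the PBAC architectural pattern: authorization logic is externalized
# into a policy engine. The interface below is invented for illustration.
from typing import Callable

Policy = Callable[[dict], bool]   # a policy maps a request context to allow/deny


class PolicyEngine:
    def __init__(self) -> None:
        self.policies: list[Policy] = []

    def add_policy(self, policy: Policy) -> None:
        self.policies.append(policy)

    def is_allowed(self, context: dict) -> bool:
        # Deliberately simple combining rule: allow if any policy allows.
        return any(policy(context) for policy in self.policies)


engine = PolicyEngine()
# The policies can express RBAC, ABAC, ReBAC, or any hybrid:
engine.add_policy(lambda ctx: "admin" in ctx.get("roles", []))       # RBAC-style rule
engine.add_policy(lambda ctx: ctx.get("department") == "Finance"
                  and ctx.get("action") == "read")                   # ABAC-style rule

# The application only asks the engine; it holds no authorization logic itself.
print(engine.is_allowed({"roles": ["viewer"], "department": "Finance", "action": "read"}))
```

Note that nothing in the engine itself cares whether a policy reads roles, attributes, or relationships; that choice stays with the policy author.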

So, What Is an Authorization Model?

This contrast forces me to formulate my own definition of an authorization model. Based on the evidence, I propose a simple definition slightly inspired by the one given here:

An authorization model is an abstract model that defines the core inputs required to make access control decisions.

This definition is about the what, not the how. It defines an authorization model as an abstraction, regardless of its architecture and implementation. It answers the fundamental question: "What pieces of information do I need to look at to decide if a subject can perform an action on a resource?"

I know this may sound totally arbitrary, but this is how authorization models look to me. Also, this is not a prescriptive article, but rather a perspective one! 🙂

Under this definition, RBAC, ABAC, and ReBAC fit perfectly. They are true models because they are defined by their core inputs—roles, attributes, and relationships, respectively.

PBAC, however, does not fit. It’s an architectural pattern that provides a possible way to implement these models in a clean, decoupled manner.

Conclusion

So, is PBAC an authorization model? No. It’s a powerful and highly recommended architectural pattern for building flexible, scalable, and maintainable authorization systems.

By treating it as such, we can think more clearly. Our design process becomes two-fold:

  1. Choose your model(s): “For this part of your system, RBAC is sufficient. For this other, more dynamic part, you’ll need ABAC.”
  2. Choose your architecture: “Will you embed this logic directly in the code, or will you adopt a PBAC architecture with a centralized policy engine to manage it all?”

This separation of concerns is powerful. It allows us to pick the appropriate logical model for the job and the best architecture for our scalability and maintenance needs. So next time you’re in a design meeting, let’s stop comparing apples and architectural patterns and use these terms with the precision they deserve. 🙂

Additional Resources

Disclaimer: The views expressed in the content are solely those of the author and do not necessarily reflect the views of the IDPro organization.

About the author

Andrea Chiarelli is a Principal Developer Advocate at Auth0 (Okta). He has extensive experience in software development, holding various technical roles. In recent years, he has focused on Identity, developing a passion for the core concepts, which he aims to disseminate to developer communities.

An Identity Governance and Administration (IGA) platform serves as the central authority for defining and enforcing access policies. However, its effectiveness is entirely dependent on its ability to communicate with the diverse landscape of target systems—applications, databases, and infrastructure—where digital identities and permissions are managed. This bridge is established and maintained by connectors.

A connector functions as a specialized software component that translates the IGA platform’s generalized commands (e.g., “assign role”) into the specific protocol and data format required by a target system (e.g., a specific API call or LDAP modification). It bridges the gap between abstract policy and technical implementation, enabling automated, auditable, and consistent governance. Without connectors, an IGA platform would be unable to perform its core functions of visibility and control over the enterprise IT environment.
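
As a rough illustration (the endpoints and payloads below are invented, not any vendor's API), a connector can be thought of as an adapter that exposes a small, generic contract to the IGA platform and hides the target-specific calls behind it:

```python
from abc import ABC, abstractmethod

class Connector(ABC):
    """Generic contract the IGA platform talks to, regardless of target system."""

    @abstractmethod
    def aggregate(self) -> list:
        """Read accounts and entitlements from the target system."""

    @abstractmethod
    def assign_role(self, account_id: str, role: str) -> None:
        """Write an access change to the target system."""

    @abstractmethod
    def verify(self, account_id: str, role: str) -> bool:
        """Re-read the target to confirm a change actually landed."""

class RestApiConnector(Connector):
    """One possible implementation: the same logical commands expressed as REST calls."""

    def __init__(self, session, base_url: str):
        self.session = session            # e.g., an HTTP session with auth already configured
        self.base_url = base_url

    def aggregate(self) -> list:
        return self.session.get(f"{self.base_url}/users?expand=roles").json()

    def assign_role(self, account_id: str, role: str) -> None:
        self.session.post(f"{self.base_url}/users/{account_id}/roles", json={"role": role})

    def verify(self, account_id: str, role: str) -> bool:
        return role in self.session.get(f"{self.base_url}/users/{account_id}/roles").json()
```

An LDAP or SCIM connector would implement the same three methods with different plumbing, which is exactly what lets the platform stay target-agnostic.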

The Core Functions of a Connector

Connectors facilitate a continuous cycle of governance through three primary functions: aggregation, provisioning, and verification.

1. Aggregation (The “Read” Function) 

Aggregation, also known as reconciliation, is the process of pulling identity, account, and entitlement data from a target system into the IGA platform. This foundational function builds and maintains a centralized, unified view of access rights across the organization. This data is essential for:

  •  Visibility and Reporting: Providing a single source of truth for who has access to what.
  •  Access Certifications: Supplying managers and asset owners with the data needed to conduct periodic access reviews.
  •  Policy Evaluation: Detecting access that violates established policies, such as Separation of Duties (SoD).

2. Provisioning (The “Write” Function) 

Provisioning is the process of executing commands from the IGA platform onto the target system to grant, modify, or revoke access. Triggered by identity lifecycle events (e.g., joiner, mover, leaver), provisioning automates access changes. The connector translates a logical request from the IGA platform into the precise technical operations required by the target system, such as:

  •  Create: Creating a new user account.
  •  Update: Modifying user attributes or adding/removing entitlements (e.g., roles, group memberships).
  •  Delete/Disable: Deactivating or deleting an account upon user departure to remove access promptly.

3. Near-real-time Verification (The “Check” Function) 

After a provisioning action is sent, a robust connector performs verification. It queries the target system to confirm that the requested change was successfully applied. This closed-loop reconciliation is critical for maintaining data integrity and providing a reliable audit trail, ensuring that the state recorded in the IGA platform accurately reflects the permissions in the target system.
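
Building on the illustrative connector contract sketched above (the retry behavior and return shape here are assumptions, not a standard), the closed loop can be as simple as: write the change, then re-read until the observed state matches the intended state or the attempt is flagged as drift:

```python
import time

def provision_and_verify(connector, account_id: str, role: str,
                         retries: int = 3, delay_seconds: float = 2.0) -> dict:
    """Closed-loop provisioning: apply the change, then confirm it against the target."""
    connector.assign_role(account_id, role)
    for attempt in range(1, retries + 1):
        if connector.verify(account_id, role):        # target confirms the intended state
            return {"account": account_id, "role": role,
                    "status": "confirmed", "attempts": attempt}
        time.sleep(delay_seconds)                      # many targets are eventually consistent
    return {"account": account_id, "role": role,
            "status": "drift-detected", "attempts": retries}
```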

Connector Implementation Models

Because no two enterprise environments are alike, connectors are available in two primary models:

  •  Out-of-the-Box (OOTB): These are pre-built, vendor-supported connectors designed for common commercial and enterprise systems (e.g., Microsoft Entra ID, Salesforce, SAP). They are generally preferred for their rapid deployment, reliability, and ongoing maintenance by the vendor.
  •  Custom: For homegrown applications, legacy systems, or niche platforms without OOTB support, custom connectors must be developed. They extend governance to all critical assets but require an initial development investment and ongoing maintenance by the organization.

The Connector Framework: Enabling Scalable and Customized Integration

Modern IGA platforms include a Common Connector Framework (CCF) to standardize and accelerate the development of custom connectors. Rather than building from scratch, developers use the framework to apply business-specific logic to pre-built templates that handle common protocols (e.g., REST/SOAP, SCIM, SQL).

  •  Aggregation Rules: Custom logic can be used to transform inbound data into a consistent format for the IGA platform. This includes normalizing data (e.g., mapping location codes to full names) or correlating low-level permissions into a single, business-friendly entitlement.
  •  Provisioning Rules: Custom logic can translate a single IGA directive into a complex series of operations required by the target system. For example, a “Create User” event can trigger a rule that executes multiple, sequential API calls to build an account, assign a profile, and set initial security parameters.
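
For instance, a custom provisioning rule in such a framework might expand one logical directive into several target-specific calls. The sketch below is illustrative only; the target API and its method names are stand-ins, not a real product interface:

```python
class DemoTargetApi:
    """Stand-in for a real target-system client; the calls just print what they would do."""
    def create_account(self, username: str, email: str) -> dict:
        print(f"POST /accounts {username} {email}")
        return {"id": "acct-001"}
    def assign_profile(self, account_id: str, profile: str) -> None:
        print(f"POST /accounts/{account_id}/profile {profile}")
    def set_security_parameters(self, account_id: str, **params) -> None:
        print(f"PATCH /accounts/{account_id}/security {params}")

def provisioning_rule_create_user(directive: dict, api) -> None:
    """Expand one 'Create User' directive into the sequence this target requires."""
    account = api.create_account(directive["username"], directive["email"])
    api.assign_profile(account["id"], directive.get("profile", "standard-user"))
    api.set_security_parameters(account["id"], mfa_required=True, password_expiry_days=90)

provisioning_rule_create_user({"username": "jdoe", "email": "jdoe@example.com"}, DemoTargetApi())
```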

The Evolution to Intelligent Connectors

The integration of Artificial Intelligence (AI) and Machine Learning (ML) is transforming connectors from procedural intermediaries into intelligent agents that enhance governance.

Key AI-Driven Capabilities:

  •  Intelligent Aggregation and Role Mining: AI algorithms can analyze aggregated entitlement data to discover hidden access patterns, identify orphaned accounts, and recommend optimized business roles based on usage and clustering.
  •  Predictive and Risk-Based Provisioning: ML models can predict the access a new user will likely require based on peer analysis, accelerating onboarding. They can also assess the real-time risk of an access request and trigger step-up authentication or additional approvals before the connector executes the change.
  •  Self-Healing and Adaptive Operations: An intelligent connector can autonomously handle transient failures (e.g., by retrying a command if a target system is temporarily unavailable) and detect API changes in SaaS applications to flag potential integration failures before they occur.
  •  User Behavior Analytics (UBA): By analyzing logs and usage data, AI can establish a baseline of normal user activity. Deviations from this baseline can trigger an alert, an automated access review, or a temporary suspension of access via the connector, enabling a shift from periodic to continuous, risk-based access verification.

Key Considerations for Advanced Connectors: The efficacy of AI-driven IGA depends on high-quality data. Organizations must also address challenges related to model explainability for auditors, the potential for perpetuating historical bias, and the added complexity of implementation.

Conclusion

Connectors are a foundational component of any IGA architecture. They are the essential infrastructure that translates policy into practice. The evolution of connectors—from standard OOTB integrations to customizable frameworks and, ultimately, to AI-enhanced intelligent agents—is critical for enabling organizations to build a proactive, secure, and efficient identity governance program capable of addressing the challenges of the modern digital enterprise.

Disclaimer: The views expressed in the content are solely those of the author and do not necessarily reflect the views of the IDPro organization.

About the author

Mr. Anant Wairagade is an internationally recognized Senior Cybersecurity Engineer and independent researcher with a Bachelor of Engineering in Computer Science. His expertise lies in a niche and highly visible field within the software industry: enterprise-scale identity and access management, with a particular emphasis on cloud security, zero-trust architectures, and the application of artificial intelligence. His significant contributions to this field, demonstrated through his publications, technical program committee work, and impactful achievements at American Express and other major corporations, have earned him a reputation as an expert in a specialized area.

The concept of Non-Human Identities (NHIs) has become a defining topic in Identity and Access Management (IAM). This article introduces the issue, explains why it matters, and previews the two key perspectives — lifecycle and access — that subsequent pieces will examine in depth.

In combination, the three articles (“Understanding NHI,” “Lifecycle of NHIs,” and “Access and Governance for NHIs”) outline a practical framework for managing non-human identities with the same discipline we apply to people, without losing sight of their fundamentally different nature. In a future post, I will dive into the specific IAM issues around Agentic AI. 

Introduction

The hottest topic in IAM these days is not AI, nor is it RBAC versus ABAC; it’s NHI, or Non-Human Identities. And there’s a good reason for that: many LinkedIn influencers post their opinions about NHI, there are blogs and webcasts, management solutions and tools are popping up, and it’s hot.

And rightly so: NHI is important because while there may be 8 billion humans on this planet, there are 10 times as many non-humans living on the digital planet. That is a tenfold increase in actors and resources in need of access and access control. In our field of expertise, we need to know who has access to what, and we almost know how to manage that (no, we don’t, but that’s a different topic). But how to manage access for non-human identities is far less clear. And that really is a challenge, because non-human identities are already performing more transactions than humans can ever imagine, such as machine-to-machine communications, robotic process automation (RPA) bots, devices, scanners, and cameras, as well as car park barriers.

And yet, the most common denominator is that there seems to be no consensus on how to handle this topic. Are we talking about non-human identities or non-human accounts? Does an NHI need access, and if so, to what, and perhaps more importantly, why? How do we cope with keys and tokens? Where do they come from? Is there a lifecycle for the identity and for the access tokens?

So, there is plenty of space for yet another blog about NHI, and I would like to describe the Access topic, as well as some of the Identity views that may exist. (I will share some links to IAM definition documents at the end of this article.)

Why NHI Matters Beyond the Hype

What makes the NHI debate more than a passing trend is that it touches core governance: ownership, accountability, and sustainability. To manage NHIs properly, we first need to understand how they live and how they act.

Lifecycle — How NHIs Come to Exist

Just like human identities, NHIs have a lifecycle. Somehow, they come to exist, they have a life and then they are destroyed. The following illustrates this principle. 

For human identities, the HRM processes are the governing processes. Joiner–Mover–Leaver (JML) is the core model. Every Join, Move, or Leave event will be evaluated for the identity and access management consequences. If an actor joins an organisation, a digital identity is created, and one or more accounts or usernames are assigned to this actor. Moving within the organisation (new dept, new manager) will result in a re-evaluation of permissions. And when leaving, all permissions will have to be revoked and licenses terminated, to prevent the abuse of identities and identity theft.

For non-human identities, a different process governs: not JML, and not an HR process at all. NHIs don’t apply for a job, nor do they drop from the sky. Instead, the governing process is Change Management, and registration happens in a Configuration Management Database (CMDB). As a further clarification, the digital identity lifecycle of non-human accounts, as defined in IDPro (see references below), is shown below:

Figure 1: NHI lifecycle

This contrast shows why managing NHIs in HR-driven IAM tools is ill-fitting. In other words, we cannot approach non-human identities the same way we approach human ones. NHIs come into existence through Change Management rather than employment events, and their records belong in the CMDB rather than in HR systems. More on that in the next article on Lifecycle Management for NHI.

Access — How NHIs Interact and Perform Tasks

Access control for NHIs is two-sided. They consume access to do their work, but they also provide interfaces that others use. I’ll set the scene on this duality a little more below.

There are some interesting observations to be made about access control and NHIs. Missing these views leads to misunderstanding NHI access control and results in conflicting advice and practices. Access control for NHIs has at least two different aspects: access to the component and access by the component. These different views are shown in the visualization below:

Figure 2: Access to and access by an NHI

In this illustration, an actor logs in to a component interactively. The actor has an account and authorisations to match. The component, in turn, logs into a resource, probably using a resource account and authorisations. Since the login is automated, the secret to logging in is probably configured in the component. The component could be an application, a camera, or a device.

These two views (access to and access by the component) are central to understanding NHI security. Without differentiating them, organisations blur responsibility and risk auditability gaps. For a deeper dive, have a look at [this article] on Access & NHIs, which examines how these permissions are defined, granted, and controlled through the principle of least privilege.
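
To ground the “access by the component” side a little further: the component’s resource-account secret should be injected and managed, not hard-coded. The sketch below is a minimal illustration (the environment variable names are invented) of keeping that credential outside the component so it can be rotated and audited through the change process:

```python
import os

def get_resource_account_credentials() -> dict:
    """Access *by* the component: it authenticates to the downstream resource
    with a non-human (resource) account whose secret is injected at deploy time
    (environment, vault, or workload identity) rather than baked into the code."""
    return {
        "client_id": os.environ["RESOURCE_ACCOUNT_ID"],          # identifies the NHI, not a person
        "client_secret": os.environ["RESOURCE_ACCOUNT_SECRET"],  # rotated via change management
    }

# Access *to* the component (the human actor's interactive login) is governed
# separately, under the actor's own account and authorisations.
```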

From Theory to Governance


In short, NHIs are already here, performing countless actions on our behalf. Understanding them begins with recognizing that every non-human identity represents accountability, not just automation. NHI management is therefore more than a technical discipline: it is about ownership and governance across change and access processes. To set up that governance, organisations must recognize where NHIs live in their architecture and assign accountability accordingly.

In two subsequent articles, I will focus on two topics that are close to my heart and that are often overlooked. First, there is the identity lifecycle, and second, the topic of access control. In combination, the three articles outline a practical framework for managing non-human identities with the same discipline we apply to people, without losing sight of their fundamentally different nature.

At the time of writing this article, further research is underway in the areas of RPA (Robotic Process Automation, or bots) and Agentic AI. That work will lead to further articles once the results are presentable.

When read together, these insights form a coherent narrative: NHIs are neither employees nor users; they are governed components that perform defined tasks. They need identities, accounts, and authorisations. Yet, their management lifecycle and access patterns differ fundamentally from those of humans.

References

There are great resources that cover NHIs, but the two topics covered in this article are not clearly identified: 

More online resources:

Disclaimer: The views expressed in the content are solely those of the author and do not necessarily reflect the views of the IDPro organization.

About the author:


André Koot is principal IAM consultant at Dutch IAM consultancy and managed services company SonicBee (an IDPro partner) and a member of the Advisory Board of IdNext.eu. He has over 30 years of infosec experience and over 20 years of experience as an IAM expert, acting as architect, auditor, and program lead. For the last nine years he has taught a 4-day IAM training course. André contributes to the IDPro BoK as committee member, author, and reviewer.

Hello, DevSecOps fans and security buffs! If you’re running a software supply chain in 2025 and still handing out access like it’s free pizza at a team meeting, it’s time to rethink things. The Principle of Least Privilege is your secret weapon for keeping code repositories and CI/CD pipelines safe, but making it work is tricky. Sprawling permissions, fast-paced teams, and tools that don’t always cooperate can leave overprivileged accounts vulnerable to attacks like credential theft or pipeline tampering. The good news? You can secure everything without making your developers’ lives harder.

In this blog, we’ll dive into why locking down access is tough, the risks of getting it wrong, and a simple plan to get it right. I’ve pulled from real-world lessons to share a roadmap that keeps your supply chain safe while letting your team keep rocking. Let’s jump in.

Why Loose Access Is a Big Deal

Picture this: your software supply chain, code repos, build pipelines, and deployment tools are like a bustling kitchen. If everyone has access to every ingredient and burner, things can get messy fast. Developers might have admin rights just in case, or your CI/CD pipeline might have free rein across your systems. That’s a problem. It opens the door to attacks such as pipeline tampering, insider slip-ups, or stolen credentials, causing significant chaos. Rules like SOX, PCI-DSS, and HIPAA promote the principle of least privilege to mitigate these risks, but it’s not always easy. Many tools don’t let you fine-tune access, and managing it manually takes forever, still leaving gaps. This creates a larger target for attackers, slows down your team, and frustrates developers who just want to ship code. In today’s fast-paced DevSecOps world, where speed is everything, sloppy access controls are like a flat tire on a racecar.

Why You Need to Fix This Now

The pressure to tighten access is real, and it’s coming from all sides. Your supply chain spans a ton of tools, repos, CI/CD systems, and cloud platforms, and one wrong setting can expose everything. Teams are more dynamic than ever, with contractors and freelancers joining and leaving fast. Old-school access rules can’t keep up, leaving outdated permissions that hackers love to exploit. Meanwhile, hackers are getting craftier, using automation to find and attack overpowered accounts before you even notice. Plus, regulators are cracking down with tougher audits that demand solid controls. If you’re trying to ship software fast while keeping it secure, loose access is holding you back, and it’s time to act.

A Simple Way to Lock Down Access

So, how do you maintain tight access control without slowing down your team? The trick is to build a system that fits into your developers’ workflow and keeps security first. It starts with smart, automated policies powered by tools such as policy-based access control, automated role management, intelligent workflows, and proxy gateways. Require everyone to request access to repos or tools through a quick approval process, so no one gets in without a green light. Make policies flexible: a manager’s approval might be sufficient for read-only access, but admin rights require additional sign-offs. To avoid bogging things down, use smart workflows to auto-approve low-risk requests based on what similar team members have or how they’ve used access before. Policy-based access control makes real-time calls by checking things like a user’s role or task, ensuring they only get what they need right now.

Keep sensitive code secure by ensuring users only see what they’re authorized to access. Bundle permissions into roles tied to specific jobs, like code reviewer or pipeline operator, and assign them through automated role management tools to avoid giving too much access. Team leads can create these roles to match project needs, but resource owners should always have the final say to keep things in check. For high-risk access, like admin or write permissions, set an expiration date so it doesn’t linger. Low-risk access can stick around longer. To avoid the trap of managers rubber-stamping renewal requests, pair expirations with lightweight review mechanisms, for example, usage-based validation (has the access actually been used?) or automated just-in-time provisioning that grants elevated rights only when needed. This balances thoughtful retention with the speed and agility modern pipelines demand.  Proxy gateways double-check everything at the tool level, catching any unauthorized moves before they happen. This setup keeps your supply chain secure while letting your team move fast.
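
As a rough sketch of what such policy-driven routing can look like (the roles, permission names, peer-analysis signal, and 30-day expiry below are illustrative assumptions, not recommendations):

```python
from datetime import datetime, timedelta

def route_access_request(requester: dict, resource: str, permission: str) -> dict:
    """Auto-approve low-risk requests, escalate risky ones, and time-box privileged grants."""
    peers_have_it = permission in requester.get("peer_common_permissions", set())
    if permission == "read" and peers_have_it:
        decision = {"status": "auto-approved", "approvers": []}           # low risk, matches peer baseline
    elif permission == "read":
        decision = {"status": "pending", "approvers": ["manager"]}        # low risk, but no peer signal
    else:
        decision = {"status": "pending", "approvers": ["manager", "resource-owner"]}  # write/admin
    if permission in {"write", "admin"}:
        decision["expires_at"] = (datetime.utcnow() + timedelta(days=30)).isoformat()
    return {"resource": resource, "permission": permission, **decision}

print(route_access_request({"peer_common_permissions": {"read"}}, "payments-repo", "admin"))
```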

A Three-Part Plan to Make It Work

Here’s a straightforward, three-part system to bring this to life, blending governance, central control, and tool-level security.

First up is Identity and Access Governance. A solid IGA system builds and assigns job-specific roles based on policies. It automatically green-lights low-risk access but requires manual checks for sensitive information. Mixing role-based access control for simplicity with policy-based access control for smart, context-aware decisions gives you flexibility while keeping resource owners in the loop.

Next, a centralized supply chain platform ties everything together. Think of it not just as CI/CD automation, but as a single system that combines repository management, CI/CD workflows, project bundling, and access governance. From one place, admins can create and manage entities required by multiple tools in the supply chain, define approval and visibility policies, and bundle permissions around projects rather than scattered tools. The platform also enforces policy-based access over time, ensuring that access stays relevant as teams and projects evolve. A proxy gateway extends these controls down to individual tools, blocking unauthorized actions in real time and giving developers a single spot to request access or check pipeline status. This fills in the gaps where point tools fall short.

Finally, lock down your tools, like repos or CI/CD systems, so they only allow actions approved through the central platform. This prevents anyone from sneaking around policies or exploiting weak spots, maintaining tight control across the board.
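
A proxy gateway’s enforcement logic can be surprisingly small. The sketch below (the grant format and tool names are invented for illustration) checks every tool-level action against grants synced from the central platform and blocks anything without a match:

```python
def gateway_enforce(request: dict, approved_grants: set) -> dict:
    """Tool-level enforcement point: only actions the central platform has approved pass through."""
    key = (request["user"], request["tool"], request["action"])
    if key in approved_grants:
        return {"allowed": True}
    return {"allowed": False, "reason": "no matching grant in central platform", "audit": key}

# Grants synced from the governance platform (illustrative).
grants = {("dev1", "payments-repo", "push"), ("dev1", "ci", "trigger-build")}

print(gateway_enforce({"user": "dev1", "tool": "payments-repo", "action": "push"}, grants))        # allowed
print(gateway_enforce({"user": "dev1", "tool": "payments-repo", "action": "force-push"}, grants))  # blocked
```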

Dealing with Real-World Hiccups

Getting least privilege right isn’t always smooth. New systems can throw developers for a loop, so clear training and a gradual rollout are key. Start with your power users to iron out kinks and get buy-in. Allowing teams to create their own roles enhances flexibility, but it can lead to overlap. To maintain organization, have resource owners approve all roles. Some tools don’t offer fine-tuned controls, but your platform’s gateway can enforce policies at a deeper level to fix that. Keeping the central platform running takes work, so build it to handle tool updates on its own to save your team headaches. Policies can get stale or clash, so review them regularly and use automation to spot issues early. Planning for these bumps keeps your system running smoothly.

Wrapping It Up: Security That Fuels Your Team

In today’s high-pressure software supply chain, least privilege is a must-have. Overpowered accounts are an open invitation for trouble, but you don’t have to slow your developers down to fix it. With smart policies, a centralized platform, and locked-down tools, you can protect your supply chain and keep things moving. Try starting with a key repo or pipeline, see how it goes, and scale up from there. If you’re tackling this at work, share your thoughts or reach out. Let’s swap tips on making least privilege work for you.

Disclaimer: The views expressed in the content are solely those of the author and do not necessarily reflect the views of the IDPro organization.

About Author

Vatsal Gupta is a cybersecurity leader with 13 years of experience in identity and access management (IAM). He currently works at Apple and has previously held roles at Meta and PricewaterhouseCoopers (PwC), advising Fortune 100 companies on securing complex digital ecosystems. Gupta specializes in building scalable, artificial intelligence (AI)-driven identity solutions. He is an active contributor to IDPro and a senior member of the Institute of Electrical and Electronics Engineers (IEEE), and he also serves on technical committees for leading cybersecurity conferences. His research focuses on AI, large language models (LLMs), and policy-based access controls (PBAC) to modernize IAM and enhance threat detection.

Black Hat and DEF CON are, as always, conventions that set the tone for the security savvy for the next year; new findings are released with varying degrees of showmanship, a substantial portion of the hacker community comes back together to see each other, and inevitably the convention site’s computer systems get poked and prodded.  I’d like to talk to you about what I walked away with from both conferences from an identity practitioner’s perspective.  I fully recognize before I start that I may be wrong or misinformed, and I am happy to discuss any of what I say here with any of you, whether in the IDPro Slack or in any other forum.

Problematic Passkey Parley

There were also several discussions at both conferences about FIDO2.  I am sure this section will probably be the most divisive of my discussion, but I will do my best to navigate the issues presented at the conferences.  Namely, some strong accusations have been made around the security of passkeys and of hardware authenticators, and I feel like we should unpack them.

Phishing Synchronized Passkeys

Two of the talks focused on passkeys specifically.  Chad Spensky, Ph. D., discussed a potential avenue for phishing synchronized passkeys in his talk “Your Passkey is Weak: Phishing the Unphishable” (slides available at https://yourpasskeyisweak.com/).  Specifically, if an attacker can perform a successful phishing attack to access the service acting as the synchronization fabric for the passkeys (e.g. Google Password Manager) then they have access to everything they need to replicate the passkey.  This is obviously problematic, as an attacker who gains the metaphorical keys to the kingdom in this way can then access anything that relies on these passkeys.  A second talk, titled “Passkeys Pwned: Turning WebAuthn Against Itself ” by Shourya Pratap Singh, Jonny Lin, and Daniel Seetoh explores a similar path (slides available at https://media.defcon.org/DEF%20CON%2033/DEF%20CON%2033%20presentations/Shourya%20Pratap%20Singh%20Jonny%20Lin%20Daniel%20Seetoh%20-%20Passkeys%20Pwned%20Turning%20WebAuthn%20Against%20Itself.pdf).  The team discusses additional avenues for phishing, such as through a malicious browser extension.  The results, unsurprisingly, are the same as Dr. Spensky’s – the user’s passkeys are compromised through what was an assumed-trusted path, and all is lost.

Browser Security and User Manipulation

These two talks, taken in totality, should tell us nothing particularly new as practitioners of identity.  If we drink the Kool-Aid and accept the statement that “identity is the new perimeter”, then we might also consider that the browser is the new doorway.  As an information-focused society, some among us rely more on our browser software being secure than we do on our houses being secure.  While phishing is increasingly the easiest way by which an attacker may access a system, if these are accepted as “vulnerabilities” then we must also accept the successful hijacking of an access token generated through an OIDC flow as a vulnerability.  While they are both vulnerabilities, what are the actual issues?  The actual issues at play here are browser security and user manipulation.  The point here is that a given protocol, program, or defined process has a specific scope – every link in the chain needs to be secure.

Limits of Synchronized Passkeys

Further, the security model of synchronized passkeys implies that they should not be used by individuals who are attempting to maximize security.  Enterprises and individuals potentially targeted by nation states should carefully consider the usage of synced credentials when determining the blast radius of a given system’s compromise.  A particularly concerned enterprise should seek to conform to more stringent qualifications (such as NIST 800-63’s Authenticator Assurance Level 3) and perhaps consider device-bound passkeys when deciding authentication strategies.  Additionally, organizations permitting account recovery or account modifications should do real auditing of their workflows to ensure that real users have the edge over attackers when adding new passkey credentials to an account, or when a passkey login behaves strangely.  There is a lot more that could be said here about the security of related systems and passkeys, and I will leave that to those of you who wish to passionately discuss those points in these newsletters.

API Confusion in FIDO2

There was, notably, a third talk by Marco Casagrande and Daniele Antonioli discussing API confusion issues within FIDO2 (the paper can be found at https://arxiv.org/pdf/2412.02349).  Specifically, they focused on issues in the CTAP protocol – issues that exist regardless of the transport CTAP runs over (such as NFC or Bluetooth) and that impact both CTAP1 (what we used to call U2F) and CTAP2.  Some of the notable issues brought forward through their research are the ability to force lockouts of hardware tokens, force factory resets of these tokens, fill credential storage on these tokens, and profile the underlying authenticator (to potentially compromise the token, or to track the user).

The Hardware Token Catch

While this class of attack is not as flashy as the phishing demos given by the two teams above, it does demonstrate a very real need for physical security for hardware keys!  These attacks are potentially brutal, but they require proximity.  Given that the effective range of these attacks can be measured in feet, an attacker (or accomplice) either needs to be targeting the holder of the security key specifically or needs to construct a device to passively brick hardware tokens.  An interesting note is that one mitigation route presented in Dr. Spensky’s talk was to use hardware tokens – it turns out that even the best laid plans of security researchers often go awry.

Identity Practitioners and AI

If you were still sleeping on LLMs, Generative Image Models, and other generative models, you have overslept.  Identity practitioners of all stripes should now be taking time to understand and experiment with the tools available to them in this space – as well as the extremely complex security and privacy concerns that come from them.  There were many talks focused on this intersection of security and AI from multiple perspectives, and I feel like we should unpack some of them.

Apple Intelligence and On/Off-Device Risks

One such talk was Yoav Magid’s talk on Apple Intelligence (article available at https://www.lumia.security/blog/applestorm), which showed that a complex dance of on-device model usage versus off-device data transport occurs depending on what is requested.  These requests, while seemingly inoffensive, can transfer sensitive data to servers not under your organization’s control with no means of knowing when and where this will occur.  The adoption of Agentic AI by consumers will muddy these waters; we as identity practitioners will need to keep in mind the ramifications of telling an AI agent it is allowed to do something on behalf of a user.

Enterprise AI Exploitation and Guardrail Weaknesses

In another talk called “AI Enterprise Compromise – 0click Exploit Methods”, Michael Bargury and Tamir Ishay Sharbat drove home some pretty powerful and concerning points around the new frontier of abusing enterprise-oriented AI (articles around this available at https://labs.zenity.io/p/hsc25).  Some particularly salient concepts from their talk are that LLMs as designed “are doomed to complete” – that is to say that they cannot dissent to a properly crafted request, and guardrails are simply soft boundaries that can be worked around through careful prompt design.  A more nuanced, careful approach needs to be taken to clearly define what agentic AI can or cannot get to.

AI as an Offensive Security Tool

The final theme of the two conferences was the synthesis of AI into not only adjacent tasks, but society.  Brendan Dolan-Gavitt presented a very compelling talk (“AI Agents for Offsec with Zero False Positives”; you can see an unfortunately light-on-details article at https://www.darkreading.com/vulnerabilities-threats/ai-based-pen-tester-top-bug-hunter-hackerone) about how to ask LLMs to work as an attacker – moving against an established system to red team on your behalf.  The results speak for themselves, with over 174 vulnerabilities reported (22 CVEs issued at the time of the talk, with the rest pending).  This sort of embrace of AI as co-conspirator is not necessarily revolutionary, but it is iteratively necessary.

Thinking Like a Hacker in the Age of AI

A second talk, perhaps much further ahead than Dolan-Gavitt’s in terms of the impact of AI but less technical, was the talk given by Richard “neuralcowboy” Thieme titled “Thinking Like a Hacker in the Age of AI”.  Thieme, through his 45 minutes, discussed how technology and the means by which we pursue mastery have evolved rapidly.  To quote him, “Many of the current disciplines, now named, did not exist only 10 or 20 years ago.  And experts in them cannot keep up with all the materials published in their own areas of expertise”.  

Community and Shared Burden

I, as a humble systems integrator at an identity vendor, especially feel this sting – new advancements in the field seem to occur daily, and there is a fatigue that is generated by attempting to keep up by myself.  How comforting it is, then, to have a space such as IDPro from which I can have some of that cognitive burden of continual pursuit lifted – not because I or any one of our practitioners are somehow less motivated – but precisely because everyone is so motivated.  By knowing the value and depth the organization provides, we make each other better.

As our industry further synthesizes with generative models and a whole host of new disciplines arise from it, we as practitioners will need to be mentally flexible.  We will need to be continually curious.  We must keep shifting the context in which we engage with technology, such that it is with passion and intent.  We must keep shifting context such that we are no longer mere operators in these systems.  We must keep shifting context such that we become and remain creators and active participants in these systems.  As technologists and humans, we cannot afford to do otherwise.

Disclaimer: The views expressed in the content are solely those of the author and do not necessarily reflect the views of the IDPro organization.

Author

Rusty Deaton has been in Identity and Access Management for over a decade. He began in technology as a technical support engineer for a Broker-Dealer and has since worked across many industries, carrying forward a passion for doing right by people. When not solving problems, he loves to tinker with electronics and read. He currently works as Federal Principal Architect for Radiant Logic.

There’s been some buzz recently around the new specifications regarding the Credential Exchange family of specifications coming out of the FIDO Alliance, which has led to some confusion about the whole concept of exportable passkeys.

If you’re like many others, you might be confusing syncing passkeys and Credential Exchange (CX). (Note: Device-bound passkeys are not affected by these specifications). Before we spiral into hypothetical doom scenarios, let’s get one thing straight: this is not about syncing. It’s not about making passkeys magically work across all your devices and all your platforms, like some universal login pixie dust. This is about something much more specific, much more niche, and arguably much more important for long-term user control and ecosystem interoperability.

Let’s talk about Credential Exchange (CX).

What is Credential Exchange?

CX is a point-in-time migration protocol, not a sync protocol. If you’ve ever tried to leave one password manager for another, you probably remember the painful steps: exporting a CSV, crossing your fingers that nothing gets corrupted, and importing the file only to realize half your entries didn’t map correctly. Oh, and that CSV? Probably sitting unencrypted in your downloads folder.

The CX family of specifications was designed to fix that.

The CX family has both a schema specification and a protocol specification for securely moving passkeys (and other credentials and items you’d typically find in a credential manager) from one credential manager to another. Think: moving from Apple Passwords to Bitwarden, or Google Password Manager to 1Password. The goal is to eliminate the plain-text mess and standardize the fields so that you can actually preserve metadata like tags, notes, and usage history during a migration.

Again, because this keeps getting misunderstood, this is not a continuous cross-platform sync model. There’s no background process constantly pushing updates to different ecosystems. The user must initiate the migration from one credential manager to another. They can do this as many times as they want.
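
To illustrate the “migration, not sync” distinction in the simplest possible terms, here is a conceptual sketch. It is emphatically not the actual CX schema or protocol; it just models the behaviour described above: a user-initiated, point-in-time copy that leaves the source untouched and involves no ongoing synchronization.

```python
from dataclasses import dataclass

@dataclass
class CredentialRecord:
    rp_id: str          # the relying party the credential is for
    user_handle: str
    metadata: dict      # tags, notes, usage history: the kind of fields a standard schema preserves

def migrate(source: list, destination: list) -> None:
    """Point-in-time migration: the user triggers it, records are copied over,
    and nothing is removed from the source or kept in sync afterwards."""
    destination.extend(source)   # copy, not move; no background process follows

source_store = [CredentialRecord("example.com", "user-123", {"note": "work account"})]
destination_store = []
migrate(source_store, destination_store)
assert len(source_store) == 1 and len(destination_store) == 1   # both managers now hold the credential
```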

Why This Matters (Even if Most People Will Never Use It)

Let’s be honest: the regular person (hi, Mom!) will never touch CX. Most people will stick with whatever ecosystem their phone gives them—Apple, Google, whatever—and never think twice.

But for those who do care—those who worry about vendor lock-in, future-proofing access, or trust boundaries between providers—this matters a lot.

Imagine a world where:

  • You’re done with Apple and want to move everything to 1Password.
  • Your credential manager of choice is shutting down.
  • You want to archive your credentials in a way your estate executor can actually access.

These aren’t everyday scenarios, but they’re real. And right now, they’re painful. CX gives us a clean, interoperable way to move between providers without compromising security (or sanity).

What Could Possibly Go Wrong?

Plenty. Like any tool, CX can be misused.

One of the concerns floating around is that CX could become another attack vector. Bad actors could convince users to “migrate” credentials to a malicious app, and if that app poses as a legitimate destination, it could harvest the user’s entire credential set. The threat model here isn’t fully defined yet—though it probably looks like how attackers already trick people into exporting or copying passwords from their managers—but it’s worth watching closely. That said, OS platforms already have mitigations for dealing with malicious apps, both before and after they are installed (e.g., Google Play Protect, app store review).

From the relying party (RP) side, one of the issues here isn’t security as much as it is user experience and reliability. Some services today rely on hints from the credential manager (like “this credential lives in the Apple ecosystem”) to drive helpful UX choices. But once CX is in play, those hints can quietly become stale. A credential that once lived in one ecosystem may have been exported elsewhere, and the RP has no way of knowing. There are future plans to enable providing these hints when passkeys are used as well (not just during creation), which should alleviate these concerns.

This isn’t a CX design flaw. But it is a consequence of treating ecosystem-specific metadata as a proxy for where a credential lives, rather than what the protocol actually guarantees. As more users gain the ability to migrate their credentials, services that depend on these assumptions may need to rethink what “helpful” really means and how they rely on that information.

Security Model: New Questions, Not New Threats

CX doesn’t introduce a fundamentally new class of threats, but it does complicate the security model that many RPs and security teams have come to expect.

If CX has been used to export credentials, that same passkey may now live in a completely different ecosystem. There’s no standard way for RPs to tell whether a credential has moved or where it ended up. That makes it harder to scope the blast radius of an incident, and harder to know who still needs help.

There’s also the practical issue: most services haven’t built passkey rotation flows yet. Even if passkey re-registration is technically possible, very few RPs support it in production today. So when credentials are compromised and there’s no clear path to rotate them, users may fall back to less secure recovery options like SMS or email-based OTPs.

These aren’t dealbreakers. But they are operational challenges that need to be solved as CX gains adoption. If you’re building or maintaining a passkey-enabled system, now’s the time to think through:

  • What happens when a credential manager is breached?
  • Can you support credential rotation or re-enrollment?
  • Are you depending on ecosystem hints that might no longer be valid?

Let’s Not Lose the Plot

Yes, there are risks. Yes, they’re worth discussing. But let’s be clear: not every use case demands the same level of security response, and not every theoretical vulnerability warrants panic.

CX is a tool, not a mandate. Its value depends on how and where it’s used. That’s why these questions about breach impact, credential portability, and fallback mechanisms must be addressed as part of a proper risk management exercise, not just tossed around as worst-case hypotheticals.

Threat modeling isn’t about imagining everything that could possibly go wrong. It’s about weighing likelihood, impact, mitigation, and business value. Treating CX as inherently dangerous because it introduces new questions is a shortcut to bad security decisions. Ask the questions, but do it in context. 

Why Not Just Call It “Migration”?

Honestly, that might’ve avoided a lot of confusion. CX as a name is technically accurate, but it doesn’t scream “this is only for rare migrations.” And unfortunately, consumer tech reporting has run with the idea that CX means passkeys can be synced across all providers, finally making good on the cross-platform dream.

That’s… not what this is.

It’s also not a get-out-of-jail-free card for people storing the same passkey across multiple providers. If one manager is compromised, that same credential may be reused elsewhere. Using CX doesn’t remove the passkey from the source. That’s still manual and must be done by the user if the user wants to avoid having credentials in multiple locations. The best practice, just like with passwords, is still to use one provider, close old accounts when you’re done, and avoid scattering credentials like breadcrumbs across the Internet. 

Bottom Line: This Is About Control, Not Convenience

Exportable passkeys, via CX, aren’t for your average user. They’re for those who want choice, who don’t want to be tied to a single vendor forever, and who want a standards-based path forward.

It’s not about making your credentials work everywhere. It’s about giving you a secure, private way to move them somewhere else when you’re ready to go.

It may not be a feature you ever use. But you’ll be glad it exists when you need it.

Thanks to Dean H. Saxe and many others for all their support in answering my questions and reviewing the post!

Disclaimer: The views expressed in the content are solely those of the author and do not necessarily reflect the views of the IDPro organization.

Author

Heather Flanagan is the Principal at Spherical Cow Consulting, helping organizations navigate the fast-moving world of digital identity and Internet standards. With 15+ years of experience translating complex technical concepts into clear, actionable strategy, she is known for bridging communities and guiding collaborative work. Heather currently co-chairs the W3C Federated Identity and Exploration Interest Groups, the IETF Secure Patterns for Internet Credentials (SPICE) working group, and HotRFC. Her past roles include leadership positions with the OpenID Foundation, IDPro, the IETF/IRTF, and REFEDS. Named to the 2025 Okta Identity 25 as a top thought leader in digital identity, Heather is a frequent speaker and writer focused on standards, governance, and the real-world friction of identity implementation. You can find more of her blog posts (and link to an audioblog podcast!) on her website at https://sphericalcowconsulting.com.

Identiverse 2025 took place June 3–6 at Mandalay Bay in Las Vegas, bringing together over 3,000 identity and cybersecurity professionals for four days of keynotes, panels, and hands-on sessions. This year marked a pivotal shift: as artificial intelligence rapidly advances, the need to secure agent-based authentication and authorization has emerged as the next critical—and still largely uncharted—frontier.

Here are some highlights of the standout sessions, announcements, and community moments that shaped Identiverse 2025.

Highlights

IDPro Members’ Reception

The IDPro community gathered for a vibrant Members’ Reception at Mandalay Bay. The event, which drew a packed house of identity innovators, standards leaders, and privacy advocates, was standing-room only — a testament to the strength and momentum of the digital identity profession. IDPro members were welcomed by IDPro Board Members Hannah Sutor, Dr. Tina Srivastava, Heather Flanagan, and Bertrand Carlier. The evening unfolded seamlessly and stress-free, fostering meaningful connections in a warm and welcoming atmosphere. Drinks were provided courtesy of sponsor Hydden!

The reception reflected the heart of IDPro’s mission: to create, manage, and use digital identities in ways that are professional, ethical, secure, and privacy-preserving — all in service of reliable and high-value digital services. That shared commitment echoed throughout the room, as members old and new affirmed the principles that guide our growing community.

IDPro & IDAC “Identity Feud”


Fresh off their hit game show at FIDO Authenticate, Identity at the Center podcast hosts Jeff Steadman and Jim McDonald brought the fun to Identiverse with a spirited installment of “Identity Feud.” This high-energy showdown saw Team IDPro (Heather Flanagan, Mike Kiser, and Dr. Tina Srivastava) in a battle of identity trivia, best practices, and buzzy survey questions. The event packed a laughter-filled pavilion that cheered them on.

The event was more than just entertainment—it was a celebration of community. Practitioners and thought leaders mingled and bonded over shared identity quirks, highlighting the camaraderie that defines the IDPro community. In true Identiverse fashion, Identity Feud delivered connection, levity, and a reminder that the identity world knows how to work hard—and play hard too.

Identiverse 2025 Sessions

Ping Identity CEO Andre Durand headlined with a keynote on scaling secure digital business at the speed of trust. With more users, devices, bots, and AI expanding the threat landscape, Durand argued that continuous, contextual, and verified trust is needed to keep pace. He painted a picture of an identity future where adaptive, continuous authentication enables agility without sacrificing security, giving organizations an edge over adversaries. This call to embrace a zero-friction, high-trust paradigm set an optimistic tone for the conference.

“Data Provenance: Keystone of Trust in the Age of Deepfakes” – Top-ranked cybersecurity analyst Jack Poller held a fascinating discussion with IDPro Board member and Badge Inc. Co-Founder Dr. Tina P. Srivastava and former DARPA cybersecurity leader Dan Kaufman on preserving trust as AI-generated deepfakes blur reality. The panel discussed the critical need for strong data provenance to combat manipulated identities and forged credentials.

Masterclass – Identity-First Security: In a technical deep-dive, Wade Ellery (Field CTO, Radiant Logic) led a masterclass on fortifying defenses through an “identity-first security” approach. Ellery highlighted that identity has become the number one attack surface in cyber breaches, placing identity teams on the front line of defense. He showed how high-quality identity data is the linchpin: garbage in/garbage out identity data undermines security decisions, so organizations must transform their identity data management to enable fully empowered identity-first security. Attendees gained strategies to improve data integration and consistency, reinforcing that strong identity foundations are key to better defense.

Cisco spotlighted its identity vision with a focus on resilience and user experience. In a keynote, Matt Caulfield (Cisco Duo) urged teams to plan for outages and breaches—“your worst day on the job”—by embracing passwordless, proximity-based authentication. Later, IDPro member Chris Anderson tackled rising threats from fake identities, outlining how context-aware signals like device trust and anomaly detection can help spot imposters. Together, their message was clear: with modern design, security and seamless access aren’t in conflict—they’re two sides of the same strategy.

“Italian Precision and German Creativity” – Thales & NAB Conversation.

In a standout cross-continental session, Marco Venuti (Thales) and Olaf Grewe (National Australia Bank) explored B2B identity through the lens of “Italian Precision and German Creativity.” Their candid dialogue offered real-world strategies for managing third-party access and building trust in partner ecosystems. The session reflected Identiverse’s global outlook, and Thales reinforced its forward-thinking stance by showcasing quantum-safe, phishing-resistant authentication for future-ready MFA.

The IPSIE panel, featuring experts from Workday, Beyond Identity, Okta, and SGNL, tackled the complexity of identity standards like SAML, OAuth, and FIDO. Panelists including Jen Schreiber and Dean Saxe emphasized how technical profiles and frameworks (e.g. OpenID profiles) can bridge the gap between evolving protocols and enterprise needs—enabling secure, mix-and-match interoperability without the chaos. The message: simplifying identity integration is a shared challenge, but real progress is underway.

Expo Hall

The Expo Hall featured multiple booths, including IDPro, and was the location of many debates on identity, AI, and the future!

The message was clear throughout the week: identity isn’t just foundational to cybersecurity; it’s becoming the anchor point for securing a future shaped by intelligent systems.

Author

Dr. Tina Srivastava  is an MIT-trained rocket scientist, entrepreneur, technology expert, author, and the inventor of more than 15 patents.

This year, expo floors are louder than ever. But the real signal is quieter, traded in hallway whispers and private meetings among security leaders grappling with the same unsettling truth:

“We hired someone who wasn’t who they said they were.”

Not what if. Not someday. It’s already happened. And it’s happened to almost everyone.

The lucky ones discovered it during audits. Most uncovered their impostor after an incident. 

One CISO told me how their latest hire “didn’t seem quite right.” For months, it was just a hunch. Then all of a sudden, a simple helpdesk ticket unraveled into a months-long impersonation complete with falsified identity documents, deepfake-masked video interviews, and someone else entirely doing the employee’s work.

And that’s just one story. Across company sizes and sectors, I heard two confessions echo over and over, variations on a sinister theme: 

  1. “We thought we hired one person. We onboarded someone else.”
  2. “We thought we hired a superstar. They were actually from North Korea.”

Employee impersonation isn’t a future threat. It’s a current reality.

Workforce identity is a new primary attack surface. Why? Because today’s Identity & Access Management (IAM) systems were designed to authenticate users based on little more than device possession. 

Authentication factors, from passwords to passkeys, verify knowledge or possession of an enrolled device. And in a post-COVID world of remote hiring and global workforces, it’s harder than ever to know who’s really behind the screen.

The background check mirage: HR owns the process, IT inherits the risk.

Hiring checks amount to little more than blind trust. We hand out credentials to people we’ve only seen through a webcam. We assume that background checks and I-9 validation are both rigorous and effective. They’re not.

I watch companies invest millions in robust endpoint protection, phishing-resistant multi-factor authentication, and highly sophisticated monitoring systems. And I watch them leave their front door—specifically their hiring and onboarding process—wide open. 

Here’s the uncomfortable truth that CIOs, CISOs, and HR leaders are now grappling with:

Most companies have no idea if the person they hired is the same person being onboarded, or whether that person is legitimate.

Today’s hiring checks verify criminal records, employment history, and work authorization. Not actual people.

Ask HR and they’ll tell you the new hire was cleared. Background verified. No red flags. And in a way, they’re not wrong; the background check was clear. But here’s the catch: many U.S. background checks do little more than look up a Social Security Number against public criminal records databases. When a North Korean uses a stolen SSN that belongs to a real American with a clean record, the background check comes back green.

Maybe your company does “enhanced” background checks that go into education and employment histories. But we’re dealing with nation-state-sponsored fraud here; North Korea has demonstrated, repeatedly, that they can craft a compelling synthetic identity that passes these checks, too.

Remember: HR isn’t tasked with verifying the actual identity of an applicant. This means IT inherits all the risk, but has no role in verifying who someone actually is before issuing credentials.

And that’s how attackers win; they don’t bypass security controls, they become trusted users by convincing you to hire them. And once inside your networks, those secure credentials you spent so much time and money deploying can suddenly, ironically, turn against you.

The new insider threat wears a (deepfake) mask.

We’re well into the era of synthetic insiders. It’s common practice for even legitimate candidates to use AI-optimized resumes, cover letters, and even live genAI aides during their interviews. But some candidates are also using deepfake video filters, fake or altered identity documents, VPNs, RATs, and willing collaborators.

Sometimes it’s basic employment fraud. But more and more often, it’s a coordinated attack. Search “North Korean IT workers” on Google News, sort by Recent, and you’ll see what I mean. Fake IT workers affiliated with the DPRK are confirmed to have infiltrated “hundreds of the Fortune 500” and sent billions of dollars in revenue back to the North Korean regime. 

Threat intelligence researchers at Mandiant, DTEX Systems, and Palo Alto Networks, as well as the American, British, and South Korean governments, have all issued dire warnings, reports, and even sanctions targeting North Korean IT workers.

This is no longer an edge case. It’s a global epidemic of misplaced trust.

IT security starts with identity security. Identity security starts with onboarding security.

Whether your insider threat is North Korean or a garden-variety fraudster, the core problem is the same: unvetted identities gaining legitimate access to your most sensitive systems.

It’s time for companies to stop hoping that their hires are who they say they are and start proving it instead. Increasingly, I’m seeing that this means embedding identity verification directly into employee hiring and onboarding flows. 

Identity verification (IDV) asks a person to scan a government-issued photo ID document and then take a selfie to prove that they’re not impersonating someone else. The system validates the authenticity of the ID, validates the authenticity of the selfie, and then validates that the selfie matches the photo on the ID.
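
In decision-logic terms, the flow combines three independent signals. The sketch below is purely illustrative: the signal names and the 0.90 match threshold are assumptions, not any vendor's API, and in a real product each signal would come from dedicated document, liveness, and face-matching components.

```python
def verify_identity(document_authentic: bool, selfie_is_live: bool,
                    face_match_score: float, match_threshold: float = 0.90) -> dict:
    """Combine the three IDV checks: genuine document, live selfie, and a face match."""
    verified = document_authentic and selfie_is_live and face_match_score >= match_threshold
    return {
        "verified": verified,
        "signals": {
            "document_authentic": document_authentic,
            "selfie_is_live": selfie_is_live,
            "face_match_score": face_match_score,
        },
    }

print(verify_identity(True, True, 0.97))   # genuine ID + live selfie + strong match -> verified
```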

There are dozens of companies selling IDV products. And although the core user flow is largely the same, there are important differences in how they implement and protect this flow, and how they manage data privacy. 

The upshot is that some IDV systems are consumer-grade and some are workforce-grade. And it’s important to understand the difference.

Consumer-grade IDV performs Know Your Customer (KYC) checks with the goal of converting users into customers. These products are built for compliance, not for security. Some companies are trying to retrofit their KYC products into workforce use cases, with mixed results. 

Workforce-grade IDV is built from the ground up for workforce use cases like employee onboarding and MFA resets. These products typically don’t check the boxes that KYC and other regulations require. But the tradeoff is that they are much more secure against injection attacks, deepfakes, and other advanced threats that beat consumer-grade IDV systems. 

Four factors to consider when implementing workforce IDV.

I’ve had hundreds, maybe thousands, of conversations with CIOs, CISOs, and their teams over the past few years. I’m glad to be able to say that IT and security departments are increasingly aware of IDV as a solution to these problems. But some confusion remains.

So, here’s what I encourage CIOs to consider when evaluating IDV solutions for the workforce.

Capture channels: Allowing users to capture their ID and selfie via a webcam is undoubtedly convenient, but it comes with a major security tradeoff. Webcams and web browsers, including mobile browsers, are highly susceptible to injection attacks that insert deepfake media and false data. I also strongly recommend that you completely avoid any IDV system that allows people to upload documents or selfie photos.

Liveness detection: How do you prove that it’s a real, live human being actually in front of the camera? IDV companies typically speak in terms of “active” or “passive” liveness detection. Active models make a user dance for the camera, turning their head side to side or in a circle. Passive methods range from flashing lights to basic data analysis to more sophisticated “spatial” techniques leveraging three-dimensional depth maps and other advanced sensors.

Integrations: One might argue that any security technology is only as good as its ease of deployment. Increasingly, IDV providers are building plug-and-play integrations with workforce identity providers and enterprise apps. Look for a company that slots into your existing tech stack (IAM, HRIS, ITSM, SIEM, ATS, etc.), with little to no dev work. Consider if you need a turnkey solution you can deploy ASAP, building blocks you’ll need to piece together yourself, or something in between.

Privacy: I’d be remiss if I didn’t talk about data privacy and security here. Using IDV in a workforce context brings very different privacy considerations than in consumer use cases. For example, if you want to use IDV to verify candidates at the interview stage, be sure that you research any applicable laws or guidance. And make sure that your IDV system can be configured to adhere to those regulations, such as only showing the result of the verification.

The conversation we’re not having loud enough: Secure credentials are the frame, not the foundation.

Security teams shouldn’t be the last to find out there’s an impersonator inside the network. But today, they often are. And that’s because true identity verification was never part of your hiring process.

Whether it’s for new hires, contractors, offshore teams, or step-up verifications later in the employee lifecycle, identity verification is the critical missing element for ensuring that the human to whom you’re issuing credentials is exactly who they claim to be.

If you don’t know who exactly is enrolling or resetting credentials, every other layer of your security stack is built on sand.

It’s time to shift left, not just in software development, but in trust. Because if your enterprise is still issuing credentials without verifying who you’re issuing them to, you may already have hired an impostor.

And by the time you realize it, they’re already inside.

Disclaimer: The views expressed in the content are solely those of the author and do not necessarily reflect the views of the IDPro organization.

Author

Aaron Painter is the CEO of Nametag, the world’s first identity verification system purpose-built to combat deepfakes and other AI-powered threats. Driven by his personal experiences with online fraud and identity theft, Aaron assembled a team of security experts to create the next generation of account protection. Aaron is a best-selling author, former VP and General Manager at Microsoft, Fellow at the Royal Society of Arts, Founder Fellow at OnDeck, a member of the Forbes Business Council, and a senior External Advisor to Bain & Company.

Identiverse has become a lot of things for a lot of different people. It’s the only place I know where Passkeys and NHI (Non-Human Identity) can lead to hours of discussion and meeting new people from around the globe simultaneously. 

My name is Reece Price. I am a cybersecurity professional who has spent the first decade of my career in the identity space, working in higher education, consulting, and now for a private business services company. 

Identiverse, Year Two

2025 was my second time attending Identiverse, and I had a great repeat experience. In my current role, I am not here to sell a new product or gain a new client; I am looking to learn where the industry is headed and how AI will impact it. Several of my I&A (Identity and Access) colleagues joined me at Identiverse, so I hope we can supercharge our takeaways together. A few of my highlights are as follows.

B2B CIAM

B2B CIAM is still flying under the radar, in my opinion, but Thales hosted two great sessions: “Italian Precision and German Creativity: A Conversation on Optimizing B2B Services” and “B2B IAM Smackdown: Defending the Future of Partner Identity.” Both sessions were very thought-provoking as I continue to mature my own B2B CIAM program.

Decentralized Identity

Decentralized Identity is how the world gets to a place where physical wallets holding a Driver’s License are a thing of the past. To get there, however, the world’s governments will play a crucial role, both from a policy standpoint and by adopting the right technology without critical flaws. As Identity professionals, we must ensure this technology is developed and securely deployed worldwide.

And of course, AI

Any list of takeaways from a technology conference would be lacking without AI. AI will change the world, but Andre Durand (Ping Identity) noted in his keynote that verified trust is what will keep our new AI-powered world secure and allow transactions to flow across the internet. It’s no longer enough to know that Reece can complete a transaction; we must understand whether Reece has authorized some agent (AI or not) to complete it on his behalf.

Identiverse, Year 3?

Overall, if you are an Identity professional and are unsure about attending next year, I can’t recommend it enough. Everyone in the Identity industry can find something at Identiverse that they can learn from. I hope to see everyone again in Las Vegas for Identiverse 2026.

Disclaimer: The views expressed in the content are solely those of the author and do not necessarily reflect the views of the IDPro organization.

Author

Reece Price, CIAM Manager at CSCGlobal, is a seasoned CISSP-certified IAM leader with extensive experience in the commercial, government, and academic IAM sectors. He has worked with most major vendors, including Ping Identity (ForgeRock), SailPoint, and Okta. Using his background and experience, he understands your technology, business, and people to create solutions that work for everyone.

The world of identity and access management (IAM) touches every aspect of online technology today. One of the better side effects of this is that professionals in the field develop a diverse range of skills throughout their careers. We recently analyzed the data from the latest IDPro Skills, Programs & Diversity Survey. Learning from IAM professionals at different experience levels helps us uncover key trends in skill acquisition. The results reveal not only which areas experts focus on but also how beginners can shape their career paths effectively.

How Experience Impacts Identity Professionals’ Skillsets

We examined responses from IAM professionals with varying years of experience and categorized their expertise in different identity-related areas. The goal? To uncover how expertise develops over time and what this means for the future of identity management.

Key Experience Groups Analyzed:

  • 0 – 2 years: Early-career professionals entering the IAM field.
  • 3 – 5 years: Mid-level professionals building their expertise.
  • 6 – 10 years: More experienced professionals refining their specializations.
  • 11 – 15 years: Senior professionals leading strategic IAM initiatives.
  • 15+ years: Experts and veterans shaping the industry’s direction.

The Skills That Define Each Stage of an IAM Career

Early-Career Professionals (0-2 Years) – Building Foundations

  • Those with 0-2 years of experience focus primarily on authentication, authorization, and cloud-based identity solutions like IDaaS (Identity-as-a-Service).
  • Many also highlight API protection and access certification as key areas of early exposure.
  • However, this group often lacks experience in governance, compliance, and more advanced security architectures.

Takeaway:
New professionals should focus on strong fundamentals like authentication protocols, access control, and API security while seeking exposure to governance and compliance areas.

Mid-Level IAM Professionals (3-10 Years) – Broadening Knowledge & Specialization

  • Access certification, identity governance, and directory services become more prominent in this stage.
  • Many professionals begin working on privileged access management (PAM) and multi-factor authentication (MFA) implementations.
  • Skills like risk-based authentication and policy-based authorization gain importance.

Takeaway:
Mid-level professionals should hone their expertise in governance and directory services while exploring advanced security measures like PAM and risk-based authentication.

Senior & Expert-Level IAM Professionals (11+ Years) – Shaping the Future of Identity

  • Those with 11+ years of experience are more likely to work in identity governance and administration (IGA), federated identity, and self-sovereign identity (SSI).
  • This group is leading discussions around verifiable credentials, decentralized identity, and blockchain-based authentication.
  • They are also deeply involved in setting enterprise-wide security strategies and compliance frameworks.

Takeaway:
Experienced IAM professionals should focus on emerging trends like decentralized identity while shaping organizational security policies and governance models.

Key Insights from the Survey Data

Experience Brings Specialization:

While early-career professionals focus on fundamental security practices, mid-career experts develop deep specializations in governance, risk management, and advanced security protocols.

Governance & Compliance Come with Time:

Most professionals don’t engage with identity governance and compliance until 6+ years into their careers—suggesting these are skills best learned through hands-on experience rather than theoretical knowledge.

Cloud & API Security Are Crucial at Every Stage:

From newcomers to veterans, API protection, cloud IAM, and authentication technologies remain key focus areas, reinforcing their role as core competencies in identity security.

How to Advance Your Career in IAM

  • For Beginners (0-2 Years):
    • Learn authentication, authorization, and cloud IAM basics.
    • Gain hands-on experience with IDaaS, MFA, and API security.
    • Build a strong foundation in identity standards like OAuth, SAML, and OpenID Connect.
  • For Mid-Level Professionals (3-10 Years):
    • Develop expertise in governance, compliance, and privileged access management (PAM).
    • Work on risk-based authentication and federated identity implementations.
    • Get familiar with identity governance tools and architectures.
  • For Senior Professionals (11+ Years):
    • Lead strategic security initiatives and develop IAM policies.
    • Stay ahead of trends in decentralized identity, verifiable credentials, and blockchain authentication.
    • Contribute to industry standards and best practices.

Final Thoughts: The Future of IAM Careers

IAM is a rapidly growing field, and professionals who continuously adapt and specialize will be in high demand. Whether you’re just starting or have decades of experience, understanding these trends can help you navigate your career strategically. Make sure you respond to next year’s IDPro Skills, Programs & Diversity Survey! It only gets better the more people respond.

What do you think? What skills have been most valuable in your IAM journey? Let’s chat about it in our members-only Slack!!

Want More Insights Like This?

📩 Join IDPro for expert insights around IAM trends, career tips, and more!

Author:

Heather Flanagan, Principal at Spherical Cow Consulting, comes from a position that the Internet is led by people, powered by words, and inspired by technology. She has been involved in leadership roles with some of the most technical, volunteer-driven organizations on the Internet, including IDPro as Principal Editor, the IETF, the IAB, and the IRTF as RFC Series Editor, ICANN as Technical Writer, and REFEDS as Coordinator, just to name a few. If there is work going on to develop new Internet standards or discussions around the future of digital identity, she is interested in engaging in that work. You can learn more about her on LinkedIn or reach out to her on the IDPro Slack channel.

The OAuth Security Workshop 25 (OSW) took place in Reykjavik, Iceland this year, in the last week of February. Currently in its 10th year, the workshop was initially created by two different research groups, from the Universities of Ruhr-Bochum and Trier, who independently discovered attacks on OAuth and OpenID Connect around the same time. These researchers first met in Darmstadt in 2015 with members of the OAuth working group to discuss the issues they surfaced and find mitigation strategies. The participants also decided that, given the need for a better exchange of information and knowledge, a regular meeting or event was necessary. The OAuth Security Workshop was born, with the goal of ensuring that research and standardization efforts stay in sync. Since then, the OSW has been run and organized independently by Dr Daniel Fett, Guido Schmitz, and Steinar Noem, without corporate backing or funding. Thankfully, individual workshops have corporate sponsors, but we still have to thank Daniel, Guido, and Steinar for their volunteer efforts in keeping this community alive and thriving for the past decade! As participants like to point out themselves, OSW is a meeting place for a couple hundred geeks, but those are the geeks who actually drive the identity standards that the current World Wide Web is built upon!

OSW Sessions of Interest

OSW is itself part conference, part unconference. The mornings are dedicated to proper talks, keynotes, and sessions for audiences of all horizons – students, researchers, security architects, seasoned RFC writers, thought leaders, startup founders, and even Digital Identity Advancement Foundation “Vittorio Bertocci Award” grantees. The afternoons are open for unconference-style open sessions, the contents of which are decided each day by popular vote. Several tracks were thus discussed, covering the latest and greatest work currently in progress or recently published in the OAuth universe at large.

Verifiable Credentials

A set of sessions was dedicated to the various groups working on Verifiable Credentials and related specs (OID4VC, etc.). Kristina Yasuda and her peers showed that a lot of effort has been going into standardizing the formats for representing credentials, building trust frameworks, and ensuring that these digital credentials can be read, presented, understood, and, most importantly, trusted. A huge driver here is still the pan-European eIDAS initiative, whose goal is to provide all European citizens with Digital Credentials.

WIMSE-cal Workloads

On another tack, a lot of work has been going into securing Workloads, with the advent of the new Transaction Tokens and WIMSE specifications, as well as the new implementations of the not-so-new SPIFFE framework. As defined in the Transaction Tokens spec, a Workload is “An independent computational unit that can autonomously receive and process invocations, and can generate invocations of other workloads. Examples of workloads include containerized microservices, monolithic services and infrastructure services such as managed databases”. This also applies to our friends the AI Agents. Pieter Kasselman highlighted that workloads have two main problems: providing them with a provable unique identity that enables them to authenticate with each other (support for which can be provided through SPIFFE, but also through the new concept of a Workload Identity Token, or WIT), and ensuring that the same context is shared across all the workloads participating in the same transaction (achieved through Transaction Tokens). Access to any resource by any of these Workloads can then be properly authorized within the context of the operation at hand.
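As a rough illustration of the Transaction Token idea (a non-normative sketch; the claim names loosely follow the in-progress draft and may change as the work evolves), a short-lived token minted for a single transaction might look something like this:

```python
# Illustrative, non-normative shape of a short-lived transaction token payload.
# Claim names loosely follow the in-progress OAuth Transaction Tokens draft; hosts are placeholders.
import time

now = int(time.time())
txn_token_claims = {
    "iss": "https://txn-token-service.internal.example",  # placeholder internal issuer
    "aud": "trust-domain.internal.example",               # the trust domain, not a single workload
    "iat": now,
    "exp": now + 300,                 # minutes, not hours: scoped to one transaction
    "txn": "txn-7f3a19",              # placeholder unique id tying all workload hops together
    "sub": "user:alice",              # the original (human or non-human) subject of the call
    "purp": "payment.initiate",       # the purpose of this transaction
    "rctx": {                         # requester context shared by every workload in the chain
        "ip": "203.0.113.7",
        "auth_method": "mfa",
    },
}
```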

The Next Member of the OAuth Family

This topic led to some good follow-up discussions after Justin Richer presented the HTTP Message Signatures RFC, an alternative to DPoP (itself one of the legitimate children of OSW). What would happen if the workload couldn’t access the Authorization Server or the client key material? By the end of the conference, a new proposal had been submitted to the IETF. Discussions in Bangkok promise to be epic.

FAPI

On the development/engineering security side, the FAPI 2.0 specification was also released recently. It includes various updates to the security posture required of its participants. As for gauging the security of web applications, the Open Worldwide Application Security Project (OWASP) just released version 5.0 of its Application Security Verification Standard (ASVS). Elar Lang presented the ASVS project, whose primary goal is to provide an open application security standard for web apps and web services of all types. The standard provides a set of controls that can be used to assess or test the security of any system. Implementers can thus choose which controls to focus on to secure their applications.

Factors and Claims

Jeff Lombardo and Alex Babeanu have introduced a new set of claims for the JWT Access Token profile within the OAuth2 standard, along with a new flow. These proposed claims aim to enhance visibility into the Client entity itself. While existing OAuth2 flows provide extensive information about the end-user making requests to access resources (through access or ID token claims), there is little standardization around identifying and assessing the client application that the end-user is authorizing. Specifically, there is no widely accepted way to determine the level of assurance associated with the client entity. For example, how was the client authenticated? Was it a simple ID and secret, mTLS, or a signed JWT assertion? These methods vary significantly in security, with cryptographic signatures offering stronger assurances. Additionally, what security extensions were applied in the OAuth flow? Was PKCE or DPoP used? These factors can impact the overall security posture of a request.
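To picture what this could look like, here is a hedged sketch of an access token payload carrying such information; the client-assurance claim names are purely hypothetical illustrations of the proposal, not standardized claims.

```python
# Hypothetical illustration of "client assurance" claims inside a JWT access token payload.
# None of the client_* claim names below are standardized; they only sketch the kind of
# information the proposal would like resource servers and PDPs to be able to see.
access_token_claims = {
    "iss": "https://as.example.com",
    "sub": "user:alice",
    "aud": "https://api.example.com",
    "client_id": "mobile-banking-app",
    # --- proposed client-assurance information (illustrative names) ---
    "client_auth_method": "private_key_jwt",     # vs. "client_secret_basic" or "tls_client_auth"
    "client_flow_extensions": ["pkce", "dpop"],  # security extensions observed during the flow
    "client_assurance_level": "high",            # derived by the AS from the two values above
}
```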

Access Control Mechanisms

As access control mechanisms evolve, these considerations are becoming increasingly important. With the rise of AI agents, Policy Decision Points (PDPs) must assess not just the end-user but also the security of the calling application itself. By incorporating these new claims, PDPs can make more informed access control decisions, ensuring stronger and more adaptable security policies.

Thus, better integration with AuthZEN-compliant Policy Decision Points (PDPs) is proposed in a couple of ways:

  • OAuth Authorization Servers (AS) should make direct AuthZEN calls to compliant PDPs as part of their usual token-minting ceremonies. This will supply the PDPs with the additional client claims described above to help in decision-making. (We are thinking in particular about RAR requests, which can be complex authorization requests).
  • The authors also proposed a new Step-Up Authorization Protocol as an extension to RFC 9470, Step-Up Authentication Protocol. In this new flow, a Resource Server can request an Authorization Step-Up and require a new set of client claims from the client. The client is then responsible for obtaining these claims by, for example, authenticating using a stronger method (such as mTLS or signed assertions) and ensuring certain extensions (such as DPoP) are presented.

Work on drafts for these extensions has already started.
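To ground the first of the two integration points above, here is a hedged sketch of an Authorization Server asking a PDP for a decision during token minting. The request shape follows the AuthZEN access evaluation API; the host name, the action name, and the client claims forwarded in `context` are assumptions for illustration.

```python
# Sketch of an AS calling an AuthZEN-style PDP while minting a token.
# The request/response shape follows the AuthZEN evaluation API; the PDP host,
# the action name, and the forwarded client claims are illustrative assumptions.
import requests

evaluation_request = {
    "subject": {"type": "user", "id": "alice@example.com"},
    "action": {"name": "mint_token"},
    "resource": {"type": "api", "id": "payments"},
    "context": {
        # proposed client-assurance claims, forwarded so the PDP can weigh them
        "client_auth_method": "private_key_jwt",
        "client_flow_extensions": ["pkce", "dpop"],
    },
}

response = requests.post(
    "https://pdp.internal.example/access/v1/evaluation",  # placeholder PDP endpoint
    json=evaluation_request,
    timeout=5,
)
print(response.json().get("decision"))  # True or False
```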

Closing With a Bang

Finally, as Mike Jones pointed out during his session entitled “The Cambrian explosion of OAuth and OpenID Specifications”, there are rather a LOT of standards in the OAuth universe, over 100. So much so that it can be hard for newcomers or implementers to find the right path in this forest. This is nevertheless also a sign of a healthy ecosystem, where more and more problems are tackled. Like the Cambrian explosion that our planet Earth experienced some 540 million years ago, we may be witnessing an explosion in the diversity of Digital Identity topics and concerns, a good sign that we will keep busy for the foreseeable future.

Disclaimer: The views expressed in the content are solely those of the author and do not necessarily reflect the views of the IDPro organization.

Authors:

Alex Babeanu is a seasoned expert with over two decades of building innovative IAM solutions using Graphs and Open Standards, as a principal Engineer, Consultant, Product Manager and CTO. A passionate advocate for the graph-based approach to IAM, Alex has presented at leading conferences and contributed extensively through published papers and blogs. As a founding member of IDPro and part of its editorial committee, Alex plays a key role in curating content for the organization’s monthly publications. Currently, he leads the Access Management product at Indykite, a cutting-edge platform that harnesses graph data and AI to simplify complex identity challenges.

Badges: IDPro Member, IDPro Editorial Committee, IDPro BoK Reviewer, IDPro Newsletter Author, IDPro Founding Member

Jeff Lombardo is a Solutions Architect expert in IAM, Application Security, and Data Protection. Through 15 years as an IAM consultant for French, Canadian, and US enterprises of all sizes and business verticals, he has delivered innovative solutions with respect to standards and governance frameworks. For the last five years at AWS, he has helped organizations enforce best practices and defense in depth for secure cloud adoption.

Integrating cryptography with Identity and Access Management (IAM) isn’t just a good idea – it’s a necessity in today’s complex threat landscape. For too long, information security frameworks have treated cryptography, especially PKI, as a separate entity from access control. While PKI’s historical focus on human identity lifecycle management (think Registration Authorities, Certificate Authorities, and identity proofing) made sense in its time, it now significantly overlaps with modern enterprise IAM systems. Traditional PKI models, often defined by RFC 6484-style Certificate Policies, even included physical security, audit logs, and personnel controls – domains now covered mainly by ISMS frameworks like NIST SP 800-53 or ISO 27002. This redundancy creates complexity and potential inefficiencies.

Organizations grappling with regulations like DORA and NIS-2, or simply trying to mitigate information security and privacy risks, must effectively weave cryptography into the fabric of their IAM. This integration must also account for modern realities: the rise of cloud computing, the complexities of supply chain management, the criticality of incident response, and the ever-expanding attack surface.

Understanding Stakeholder’s PKI Needs

So, where do we begin? A key first step is understanding stakeholder information needs. By framing these needs as pointed questions – much like a well-crafted Statement of Work – we can define project scope and deliver precisely what’s required. Here are ten critical questions to guide your thinking:

1. Who are we dealing with: B2E, B2B, or B2C?

Just as in Identity Governance and Administration (IGA), protection needs are intrinsically linked to the “user”. We must consider both the legal/organizational context (workforce, business partner, consumer, etc.) and the user’s nature (human, device, workload, service, agent). One-size-fits-all policies simply won’t cut it. Policies must fit each category’s specific requirements.

2. How sensitive is the data: Public, Restricted, or Highly Restricted?

A tiered approach, typically three or four levels, offers a practical compromise between granular control and manageable complexity. This applies across the board – identity assurance (think NIST 800-63 LoA), risk profiles (NIST 800-53 baselines), and information classification (public, restricted, highly restricted). We need clear policies dictating when data must be encrypted. For instance:

  • Restricted Data: Encrypt when traversing external networks.
  • Highly Restricted Data: Encrypt always.
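As a trivial sketch of how such a tiered rule can be made machine-checkable (the level names mirror the examples above; everything else is an assumption):

```python
# Minimal sketch of the tiered encryption rule above; level names mirror the examples.
def must_encrypt(classification: str, leaves_internal_network: bool) -> bool:
    if classification == "highly_restricted":
        return True                         # Highly Restricted: encrypt always
    if classification == "restricted":
        return leaves_internal_network      # Restricted: encrypt when traversing external networks
    return False                            # Public: encryption not mandated by classification

assert must_encrypt("highly_restricted", leaves_internal_network=False)
assert must_encrypt("restricted", leaves_internal_network=True)
assert not must_encrypt("restricted", leaves_internal_network=False)
```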

Don’t forget Segregation of Duties (SoD). Should system administrators really have unfettered access to highly restricted data? Think carefully about the implications. This adds another layer of security if an admin account is hijacked.

3. Scoping cryptography: what are the use cases and cryptographic application domains?

Let’s avoid muddying the waters with an overly broad definition of “use case.” We’ll stick to the classic actor-uses-black-box metaphor (enrollment, key distribution, revocation, key escrow). “Cryptographic application domains,” on the other hand, are technology-dependent: TLS, SSH, IPSec, storage encryption, secure boot, code signing, Kerberos, WebSSO, client authentication, S/MIME, document signatures – the list goes on.

4. How do IAM assurance levels connect to cryptographic keys?

Managing keys securely is a constant battle. The late 90s hype around smart cards and dedicated terminals has given way to more scalable, but often less secure, approaches. Private keys are now often centrally generated, stored in directories, or held within Windows domain login sessions. While scalability is paramount, we must balance it with risk. Hardware-based key protection remains a best practice for high-assurance needs.

5. How do we enforce segregation of duties in high-risk environments?

When defining key management controls, ask yourself these tough questions:

  • If keys are accessible via enterprise SSO, how do we achieve sufficient assurance?
  • Do we need a separate, hardened environment (jump host, privileged access workstation)?
  • How do we prevent administrators from impersonating key holders?

6. What are the critical differences between normal operations and emergency recovery?

Incident recovery demands meticulous planning. Consider:

  • Can we establish a secure operational base with minimal dependencies during a crisis?
  • Can we support secure login without relying on enterprise SSO (hint: X.509 client certificates are your friend)?

Smoothly operating, highly interconnected systems can become a tangled mess during recovery. A simple, isolated PKI for a small group of indispensable administrators, using smartcard-like USB tokens, can be a lifesaver, providing a trusted authentication platform in both normal and recovery modes.

7. How tightly is asset management integrated?

Many organizations fail to properly map cryptographic uses and key material to their asset inventory. Drill down:

  • Do we know who owns the keys and certificates? Is there a clear link to an asset owner role?
  • Is the protection level of keys and certificates derived directly from the asset’s protection level?
  • What algorithms, parameters, key sizes and protocols are used where?
  • What should be encrypted but isn’t?

8. What about roles, responsibilities, and organizational structure?

Consider these organizational factors:

  • Is the PKI team tied to a specific technology (like Active Directory)? If so, does the CA’s security depend on it?
  • Are responsibilities for decentralized key material clearly defined for server and service administrators?

9. How do we manage key and certificate lifecycles effectively?

Ask yourself:

  • When can we leverage standard IGA processes (request, approve, auto-provision, recertification)?
  • How do we align data between existing key management systems and IGA?

10. How do we handle user-based encryption securely?

User-based encryption is deceptively complex and requires a risk-based analysis. Encryption itself is straightforward; key management combined with entitlements is the real challenge. Crucial questions include:

  • What technique do we use to share access to encrypted content? (Shared keys are a terrible idea; key wrapping with individual keys is best practice, as sketched after this list; a KMS is a secure alternative).
  • How do we use entitlement management to grant multiple users access to an encryption key?
  • What safeguards are in place during provisioning to prevent unauthorized key access?
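Here is a minimal sketch of that per-user key-wrapping pattern, assuming the Python `cryptography` package. It illustrates the technique only; a production design would sit behind a KMS and your entitlement management, not in application code.

```python
# Minimal sketch of per-user key wrapping ("envelope encryption") for shared encrypted content.
# Assumes the `cryptography` package; the flow is illustrative, not a production design.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

# One content key (DEK) per document; the DEK itself is never shared between users.
dek = AESGCM.generate_key(bit_length=256)
nonce = os.urandom(12)
ciphertext = AESGCM(dek).encrypt(nonce, b"highly restricted payload", None)

# Each authorized user gets the DEK wrapped under their own public key.
# Entitlement management decides who appears in `authorized_users`.
authorized_users = {"alice": rsa.generate_private_key(public_exponent=65537, key_size=2048)}
wrapped_keys = {
    name: key.public_key().encrypt(
        dek,
        padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                     algorithm=hashes.SHA256(), label=None),
    )
    for name, key in authorized_users.items()
}

# Revoking access then means deleting that user's wrapped key (and re-wrapping with a new DEK).
print(len(wrapped_keys))  # 1
```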

PKI, IAM, and Governance

These ten questions underscore the fundamental overlap between cryptography and IAM. Effective governance and architecture planning requires close collaboration – a true meeting of the minds – between IAM and cryptography experts. Only then can we build truly secure and resilient systems.

Disclaimer: The views expressed in the content are solely those of the author and do not necessarily reflect the views of the IDPro organization.

For additional reading on PKI and IAM, see the IDPro Body of Knowledge article, “Practical Implications of Public Key Infrastructure for Identity Professionals” by Robert Sherwood.

Author

Rainer Hörbe, based in Vienna, Austria, is Senior IAM Architect at KPMG.

Since 2017, IDPro® has been growing, evolving, and supporting the practitioners working in the field of digital identity. From our founding to launching CIDPRO® in 2021, we’ve worked to build a strong, connected IAM community.

Now, it’s time to welcome a new leader to help shape IDPro’s future. With the board size set at nine members, we have one open seat for an engaged and enthusiastic professional to join us in July 2025.

What It Means to Be on the IDPro Board

This isn’t a ceremonial role: It’s hands-on! IDPro’s board is operational, which means rolling up your sleeves to keep things moving. Board members typically contribute 10–15 hours per month, participating in monthly meetings, managing projects, and engaging with existing and potential members.

If you’re passionate about the practice and the profession of digital identity and want to actively contribute to the success of IDPro, we’d love to see you throw your hat in the ring. As a board member, you’ll:

✅ Help steer IDPro’s strategic direction
✅ Support financial health and organizational growth
✅ Strengthen the IAM community and professional network
✅ Work alongside industry leaders and practitioners
✅ Gain leadership experience and visibility in the field

We value collaboration, inclusivity, and diversity; we encourage identity professionals from all backgrounds to apply.

Ready to Nominate?

If you or someone you know would be a great fit, now’s the time to act! Self-nominations are welcome—after all, who knows your passion and skills better than you?

📩 To apply, email director@idpro.org for the nomination packet.
📅 Deadline for completed nominations: April 30, 2025.

This is your chance to make a real impact in the world of digital identity. We can’t wait to see who steps forward!

Learning a language can be quite difficult. Sure, you can opt for mobile apps that claim to teach you the language in “three short months!”, but anyone who’s tried to order the ratatouille in Paris, the Tom Yam Koong in Bangkok, or the Burnt Ends in Texas quickly learns that there’s a difference between knowing a few words and being able to communicate useful information in the real world. What most of us truly need is a conversation partner—someone who will always respond with the correct answer and gently correct our mistakes as they slowly fade into proper usage.

Adopting identity standards is a lot like acquiring a foreign tongue. While it’s relatively easy to have a surface knowledge of the technology, most of us don’t easily understand what is occurring in these identity approaches until we can actually interact with them personally. As we explore them by hand, we see what each exchange looks like, what happens when things fall over, and what current systems do when faced with boundary cases.

In short, we need a “conversational” partner that will let us try out these interactions and learn the proper call and response.

A Demo System as a Conversational Partner

Open-source or publicly-available demo systems are crucial to the learning process. They allow for a deeper understanding of interactions and the chance to learn via experience. When it comes to emerging standards, they speed adoption tremendously, as can be seen from examples such as AuthZen and the Shared Signals Framework from the OpenID working groups.

Those of us participating in the Shared Signals Framework Interop this year in March (and coming up again in December) have benefitted from Caep.Dev – an online receiver/transmitter that can be used publicly both to understand interactions within the standard and to identify where ongoing development efforts may have failed to follow the specification. (Not that Caep.Dev was infallible by the way—it helped clarify issues on both sides of most interactions.) Without the existence of this kind of conversational partner, the standard would see much slower adoption and lower levels of successful interop participation.

Just Try It Out

But it’s not just emerging standards, either—existing standards benefit from conversational partners as well. Take SCIM, for instance; it has been around for at least nine years, but still benefits from projects such as Arie Timmerman’s Scim.Dev. Users can explore the world of SCIM, including my personal favorite emerging standard: SCIM Events.

I’ll let Arie describe what he’s created over on Scim.Dev:

“Tell me and I forget, teach me and I may remember, involve me and I learn.” This wisdom—shared by Benjamin Franklin—underpins the philosophy behind SCIM Playground. Rather than responding to questions like “How do I integrate using SCIM?” with “Read the specs”, we can now say, “Just try it out.” A demo environment is one click away, complete with optional dummy users and groups to help you get started quickly. Many IT professionals perceive SCIM as complex or challenging to understand, but this playground and testing environment can help overcome these barriers and encourage adoption of the protocol.
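In that spirit, a first exchange with a SCIM 2.0 endpoint can be a single user-creation call. The base URL and bearer token below are placeholders you would swap for your playground tenant; the schema URN and attributes come from the SCIM core schema (RFC 7643/7644).

```python
# A first "hello" to a SCIM 2.0 playground: create one user.
# BASE_URL and TOKEN are placeholders; the payload follows the SCIM core User schema.
import requests

BASE_URL = "https://scim.example.test/v2"  # placeholder: point this at your playground tenant
TOKEN = "replace-me"                       # placeholder credential

new_user = {
    "schemas": ["urn:ietf:params:scim:schemas:core:2.0:User"],
    "userName": "ada.lovelace",
    "name": {"givenName": "Ada", "familyName": "Lovelace"},
    "active": True,
}

resp = requests.post(
    f"{BASE_URL}/Users",
    json=new_user,
    headers={"Authorization": f"Bearer {TOKEN}", "Content-Type": "application/scim+json"},
    timeout=10,
)
print(resp.status_code, resp.json().get("id"))  # 201 and the server-assigned id on success
```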

Sites such as Caep.Dev and Scim.Dev (no, they’re not all suffixed with .dev) give us the opportunity to practice using these standards, write prototype and production code against them, and level up quickly as we rush to enhance the utility of identity. These kinds of publicly available tools exist for most standards—easily found a few short searches away (ask on the IDPro Slack if you’re having difficulty uncovering what you need).

Accelerate Your Progress

So, if you’re looking to learn something new about identity or want to understand a new or emerging standard, accelerate your progress the same way you would if you were trying to gain fluency in a language other than your own: find a conversational partner.

Disclaimer: The views expressed in the content are solely those of the author and do not necessarily reflect the views of the IDPro organization.

Author Bio

Director of Strategy and Standards, SailPoint

Mike Kiser is insecure. He has been this way since birth, despite holding a panoply of industry positions over the past 20 years—from the Office of the CTO to Security Strategist to Security Analyst to Security Architect—that might imply otherwise. In spite of this, he has designed, directed, and advised on large-scale security deployments for a global clientele. He is currently in a long-term relationship with fine haberdashery, is a chronic chronoptimist (look it up), and delights in needlessly convoluted verbiage. Mike speaks regularly at events such as the European Identity Conference and the RSA Conference, is a member of several standards groups, and has presented identity-related research at Black Hat and Def Con. He is currently the Director of Strategy and Standards at SailPoint Technologies and an active IDPro member.

by Dr. Tina P. Srivastava

Authenticate 2024 took place in Carlsbad, CA in mid-October. The weather was beautiful and the attendance was high. Hosted by the FIDO Alliance, Authenticate focuses on user authentication and brings together CISOs, business leaders, product managers, and identity architects.

Keynote and session highlights:

IDPro member Chris Anderson, Product CTO at Cisco, discussed the challenges that exist today with compromised identity credentials, which contribute to over 80% of data breaches, a seemingly undented figure year over year. Chris noted there are gaps in protection, as there is no phishing-resistant MFA available for many areas, from unmanaged devices to remote access, to Linux systems, to contractors and vendors, and others.


IDPro member Tina Srivastava, PhD, Badge Co-founder and MIT Lecturer, and Bill Wright, former executive director at USAA bank and FIDO board member, presented on the importance of strong attestation for passkeys and the approaches financial institutions are taking to solve this on their passwordless journeys. Relatedly, Pedro Martinez from Thales presented why synced passkeys do not work for banking, including that “they are exported and stored in the Cloud of the user’s device OS” and “synced passkeys may not meet stringent MFA requirements from Financial regulators in some countries/regions.”

The industry has been seeking phishing-resistant technologies to address the problem of breaches caused by the compromise of identity credentials. Challenges exist with passkeys, such as with account recovery, provenance, and portability. Many approaches still maintain a password or KBA as a fallback for account recovery, leaving account takeovers (ATOs) open to social engineering attacks.

Google’s keynote by John Gronberg noted “Cross device still a challenge” and “Users are anxious about losing their devices.” In his key learnings about passkeys, he shared that “Raising the security bar comes later,” that passkeys are for “re-authentication,” that account takeover playbooks already include passkeys, and that credential managers storing passkeys are becoming targets. He noted that the new device bootstrap scenario is critical and unsolved. Amazon’s keynote by Abhinav Mehta similarly noted “Cross-Platform Challenges” and that “Passkeys don’t transfer across platforms.”

On the future of digital payments, Mastercard executives Jonathan Grossar, VP, product management, and Fred Tyler, VP, emerging digital products for North America, introduced the concept of the payment passkey, bound to a user’s device. They shared that in situations of higher security, enterprises are leaning towards device-bound passkeys. Generally, enterprises are not adopting synced passkeys as stand-alone MFA. This is the approach many companies seem to be taking, including Mastercard.

Sushma K. and Ritesh Kumar from Microsoft shared the challenges with migrating a passkey across devices. They demonstrated a once-per-device setup that requires scanning QR codes with a phone or tablet. The accessibility and usability issues of scanning QR codes were raised.

Amazon and Microsoft presented their passkey implementations, including the importance of using prompts like “Skip for now” instead of “Not now” or “No thanks.”

Partnership Announcements and Expo highlights:

Qualcomm and Daon are working toward IoT-connected cars using biometrics and passkeys with key drivers including personalization and payments. 

Cisco and Thales announced major partnerships with Badge, the award-winning privacy company enabling identity without secrets. The companies demonstrated their joint integrations with customers. Cisco demonstrated the Hardwareless MFA experience. Thales showcased Passwordless authentication without secrets.


Social highlights:

An identity-themed family feud-style game show captured the attention of attendees, resulting in laughter and applause from the Authenticate audience. A big surprise was that for the question “What trend are you most tired of in the identity and access management space?” the answer “FIDO/Passkeys” was #2. IDPro member Jeff Steadman, of the Identity at the Center podcast, quickly noted to his FIDO hosts that he did not generate these and was just the host! The Gliterati team was seen taking shots on stage at a fun-filled comedic break.

Karaoke was a hit at the Passwordless Party. Pictured below, singing their hearts out to Katy Perry’s “Firework”: Christiaan Brand from Google, IDPro member Tim Cappalli from Okta, Matt Miller from Cisco, IDPro member Christine Owen from 1Kosmos, Jamie Danker from Venable LLP, and IDPro member Tina Srivastava, PhD from Badge (left to right).


Dr. Tina P. Srivastava is an entrepreneur, author, inventor of more than 15 patents, and an MIT-trained rocket scientist. She served as Chief Engineer of electronic warfare programs at Raytheon before founding a cybersecurity startup that was acquired by a public company and global leader in network security. She is an FAA-certified pilot and is a Lecturer at MIT in Aeronautics and Astronautics.

When her identity was stolen in a data breach in 2015, Dr. Srivastava teamed up with a group of MIT cryptography PhDs to crack the code on one of the most common reasons for modern data breaches: stored credentials. Together, they solved a decades-old cryptography problem to remove PII, biometrics and other stored credentials from the authentication equation, eliminating highly vulnerable storage systems as points of attack for hackers. Badge Inc. is the award-winning privacy company enabling Identity without Secrets™.

The Identity and Access Management (IAM) industry is ready to move out of its parents’ house and be recognized as an equal player to other critical business functions. CEOs are learning more about the criticality of the IAM function. CIOs have this as a named thing in their portfolio of responsibilities. Basically, we’re recognized as a critical function for modern businesses. Go, team!

But my observation from chatting with people in conference hallways and social media channels is that professionals in our industry are still grappling with this shift in expectations. There are new skills we need to polish (or learn outright) that may not have been on our radar. Let’s talk about that. 

The Evolution of IAM: From the Shadows to Center Stage

I’m going ahead and asserting that IAM was often treated as a necessary but secondary concern. Many organizations didn’t see it as integral to their operations, leaving it to be managed by HR or, at best, as part of broader cybersecurity efforts. IAM was reactive, a tool to be deployed after other systems were in place. And no one was actually trained before they got sucked into the vortex of IAM. 

COVID was eye-opening to lots of people in the Gen X and Boomer crowd when it came to recognizing the importance of IAM. I’m willing to guess, however, that Millennials and Gen Z already had the expectation that their digital identity and the whole identity experience online was a Really Big Deal.

Modern businesses cannot function securely or efficiently without a robust IAM infrastructure. It’s not just about keeping the wrong people out; it’s about enabling the right people to access the resources they need when they need them. And SO MANY resources are now online. Little wonder that companies demand more from identity professionals than ever before.

The New Reality: IAM as a Core Business Function

Today, IAM is less an afterthought and more a core business function. The stakes are higher, the demands are greater, and the need for skilled identity professionals has never been more urgent. IAM is foundational to every aspect of a business’s operations—from securing sensitive data to ensuring compliance with regulatory requirements, to supporting seamless user experiences.

But with this shift comes a new set of expectations. Identity professionals must now act with the same level of urgency and importance as their counterparts in HR, Engineering, and Sales. This means not only managing identities but also understanding the broader business implications of their work. It means being able to prepare reports for our executives and present our projects and findings across teams. We are more visibly responsible for balancing security with usability, ensuring that IAM solutions support business goals while protecting critical assets.

Stepping Up: What It Means to Be an IAM Professional Today

Of course, there is the need to stay on top of the technological changes in our space. Establishing yourself with a baseline of IAM knowledge is a great idea (had to do a CIDPRO® promo in there). But there are a few other areas I think you’ll want to focus on, too:

  1. Strategic Thinking: Identity professionals must move beyond the technical aspects of IAM and start thinking strategically. How does IAM align with the company’s business objectives? How can IAM drive efficiency, innovation, and competitive advantage?
  2. Collaboration: IAM is no longer siloed; it’s a function that touches every part of the organization. Identity professionals must collaborate closely with other departments, from IT to HR to Legal, to ensure that IAM solutions are aligned with business needs.
  3. Leadership: As IAM becomes more critical, identity professionals need to take on leadership roles within their organizations. This means having the skills to advocate for IAM at the executive level, ensuring that it receives the attention and resources it deserves.
  4. Continuous Learning: The IAM landscape is constantly changing, with new technologies, threats, and regulations emerging all the time. Identity professionals must commit to continuous learning to stay ahead of the curve and keep their organizations secure.

The Future of IAM: Becoming Indispensable

The evolution of IAM from a part-time function in the Information Security or HR departments to a core business function is still ongoing, but in no world can I imagine that function will do anything but grow. Those who build up more than just technological awareness will be the ones who shape the future of IAM—and ensure that their organizations can thrive in a complex, digital world.

If you’ve read this far, you rock AND you’re in the right place to engage with other professionals to grow all your skills, from collaboration and leadership to hard-core tech knowledge. IAM is critical to the success and security of modern businesses. With strategic thinking, collaboration, leadership, and continuous learning, you can ensure that you’re not just keeping up with the demands of the business world but driving it forward.

Disclaimer: The views expressed in the content are solely those of the author and do not necessarily reflect the views of the IDPro organization.

Author (as an individual contributor)

Heather Flanagan, Principal at Spherical Cow Consulting, comes from a position that the Internet is led by people, powered by words, and inspired by technology. She has been involved in leadership roles with some of the most technical, volunteer-driven organizations on the Internet, including IDPro as Principal Editor, the IETF, the IAB, and the IRTF as RFC Series Editor, ICANN as Technical Writer, and REFEDS as Coordinator, just to name a few. If there is work going on to develop new Internet standards or discussions around the future of digital identity, she is interested in engaging in that work. You can learn more about her on LinkedIn or reach out to her on the IDPro Slack channel.

The U.S. National Security Agency (NSA) recently (as of July) completed its final paper on the pillars of Zero Trust. While the NSA does not directly credit it, these pillars conform to the U.S. Department of Defense’s (DoD) Zero Trust Reference Architecture, which IDPro had previously discussed. It would be beneficial to understand if there are any differences in the messaging between the NSA and DoD, what those differences are as they relate to identity, what similarities there are, and perhaps what we might glean from each organization’s representation of identity from a publicly facing perspective.

How Each Organization Wants Us To “See” Zero Trust

The NSA defines each “pillar” of zero trust as a separate set of capabilities, each with a clear purpose and demarcation. While the pillars are not entirely independent of each other (for instance, some aspects of the device pillar may intertwine with the user pillar, and so on), each pillar has clear capabilities all its own. Under this visual model, we still need each capability to do the job of zero trust, and it is implied that each pillar is required for the model to function.

Figure 1: NSA’s Zero Trust Pillars from the NSA CSI series on Zero Trust (user; device; application and workload; data; network and environment; automation and orchestration; and visibility and analytics)

The DoD defines each “pillar” as being in service of protecting data. Data is, per the DoD’s understanding, part of all other resources. In this sense data is a pillar, but because everything else utilizes data and the goal is to protect data, the boundary of what is in the data pillar and what is data in service of another pillar gets blurry – for instance, contextual information around a user that guides authorization decisions could absolutely be considered data, and it comes down to the purview of the person looking at the model to make that distinction. The model is further blurred by noting that a given capability may be the purview of multiple pillars and that some capabilities (such as continuous multifactor authentication) span all pillars.

Figure 2: DoD’s Zero Trust Pillars from the DoD ZTRA

What The NSA Reports Offer

At a high level, the NSA Cybersecurity Information Sheets (CSIs) on Zero Trust (available in totality on the NSA website) generally speak to the capabilities inside a given pillar, how the NSA defines these capabilities, and then the path from preparation into maturity with a given capability. For instance, the NSA speaks to five specific capabilities it feels are within the User pillar. Here we replicate their statements on these capabilities:

  • Identity Management: technical systems, policies, and processes that create, define, govern, and synchronize the ownership, utilization, and safeguarding of identity information to associate digital identities to an individual or logical entity.
  • Credential Management: technical systems, policies, and processes that establish and maintain a binding of an identity to an individual, physical, or logical entity, to include establishing the need for a credential, enrolling an entity, establishing and issuing the credential, and maintaining the credential throughout its life cycle.
  • Access Management: management and control of the mechanisms used to grant or deny entities access to resources, including assurances that entities are properly validated, that entities are authorized to access the resources, that resources are protected from unauthorized creation, modification, or deletion, and that authorized entities are accountable for their activity.
  • Federation: interoperability of ICAM with mission partners. This CSI only discusses the general complexity of identity federation. 
  • Governance: continuous improvement of systems and processes to assess and reduce risk associated with ICAM capabilities. This CSI addresses improvements for this category by defining maturity levels for each of the ICAM categories rather than discussing maturity of identity governance in general.

These statements are fairly broad, and rightfully so. Each capability is discussed in further detail within the CSI, specifically around what an increasingly mature organization might have in terms of capabilities. For instance, if we look at identity management, it speaks to a set of capabilities that becomes increasingly complex and centers on the mitigation of risk.

Figure 3: NSA’s Zero Trust user pillar maturity model, from preparation (FICAM baseline) to basic (defined and assessed identity attributes), intermediate (standardized and managed identity attributes), and advanced (authoritative, dynamic, risk-based attributes)

Indeed, across the User pillar the NSA makes it clear that the “advanced” capability is mitigating risk through dynamic assessment and recording of risk, pushing decisions as close to the application, and as close to real time, as possible. If we perform a similar analysis of the NSA’s work on the other zero trust pillars, we see the same pattern of building upon prior capabilities to meet a common end goal. For instance, if we look at the automation and orchestration pillar, we see a series of capabilities whose ultimate goal is to support the automation of workflows and to facilitate responses that are dynamic and risk-based. This framing across the NSA CSIs is consistent – drive decisions through each pillar that are dynamic, adjusted for risk, and as close to the application or workload as possible.

As a final note, the NSA CSIs also offer a “why all of this matters” for each separate paper. For instance, the user pillar CSI discusses several real-life scenarios that came about due to ICAM immaturity at a federal level and what the results of those failures were to emphasize each pillar’s importance.

Comparison to the DoD 

The DoD ZTRA, by comparison, speaks to the specific capabilities that should comprise each pillar and then to the use cases that drive the need for those capabilities. For instance, if we look at the “Pillars, Resources, and Capability Mapping” figure in the ZTRA, we see a substantial number of terms and capabilities put forward all at once.

Figure 4: The US DoD’s Zero Trust Reference Architecture Pillars, Resources & Capability Mapping (CV-7)

The capabilities outlined here are discussed later within the document – for instance, if we look at the user pillar and its call-out of an Enterprise Identity Service in service of the user pillar, we see the capabilities defined and key functions discussed. Perhaps due to the sheer scale of the mapping done here, we should note the above figure is not exhaustive! We can see this when we look at the use cases. For instance, if we look at the figure representing Use Case 4.14 (Dynamic, Continuous Authentication (OV-1)), we see it fleshes out the capabilities and requirements further.

Figure 5: The US DoD’s example of Dynamic, Continuous Authentication (OV-1)

Differences and Similarities Between the DoD and NSA

Between the two US Government sources on zero trust thought leadership, the Department of Defense, through the ZTRA, is more focused on spelling out the specific capabilities it feels are necessary to eliminate implicit trust across the organization. The NSA, in its CSIs, attempts to offer a path by which an organization might eliminate implicit trust. While the DoD does speak to the need for a dynamic, risk-adjusted response to each action taken in the environment, that messaging is at times lost in the enumeration of capabilities it feels are necessary to get there.

The NSA’s CSIs offer a clear advantage here in that they attempt to educate as opposed to enumerate but at the expense of perhaps missing things that may be considered important to a given pillar. For example, within access management, the NSA CSI on the user pillar discusses privileged access devices generally, which suggests some implicit trust of the workstation itself. The DoD’s model, in the “Zero Trust Authentication and Authorization Capability Taxonomy (CV-2)” figure, instead calls this “Device Hygiene” and calls out specific capabilities that comprise this capability. 

Figure 6: The DoD’s model “Zero Trust Authentication and Authorization Capability Taxonomy (CV-2)”

To the NSA’s credit, it discusses these capabilities in further detail within the Device Pillar CSI, and it specifically states that the pillars are not independent: many capabilities rely on or mesh with capabilities in other pillars. All of that said, for an individual or organizational unit looking to understand where they fit into zero trust, reading only the pillar that relates to their daily operations can feel a lot like grasping at an elephant in the dark. The DoD ZTRA, in comparison, does not have this by-pillar problem because it presents all of the work in one place.

These differences largely come down to how the information surrounding zero trust is disseminated, and to whom. The DoD ZTRA is very much for stakeholders within the DoD. The NSA CSIs are very much for the larger federal government (both defense and national security). The DoD ZTRA is very much a reference architecture, built to discuss specific use cases that the DoD considers important to solve. The NSA CSIs are built to help organizations understand the path to zero trust, a term that has unfortunately been used more by security firms to sell products than as a framework for mitigating risk through the elimination of implicit trust.

The commonalities between these works are significant. Both the DoD ZTRA and NSA CSIs on zero trust act as some of the most (if not the most) comprehensive, definitive visions of zero trust available today to the public. Both are very clear about the necessity to eliminate implicit trust and the need to radically shift how organizations think about identity and its associated capabilities. Both are excellent, no-nonsense reads that are vendor-neutral. Both promote a strategy that, as adversarial events become both more frequent and more sophisticated, makes sense to explore and adopt. It would be prudent for any organization looking to mitigate risk to understand what governments are doing to address nation-state level adversaries, and consider what steps they could take to bolster their own internal processes.

Disclaimer: The views expressed in the content are solely those of the author and do not necessarily reflect the views of the IDPro organization.

Author

Rusty Deaton has been in Identity and Access Management for over a decade. He began in technology as a technical support engineer for a Broker-Dealer and has since worked across many industries, carrying forward a passion for doing right by people. When not solving problems, he loves to tinker with electronics and read. He currently works as Federal Principal Architect for Radiant Logic.


Originally published on April 5, 2024 by Heather Flanagan (“The Evolving Landscape of Non-Human Identity”); updated on August 22, 2024, for IDPro

Post written as an individual member of IDPro.

I think 2024 is going to be the year I point to for when I truly discovered non-human identities. What a crazy, complex space! After attending this year’s IETF meetings, where some of the most engaging—and frankly, alarming—conversations centered around physical and software supply chains, I realized how much this space has changed from what I initially encountered in my digital identity career.

The Shift from Access Cards to Cloud Computing

Back when I first managed digital identity at a university, our challenges with non-human identity were limited to access cards. These cards, which were provisioned through the HR system, represented a square peg in a round hole situation—fitting non-human entities into a system designed for people was an exercise in policy gymnastics. 

But that kind of non-human identity is so very much not what we’re talking about today. The focus now is on how cloud computing has fundamentally changed the landscape.

Cloudy with a Chance of Storms

Cloud computing is often hailed as the solution to cost and efficiency challenges in IT. By offloading computing power to external providers, companies can avoid the hefty costs associated with maintaining their own data centers. More power! Less money! What could possibly go wrong? Well, funny you should ask: this shift has introduced significant identity management issues, particularly in authorization.

Processes and APIs—unlike people—don’t follow a predictable lifecycle of hiring and firing. These digital entities might not be associated with any human at all, yet they require strict authorization controls to operate within their designated scope. In a single cloud environment, this might still be manageable, but across multiple cloud environments, the complexity skyrockets.

The Complexity of Multi-Cloud Authorization

When authorization needs to occur across different environments, the challenges multiply. Code formats vary, sources of truth differ, and the sheer number of processes and APIs can dwarf the number of human employees in a company. This type of non-human identity doesn’t behave like human identities; it operates independently, often in ways that humans wouldn’t recognize as normal behavior.

Take batch processing, for example. Whether it’s training an AI model or handling bank payroll transactions, these processes run without human supervision and don’t act on behalf of a user. What might look like suspicious behavior for a person—such as simultaneous logins from multiple locations—is just routine operation for a cloud-based application.

The Role of DevOps, IT Admins, and IAM Practitioners

In cloud environments, it’s typically DevOps and IT teams who develop and manage the applications necessary for business operations. These teams often specialize in specific cloud environments, making it difficult to find generalists who understand identity management across the board.

The problem is, identity management has traditionally focused on people. IAM teams are beginning to realize there’s an authorization problem in the cloud, but solutions are still sparse. DevOps teams, aware of the identity challenges, often face difficulties managing API permissions without compromising security. However, they aren’t turning to IAM teams for help, partly because it wouldn’t occur to them, and partly because the scale of the problem would overwhelm traditional IAM approaches.

Standards to the Rescue?

This brings me back to the enlightening hallway discussions in the IETF. The reason I became aware of these authorization challenges was due to conversations around three emerging standards efforts:

SCITT: Rethinking Authorization in the Software Supply Chain

SCITT is focused on the software supply chain, which involves the components, libraries, tools, and processes used to build and publish software. Imagine a world where a standardized, computer-readable Software Bill of Materials (SBOM) could be used to determine in real-time whether a software package is safe to run, based on any security vulnerabilities associated with its components. SCITT aims to make this a reality, offering a new approach to authorization.

Right now, the working group is discussing three drafts: 

WIMSE: Defining Workload Identity

Workload identity is still a concept searching for a universal definition. I favor Microsoft’s definition: “an identity you assign to a software workload (such as an application, service, script, or container) to authenticate and access other services and resources.” With so many applications and services running across multiple clouds, there needs to be a standardized way to handle identity across these environments. WIMSE is working on just that, focusing on least-privilege access and visibility across platforms.

IETF 120 was only the second time this working group has met, and they are making fabulous progress in no small part thanks to their chairs, Justin Richer and Pieter Kasselman. (Did I mention both are IDPro members?) That group has quite a few documents in development right now. You can find the list on the working group’s website.

SPICE: Bridging Human and Non-Human Identities

Finally, SPICE is tackling the challenge of creating a credential format that’s lightweight enough for workload identities but still robust enough to handle the nuances of human identities. It’s a delicate balance, given that human identities require considerations like privacy and revocation, while workload identities need to operate at high speeds with minimal overhead.

Full disclosure on this one: when I first really started understanding the challenges of the non-human space during IETF 119, my first thought was “I am in no way experienced enough to be able to help solve this problem as an architect.” My second thought was “Stop whinging: how CAN you help solve this problem, then?” And so I volunteered to be co-chair of this newest of the three critical working groups. I may not be able to solve the problem, but that’s not always what’s needed. A team requires people who can facilitate and ask stupid questions just as much as they need people who can architect clever technical solutions.  

SPICE just adopted its first two documents, “SPICE SD-CWT” (draft-ietf-spice-sd-cwt-00) and “Use Cases for SPICE.” That latter one is not intended to ever become an RFC; it’s more to inform the requirements for all the other specifications this group will produce. 

The Future of Non-Human Identity

Non-human identity represents identity at a scale and speed we’ve never seen before. The traditional tools and standards we use for human identity simply don’t work in this new environment. As I continue to explore this space, I’m following experts like Pieter Kasselman (Microsoft) who are deeply involved in these discussions and pushing the boundaries of what’s possible.

If you want to learn more, everyone is always welcome to join the working group mailing lists or ask questions in the IDPro Slack.

Disclaimer: The views expressed in the content are solely those of the author and do not necessarily reflect the views of the IDPro organization.

Heather Flanagan, Principal at Spherical Cow Consulting, comes from a position that the Internet is led by people, powered by words, and inspired by technology. She has been involved in leadership roles with some of the most technical, volunteer-driven organizations on the Internet, including IDPro as Principal Editor, the IETF, the IAB, and the IRTF as RFC Series Editor, ICANN as Technical Writer, and REFEDS as Coordinator, just to name a few. If there is work going on to develop new Internet standards or discussions around the future of digital identity, she is interested in engaging in that work. You can learn more about her on LinkedIn or reach out to her on the IDPro Slack channel.

By Dean H. Saxe

Last month, the IDPro newsletter published an OpEd entitled I’ll pass (for now) on Passkeys. In it, the author discusses their caution in adopting passkeys at this time due to perceived interoperability and usability challenges. Out of concern that those perceptions might hinder the growth of passkeys, and thereby limit options for users and relying parties who need better credentials than passwords, I’d like to share my own perspective below.

First, let’s clarify some language around passkeys.  Passkeys are defined as FIDO discoverable credentials.  Discoverable credentials reside within the authenticator, whether it is a hardware device, TPM, or passkey provider. Passkeys are distinguished from non-discoverable FIDO credentials, which are embedded in the credentialID returned to the relying party (RP) at registration and thus stored by the RP. Yubico has a good writeup on the concepts.
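To make the distinction concrete, here is a minimal, hedged sketch of how a relying party’s front end might request a discoverable credential (a passkey) using the standard WebAuthn API. The RP identifier, user details, and the way the challenge is obtained are placeholders for illustration, not any particular vendor’s implementation.

```typescript
// Minimal sketch: registering a discoverable credential (passkey) in the browser.
// RP id/name, user details, and challenge handling are illustrative placeholders.
async function registerPasskey(challenge: ArrayBuffer, userId: ArrayBuffer) {
  const credential = await navigator.credentials.create({
    publicKey: {
      challenge,                                            // random value issued by the RP
      rp: { id: "example.com", name: "Example RP" },
      user: { id: userId, name: "alice@example.com", displayName: "Alice" },
      pubKeyCredParams: [{ type: "public-key", alg: -7 }],  // -7 = ES256
      authenticatorSelection: {
        residentKey: "required",      // request a discoverable credential, i.e., a passkey
        userVerification: "required", // e.g., biometric or PIN on the authenticator
      },
    },
  });
  if (!credential) throw new Error("Registration was cancelled or failed");
  // The RP stores the returned public key; the private key stays in the
  // authenticator (or the passkey provider's sync fabric) and is never sent.
  return credential as PublicKeyCredential;
}
```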

Passkey Options

Within the realm of passkeys, there are two additional options: device-bound passkeys and synced (synchronized) passkeys. Device-bound passkeys are inherently bound to the device – a Trusted Platform Module (TPM), Trusted Execution Environment (TEE), or Secure Element (SE). These passkeys cannot be exported or backed up; if the device is lost, reset, or broken, the credentials are lost and cannot be recovered. Synchronized passkeys (synced passkeys) are stored within a passkey provider’s synchronization (sync) fabric and may be moved between devices, shared, and (in some cases) exported. The sync fabric ensures high availability and reduces the risk of loss of the credential.

Fundamentally, all FIDO credentials – passkeys and non-discoverable credentials – have the same security model. The credentials are cryptographic key pairs that are origin-bound, enabling strong phishing resistance. Due to the use of asymmetric cryptography, there is no secret that can be stolen from the RP, unlike passwords or OTPs.  
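As a rough illustration of that model in action, the sketch below shows a passkey sign-in from the browser’s point of view, with an empty allowCredentials list so the authenticator can offer any discoverable credential it holds for this origin. The rpId and error handling are placeholders; the point is that the browser scopes the ceremony to the origin and the server only ever verifies a signature against a stored public key.

```typescript
// Minimal sketch: authenticating with a passkey (discoverable credential).
// An empty allowCredentials list means no username is needed up front.
async function signInWithPasskey(challenge: ArrayBuffer) {
  const assertion = await navigator.credentials.get({
    publicKey: {
      challenge,             // fresh random value from the RP
      rpId: "example.com",   // the browser enforces origin binding against this
      allowCredentials: [],  // let the user pick any passkey for this site
      userVerification: "required",
    },
  });
  if (!assertion) throw new Error("Authentication was cancelled or failed");
  // The RP verifies the signature over its challenge using the stored public key;
  // there is no shared secret on the server for an attacker to steal.
  return assertion as PublicKeyCredential;
}
```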

More on Synced Passkeys

The introduction of passkeys – what we now call synced passkeys – in 2022 changed our approach to phishing-resistant credentials. With synced passkeys, users can create credentials that automatically sync across the cloud within a single ecosystem (e.g., iCloud). This synchronization ensures the availability of synced passkeys even if a device is lost. In the initial deployments, however, these credentials were only available within that vendor’s ecosystem. Cross-device authentication partially solved this problem by allowing devices to be used across ecosystems for authentication without sharing the passkey. Synced passkeys alleviate the concerns of consumer and enterprise markets where managing device-bound credentials creates unacceptable user friction.

In 2023, we saw the emergence of third-party passkey providers, including traditional “password managers,” enabled on multiple platforms. Passkey providers offer alternatives to a platform’s passkey implementation, allowing cross-ecosystem syncing within the provider’s ecosystem. Today, there are 25 different passkey providers listed in the Passkey Authenticator AAGUIDs list, ranging from small companies to large companies and open-source implementations, and passkey providers are available for all major browsers and operating systems.

Security Spectrum

All credentials reside somewhere along a security spectrum; this is no different with passkeys.  

In a 2023 study by Bitwarden, only 30% of respondents use password managers (credential managers), while 84% of users reuse passwords across sites! Any increase in the use of a credential manager raises the bar for end-user security, whether the user chooses a password or a passkey. If users choose passkeys, let’s celebrate! We just reduced authentication friction for the user with a higher-quality, phishing-resistant credential, reducing risk for both the user and the relying party.   

Synced passkeys introduce new risks compared to the traditional FIDO hardware key deployment model. Synced passkeys may be leaked through credential sharing, insecure credential export, attacks against the passkey provider, or attacks on the provider’s client application. All of these attacks are possible against credential managers today, yet we broadly agree that using a credential manager effectively reduces the risks associated with passwords. 

Passkeys Support

Recently, NIST published NIST Special Publication 800-63Bsup1, which outlines the properties of passkeys that reach Authenticator Assurance Level 2 (AAL2). Passkeys with demonstrable properties that meet or exceed the requirements outlined in Section 4 may meet the high bar of AAL2 credentials. Since passkeys are commonly considered a “password replacement,” it is reasonable to treat all passkeys as at least AAL1. Yet that classification isn’t fine-grained enough to capture that, even within AAL1, some credentials are better than others. Passkeys are clearly superior to passwords, even though both are AAL1 credentials.

In practical terms, vendor lock-in for passkeys does not exist. Any service supporting passkeys should allow the registration of multiple passkeys per account. Users operating across platforms or ecosystems can register multiple passkeys in different providers or use a cross-platform passkey provider. The Cross-Device Authentication flow lets users authenticate on a client that doesn’t have a passkey by using their phone or tablet (the “authenticator”), which does.

Today, some passkey providers – KeePassXC, Proton Pass, and Bitwarden among them – allow you to export your passkeys to disk for backup as you see fit. While I don’t recommend this option, it exists.

What’s Next

The FIDO Alliance is developing a new Universal Credential Exchange protocol to allow the secure transport of passkeys and other credentials between different credential managers. I hope we’ll see public implementations of Universal Credential Exchange soon.

Passkeys are not perfect, but they continue to evolve through the hard work of members in the FIDO Alliance and W3C. Don’t let perfect be the enemy of good and overlook passkeys.  Identify use cases for passkeys in your environment as a password replacement, second factor, or even as an AAL2 multi-factor credential. Together, we can reduce the use of knowledge factors, phishing, and related fraud while delivering a better user experience.

Disclaimer: The views expressed in the content are solely those of the author and do not necessarily reflect the views of the IDPro organization.

Author

Dean H. Saxe is a Principal Engineer in the Office of the CTO of Beyond Identity, founding member of IDPro, IDPro Body of Knowledge author and reviewer, the first person to obtain the CIDPRO certification, and co-chair of the FIDO Alliance Enterprise Deployment Working Group (EDWG). Beyond the realm of Identity, Dean is passionate about traveling, cycling, camping, board games, cooking, and spending time with his wife, two kids, and two dogs.

by David Brossard

Well, it’s been another busy few months for the authorati (credits to Omri Gazitt of Aserto and Sebastian Rohr of Umbrella Associates for coining the term). The OpenID AuthZEN Working Group was busy putting the final touches on its first implementer’s draft all the while spreading the gospel at several events. Let’s rewind the tape and sum up the highlights.

May 2024 – Identiverse – AuthZEN Interop

We were fortunate enough that both Identiverse and OpenID lent us rooms during the event to finalize our initial interop: 12 different implementations took part and successfully tested their capabilities against a Rick & Morty-inspired demo app. So, what does the initial interop include? A fully spec’ed-out binary authorization API that allows clients to send an authorization request in the form of a yes/no question e.g. “Can Alice view document #123?” and get a decision back in the form of a boolean. For those familiar with XACML, this is a streamlined and simplified version. For developers and API lovers out there, you can check out the sample AuthZEN Postman library. Omri (Aserto) also maintains a website that walks readers through the interop.
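For a sense of what that yes/no exchange looks like on the wire, here is a hedged sketch of an Access Evaluation call. The endpoint path and property names follow my reading of the implementer’s draft and the interop materials, so treat them as illustrative and defer to the spec and the Postman library for the normative shapes; the PDP base URL and bearer token are placeholders.

```typescript
// Rough sketch of an AuthZEN-style access evaluation ("Can Alice view document #123?").
// Endpoint path and field names are illustrative; check the implementer's draft for specifics.
async function canAliceViewDocument(pdpBaseUrl: string, accessToken: string): Promise<boolean> {
  const response = await fetch(`${pdpBaseUrl}/access/v1/evaluation`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${accessToken}`,
    },
    body: JSON.stringify({
      subject: { type: "user", id: "alice@example.com" },
      action: { name: "can_view" },
      resource: { type: "document", id: "123" },
      context: {}, // optional environmental attributes (time of day, IP, etc.)
    }),
  });
  const { decision } = (await response.json()) as { decision: boolean };
  return decision; // true = permit, false = deny
}
```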

In addition, there were several talks worth calling out:

The latest version of the implementer’s draft can be accessed here. Readers interested in providing feedback should use the issues feature in the AuthZEN GitHub repository.

June 2024 – European Identity Conference – AuthZEN Interop (take 2)

Attendees and speakers of Identiverse had a mere 48 hours before heading out to Berlin for a second generous helping of IAM. EIC was also replete with authorization talks and AuthZEN presentations. My peer (and fellow editorial member) Alex Babeanu and I took part in a panel with fellow IAM expert Patrick Parker (EmpowerID): Unpacking Authorization Approaches: Policy as Code Versus Traditional Business Needs. You can watch the replay here.

On Thursday, Allan Foster, Adam Rusbridge, Alex Babeanu and I talked about the importance of standardization in authorization. All four of us are members of OpenID AuthZEN and both 3Edges and Axiomatics are part of the 12 conformant implementations.

On the last day of the conference, Gert Drapers led the second AuthZEN interop: the focus was on use cases brought by individuals from the manufacturing and banking sectors.

Allan Foster and I also sat down with Martin Kuppinger to talk about Authorization with AuthZEN – The Future of Digital Identity. You can watch the full replay here.

July 2024 – AuthZEN meets OAuth at IETF

OAuth focuses on “access delegation” (with authentication layered on top via OpenID Connect). Authorization (ABAC/ReBAC or other models) focuses on access control. Can both models be used together? That’s the question Eve Maler, Justin Richer, Allan Foster, and I explored at IIW last October (notes). This led to a first attempt in the form of the AuthZEN Request/Response Profile for OAuth 2.0 Rich Authorization Requests, which was proposed during IETF 120 in Vancouver. The profile suggests leveraging the AuthZEN request format to send a RAR request from a client to the authorization server. The hope is that this will increase interoperability and “integrability” between OAuth-based systems and “policy decision points”. For more information, check out the presentation slides or join OpenID’s Slack for a live discussion.
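To make the idea more tangible, here is a purely illustrative sketch of how an AuthZEN-shaped request might be carried inside a Rich Authorization Request (RFC 9396). The type URI, field placement, and parameter encoding below are my assumptions for illustration only; the profile draft defines the actual structure.

```typescript
// Illustrative only: an AuthZEN-shaped entry inside RAR's authorization_details.
// The "type" value and field layout are hypothetical, not taken from the draft.
const authorizationDetails = [
  {
    type: "urn:example:authzen:evaluation",          // hypothetical RAR type identifier
    subject: { type: "user", id: "alice@example.com" },
    action: { name: "can_view" },
    resource: { type: "document", id: "123" },
  },
];

// The client would include this as the JSON-encoded authorization_details
// parameter of its authorization request, so the authorization server can hand
// the same structure to a policy decision point.
const params = new URLSearchParams({
  response_type: "code",
  client_id: "my-client",                            // placeholder client
  authorization_details: JSON.stringify(authorizationDetails),
});
console.log(`https://as.example.com/authorize?${params.toString()}`);
```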

What’s next for AuthZEN?

The WG is already actively working on the next iteration of the standard. Members have reached consensus on a batch authorization request API (sometimes called boxcarred requests). We are planning an interop at AuthenticateCon in October and at IIW a few weeks later. If you would like to join the WG, especially as a customer (non-authorization-vendor) organization, we’d love to hear about your use cases. Join us on OpenID’s website.

Disclaimer: The views expressed in the content are solely those of the author and do not necessarily reflect the views of the IDPro organization.

Author

In his role as CTO, David drives the technology vision and strategy for Axiomatics based on both identity and access management (IAM) market trends as well as customer feedback. He also leads the company’s strategy for standards and technology integrations in both the IAM and broader cybersecurity industries. David is a founding member of IDPro, a co-author of the OASIS XACML standard, and an expert on standards-based authorization as part of an overall IAM implementation. Most recently, David led the design and development of Salesforce’s identity offering, including customer identity and access management (CIAM) solutions.

A new “signals plane” is needed to achieve zero-standing access

By Atul Tulshibagwale and Sean O’Dell

Security online is no longer a periodic snapshot of users and authorizations. We need an efficient architecture to respond to events and updates in real-time.

Background

Recently, Andi Hindle (of Identiverse conference fame) and Ian Glazer (former SVP, Identity Product Management at Salesforce, and now President of Weave Identity) both published blog posts about how the field of identity and access management is shifting to a more continuous model.

Earlier work around the Identity Fabric popularized by Gartner defines a framework for how different identity systems could collaborate to provide more complete security coverage for all constituent users and all protected systems. Among its “Must-Have” characteristics are things like “event-based integration connectivity” and “adaptive continuous, risk-aware and resilient security”. These also point to a continuous and event-driven methodology to ensure identity security…enter the concept of continuous security.

A Paradigm Shift

All this got us thinking: we’re seeing a paradigm shift in how we think about security. It’s a paradigm where there is no single point of control—each system needs to enforce its own access security—but you still need to define centralized policies and management.

Lately, the real security action has shifted to identity and the behavior of users. The specific concern here is whether that behavior represents legitimate usage or malicious behavior, either because an attacker has assumed a user’s identity or the user themself is deviating from what is or has been perceived as “normal”.

What does this mean when all your services run independently in a zero-trust architecture? There is no central point of control, other than the login-time participation of the identity provider. As Ian said in his blog post, the “event-time” dimension comes in after you log in, and this is what leads to the “Continuous Identity” state that Andi mentions in his blog.

What Is the Continuous Security Paradigm?

So let’s consider a new model more suited to today’s dynamic security requirements: the Continuous Security Paradigm. As we move forward in identity, we are emulating behavioral characteristics from the real world in the digital realm. As an example, say you are having work done on your house and have contracted with a company. The company has notified you in advance that someone named Erik will arrive on a certain date, at a certain time, to do the work. When Erik shows up at the expected date and time, do you go back and verify whether Erik is employed by the company doing the work? You probably don’t. Instead, you make a decision based on context and risk. You know to expect a person named Erik to be at your home at a certain time, from a certain company, to perform a task. This is exactly what zero-standing access is in the digital realm. Would you always grant Erik access to your home just because of this one task that they had to perform? No, that is too risky. These same principles are why the shift to a Continuous Security Paradigm is not only needed but required.

The Continuous Security Paradigm is a system-centric view of security. Your particular application or system is one node in a tapestry of loosely coupled nodes. In addition to the usual data plane and control plane, this paradigm introduces a new “Signals Plane” of asynchronous communication, which enables event-time processing. Runtime decisions are, as expected, made in the Data Plane, but they are based on the context derived from the Signals Plane. The Control Plane defines the trust topology of the Signals Plane.

The Control Plane

Rather than viewing the network as a uniform “fabric”, the Continuous Security Paradigm models it as a loosely coupled and more diverse tapestry that captures the differences in how much each node trusts another, and who owns individual nodes. A node in such a network is not about physical or virtual connectivity as represented by Virtual Private Clouds (VPCs) or firewalls, but a logical definition of what information is asynchronously communicated (either received or transmitted) between which nodes in the tapestry. This may include nodes that you “own”, e.g., VPCs, SaaS tenants, or IaaS tenants, but it may also contain nodes that are trusted sources of public information (e.g., public securities data or dark web credential monitoring data).

The control plane is used to specify this trust topology: specifically, how each node is connected to another with respect to trust, entities, and attributes. For example:

  1. A CRM node trusts the HR node as the authoritative source for employee entities and their attributes such as their cost center.
  2. An application node trusts the HR node as an authoritative source for employee entities and trusts the CRM node as the authoritative source for customer entities and as a non-authoritative source for employee entities, which it correlates with the authoritative employee entity source, the HR node.
  3. The CRM node trusts the application node to receive customer entities and certain attributes of customer entities from it.
  4. The HR node trusts the CRM node and the application node to receive employee entities and specific attributes of those entities.

The control plane also specifies the frequency of ingestion or transmission of specific entities to and from specific nodes, as well as the policies to be applied when using information received from other nodes.
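As a thought experiment, the trust topology and cadence described above could be captured as declarative configuration. The shape below is invented for illustration (the article does not prescribe a format); it loosely encodes a few of the example relationships from the list above.

```typescript
// Invented, minimal shape for a control-plane trust topology; illustrative only.
type EntityType = "employee" | "customer";

interface TrustEdge {
  from: string;                                // node that transmits the signals
  to: string;                                  // node that ingests them
  entities: EntityType[];                      // which entities may flow on this edge
  authoritative: boolean;                      // is `from` the authoritative source?
  attributes?: string[];                       // optionally limit to specific attributes
  cadence?: "realtime" | "hourly" | "daily";   // ingestion/transmission frequency
}

// Loosely encoding some of the relationships listed above:
const trustTopology: TrustEdge[] = [
  { from: "hr", to: "crm", entities: ["employee"], authoritative: true, attributes: ["cost_center"], cadence: "realtime" },
  { from: "hr", to: "app", entities: ["employee"], authoritative: true, cadence: "realtime" },
  { from: "crm", to: "app", entities: ["customer"], authoritative: true, cadence: "hourly" },
  { from: "crm", to: "app", entities: ["employee"], authoritative: false, cadence: "hourly" }, // correlated with HR
];
```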

The Signals Plane

The signals plane enables each node to asynchronously collect the entities and attributes that are important to its own data-plane decisions. It also enables each node to communicate any changes to its entities and attributes that may be relevant for other nodes it trusts. The asynchronous ingestion and transmission of trusted data enables each node to decouple its runtime decisions from the availability and latency characteristics of other nodes in the network. Open standards such as the OpenID Shared Signals Framework (SSF) are designed for conveying such information asynchronously.
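For a flavor of what travels on the signals plane, here is a hedged illustration of the payload of a Security Event Token (SET) such as SSF and CAEP convey. Claim names and subject formats have varied across draft versions, so treat this as a sketch rather than a normative example; in practice the payload is signed and delivered as a JWT, not as bare JSON.

```typescript
// Hedged illustration of a CAEP "session revoked" event payload carried over SSF.
// Field names are approximate; consult the current SSF/CAEP drafts for specifics.
const sessionRevokedSet = {
  iss: "https://idp.example.com",   // transmitting node
  aud: "https://app.example.com",   // receiving node
  iat: 1724300000,
  jti: "9b3f5c2e-unique-set-id",
  events: {
    "https://schemas.openid.net/secevent/caep/event-type/session-revoked": {
      subject: { format: "email", email: "alice@example.com" },
      event_timestamp: 1724300000,
      reason_admin: { en: "Credential reported compromised" },
    },
  },
};
```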

The Data Plane

The actual access decisions in response to API calls or user requests are made in the data plane. The signals plane enables each node to ensure that it, and the other nodes in its network, do not make decisions based on outdated information. Yet, because the information is conveyed asynchronously, the decisions each node makes are based on “event-time”, or “continuously updated”, information – without sacrificing efficiency. When a data plane event occurs, e.g., a user attempts to access specific data in a node, the policies specified by the control plane govern how the asynchronously ingested data and the runtime data from the data plane are combined to make a decision.
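The sketch below (all names invented) shows one way a node’s data-plane decision might combine its asynchronously ingested signals with the runtime request, under whatever freshness and risk policies the control plane has set.

```typescript
// Invented sketch: combining cached signals-plane context with a runtime request.
interface RequestContext { userId: string; resource: string; action: string; ip: string; }
interface CachedSignals  { riskLevel: "low" | "medium" | "high"; sessionRevoked: boolean; lastUpdated: number; }

function decide(req: RequestContext, signals: CachedSignals): "permit" | "deny" | "step-up" {
  // A control-plane policy might cap how stale ingested signals may be.
  const staleMs = Date.now() - signals.lastUpdated;
  if (staleMs > 15 * 60 * 1000) return "step-up";  // signals too old: re-verify before granting
  if (signals.sessionRevoked) return "deny";       // event-time signal overrides the inline request
  if (signals.riskLevel === "high") return "deny";
  if (signals.riskLevel === "medium") return "step-up";
  return "permit";                                 // low risk: decide inline, no blocking calls to other nodes
}
```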

Computation in CSP

To ensure security for an application, e.g., one node in your organization’s network, you need to:

  1. Obtain trusted signals about all interesting interactions/events (identities, devices, and environmental factors such as IP location and geo-location). Some signals come with the user request, but most may be obtained asynchronously from other nodes via the signals plane.
  2. Make your own policy decisions about granting or denying access: your application or system needs its own rules to determine access and to understand user behavior within your own application.
  3. Communicate changes to other nodes: The control plane may obligate you to communicate any changes to certain entities to other trusted nodes, at a certain cadence. Doing this enables all nodes to make decisions based on event-time data.

Why introduce a new paradigm?

We’re seeing escalating tensions on a couple of axes: between having to constantly re-evaluate access decisions, the desired performance, and the computational impact of doing so; and between the independence and resilience of each system and the enforcement of common policies. The Continuous Security Paradigm enables independent, decoupled execution that still leverages the latest data, and it enables real-time decisions without imposing huge availability and performance requirements. It also enables independent services to be good citizens of a larger network that can both help other services make good decisions and be part of a common trust topology.

Managing a Continuous Security Paradigm-based Network

Even though each node operates independently in terms of the decisions it makes, your organization needs to centrally manage the trust topology between the various nodes and the policies that each application or system (i.e., each node) must comply with. A centralized management system can use the control plane to set these rules. This is different from a central point of control for each access decision. At the same time, each node is free to dynamically modulate the trust it places in the systems it receives data from, based on the quality of the signals it receives from them. Diagrammatically, this can be represented as follows:

In the diagram above, all types of systems, including SaaS apps, cloud infrastructure, custom apps, and APIs, follow the same continuous security paradigm. 

  • All of them consume signals from other systems, make access decisions for themselves, and selectively convey signals to other systems. This is the new “signals plane” of asynchronous communication, which is disjoint from the data plane and the control plane.
  • An organization would, of course, need to manage trust between various systems (internal or external) and would need to set org-wide contextual rules. That is provided by the control plane described by the long rectangle at the top of the diagram.
  • Finally, the systems need to respond to inline requests from the client, regardless of whether the client is a robotic principal or an end-user. Access decisions need to be made for each one of these requests. This is the data plane.

Looking Ahead: The Continuous Security Paradigm in Practice

Where do we begin?

The CSP includes components that may be within your control (such as custom apps) and some for which you will need support from others (e.g., SaaS apps). However, keeping this paradigm in mind as you build out your strategy is key. You might find solutions that help you realize parts of this picture, and you can influence others to move toward supporting this architecture. For instance, building out a signals plane by adopting the OpenID Shared Signals Framework can help establish the context for your existing components – whether they are SaaS apps or custom apps.

Use Cases

The big picture here offers a way for cybersecurity and identity practitioners to think about securing systems and services in a way that supports real-time considerations. In our next blog post, we’ll break this down and discuss specific use cases where continuous security paradigms can be used today and the standards that already support this model.

Disclaimer: The views expressed in the content are solely those of the author and do not necessarily reflect the views of the IDPro organization.

Authors

Atul Tulshibagwale is the CTO of SGNL, a company backed by Microsoft and Cisco and founded by ex-Googlers that helps enterprises mitigate damage from identity breaches. Named in the Okta “Identity 25”, Atul is a federated identity pioneer and the inventor of the Continuous Access Evaluation Protocol (CAEP). He was previously at Google, where his seminal blog post kicked-off the industry-wide movement that culminated in the OpenID Foundation’s Shared Signals working group, which he co-chairs. Atul is also a Corporate Board Member of the OpenID Foundation. His leadership in developing and promoting SSF and CAEP, the critical zero-trust standards, has been influential in their widespread adoption. Apple, Okta, Cisco, and others have announced support for these standards. Previously, Atul was a co-founder and the CEO of Trustgenix, a federated identity pioneer that was acquired by HP. Trustgenix contributed to the development of federated identity standards such as SAML 2.0 and the Liberty Alliance Framework.

Sean O’Dell is a Senior Staff Security Engineer spanning both Consumer and Workforce IAM at The Walt Disney Company. He is a co-chair of the Shared Signals Working Group in the OpenID Foundation and has been on podcasts covering identity security and written about the subject…with more coming soon. He is a technical leader and trusted technical advisor to executives at The Walt Disney Company where he has been instrumental in both Workforce and Consumer IAM strategy over the past 10 years covering security, product, engineering, implementation, and architecture while also acting as a principal advisor in the same capacity for key mergers and acquisitions helping to shape overall company decisions and direction. His vision, leadership, and implementation expertise are helping to promote and drive the adoption of both SSF and CAEP overall in the industry. His current focus around identity security is zero standing privilege, next-gen authorization, ITDR, shared signals, CAEP, behavioral analysis, data science, identity data…all of the continuous aspects of identity security.

Disclaimer: The views expressed in the content below are solely those of the author and do not necessarily reflect the views of the IDPro organization.

Attending both the RSA Conference and Identiverse this year has been an enlightening experience. I’ve witnessed firsthand how identity security is bridging the gap between IAM and security, bringing these traditionally separate domains closer together. At the RSA Conference, the emphasis on AI and its integration with IAM solutions was palpable. This cutting-edge approach showcases the future direction of our industry, where advanced technologies are becoming integral to identity management. The discussions around AI were not just theoretical but practical, highlighting how these innovations can address current challenges and anticipate future threats.

Differing Perspectives

However, I noticed a clear distinction in how identity security is perceived by the IAM community compared to the security community. IAM professionals are often engrossed in the daily operational challenges and actionable solutions. It is their day-to-day. Their priorities are grounded in the here and now, putting out fires and ensuring seamless, secure operations. On the other hand, the security community is captivated by what’s coming next, and identity is becoming a mainstay in security. They are driven by a vision of the future, exploring how AI and other advanced technologies can transform the landscape of identity security. This forward-looking perspective was evident in the numerous AI-focused discussions at RSA.

The unifying theme across both conferences was the opportunity presented by a data-driven approach to identity security. Regardless of their immediate focus, everyone is beginning to see the value in leveraging data to enhance identity management and security. This approach promises not only to optimize current operations but also to provide predictive insights that can preemptively address potential vulnerabilities. As we move forward, it’s clear that data will be the cornerstone of our strategies, helping us to build more resilient, adaptive, and secure identity management systems. This convergence of IAM and security, underpinned by data-driven insights, marks a pivotal moment in our industry, and I’m excited to be at the forefront of this transformation.

Will Lin has started and incubated multiple successful organizations, including a cybersecurity-centric Non-Profit, a Venture Capital firm, a book “The VC Field Guide,” and VC-funded startups. He has contributed to the security & IAM ecosystem with senior & board roles at companies such as Attivo Networks (SentinelOne), Bishop Fox, Concourse Labs (Fortinet), Forgepoint Capital, LoginRadius, Remediant (Netwrix), Security Tinkerers, Sphere Technology Solutions, Symmetry Systems, and Uptycs. His current full-time venture, AKA Identity, is building the data and intelligence layer to enable, secure, and govern identity and access management in the workplace.

Article updated 10 July 2024

Disclaimer: The views expressed in the content below are solely those of the author and do not necessarily reflect the views of the IDPro organization.

Increased security for identity is always welcome, especially when it leverages the latest standards and is easy to use. However, I’ll be waiting for the dust to settle a little more before committing to passkeys. For clarity, I am referring to passwordless synced (or multi-device) passkeys [1] rather than physical FIDO2 authenticators (now named device-bound passkeys). At the moment, I use the physical security key option as a second factor, both WebAuthn and U2F, where I need that level of security and where they’re supported, along with a variety of MFA (software and hardware) options for the rest.

Reasons to be Cautious

Anyone working in identity will have heard the excitement around passkeys, and the benefits certainly warrant jumping straight in:

  1. they just work (strong crypto-based authentication with no passwords)
  2. they re-use existing phones/tablets/computers
  3. they are phishing resistant.[2]

Passkeys have been around since 2022.[3] However, the interoperability of their implementations has been, and still is, something to carefully consider. Depending on your preferred browser(s) and device(s), you may run into usability challenges or, worse, be unable to use the passkeys that you have already created on other devices [4]. From passwords over half a century or more ago through to modern authentication standards, authenticating across different hardware and software, old and new, has always been a basic expectation. It has also been a major factor in determining their success. Fortunately, this is changing, and there are plans to support the migration of passkeys between ecosystems [5].

Security Spectrum

FIDO2 authentication provides a truly strong authentication solution; however, purchasing the (physical) keys, managing their recovery, and taking responsibility for them seems to have been a reason for the lack of mass adoption. With passkeys, a provider will do all of this for you, with the trade-off being that you are no longer the only one in possession of the keys [6]. With truly powerful security comes at least some level of ownership, effort, and responsibility. The hack of LastPass [7] demonstrates the risks of trusting a provider to manage your secrets, even when they are end-to-end encrypted.

It’s About Interoperability

Passkeys did seem like a positive middle ground initially; however, the confusing marketing, implementation limitations, occasional lack of transparency [4], and interoperability gaps have so far undermined their potential to be a truly great improvement on current password and MFA options. For me, I’ll be using physical FIDO2 authenticators for vital services, my password manager for passwords, and MFA for most others, and only once passkeys are truly cross-platform will I slowly replace passwords with them. Sometimes with security, it’s better to avoid the hype and the appealing new features (great convenience) to make sure you have solid foundations.

[1] https://corbado.com/blog/device-bound-synced-passkeys
[2] https://docs.yubico.com/hardware/yubikey-guidance/best-practices/all-faq-passkeys.html
[3] https://fidoalliance.org/white-paper-multi-device-fido-credentials/
[4] https://proton.me/blog/big-tech-passkey
[5] https://fidoalliance.org/specifications-overview/
[6] https://www.yubico.com/blog/new-nist-guidance-on-passkeys-key-takeaways-for-enterprises/
[7] https://arstechnica.com/information-technology/2022/12/lastpass-says-hackers-have-obtained-vault-data-and-a-wealth-of-customer-info/

Author

Jac Fowles is a security systems specialist with experience in deploying, securing and integrating core security services in large environments. Certified security, Linux and DevOps and cloud engineer – CISSP, CEH, RHCE, Azure, and AWS.

Disclaimer: The views expressed in the content below are solely those of the author and do not necessarily reflect the views of the IDPro organization.

On May 28, 2024, the first Innovation Workshop was held at Identiverse. In a packed room, participants held a lively discussion focused on overcoming obstacles to empower innovators to advance identity and security.

Dr. Tina P. Srivastava, Co-Founder of Badge Inc., and Bob Blakley, Co-Founder of Mimic and former Global Director of Information Security Innovation at Citi, two identity leaders, led the workshop. Pam Dingle, Director of Identity Standards at Microsoft, acted as the facilitator.

Why a Workshop?

Identity is becoming a cornerstone of modern IT infrastructure, serving as a new perimeter. Therefore, the need for cutting-edge innovation is not just a nice-to-have but a need-to-have as identity solutions are safeguarding access to the most important networks, information, systems, and our personal data. Teams tackling identity and security, whether in organizations or companies, need to innovate and draw from a diversity of ideas and include new people in identity projects, and yet there are challenges in bringing in and empowering these innovators. Innovating in the context of intellectual property constraints, national security restrictions, and tight budgets is challenging. Yet it is imperative.

Dr. Srivastava shared research from her book Innovating in a Secret World: The Future of National Security and Global Leadership, including examples from the pharmaceutical industry, an industry cloaked in secrecy given the high costs of R&D and need for intellectual property protection.

Identity Innovation is Essential

Identity is essential to access healthcare services, financial systems, and the economy, so innovation must account for the whole spectrum of user experiences. Pam Dingle brought the audience into the conversation. Drawing on their respective experiences from national security defense and financial institutions, two of the most high-security fields, Dr. Srivastava and Dr. Blakley shared practical guidance on navigating these hurdles. During this interactive workshop, Tina and Bob led a discussion about the importance of bringing in new voices, the challenges we’re facing in today’s hybrid work environment, and how to overcome them. Tina shared specific use cases from her research at MIT about how certain classes of innovators are unintentionally excluded, and Bob discussed challenges and strategies to overcome them.

Dr. Blakley shared some key lessons learned as an Innovation leader at Citi. For example, he noted that the system has to want to change and highlighted the importance of working in a diverse team. Bob discussed that innovative solutions arise from conversations with others, not solo meditation.

Bob noted the importance of the conversation about innovation given that the environment is changing as fast as any in his decades of working in identity and security. Discussions from the Innovation Workshop carried into the break and even continued at the IDPro booth later on during Identiverse. Pictured below, Sarah Cecchetti, Mat Hamlin, and Dr. Tina Srivastava share information about IDPro, including about CIDPRO certification, with Identiverse attendees.

Author

Dr. Tina P. Srivastava is an entrepreneur, author, inventor of more than 15 patents, and an MIT-trained rocket scientist. She served as Chief Engineer of electronic warfare programs at Raytheon before founding a cybersecurity startup that was acquired by a public company and global leader in network security. She is an FAA-certified pilot and is a Lecturer at MIT in Aeronautics and Astronautics.

When her identity was stolen in a data breach in 2015, Dr. Srivastava teamed up with a group of MIT cryptography PhDs to crack the code on one of the most common reasons for modern data breaches: stored credentials. Together, they solved a decades-old cryptography problem to remove PII, biometrics and other stored credentials from the authentication equation, eliminating highly vulnerable storage systems as points of attack for hackers. Badge Inc. is the award-winning privacy company enabling Identity without Secrets™.

Technological advancements, shifting market dynamics, and changing regulatory environments are creating an incredibly dynamic environment for digital identity practitioners. IDPro’s seventh annual Skills, Programs & Diversity Survey, conducted by IDPro and sponsored by Acsense, Hindle Consulting, and Weave Identity, provides critical insights by identity practitioners and for identity practitioners. The survey plays a powerful role in informing both individual practitioners and organizations about the current trends, challenges, and opportunities in identity management.

Survey Highlights

Focus on Authorization

A standout finding from this year’s survey is the significant attention towards authorization. Nearly 28% of respondents identified it as a top priority within their organizations, and 22% expressed a keen interest in delving deeper into this area. This focus is likely stimulated by notable advancements in authorization technologies, such as AWS’s Cedar and the OpenID Foundation’s AuthZEN Working Group. The demand for dynamic, near real-time access management is pushing the continuous evolution of standards and technologies in authorization, reflecting its critical role in today’s security landscape.

The Broader Business Impact

Despite the technological focus, digital identity management continues to be predominantly viewed through a cybersecurity lens. However, the survey highlights a persistent gap between the technological interest of practitioners and the organizational prioritization, especially in areas like verifiable credentials and decentralized identities. This gap indicates a significant opportunity for organizations to align their identity management strategies more closely with business objectives such as customer acquisition and retention.

Industry Demographics and Future Challenges

The survey also sheds light on the demographic composition of the identity management field. Most respondents have over six years of experience, pointing to a matured industry. However, this maturity brings challenges; there is a noticeable trend towards hiring senior-level professionals, which could lead to a skills and experience gap as current leaders begin to retire. This demographic trend emphasizes the need for strategic changes in hiring and training to prepare the next generation of IAM professionals.

Download

The IDPro® Skills, Programs & Diversity Survey serves as an essential tool for the digital identity management industry, offering valuable insights that can help shape the future of identity strategies and technologies. As we look forward to the continued evolution of this field, it is clear that both challenges and opportunities lie ahead.

Want to see the full report? You can download it here!

Disclaimer: The views expressed in the content below are solely those of the author and do not necessarily reflect the views of the IDPro organization.

True confession: Many decades ago, my 17-year-old self created a synthetic identity with ready-made biometric authentication…otherwise known as a fake ID. Living in Hawaii, I needed a faraway place to call my fake home, and I picked Yonkers, New York. You could call me a real-life reverse McLovin.

Get yours today! Credit: Amazon

Fast forward to now. Identification is, of course, much more easily checked – but it’s also much more easily faked. In large part that’s thanks to generative AI, which dramatically increases the scale and automation of attacks.

CrowdStrike’s Global Threat Report documents how “identity threats exploded in 2023” with a boost from genAI, and that’s not even the half of it. Read on to understand where the threat is coming from and what to do about it.

Biometrics, We Hardly Knew Ye

Many organizations have been making upgrades to strengthen their authentication capabilities, often through the application of biometrics.

But is biometric authentication the factor you think it is?

Authentication Factors: Thou Shalt Count to Three

Classically, there are three factors used to verify that a claimant is who they say they are; using them in combination contributes to authentication strength. (Other contextual cues in their infinite variety form a phantom fourth factor. Five is right out!)

  • Something you know, like a password, is what NIST’s Digital Identity Guidelines call a “memorized secret”.
  • Something you have, like your association with a particular mobile device, is what NIST refers to as an “out-of-band authenticator”.
  • Something you are, like your particular face or fingerprint, is what NIST calls a “biometric characteristic”.

Two of my financial services providers recently rolled out voice authentication as a new “strong” method, and they tout not just its ease of use but also its security. We are assured, for example, that “Your voiceprint is stored securely as a mathematical equation, and only works for verification with our system.”

Now why would there be a Spinal Tap reference here? Credit: MakeAGIF

Is that really how things work?

Biometrics Are Different

The unfortunate fact is that biometrics make a better username than a password. That is, they’re pretty good at distinguishing “you” from “other people”, but limited in their ability to confirm that you are you. As NIST says:

For a variety of reasons, this document supports only limited use of biometrics for authentication.

Their reasons are important to understand:

  • The nature of biometric comparison is to be probabilistic – based on statistical likelihood – rather than deterministic, such as when a system compares a presented vs. pre-registered password or device. It gets a letter grade vs. a pass/fail.
  • The false match rate (FMR) and false non-match rate (FNMR) of a biometric method are critical stats, but a low FMR alone doesn’t translate into high authentication confidence (see the short illustration after this list). This rate doesn’t even account for spoofing attacks.
  • Unlike with a password or device, there are few circumstances where you can properly revoke a biometric. After all, you can’t just do a “fingerprint reset” on yourself.
  • The biggest distinction is that biometric characteristics aren’t secrets! Many biometrics make terrible secrets because they’re part of our exhaust data, both online and IRL. It’s a losing battle to protect something like a unique face from being seen.
  • As a result, we must rely on liveness detection to ensure a binding between the presented biometric and the person. And that means we must trust a complex chain of detection sensors and processors.
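As a back-of-the-envelope illustration of the FMR point above (with entirely made-up numbers), consider what a seemingly low false match rate means once an attacker can automate attempts; this ignores spoofing and presentation attacks altogether.

```typescript
// Made-up numbers: why a low FMR alone doesn't equal high authentication confidence.
const fmr = 1 / 100_000;  // hypothetical false match rate per comparison
const attempts = 10_000;  // attempts an unthrottled, automated attacker might make

// Probability of at least one false match across independent attempts:
const pAtLeastOneMatch = 1 - Math.pow(1 - fmr, attempts);
console.log(pAtLeastOneMatch.toFixed(3)); // ≈ 0.095, i.e., roughly a 1-in-10 chance
```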

The Last Decade Saw a Legit Biometrics Revolution

This isn’t to say that biometrics can’t form a crucial part of secure ecosystems.

Touch ID’s launch in 2013 started something big. Its immediate impact was to make the iPhone 5s easier to unlock. Fingerprint readers had previously required cumbersome end-user processes and erred on the side of extra-low FMRs. For the price of a few more false positives, and with a painless enrollment process built into the experience, Touch ID – and in 2017, Face ID – led to a remarkable cascade of use cases.

  • Phone usage increased. In 2013, the year of Touch ID, people were checking their phones 110 times a day. By 2022, that number was up to 352 times a day, or once every three minutes on average. (You know you’ve done it!)
  • Phone security increased. In 2013, 53% of phones were kept locked. By 2021, locked phones were ubiquitous at nearly 99%.
  • Phones became de facto data wallets. The new mobile environment, involving a secure element, made on-device storage of personal data – including items beyond biometric templates for face and fingerprint matches – attractive.
  • Phones became wallets, period. The availability of this data, and the ability to bind it to the end-user with biometrics, unleashed a flood of payment scenarios and the digital wallet era.

Mobile OS-level biometric unlocking isn’t without complications, such as reliance on a memorized secret (PIN) for confirmation and recovery. But the sheer weight of improved security and value has been impressive.

AI Cut Short the Server-Side Biometrics Revolution

Unfortunately, we’re in a dramatic new threat landscape.

I’m focusing on the “server-side” revolution because wholly digital identity scenarios – foreign travel pre-authorization, remote employee onboarding, direct-to-consumer eCommerce, gaming – have become inescapable. And they rely more and more on server-side biometrics for identity verification and authentication. These scenarios are now at extra risk when they use these biometric checks without a trusted device as another factor.

The generative AI revolution took mere weeks to explode from a bubbling low-level concern into a bona fide threat. November 2022 saw the launch of ChatGPT, following DALL-E 2 and Whisper earlier that year. Just two months later, organizations experienced disturbingly widespread AI-boosted identity fraud: in a survey concluding in January 2023, Regula Forensics found that 29% of organizations were being targeted by video deepfakes and 37% by voice deepfakes.

By December 2023, iProov’s Threat Intelligence Report revealed that face swaps increased 704% from the first half of the year to the second. They warned that “Face swaps are now firmly established as the deepfake of choice among persistent threat actors,” and that nearly half of the threat groups they’re tracking were newly created.

The report shares an up-to-date example of face swap technology (see the second listed video) that provokes amazement among viewers but is becoming routine among threat actors.

Creepy. Credit: iProov

AI technology can be used cleverly for good even in the deepfake realm – check out HYPR’s Bojan Simic “speaking” in Japanese — but even the creators of the base technologies are spooking themselves about the consequences. OpenAI has stated its Voice Engine should be held back from general release because it’s so good that it’s certain to be misused.

A Brief Tour Through the Consequences

The monetary costs of this new landscape are shocking – but we should also recognize the societal costs of these very personal forms of attack.

Classic Cyber

The most obvious consequence is classic cyber risk and fraud. Most of the attacks are intended for financial gain. A dramatic example came in early February 2024 when a Hong Kong finance professional experienced a unique form of spear phishing: a faked-up request to transfer HK$200 million, supported by an entire cast of senior exec characters deepfaked in the context of video conference calls.

Political

Nonmonetary but serious motivations include disrupting the political landscape. A voice deepfake, not the real President Joe Biden, was behind a series of robocalls in January 2024 urging New Hampshirites not to vote in their state’s primary election. Granite Staters who remembered a similar controversy from mid-2022 may have been extra confused because the earlier instance was a false alarm.

Cultural

The constant uncertainty about what’s real affects not just famous people but all individuals.

Every 100% digital interaction without a definitive authentication method now has question marks around it. We could call this the Voight-Kampff challenge, after the test in the Blade Runner movie (and source novel) to root out non-humans that relied on micro-expressions, bodily functions, and expressions of empathy. To the question “How can people not tell this is AI?” posted in March 2024, one Redditor said:

“Seeing AI pictures, reading AI generated text, I’m starting to feel like Rick Deckard. I’m no longer able to trust anything I see or even ‘people’ I talk to through chat OR voice. I’m giving everyone and everything around me the Turing Test without even realizing it.”

Another fake ID you can go and buy. Credit: eBay

Check out the TRUE project, which is taking this societal risk very seriously indeed.

Doing Battle Against These Risks

What are our best options to mitigate and prevent these risks, when AI is powering high-scale attacks and elaborate spear-phishing episodes alike?

Some options count as obvious CISOcraft (or are perhaps emerging as CIDOcraft). A Zero-Trust mindset and a commitment to layering signals and decision-making actions still go a long way. A LastPass employee recently batted away a social engineering attack that had a deepfake element but smelled wrong for traditional reasons. And successful attacks like these stolen ChatGPT credentials are AI-adjacent but not necessarily a consequence of the AI era.

Here are additional recommendations for battling these dramatic new threats.

Respect the New Arms Race We’re In

Now is the time to develop a deep appreciation for the ways biometrics are different and the ways AI is rapidly creating unknowns. Stay attuned to NIST’s Digital Identity Guidelines in these areas, and stay alert for the forthcoming final fourth revision. As well, check out the work of the Kantara Deepfakes group.

The Liveness.com site reminds us that two-dimensional liveness checking isn’t all it’s cracked up to be, yet. So, increase your awareness of the state of the art in liveness detection and participate in the security community surrounding it. If you’ve got a great solution, consider taking part in Face 2024 and other competitions. The FTC’s recent voice cloning challenge produced heartening results.

Pair Biometrics With Other Authentication Tricks

Passwordless experiences that get still-extant passwords off the wire are still better than password-bearing interactions. Much like Touch ID, they have the potential to reduce ecosystem-wide risk. So, accelerate your plans to move to phishing-resistant authentication methods. Liminal research says 48% of practitioners who are planning to adopt passwordless solutions in the next two years prefer biometric authentication.

FIDO multi-device credentials aren’t perfect, but they hit a new sweet spot for security, privacy, user choice, and user experience. So, use passkeys where possible. They typically leverage biometrics in proper fashion, binding the user to the channel used.

Get Fine-Grained About Verifying Identity Data

If you want to be sure you’re not looking at a “swapped face”, add more identity verification signals to your registration and authentication user journeys. The trick is to scope those signals to individual pieces of data, make them more privacy sensitive, and reduce their invasion into the user experience. An example is privacy-sensitive age estimation, a new biometric technique for responding to the age-appropriate design codes and website age-gating mandates sprouting up.

The emerging verifiable credentials era presents an intriguing opportunity to start trading in verified-identity “small data”. If you could ask for and receive individual verified identity signals from a user’s wallet — with biometric and binding assurances about the quality of those signals — what would you ask for?


Finally: Be wary of simply participating in a classical security arms race as your only strategy. Bots battling bots for inches of fraud detection ground won’t get us where we need to go. Incremental gains in liveness detection will be swamped by AI’s endless invention. We’re in the biometrics singularity now — so we need to innovate more than ever before.


Eve is a globally recognized pioneer in identity and access management and standards. Her roots are in semi-structured data modeling and the API economy and include a passion for fostering successful ecosystems and individual empowerment. At Venn Factory she drives identity, security, and privacy success in the connected world by bridging the gap between technical intricacies and strategic business outcomes.

Eve’s leadership on pivotal standards such as XML, SAML, UMA, and HEART, as well as industry efforts like UK Open Banking, US government health IT, and the medical Internet of Things, demonstrates her unwavering commitment to innovation.

As CTO of ForgeRock, Eve oversaw emerging technology R&D, evangelism, and innovation culture, and empowered her team and cross-functional colleagues to deliver results to dozens of Global 5000 customers, partners, analysts, publications, and events. She previously served as a Forrester Research security and risk analyst covering IAM, strong authentication, and API security.

Thanks for reading! Visit the Venn Factory to request an expanded version of this article.

Since its inception in 2017, IDPro® has been on a journey of growth and innovation. From our founding to the 2021 global launch of CIDPRO®—Certified Identity Professional—we’ve been steadily elevating the IAM profession worldwide.

This year, we bid farewell to three board members: Janelle Allen, Jon Lehtinen, and Bill Nelson. The IDPro board has also decreased the number of board members to seven. Consequently, we’re on the lookout for one passionate, dedicated individual to join the IDPro board of directors in June 2024.

We encourage you to nominate yourself for this prestigious role if you share our enthusiasm for:

  • Championing IDPro’s mission
  • Shaping the organization’s growth and strategic direction
  • Strengthening IDPro’s financial health and stability
  • Giving back to the global IAM community
  • Collaborating with other thought leaders and experts
  • Expanding your professional network
  • Developing your leadership skills and gaining industry recognition

We eagerly anticipate board nominations—including self-nominations—that embody these values.

Our board members actively contribute 10-15 hours per month to various responsibilities, including monthly board meetings, project management, and engaging with current and prospective members. You can learn more about the structure of the board, the decision-making processes, and more, by reviewing our bylaws.

At IDPro, we celebrate our team-oriented, inclusive approach that transcends cultures, nationalities, and time zones. We firmly believe that diversity within the industry benefits everyone, and we welcome identity practitioners and qualified individuals from all backgrounds to apply.

If you or someone you know fits the bill and is eager to contribute to the ongoing success and growth of IDPro, don’t miss this chance to make a difference in the world of digital identity – submit your nomination today! We can’t wait to see the incredible talent and enthusiasm you bring to the table.

If you’re interested in being an IDPro Board nominee, please contact director@idpro.org today to receive the application packet with the necessary questionnaire and other useful material. Completed nominations are due by April 30, 2024.

Disclaimer: The views expressed in the content below are solely those of the author and do not necessarily reflect the views of the IDPro organization.

Authorization has become the latest buzzword in IAM land. Second to none… well, OK, maybe second to passkeys. With this renewed attention, many practitioners have been asking great questions about the scale, performance, governance, and responsibility model for authorization. Certain questions come up again and again in IDPro’s Slack #authorization channel, and this post aims to address them.

Now, if you’re entirely new to the authorization conversation, don’t worry, we got you covered. Check out these introductory articles that describe the authorization landscape:

Policy Lifecycle Management, Ownership & Collaboration

Who’s in charge

So you’ve decided to externalize and decouple authorization from your application. You now need to define your authorization policies (whether those end up expressed as an actual policy, ACLs, or a graph is beside the point). But who should define those policies? Approve them? Review them?

Based on past customer interactions, we’ve come to realize at Axiomatics that there are different types of policies:

  • Enterprise-wide policies
  • Application-specific policies

Additionally, there may be different sources for the policies:

  • The compliance & legal teams
  • The application teams

Figure 1: Policy Lifecycle Management

Generally, most of the actual policies come from the business or someone who knows the business. That translates to product owners or product managers seconded by business analysts. Those policies that are not application-specific would be defined by the legal and compliance teams. Interestingly, some of these policies may come from standards e.g. PCI DSS, SOX, PSD2… All you have to do is apply them. As a matter of fact, some industry standards (FHIR, HL7) have even implemented their own policies using XACML.

Historically, you’d expect product owners and business analysts to implement their own policies. However, with the rise of *-as-code and lightweight policy syntaxes such as ALFA or Rego, product owners can delegate the implementation of policies to developers. The main benefit is app/policy co-development: faster application development and more cohesion between the app and its policies.
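
Neither ALFA nor Rego is shown in this article, so purely as an illustration of what “policy as code” hands to developers, here is the kind of business rule a product owner might describe, sketched in Python rather than a real policy language; every attribute name is hypothetical.

```python
# Illustrative only: a business rule ("managers may approve transactions in their
# own region, up to a limit") expressed as code. In practice this would live in a
# policy language such as ALFA or Rego; the attribute names here are hypothetical.

def can_approve(subject: dict, resource: dict, action: str) -> bool:
    """Return True if the subject may perform the action on the resource."""
    if action != "approve":
        return False
    return (
        subject.get("role") == "manager"
        and subject.get("region") == resource.get("region")
        and resource.get("amount", 0) <= 10_000
    )

# Example evaluation
print(can_approve(
    {"role": "manager", "region": "EMEA"},
    {"type": "transaction", "region": "EMEA", "amount": 7_500},
    "approve",
))  # True
```

In a real deployment the same rule would live in the policy store and be evaluated by a PDP rather than being compiled into the application.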

Now that we’ve introduced a hierarchy of policies with varying scope, we also need a means for policy authors to collaborate. How else can a local product owner be aware of the enterprise policy on money laundering (as an example)? What collaboration, discovery, and reusability capabilities does your authorization platform provide?

What skill set do policy authors need?

A range of skills is needed to write effective and efficient policies. Starting with the plain-English version, you need knowledge of the business application and the business rules that need to be applied. Representatives from security, compliance, and legal will have their own guidelines or regulations that must be accounted for in the access policies. Finally, you need the technical perspective to make sure access policies are written in a way that provides the best performance; the technical people will also know how to configure the system for any metadata the policy decision needs, such as attribute sources, connections to the MFA system, and so on. It may sound scary, but take a step back and look at it as the layers of an onion. Peel back one layer at a time: start with the requirements, where the skills needed are strictly business-related. Eventually, you hit the layer where knowledge of the specific authorization platform is needed, e.g. ALFA, ACLs, or a graph. Fortunately, these models are relatively easy to pick up, and there are lots of resources available, e.g. https://alfa.guide, https://authorization.guide, or https://authorization.academy

Authorization API Standardization

Many authorization platforms, products, and SaaS offerings follow their own conventions. For instance, Open Policy Agent has a relatively flexible interface depending on how the Rego policy is written. Others like Oso and Cerbos have a proprietary interface. Some, like Cedar, don’t have an API to speak of. Standards-based options such as XACML and ALFA follow the OASIS XACML REST/JSON PEP-PDP interface (what a mouthful). NIST defines its own interface as part of its NGAC standard. The good news is that all these approaches are very similar. Several vendors and practitioners have been working together under the auspices of the OpenID Foundation in a new working group called AuthZEN to standardize the PEP-PDP interaction and align the vendors and platforms. You can see the latest samples here. There will be an interop at Identiverse 2024. This will increase interoperability between platforms and products. AuthZEN’s goal is also to encourage non-authorization vendors (e.g. other SaaS such as Workday and Salesforce) to adopt these APIs and allow their customers to externalize their authorization configuration.
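
As a rough sketch of what a standardized PEP-to-PDP call could look like, the snippet below posts a subject/action/resource/context payload to a decision endpoint. The URL path, field names, and response shape are modeled on the general direction of the AuthZEN work but should be treated as illustrative, not as the final specification.

```python
# Illustrative PEP-side call to a PDP over HTTP. The endpoint URL and payload
# field names are assumptions modeled on the subject/action/resource/context
# shape the AuthZEN work is standardizing; consult the spec for the real API.
import json
import urllib.request

def is_allowed(pdp_url: str, subject_id: str, action: str, resource_id: str) -> bool:
    payload = {
        "subject": {"type": "user", "id": subject_id},
        "action": {"name": action},
        "resource": {"type": "document", "id": resource_id},
        "context": {"channel": "web"},
    }
    req = urllib.request.Request(
        pdp_url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp).get("decision", False)

# Usage (hypothetical PDP endpoint):
# allowed = is_allowed("https://pdp.example.com/access/v1/evaluation",
#                      "alice@example.com", "read", "doc-123")
```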

More broadly, there are three areas worth standardizing:

  1. The standards of the underlying data sources, e.g. SQL, SCIM, OData… That’s really just about plumbing, and each authZ solution can easily address it.
  2. The policy language: a standard policy language makes learning easier. There are more resources to learn from and more templates to be had. For instance, ALFA’s Wikipedia page contains lots of examples, and the OPA community has many Rego samples… It also matters from an integration perspective if, for instance, you want to feed the policies to an IGA tool like SailPoint. It may also matter from a “PDP engine” perspective: imagine you have a single control plane where you write your policies, yet you want to provision to different PDPs in different layers or from different vendors. Having a single language is useful. This is where the feature parity between Cedar and ALFA is interesting – you can imagine translating from one to the other. This is also where IDQL is useful, as it aims to provide that translation mechanism. But be wary of languages that are very broad (Rego) and cannot be easily translated to purposely constrained languages (Cedar and ALFA).
  3. Lastly and most importantly, as previously discussed, the PEP-PDP interaction. That’s what AuthZEN aims to address. It’s partly addressed in OPA (although they give a lot of freedom to developers) and it has already been addressed by XACML (JSON profile, REST profile). See the work we’re doing on prior art here.

Architectural Patterns, Scale & Performance

Performance requirements can take many forms. For example, do you need a lot of throughput (typically you can deploy more instances of the PDP service) or is response time the most critical factor? Caching policies, attributes, or decisions can help with response time. Placing the PDP as close as possible to the protected resource (like with a sidecar) can eliminate or reduce network callouts.
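
As a minimal sketch of the decision-caching idea, assuming a short time-to-live is acceptable for your revocation requirements:

```python
# Minimal sketch: cache PDP decisions for a short TTL to cut response time.
# The trade-off is staleness: a revoked entitlement may linger until the TTL
# expires, so keep the TTL short or invalidate the cache on revocation signals.
import time

class DecisionCache:
    """Cache PDP decisions for a short TTL to cut per-request latency."""

    def __init__(self, ttl_seconds=5.0):
        self.ttl = ttl_seconds
        self._entries = {}   # (subject, action, resource) -> (decision, stored_at)

    def get(self, key):
        entry = self._entries.get(key)
        if entry is None:
            return None
        decision, stored_at = entry
        if time.monotonic() - stored_at > self.ttl:
            del self._entries[key]   # stale: force a fresh PDP call
            return None
        return decision

    def put(self, key, decision):
        self._entries[key] = (decision, time.monotonic())

cache = DecisionCache(ttl_seconds=5.0)
key = ("alice", "read", "doc-123")
decision = cache.get(key)
if decision is None:
    decision = True              # placeholder for a real PDP call
    cache.put(key, decision)
print(decision)
```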

There are many examples of externalized authZ systems managing millions of users or transactions. In many cases, you need to pick the product with the right architecture and configure it properly to meet performance needs.

That said, processing authorization requests takes resources, whether it happens within the application code or via a call to an external authZ service. Today that performance hit may be buried inside the app; using an authZ service makes the cost visible, but it also gives you the opportunity to optimize performance in ways you may not have had available before.

We’ve come a long way since the early days of “externalized & decoupled authorization”. In 2010, many vendors would argue for a single logical centralized PDP. Long gone are those days. Now you can mix and match: one central PDP and/or sidecar PDPs. One single PEP (e.g. an API gateway) or micro-PEPs deployed inside each application.

If you have to argue about performance with a peer, ask them what the performance of the current system is. I’d argue externalizing and more importantly decoupling authorization gives you an opportunity to optimize and increase the overall performance of your applications.

Conclusion

I hope this article has proven useful. If you’ve read this far, know that you’re not alone: many of us have pondered these topics. Fortunately, the IDPro community is full of eager volunteers who have great insights on rolling out a successful externalized authorization strategy, so come find us in IDPro’s #authorization Slack channel.

Authors:

David Brossard, Chief Technology Officer, Axiomatics

In his role as CTO, David drives the technology vision and strategy for Axiomatics based on both identity and access management (IAM) market trends as well as customer feedback. He also leads the company’s strategy for standards and technology integrations in both the IAM and broader cybersecurity industries. David is a founding member of IDPro, a co-author of the OASIS XACML standard, and an expert on standards-based authorization as part of an overall IAM implementation. Most recently, David led the design and development of Salesforce’s identity offering, including customer identity and access management (CIAM) solutions.

Alex Babeanu, Chief Technology Officer, 3Edges

Alex leads the research and development of 3Edges, which created the best and easiest to use Graph platform on the market, specifically built for graph-aware dynamic authorization. His past experience includes building pieces of the Oracle Identity Manager server as a Principal at Oracle, and over 10 years spent as a consultant in the field, architecting many solutions for public and private organizations in all verticals. Alex holds an MSc in Knowledge Based Systems from the University of Edinburgh, UK, and is an avid Sci-Fi enthusiast.

Disclaimer: The views expressed in the content below are solely those of the author and do not necessarily reflect the views of the IDPro organization.

Many organizations hear from vendors, thought leaders, and perhaps strange women lying in ponds who distribute swords that they need to get to “Zero Trust.”  Zero trust as a marketing term has exploded over the past few years, and it feels like everywhere you look, the term is being used, but very little is being said on what it means—and indeed, what it means to identity. It would be prudent then to understand what is meant by zero trust, select a model that provides a basis by which a zero trust architecture may be achieved, and dig into the ramifications of the model chosen for identity.

What is Zero Trust?

Zero Trust is broadly defined by many sources. For instance, Gartner couches Zero Trust within the context of networks, stating, “Zero trust network access (ZTNA) is a product or service that creates an identity- and context-based, logical access boundary around an application or set of applications.” The UK’s NCSC also defines it within the context of networks; they offer that “A zero trust architecture is an approach to system design where inherent trust in the network is removed.” If we are to believe NIST SP 800-207 (Zero Trust Architecture), it is “the term for an evolving set of cybersecurity paradigms that move defenses from static, network-based perimeters to focus on users, assets, and resources.” Given the spread of definitions, we should look to synthesize them to provide a holistic perspective.

  • Zero trust seeks to eliminate implicit trust.
  • Zero trust seeks to make access determinations that are identity, context, and resource-driven.
  • Zero trust seeks to move past using static network configurations as a defense.

What is Implicit Trust?

Implicit trust, put simply, is where actions taken between systems, users, and other resources are allowed due to some facet of their relationship with each other. In an extremely simple example, a database within a traditional, organizationally managed data center may have a line of sight, from a network perspective, to hundreds of other systems because of the tasks the database helps those systems perform. These in-datacenter systems may have a common set of administrators, and one of these administrators may have access to a laptop that uses a VPN client to get into the data center remotely for management, specifically of that database. These systems then share a tremendous degree of implicit trust: an attacker who gets access to the administrator’s laptop could potentially do immense damage to a number of systems because each system in the chain has put some degree of faith in the next one down the line. Ransomware, in particular, exploits implicit trust, utilizing whatever tools it can to move laterally within an organization to cause as much damage as possible.

What Do Zero Trust Folks Mean When They Say “Identity,” “Context,” and “Resource”?

When we speak of identities in a zero-trust context, we refer to both traditional users (as in people) and non-person entities (such as machine accounts used for programmatic access). These identities must have appropriate context, meaning they must meet specific conditions (e.g., time of day, location, compliance to specific requirements identified by the organization, attributes, role-based access signifiers, etc.) to perform a given operation. Resources are objects an organization possesses that are subject to access determinations, such as applications, workflows, systems, assets that respond and conform to logical access (such as doors), and so on. We describe all of this to indicate that a user, in certain contexts, has access to perform actions on specific resources.

What Happens to the Network?

The network, as we understand it, still exists. However, the focus shifts from hardening the perimeter of a network to securing resources. Typical implementations focus on identities sufficiently authenticating and having sufficient authorization (by having appropriate context), with these entitlements being dynamic and assessed continuously such that if the identity no longer meets requirements, access is terminated immediately; if the identity is sufficiently authenticated and authorized, it is allowed access to the resource for that specific interaction. Each interaction with a given resource requires a new and separate assessment; prior successful assessments do not indicate future success. The common terminology used for the interaction of identity to resource under this model is “microsegmentation”—to effectively construct a network segment from resource to resource and dynamically assign it based on context.
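
To make the per-interaction assessment concrete, here is a minimal sketch in Python; the evaluate() function and its inputs are placeholders for whatever policy engine and signals an implementation actually uses.

```python
# Minimal sketch of per-request evaluation: no decision is carried over from a
# prior interaction, and access ends as soon as the context stops satisfying
# policy. The evaluate() function is a placeholder for a real policy engine.
from dataclasses import dataclass

@dataclass
class RequestContext:
    user: str
    device_compliant: bool
    location: str
    risk_score: float

def evaluate(ctx: RequestContext, resource: str, action: str) -> bool:
    # Placeholder policy: compliant device, low risk, allowed location.
    return ctx.device_compliant and ctx.risk_score < 0.7 and ctx.location in {"US", "DE"}

def handle_request(ctx: RequestContext, resource: str, action: str) -> str:
    # Evaluated on every interaction; a success five minutes ago grants nothing now.
    if not evaluate(ctx, resource, action):
        return "403 access terminated"
    return f"200 {action} on {resource} permitted for this interaction only"

print(handle_request(
    RequestContext("alice", device_compliant=True, location="US", risk_score=0.2),
    "payroll-db", "read"))
```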

What Models Are There of Zero Trust?

While vendors are quick to provide their own views of zero trust, few (if any) have offered comprehensive models that outline the critical functions necessary to achieve such a state in a distributed computing environment. Various countries and blocs, such as the UK and the EU, have offered broad guidance (https://www.ncsc.gov.uk/collection/zero-trust-architecture) or paid lip service to it in reports (https://www.europarl.europa.eu/doceo/document/A-9-2021-0313_EN.html), but few government-sponsored and independent reference models have been put forward. The US Government has offered some guidance on this across its agencies, notably NIST by way of its work in the NCCoE (https://www.nccoe.nist.gov/projects/implementing-zero-trust-architecture) as well as NIST SP 800-207, and the Department of Defense with its Zero Trust Reference Architecture (henceforth the DoD ZTRA). While all of the NIST SPs are great reading on this subject, let’s focus for a bit on the DoD ZTRA.

An Extremely High-Level View of the DoD ZTRA for Identity

The DoD ZTRA asserts that zero trust’s goal is to protect data. It does this through the interrelated nature of six separate focus areas: User, Device, Network/Environment, Applications/Workload, Visibility/Analytics, and Automation/Orchestration. The DoD ZTRA asserts that conditional authentication and authorization are critical to each focus area and provides a figure that offers capabilities related to those areas. See Figure 1 for their highlighted capabilities.


Figure 1: Authentication and Authorization Capability Taxonomy. Source: DoD ZTRA

A point that the DoD ZTRA really drives home with this figure, as well as with the other capability taxonomies it outlines, is that authentication and authorization need to be driven into every decision possible, as close as possible to the point of decision. These authentication and authorization decisions need to be constant, fine-grained, and adaptive, and they need rapid mechanisms for restricting access should a user’s behavior become incongruent with their standard use patterns.

The DoD ZTRA indicates that a service external to the previously mentioned focus areas, known as the “Enterprise Identity Service” (EIS), should be utilized at the control plane to facilitate this. The EIS is made up of three capabilities: the Enterprise Federated Identity Service (EFIS), Automated Account Provisioning (AAP), and the Master User Record (MUR). At a high level, these capabilities map to federated authentication and authorization, identity governance/lifecycle management, and the aggregation of contextually important attributes for a given entity (person or otherwise) for the purposes of driving those authentication and authorization decisions. Examples include credentials, roles, attributes defining access classifications, policy/context-driving attributes (such as a risk score for a given user), and so on.
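
The DoD ZTRA does not prescribe a schema for the Master User Record, so purely as an illustration of “aggregating contextually important attributes,” a record might be modeled along these lines; every field name below is an assumption.

```python
# Illustration only: the DoD ZTRA does not define a Master User Record schema.
# This sketch just shows the idea of aggregating the attributes that drive
# authentication and authorization decisions; every field name is an assumption.
from dataclasses import dataclass, field

@dataclass
class MasterUserRecord:
    entity_id: str                                          # person or non-person entity
    credentials: list = field(default_factory=list)         # e.g., PIV, FIDO2, x509
    roles: list = field(default_factory=list)
    clearances: list = field(default_factory=list)          # access classifications
    risk_score: float = 0.0                                 # policy/context-driving attribute

mur = MasterUserRecord(
    entity_id="svc-logistics-01",
    credentials=["x509"],
    roles=["logistics-reader"],
    clearances=["unclassified"],
    risk_score=0.1,
)
print(mur)
```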

This raises a question of scale: is the DoD ZTRA meant to construct one system to rule them all? Not necessarily. To quote the DoD ZTRA on this, “DoD enterprise ICAM service providers provide one or more services that support ICAM capabilities. A service is defined as DoD enterprise if it can be used by anyone across the DoD, and, for externally facing federation services, by any DoD mission partner”. The document goes on to define requirements for these service providers, as well as requirements for DoD component organizations. Ultimately, there will be many implementations of an EIS across the DoD, each able to best meet the needs of its mission while still conforming to the goal of eliminating implicit trust wherever possible.

A goal of this externalized service is to be reusable and interoperable: while the DoD does not provide specifics about each service, it is to be assumed that an EIS for a given DoD organizational component should be able to communicate effectively with every other DoD organizational component and mission partner as it needs to. If this were not the case, the DoD would be back to building stovepipe systems: systems with limited scope and function, possessing data that, by the nature of the system, is difficult to use outside of the system. Identity commonly falls into this trap, where a given system owner may wish to implement their own flavor of an identity capability with a custom schema or custom relationship model.

In Summary

There should be minimal surprise when we see that the DoD ZTRA offers no revolutions in security or identity thought. It is instead a synthesis of practices that identity and security practitioners have been pointing to as critical for years. Whether the people who perform integrations across the federal government take this guidance to heart remains to be seen. It is this author’s hope that, given time and appropriate space, the DoD ZTRA will not be the final word on the topic but merely the beginning of the conversation about integrating sound identity practices into large and distributed organizations.

Author Bio

Rusty Deaton has been in Identity and Access Management for over a decade. He began in technology as a technical support engineer for a Broker-Dealer and has since worked across many industries, carrying forward a passion for doing right by people. When not solving problems, he loves to tinker with electronics and read. He currently works as Federal Principal Architect for Radiant Logic.

Disclaimer: The views expressed in the content below are solely those of the author and do not necessarily reflect the views of the IDPro organization.

As an introspective person, I often ask myself, “why identity?” Why, out of all of the industries that I have worked in, out of all of the areas within tech I have had the privilege of participating in, has identity been the one that stuck?  I always come back to the same answer: the meaning behind it. Working in our field is a calling. It’s often not glamorous. It’s just expected to work. When it does, you never hear any praise. When it doesn’t, the whole application is down, and all eyes are on you. Despite this, what not only keeps us going, but makes us proud to call ourselves identity professionals? We all see the bigger meaning behind identity.

As we approach the 75th anniversary of United Nations Human Rights Day on December 10, 2023, it is crucial to reflect on the significance of identity and privacy as fundamental human rights. In today’s digital age, where personal information is constantly collected and shared, it is essential to recognize how intertwined identity and privacy are, and how they are both necessary to uphold. 

Identity is an integral part of an individual’s selfhood. It encompasses the ability to define oneself, express one’s beliefs, and engage in society freely. Identity is closely tied to privacy, as individuals should have the right to maintain and protect their personal identity without fear of discrimination or surveillance.

Privacy ensures individuals have control over their personal information, encompassing the right to control what information is collected, how it is used, and who has access to it. When so much of our online reality is fed by algorithms that extrapolate things about us to achieve certain outcomes based on opaque data profiles, it becomes even more important that privacy is treated as a fundamental human right. It must be granted to all humans by default, not as something available only for those who can afford to buy it. 

Recognizing identity and privacy as human rights allows us to advocate for their protection and ensure that individuals’ dignity and autonomy are upheld. It involves promoting transparency in data collection practices, implementing strong data protection laws, and fostering a culture that respects and values privacy.

I thought of another reason identity has stuck for me. I’m not one to back down from a challenge. Protecting identity and privacy as human rights is a big, meaty challenge. If we don’t step up to the plate, who will? We are uniquely positioned to be right at this juncture, with our skill sets in identity, right now. Will you join me? By promoting awareness and understanding of identity and privacy as human rights, we can work towards creating a safer and more equitable digital landscape for all.

Author


Hannah Sutor, Principal Product Manager, GitLab, IDPro Board Member

Hannah Sutor is passionate about all things digital identity and privacy. She currently works as a Senior Product Manager at GitLab, focusing on authentication and authorization in a DevSecOps context.

Hannah has spoken at various conferences on digital identity, privacy, cybersecurity, and DevOps workflows. She is also a content creator, writing articles and producing engaging, easy-to-digest content on these topics for those without a technical background.

She lives outside of Denver, Colorado, USA, and enjoys bad reality TV just as much as she enjoys a walk in the woods.

Introduction

Disclaimer: The views expressed in the content below are solely those of the author and do not necessarily reflect the views of the IDPro organization.

Both the identity world and the data security world are vast and complex these days. It is common to see people (and organizations) treat them as completely disconnected, and that couldn’t be more wrong.

One of the key responsibilities of an IAM practitioner is to ensure that the right people have the right access at the right time. This article aims to shed light on the crucial role of IAM practitioners in data security, providing an insider’s perspective on the challenges, triumphs, and the evolving landscape of this vital field. Whether you are an IAM professional seeking insights from peers or a cybersecurity enthusiast looking to understand the intricacies of IAM, this article offers a comprehensive overview of data security through the lens of an IAM practitioner. So, let’s dive in and explore the fascinating world of IAM and its impact on data security.

The role of IAM in data security

IAM can be considered a framework of business processes for managing digital identities. At a minimum, it should cover the first steps of an identity’s lifecycle, such as onboarding (HR-driven or otherwise, depending on the type of identity: employee, guest, or non-human), along with entitlements, activity recording, ongoing management, monitoring, and automation.

Next comes the role of IAM in data security itself: ensuring that only authorized individuals have access to resources, devices, and data. This is where we think about everything related to the design, creation, and management of roles and access privileges, as well as the decisions to grant, or not grant, those privileges.

IAM is also central to compliance with different regulations. As IAM practitioners, we must help organizations meet regulatory requirements related to data access and privacy through the implementation of policies and procedures.

Identity lifecycle management and data security

Managing the lifecycle of identities means ensuring that access rights are granted when needed and revoked when no longer necessary, all while maintaining compliance with regulations like GDPR, HIPAA, SOX, and others. We also need to be able to revoke access in real time when needed, which generally depends on integration between data security and IAM solutions through events, signals, and triggers.

If we think about OpenID Connect providers, we can talk about the Continuous Access Evaluation Profile, or CAEP (OpenID Continuous Access Evaluation Profile 1.0 – draft 02). CAEP allows for real-time evaluation of user access, enhancing the security posture of organizations. It enables a dynamic exchange between the token issuer and the relying party, allowing for immediate response to critical events such as user termination, network location change, and others. This ensures that only authorized individuals retain access to sensitive data, thereby significantly reducing the risk of data breaches. In an ideal world, every solution that can potentially provide access to sensitive data should take advantage of CAEP and policy-based access control (PBAC).
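
As a rough illustration of the kind of signal CAEP conveys, a “session revoked” event is delivered as a Security Event Token (a signed JWT). The unsigned claims payload below shows the general shape; the exact claim and field names should be checked against the OpenID Shared Signals and CAEP specifications.

```python
# Rough illustration of a CAEP-style "session revoked" signal. CAEP events are
# carried as Security Event Tokens (signed JWTs); this is just the unsigned
# claims payload, and the exact claim/field names should be verified against
# the OpenID Shared Signals and CAEP specifications.
import json, time

session_revoked_set = {
    "iss": "https://idp.example.com",          # hypothetical issuer
    "jti": "756E69717565206964656E746966696572",
    "iat": int(time.time()),
    "aud": "https://rp.example.com",           # hypothetical receiver
    "events": {
        "https://schemas.openid.net/secevent/caep/event-type/session-revoked": {
            "subject": {"format": "email", "email": "alice@example.com"},
            "event_timestamp": int(time.time()),
        }
    },
}

print(json.dumps(session_revoked_set, indent=2))
```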

IAM, data security, and the user experience

Finding the right balance between security and the user experience is key if we don’t want to create frustration. Some of the key aspects to consider are:

  • Seamless access to resources. This could be translated into users not needing to enter multiple passwords to access multiple systems.
  • Secure access to resources. This is about controlling what users can and can’t access, so sensitive data and functions are restricted to those who really should have access. This enhances security and builds user trust.
  • Role-based access. To ensure that users have the right level of access to corporate resources – a clear benefit these days given the growth of remote work and cloud services.
  • Minimizing disruptions when redirecting users from the service provider to other applications or services.

In summary, this is all about designing and implementing processes and controls aligned to the security policies and regulatory requirements. It is also about continuously monitoring and updating these processes and controls to respond to evolving security threats and business needs.

The role of IAM practitioners in preventing data exfiltration

IAM practitioners can contribute to preventing the unauthorized transfer of data in many ways:

  • Applying access controls to manage access from non-corporate networks or devices, as well as from risky IP addresses and other suspicious sources.
  • Adopting Data Loss Prevention (DLP) strategies. DLP is generally more associated with data security practitioners; however, we have to make sure our DLP policies can be integrated with our IAM policies. For example, our policy engine could offer more granular controls that let us reference the DLP policies created in our data security solutions.
  • Making permissions to very sensitive data temporary and subject to frequent review and revocation, to prevent long-term standing access (a minimal sketch of such time-boxed grants follows this list).
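
Here is the promised minimal sketch of time-boxed grants: every grant to sensitive data carries an expiry, and anything past that point is treated as revoked until re-approved in a review. The names and the 30-day window are illustrative choices.

```python
# Minimal sketch of time-boxed access to sensitive data: every grant expires,
# and expired grants are treated as revoked until re-approved in a review.
# The field names and the 30-day window are illustrative choices.
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class SensitiveDataGrant:
    subject: str
    dataset: str
    expires_at: datetime

    def is_active(self, now=None) -> bool:
        now = now or datetime.now(timezone.utc)
        return now < self.expires_at

grant = SensitiveDataGrant(
    subject="alice@example.com",
    dataset="customer-pii",
    expires_at=datetime.now(timezone.utc) + timedelta(days=30),
)
print(grant.is_active())   # True until the review window lapses
```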

How can IAM and data security practitioners contribute to reducing costs

IAM and data security practitioners can contribute to reducing costs in several ways:

  • Reducing data breaches through the implementation of strong IAM practices.
  • Ensuring compliance with regulations such as GDPR, HIPAA, PCI and others. Being non-compliant can result in hefty fines.
  • Improving operational efficiency by automating processes related to the management of digital identities.
  • Reducing the impact of employee turnover by implementing processes to deprovision access rights when a user leaves the company or when their role within the organization changes. This helps prevent gaps that could be exploited by bad actors.

Conclusion

IAM practitioners play an indispensable role in ensuring that the right individuals have the right access at the right time, thereby significantly enhancing an organization’s security posture.

From managing the lifecycle of identities and preventing data exfiltration to ensuring compliance with various regulations, IAM practitioners are key to everything related to safeguarding sensitive data. IAM and data security practitioners must find a balance between security and user experience, ensuring seamless and secure access to resources while minimizing disruptions.

Moreover, IAM and data security practitioners contribute significantly to cost reduction. By implementing robust IAM practices, ensuring regulatory compliance, improving operational efficiency, and mitigating the impact of employee turnover, they help organizations avoid the hefty costs associated with data breaches and non-compliance.

In conclusion, the role of IAM practitioners in data security is crucial. As the digital landscape continues to evolve, these roles will become even more critical in safeguarding our digital assets and navigating the complexities of data security.

Author: Marcelo Di Iorio

Marcelo is a seasoned expert with over 20 years of experience in Identity and Access Management (both IAM and CIAM), Identity Governance (IGA), Identity Protection, Privileged Access Management (PAM) and Cloud Infrastructure Entitlement Management (CIEM). He has been working at Microsoft since 2008, first in Argentina and now in Spain, currently as a Global Black Belt (GBB) for Advanced Identity and covering EMEA. Before Microsoft, he worked for some Microsoft and Citrix partners as a consultant. He actively participates in conferences and writes articles where he shares his view on different topics related to identity and security, as well as current and future trends, and he also records a monthly podcast with other colleagues where he talks about Microsoft identity and what’s new in that space.

Digital identity systems have been a core component of organizations in every sector and around the world. Here at IDPro, we often focus on the enterprise and consumer end of things. Workforce identity and CIAM are the bread and butter of most IDPro members. But we’ve always known that digital identity is more than just a department or a role at a company. It’s truly the foundation of our digital lives.

Identity and Human Rights

The Universal Declaration of Human Rights enshrines recognition as a person before the law as a fundamental human right. Digital identity is a new aspect of that fundamental right, a topic covered by Elizabeth Garber and Mark Haine in the white paper “Human-Centric Digital Identity: for Government Officials.” This right has also inspired the United Nations Development Programme (UNDP) Model Governance Framework for Digital Legal Identity System.

Source: UNDP Digital Legal ID Governance website – https://www.governance4id.org/ 

Digital Identity and the United Nations

It might seem like a big stretch to go from our day-to-day worries about our IAM systems to a governance framework designed for governments worldwide to adapt as they build their digital identity programs, but it’s happening. The UNDP argues that there is a significant social and economic benefit for governments that digitize their identity programs and close the identity gap. In financial services alone, a strong digital public infrastructure is expected to speed up growth by 20–33%.

Think about it. Our little corner of the world, which focuses on a specialty so young you almost certainly don’t have a degree in it, is now a core aspect of global economic growth!

Eight Core Themes

So, what does the UNDP’s framework look like? As expected of the UN, they are taking a broad approach that considers all elements of society. Specifically, they offer guidance on:

  • Equality and Non-Discrimination
  • Accountability and the Rule of Law
  • Legal and Regulatory Framework
  • Capable Institutions
  • Data Protection and Privacy
  • User Value
  • Procurement and Anti-Corruption
  • Participation and Access to Information

The UNDP model comes from their legal identity and digital public infrastructure efforts, which is exactly the right combination to bring together. Digital transformation is a bit of a buzzword, and yet, that’s what is happening. The UNDP is trying to provide some guidance so countries are at least somewhat going in the same direction. They’ve already noted that there are at least as many failed identity programs as successful ones, usually because of inadequate governance.

Digital identity always comes down to governance.

Applying the Framework

We can always learn from others, and we have an opportunity, regardless of what sector we work in, to learn from the UNDP framework. While targeted towards governments and civil society, there is quite a bit here that the private sector can apply to its IGA programs. One example is treating support for equity and diversity as a foundational principle. Another is ensuring the systems and programs are adequately funded and free of undue influence.

Wrap Up

So why is this a Letter from Leadership post (which we’re also posting to the blog)? Because identity governance is our space and everyone in this organization has an opportunity to be a leader in ensuring the identity programs they are part of are well-designed and developed. So, as one leader to the next (that’s you), I hope you take a few moments to think about this bigger picture and how you can make the governance of the identity systems around you better.

Author

Heather Flanagan, Acting Executive Director and Principal Editor for IDPro (and Principal at Spherical Cow Consulting) comes from a position that the Internet is led by people, powered by words, and inspired by technology. She has been involved in leadership roles with some of the most technical, volunteer-driven organizations on the Internet, including the IETF, IAB, and IRTF as RFC Series Editor, ICANN as a Technical Writer, and REFEDS as Coordinator, just to name a few. If there is work going on to develop new Internet standards, or discussions around the future of digital identity, she is interested in engaging in that work.

FYI I love acronyms: acronym soup, acronyms al dente, acronym au jus… Acronyms FTW. So, when I started working on a new article for this newsletter, it only felt natural to tackle OWASP and IAM. O’ What, you ask? Let’s dive right in.

What’s IAM?

Most of the readership here is familiar with IAM: Identity & Access Management. I’ll refer back to IDPro’s Body of Knowledge for definitions. Turn to the terminology section for the following:

  • Access management: The process and techniques used to control access to resources. This capability works together with identity management and the Relying Party to achieve this goal. The model shows access management as a conceptual grouping consisting of the Access Governance function and the shared authorization component. However, access management impacts local authorization as well (through the governance function).
  • Identity management: A set of policies, procedures, technology, and other resources for maintaining identity information. The IDM contains information about principals/subjects, including credentials. It also includes other data such as metadata to enable interoperability with other components. The IDM is shown with a dotted line to indicate that it is a conceptual grouping of components, not a full-fledged system in itself.

In short, Identity & Access Management (IAM) manages the identification, authentication, authorization, and access control of users and devices in an organization. It ensures the right people have access to the right resources at the right time, and that their actions are auditable.

Notably, I’d like to highlight the fact that IAM is not specific to any particular technology or stack. As a matter of fact, although we tend to think of IAM in the context of IT, it also applies to the physical world (think access cards, drivers’ licenses…)

And what about OWASP?

Overview

The Open Worldwide Application Security Project (OWASP) is an online community that produces freely available articles, methodologies, documentation, tools, and technologies in the field of web application security. (source: owasp.org) What’s worth highlighting here is the focus on web applications.

Is there any overlap then between IAM, a discipline, and OWASP, a community? Both firmly belong to the realm of cybersecurity. OWASP focuses on a particular layer of the technology stack (think OSI model) whereas IAM focuses on the identity & access elements of all (or any of) those layers.

Let’s take a look at OWASP’s most popular project: the OWASP Top Ten. The aim of the project is to help web app developers identify, understand, and avoid critical security risks for web applications. OWASP Top Ten is updated on a regular basis. The following diagram shows the evolution of the threats’ prevalence between 2017 and 2021.

While some risks are not tied to IAM, a few are direct consequences of insufficiently planned IAM designs in a web application. In particular, it’s worth calling out:

  • A01:2021-Broken Access Control
  • A03:2021-Injection
  • A04:2021-Insecure Design
  • A07:2021-Identification and Authentication Failures
  • A09:2021-Security Logging and Monitoring Failures

Let’s dive into each one individually.

A01:2021-Broken Access Control

According to OWASP, 94% of applications were tested for some form of broken access control. The 34 Common Weakness Enumerations (CWEs) mapped to Broken Access Control had more occurrences in applications than any other category.

Some of you may have heard of BOLA (Broken Object Level Authorization) or IDOR (Insecure Direct Object Reference). Generally speaking, broken access control means that the wrong clients, users, or services have access to the wrong set of processes or data. There are different ways to address this issue. In the realm of IAM, it boils down to verifying a caller’s identity and entitlements. This may be done through the use of access tokens or in combination with externalized authorization frameworks (either policy-based or graph-based – see A Taxonomy of Modern Authorization Models).
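
A minimal sketch of the object-level check that prevents an IDOR: the handler never trusts the identifier in the request alone; it verifies the caller’s relationship to that specific object on every call. The data model and names are illustrative.

```python
# Minimal sketch of an object-level authorization check (the IDOR/BOLA case):
# knowing a document's ID is never enough; the caller's entitlement to that
# specific object is verified on every request. Names here are illustrative.
DOCUMENTS = {
    "doc-1": {"owner": "alice", "shared_with": {"bob"}},
    "doc-2": {"owner": "carol", "shared_with": set()},
}

def get_document(caller: str, doc_id: str) -> dict:
    doc = DOCUMENTS.get(doc_id)
    if doc is None:
        raise LookupError("not found")
    # The object-level check: is this caller actually entitled to this object?
    if caller != doc["owner"] and caller not in doc["shared_with"]:
        raise PermissionError("forbidden")
    return doc

print(get_document("bob", "doc-1"))    # allowed: doc-1 is shared with bob
# get_document("bob", "doc-2")         # raises PermissionError
```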

The best way to address Broken Access Control is to systematically think “who can access this? Who can run this process?” 

A03:2021-Injection

Injection is somewhat broader than IAM. It covers attacks where input sent to the server is not validated. That could be content that changes behavior beyond the owner’s intent (e.g. changing a product’s price) or lower-level attacks such as SQL injection. Randall Munroe has the perfect illustration of this phenomenon. Even so, when all interactions are authenticated and authorized, injection becomes a much smaller threat, since the access-control checks that are part of authorization generally also validate the data input. Again, let’s be paranoid here and stick with the notorious Zero Trust mantra: “Never Trust, Always Verify”.
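
For the SQL case specifically, the classic mitigation is parameterized queries, which keep user-supplied input out of the statement structure entirely. A minimal sketch:

```python
# Minimal sketch of the classic SQL injection mitigation: parameterized queries
# keep user-supplied input out of the statement structure entirely.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE products (id INTEGER PRIMARY KEY, name TEXT, price REAL)")
conn.execute("INSERT INTO products (name, price) VALUES ('widget', 9.99)")

user_input = "widget' OR '1'='1"   # hostile input is treated as a plain value
rows = conn.execute(
    "SELECT id, name, price FROM products WHERE name = ?",  # placeholder, not concatenation
    (user_input,),
).fetchall()
print(rows)   # [] -- the injection attempt matches nothing
```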

A04:2021-Insecure Design

Insecure Design is broader than IAM, of course, but IAM is a sound first step towards adopting the principles of Secure by Design. Notable Common Weakness Enumerations (CWEs) include

  • CWE-256: Unprotected Storage of Credentials, 
  • CWE-501: Trust Boundary Violation, and 
  • CWE-522: Insufficiently Protected Credentials.

This is where application developers need to work hand in hand with their IAM teams to leverage the right IAM tools (framework, vendor solutions) and avoid common identity-related mistakes. No one should ever have to implement a username-password database ever again. No one should have to wonder how to store passwords. I mean, do you ever wonder whether you should implement your own database protocol? Of course you don’t.
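
In the same spirit, here is a sketch of what “delegate, don’t hand-roll” can look like: verifying tokens issued by an identity provider with an off-the-shelf library instead of maintaining a local password store. PyJWT is just one option, and the JWKS URL and audience below are assumptions for illustration.

```python
# Sketch of delegating authentication to an identity provider and verifying
# its tokens with an off-the-shelf library instead of storing passwords.
# PyJWT (pip install pyjwt[crypto]) is one option; the issuer JWKS URL and
# audience below are assumptions for illustration.
import jwt
from jwt import PyJWKClient

JWKS_URL = "https://idp.example.com/.well-known/jwks.json"   # hypothetical
AUDIENCE = "https://api.example.com"                          # hypothetical

def verify_access_token(token: str) -> dict:
    signing_key = PyJWKClient(JWKS_URL).get_signing_key_from_jwt(token)
    return jwt.decode(
        token,
        signing_key.key,
        algorithms=["RS256"],
        audience=AUDIENCE,
    )

# claims = verify_access_token(incoming_bearer_token)
```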

A07:2021-Identification and Authentication Failures

On paper, this is the one area in OWASP’s Top Ten that aligns the most with the field of IAM alongside A01:2021-Broken Access Control. Previously known as Broken Authentication, it includes Common Weakness Enumerations (CWEs) related to identification failures such as 

  • CWE-287: Improper Authentication, and 
  • CWE-384: Session Fixation.

IAM tooling and frameworks alone are not enough here, but the patterns and best practices we provide at IDPro go a long way to help you avoid common mistakes. The OAuth 2.0 Security BCP also helps avoid authentication failures.

A09:2021-Security Logging and Monitoring Failures

This Top Ten category includes the following CWEs:

  • CWE-778: Insufficient Logging,
  • CWE-117: Improper Output Neutralization for Logs,
  • CWE-223: Omission of Security-relevant Information, and
  • CWE-532: Insertion of Sensitive Information into Log File.

Logging is perhaps the most overlooked aspect of an application – or of an IAM framework deployment. When utilizing IAM frameworks and best practices, we can in fact turn logs into an asset. Delegating user management, authentication, or authorization to a third-party system leads to the generation of framework-specific logs (for instance authentication logs, failed attempts, authorization checks, load metrics, etc.). This data alone is pivotal in determining whether the application is functioning correctly or is possibly misconfigured or under attack.
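
A minimal sketch of the two habits these CWEs point at: record the security-relevant facts of every authentication attempt, and neutralize user-controlled values before they reach the log line. The field names are illustrative.

```python
# Minimal sketch: log the security-relevant facts of each authentication
# attempt (counter to CWE-778/CWE-223) and neutralize user-controlled values
# before they reach the log line (counter to CWE-117). Field names illustrative.
import logging

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(levelname)s %(message)s")
log = logging.getLogger("authn")

def neutralize(value: str) -> str:
    # Strip newlines so attacker-supplied input cannot forge extra log entries.
    return value.replace("\r", "\\r").replace("\n", "\\n")

def record_login(username: str, source_ip: str, success: bool) -> None:
    log.info(
        "login username=%s ip=%s success=%s",
        neutralize(username), neutralize(source_ip), success,
    )

record_login("alice\nFORGED ENTRY admin login success", "203.0.113.7", False)
```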

Conclusion

This article provided a brief overview of the overlap between IAM and OWASP, in particular the Top Ten. It highlights how a correct implementation of IAM frameworks and processes can help mitigate the risks that come with developing web applications. At its simplest, application developers should fully delegate user management and authentication to a third-party system. Developers should never think of implementing their own means of storing passwords or their own authentication protocols. Standards exist for a reason: they’ve been carefully designed to avoid security pitfalls. Frameworks (be they open source or vendor products) exist to provide exactly those guarantees. Look into authentication standards such as SAML, OAuth, and OpenID Connect. OAuth, as previously mentioned, also comes with BCPs (best current practices) that describe how best to implement it. Also consider externalized authorization.

Further Reading

Author

David Brossard

Chief Technology Officer, Axiomatics

In his role as CTO, David drives the technology vision and strategy for Axiomatics based on both identity and access management (IAM) market trends as well as customer feedback. He also leads the company’s strategy for standards and technology integrations in both the IAM and broader cybersecurity industries. David is a founding member of IDPro, a co-author of the OASIS XACML standard, and an expert on standards-based authorization as part of an overall IAM implementation. Most recently, David led the design and development of Salesforce’s identity offering, including customer identity and access management (CIAM) solutions.

My opinions are my own and do not reflect the views of others at my employer, the Federal Reserve System. Now that my authorization filter is in place, let’s follow Alice down the Rabbit hole and let your imagination run wild for the next 5 minutes.

Identity is jumping the fence

Identity is the new security perimeter, or so we’ve been told. 

Do we need someone to draw a box so you can picture yourself trapped inside? The Little Prince asked for it after all…. Are you a sheep?

Be honest – did you let Identity roam free again this year? Do you feel like you are Peter Pan trying to re-attach his shadow before going back to Neverland? Keep trying to secure this personal information that has leaked… again. Third time’s a charm. If not, a bit of fairy dust will go a long way to make DNA erase itself from the dark web (thank you 23andMe).

Identity can be undercooked, well-done or roasted

Let’s deconstruct this. Which material did you use for your identity wall? But more importantly, which one of the three little pigs are you?

Are you careless and used straw to build your wall? Let’s reuse this password again my friend.

Did you use sticks? Stick it together. Let’s add a thin layer of MFA with SMS and call it a day.

Did you use bricks? This is the way! Phishing-resistant authentication and mutual TLS. Hasta la vista, Big Bad Wolf! 

Hopefully, the Wolf was not part of the construction crew. Let’s put a layer of ITDR during inspections. Remember that the construction code has changed this year; we need to spend more on that house!

If none of these protections are enough, we do know that everything is good in the pig… so lucky Wolf.

Are my thoughts not Kosher enough? My Grandma is probably still cursing me in Yiddish from where she is… Sorry Grandma.

Identity strikes back – everyone to the exhaust port!

There must be a door somewhere in this identity perimeter, so what happens when you don’t have your keys?

Do you still have this red hammer in your outdoor shed? (you all know this mind trick… right?) 

Well, it will for sure prove itself useful when someone asks you about the secret passphrase to get back in. Those security questions are hard (to remember… not to break), but luckily with my red hammer, I nailed every answer. If only the helpdesk wasn’t trying to be “that” helpful… I don’t think I need to use Gen AI to impersonate the CEO this year.

Now as the Big Bad Wolf, I could also have tried the back door. I learned that trick from a Nigerian prince, and I can guarantee Phish for dinner (No winner, winner, chicken dinner tonight, I need to have a balanced diet).

Identity is at the door. My dog is barking

“Knock, knock”. Someone is at the door and my door is locked; I can’t trust anyone these days: Zero Trust. 

Before I let that person in, let’s re-establish trust. We don’t trust the network so let’s work with what we have, the identity network and the stuff around, like shared signals (Wait… what? Did I say network – again? Okay let’s call that identity graph).  Don’t worry, the Wolf is still huffing and puffing so that’s not him. He likes doing DDOS attacks to get in – he doesn’t knock. 

“Hey I’m the utility guy, I won’t be long, I just need to come inside to check the meter, I was not able to read it remotely.”

He looks nice enough, and he wears the uniform. Okay, I will only allow him inside for a few minutes.

Context has changed: we just realized we both play Bingo at the country club on Sundays. He can stay longer, and I will bring him biscuits. After all, we are all separated by at most six degrees of separation so what could go wrong? We’re practically family. I won’t ask for his badge this time. 

Here you go… I was able to sneak in some continuous authentication (let’s not call that authorization anymore – we all know that we can’t get authentication right, so let’s rename authorization “authentication” so we can lower everyone’s expectation).

Before leaving, the utility guy left that little device so he doesn’t have to come back. Luckily enough, it is always connected to a foreign country to ensure higher availability and make sure my data lives forever. I’m grateful for the IoT redundancy; I won’t rely on my robot vacuum anymore to protect my privacy with its camera array. Maybe next year I will think about protecting these APIs and non-human flows.

Ring around the Rosie. Ashes, Ashes… Identity is falling down

What about the roof? If there is a perimeter and a door, there must be a roof, no? 

 It is important that the roof does not leak because when the storm hits the clouds, we want to anchor down on-premises and drink some hot cocoa. Traditions, Traditions! Like the Fiddler on the roof would say.

Identity is messy, so I packed it

I don’t know about you, but I’m a bit of a hoarder. My identity house is filled with resource rooms. There, I have a bunch of RBAC boxes. My wife says I act like I’m entitled. You put things in boxes and you forget what they are and what they do. Can’t tell what’s valuable anymore and should be in my vault. Not sure I will ever get to cleaning that up and identifying what’s valuable but I will repaint the walls in ABAC, PBAC, or ReBAC next year. Help me choose the color. I will put two layers of PEP this time to get this finer grain. 

Identity is wandering outside, alone in the dark

Now that we’ve explored the perimeter of our little identity house, let’s wander in the forest, aka the Internet (not your AD forest, that one is evaporating to the clouds turning Azure color – thank you Global Warming!). 

Do you see yourself more like Little Red Riding Hood or Hansel & Gretel when you get lost? We were told not to go far from the identity perimeter, but something good is cooking for sure. I can smell it coming from that house over there. Now, going back to the decision you have to make, I will help you: the Wolf probably didn’t use an oven to bake stuff. It is wild out there.

The network perimeter was porous. Solution: The Identity Sponge!

I feel like the sponge is filling itself up and getting really heavy by now. No wonder: I have to clean up all the mess left by these cookies. The good news is that someone out there is constantly tracking my every move (and mood) so there is no need to waste breadcrumbs to find my way out… if only I could escape the web.

Identity is coming to town

Here comes Decentralized Identity or Self-Sovereign Identity (SSI): so, the Identity Kingdom was becoming really big after all these federated trusts, but we didn’t have a good way to verify that this visiting knight from a faraway land really killed that dragon. Thanks to Merlin’s magic, we decentralized trust on the blockchain (yeah everything is made in blocks nowadays – Minecraft is a “thing”) and the crypto bros were able to verify its DID. This knight was indeed knighted, killed the dragon… but failed to save the princess, explaining why he is on the run again. 

Didn’t even need to force the knight out of the armor or send a messenger to the faraway land! The knight just presented claims from his verifiable credentials. Then, we made sure to store as many attributes as we could about him in our Kingdom registry for the next time he is in town (we can’t always rely on Merlin and our core mission is still to protect the crown jewels). I will keep the story of Zero Knowledge Proof (ZKP) for when we get our hands on a witch: I don’t feel like wasting wood or killing a duck.

Tired of this Identity Journey – need some rest

My identity journey has been long enough so I won’t waste time on CIAM. Which social identity provider did I use again? When in doubt I will click on all the options I see, starting with password reset. Love playing Whac-A-Mole. Maybe I’ll add an entry in my password manager or switch to passkeys. My dog loves Fido.

Identity is us – we are its Foundation

Voila! I hope you had a good laugh and you are ready to tell the Identity tale in 2024. Yes, sometimes it feels like Identity security or cybersecurity is a house of cards and this year was definitely full of emotions. We lost friends and made new ones. Things didn’t always go as we wanted in our personal lives, at work, or in the World in general. Thus, we shouldn’t lose track of our North Star, who we are, and our core values; in short: of our Identity. Talking about core values: let’s not forget to be kind to each other and to ourselves. 

The glass is not half empty or half full. Remember that it doesn’t matter anymore after the third glass. Cheers to 2024! 

Yes, there are indeed plenty of challenges in Identity ahead of us but who doesn’t like a good challenge? Also, never forget to question things, learn… and when you have learned, share the knowledge. Identity is our jam, let’s spread it!

I do love the community we are building at IDPro. I’ve enjoyed becoming a member and participating in the discussions on Slack. Tell others! Don’t keep it to yourself. Encourage diversity and bring new faces to our group next year. 

Author: Elie Azérad

Elie is Lead IAM Architect at the Federal Reserve and a member of IDPro. He has 20+ years of experience in IT, working in roles ranging from Software Development, Customer Support, Consulting, and Pre-Sales to Solution Architecture in France and the US. He started his career in Business Rules and switched to IAM in 2010.

Since then, he has been having fun solving Identity problems, big and small… and he calls that work.

In his free time, he bakes, raises his kids, and walks his Golden Retriever.

His opinions are his own.

by Alexandre Babeanu, 3Edges, and Tariq Shaikh, CapitalOne

Background

The true beginning of scientific activity consists rather in describing phenomena and then in proceeding to group, classify and correlate them.

Sigmund Freud

Identity and Access Management (IAM) systems have become critical in ensuring the security of enterprise applications. In the good old days of the on-premise / co-located data center, an enterprise could easily implement perimeter-based security – one where you would build a castle and a moat around your prized assets and then control the ingress & egress points to provide a reasonable security posture. The majority of access was granted to humans. Every human was given the appropriate level of access according to their job role, and everybody lived happily ever after… that is, until a dark cloud of disruption rained on the perimeter-based security parade. We are, of course, referring to the advent of cloud technology.

With a cloud-first approach, enterprises now have a significant portion of their prized enterprise assets and data deployed outside of their traditional data centers. Enterprises are shrinking their on-premise footprint and running workloads in the cloud. Identity, not network, is the new perimeter. One of the interesting aspects of this seismic shift was the rise of Infrastructure as Code (IaC) and, by extension, non-human accounts that manage the infrastructure. It is also not unusual to have cloud systems with thousands (if not tens of thousands) of permissions. This led to a proliferation of roles, and it became clear very quickly that the orthodox job role-based approach to access control needed adjusting.

Another unfortunate side effect of the identity-based perimeter approach was the rise of identity-based threats. A vast majority of breaches can be traced to compromised credentials and over-privileged accounts. It is becoming abundantly clear that an access control methodology that is dynamic and can evaluate access continuously based on risk signals in real-time is the need of the hour and a cornerstone of Zero Trust Architecture. Identity professionals responded to the challenge, and a variety of authorization and access control methods and corresponding ecosystems have developed. This is our attempt to enumerate these access control methods, categorize them, explore relationships between them, and, most importantly, provide guidance on how to choose your authorization system.

How to choose your next authorization system?

As highlighted in the preceding section, organizations need to shift their focus from old/legacy authorization models and systems to new ones capable of coping with today’s problems. This is not easily done when an organization’s whole infrastructure has evolved into its current state over a period of years or even decades… One therefore faces the following question right away:

  • Which authorization model or language should you choose to face these challenges?

We will answer this question by first providing a Taxonomy of modern authorization models and then using it to provide some answers.

What is an Authorization Model?

Authorization systems are made of several complex components: typically, an engine that makes access decisions, along with other systems whose role is to execute the decisions made by the engine or to fetch the data the engine needs to reach its decisions.

Our goal here is not to list all possible architectures of such systems or to describe them but rather to focus solely on the Policy Engine itself, which is at the core of the Policy Decision Point (PDP). Any PDP uses at least one methodology to compute its decisions. We call these methodologies for building PDPs “Authorization Models,” and the following sections describe a taxonomy of such Authorization Models.

What is a Taxonomy?

In simple terms, it is the science of naming and classifying things. To the authors’ knowledge, this hasn’t been done yet for authorization models, even though there is a great deal of confusion throughout the industry about the various ways authorization can be implemented. Each category in a Taxonomy may have subcategories, but it is important to note that the things being classified may belong to several categories at the same time. Objects can therefore be duplicated under several branches of the Taxonomy Tree if it makes sense (for example, consider a taxonomy of fish: salmon would be present under both the “Ocean” and “River” categories…).

A Taxonomy of Authorization Models

The first question when creating a taxonomy is to choose the right categories. This may be a contentious subject, especially in the field of authorization, given the enthusiasm of the Authorization community (the #Authorati) and the fact that there could be many ways to go about it. In the end, we opted for a set of categories that met the two following criteria:

  1. The categories, and the Taxonomy in general, should be helpful to all and not just serve a small community of specialists. In particular, it should help any Identity practitioner in making implementation decisions based on real-world criteria.
  2. It should cover all existing models while avoiding duplicates as much as possible, and be easily expandable to any new, not-yet-invented models.

Figure 1 below depicts our proposed Taxonomy of authorization methodologies.

Figure 1: A Taxonomy of Authorization Models

Level 1 – Centralized vs decentralized control

All models described here involve some kind of rules, even the simplest of them. The first categorization distinguishes models whose rules are owned and maintained by the owners of the Resources being protected – the Discretionary Access Control (DAC) branch – from those whose rules are centralized and administered by specialized administrators in a central location – the Mandatory Access Control (MAC) branch. Here we find our first authorization models:

DAC branch:
  • ACL: Access Control Lists, the oldest of all and the first model introduced through the Multics OS in 1969. Here, a resource owner maintains a list of all the subjects allowed to access any given resource they own, along with the type of access granted (typically read, write, or delete). Popular in operating systems such as Unix or in LDAP Directories.
  • FGA/Zanzibar: Fine-Grained Access Control (FGA) solutions are all inspired by, or implementations of, the Google Zanzibar paper published in 2019. The paper describes Google’s own authorization model used throughout its various tools and offerings. Like ACLs, FGA solutions require resource owners to maintain “tuples” (text strings, essentially) that describe the type of access any subject may have to their resources. Because of the considerable number of tuples potentially required by such a system, they are best suited for DAC applications (which is also Google’s use case).
  • ReBAC: Relationship-Based Access Control (ReBAC) is an approach that uses the paths between subject and resource nodes in a data graph in order to determine access. Access is granted if such paths exist. ReBAC uses native graphs and requires a proper Graph Database store (more on this below). Note that ReBAC can be used for both DAC and MAC applications.

🛑 Note: We make a distinction here between FGA systems and ReBAC: we view these as different models altogether. Although FGA tuples describe a graph, those tuples are not stored in graph databases but are rather strings stored in SQL or custom databases. On the other hand, ReBAC systems use graph databases and express policies as Graphs, not as programming languages. This means that path traversals and tooling are vastly different between those systems. A Graph-based ReBAC policy is therefore an image/diagram and not a block of code, as is the case for FGA systems.
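To make the distinction concrete, here is a minimal, purely illustrative sketch in Python (toy data, invented identifiers, no vendor’s actual API) contrasting an FGA-style tuple check with a ReBAC-style path traversal:

```python
# Illustrative only: toy data structures, not any vendor's API.

# FGA/Zanzibar style: access is recorded as a flat set of "tuples" (strings)
# maintained by resource owners; a check is essentially a lookup
# (real systems add tuple-rewrite rules on top of this).
tuples = {
    "doc:readme#viewer@user:alice",
    "doc:readme#editor@user:bob",
}

def fga_check(resource: str, relation: str, user: str) -> bool:
    return f"{resource}#{relation}@{user}" in tuples

# ReBAC style: access is derived from paths in a graph of users, groups,
# and resources; a check is a path traversal in a graph store.
edges = {
    "user:alice": ["group:engineering"],
    "group:engineering": ["doc:readme#viewer"],
}

def rebac_check(start: str, target: str) -> bool:
    stack, seen = [start], set()
    while stack:
        node = stack.pop()
        if node == target:
            return True
        if node in seen:
            continue
        seen.add(node)
        stack.extend(edges.get(node, []))
    return False

print(fga_check("doc:readme", "viewer", "user:alice"))  # True
print(rebac_check("user:alice", "doc:readme#viewer"))   # True
```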


Level 2 – MAC Models

Centralized access policy models are of two kinds: those that can be context-aware and can implement environmental or other contextual conditions and those that are ignorant of context. The contextual conditions can be based on date and time, locations, or even specific attribute values.

Level 3 – Context-Aware models

At this level, and on the context-aware branch, we find two subcategories. Here, the authorization models can be based on rule sets or instead use relationships between entities in order to compute access. Relationship-based systems are graph systems.

Graph Approach

Graphs can implement two types of context-aware models:

  • Risk and Behaviour-Based Access Control (BeBAC): a model by which subjects’ behavior is tracked as a graph, and baselines established for acceptable behavior. These systems can then apply graph pattern-matching or graph analytics techniques to find outliers and thus compute the risk of any given access request.

Another common technique here is to compute a risk score for a given user/entity (similar to a ‘FICO score’) and calibrate ‘access credit’ based on that risk score, the nature of the request (e.g., privileged vs. non-privileged), and the type of object/resource being accessed. For instance, a user/entity may be denied privileged access to a resource (such as a high-risk PCI database) if their risk score crosses a certain threshold – the equivalent of losing an ‘Excellent’ FICO score – because of recent activity that falls outside the established behavioral norms for that user/entity.
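As a purely illustrative sketch of that scoring idea (the signals, weights, and threshold below are invented for the example, not a real scoring model):

```python
# Invented risk signals, weights, and threshold; illustration only.

def risk_score(signals: dict) -> int:
    score = 0
    if signals.get("new_device"):
        score += 30
    if signals.get("impossible_travel"):
        score += 50
    if signals.get("off_hours"):
        score += 10
    return score

def allow_privileged_access(signals: dict, threshold: int = 40) -> bool:
    # Privileged access (e.g., to a high-risk PCI database) requires the
    # "excellent credit" equivalent: a risk score below the threshold.
    return risk_score(signals) < threshold

print(allow_privileged_access({"new_device": True}))                     # True
print(allow_privileged_access({"new_device": True, "off_hours": True}))  # False
```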

Rule-Based Approach

The non-graph approach is more traditional and, in the case of some vendors, has been available since the beginnings of ABAC and the XACML standard. In this approach, access policies are expressed as a set of programmatic rules, defined either in modern authorization languages or through solutions that provide more business-friendly front ends. The rules combine Subject and Resource attributes with environmental conditions in order to compute a logical decision.
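A minimal sketch of this pattern in Python, with invented attributes and an invented policy (real engines express such rules in languages like XACML, ALFA, Rego, or Cedar rather than in application code):

```python
from datetime import datetime

# Invented attributes and policy, for illustration only.
def can_view_record(subject: dict, resource: dict, environment: dict) -> bool:
    return (
        subject["department"] == resource["owning_department"]  # subject vs. resource attribute
        and subject["clearance"] >= resource["sensitivity"]      # attribute comparison
        and 8 <= environment["time"].hour < 18                   # environmental condition
    )

decision = can_view_record(
    subject={"department": "finance", "clearance": 3},
    resource={"owning_department": "finance", "sensitivity": 2},
    environment={"time": datetime(2023, 11, 1, 10, 0)},
)
print(decision)  # True
```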

At this level, we find:

  • Authorization languages: Any specialized language that can express access policies using attributes and their values. We find some standardized languages (XACML, ALFA) as well as vendor-specific ones. These languages typically let developers implement their own flavor of ABAC/PBAC (see below).
  • ABAC / PBAC: Attribute/Policy-Based Access control systems. These systems implement ABAC without a language per se; they rather rely on tooling and/or GUI widgets to help or guide users during the creation of the policies. Note that the authors believe ABAC and PBAC are synonymous in that all ABAC systems also need to define and manage policies.
  • Risk and Behaviour-Based Access Control (see definition above).
  • Organization-Based Access Control (OrBAC): A model driven by the Subject’s and Resource’s membership in an organization. This can be based on business units within a company or even on different organizations altogether. OrBAC uses dynamic rules and context, as well as a hierarchy of Organization, Role, Activity, and View, in order to determine access to its resources.

Level 3 – Context Agnostic Models

On this side of the tree, the authorization models don’t support the use of any environmental conditions. These are easier models to use and understand, but they are also much more limited. The two sub-branches here refer to the way to group Subjects and the Resources they try to access.

On the Set-based branch, subjects and resources are grouped together by some common factors, such as users sharing the same semantic role, security level or organization. The other side is, again, relationship-based and uses graphs to determine access.

Note that the set-based approaches are all prone to rule “explosions”: over time, the number of sets increases to the point where it eventually becomes very difficult to certify subjects’ access to all resources with any certainty.

Set-Based Models

We find here:

  • Role-Based Access Control (RBAC): in use since its creation in 1992 by NIST researchers, this is still to this day the most popular (by far) Access Control model. Users are placed in roles; each role is granted a set of entitlements over resources.
  • Lattice-Based Access Control (LBAC): Uses the mathematical concept of lattices to define the levels of security a subject may have and may be granted access to. The Subject can thus only access any given Resource if their security level is greater than or equal to that of the protected resource.
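A minimal sketch of the two set-based models above, with invented roles, entitlements, and security levels:

```python
# Invented roles, entitlements, and levels; illustration only.

# RBAC: users -> roles -> entitlements.
role_entitlements = {
    "payroll_clerk": {"payroll:read"},
    "payroll_admin": {"payroll:read", "payroll:write"},
}
user_roles = {"alice": {"payroll_clerk"}}

def rbac_check(user: str, entitlement: str) -> bool:
    return any(entitlement in role_entitlements[role]
               for role in user_roles.get(user, set()))

# LBAC: a subject may access a resource only if its level dominates the resource's.
LEVELS = {"public": 0, "internal": 1, "secret": 2}

def lbac_check(subject_level: str, resource_level: str) -> bool:
    return LEVELS[subject_level] >= LEVELS[resource_level]

print(rbac_check("alice", "payroll:write"))  # False
print(lbac_check("internal", "public"))      # True
```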

Relationship-Based Models

Here we find only ReBAC, which can also be used with centralized control. Generalizing in a graph is easily done by just adding intermediary nodes. Adding extra hops can make ReBAC less fine-grained and, hence, easier to handle and manage.

How to choose?

Figure 2 below represents a decision tree that can be used to help choose the right model. Simply answer some basic questions to follow a path in the tree to a leaf node.

Figure 2: An Authorization Decision Tree

Conclusion

This publication is the authors’ attempt to provide a first cut of a taxonomy model for authorization. Without taxonomy, we’re explorers without a map, scientists without a method. It brings order to chaos and meaning to complexity. As the saying goes, all models are wrong, but some are useful; we hope that readers will find the taxonomy model useful in disambiguating some commonly used terms, putting them in context, and simplifying complexity. We fully expect the taxonomy and the decision tree to evolve over time to meet the needs of the changing technology, threat, and business landscape. The accompanying decision tree can be a very useful tool in the Identity professional’s toolkit to aid in the selection of an authorization model that is appropriate for the business case. So the next time you are wondering which authorization model to select for your application, go ahead and use the taxonomy and the accompanying decision tree to guide your selection.

Authors

Alex Babeanu

 Alex leads the research and development of 3Edges, which created the best and easiest to use Graph platform on the market, specifically built for graph-aware dynamic authorization. His past experience includes building pieces of the Oracle Identity Manager server as a Principal at Oracle, and over 10 years spent as a consultant in the field, architecting many solutions for public and private organizations in all verticals.  Alex holds an MSc in Knowledge Based Systems from the University of Edinburgh, UK, and is an avid Sci-Fi enthusiast.

Tariq Shaikh

Tariq is an Identity Architect, Director & Distinguished Engineer at Capital One. He has 25 years of technology experience and a passion for developing innovative technology solutions to solve cybersecurity problems. Prior to Capital One, Tariq led the Cloud Identity & Access Management (IAM) and Privileged Access Management (PAM) initiatives at CVS Health.  He started his career as a software developer before taking on cybersecurity leadership & advisory roles. He speaks and posts extensively about Identity & Access Management topics.

Well, it’s a wrap on a very successful Internet Identity Workshop (IIW). A few weeks ago, 300+ attendees all gathered on the top floor of the Computer History Museum in Mountain View to exchange views on Identity and Access Management. Folks from far and wide came to speak, contribute, share updates on their innovations, and discuss what the future holds. Now, it’s in the name: this conference is identity-centric, so I expected a lot of chats around identity standards (think OAuth, OpenID, and more). This year, though, in line with trends at Identiverse, authorization managed to snag some of the limelight. Here’s a roundup of authorization-related activities.

Announcing AuthZEN

AuthZEN is the OpenID Foundation’s latest working group and its purpose is to provide standard mechanisms, protocols, and formats to communicate authorization-related information between components within one organization or across organizations.

Several individuals got together at Identiverse to discuss what standardizing authorization could look like and how to achieve what we dubbed the ‘OAuth moment’, a time when adoption was inevitable and led to massive growth for OAuth. The AuthZEN WG, of which I’m a co-chair along with Allan Foster, Gerry Gebel, and Andrew Hughes, has three main goals:

  1. Increase interoperability between existing standards and approaches to authorization
  2. Standardize interoperable communication patterns between major authZ components
  3. Establish design patterns to promote the use of externalized authorization.
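To give a flavor of the second goal, here is a purely illustrative request/response shape (the field names are invented for the example and are not the working group’s specification):

```python
import json

# Invented field names; not the AuthZEN wire format.
access_request = {
    "subject":  {"type": "user", "id": "alice@example.com"},
    "action":   {"name": "can_read"},
    "resource": {"type": "document", "id": "123"},
    "context":  {"time": "2023-11-01T10:00:00Z"},
}

access_response = {
    "decision": True,  # permit / deny
    "reasons": ["matched policy: document-readers"],
}

print(json.dumps(access_request, indent=2))
print(json.dumps(access_response, indent=2))
```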

Our peers, Atul Tulshibagwale (SGNL) and Omri Gazitt (Aserto) gave an excellent presentation on the goals of the AuthZEN WG prior to IIW, during the OpenID Foundation session. Check out the slides here.

Launching “403Con”

Am I even allowed to talk about this initiative? On day 1 of the conference, we all piled into “Space F” to discuss what it would look like to run a conference solely dedicated to authorization. In the vein of “Authenticate” but for everything access control. Truth be told, there are so many new ways to address authorization from NIST’s attribute-based access control (ABAC) to 3Edges’ Graph-based approach, access control lists (Zanzibar-style), and more. We want everyone to come and chime in (no pun intended) so if you can spare a few cycles, join us here or reach out in the #authorization channel on IDPro’s Slack.

Authorization-related Talks

Out of the 163 or so sessions, there were several dedicated to authorization.

  • Darin McAdams of AWS gave an introduction to the Cedar Policy Language, a new open-source approach to attribute-based access control. It sits between Open Policy Agent’s Rego and Axiomatics’ ALFA in terms of expressibility. You can learn more about the language in their playground.
  • Eve Maler (my XML superhero) gave a 101 talk on User Managed Access (UMA) called ‘Get to know this unique “application of OAuth.”’ This is truly fundamental as oftentimes, authorization is seen as enterprise or compliance-driven when in fact it can be user and consent-driven. UMA helps enable consent collection on top of existing OAuth flows. The purpose of the protocol specifications is to “enable a resource owner to control the authorization of data sharing and other protected-resource access made between online services on the owner’s behalf or with the owner’s authorization by an autonomous requesting party.”
  • Omri Gazitt of Aserto gave a 101 introduction to Externalized Authorization, its building blocks, and its evolution over the years.
  • Justin Richer (the notorious author behind Cards Against Identity) gave an introduction to GNAP 101: GNAP (Grant Negotiation and Authorization Protocol) is an in-progress effort to develop a next-generation authorization protocol. It is an identity-centric approach to authorization.
  • Eve Maler hosted an epic battle between Camp “PDP & PEP” vs. Camp “AS/RS” leading to a hilarious smackdown. It’s true that ABAC, XACML, and Externalized Authorization have systematically referred to PEP/PAP/PDP. The session explored how these concepts map back to OAuth’s more familiar AS/RS terminology. Conclusion? There’s definitely room for interoperability and integration.
  • Gerry Gebel and Phil Hunt spoke about the state of Identity Management Policy Interoperability and in particular IDQL (Identity Query Language), a declarative access policy and set of APIs that enables the mapping of a centrally managed policy into the native format of multiple clouds and application platforms.
  • Mark Berg, my colleague at Axiomatics, presented the latest on the Abbreviated Language for Authorization (ALFA), OASIS’s standard for fine-grained authorization. You can read more on ALFA’s Wikipedia page.
  • The Graph Extraordinaire Alex Babeanu spoke about Identity being a… 🥁… Graph Problem.
  • Lastly, Omri Gazitt gave a demo of TOPAZ, an open-source authorization framework that takes the best of Open Policy Agent with features of Zanzibar/ACLs to deliver a new approach to authorization.

There were a couple of other sessions that tie back to authorization such as:

  • Pam Dingle’s Minimum Interoperability Profile for ACR (authentication context). If we can all agree on ACR values, they can become attributes in a dynamic authorization decision-making process. 
  • George Fletcher’s Transaction Tokens Authorization for Multi-workload Environments. Can externalized authorization help solve the over-provisioned token use case?

I’m ever so happy to see the evolution of the IAM landscape and the growing importance of authorization. As a standards advocate, I’m keen to develop more bridges between standards to address our industry-wide challenges. Feel free to join the conversation in the #authorization channel on IDPro’s Slack.

Author

David Brossard

Chief Technology Officer, Axiomatics

In his role as CTO, David drives the technology vision and strategy for Axiomatics based on both identity and access management (IAM) market trends as well as customer feedback. He also leads the company’s strategy for standards and technology integrations in both the IAM and broader cybersecurity industries. David is a founding member of IDPro, a co-author of the OASIS XACML standard, and an expert on standards-based authorization as part of an overall IAM implementation. Most recently, David led the design and development of Salesforce’s identity offering, including customer identity and access management (CIAM) solutions.

The identity community and the world lost an amazing person in October. Vittorio Luigi Bertocci was a kind and brilliant person. He engaged the people around him in ways that helped people new to the field learn about identity and other experts think about challenges in new ways.

When Vittorio first learned about his cancer diagnosis only a few months ago, many individuals reached out to him to let him know how much he influenced their lives. For some, it was the way he explained technology such that they finally ‘got it.’ For others, it was the support he offered that enabled them to step up to a microphone and offer their own opinions.

Vittorio’s personality could fill a room when he chose, or he could listen quietly. Knowing when to step up and when to step back is a rare life skill, and he demonstrated it with style and grace. He has left us with many recordings, social media posts, and, of course, wonderful memories.

IDPro will always be grateful for Vittorio’s participation in virtual meetups, his contribution to the development of the CIDPRO exam, and his lively conversation in our community.

In the days and weeks following Vittorio’s passing, there has been a huge outpouring of tributes, support, and reminiscences from across the industry. One of Vittorio’s colleagues at Auth0 pulled together a collection of Vittorio posts on LinkedIn entitled In Celebration of Vittorio Bertocci. IDPro member Nishant Kaushik posted And Just Like That, He’s Gone on his Talking Identity blog. He also established a fundraiser called In Memory of Vittorio, our Identity Fabio. His colleagues at Okta posted In Celebration of Vittorio Bertocci, which includes links for charitable contributions to two causes important to Vittorio. Just two weeks before we lost Vittorio, Brian Campbell posted his own heartfelt tribute on the Ping Identity blog called Step Up from Lighthearted Joke to RFC Homage, which was deeply moving for Vittorio. Lastly, in recognition of Vittorio’s immense impact on so many of us across the Identity Industry, Ian Glazer and Allan Foster are spearheading the establishment of the Digital Identity Advancement Foundation and the Vittorio Bertocci Award. Keep an eye on the Digital Identity Advancement Foundation website for updates on progress.

Vittorio will be sorely missed, but never forgotten!

by Vipin Jain

Delegation in IAM empowers organizations to distribute authority, responsibilities, and access privileges effectively, enabling efficiency and maintaining a strong security posture. In today’s interconnected world, businesses and organizations heavily rely on digital platforms and systems to streamline operations and increase productivity. However, with the increasing dependence on technology, there comes an inevitable concern for security and privacy. Identity Access Management (IAM) is a crucial aspect of ensuring data security, and an essential feature of IAM is delegation. This article explores the concept of delegation in Identity Access Management and its significance in modern cybersecurity landscapes.

Understanding Identity Access Management (IAM)

IAM is a framework of policies, processes, and technologies that control and manage access to an organization’s digital resources. It governs the interactions between users and digital systems by providing authorized personnel with the right access to the right resources at the right time. IAM solutions aim to ensure confidentiality, integrity, and availability of sensitive data, applications, and systems.

The Importance of Delegation in IAM

Delegation in IAM refers to the process of assigning specific responsibilities and access permissions to certain users or groups. Instead of having a centralized access control model, delegation empowers organizations to distribute administrative tasks and control to various individuals within the organization. This approach is critical for several reasons:

  1. Granularity: Delegation allows organizations to achieve a fine-grained access control model, ensuring that users have access only to the resources necessary for their roles. It reduces the risk of excessive permissions and potential security breaches.
  2. Operational Efficiency: By decentralizing administrative tasks, delegation streamlines processes and minimizes the burden on IT administrators. This enables quicker response times and more agile operations.
  3. Flexibility and Scalability: As organizations grow, the number of users, devices, and resources also increases. Delegation facilitates scalability by enabling a tiered approach to access control, accommodating a growing number of users and their unique access requirements.
  4. Accountability: Delegation fosters accountability as actions taken by delegated administrators are traceable to specific individuals or groups. This accountability helps in auditing and investigating potential security incidents.

Types of Delegation in IAM

Role-Based Delegation: This approach involves creating predefined roles with specific privileges and responsibilities. These roles are then assigned to users or groups based on their job functions. Role-based delegation simplifies the management of access control and ensures consistency across the organization.

Organizational Unit (OU) Delegation: Organizations often divide their user base into logical units, such as departments or teams. OU delegation allows administrators to grant specific permissions to designated units, giving them control over their own resources.

Policy-Based Delegation: In policy-based delegation, administrators can create customized policies that define access permissions for specific resources. This fine-tuned approach is beneficial when handling sensitive data or specific applications.

Time-Limited Delegation: Some IAM solutions offer time-limited delegation, where access permissions are granted for a specified duration. This is useful for temporary workers, contractors, or scenarios where access is required only for a limited time.
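A minimal sketch of how a time-limited (and revocable) delegation record might be checked; the data model is invented for illustration:

```python
from datetime import datetime, timedelta

# Invented data model: each delegation grants a permission to a delegate
# for a bounded window and can also be revoked early.
delegations = [
    {
        "delegate": "contractor-42",
        "permission": "invoices:approve",
        "not_before": datetime(2023, 9, 1),
        "not_after": datetime(2023, 9, 1) + timedelta(days=30),
        "revoked": False,
    }
]

def is_delegated(delegate: str, permission: str, now: datetime) -> bool:
    return any(
        d["delegate"] == delegate
        and d["permission"] == permission
        and not d["revoked"]
        and d["not_before"] <= now <= d["not_after"]
        for d in delegations
    )

print(is_delegated("contractor-42", "invoices:approve", datetime(2023, 9, 15)))   # True
print(is_delegated("contractor-42", "invoices:approve", datetime(2023, 10, 15)))  # False (expired)
```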

Security Considerations

While delegation enhances operational efficiency, it also introduces potential security risks if not implemented carefully. Here are some essential security considerations:

  • Least Privilege: Following the principle of least privilege is paramount when delegating access. Users should be granted only the minimum permissions necessary for their tasks, reducing the attack surface and potential damage in case of a compromise.
  • Monitoring and Auditing: Comprehensive monitoring and auditing of delegated privileges are vital. Regularly reviewing access logs helps detect suspicious activities and ensures accountability.
  • Revocation: Timely revocation of access privileges is crucial, especially when users change roles or leave the organization. Delegated permissions should be revoked promptly to prevent unauthorized access.
  • Dynamic Delegation: Dynamic delegation might be temporary, such as granting temporary access to a resource for a specific task or duration. After the task is completed or the timeframe expires, the permissions are revoked automatically.

Conclusion

Delegation in Identity Access Management is a fundamental concept that empowers organizations to manage access control efficiently while maintaining a strong security posture. By implementing delegation best practices, organizations can strike a balance between providing the necessary access to users and mitigating potential security risks. In the ever-evolving landscape of cybersecurity, delegation plays a vital role in safeguarding sensitive data, ensuring operational efficiency, and fostering accountability.

Author:

IDPro member Vipin Jain works at One Identity as Principal Product Manager for Active Roles, which secures and manages Active Directory and Entra ID (Azure AD) according to the principles of Zero Standing Privileges and Least Privilege Access, with deep granularity through a single pane of glass. He has 15+ years of experience in the Identity and Access Management space and has worked in multiple roles in the past, including Solution Architect, Sales Engineer, and Technical Project Manager, at Simeio and PwC.

In a rapidly evolving digital landscape, personal information is the currency of the online world. Being able to prove your identity is necessary to function in society. Identity Day, a global initiative started in 2018 by ID4Africa and observed on September 16th, casts a spotlight on the pivotal aspects that shape our identities: inclusion, protection, and utility. This day serves as a powerful reminder of the responsibilities and opportunities that come with our digital and physical identities.

Inclusion: Bridging the Gap

At the heart of Identity Day lies a commitment to inclusion. An estimated 850 million people across the globe lack proper proof of identity, with a significant portion residing in Africa. The overarching goal is to ensure that no one is left behind, regardless of their circumstances. Identity Day strives to create awareness and galvanize efforts to provide every individual with the means to prove who they are—an essential prerequisite for accessing fundamental rights and opportunities.

ID4Africa isn’t the only group focused on highlighting the need for Inclusion. Women in Identity’s Code of Conduct program has published some incredibly moving videos about the importance and challenge of establishing an identity when in a minority. 

Protection: Safeguarding Our Digital Selves

Identity theft and privacy breaches are modern-day threats that can have far-reaching consequences. For those fortunate enough to possess identification, Identity Day serves as a call to fortify their identity against theft and unauthorized use. It is a reminder that the digital breadcrumbs we leave behind can be exploited by malicious actors. By prioritizing the security of our identity, we contribute to a safer online environment and protect ourselves from potential harm.

Members of IDPro know all about the issues of safety and security in identity systems. We have a vision that includes ensuring:

  • Digital identities are created, managed, and used professionally and ethically, through secure, privacy-protecting, and reliable practices that produce high-value digital services.
  • The disciplines of digital identity and access management are globally seen as vital and vibrant counterparts to privacy and information security.
  • Practitioners in all phases of their careers have access to continuing education and development materials that help them achieve their goals.

Utility: Harnessing Identity’s Potential

Identity is more than just a means of recognition; it’s a tool that can empower and simplify our lives. On Identity Day, let’s take a step back and think as individuals who exist with our identities firmly established. How does your identity shape your daily experience? Is your identity a key that unlocks opportunities, or does it serve as a barrier? This introspection offers a chance to optimize the utility of your identity. It is time that you ensure your identity aids in realizing your aspirations and streamlining your interactions with the world. It also gives you a baseline as to what we’re trying to build for everyone.

Identity Day’s Call to Action

Identity Day is not just an observance; it’s a call to action. It urges governments, organizations, and individuals to work collectively to establish equitable access to identity, enhance identity protection measures, and harness identity’s potential for the greater good. By embracing the principles of inclusion, protection, and utility, we foster a future where identities are recognized as a cornerstone of human rights and global progress.

As we commemorate Identity Day, let us recognize the power of identity to shape lives, enable connections, and drive change. Let us stand united in the pursuit of a world where every individual’s identity is valued, safeguarded, and harnessed for its full potential. Together, we can empower humanity through identity.

IDPro®, the leading organization for identity management professionals worldwide, is pleased to announce the appointment of Heather Flanagan as its Acting Executive Director. Heather will be stepping into the role with immediate effect, bringing her vast experience and deep understanding of the digital identity landscape to steer the organization in its next chapter.

Heather has been an integral part of the IDPro community for several years as Principal Editor of the Body of Knowledge, showcasing her dedication to advancing the field of identity management and her commitment to fostering collaboration and education among professionals in the sector. She will continue supporting the BoK as part of her new duties. Her prior roles in the industry and her consistent involvement with IDPro have made her a well-respected figure among her peers.

IDPro’s Board Chair, Miki Brotzler, remarked, “We are confident in Heather’s abilities to lead and represent IDPro at this crucial time. Her understanding of the challenges and opportunities in the digital identity domain is unparalleled. We look forward to her insights and direction as we continue to grow and serve our global community.”

Heather expressed her enthusiasm about the new role, stating, “I’m honored to take on the role of Acting Executive Director for IDPro. I get to engage at a different level with the identity and standards community, which is what I love to do most. I am committed to furthering our mission, driving membership engagement, and ensuring IDPro remains at the forefront of the industry.”

During her tenure as Acting Executive Director, Heather plans to focus on expanding IDPro’s reach, strengthening partnerships within the industry, and enhancing the value proposition for its members.

About IDPro: IDPro is a global association dedicated to the professional development and recognition of the identity management industry. Comprising members from various sectors, IDPro serves as a platform for networking, collaboration, and knowledge-sharing among identity professionals.

(This post was previously published on Heather’s blog, and has been updated based on the conversations it inspired.)

I have awesome friends

I have awesome friends who are willing to educate me when I ask questions like, “why is authorization such a big deal right now?” Because it is, you know, a big deal. The pandemic took the slow-but-steady growth of the importance of identity systems and catapulted it into the hearts and minds of governments and corporate board rooms everywhere. And making sure people can work and play remotely means being able to both let them authenticate AND make sure that, when they authenticate, they only have access to those services or actions they are supposed to – that’s what authorization is all about.

The Authentication vs. Authorization Pet Peeve

Let me start by saying that people who assume authentication and authorization are the same thing drive me crazy. Crazier. Whatever. In some cases, sure, considering these things as the same may be functionally true. If someone can jump through all the hoops to log into a system, then by default, they can access All The Things. This is a fairly common pattern in either extremely low-value transactions (what they are accessing doesn’t need particularly rigorous security) or, oddly enough, in extremely high-value transactions where the barrier to authentication is complicated. (I am definitely not saying that the latter one is a good idea, just that it’s unfortunately common.)

But in *all* cases, logically, they are two separate actions. Can the person authenticate; yes or no? If yes, do they have permission to do or see things; yes, no, or maybe? Yes, “maybe” is an option, but more on that in a bit.
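One way to picture the separation is as two distinct functions, a toy sketch with invented credentials and rules (the “maybe” outcome below simply stands in for decisions that need more context, such as a step-up check):

```python
from enum import Enum

class Decision(Enum):
    YES = "yes"
    NO = "no"
    MAYBE = "maybe"  # e.g., needs step-up authentication or more context

def authenticate(credentials: dict) -> bool:
    # Step 1: who are you, and can you prove it?
    return credentials.get("password") == "correct horse battery staple"

def authorize(user: str, action: str, resource: str) -> Decision:
    # Step 2: now that we know who you are, may you do this?
    if action == "read" and resource == "public-wiki":
        return Decision.YES
    if action == "transfer" and resource == "payroll":
        return Decision.MAYBE  # allowed only with extra assurance
    return Decision.NO

if authenticate({"password": "correct horse battery staple"}):
    print(authorize("alice", "transfer", "payroll"))  # Decision.MAYBE
```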

Why is Authorization the Next Big Thing?

Authorization is not a new concept. It’s like the enforcement of the age-old adage of “just because you can do something, doesn’t mean you should.” So, let’s make sure to remove that little temptation and make sure you can’t. But what makes it such a big deal now? What makes it come up as a call to action at conferences like Identiverse? Why are analysts like Martin Kuppinger writing about it? What makes so many vendors in the identity space shout to the rooftops about their way of supporting authorization?

Great question. I wish I knew the answer. Some people tell me it’s because authentication is now a solved problem thanks to the existence of WebAuthn. Other people tell me it’s because vendors need a “next big thing” to sell their products. Maybe it’s the growth of Zero Trust Architecture, which takes the whole concept of authorization and makes EVERYTHING an authorization decision. 

Beyond the gossip, I see organizations in every sector, from finance to education to commerce and more trying to figure out how to balance existing in an Internet-driven world with protecting everything from personal data to intellectual property. As it turns out, that balance is very hard to get right. The use case of government services offered online is very, very different from the use case of enterprises managing remote access for their employees.

Looking for the One True Way of Authorization

Alas, with every use case requiring its own balance of costs and risks, there is no One True Way to handle authorization. There’s a great article, “Introduction to Access Control,” in the  IDPro Body of Knowledge by André Koot that introduces several of the popular forms of authorization models, including Access Control Lists (ACLs), Role-Based Access Control (RBAC), Attribute-Based Access Control (ABAC), and more. But determining which of those models is right for you and your organization is tricky!

There are a few questions that always apply when you start thinking about what’s right for your organization:

  • Are you in a regulated environment, and/or are you maintaining (or looking to gain) a certification with an IT component? This will provide you with some clear requirements to get you started.
  • How many systems need to collect their authorization from a central service? This will give you a sense of complexity and scale.
  • Do you have a data governance process that ensures the data used in authorization decisions is correct? What about a policy governance process that regularly checks the rules that will be applied to authorization decisions? This will give you an understanding of how successful ongoing support for your authorization framework is going to be.

But Wait, There’s More!

Right now, authorization tends to happen in silos–different systems don’t know how to talk to each other, and different sectors have different requirements. Industry leaders (and vendors trying to sell things) are very much hoping for convergence in the space to make this less complicated, but we’re not there yet. Authorization in practice seems to largely mean “let’s make some cool graph API calls so we can query lots of systems at once!”

So, for people trying to figure out what’s “right,” I’m sorry. There are no answers for you. For people trying to figure out where to focus to keep up with all the conversations, I hope you’re committed to some work. You’re going to have to do a lot of reading on LinkedIn, follow several analysts, look at press releases from a few of the major vendors, and attend some of the identity industry conferences.

Good luck! I’ll be reading along with you as this space evolves!

Author

Heather Flanagan, Principal at Spherical Cow Consulting and Founder of The Writer’s Comfort Zone, comes from a position that the Internet is led by people, powered by words, and inspired by technology. She has been involved in leadership roles with some of the most technical, volunteer-driven organizations on the Internet, including IDPro as Principal Editor, the IETF, the IAB, and the IRTF as RFC Series Editor, ICANN as Technical Writer, and REFEDS as Coordinator, just to name a few. If there is work going on to develop new Internet standards or discussions around the future of digital identity, she is interested in engaging in that work. You can learn more about her on LinkedIn or reach out to her on the IDPro Slack channel.

For the first time ever, Identiverse headed to Vegas for its annual conference. It was a hit, as always, and judging by the agenda, some of the hot topics were passwordless authentication, AI, and last but definitely not least, authorization. My eyes were gleaming! We’re making authorization great again!

Much Ado about Authorization

I was delighted to see so much activity around authorization in the standards track, the vendor track, and the keynotes. On the floor, we had a slew of newer vendor booths tackling the authorization challenge, from Aserto to Indykite. All sources of inspiration. There was no shortage of authorization-related talks either.

As You Like It

One of the main challenges with ‘authorization’ is that it is an overloaded term, and many struggle to define its boundaries and key attributes. Gartner called it externalized authorization managers; KuppingerCole established Dynamic Authorization. To avoid any confusion, I’ll go with the industry-wide, NIST-approved term of ABAC (attribute-based access control), although I will concede the term focuses on only one aspect of authorization. Beyond the what, it’s fair to say there are three main approaches percolating to the top:

  • The ‘traditional’, policy-driven approach: this is the one advocated by OASIS XACML going all the way back to 2001. It’s also the very same pattern taken by Open Policy Agent and its language, Rego, as well as AWS’s Verified Permissions and Cedar, and last but not least Axiomatics’ ALFA.
  • The graph-based approach: this is the route taken by NIST’s very own NGAC and startups such as 3Edges and Indykite. The graph becomes the policy and the relationships become the conditions.
  • The entitlements-based approach: this is the route taken by followers of Google’s Zanzibar such as Authzed and Auth0’s OpenFGA.

No matter the approach, these models all share the following in common:

  1. They use attributes (of the user, the resource, the context) to drive authorization
  2. They decouple authorization logic from the application/API/service
  3. They allow for reuse and centralization
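A minimal sketch of the second point, decoupling: the application asks a decision point and only enforces the answer (the PDP here is an in-process stub; in a real deployment it would be a separate service or sidecar):

```python
# Illustration of decoupling; names and policy are invented.

class PolicyDecisionPoint:
    """Stub PDP; in practice this would be a remote service or sidecar."""
    def decide(self, subject: str, action: str, resource: str) -> bool:
        # Policies live (and change) here, not in the application.
        return subject == "alice" and action == "view" and resource.startswith("report:")

pdp = PolicyDecisionPoint()

def view_report(user: str, report_id: str) -> str:
    # The application acts as the enforcement point (PEP): ask, then enforce.
    if not pdp.decide(user, "view", f"report:{report_id}"):
        raise PermissionError("not authorized")
    return f"contents of report {report_id}"

print(view_report("alice", "q3-sales"))
```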

In addition to these approaches, there is a new standard out there, IDQL, that focuses on the overall governance of authorization. It will be interesting to see how IDQL enables OPA, ALFA, and others.

To Centralize or not to Centralize…

As far as I can remember, authorization vendors and standards have always touted the ability to centralize authorization. But what does centralization mean exactly and what’s in it for the customer? Referring back to the ABAC architecture, we can centralize:

  • Decision making: one central logical policy decision point
  • Enforcement: one global enforcement layer (think web application firewall)
  • Policy authoring: one PAP or policy administration point for the entire enterprise

The reality is a bit more subtle. Firstly, decision-making (the PDP) doesn’t need to be centralized. It could contain a subset of policies, it could be tailored to a specific application, and it could be deployed as a sidecar to the target app or even embedded. Enforcement can happen in multiple places simultaneously. The emergence of CAEP and shared signals is also opening a new avenue for proactive, just-in-time decision-making and enforcement. I, for one, would love to see more interoperability between authorization and these other standards.

Lastly, policy authoring will only scale in the enterprise if we all chip in. Granted, there will be enterprise-wide, potentially compliance-driven, policies that are authored by an IAM team. But most of the policies will be authored by application owners. Rather than providing a centralized authorization solution, we must provide a centralized authorization framework, a set of tooling app developers can subscribe to and utilize to implement their own authorization at their own pace.

… That is the Question

The question! Authorization is all about asking. Forget the old mantra “ask for forgiveness, not permission”. Authorization is all about making sure you’re allowed to do what you’re about to do. A few of us, with my friend and former colleague Gerry Gebel at the helm, got together in what I now call the “Wednesday meeting”. No, the Addams family wasn’t there. We collectively decided to drive a fresh standardization initiative around authorization starting with (a) a standard request/response scheme and (b) policy distribution. With regards to the request/response scheme, it’s all about making sure we’re asking the right question the same way. And when it comes to authorization, there are essentially two types of questions:

  • The direct “binary” approach: can Alice view document #123? This is extremely well-defined in XACML, ALFA, Cedar, and OPA. But all have their subtle differences. It’s time we standardized them all. My peer at SGNL, Atul Tulshibagwale, suggested we should take this initiative and go to COTS/SaaS vendors to encourage them to adopt those interfaces much like they’ve adopted SAML and OAuth/OpenID Connect for authentication.
  • The indirect, open-ended approach: OPA calls it partial evaluation; Axiomatics refers to it as reverse queries. It boils down to asking open questions such as “What can Alice do?”. The question can be broad (“What can happen?”) or specific (“What can Alice do to document 123 on a Wednesday afternoon from conference room Juniper 1?”). Again, it’s time we standardized this interface to promote interoperability and reuse.
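To illustrate the two question styles over the same (invented) permission data: the binary check answers one tuple, while the open-ended query enumerates everything a subject may do:

```python
# Invented permission data; illustration of the two query styles only.
permissions = {
    ("alice", "view", "document:123"),
    ("alice", "edit", "document:123"),
    ("alice", "view", "document:456"),
    ("bob",   "view", "document:123"),
}

def check(subject: str, action: str, resource: str) -> bool:
    """Binary question: can Alice view document #123?"""
    return (subject, action, resource) in permissions

def what_can(subject: str) -> set:
    """Open-ended question: what can Alice do, and to what?"""
    return {(action, resource) for (s, action, resource) in permissions if s == subject}

print(check("alice", "view", "document:123"))  # True
print(sorted(what_can("alice")))
# [('edit', 'document:123'), ('view', 'document:123'), ('view', 'document:456')]
```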

If you want to jump on the authorization bandwagon (cue: DALL-E, please generate a picture of XACML, ALFA, Cedar, ReBAC, and OPA riding on a wagon), feel free to join us at the policy charter group under the umbrella of the OpenID Foundation.

The Tempest

I would be remiss if I didn’t mention the storm that’s brewing outside. That’s of course AI. A lot of the conversations at Identiverse were around the impact of AI on IAM and in particular on trust (see Andre Durand’s stellar opening keynote for a chilling example).

With regards to authorization, Alex Simons from Microsoft delivered a compelling presentation on Open Standards for the Intelligent Trust Fabric. His closing point, “Authorization Policy”, highlighted the fact that ABAC is here to stay and that there are enough policy standards (Rego, ALFA, and Cedar) not to create another. He linked AI with Authorization, suggesting AI could play the role of glue or epoxy, tying or translating languages to one another. Perhaps AI could also help mine existing configurations to identify policies that should be defined and enforced. Perhaps AI could help translate plain old English requirements into the language of your choice.

Macbeth

On the last day, authorization was once again the highlight in the closing panel where Andi Hindle, our host, Allan Foster, the chair of the Policy Charter, Galina Livit, Identity and Access Management Tech specialist at Ford Motor Company, and myself exchanged views on the importance of ABAC and what to expect in the next 12 months. I even had the opportunity to murmur both SAML and XACML much to Allan’s awe/dismay. I’ll let the reader be the judge. The recording is now available here.

I firmly believe there’s a bright future for authorization and many opportunities to grow the field – not just through the different approaches but also through interop with other standards such as CAEP as previously mentioned, but also OAuth, Rich Authorization Requests, and OpenID Connect. It’s safe to say I’m authorized to say the future’s bright. My soul is lost forever to authorization!

Author

David Brossard

Chief Technology Officer, Axiomatics

In his role as CTO, David drives the technology vision and strategy for Axiomatics based on both identity and access management (IAM) market trends as well as customer feedback. He also leads the company’s strategy for standards and technology integrations in both the IAM and broader cybersecurity industries. David is a founding member of IDPro, a co-author of the OASIS XACML standard, and an expert on standards-based authorization as part of an overall IAM implementation. Most recently, David led the design and development of Salesforce’s identity offering, including customer identity and access management (CIAM) solutions.

We are pleased to announce that the results of the 2023 IDPro Skills, Programs & Diversity Survey are now available!

This is the sixth annual installment of the digital identity industry survey conducted by IDPro and made possible thanks to our members and participants. IDPro’s annual surveys are a deep dive into the identity and access management industry. We couldn’t create this without you.

Attracting nearly 30% more respondents than last year, the 2023 survey is set against a backdrop of economic uncertainty, the ongoing after-effects of the COVID-19 pandemic, and significant geo-political challenges. The role that digital identity technologies, processes, and programs—and associated laws and regulations—play continues to grow as our reliance on the Internet deepens.  Our digital world simply cannot operate without digital identity: digital identity is critical infrastructure for the internet. 

Survey Highlights

Within this context, through the 2023 skills survey, IDPro has established the following:

  • The industry is growing rapidly — reflected both in practitioner demographics and in a reported need for continuing professional development, particularly amongst the longest-serving practitioners—with implications for individual career development and for enterprise talent acquisition and management programs.
  • Enterprises have yet to fully recognize the value of their digital identity programs in enabling customers (both internal and external), versus managing and protecting systems, evidenced by an ongoing focus on technologies like (multi-factor) authentication and authorization, coupled with a relative lack of interest in enabling technologies like CIAM and personal identity technologies such as self-sovereign identity (“SSI”), verifiable credentials (“VCs”), and digital wallets.
  • Enterprises will increasingly turn their attention to authorization deployments, and practitioners should be well-positioned to support these initiatives, unless technological approaches evolve significantly (which they may).
  • In common with many other sectors, Artificial Intelligence and Machine Learning technologies are likely to have a significant impact on the digital identity industry and digital identity practitioners. Risks and opportunities will emerge, but it is still too soon to draw any firm conclusions about specifics.
  • There is less practitioner interest in standards development and an increased focus on combining existing standards and technologies to solve business problems. To some extent this reflects the greater reach of the survey, but it may also indicate a maturing phase within the industry: with a few notable exceptions, many of the technical problems are seen as ‘solved’, if not yet fully realized. It is unlikely, however, that this will remain the case.

There is so much more to learn from the survey, and you are welcome to download a copy here.

In October of 2022, the OpenID Foundation contacted me about helping develop a research paper on government-issued digital credentials through the lens of privacy considerations. The scope meant diving into a review of different governments, different legislation, and different technologies, as well as considering how they all come together to form what we have in the credential space today.

The scope was both too broad and not broad enough. Digital credentials are an incredibly hot topic around the world, and privacy legislation is just as hot; technical standards development is trying to keep pace with varying degrees of success. Ultimately, the paper, published in May 2023, serves as a review of what’s happening worldwide but with room for more detail on every level.

The Current Landscape of Policy and Technology

The first third of the paper looks at the more influential privacy regulations and standards and how they are being used in various government systems. We selected the governments described in this paper to represent various characteristics:

  • the one focused on international interoperability (eIDAS2.0 in the European Union)
  • the largest deployment (the Aadhaar system in India)
  • one within the EU that demonstrates interesting challenges with regard to its demographics (SPID in Italy)
  • the largest deployment in Africa (eID in Nigeria)
  • the most ubiquitous (SingPass in Singapore)
  • examples of various U.S. State implementations (Maryland, Arizona, and Utah)

This section also includes a review of the technical standards and frameworks commonly used in the various government implementations, including the highlights of OIDC, SAML, Verifiable Credentials, Biometric guidance, Identity Assurance, and the Open Standard Identity APIs (OSIA).

Gaps and Risks

Reviewing the landscape is useful, but considering what gaps remain open between what governments are trying to achieve, what privacy standards and regulations offer, and what technology can do is potentially far more useful. The second third of the paper focuses on those gaps. Different motivations from one region to the next significantly impact deployments: Cultural and economic realities make achieving global interoperability a huge challenge.

For people looking for an area they can focus on to help make a difference in the identity and privacy landscape, this section on gaps in technology, regulation, and standards might offer some food for thought.

Recommendations for Scaling to the Future

The final third of the paper offers recommendations. Given the overview of the landscape and the existing gaps, suggestions on what to do next are possibly the most interesting part of the paper. The recommendations are divided into four sections: that governments and organizations make sure they have the basics of security and privacy built into their services, that they consider ongoing concerns, such as surveillance, as they design their systems, that we all consider emerging concerns around digital warfare and AI, and that civil society step up to help bridge the gap between government legislators and technologists.

The Story Continues

In the weeks since the paper’s publication, several organizations and technologists have asked that more material be added. Later this year, the OIDF and partner organizations will likely publish a v1.1 that includes additional government identity systems, technical standards, and possible additions to the recommendations. The paper is freely available on the OpenID Foundation website, and I hope it spurs conversations around the world with policymakers, technologists, and civil society. Stay tuned for that v1.1!

Author


Heather Flanagan, Principal at Spherical Cow Consulting and Founder of The Writer’s Comfort Zone, comes from a position that the Internet is led by people, powered by words, and inspired by technology. She has been involved in leadership roles with some of the most technical, volunteer-driven organizations on the Internet, including IDPro as Principal Editor, the IETF, the IAB, and the IRTF as RFC Series Editor, ICANN as Technical Writer, and REFEDS as Coordinator, just to name a few. If there is work going on to develop new Internet standards or discussions around the future of digital identity, she is interested in engaging in that work. You can learn more about her on LinkedIn or reach out to her on the IDPro Slack channel.

By Mike Kiser

We live in an age that values authenticity: being true to who you are and what you value. It is ironic, then, that one of the more recent innovations of the past few years—Large Language Models (LLM), or generative AI—is in the process of undermining authenticity itself.

Human Authorship, Technology Editorship

Hold on just a minute or two, you may be saying to yourself: “We’ve long used innovation as an assistive tool!” This is certainly true; we’ve grown accustomed to using technology to assist with a variegated selection of activities that no one thinks twice about. As we perform these activities, though, the technology is assisting the human.

Take assistive writing features of modern word processing products: the human produces the content, and the program corrects spelling and ensures that the sentences are well-formed and comply with known grammars. In this process, the technology performs the editing function: copy editing and some level of line editing. To put it succinctly, the human is the author, and the technology is the editor.

The recent wave of generative AI reverses those roles: the human prompts the system to generate content and then edits it to fit purpose. The AI becomes the author, and the human is relegated to the role of editor.

This exchange of roles leads to all sorts of ethical issues around authenticity: humans are tempted to continue to claim authorship of the generated material, taking credit for work that is not their own.

Inauthenticity in Academia

Academia is largely concerned with truth—what is authentic. Generative AI is undermining that by providing students with questionable content. Since LLMs create text by predicting what is likely to come next, they can produce plausible-sounding sources and data that are the opposite of truth. When prompted for sourcing, generative AI has produced plausible citations out of thin air. Universities and other school systems will need to further educate students about the dangers of assuming that if a document reads well and appears substantiated, it must be true—they must safeguard what is authentic, what is true.

In addition, generative AI creates another challenge for authenticity: the ability to generate well-written papers by outsourcing the task to AI is now prevalent in higher education settings. Multiple universities have forced students to go back to in-person written or oral exams to ensure that the knowledge and written materials are, indeed, generated by the student rather than an online tool.

Historically, identifying this kind of synthetic generation of material was fairly straightforward, with online tools to detect plagiarism and paper reuse. With the rise of generative AI, this check has become much more difficult. OpenAI and others have been working on watermarking generated content, but it is a work in progress (with adversarial approaches to thwart any safeguards already surfacing). One professor attempted to use ChatGPT itself to identify which students in his class might have used it, resulting in the entire class being accused of cheating (inaccurately, of course).

Inauthenticity in the Creation of Art and Writing

If academia presents a clear delineation of right and wrong in exams and classwork, the world outside of academic institutions presents an ethical quagmire when it comes to generated content.

For example, ghostwriting (by humans) has been a practice for centuries. At times, it’s an open secret that someone’s book or other content is actually written by a third party (and even at times they receive attribution); in other instances, the lines of authorship are not nearly as clear. Ghostwriting, though, is governed by contract law: the true author has intellectual property rights over the content that they create, which they then sell to the public author of the piece.

With the use of generative AI, that contractual line may become blurred: who owns the rights to the generated content? Is it the AI system? Or the “prompter” of the system? Or might it be the creator of the content that the AI system drew from? (See the ongoing Getty Images lawsuit as an example of this.)

There are no easy solutions to these situations; the level of content, the forum for which it was produced, the perceived benefit of claiming authorship, and a host of other factors speak to the ethics of claiming authorship in these cases. A New York Times bestselling book is a different prospect than creating content for a low-slung personal blog, but the ethical concerns remain the same.

The core truth is that with generative AI, the AI system steps forward as the author of the artwork or the written creation, while the human user steps back into the role of editor.

Safeguarding Authenticity

Flipping these roles of author and editor erodes the authenticity of any created material and presents a series of ethical questions that must be asked. Going forward, how can we ensure that knowledge demonstrated from created material actually originates with a human and not LLMs and generative AI? How can we trust or prove that someone was the creator of a particular document or piece of art? How can we—or should we—credit the AI systems we use with the proper attribution of role?

Ultimately, how we guard against the temptation to claim authorship of these creations—when we are merely their editors—will either continue to erode authenticity and truth in our society or safeguard it for future generations.

Author

Mike Kiser

Director of Strategy and Standards, SailPoint

Mike Kiser is insecure. He has been this way since birth, despite holding a panoply of industry positions over the past 20 years—from the Office of the CTO to Security Strategist to Security Analyst to Security Architect—that might imply otherwise. In spite of this, he has designed, directed, and advised on large-scale security deployments for a global clientele. He is currently in a long-term relationship with fine haberdashery, is a chronic chronoptimist (look it up), and delights in needlessly convoluted verbiage. Mike speaks regularly at events such as the European Identity Conference and the RSA Conference, is a member of several standards groups, and has presented identity-related research at Black Hat and Def Con. He is currently the Director of Strategy and Standards at SailPoint Technologies and an active IDPro member.


Hard to believe that in just over a month, we’ll be gathering in Las Vegas for Identiverse. IDPro’s home event takes place at the ARIA Resort & Casino May 30 through June 2. As ever, the content committee had a challenging time with this year’s proposals. There were so many excellent submissions for too few openings. We were only able to accept just over 100 proposals out of 375. It’s a good problem to have because it means the quality of the sessions is shaping up to make this the best Identiverse yet.

Together with fellow IDPro member Lorrayne Auld (recently retired from MITRE), I’m excited to be co-leading the Deployments and Leading Practices (D&LP) topic once again. In this blog, we’ll take a look at some of the upcoming highlights for our track.

D&LP Track Highlights

Come to the Identiverse D&LP sessions to learn how IAM practitioners are dealing with identity at scale. These sessions frequently involve large, global enterprises facing challenging implementations with lots of complexity. We’ll have sessions about both workforce and customer identity. Our speakers come to us from multiple countries, multiple industries, and varied backgrounds. They represent higher education, manufacturing, retail, consulting firms, finance, and various identity-related organizations.

Identiverse’s theme this year is “Identity Everywhere,” which offers plenty of latitude for our topics. You’ll hear from companies like General Motors, Otis Worldwide, and McDonald’s with real-world stories of challenges they’ve solved. You’ll also be hearing from experts hailing from Wavestone, Amazon Web Services, UberEther, and ProofID.

This year, D&LP has two panels:

  • 2023 Trends in Securing Digital Identities featuring Jeff Reich, Diane Hagglund, Tom Sheffield, Rajnish Bhatia, and Bernard Diwakar
  • Passkey Early Adopters Fireside Chat moderated by Andrew Shikiar from the FIDO Alliance

Thirteen Great D&LP Sessions

Let’s review some of the sessions:

  • Learn how Ebony Love and her team at McDonald’s accelerated crew onboarding
  • Explore Andrew Cameron’s Zero Trust Architecture for B2C Identity at GM
  • Discover the challenges Alyson Ruff and her team at Otis Worldwide encountered when implementing their passwordless program
  • There will be two more sessions on passwordless journeys shared by Michal Kepkowski, Maciej Machulak, Nathan Macrides, and Chintan Jain
  • We have three sessions on passkeys (the new hotness) from Huan Liu (Block Inc.), Dean Saxe (AWS), and Rolf Lindemann (Nok Nok Labs)
  • Join Bertrand Carlier from Wavestone for what is bound to be an entertaining and informative session on mission-critical SSO for millions of users
  • Did you ever wish you could bring continual dynamic authorization to COTS applications? Join Paul Heaney from ProofID to find out how!
  • We’ll hear from Matt Connors, CISO at Southern New Hampshire University, and Robert Block from Strivacity on customer IAM lessons learned from their students
  • Sarah Villarmarzo from Easy Dynamics will help us learn to trust the process when it comes to Zero Trust
  • Tabitha Hancock from UberEther will walk us through the not-so-obvious parts of application onboarding
  • Ken Robertson from Fifth Third Bank will help you jumpstart your privileged account management program
  • Integrating custom applications to MFA can be a major challenge. Sandeep Talwar from Accenture Federal will share the story of their approach
  • Jim Routh of Jimmer Advisory Services LLC will discuss the road to continuous authentication

Content review for these sessions is already underway, and the early drafts are looking good. I can’t wait to see the finished product, and I hope to see you at Identiverse in Las Vegas! If you haven’t already registered to attend, what are you waiting for?

Author

Greg Smith

Chair, IDPro Editorial

Radiant Logic

Greg Smith is a Senior Solution Consultant with Radiant Logic, where he serves as a trusted advisor for new and existing customers. He has been implementing Identity & Access Management solutions for over 35 years. He holds BSEG and MSBA degrees from Bucknell University, where he also began his professional career before moving into the pharmaceutical industry in 1996. Following a 25-year career there, he retired in November 2021 from Johnson & Johnson, where he led the engineering team for J&J’s single sign-on, risk-based authentication, multi-factor authentication, access governance, directory synchronization and virtualization, provisioning automation, and PKI services. He has spoken at Identiverse® and other industry events on numerous occasions. He was CIDPRO™ certified in October 2021 and is also a founding member of IDPro, where he currently chairs the editorial committee.

As many do, I sort of fell into Identity. I worked as a Product Manager in cloud platform API services. When I got a new job, it included identity as one of the services in the platform. It wasn’t long before I was fully immersed in the seemingly monumental task of understanding digital identity, how it fits into a larger product, and the many security implications of such a foundational element. After all, identity is the entry point into any software product.

Three identity-related jobs later, I now consider myself quite nicely “niched” in identity. I (finally) understand most of what I’m talking about (it only took a couple of years!) and I’m excited about where identity is headed. One unexpected side effect of my equally unexpected foray into identity: the emergence of privacy as a concern…a concern enough that I felt compelled to put myself out there on social media and share.

Privacy Breadcrumbs

I’ve always been a bit hesitant to share too much of myself online. I’ve been known to put on the detective outfit, get my magnifying glass, and go down the rabbit hole of high school classmates on a Friday night, and I didn’t want anyone returning the favor. I also had a strong sense of wanting to protect my child from a life documented online at a time when a child’s first presence online is a line on a pregnancy test or an ultrasound. Of course, I knew that my identity was not as simple as a username and password. My digital identity was the entire footprint I was leaving behind online. With every photo uploaded, every search query executed, every app downloaded on my phone, I was leaving a lot of information behind, and I had no control over what that information was or how it was used. 

Completely opting out of online life wasn’t an option for me, but I also felt that there was a great imbalance between the personal data we constantly give away in exchange for using a service and the reward it has for large tech companies. The fact that this tradeoff wasn’t transparent to me–and I worked in digital identity, for goodness’ sake–bothered me. How can I expect my friends who know nothing about tech to understand what is really happening with their data when I don’t know what is really happening with our data? 

The Privacy Learning Curve

One of my most engaged-with posts. I guess we’ve all had this experience!

It was time for me to take a deep dive into the world of data protection and transparency in technology. 

  • Why should I care if someone hacks into my account, as long as they didn’t buy anything?
  • Does Alexa really listen to us? 
  • When you download an app, what is all of that data being transferred in the background without your knowledge? Is there any way of knowing for sure? 
  • Why should the average person care? After all, we’re not that interesting and we have nothing to hide, right? Alexa can have my grocery list. She can listen to the dinner table conversations about daycare and taxes. 

These are some of the questions and concerns (or lack thereof) that I’ve heard over the past year or so when I’ve talked about my privacy and data protection work. It’s been really interesting, and honestly, somewhat frustrating, to try to confront these huge questions about technology and how it is so tightly integrated into our lives. I’ve covered everything from dark patterns with cookie consent to using password managers to covering your laptop camera with a camera cover. In fact, one of my most viewed reels is identity related – I did a reel of myself authenticating with a YubiKey.

Practical Privacy Education for All

A sample of Hannah’s Instagram content

I aim to bring practical privacy education to the masses. Most people are not going to buy a Raspberry Pi and set up a DNS block. Heck, most people are not even going to bother switching off precise location tracking to apps on their iPhones. I try to give people tangible things that they can do to protect their data while still acknowledging the very real fact that most of us live at least a portion of our lives online in the media we consume and in the ways we stay connected with people. My audience is not us ID Pros, although I hope you find it interesting. My audience is our friends and families who have no idea what we do for a living and just know it’s “computer stuff”.

Advocacy

I’ve found my privacy and digital footprint education attempts to be rewarding. I’ve had people tell me that they had no clue what was going on with their personal data. But now that they’ve learned, they’ve been more careful about what they share online. It’s been uncomfortable for me on some level. I have never enjoyed sharing myself online, and now I am sharing videos of myself. I hate hearing my voice on the podcasts I’ve been on. It’s a human thing, I think; none of us like our own voices. I’ve pushed through the discomfort because I believe that sharing this information and advocating for more transparency in technology is not something that everyone is equipped to do. Thanks to my background in digital identity, I believe I have the experience and knowledge to know just how important our identity is, including all of the breadcrumbs we leave behind as we traverse this crazy thing called the internet. And the least I can do is leave behind something useful.

About the Author

Hannah Sutor is passionate about all things digital identity and privacy. She currently works as a Senior Product Manager at GitLab, focusing on authentication and authorization in a DevSecOps context.

Hannah has spoken at various conferences on digital identity, privacy, cybersecurity, and DevOps workflows. She is also a content creator, writing articles and creating engaging, easy-to-digest content on these topics for those without a technical background.

She lives outside of Denver, Colorado, USA, and enjoys bad reality TV just as much as she enjoys a walk in the woods.

You can find her educational posts on Instagram.

Let’s talk about passwordless, but less about the how and more about the why of passwordless. The drive toward passwordless authentication flows across all sorts of technical and user landscapes is gaining momentum. A cursory search of the internet for “why passwordless” yields an emerging consensus on why passwordless technologies occupy so much mindshare for security and workplace technology professionals. There are two major benefits to this push. The first centers on the improvements users experience by being liberated from the password. The second comes from the improved security posture that results from eliminating passwords as an attack surface. These two benefits are not necessarily at odds with each other. However, we can argue that they do not completely explain why the industry wishes to raise the bar on authentication technologies.

What’s Wrong with Passwords?

Let’s review why the industry picks on the poor password. First, passwords are reusable. Though best practice is to use a password manager and store a unique, complex password for each website and service where we have an account, this often fails in practice. Second, even if we count ourselves among those rare “diligent flossers” of password hygiene, passwords remain phishable. Phishing is when an attacker uses social engineering to get the user to share a secret, and it includes more than just fake websites or password reset links; person-in-the-middle attacks are a form of phishing, too. Because passwords are static shared secrets, they are also exposed to brute force, credential stuffing, and replay attacks. Since most people reuse passwords, a security breach at one vendor or a successful phish at one website can quickly spread to others. Finally, and partially for the reasons outlined above, passwords are expensive to maintain. There is a time cost borne by consumers to manage their passwords well, and even then, phishing can make that effort moot. Organizations lose significant workforce productivity to password issues and support at the help desk. Wouldn’t it be better to be passwordless?

Passwordless Tech and Phishing

Going passwordless solves everything wrong with modern authentication, right? Well, it’s more nuanced than that. The password is a phishable authentication technology. Its history and ubiquity make it the obvious weak link amongst our available authenticators. We get so hung up on rooting out passwords and perfecting the passwordless experience that we can lose sight of the actual principle we are pursuing by trying to remove them: phishing resistance. Phishing-resistant technologies are not a replacement for multifactor authentication. Rather, they are an additional layer of security that compounds and reinforces baseline multifactor authentication to inoculate the authentication flow against phishing attacks. This is done with mechanisms like demonstrating user intent at authentication time, such as requiring a biometric check to continue the authentication flow or responding to a time-boxed push. Another common mechanism is removing the need for a shared secret entirely by using public key cryptography. WebAuthn, part of the FIDO2 family of standards, is among the most visible examples of this approach.
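
To illustrate the “no shared secret” idea, here is a minimal sketch of a WebAuthn-style challenge/response using an elliptic-curve key pair in Python (with the pyca/cryptography library). It is a conceptual illustration only, under simplifying assumptions: the real WebAuthn ceremony also binds the relying party’s origin, uses attestation, signature counters, and CBOR-encoded structures, none of which appear here.

    # Conceptual sketch only: the server stores a public key and verifies a
    # signature over a fresh challenge, so no shared secret ever crosses the wire.
    # This is NOT the WebAuthn protocol itself, just the core idea behind it.
    import os
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import ec

    # Registration: the authenticator creates a key pair; only the public key
    # is sent to (and stored by) the relying party.
    private_key = ec.generate_private_key(ec.SECP256R1())  # never leaves the device
    public_key = private_key.public_key()                  # stored by the server

    # Authentication: the server issues a random, single-use challenge...
    challenge = os.urandom(32)

    # ...the authenticator signs it (after local user verification such as a
    # biometric check, which is what demonstrates user intent)...
    signature = private_key.sign(challenge, ec.ECDSA(hashes.SHA256()))

    # ...and the server verifies the signature with the stored public key.
    try:
        public_key.verify(signature, challenge, ec.ECDSA(hashes.SHA256()))
        print("Challenge signed by the registered authenticator: access granted")
    except InvalidSignature:
        print("Signature check failed: access denied")

Because the server holds nothing a phisher could capture and replay elsewhere, neither a stolen credential database nor a convincing fake login page yields a reusable secret.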

Multifactor Authentication

Of course, for any technology to succeed, we must meet our customers where they are in their risk tolerance and user experience journeys. Workforce identity has been very good about recognizing the risks of formerly ubiquitous multifactor technologies, like SMS. SMS as an out-of-band authenticator has been recognized by the industry as a low-assurance authenticator in the workforce space for years, yet it grows increasingly ubiquitous on the customer identity side of the house. Whereas some vendors are beginning to use push notifications through their consumer apps, SMS remains a ubiquitous authentication technology globally. And that makes sense; it still represents a significant step up in identity security compared to the password-only baseline.

Meanwhile, a workforce implementation that removes passwords but replaces them with SMS or push notifications may improve the user experience. Still, it won’t impact security posture as much as ensuring that a phishing-resistant factor is required for access to any business resource. Of course, this is where the rubber hits the road in terms of figuring out how to make phishing-resistant, passwordless technologies successful in a workforce implementation. The major administrative challenges around identity verification, activation, and recovery of phishing-resistant credentials are where the industry can make its next major strides in simplifying the implementation and operation of phishing-resistant, passwordless technologies for the workforce.

It’s About the Users

And in the end, the user experience will drive the adoption of these technologies. Though the introduction of WebAuthn passkeys complicates the workforce use case by allowing the private keys of passkeys to be shared across devices and even shared with others, passkeys remain significantly more phishing-resistant than passwords. Consumer adoption of technologies frequently drives the patterns adopted within the enterprise, especially those pushed by device manufacturers. There have been and will continue to be gallons of ink spilled on some of the “controversies” behind passkeys. However, their wide adoption in customer identity will do much to improve user experience and security. And I suspect passkeys will also find their place in workforce implementations in time.

So as you move your organization or business to passwordless technologies, keep in mind why you are doing so. The user experience improvements are great and will be a boon for customer use cases, but the end goal of the passwordless push should be a move toward requiring phishing-resistant authentication flows.

About the Author

Jon Lehtinen

Board of Directors, IDPro; Director, Okta-on-Okta, Okta

Jon Lehtinen specializes in both the strategy and execution of Identity & Access Management transformation in global-scale organizations. He builds diverse, passionate teams that deliver automated, future-oriented Identity solutions that provide the bedrock for information security, governance, and new opportunities for business. Moreover, Jon is dedicated to the growth and maturity of IAM as a profession. He serves on the Board of Directors and as Secretary of IDPro. He’s also served as an advisor to multiple identity vendors, published Implementing Identity Management on AWS through Packt Publishing, and is a member of ISC2, the OpenID Foundation, and Women in Identity. Presently, Jon owns the workforce, customer, and federal identity implementations as Okta’s Director of Okta on Okta.


The IDPro Body of Knowledge (BoK) Committee is seeking new members to help guide the future development of the Body of Knowledge.   

Over the past three years, this volunteer group established structures, budgets, and staff that have enabled our Principal Editor, together with volunteer writers, to produce ten issues of high-quality articles about digital identity and access control. The IDPro Body of Knowledge now represents the most comprehensive documentation of the Identity and Access Management (IAM) technology sector globally. The BoK defines terms and normalizes vocabulary across the sector and provides a core understanding of IAM issues to the public. There is no charge to persons accessing the document repository. All IDPro BoK documents are peer-reviewed and regularly updated to keep up with technological developments.

It’s Time to Grow the IDPro BoK

It is time to take these efforts to the next level. 

If you are working in the field of digital identity, you know that this is a rapidly changing sector, and understanding the technology can be complex. There is a need to develop skills and knowledge in this area and to document technological advances in the BoK. As a member of the BoK Committee, you will be a part of fulfilling the IDPro mission to create and maintain a core repository, open to the public, that documents the main topics an IAM professional needs to understand.

IDPro BoK Committee Engagement

By being involved, you will

  • Help ensure the “BoK” provides comprehensive study material for the CIDPRO™ certification.
  • Help locate and support new authors.
  • Be a part of growing the nascent ‘Paid Writer Program’.
  • Establish institutional support for foreign language translations.
  • Deepen your IAM knowledge by reviewing articles prior to publication.
  • Establish industry contacts and improve your own curriculum vitae.

During the next few months, the role of committee chair will become vacant.  This will be an opportunity to have a larger impact on the direction and progress of the BoK.

The committee meets bi-weekly (there are two meetings that cover the same ground – one for Europe and one for the Asia-Pacific).  These meetings provide a forum where the committee can assist the Principal Editor and keep abreast of developments.   Additionally, toward the end of each editorial cycle, members are requested to review the peer-reviewed articles. The BoK Committee’s role is to ensure the articles fit in the IDPro BoK. While the members are not expected to produce new articles, they are welcome to collaborate on material as appropriate.

The chair provides support to the Principal Editor and the members, typically attending all meetings, and acts as a liaison to the board.

Membership Requirements

The committee is open to IDPro members and non-members.  However, the chair must be either an Individual Member or a Delegate of an Organizational Member.  The BoK committee will elect the new chair.

To express interest, simply write to editor@idpro.org.

Since its inception in 2017, IDPro® has been on a journey of growth and innovation. From our founding to the 2021 global launch of CIDPRO™—Certified Identity Professional—and the recent introduction of our formal leadership, we’ve been steadily elevating the IAM profession worldwide.

This year, we bid farewell to three founding board members—Andrew Hindle, Sarah Cecchetti, and George Dobbs—as they complete their terms. Consequently, we’re on the lookout for three passionate, dedicated individuals to join the IDPro board of directors in June 2023.

We encourage you to nominate yourself or someone else for this prestigious role if you share our enthusiasm for:

  • Championing IDPro’s mission
  • Shaping the organization’s growth and strategic direction
  • Strengthening IDPro’s financial health and stability
  • Giving back to the global IAM community
  • Collaborating with other thought leaders and experts
  • Expanding your professional network
  • Developing your leadership skills and gaining industry recognition

Our board members actively contribute 10-15 hours per month to various responsibilities, including monthly board meetings, project management, and engaging with current and prospective members.

At IDPro, we celebrate our team-oriented, inclusive approach that transcends cultures, nationalities, and time zones. We eagerly anticipate board nominations that embody these values. We firmly believe that diversity within the industry benefits everyone, and we welcome identity practitioners and qualified individuals from all backgrounds to apply.

If you or someone you know fits the bill and is eager to contribute to the ongoing success and growth of IDPro, don’t miss this chance to make a difference in the world of digital identity – submit your nomination today! We can’t wait to see the incredible talent and enthusiasm you bring to the table.

If you’re interested in being an IDPro Board nominee, please submit your resume to director@idpro.org by April 14, 2023.

We would like to extend a warm welcome to our newest corporate members Indigo Consulting and Structured Query!

Indigo Consulting is a boutique consulting company specializing in the delivery of Identity, Governance and Access Management services to worldwide customers. Indigo Consulting offers the OpenConnect product, which provides the capability to manage mobile identities and system access.

Structured Query is an information technology and software development consultancy company.

As corporate IDPro® members, Indigo Consulting and Structured Query receive access to branding opportunities throughout the IDPro website, the ability to create content for IDPro’s monthly newsletter, the Monthly Member Update, early access to the annual IDPro Skills, Programs & Diversity survey, access to our members-only Slack channel to discuss IAM topics, CIDPRO exam discounts, and more. You can read more about IDPro membership benefits on our website.

IDPro continues to grow as more industries, corporations, and professionals recognize the importance and necessity of the digital identity industry. The discussion of identity and access management topics continues to grow within our Slack channels and throughout our published content. As the industry and its recognition expand, so does the need for certification tools and identity resources like the CIDPRO exam and the Body of Knowledge compendium.

Stay tuned for future member updates on the IDPro community!

Identity, much like tamale-making, the Wave, or Bollywood dancing, is best done in the community.

We learn the most as we interact with the ideas and experiences of others, and while online mediums allow for some level of conversation, gathering in person accelerates the process.

Most of us cannot go to every available identity gathering; since it is the season when the ground is still frozen and plans are being made for the rest of the calendar year, we thought it might be helpful to pull together a menu of conferences for 2023.

We’ve provided a general description of what to expect, what the value is, and how it may or may not relate to identity.

TL;DR: if you have budget, but can go to only one huge conference, make it the European Identity Conference or Identiverse (whichever is closest). If you lack budget, hit up a BSides event near you.

RSAC : General Security Conference

Date: April 24-27

Location: San Francisco

One of the biggest security conferences on the planet. Walk onto the Expo floor, and it is instantly apparent that there is a lot of money being tossed around in this industry. Sights I’ve seen in the past: a woman sitting 15 feet up in a silver ring for eight hours straight, live pets as bait for passersby, and an entire booth set up as a sardonic comment on the overspend of marketing departments.

The actual sessions have long struggled with ensuring quality talks, but they’ve raised the bar over the last decade or so. The innovation sandbox and the villages that are starting to proliferate are highlights, as is the networking potential: if you know someone in the industry, you’ll likely run into them here.

BSidesSF – Community Security Conference

Date: April 22-23

Location: San Francisco

If you’re in town for RSAC, one potential bonus attraction is the weekend before: BSidesSF. It’s at the same basic location, is much cheaper, and the quality of speakers has been historically high. Case in point, Allan Friedman, senior strategist at CISA, gave a talk about SBOM at BSidesSF back before SBOM was cool.

European Identity Conference (EIC): Identity (slight European focus)

Date: May 9-12

Location: Berlin

The first real identity conference of the year. Last year, it moved to Berlin from Munich. It’s sponsored by KuppingerCole, but they do a good job of keeping vendors out of the main sessions. (Paid keynotes are a different matter, but they’re usually pretty great as well.)

This event is panel-heavy and also has deep ties to standards organizations: the Globally Assured Identity Network was announced at EIC in 2021, as an example. Standards organizations often hold a small workshop on the morning of the first day before the main event starts; these workshops often provide as much value as, if not more than, the conference itself.

Identiverse: Identity (slight North American focus)

Date: May 30-June 2

Location: Las Vegas

The best North American identity conference. Highly associated with IDPro (mic drop) and formerly run by Ping, it has always been an unrivaled source of architectural approaches, standards updates, and a good read on where identity is actually in use in enterprises today.

Also, it’s moved to Vegas for 2023. (Insert your own reaction here)

Internet Identity Workshop (IIW) XXXVI and XXXVII: Identity in the Making

Spring Date: April 18-20, 2023

Fall Date: October 10-12, 2023

Location: Mountain View

Ah, the unconference. This is a twice-yearly event that has been running for XXXVI divided by II years. That’s 18 years for you non-math-and-Latin double majors. It’s an event with a different speed; each morning, everyone gathers in a large circle. If you want to talk about a particular topic, you stand up, write it on a post-it note, and tell the group what you want to talk about. You get a room and a time slot, and anyone who’s interested shows up.

Lots of new ideas sprang out of this gathering, most recently FastFed and Shared Signals.

Note to the reader: decentralized identity, verifiable credentials, and that area of identity play a huge role here.

BlackHat: Exploits and Vulnerabilities

DefCon: Let’s break things (fairly responsibly)

Date (BlackHat): August 5-10

Date (DefCon): August 10-13

Location: Las Vegas

The combination of these two is known as “Hacker Summer Camp.” They are two of the most well-known security conferences in the world, but they’re slightly different.

BlackHat has seen a lot of vendors get more involved over the last decade or so, to the point where the expo floor at BlackHat looks a lot like the one at RSAC. The sessions are still solid and full of good information, though, and the Arsenal / open-source tools section is particularly worth visiting to see what people are building for the future.

DefCon is more community focused; villages make up a large portion of the content, each focused on a particular topic area: social engineering, or AI, or cryptography and privacy. DefCon is also much cheaper, and there is more of an “alternative” vibe to the whole scene. Just like BlackHat, there’s a tools section that is worth paying attention to as well.

BSides: Hyperlocal, Community-centric Conferences

Dates: Various

Location: Various

Even if you can’t travel far, there’s likely a BSides conference near you that you could attend. Much less expensive than the RSAC / EIC / Identiverse trilogy, they seek to make security (and identity) more accessible and develop the local security community.

Mike Kiser

Director of Strategy and Standards

SailPoint

Mike Kiser is insecure. He has been this way since birth, despite holding a panoply of industry positions over the past 20 years—from the Office of the CTO to Security Strategist to Security Analyst to Security Architect—that might imply otherwise. In spite of this, he has designed, directed, and advised on large-scale security deployments for a global clientele. He is currently in a long-term relationship with fine haberdashery, is a chronic chronoptimist (look it up), and delights in needlessly convoluted verbiage. He speaks regularly at events such as the European Identity Conference and the RSA Conference, is a member of several standards groups, and has presented identity-related research at Black Hat and Def Con. He is currently the Director of Strategy and Standards at SailPoint Technologies.

I recently decided to switch to a new password manager and thought it would be worthwhile to share my experience with the larger community. No need to get into which one I left and which one I picked. I was unhappy with the old one and found one that addressed my issues. For those interested, there was some good discussion in our Slack space last month on this very topic. The new password manager made it very easy to migrate my 320+ passwords (good grief!) from the old platform. That feature is definitely something to look for if you’re considering a move of your own!

There were other features which were also very important to me:

  • Multi-Factor Authentication (MFA) on the vault(s). I’ve said it before, and it bears repeating. Your password manager is all that’s protecting all the passwords that fill your life. Make sure you’re using MFA to protect that data in the unfortunate event of your vault password ever being compromised!
  • Support for multiple forms of MFA on multiple platforms. I have an iPhone and an iPad. I have a MacBook. I have a personal Windows desktop and a work-issued Windows laptop. I tend to use Safari on the mobile devices and Chrome on the others. And I need to easily access my passwords from the multitude of mobile apps that periodically make me reauthenticate. The tool I chose supports standard OTP apps, like Google Authenticator and Microsoft Authenticator. It also supports Windows Hello, Security Keys, biometric unlock on Android, Face ID on iOS, and Touch ID on MacBook and iOS. That ought to cover things.
  • A family plan. I’m not the only one in my household with passwords. Therefore, I wanted to have a good family plan so we can all be protected and even share some passwords.

Now, as it happens, the triggering event of my migration was a breach that has me concerned about the safety of my passwords. I actually think I was probably safe based on how I was using that service (following best practices, etc.), but I didn’t want to take any more chances with that vendor, so I jumped ship. However, my (healthy) paranoia also insisted that I should reset my passwords, just in case the miscreants managed to open my old vault.

This brings up another critical feature: a good user experience using it! The one I chose has a much better UI than its predecessor, so that was an upgrade. It also interacts with the fields involved with passwords very nicely. As I worked through changing my passwords, it popped up with suggested passwords right in the “new password” and “confirm password” fields, making it super easy to change each password and then save it in the vault. Was it perfect? Nope, but I’m happy to give it a score in the low to mid 90s, which is a decent grade in anyone’s book.

If you’re about to embark on a similar journey, here’s my best advice. Prioritize your high-value accounts first. Since so many of the various services we all use leverage e-mail in their password recovery ceremonies, that should be among the first to get a fresh, strong password. Next, update the password for your mobile service provider. Your Google account is another high priority given it’s used to log into other services. Same for your social accounts (Facebook, for example). If you’re an iOS user, go protect your Apple ID with a fresh password. If you have an account with your ISP, include an update there (e.g., those of you with vanity domain names, like me). The point here is to protect your internet security foundation first, then build from there.

Next up in the priority list, at least for me, were:

  • Other authentication services and MFA providers you use
  • If you deal with the US government (IRS), time to update ID.me
  • Financial institutions (banks, credit cards, retirement accounts, pension plans, investment accounts, insurance, etc.)
  • Amazon.com (I’m just going to assume almost all of you use it)
  • Anything related to healthcare (insurance plans, patient portals, online pharmacies, veterinary care, etc.)
  • Home security systems
  • Online accounts for your vehicles
  • Other frequently used online shopping accounts
  • Travel related (airline, rental car, and hotel loyalty programs)
  • Any other cloud storage you use (Box, Dropbox, cloud backup, etc. I’d mention Google Drive, but by the time you’re this far into the list, you should have changed that one hours or days ago!)
  • DocuSign and similar services
  • All your streaming services
  • Everything else

A natural outcome of this exercise should also include a reduction in the number of passwords in your vault. I found duplicate account/password combos, sites I never use anymore, and sites for defunct organizations. Also, this isn’t something you’ll want to do all at once, which is another reason to prioritize. This is a project to chip away at over a number of days. I’m still not finished, but I’m much less concerned about the ones that I still need to hit. I have mixed feelings reporting that I’m down from 320 to 252 passwords. That’s better, but still way too many. Really looking forward to that passwordless future. 😊

Greg Smith

Chair, IDPro Editorial Committee

Radiant Logic

Greg Smith is a Solutions Architect with Radiant Logic where he serves as a trusted advisor for new and existing customers. He has been implementing Identity & Access Management solutions for over 35 years. He holds BSEG and MSBA degrees from Bucknell University, where he also began his professional career before moving into the pharmaceutical industry in 1996. Following a 25-year career there, he retired in November 2021 from Johnson & Johnson, where he led the engineering team for J&J’s single sign-on, risk-based authentication, multi-factor authentication, access governance, directory synchronization and virtualization, provisioning automation, and PKI services. He has spoken at Identiverse® and other industry events on numerous occasions. He was CIDPRO™ certified in October 2021 and is also a founding member of IDPro, where he currently chairs the editorial committee.

IDPro® is pleased to welcome our newest corporate member MassMutual!

MassMutual is a Massachusetts-based insurance company that offers life insurance, disability income insurance, long-term care insurance, retirement/401(k) plan services and annuities. MassMutual aims to help its clients develop strategies to overcome financial problems and achieve a secure financial future. 

As a corporate member, MassMutual receives access to numerous branding opportunities on the IDPro website, the ability to create content for IDPro’s monthly newsletter, the Monthly Member Update, early access to the annual IDPro Skills, Programs & Diversity survey, access to our members-only Slack channel to discuss IAM topics, digital identity event registration discounts, CIDPRO exam discounts, and more.

IDPro’s growth continues to accelerate as more industries, corporations, and professionals recognize the importance of the digital identity industry, as well as the necessity of a digital identity organization that defines, supports, and improves the profession. With more members joining, the discussion of identity and access management topics deepens within our Slack channels and our published content. Furthermore, the need for certification tools and identity resources, like the CIDPRO exam and the Body of Knowledge, expands.

Stay tuned for future member updates on the IDPro community!

The AWS Inclusion, Diversity, and Equity (ID&E) Innovation Fund program enables organizations to make a positive impact on racial and ethnic groups, people with physical and cognitive disabilities, the LGBTQIA+ community, veterans, women and any intersection thereof by championing change in these underrepresented groups. We are pleased to announce that IDPro® received a $30,000 AWS grant to translate the IDPro Body of Knowledge (BoK) into Spanish, expanding digital identity learning opportunities and extending our reach to more people within the identity and access management (IAM) industry. 

IDPro’s BoK is a compendium of curated articles that forms the basis of a robust learning resource for our CIDPRO certification program. Each article within the BoK is written and reviewed by IAM professionals aiming to provide resources for the career advancement of fellow IAM professionals worldwide. 

There are more than 450 million native Spanish speakers worldwide. By translating the BoK to Spanish, our capability to reach the international audience with these learning opportunities expands greatly. IDPro’s vision is to increase the availability of continuing education and development materials for identity practitioners in all phases of their careers. The grant from Amazon helps us achieve this goal.

“We need identity professionals now more than ever, as demand for online privacy and security is increasing at an even faster pace than demand for online content and services,” said Sarah Cecchetti, Head of Product, Amazon Cognito, AWS, and Co-Founder, IDPro. “As we build new identity professionals, we want to make sure that those people are coming from all walks of life, just like the people they are building for. We’re taking on the work of translating the BoK into Spanish because we want the 450 million worldwide native Spanish speakers to have the same experience learning and understanding identity terms and concepts from our BoK that English speakers do, enabling them to lead the way in building the next generation of safe and secure online apps.”

If you’re interested in supporting our ongoing translation efforts, contact us at editor@idpro.org. This is a great opportunity to share your IAM knowledge and assist fellow IAM practitioners in accessing digital identity industry knowledge. 

You can learn more about the Body of Knowledge by visiting our website.

IAM is a fundamental component of security, privacy, and user experience. Without any measure of the diversity, goals, interests, skills, and trends among identity professionals and the enterprises that employ them, however, we have no hope of efficiently and purposefully improving our industry.

January 17, 2023 is the kick-off of our sixth IDPro Skills, Programs & Diversity Survey, which shows us what’s driving the digital identity industry today.  The survey is open now until March 1, 2023, and we will share the results in time for Identiverse 2023.

Take the Survey today – open until March 1, 2023

IDPro created the survey in 2018 to assess the goals, interests, and skills of identity professionals. Each year, we expanded the survey to examine individual and employer priorities, as well as inclusion and diversity within the IAM industry.

Insights from IAM Professionals 

Since our original survey in 2018, we continue to see the trend that IAM practitioners struggle with the complexity of the industry and with feeling proficient in their field. And this “imposter syndrome” only gets worse the longer you’re in identity!

We’ve discovered  an interesting trend in the tools and platforms used. In 2021, IAM practitioners stated that directories were heading out the door, and they were increasingly concerned with identity verification. IAM has become ubiquitous enough across the enterprise by 2022 that the term “applied identity”—individuals who apply identity services to address particular business needs but do not consider themselves identity practitioners—is a new field to watch. 

For the last few years, the global pandemic has been a driving force in business, and survey results reflected that. Technical and emotional support were top of mind in 2021 as respondents prioritized issues around work from anywhere, digital transformation, and the need for practitioners to connect. It will be interesting to see if that is still the case when we see the results of the latest survey as more organizations rescind work-from-home support and require time back in the office.

Supporting Identity Practitioners

As the survey results capture the evolution of our industry, IDPro evolves with it. The CIDPRO™ exam exists because of the 2020 survey, where practitioners shared the value of vendor-neutral training. The IDPro Body of Knowledge exists to provide vendor-neutral educational material to address the ongoing challenges of identity practitioners not feeling proficient in their field and newcomers to identity needing a place to start.

The BoK launches new articles several times a year and takes direction from what IAM practitioners tell us they need to know.

The Value of Membership

The IDPro Skills Survey is the chance to reflect on the state of our work, the gaps between individual and enterprise goals, where the diversity and inclusivity of the industry can be improved, and what skills are most important to build up to stay relevant in the future. With your participation, we have a powerful opportunity to observe our industry at a time when it is more relevant to the world than ever before.

The survey is open to all IAM practitioners until March 1, 2023, and we will share the results in time for Identiverse 2023.

I was lucky enough to be able to attend the Internet Identity Workshop (IIW) in November after being away from it for a couple of years. IIW always holds a special place in my heart as it was my first opportunity to meet and really get to know many of my heroes in the identity space. IIW’s unique format guarantees you a more personal and conversational back-and-forth with experts in the field.

You arrive at the 3-day conference with no set agenda. This year, 300 attendees would gather each morning and many would pitch sessions that they wanted to present during the day in one of the five designated time slots. Within those time slots, presenters would choose one of 15 available rooms so on any given day there could be up to 75 sessions available. Over the course of the conference, 171 sessions were presented.

When I said presenters choose their room, it may be more appropriate to say they fight for a room. When the pitches are over, the presenters all crowd around the main display board with their session information written on pieces of construction paper, waiting to grab sticky notes with the room and timeslot designations on them. When the signal to begin is given by the organizers, a mad rush commences as presenters try to get their choice locations. In this manner, the conference schedule for the day is manifested for participants to view (and snap pictures of for later reference).

In the recent past, IIW has seen a continuous uptick in the number of sessions concerning Self-Sovereign Identity (SSI) and its related technologies (blockchain, DIDs, distributed ledgers, etc.). This year, while those sessions were still prominent, more traditional identity topics not only made their way back into vogue, but a number of sessions discussed how to accomplish some of the goals of SSI.

Some topics I found of particular interest:

  • Kristina Yasuda, with Torsten Lodderstedt, gave a presentation on OpenID for Verifiable Credentials (formerly named OpenID for SSI) that covered all of the changes made since the last IIW in May, including the use of OAuth as the base protocol. Kristina later gave a presentation on Selective Disclosure for JWTs (SD-JWT), which lets the user, sitting between the issuer and the verifier, selectively disclose certain claims while hiding others.
  • George Fletcher also wants to leverage the advantages of JSON to solve technical issues raised during his presentation, Enabling Native Mobile UX for OAuth/OpenID Connect Flows. He proposed adding a new response_mode=json to OAuth, which returns a JSON-based description of what has to happen for the login and is meant exclusively for first-party apps. This would give the app a certain flexibility to improve UX while ensuring that login and confidentiality of the credentials are handled properly.
  • Passkeys continue to be a popular topic as they showed up in a number of different presentations. Tim Cappalli presented his Passkeys 101 presentation from October’s Authenticate conference while Dean Saxe found some potential security concerns that still need to be mitigated during his Passkeys are Great, Until They’re Not discussion.

The biggest trend at the conference, however, seemed to be the rise of the wallet with ten different sessions discussing different aspects of wallets and their potential uses. Many saw the wallet as a unique combination of verifiable credentials and SSI as you maintain the wallet on your device and essentially load it with claims made by issuers related to you (tickets, credit cards, mobile driver’s licenses [mDL], etc). In Heather Flanagan’s session on Digitized vs Digital Credentials, it was noted that a digital wallet was finally something both technical and non-technical people could relate to. Finally, there was a presentation on the new Open Wallet Foundation that is looking at how to develop wallet consortiums outside the existing Apple/Android ecosystems.

And then, of course, there were many sessions on topics that normally wouldn’t arise at a traditional conference including: SSI Tech Stack for New Zealand Farming, End Surveillance Capitalism, Trans Identity, Your Greatest Standardization Regret, Roger & We: Collective Action for Collective Action, Can SBT (Soul Bound Token) be a practical tool for identification?, Show Me the MONEY Biz Models, People are LAZY So How Can We Make It Easier for Them to Do the Right Thing?, and many others.

IIW remains an incredibly valuable and entertaining conference. Its greatest characteristic is its openness and low barrier to entry as anyone can present. It’s also a great environment to begin conversations with those on the front lines of standards development and possibly an opportunity for you to begin your journey in that space.

If you would like to get a closer look at what goes on at the conference, you can check out the Book of Proceedings for not just the latest conference, but for the past 29 conferences at https://internetidentityworkshop.com/past-workshops/. These BoPs contain notes for almost every session that was presented in November.

Steve “Hutch” Hutchinson

Director of Security Architecture at MUFG

Steve “Hutch” Hutchinson is the Director of Security Architecture at MUFG. After cutting his teeth in C/C++ software development and network engineering, Hutch spent a decade as an enterprise architect in the healthcare sector focused on security and network technologies. Hutch is a founding member of IDPro and is honored to sit on the inaugural Board focused on community development which has always been one of his passions. If you’re ever in Richmond, VA on a Wednesday night, drop him a note for an invite to his biweekly backyard get-together.

It’s that time of year when we all pause to reflect on events in our lives, personal and professional. As I look back on 2022, there are some notable highlights:

  • In-Person Conferences Returned! Our IDPro membership met up at events including Identiverse, RSAC, EIC, Authenticate, Oktane, Ping YOUniverse, Gartner IAM, and many more. Many of our members presented Sessions and Master Classes at these events covering industry trends, deep technical knowledge, and even soft skills we all need to master to advance our careers.
  • IDPro continues to grow for both individual and corporate memberships. If you’re an individual member, why not ask your leadership about becoming a corporate member so the rest of your team can realize the benefits of an IDPro membership? (I did with my new company!)
  • Many of us started new positions, myself included. There are plenty of opportunities out there for IAM folks looking for something new. This is still a fairly small (but growing!) industry. I personally find it amazing to continually run into people from earlier parts of my career in new positions or companies when attending conferences.
  • This past year, we had over 35 new CIDPRO certifications awarded. I’ve been seeing CIDPRO listed on a growing number of LinkedIn profiles. I’ve also seen CIDPRO certification listed in job requirements. The exam itself provides broad coverage of foundational knowledge for an IAM professional, and it’s a great mix of questions spanning our Body of Knowledge (BoK). If you’re preparing to take the exam, plan on spending some quality time reviewing the BoK beforehand. This is an exam for which you need to prepare! 
  • Speaking of the BoK, we just released Issue 10! See Heather Flanagan’s Editor’s Note for full details of our new and recently reviewed articles.
  • Breaches occurred far too often throughout the year. Many of these were avoidable. Some of these were caused by not using MFA with services hosting intellectual property, leaving access keys hard coded in publicly available repositories, or improperly securing APIs. We need to do better!
  • The IDPro Slack channels continue to be a wealth of information, advice, humor, camaraderie, and education. And a source of newsletter article ideas. If you’re not already familiar with our Slack space, check it out today! If you have a question, your fellow IDPros will usually have an answer for you in short order.

As we look toward 2023, I hope our members will include a resolution to write at least one article for the IDPro Blog site and Newsletter! Seriously, your membership gives you a great path to share your thoughts and ideas with your fellow IAM practitioners. What are you waiting for?

On behalf of the entire Editorial Committee, I wish you all a wonderful holiday season and Happy New Year for 2023!

Greg Smith

Chair, IDPro Editorial Committee

Radiant Logic

Identity management is a multibillion-dollar global industry that is expected to double in size over the next five years. But finding experienced identity and access management (IAM) practitioners is harder than ever. To solve this IAM skills gap, IDPro® developed the Certified Identity Professional (CIDPRO™) certification to expand candidates’ IAM knowledge, fill experience gaps, and provide a focused development path for employees.

In a recent IDPro webinar covering the CIDPRO exam, Kevin Streater, Head of IDPro’s Certification Committee, and Sarah Cecchetti, Co-Founder of IDPro and a member of the IDPro Certification Committee, discussed how IDPro certification professionalizes the identity industry by enabling the next wave of practitioners to validate their identity knowledge through the CIDPRO exam.

“CIDPRO fills a significant gap in the professionalization of digital identity as the importance of this essential discipline continues to increase within enterprise security” – Romain Lenglet, Chief Software Architect, SGNL.ai

Why get CIDPRO certified?

The CIDPRO exam provides a consistent measurement of foundational IAM knowledge which helps organizations to better assess a candidate’s level of knowledge while enabling the candidate to showcase their expertise in the field. 

The CIDPRO exam is a rigorous, vendor-neutral certification for IAM which is extremely valuable for organizations looking to employ knowledgeable candidates. Certification is designed for individuals working or planning to work as security architects, software engineers and developers, IAM administrators, information risk managers, product managers and anyone looking to:

  • Expand their IAM knowledge by identifying and expanding upon their skill sets.
  • Validate their personal digital identity expertise.
  • Earn credibility and enhance their employability.
  • Demonstrate their commitment to the IAM industry.
  • Be recognized for skills and achievements through vendor-neutral certification and badging.

What topics does the CIDPRO exam cover?

The exam covers essential IAM topics and undergoes a rigorous review process while remaining up to date on industry developments. Some of the topics within the CIDPRO exam include:

  1. Explain the functional elements of an identity solution.
  2. Describe identifiers, identity lifecycle, and identity proofing.
  3. Know the core concepts of security for identity.
  4. Describe the rules and standards that relate to identity.
  5. Explain identity operational considerations.

Watch our CIDPRO webinar recording to learn more about the CIDPRO exam.

During a recent IDPro® webinar, IDPro Principal Editor Heather Flanagan and IDPro Executive Director and President Heather Vescent discussed the Body of Knowledge, a compendium of curated articles covering a wide range of IAM topics, enabling identity practitioners to expand and solidify their industry knowledge.

The webinar covered the establishment and continued development of the BoK as a resource for people interested in expanding their identity and access management (IAM) knowledge. The BoK is written and reviewed by IAM professionals, for IAM professionals, and aims to develop and provide resources for the education and career advancement of identity professionals worldwide. The BoK has continued to expand over the course of several years. Since its initial publication in 2020, the BoK has received 25 additional articles and seen 60,000 views and 10,000 downloads from readers in 145 countries.

The key to the BoK’s success is in its contributors. The BoK webinar outlined the process for contributing, including a wish list for the ideal Volume 1 BoK Table of Contents. An article goes through a lifecycle from ideation and brainstorming to collaboration, writing, submission, peer review, and publication, with article refreshes occurring about 3-4 times a year.

Some of the desired topics within this wish list include:

  • Identity and shared devices
  • Use case: integrating with social media
  • Specific challenges to the “leaver” phase of the digital identity lifecycle
  • Authentication policies and governance

The webinar also covered some of the ways you can get involved with supporting the BoK either as an author or a peer reviewer. IDPro can help with any challenges you may face with writing academic articles.

If you would like to participate in the development of the Body of Knowledge, contact Heather Flanagan at editor@idpro.org. You can also watch the IDPro Body of Knowledge Webinar in its entirety on our YouTube channel.

We would like to extend a warm welcome to our newest corporate members Allstate and Okta!

Allstate is an American insurance company specializing in numerous insurance offerings, including auto insurance, life insurance and home insurance. Allstate provides affordable, simple and connected protection that empowers its customers while adding economic value for shareholders, creating opportunities for its teams and working with local and national partners to improve communities.

Okta is an American Identity and Access Management company that provides cloud software aiming to help companies manage and secure user authentication into applications, and to enable developers to build identity controls into applications, websites, web services and devices.

As corporate IDPro® members, Allstate and Okta receive access to numerous branding opportunities on the IDPro website, the ability to create content for IDPro’s monthly newsletter the Monthly Member Update, early access to the annual IDPro Skills, Programs & Diversity survey, access to our members-only Slack channel to discuss IAM topics, CIDPRO exam discounts and more. You can read more about IDPro membership benefits on our website.

IDPro continues to grow as more industries, corporations and professionals recognize the importance and necessity of the digital identity industry. The discussion continues to be furthered on identity and access management topics within our Slack channels and throughout our published content. As the industry grows, the need for certification tools and identity resources, like the CIDPRO exam and the Body of Knowledge compendium, also grows.

Stay tuned for future member updates on the IDPro community!

Why Should I Care About GraphQL?

If you work in identity, sooner or later you’ll need to secure access to some APIs. It’s unavoidable in our hyper-connected world: APIs are the very fabric that ensures interoperability of services across domains and geographies. Besides, a recent poll by Postman shows that 89% of executives will invest more in APIs over the next 12 months (87% of CEOs, 86% of CIOs, and 93% of CTOs).

Now when we think of APIs, most people still think of REST (or even good old “SOAP”). The world is changing though, and API clients need more flexibility in how they access data than REST (or SOAP) can provide. Hence the rise of GraphQL.

The adoption rate of GraphQL has grown steadily since its initial release by Facebook in 2015. From only 6% of developers using the API language in 2019, it is now used by 28% of developers worldwide. That’s almost a third of all developers, and more keep adopting it every day. And if you look at the JavaScript community alone, which includes Node.js and all other JS-based runtimes, a whopping 47% of developers used it in 2020. Although REST APIs still constitute the bulk of the APIs out there, their usage is declining (89% usage, down from 92% last year).

In short, APIs are still a growing market, and within that market, GraphQL is the contender to watch. Identity professionals should therefore at least be aware of its existence, and ideally also know the kind of problems it has the potential to create for their organizations.

In order to understand how to secure GraphQL APIs, we must first understand exactly what it is. If you already know, you may skip this next section…

What is GraphQL?

GraphQL is a language for APIs. It has its own specification and implementation vendors (Apollo dominates that market). In GraphQL, an API exposes a single HTTP endpoint. Client applications then (typically) POST to that endpoint, sending requests written in the GraphQL language in the body of their requests.

A GraphQL request can be either a Query or a Mutation. A query is a request to read some data, whereas a mutation is a request to modify data (create, update or delete).

GraphQL requires a Server that can understand GraphQL requests and interact with a backend data store. It also requires Clients that can prepare and package GraphQL requests and understand the responses from the GraphQL servers. Clients typically run in browsers, but any application running on any platform can implement a GraphQL client.
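
To make the single-endpoint model concrete, here is a minimal sketch of a client call in TypeScript, using the standard fetch API. The endpoint URL is a placeholder, and the query loosely anticipates the User schema shown later in this section:

async function fetchUsers(accessToken: string) {
  // Everything goes through one URL; the query itself travels in the POST body.
  const query = `
    query {
      User(options: { limit: 10 }) {
        ID
        username
      }
    }
  `;

  const response = await fetch("https://api.example.com/graphql", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      // The access token is typically sent as a Bearer token, just as with REST.
      Authorization: `Bearer ${accessToken}`,
    },
    body: JSON.stringify({ query }),
  });

  // GraphQL responses wrap results in a top-level "data" property
  // and report problems in an "errors" array.
  const { data, errors } = await response.json();
  if (errors) {
    throw new Error(`GraphQL errors: ${JSON.stringify(errors)}`);
  }
  return data.User;
}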

A GraphQL server exposes its capabilities through a schema. A GraphQL schema is a definition of types, queries and mutations that it exposes. The schema is usually introspectable – clients can therefore discover the capabilities of any given GraphQL Server. Here is a Schema example:

  • We first define a simple User type with a set of properties (note: it looks like JSON, but it isn’t!)
  • Notice the “!”? That marks a mandatory attribute on that type:
type User {
  ID: ID!
  username: String!
  Email: String
  Name: String
  password: String
}
  • Then some mutations – Create, Update and Delete a User. Create and Update return the User object, Delete returns a Boolean:
type Mutation {
  CreateUser(
    Name: String
    Email: String
    password: String
    username: String!
  ): User
  UpdateUser(
    ID: ID!
    Email: String
    Name: String
    password: String
    username: String!
  ): User
  DeleteUser(ID: ID!): Boolean
}
  • And finally a Query – which returns all users (an array of Users actually), with a pagination option:
type Query {
  User(options: paginationOptions): [User]
}

Now all that’s missing is to provide actual code to implement the above schema. This is done through Resolvers, which are pieces of code, or functions that do the actual work of interacting with the backend store and implementing the CRUD operations requested. 

Resolvers can be implemented in any language, and can interact with any number of backend stores. This fact alone explains some of the popularity of GraphQL: any given schema can be the collation of multiple backend stores. A GraphQL server could for example use SQL and no-SQL data sources at the same time to serve its schema.
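
As a rough illustration of the idea (the shape of the resolver map matches common JavaScript/TypeScript GraphQL servers such as Apollo Server, and the in-memory array is a hypothetical stand-in for whatever backend store the resolvers actually call), resolvers for the schema above might look like this:

import { randomUUID } from "node:crypto";

type UserRecord = { ID: string; username: string; Name?: string; Email?: string; password?: string };

// In-memory stand-in for a backend store; each resolver could just as easily
// call out to SQL, a document store, or an external API.
const users: UserRecord[] = [];

const resolvers = {
  Query: {
    // Resolves the "User" query defined in the schema, honoring the pagination option.
    User: (_parent: unknown, args: { options?: { limit?: number } }) =>
      users.slice(0, args.options?.limit ?? 100),
  },
  Mutation: {
    CreateUser: (_parent: unknown, args: { username: string; Name?: string; Email?: string; password?: string }) => {
      const user: UserRecord = { ID: randomUUID(), ...args };
      users.push(user);
      return user; // the schema says CreateUser returns the User object
    },
    DeleteUser: (_parent: unknown, args: { ID: string }) => {
      const index = users.findIndex((u) => u.ID === args.ID);
      if (index === -1) return false;
      users.splice(index, 1);
      return true; // the schema says DeleteUser returns a Boolean
    },
  },
};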

Now for the “Graph” part in GraphQL, which is probably the main driver for the language’s popularity.

GraphQL enables developers to create relationships in the schema between its various types. For example, in an IoT use-case, a user may own one or more devices. 

This (User)-[OWNS]->(Device) relationship can be modeled as follows in GraphQL:

type Device {
  ID: ID!
  IP: String
  Name: String
  DeviceID: String
}
type User {
  ID: ID!
  Email: String
  Name: String
  password: String
  username: String!
  Devices: [Device!]! @relationship(type: "OWNS", direction: OUT, properties: "since")
}

So now we’ve just defined a graph layer on top of any kind of backend database(s)! Note that using a graph database as a backend makes the most sense here, as it removes certain issues such as the “Object/Relational Impedance Mismatch” problem. Essentially, GraphQL is a graph layer sitting over any set of data stores.
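
In resolver terms, the OWNS relationship above is simply another field-level resolver on the User type. A minimal sketch, again in TypeScript, with a hypothetical in-memory store standing in for the real backend:

type DeviceRecord = { ID: string; DeviceID: string; Name?: string; ownerID: string };

// Hypothetical store of devices and who owns them.
const ownedDevices: DeviceRecord[] = [
  { ID: "d-1", DeviceID: "DABC1", Name: "Flow Sensor 1", ownerID: "u-1" },
];

const relationshipResolvers = {
  User: {
    // Called once per User returned by the parent resolver; a real
    // implementation might run a graph traversal or a SQL JOIN instead.
    Devices: (user: { ID: string }) =>
      ownedDevices.filter((device) => device.ownerID === user.ID),
  },
};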

Given a schema as defined above, a client App could then send a request to get all the Users and the Devices they own in one single call. The GraphQL Query would look something like this:

query getUserDevices {
  User (options: {    ← The type we’re querying
    limit: 10 }) {    ← The pagination option: fetch at most 10 users
    ID                ← The user properties to return
    username
    Name
    Devices {         ← We’re now traversing the OWNS relationship to Devices
      ID              ← We’re on the Devices related to the User, fetch some properties
      DeviceID
    }
  }
}

🛑  Important note: in the above query, we’re accessing two types: the Users and the Devices they own. These are two types of resources: each may have its own access policies and requirements. How would you protect that? (answer in the next section).

Anyway, the response always matches the “shape” of the request. For example, the results of the query above would look like this:

[
  {
    "ID": "4f45a8eb-a674-4d63-8273-2c01796fc7f2",
    "username": "alexb",
    "Name": "Alex Babeanu",
    "Devices": [
      {               ← This is a Device entity
        "ID": "5b2ed103-50ce-4f2b-9a73-c53515ab4856",
        "DeviceID": "DABC1",
        "Name": "Flow Sensor 1"
      }
    ]
  },
  {
    "ID": "1f9affc3-4d0b-41c9-b109-979c1d9d9f24",
    "username": "bdoe",
    "Name": "Bob Doe",
    "Devices": []     ← Bob doesn’t own any devices
  }
]

Queries like the above are hard to implement in REST. An endpoint specific to this query would have to be developed, which takes time to code anew. Then, clients would be stuck with this exact query; if they ever needed even slightly different data, the REST endpoint would need to change accordingly.

Now compare REST with the GraphQL approach: clients can freely request any property of any type, while traversing any relationship, at any given time, as long as they are requesting something defined in the schema. Nobody needs to rewrite anything at all on the server side. GraphQL is a full-blown language. Pretty powerful indeed; no wonder it’s so popular! And it’s probably the only viable approach if you have to handle complex, highly connected data such as what is found in the Web 3.0 universe, for example.

Problems with GraphQL

But as we know, with great power comes great… headaches.

The very flexibility of GraphQL makes it rather difficult to deploy and operate, so much so that some even recommend never to expose GraphQL APIs to the internet… Among the many challenges, we can cite:

  • Poor performance, due to the risk of running n+1 queries
  • Security and Access
  • Hard to perform File Upload
  • Change management
  • Object/Relational impedance mismatch (due to using a graph layer on top of a non-graph backend)
  • Type explosion
  • Vulnerability to deep queries
  • And more

Now, most of the above are purely software development problems that can be solved or alleviated through the use of appropriate techniques. For our purposes here, as Identity and Access Management (IAM) professionals, we will focus only on security and access issues. We will also briefly discuss the problem with deep queries, as a Denial of Service (DoS) attack is still a security issue.

Enforcing API Security

Developers notoriously hate having to deal with Authentication (AuthN) and Authorization (AuthZ), which are often done as after-the-fact band-aids, typically resulting in poor API security (to say the least). This has led us to the traditional way of securing APIs by delegating the tediousness of AuthN and AuthZ to external services.

The Traditional Way

The traditional (and textbook) way of enforcing API security is to place some kind of Proxy server in front of the API: a Gateway that acts as a gatekeeper that filters the requests. The requests that reach the APIs have therefore already been vetted by the Access Control system.  Figure 1 below depicts this traditional access enforcement architecture.

Figure 1 – Traditional architecture for Access Control
[source: LDAPwiki – https://ldapwiki.com/wiki/Policy%20Based%20Management%20System]

  • The Policy Enforcement Point (PEP) is a proxy deployed in front of our traditional REST API. The PEP typically relies on HTTP routes to determine which resource is requested by the client.
  • The PEP sends an access request to a Policy Decision Point (PDP).
  • The PDP takes care of authentication (and may thus initiate a login flow) and then authorization.
  • For authorization, the PDP fetches adequate access policies through a Policy Administration Point (PAP).
  • Optionally, the PDP can also fetch some additional data from a Policy Information Point (PIP), which has all the adequate attribute values necessary to make access decisions.
  • The PDP replies with an access answer to the PEP, which then lets the request reach the API, or not.

In any case, this architecture is fine as long as your resources to be protected can be modeled as distinct HTTP routes. But as we’ve seen, GraphQL exposes only one single route for a potentially huge number of possible types of requests. We therefore can’t use this architecture at all, not unless our PEP can also parse POST requests and understand GraphQL! 

Now, some recent proxy offerings do support GraphQL. Nevertheless, as we’ll see below, this model goes against GraphQL best practices.

Securing GraphQL APIs

From an IAM perspective, the main worries here are AuthN, AuthZ and Deep Queries. 

Architecture

The first consideration is the architecture we need to implement our IAM system. Whereas the traditional way would have us place a PEP proxy in front of our resources, the GraphQL specification and best practices tell us that the authorization checks should all be made at the Business Layer. Figure 2 below depicts the placement of this Business Layer:

Figure 2 – Placing Authorization in the GraphQL stack

[source: https://graphql.org/learn/thinking-in-graphs/#business-logic-layer]

This is radically different from what we, in IAM, have been doing so far! No more Proxies – all the enforcement should be done in the API implementation itself. This actually makes sense, in that the Business logic layer is where all the data processing happens. The data is readily available there, it is therefore the best place to evaluate access policies.

But wait, does this mean that we have to trust the developers with this? Thankfully, not necessarily, as we’ll see…

The Two Places for the Business Layer

The “Business Layer” can mean two things in GraphQL implementations:

  1. At the database level

Here we’re implementing access control by augmenting the queries that would normally run on the DB to process the incoming requests. This is typically done by dynamically adding WHERE clauses to the query as appropriate, in order to filter the results based on predefined access rules. This is as deep in the “business” layer as it gets: we’re at the DB level (a minimal sketch follows the pros and cons below).

Pros:

  • Full control of the data
  • Super-fine grained authorization possible

Cons:

  • Definitely developer work
  • Access Policy changes require code changes
  • Really hard to manage policies.
  • Arguably a bit late to enforce AuthZ at the DB level. A lot of processing could have been avoided by checking access earlier.
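
As a minimal sketch of what database-level enforcement can look like (the table, column names, and the caller shape are all hypothetical, and a real system would use parameterized queries as shown rather than string concatenation), the resolver compiles the access rule directly into the SQL it sends:

type Caller = { subjectId: string; roles: string[] };

// Database-level enforcement: the access rule becomes part of the SQL itself.
function buildUserQuery(caller: Caller): { sql: string; params: string[] } {
  const baseQuery = "SELECT id, username, name, email FROM users";

  // Admins see everyone; ordinary users only see their own record.
  if (caller.roles.includes("ADMIN")) {
    return { sql: baseQuery, params: [] };
  }
  return { sql: `${baseQuery} WHERE id = $1`, params: [caller.subjectId] };
}

The downside noted above is visible even in this tiny example: the access rule lives in application code, so changing the policy means changing and redeploying the resolver.
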
  2. At the GraphQL Layer

Now luckily, GraphQL provides us with a tool that we can leverage to enforce access control: schema directives, and in particular the @auth directive…

Directives are annotations that can be added to the types and fields of a GraphQL schema. By convention, an @auth directive is used for authorization concerns (it is not built into the specification; you declare it in your schema, as below). For example, we can augment our simple schema defined above with an @auth annotation as follows:

directive @auth(requires: Role = ADMIN) on OBJECT | FIELD_DEFINITION
→ Auth directive that takes a “requires” input parameter of type “Role”. It defaults the Role parameter to “ADMIN”. It can be applied to a Type or Field of objects.
type User @auth(requires: USER) {   → This type requires subjects to have a “USER” Role
  ID: ID!
  Email: String
  Name: String
  password: String
  username: String! @auth(requires: ADMIN)   → Access to this property requires an “ADMIN” role
  Devices: [Device!]! @relationship(type: "OWNS", direction: OUT, properties: "since")
}

This directive requires its own implementation function, which can be written in any language and perform any processing. The GraphQL server ensures that the directive implementation function is run first, before any operation on the annotated object or field. This is perfect, as it can serve, in effect, as a PEP placed directly within the GraphQL schema at the business layer.

⇒ Recommendation: use the @auth annotation to make a call to an external Authorization Service or PDP as needed. This is the closest implementation to the traditional PEP/PDP architecture we can get.

Authentication

AuthN is actually “easy”, in that it isn’t much different from securing REST APIs. It could be done through a Proxy, a Gateway or out-of-band, or as the first step in the @auth directive implementation. The trick is to ensure that the access_token provided in the incoming request is valid.

Authorization

Calls to the PDP can be made from the @auth directive function, right after the authentication check.
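
Putting the two steps together, here is a minimal sketch, in TypeScript, of the logic an @auth directive implementation might run: validate the incoming access token first, then delegate the decision to an external PDP. The verifyAccessToken() helper and the PDP endpoint URL are hypothetical placeholders, not a specific product’s API:

type AuthContext = { accessToken?: string };

async function enforceAuth(
  context: AuthContext,
  requiredRole: string,   // the "requires" argument of the @auth directive
  resource: string        // e.g., "User.username"
): Promise<void> {
  // 1. Authentication: make sure an access token is present and valid.
  if (!context.accessToken) {
    throw new Error("Unauthenticated: no access token provided");
  }
  const subject = await verifyAccessToken(context.accessToken);

  // 2. Authorization: ask the external PDP whether this subject may act on this resource.
  const response = await fetch("https://pdp.example.com/access", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ subject: subject.sub, resource, requiredRole }),
  });
  const decision = await response.json();

  if (decision.allow !== true) {
    throw new Error(`Access denied to ${resource}`);
  }
}

// Hypothetical token validation: in practice this would verify the JWT's
// signature, issuer, audience, and expiry against your identity provider.
async function verifyAccessToken(token: string): Promise<{ sub: string }> {
  if (token.length === 0) throw new Error("Invalid token");
  return { sub: "example-subject" };
}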

Throttling

The last piece of the puzzle is due to the very nature of GraphQL. As we have seen above, it is possible to request the traversal of relationships in any Query. It is therefore also possible to request “deep” queries. Here’s an example of a deep query:

query getUserFriends {
  User {
    Name
    HAS_FRIEND {
      to {
        Name
        HAS_FRIEND {
          to {
            Name
            HAS_FRIEND {
              to {
                Name
                … etc. …
              }
            }
          }
        }
      }
    }
  }
}

If the backend database(s) stores several million users, a single query like the one above can easily hang the whole system. This can be used easily for DoS attacks, but also accidentally by unaware clients.

Usual Throttling techniques can’t be used either. We’re only talking here about one single request, not N requests per second. Instead of relying on the typical time-based request count, we need to implement a new kind of throttling by calculating the cost of any given GraphQL request (in particular, queries).

Various techniques exist for mitigating this risk. One handy method is to calculate the cost of any query before running it. This cost can additionally be used to implement various subscription levels in a SaaS, for example. In any case, several open-source cost-analysis libraries exist, for example: https://github.com/pa-bru/graphql-cost-analysis.

Cost analysis is also the most flexible and versatile approach, in that it applies to all queries without having to change the GraphQL schema at all.
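
To illustrate the idea, here is a minimal sketch (in TypeScript, using the parser from the graphql reference implementation) of a depth check run before execution. Depth limiting is a crude stand-in for full cost analysis, the maxDepth value is an arbitrary example, and fragments and per-field weights are ignored for brevity:

import { parse, Kind, DocumentNode, SelectionSetNode } from "graphql";

// Compute the deepest nesting of field selection sets in a query.
function selectionDepth(selectionSet: SelectionSetNode | undefined): number {
  if (!selectionSet) return 0;
  let max = 0;
  for (const selection of selectionSet.selections) {
    if (selection.kind === Kind.FIELD) {
      max = Math.max(max, 1 + selectionDepth(selection.selectionSet));
    }
  }
  return max;
}

// Reject a query before execution if it nests deeper than the preset limit.
function assertQueryDepth(query: string, maxDepth = 5): void {
  const document: DocumentNode = parse(query);
  for (const definition of document.definitions) {
    if (definition.kind === Kind.OPERATION_DEFINITION) {
      const depth = selectionDepth(definition.selectionSet);
      if (depth > maxDepth) {
        throw new Error(`Query depth ${depth} exceeds the limit of ${maxDepth}`);
      }
    }
  }
}

A check like this would reject the deep getUserFriends query above long before it ever reaches the database.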

⇒ Recommendation: implement cost analysis to protect any GraphQL API. Note that cost analysis can also be done in the @auth directive.

Conclusion

IAM professionals should be aware of GraphQL as its adoption across the industry is growing. Securing GraphQL requires at least some specialist knowledge: traditional methods don’t apply.

The best and least intrusive method is to use special directive annotations on the GraphQL schema, acting as PEPs from within the API’s business layer, and delegate the authorization tasks, as usual, to an external PDP service.

Additionally, it is imperative to compute the cost of any GraphQL query before running it, because of the risk of deep queries and easy DoS attacks. Only run queries whose cost is below a preset limit.

Alex Babeanu

Co-Founder and CTO of 3Edges

IDPro’s mission is to globally foster ethics and excellence in the practice and profession of digital identity. We do that by supporting IAM professionals with the Body of Knowledge, the CIDPRO certification and our vibrant members-only community. 

To better serve our community, we are streamlining our membership structure, making IDPro more accessible to a broad audience and enabling members to upgrade as their goals evolve.  

IDPro membership will now consist of two levels: 1) individual membership and 2) corporate membership, with tiers ranging from Ash at $300 to our Diamond tier at $50,000. Corporate membership benefits closely map to the previous levels.

The streamlined membership levels introduce new benefits like CIDPRO exams and exam discounts, and in many cases increase the number of individual memberships included as part of a corporate membership. The membership agreement will be updated to reflect the new structure. As per the agreement, changes will not take effect for existing members until 60 days after the revised agreement has been sent out. To be clear, your membership level will not change unless you agree to it. 

As we continue to grow, each member helps IDPro toward its goal of defining, supporting and improving the digital identity profession while standardizing key terminologies and furthering the discussion on IAM industry topics.

Visit the membership section of the website here for more information and to reach out with questions.

During the latest IDPro® webinar, the 2022 Skills, Programs & Diversity Survey Q&A, IDPro founders Ian Glazer and Andi Hindle shared an overview of the 2022 IDPro Skills Survey, discussing the survey development process and important trends for identity and access management industry professionals while also answering audience questions.

The 2022 IDPro Skills Survey is the fifth annual installment of IDPro’s digital identity industry survey and is made possible by the ongoing support of our members. The Skills Survey provides an opportunity to better understand the goals, interests, skills and trends among identity industry professionals and their employers. The survey has continued to expand to examine industry members’ and their employers’ priorities as well as inclusion and diversity within the IAM industry.

The webinar was a great opportunity to further the discussion into leading trends within the digital identity community, including:

  • The COVID Spring: A concept where organizations and individuals shifted their priorities, interests and skills within the digital identity industry after emerging from the COVID pandemic.
  • The Rise of Applied Identity: This change in the identity profession demonstrates the rise of identity practitioners who take identity technology and apply it in different disciplines.
  • Misalignment of Individual and Enterprise Interests: There is a notable mismatch between what identity professionals believe will matter and what the enterprise is interested in within the IAM industry.

During the Q&A, Ian and Andi responded to questions from the audience furthering the discussion on these trends. The questions we received included:  

  • How did you differentiate between practitioners and vendors? Or how did you remove non-practitioners from the survey?
  • Are there any plans to bring forth additional certifications, including developer-focused or a level-up?
  • What is meant by “lateral thinking” within the survey?

If you’re interested in learning more, the webinar recording is available on our YouTube channel and the full report is available for download on our website.

IDPro® is excited to welcome CVS Health as the newest Champion member of our organization of digital identity professionals.

CVS Health is a leading health solutions company, delivering care and improving the health of communities across America through its local presence, digital channels and over 300,000 dedicated colleagues, including over 40,000 physicians, pharmacists, nurses and nurse practitioners. CVS assists people with managing chronic diseases, staying compliant with their medications or accessing affordable health and wellness services in the most convenient ways. CVS Health helps people navigate the health care system and their personal health care by improving access, lowering costs and being a trusted partner for every meaningful moment of health. 

As a Champion member of IDPro, CVS Health receives a nominating committee seat, unlimited individual memberships for employees, and prominent brand recognition across the website, at conferences and seminars, and throughout member communications. CVS Health also receives the ability to create content for IDPro’s monthly newsletter the Monthly Member Update, early access to the annual IDPro Skills, Programs & Diversity survey, access to our members-only Slack channel to discuss IAM topics, CIDPRO exam discounts and more. 

“We are excited to have such a prominent company with a strong influence within the health and wellness industry join our organization and demonstrate its commitment to the importance of digital identity,” said Heather Vescent, Executive Director and President, IDPro. “We aim to ensure that the disciplines of digital identity and access management are globally seen as vital and vibrant counterparts to privacy and information security and, with the help of member companies like CVS Health, we can achieve this goal.”

IDPro continues to grow as more industries recognize the necessity of digital identity and access management. As we welcome more members into IDPro, the discussion is furthered on identity and access management topics within our Slack channels and throughout our published content. Furthermore, the need for certification tools and identity resources, like the CIDPRO exam and the Body of Knowledge compendium, expands.

To learn more about our membership options, you can view our members’ benefits chart and learn more about these benefits on our website. Stay tuned for future member updates on the IDPro community!

I have a “smart” TV which apparently can act like a hub for other “smart” appliances (made by the same vendor). This is a feature I had no reason to use, until…

I got a washer and dryer from the same vendor which are also “smart.” And thus an opportunity to break stuff appeared. So, I downloaded the vendor’s mobile app and signed up with Sign In With Apple (SIWA) which creates an account without a password.

SIWA makes it easy for users to sign in to your apps and websites using their Apple ID. It uses a WebAuthn-like process to create an unphishable credential – which is pretty darn nice and handy, BUT…

This kind of credential requires that I am on an Apple device to use it… and that brings me back to the “smart” TV. After having set up the washer and dryer in the mobile app I wanted to see them on the TV cuz why wouldn’t I?

The TV isn’t an Apple device and only offers a few social sign-on options along with a username/password option, none of which I have thanks to SIWA. So much for using the TV as a hub for devices that I associated with the mobile app account. And if someone else in the household wanted to use the “smart” features they would have to log into my account, which can only be accessed by my Face ID/Touch ID. 

I kinda guessed this was going to happen and I chose this path to see if, in fact, things were going to break as expected…and they did. To be clear, this post isn’t to shame the vendor. In the olden days, I would have created a username/password credential, stored it in a password manager, and then used it on the TV… and even shared it within the household.

Whilst I strongly believe that customer identity and access management (CIAM) is about experience and bottom-line growth, it also has security implications. You can have a great experience and great security, but you have to plan carefully… and even if you do, the future is unknown. New popular authentication ceremonies will arise that your now-legacy products should accommodate but will struggle to. This is all to say that B2C companies need identity professionals sitting shoulder to shoulder with user experience, product management, marketing, and security.

Lastly, if anyone has a good use case for password-less credentials used to get an alert that the dryer has finished, please let me know! 😉 

Thanks to @samuelgoto for jogging the hazy memory loose: IF the TV implemented Sign in with Apple, then it would prompt me for my email address to sign in with, but if I used the “Hide my email address” feature SIWA provides, then I am certainly out of luck.

Ian Glazer

Senior Vice President, Identity Product Management, Salesforce.com

Board Member Emeritus, IDPro.org

Ian Glazer is the Senior Vice President for Identity Product Management at Salesforce. His responsibilities include leading the product management team, product strategy, and identity standards work. Prior to that, he was a research vice president and agenda manager on the Identity and Privacy Strategies team at Gartner, where he oversaw the entire team’s research. He is the co-founder of IDPro, the professional organization for digital identity management, and works to deliver more services and value to the IDPro membership, raise funds for the organization, and help identity management professionals learn from one another. During his career in the identity industry, he has co-authored a patent on federated user provisioning, co-authored and contributed to user provisioning specifications, and is a noted blogger, speaker, and photographer of his socks.

IDPro® would like to warmly welcome our newest corporate members Easy Dynamics Corporation, Cross River and IDENTOS! 

Easy Dynamics Corporation is a leading technology services provider focused on cybersecurity, cloud computing and information sharing. Easy Dynamics brings well-architected solutions and management consulting to their clients to align them with the best practices they demand. Easy Dynamics provides outstanding technical excellence and advises its customers on both tactical and strategic initiatives.

Cross River is an American financial services organization that provides technology infrastructure powering the future of financial services to fintech and technology corporations. Cross River delivers innovative and scalable embedded payments, cards, lending and crypto solutions to millions of consumers and businesses. Together with its partners, Cross River is reshaping global finance and financial inclusion.

IDENTOS is a Toronto-based corporation that designs and develops digital identity & access technology to meet modern demands of user-centricity, respect for privacy and distributed system interoperability. With IDENTOS, organizations connect with far greater confidence, compliance and agility.

As corporate members, Easy Dynamics Corporation, Cross River and IDENTOS receive access to numerous branding opportunities on the IDPro website, the ability to create content for IDPro’s monthly newsletter the Monthly Member Update, early access to the annual IDPro Skills, Programs & Diversity survey, access to our members-only Slack channel to discuss IAM topics, CIDPRO exam discounts and more. IDPro offers different levels of corporate membership each featuring varying benefits. You can compare membership options in our members’ benefit chart and read more about these benefits on our website.

IDPro continues to grow as more industries, corporations and professionals recognize the importance and necessity of the digital identity industry. With more members joining, the discussion is furthered on identity and access management topics within our Slack channels and throughout our published content. Furthermore, the need for certification tools and identity resources, like the CIDPRO exam and the Body of Knowledge compendium, expands.

Stay tuned for future member updates on the IDPro community!

We are pleased to share that 25 people have completed IDPro’s Certified Identity Professional (CIDPRO) exam and have become CIDPRO certified. This officially concludes the CIDPRO exam’s beta test and demonstrates its continued growth as an essential, foundational piece in validating identity experts’ industry knowledge.

To become CIDPRO certified, candidates must complete the CIDPRO exam, which tests the individual’s knowledge across a range of industry subject matter, including the functional and operational elements of an identity solution; core concepts of security for identity; rules and standards; and identifiers, identity lifecycle, and identity proofing. Becoming CIDPRO-certified validates a candidate’s personal experience and knowledge of essential identity skills, helping to increase credibility through verified, vendor-neutral foundational knowledge. 

Read testimonials from recently certified CIDPRO certification holders sharing why the exam is important for them:

“CIDPRO fills a significant gap in the professionalization of digital identity as the importance of this essential discipline continues to increase within enterprise security. The breadth of the exam materials and questions personally helped me get up to speed with many aspects of identity that I was unfamiliar with, from privacy laws and NIST standards to the practical challenges of running a helpdesk. It is so valuable that we are considering making it our standard training for new employees at SGNL.ai.” Romain Lenglet – Chief Software Architect – Identity and Access Management at SGNL.

“The CIDPRO exam is important to me for two reasons: it helps me validate my individual identity knowledge and it enables me to demonstrate my commitment to the digital identity field. As one of the first real certifications in the area of identity, the CIDPRO exam promotes identity and access management as an important aspect of any organization.” Paul van Berlo – Solution Architect Modern Workplace / Microsoft 365 at Philips Domestic Appliances.

“The CIDPRO exam questions are carefully designed by IAM domain experts. Great care is taken to ensure that candidates really understand the context of the content of the Body of Knowledge (BoK) articles and have not just memorized them. I was able to apply the knowledge I acquired from the BoK in a variety of ways in my professional practice. I am proud to have passed the exam and to be an active and supporting member of IDPro.” Martin Lössl – Senior Consultant at Ventum Consulting.

Following the successful completion of the CIDPRO exam’s beta test, we are actively incorporating feedback to improve the candidate experience. We plan to continue developing additional in-depth study materials and collaborate with external trainers to expand the exam and its reach. Our main goal is to create an all-inclusive digital identity exam that enables identity and access management industry practitioners to validate their experience and demonstrate their knowledge.

Register for the upcoming CIDPRO webinar for more information.

We warmly welcome the newest corporate IDPro® members, Radiant Logic and Target, to our community of identity and access management professionals!

Radiant Logic is an American computer software corporation developing solutions for identity and enterprise information integration, information security, and data management. Radiant Logic’s RadiantOne solution assists corporations in unifying and simplifying their identity management processes.

Target is an American general merchandise retailer with 1,938 stores nationwide and over 400,000 employees. Target offers many products including groceries, general merchandise—clothes and home decor—and pharmacy.

As corporate members, Radiant Logic and Target gain access to branding opportunities on IDPro materials, the ability to help create content for the Monthly Member Update, early access to the annual IDPro Skills, Programs & Diversity survey, access to our members-only Slack channel, CIDPRO exam discounts, and more. IDPro offers different levels of corporate membership each featuring different benefits. You can compare membership options in our members’ benefit chart and read more about these benefits on our website.

IDPro continues to grow as more industries, corporations, and professionals recognize the importance and necessity of the digital identity industry. With more members joining, the discussion is furthered on identity and access management topics within our Slack channels. Furthermore, the need for certification tools and identity resources, like the CIDPRO exam and the Body of Knowledge compendium, expands.

Stay tuned for future member updates in the IDPro community and an upcoming article from our new board.

After two long COVID years, it is once again possible to meet up with your IAM industry peers. If you are US-based, you have had Identiverse® and Gartner IAM to enjoy, but for people based in the EU there is also a wealth of choices.

In the Netherlands we have the annual IdNext event running on September 27 and 28 in Utrecht. The theme this year is “Decentralized identities in practice” and if this piques your interest, please take a look at the agenda: 

IDnext Event 2022

(In the interest of full disclosure I do have to mention that I am part of the program committee for IdNext.)

The Whitehall Media IDM Europe conference takes place on October 4, also in Utrecht, and features several interesting talks by a number of distinguished speakers including our own Anmol Singh.  Full details are at:

IDM Europe 2022

Sweden has finally gotten a proper IAM conference in the form of IdentityDay that runs on October 5 in Stockholm:

IdentityDay

Later in the year, KuppingerCole is inviting us to come to Berlin November 8 to 10 to attend their Cybersecurity Leadership Summit. Although this conference is primarily focused on cybersecurity, there will be a fair number of IAM talks as well.

Cybersecurity Leadership Summit 2022

In addition to these larger conferences, you may also want to attend one or more of the various IAM meetups including:

IdentiBeer Oslo

IdentiBeer Copenhagen

IdentiBeer Helsinki

Amsterdam Digital Identity Meetup

Martin Sandren

IDPro® Founding Member

Ahold Delhaize

Martin Sandren is a security architect and delivery lead with over twenty years of experience in various information security-related roles. He is primarily focused on security architecture and digital identity, including global-scale customer, privileged, and employee IAM systems using the Microsoft Azure Active Directory, SailPoint, Saviynt, ForgeRock, IBM, and Oracle security stacks. His experience includes roles as an architect, onshore and offshore team lead, and individual developer. He has wide international experience, having lived and worked in Sweden, Germany, the UK, the USA, and the Netherlands. Martin is a frequent speaker at international conferences such as Consumer Identity World, MyData, and the European Identity and Cloud Conference.

“In my role as IAM engineering manager, I lead our global team of IAM engineers and BAs who continuously strive to provide quality IAM services to our 750,000 associates in 20+ opcos.”

Martin Sandren is a board member of the IdNext foundation, founder of the Digital Identity Amsterdam meetup and active within IDPro. Learn more and sign up

A friend of mine recently received his new electric vehicle, full of all the expected modern experiences and connectivity. So of course he needed to set up yet another online account with its own username and password. He was kind enough to share his experience with his fellow IDPro members in our Slack channel, and the discussion was spirited! (If you’re not already a member of IDPro, sign up today and join the discussion! Our Slack space is a wealth of valuable information and gives you access to people with some of the deepest IAM experience on the planet.)

First off, it was nice to see that the vendor has put a priority on providing its customers with a “convenient and safe digital experience.” Kudos to them! To that end, they provided some password advice. I’ll share their recommendations in just a moment, but before I do, let me just share that opinions on the efficacy of these six suggestions vary. Widely. To quote my friend, “4 of these are great advice, 1 might be okay advice, and the other is folklore advice”. See if you can identify which is which!

The vendor suggested that passwords should, and I quote:

  1. Be at least 12 characters long. The longer your password, the better.
  2. Use uppercase and lowercase letters, numbers and special symbols
  3. Not contain obvious keyboard paths e.g. 12345qwerty
  4. Not be based on personal information such as your birthdate or name
  5. Be unique for each account you have
  6. If possible, be generated and stored with the help of a password manager.

So, let’s take a look at these. At first blush, they all seem like good advice. And certainly the more technical folks among our readership are going to have no problem with them. But then, it’s not just the techies who buy modern vehicles with online features. So we need advice that’s good for everyone.

Number one: This is good advice. Yes, we’re all concerned about password cracking, and longer ones are harder to crack. But they’re also harder to guess or discover through social engineering. So, I’m good with this one and so were my colleagues in IDPro. In fact, a phrase you can remember is a good idea here because it results in a longer password. I’ll just drop an xkcd link here…

Number two: Yes, it’s conventional wisdom, but not really as helpful in increasing security as number one, and it makes it harder to remember your password, leading to writing them down, which we all think is not desirable. To quote a fellow IDPro member I deeply respect, a better #2 might be “You can use any characters in your password, including symbols and spaces.” He went on to elaborate that “we don’t want systems to limit the input, but we also don’t want them to require it.”

Number three: Per my IDPro colleagues, it would be better to simply prevent these lame passwords with a password blocklist. That is, rather than telling people not to do that, just don’t let them do it in the first place. If they try to use such a password, the system should “Just say NO!”

Number four: Yup. Solid advice.

Number five: I like this one. It really limits the blast radius if someone manages to steal one of your passwords. If you’re using the same password on Facebook, Amazon, your online bank, and other obvious targets, and someone gets it, you have to know they’re going to hit all those sites as quickly as possible to do the most damage. Having different passwords provides protection.

Number six: So, if you buy into #5, you’re going to need #6 to make your life manageable. No one can remember all those passwords. Seriously. The modern world is insane with accounts and passwords. In the course of writing this article, I did a quick count in my password manager and it’s over 300 unique accounts and passwords. Obviously, I need to simplify my life, but until I do, I need my password manager.

One closing tip from me. Password managers can help bring order to your digital life and limit the potential damage of any one compromised password, but they also rely on a password. If you do use a password manager, please please please make sure you enable its MFA features! That password is the key to your life, so make sure it’s protected by MFA. Oh, and apply the advice above to your password manager password to keep it protected. Lastly, use MFA everywhere it’s available until we can get to something better than passwords for all these online services that dominate our lives.

Greg Smith

Chair, IDPro Editorial

Radiant Logic

Greg Smith is a Solutions Architect with Radiant Logic. He has been implementing Identity & Access Management solutions for over 35 years. He holds BSEG and MSBA degrees from Bucknell University, where he also began his professional career before moving into the pharmaceutical industry in 1996. Following a 25-year career there, he recently retired from Johnson & Johnson, where he led the engineering team for J&J’s single sign-on, risk-based authentication, multi-factor authentication, access governance, directory synchronization and virtualization, provisioning automation, and PKI services. He has spoken at Identiverse® and other industry events on numerous occasions. He was recently CIDPRO™ certified and is also a founding member of IDPro, where he currently chairs the editorial committee.

We are pleased to announce that the results of the 2022 IDPro Skills, Programs & Diversity Survey are now available to our members. 

This is the fifth annual installment of the digital identity industry survey conducted by IDPro and made possible thanks to our members and participants. IDPro’s annual surveys are a deep dive into the identity and access management industry. 

We initially created the survey to assess the goals, interests, and skills among identity professionals. Since that time, we’ve expanded the Skills, Programs & Diversity Survey to examine industry members and their employer’s priorities as well as inclusion and diversity within the IAM industry. 

The survey took place amid numerous global geopolitical events: the conflict in Ukraine, the continued global impact of COVID-19, and economic challenges. These situations played a role in the responses, with some surprising, and not-so-surprising, results.

Notable trends found in this year’s results include:

  • COVID-19 continues to impact our society and—by extension—the challenges facing digital identity professionals.
  • Applied identity is a newly emerging field as individuals within organizations—who do not today consider themselves identity practitioners—apply identity services to address particular business needs.
  • Traditionally on-premises directory services have shifted to remote operation due to the COVID-19 pandemic and subsequent lockdowns.
  • Personal identity technologies are becoming more important for enterprises; however, enterprises have yet to respond to this ongoing trend.

Members are encouraged to read the full report, available for download in the IDPro Slack channel.

IDPro Founders Ian Glazer and Andi Hindle will be hosting an exclusive live Q&A where IDPro members can continue the discussion of the survey trends. Stay tuned for the registration announcement on Slack.

The IDPro Body of Knowledge (BoK) offers information on topics ranging from Identifiers and Usernames to the IAM Reference Architecture. One of the most broadly useful articles, though, is Terminology in the IDPro Body of Knowledge.

Every article in the BoK is expected to include a terminology section. The purpose of this section is two-fold: first, to normalize (when possible) on common definitions, and second, to make sure the context in which a particular definition is used is clear. Article authors are encouraged to review and use existing definitions before offering new ones for terms already described in the BoK.

While the terms listed are not a complete set of definitions for the entire field of IAM (there are ‘only’ just over 200 terms included as of July 2022), it is a start. The context an individual reader brings to the table will influence how accurate the terminology is for their environment.

In an ideal world, the industry would normalize on concrete, common definitions for terms like “authentication” and “authorization.” With every new article, though, we’re getting closer to that future as authors learn from others who have published in the BoK.

This industry will not normalize on terms until we understand and adapt to the different ways the terms are used, and come to consensus on how to be consistent.   

You can be a part of improving the industry! Lend your time and talents toward this goal. If you have any thoughts or feedback regarding any of the terms listed in the Terminology document, please add an issue in GitHub against one of the articles where the term is used. 

Heather Flanagan

Principal Editor

IDPro

Heather Flanagan, IDPro Principal Editor and Principal at Spherical Cow Consulting, comes from a position that the Internet is led by people, powered by words, and inspired by technology. She has been involved in leadership roles with some of the most technical, volunteer-driven organizations on the Internet, including  the IETF, IAB, and IRTF as RFC Series Editor, ICANN as a Technical Writer, and REFEDS as Coordinator, just to name a few. If there is work going on to develop new Internet standards, or discussions around the future of digital identity, she is interested in engaging in that work.

By Greg Smith

The Internet Identity Workshop meets twice a year and publishes proceedings for those who were unable to join. IIW34 was held at the end of April, and our own Heather Flanagan led a session entitled “What do you wish you’d known when you first started in identity?”, a topic near and dear to all of us in IDPro. Here’s a quick overview of some of the thoughts participants came up with (I deliberately did not edit the bullets captured from the whiteboard):

Wish I’d been there for the live discussion! These are some of the same challenges we’ve all had throughout our careers. Fortunately, we now have IDPro to help newcomers to the identity and access management industry with some of these challenges, starting with our Body of Knowledge, which addresses many of the questions above.

The green statement added to the first bullet stating that there’s “always a new context to solve for” especially rang true for me. This is a field that is constantly evolving, and you’re never really “done”. That definitely feeds into the “Don’t worry solve everything” idea. Huh? Wait, what? I had to check with Heather on that one, and in the heat of the moment, words were missed on the whiteboard. The discussion actually went along the lines of “Don’t worry about solving for everything; every process is an evolution.” Okay, that makes more sense. Looking at identity from an agile perspective, this is clearly a practice that benefits from iteration as new contexts continually show up.

Not captured on the whiteboard, but every bit as relevant, IDPro member Joe Andrieu said “I wish I knew that identity is how we recognize, remember, and respond to specific people and things. I also wish I knew that different people have fundamentally different mental models of what identity means. And we often talk past each other even as we honestly try to communicate.” So true! He also shared links to his Functional Identity Primer and Five Mental Models of Identity articles with the group. Definitely worth a read, folks!

What else do you wish you’d known when you got started in this space? Let us know in our Slack workspace and keep the conversation going.

Greg Smith

Chair, IDPro Editorial

Radiant Logic

Greg Smith is a Solutions Architect with Radiant Logic. He has been implementing Identity & Access Management solutions for over 35 years. He holds BSEG and MSBA degrees from Bucknell University, where he also began his professional career before moving into the pharmaceutical industry in 1996. After a 25-year career there, he recently retired from Johnson & Johnson, where he led the engineering team for J&J’s single sign-on, risk-based authentication, multi-factor authentication, access governance, directory synchronization and virtualization, provisioning automation, and PKI services. He has spoken at Identiverse® and other industry events on numerous occasions. He was recently CIDPRO™ certified and is also a founding member of IDPro, where he currently chairs the editorial committee.

by David William Silva, PhD

This is the last article of a series of four on the basics of the General Data Protection Regulation (GDPR). In the first article, we covered context, motivations, and goals. In the second article, we reviewed terminology and basic definitions. In the third article, we discussed examples and applications of some of the main building blocks of GDPR. In this fourth article, we review some of the most critical issues in the GDPR while identifying, classifying, and analyzing each one in practical terms.

Without a concrete instance of an application subject to GDPR compliance, one might read the GDPR text from a dangerously relaxed perspective, which can lead (and has led) to GDPR violations, overwhelming fines, and further administrative penalties. On the other hand, generally speaking, it is not always clear how to ensure GDPR compliance. Resorting to the GDPR text without a strategy might feel like drinking from a fire hose. The whole point of this series of four articles on the GDPR is to offer a gentle introduction to the subject in a gradual, structured way.

The primary motivation behind this fourth and last article is to propose a way to identify key regulatory components that can be classified into major groups so we can discuss their importance and practical implications. 

We organized the following discussion into four major groups: Must Know, Must Do, Better Have, and Better Do. It goes without saying that this is a non-exclusive and non-exhaustive discussion. Instead, for each of these major groups, we will select one or a few examples that constitute a good start on the road to GDPR compliance. The “analysis” piece of this article will be presented as an informal discussion to keep this article within an acceptable length.

Must Know

If there are components that anyone interested in GDPR must know, these are probably the applicability and non-applicability of the Regulation and associated fines. The GDPR text can sometimes be very specific and practical, while some other portions leave too much room for interpretation. In any case, establishing a knowledge foundation is the best one can do towards GDPR compliance.

Applicability and Non-Applicability of the GDPR

The General Data Protection Regulation (GDPR) establishes rules for protecting natural persons with regard to the processing of personal data (Article 1). The GDPR “applies to the processing of personal data wholly or partly by automated means and to the processing other than by automated means of personal data which form part of a filing system or are intended to form part of a filing system” (Article 2). The Regulation applies to the processing of personal data of data subjects in the European Union (EU), regardless of the processor’s location (Article 3).

Natural person and data subject are synonymous. Personal data means any data that identifies or has the potential to identify a natural person. Processing describes any operation executed on personal data. A processor is a natural or legal person who processes personal data on behalf of a controller (Article 4).

The GDPR does not apply to “the processing of personal data which concerns legal persons and in particular, undertakings established as legal persons, including the name and the form of the legal person and the contact details of the legal person” (Recital 14). Legal entities often operate as processors. Although the GDPR does not apply to data that identifies legal entities, legal entities often possess data that identifies natural persons (their customers). The GDPR therefore protects those customers’ right to privacy (Article 28).

Penalties

Perhaps the most important exercise for an organization intending to process data that can be seen as personally identifiable information (PII) is to identify which parts of the GDPR do and do not apply in the context of the application the organization is responsible for. It is not rare to see organizations downplaying the need to comply with privacy regulations such as the GDPR in an attempt to overlook its severity. However, in 2021, fines of up to $823.9 million were issued for GDPR violations.

Violations can seem subtle for some organizations already in possession of personal data. In 2020, a company was fined $29.3 million for failing to obtain consent or to inform customers about the use of their personal data for telemarketing purposes.

The first step towards compliance is, obviously, to know the requirements and their applicability. In some portions of its text, the GDPR advises that in case of doubt, the requirement must be fulfilled regardless, such as in the case of performing a privacy impact assessment.

Furthermore, all the general conditions for imposing fines, with different levels of severity, can be found in Article 83 of the GDPR. 

The GDPR establishes fines and further remedies or corrective powers when a violation occurs. Fines must be “effective, proportionate and dissuasive for each individual case. For the decision of whether and what level of penalty can be assessed, the authorities have a statutory catalogue of criteria which it must consider for their decision”. Severe violations (Article 83) are subject to fines of up to 20 million euros or up to 4% of an organization’s global turnover of the preceding year, whichever is higher (GDPR Fines and Penalties).

Must Do

Not all procedures and specifications in the GDPR are mandatory, and most of what is mandatory is subject to exceptions under proper conditions. However, if there is one issue above all others that can never be neglected, it is the requirement for consent. We discussed consent in the previous articles of this series, and we return to this subject to place it in the Must Do group from a practical perspective.

Consent 

As we discussed in previous articles of this series, if an organization aims to process personal data, a mechanism for obtaining the consent of data subjects must be in place. In line with Council Directive 93/13/EEC, a pre-formulated declaration of consent must be presented in an intelligible and easily accessible form, using clear and plain language, and must not contain unfair terms. Before providing consent, the data subject should have no doubt about the controller’s identity and the purpose of the processing for which their personal data is being requested.

The Regulation summarily prohibits the processing of personal data unless expressly allowed by law or by the data subject. Besides consent, other legal bases also allow the processing of personal data, such as contract, legal obligations, vital interests of the data subject, public interest, and legitimate interest, according to Article 6(1). Processing personal data without consent or one of these other legal bases is a violation. (Key Issues: Consent)

Recall that consent must be “freely given, specific, informed, and unambiguous.” If processing personal data has been enabled by consent, whoever is processing that data must be able to prove that the data subject has indeed consented to the processing of their data. The data subject has the right to withdraw their consent at any time, and this process must be as easy as it was to give the consent. Withdrawing consent does not affect the lawfulness of the processing of data based on consent before its withdrawal. Conditions for Consent, Article 7, is part of the main principles of the GDPR.

At any indication that consent was obtained under pressure, penalty, and/or by some form of imposition, consent will not be regarded as freely given since, in this case, the data subject is unable to refuse or withdraw consent without detriment.

The GDPR prohibits the processing of personal data that reveals “racial or ethnic origin, political opinions, religious or philosophical beliefs, or trade union membership, and the processing of genetic data, biometric data to uniquely identify a natural person, data concerning health or data concerning a natural person’s sex life or sexual orientation.” Article 9 establishes several exceptions to this prohibition, including law enforcement activities, support of court procedures, public interest, and legal inability of a data subject to give consent.

The processing of data (by third parties) that leads to identifying data subjects is a violation of the GDPR. (Key Issues: Personal Data)

Processing of personal data is allowed when the processing no longer permits the identification of data subjects, provided that appropriate safeguards exist, such as pseudonymization (Recital 156).

Consent for personal data collection and processing for a particular purpose is not everything and certainly not the end of an organization’s concerns with respect to GDPR compliance. Still, it is undoubtedly one of the most important first steps toward the lawful processing of personal data.

Better Have

The term “better” here does not imply any relaxation with respect to obligations imposed by the GDPR. As mentioned earlier, some requirements are followed by conditions and exceptions which might release an organization from associated obligations. The term “better” here implies that even if it is not objectively mandatory, some requirements are so important that it is better for an organization to address them than otherwise. That is, the benefits of implementing some measures outweigh any associated inconvenience.

Data Protection Officer

The GDPR establishes the concept and conditions for the obligation of organizations to have a Data Protection Officer (DPO). The legal obligation to appoint a DPO does not depend on the size of the organization “but on the core processing activities, which are defined as those essential to achieving the company’s goals. If these core activities consist of processing sensitive personal data on a large scale or a form of data processing which is particularly far-reaching for the rights of the data subjects, the company has to appoint a DPO.” The GDPR also establishes that “willful or negligent failure to appoint a Data Protection Officer despite a legal obligation is an infringement subject to fines” (Key Issues: Data Protection Officer).

Organizations need to take the need and role of a DPO seriously. The DPO must be impartial and empowered to assist the organization in implementing all necessary protective measures to meet GDPR requirements. The DPO cannot perform functions that place them in a position of conflict of interest.

Appointing a DPO is one of those measures that an organization processing personal data may want to have in place even when its legal obligation to do so is unclear; the immediate benefits outweigh the risks and penalties associated with failing to appoint one.

Additional information about the DPO role, including the qualifications a DPO should have and how to hire one, is available.

Better Do

Once again, “better” here does not intend to relax any obligations from the Regulation. Instead, we use it to identify and put together mechanisms, procedures, and requirements which are better to address even if an organization falls into some condition in which it is not obligated to comply.

Privacy Impact Assessment

An organization that intends to process data must first conduct a privacy impact assessment (PIA) or data protection impact assessment (DPIA) and document it. If certain measures are in place, a PIA or DPIA might not be absolutely necessary. A PIA or DPIA is mandatory if risks from data processing are high. In case of doubt or difficulty in determining risk, a DPIA should be conducted. (Key Issues: Privacy Impact Assessment)

Records of Processing Activities

When personal data is processed, the GDPR obligates written documentation and an overview of the procedures by which personal data is processed. (Article 30) This documentation must be made entirely available to authorities upon request. (Key Issues: Records of Processing Activities) Not maintaining records of processing activities is a violation of the GDPR (Article 83(4)(a)).

Procedural Rights

A data subject has the right to access personal data being processed. Omitted or incomplete disclosure of access to personal data being processed upon request is subject to fines. (Key Issues: Right to Access) Any right provisioned by the GDPR, such as the Right to be Forgotten and the Right to be Informed, must be observed when applicable. 

Safeguards

The GDPR establishes that security measures must be considered and implemented according to risk assessment. These measures include (but are not limited to) pseudonymization, encryption, mechanisms for ensuring confidentiality, integrity, availability, and resilience, regular testing, ongoing evaluation of the effectiveness of present measures, and continuous improvement of the security of processing (Article 32).

Data Minimization

Data minimization is the principle that personal data be “adequate, relevant and limited to what is necessary in relation to the purposes for which they are processed” (Article 5). It is about collecting and processing only the data that is absolutely required for the purposes stated when consent was requested.

Data minimization can prevent organizations from accidentally violating GDPR requirements for processing personal data, such as purpose limitation, where data is only collected for the legitimate purposes stated when requesting consent and not further processed in a way that violates its limits. Data minimization can also reduce risks and liabilities when processing personal data, such as in the case of data leakage.
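
As a purely illustrative sketch (the field names and purpose are hypothetical), data minimization can be enforced at the point of collection by letting only the fields needed for the stated purpose survive:

ALLOWED_FIELDS = {"email", "display_name"}   # purpose: account creation only

def minimize(submitted: dict) -> dict:
    # Anything not needed for the stated purpose is dropped before storage.
    return {k: v for k, v in submitted.items() if k in ALLOWED_FIELDS}

form_data = {"email": "jane@example.com", "display_name": "Jane",
             "date_of_birth": "1984-05-01", "phone": "+351 555 0100"}
stored = minimize(form_data)   # date_of_birth and phone are never persisted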

Processing personal data might be allowed for particular purposes such as archiving, scientific or historical research, or statistical purposes as long as appropriate safeguards are in place. These safeguards aim to ensure that required measures are in place, particularly the principle of data minimization (Recital 156).

Data minimization is part of general data protection principles recognized by the GDPR, such as purpose limitation, limited storage periods, data quality, data protection by design and by default, the legal basis for processing, processing of special categories of personal data, measures to ensure data security, among others (Article 47).

Anonymization

The GDPR does not apply “to anonymous information, namely, information which does not relate to an identified or identifiable natural person or to personal data rendered anonymous in such a manner that the data subject is not or no longer identifiable.” Furthermore, the GDPR “does not, therefore, concern the processing of such anonymous information, including for statistical or research purposes” (Recital 26).

Although anonymization is allowed by the GDPR, it is well known that such techniques are faulty (Broken Promises of Privacy: Responding To The Surprising Failure of Anonymization). At least since the late 2000s, schemes for de-anonymizing data have been proposed (Robust De-Anonymization of Large Sparse Datasets).

Pseudonymization

Pseudonymization “means the processing of personal data in such a manner that the personal data can no longer be attributed to a specific data subject without the use of additional information, provided that such additional information is kept separately and is subject to technical and organizational measures to ensure that the personal data are not attributed to an identified or identifiable natural person” (Article 4).

The GDPR establishes that techniques such as pseudonymization can reduce risks to the data subjects and help controllers and processors meet their data-protection obligations. The explicit introduction of pseudonymization is not intended to exclude any other measures for data protection (Recital 28).
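
As a purely illustrative sketch (not a method prescribed by the Regulation), keyed tokenization is one common way to approximate this definition in Python: direct identifiers are replaced with tokens that cannot be linked back to a person without a secret key, and that key is the “additional information” to be stored and governed separately. The environment variable name below is hypothetical.

import hashlib
import hmac
import os

# In practice the key would live in a separate system (e.g., an HSM or vault)
# under its own technical and organizational controls.
KEY = os.environ.get("PSEUDONYMIZATION_KEY", "demo-key-do-not-use").encode("utf-8")

def pseudonymize(identifier: str) -> str:
    # Return a stable pseudonym for a direct identifier such as an email address.
    return hmac.new(KEY, identifier.encode("utf-8"), hashlib.sha256).hexdigest()

record = {"customer": pseudonymize("ada@example.com"), "purchase_total": 42.50}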

The GDPR acknowledges that techniques such as pseudonymization may be reversed by unauthorized parties, which constitutes a violation (Recital 85).

Encryption

Organizations can reduce the probability of a data breach as well as the risks of fines by resorting to the encryption of personal data. Processing data is naturally associated with a certain degree of risk. The GDPR recognizes encrypted data as unreadable by non-key owners, which therefore minimizes the risks in case of incidents during data processing. Furthermore, the GDPR recognizes encryption as the best way to protect data in transit and at rest (Key Issues: Encryption).
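
As a minimal sketch only, and assuming the third-party Python “cryptography” package (any vetted library or a cloud key management service would serve the same purpose), encrypting a personal-data field at rest might look like the following. The hard part is key management: the key must be stored and access-controlled separately from the ciphertext.

from cryptography.fernet import Fernet

key = Fernet.generate_key()          # in practice, fetched from a KMS or vault
fernet = Fernet(key)

ciphertext = fernet.encrypt("jane.doe@example.com".encode("utf-8"))
plaintext = fernet.decrypt(ciphertext).decode("utf-8")
assert plaintext == "jane.doe@example.com"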

Authentication

User authentication is part of the concept of Privacy by Design discussed in the Regulation (Key Issues: Privacy by Design). If not done properly, authentication can become an opening for a GDPR violation instead of a safeguard. One example would be collecting more information from a natural person than is necessary to implement an authentication mechanism and, from there, making inferences about the individual that exceed the scope of authentication. Recital 51 notes that some personal data is, by its nature, particularly sensitive in relation to fundamental rights and freedoms.

If a controller cannot identify a natural person, requesting additional data for identification purposes is allowed but not mandatory; the controller should not, however, refuse additional information offered by the data subject in support of the exercise of their rights (Recital 57).

Where To Go From Here

The IDPro Body of Knowledge offers an introduction to the GDPR and a discussion on the impact of GDPR on identity and access management. The full GDPR text is available online in a friendly format. Some templates are also available such as the Data Processing Agreement, instructions on how to write a GDPR-compliant privacy note, and the Right to Erase Request Form. The European Data Protection Board has a GDPR-centric news feed which can be useful for keeping up with the latest developments about GDPR.

About the Author

David William Silva is a Senior Research Scientist at Symetrix Corporation and Algemetric and is responsible for the research and development of innovative products related to security, privacy, and efficient computation powered by applied mathematics. David started his career as a Software Engineer focused on web services and agile software development, which led him to be involved with several projects from startups to government and large corporations. After 17 years of conducting R&D in Brazil, David moved to the US to engage in scientific research applied to a global industry of security and privacy, which has been his focus for the past seven years. 

By Vittorio Bertocci

After having attended in person one Identiverse, two EICs, one AuthenticateCon, one IETF, one OSW and one IIW, I thought I definitely left behind the woes of the Lockdown Winter that forced our favorite events to take place in the netherspace that is Zoom or proprietary eponym equivalents. Boy, was I wrong. None of those events prepared me for a conference that takes over entire blocks, where the expo alone is large enough (700+ exhibitors!) to have its own weather system, if not its own zip code. Above all, I wasn’t prepared for an event where you no longer have direct line of sight with your tribe, and the majority of the badge-clad people are perfect strangers.

The very distrustful conference site (it asked for my username/password every couple of hours; is this what continuous authentication means?) offered a staggering 612 sessions. One first-time attendee asked me – “how do you choose what to attend at RSA?”. My answer: use the search feature to tease out what’s relevant to your interests. That is also what I have done: as a result, expect this report to be a very partial & personal account of the event. If you want to broaden your perspective, you can chat to other IDPros who were there; you’ll find them on #RSAConference in our IDPro Slack.

Content highlights

Despite all the rhetoric about identity being the new perimeter and other platitudes, and some shoutouts to OIDC and FIDO from the keynote, RSAC 2022 had very little content on our favorite topic. The Identity track had 44 sessions, 13 of them being vendor sponsored and 7 being duplicates (overflow rooms). 

While zero trust was still one of the dominating buzzwords, I was surprised to see that a search for “blockchain” only returned 5 sessions, “web3” exactly zero, and “decentralized” only one track session, from IDPro’s very own George Fletcher.

George’s session, “Managing De-Centralized Identities: A Relying Party Perspective”, was one of the highlights of the conference for me. The first time I saw George present on this topic was at an IIW in 2019. In a nutshell, the session takes the tools and principles of Self-Sovereign Identity (SSI)/decentralized identity and tries to apply them when developing a realistic relying party. In so doing, he uncovers discrepancies, opportunities, and impedance mismatches that are both a powerful tool for understanding SSI’s value proposition and an honest litmus test for assessing the maturity level of those new technologies. TL;DR: things have improved since that first 2019 session, but much still needs to be figured out for those technologies to be viable in real-world use.


After the session, a bunch of IDPros and identirati congregated right outside, engaging in an incredibly satisfying 30-minute discussion on VCs, passkeys, and their potential impact on society. Those 30 minutes alone were worth the trip; what a JOY to be among one’s people!

Passkeys were the centerpiece of the other awesome event-in-the-event I had the chance to attend, the half-day FIDO Alliance seminar on – surprise surprise – passwordless and passkeys in particular. The Apple WWDC announcements about passkeys created a huge interest around the topic, and the seminar was the perfect opportunity to demystify the technology and get a glimpse of the enormous potential it has to finally deliver a substantial blow to passwords in consumer authentication. Our very own IDPro member Tim Cappalli delivered key parts of the event, from the very first live cross device/vendor passkey demo to a very lively panel.   

Remaining in IDPro territory, mighty board member and acclaimed book author Jon Lehtinen presented a session on “Demystifying the Identity Capabilities of AWS for Enterprise Practitioners”; no one will be surprised to learn that it was well received.

Just because it’s RSA, and the RSA experience needs some “pure security” to be complete, I decided to attend “Bypassing Windows Hello for Business and Pleasure” –  and I wasn’t disappointed. The lengths to which the researcher had to go in order to defeat Windows Hello were substantial, and he presented his journey with flair and competence. If you have access to the on-demand content, I would recommend this session.

In Summary

Is RSA still worth the very hefty admission price? From the content perspective, I am honestly not sure. I got lucky with George and passkeys, but the dearth of identity content is concerning, though I suspect part of the fault falls on ourselves – I personally had some submission fatigue and didn’t propose anything. Perhaps we should resolve to submit in bigger numbers and see whether we move the needle. From the experience perspective… I’d say it’s a resounding YES. True, we identirati have other opportunities to see each other, but with its sheer size, parties (rrrisky) and general vibe, RSA remains an important milestone in the conference calendar, and I am glad it’s back!

About the Author

Vittorio Bertocci is a Principal Architect for Auth0|OKTA. A veteran of the identity industry, in his 20-year career Vittorio has helped create, shape, and steer key identity products, technologies, and practices. Vittorio currently serves on the OpenID Foundation board of directors and is the host of the Identity, Unlocked podcast. An active member of the identity community, Vittorio is a well-known speaker, educator, and published author.

As part of IDPro®’s continued efforts to promote a diverse and inclusive identity community, we are pleased to announce that we are offering two Diversity & Inclusion Packages for those wishing to attend Identiverse® 2022. 

These packages include one Identiverse event ticket, donated by Identiverse, and up to $1,000 for expense reimbursement, fully funded by generous donations from IDPro members.

“We are excited to be able to offer these Diversity & Inclusion Packages to the identity community. I have been a firsthand witness to the impact these values are having on this industry and am very proud of our organization for being able to support this effort.” Heather Vescent, Executive Director and President of IDPro.

To be considered, please submit a personal statement of no more than 300 words to director@idpro.org by 11:59 PM PDT on June 7, 2022. Your personal statement should answer the following questions:

  1. Can you please share a little bit about your background?
  2. How did your interest in identity come about?
  3. What do you hope to learn at Identiverse 2022?
  4. Why are diversity and inclusion important to you?
  5. Are you willing to write a brief blog post or be interviewed about what you learn at Identiverse 2022? 

Please include any social media links in your personal statement. 

Our vision at IDPro drives us toward enabling a diverse, supportive, and inclusive identity community and we are grateful for our dedicated members who are helping us achieve this important goal. We look forward to reviewing your submissions and we hope to see you at Identiverse 2022!

by David William Silva, PhD

This is the third article of a series of four about the General Data Protection Regulation (GDPR) basics. In the first article, we covered context, motivations, and goals. In the second article, we reviewed terminology and basic definitions. In this third article, we discuss examples and applications of some of the main building blocks of GDPR.

Default Prohibitions and Non-Applicability of the Regulation

In our first article, we remarked that the “Regulation is all about protecting people, their privacy, their right to privacy, their right to own and protect their data, to choose what can be shared and with who, in which conditions, for how long, and to which extent.” It is also well-known that the Regulation is feared by many as it is recognized as “the world’s strictest security and privacy law.” However, in almost every section of the GDPR text, virtually all restrictions are followed by some form of exception that describes the context in which a particular data processing prohibited by default can be allowed, which means that under certain conditions, the prohibitions introduced by the Regulation shall not apply. This can be seen as a statement that while protecting natural persons and their rights, the Regulation also acknowledges the reality that organizations need data to function properly.

In the remainder of this article, we will highlight the cases and conditions in which the Regulation does not apply, thereby releasing organizations from the associated risk of privacy violations and fines.

Data

In order to highlight how GDPR rules can affect an organization’s operation, we must observe the practical difference between the two main classifications of data. As we discussed in the previous article and according to Article 4 of GDPR, personal data means any information related to an identified (directly or indirectly) or identifiable natural person, also known as a data subject, via identifiers such as name, identification number, location data, IP address, etc.

According to Recital 51, personal data that is, by its nature, particularly sensitive in relation to fundamental rights and freedoms merits specific protection. Personal data is a key issue in the GDPR, and sensitive personal data is the most heavily protected of the special categories of personal data, which include genetic, biometric, and health data, as well as racial or ethnic origin, political opinions, religious or ideological convictions, and trade union membership. Article 9 states that the processing of sensitive personal data, including data concerning a natural person’s sex life or sexual orientation, shall be prohibited.

Article 9 presents a list of conditions under which this prohibition does not apply. The first is that the data subject has given explicit consent to the processing of their personal data for one or more specific purposes.

In some parts of the world, a data subject might be exposed to verbal and physical assault (and in some cases even risk of death) if information regarding their sexual activities and/or sexual orientation is accessed and processed without consent. In many places, an individual’s political and ideological views, which can often be directly or indirectly inferred from habits such as shopping and entertainment, can also lead to all sorts of discrimination and other prejudicial actions. Some organizations might unlawfully calibrate their decisions towards their audience based on information related to race and ethnicity. The Regulation is very aware of all of these risks; it therefore imposes requirements to prevent violations of a natural person’s rights and freedoms and defines severe penalties for those who, for any reason, violate these requirements.

Data Protection Officer

The GDPR establishes that, regardless of its size, an organization might be obligated to appoint a Data Protection Officer (DPO) if data processing is essential to achieving its goals. A DPO can be internal (an employee) or external. If an organization chooses to appoint an employee as its DPO, it must ensure that the employee is not subject to a conflict of interest (e.g., supervising themselves). The DPO must work to ensure the organization’s compliance with the Regulation and cannot be dismissed or penalized for fulfilling their tasks. Regardless of the motive, not appointing a DPO is an infringement subject to fines if an organization is found to be legally obliged to do so.

If the DPO is an employee, it is better for that person to act exclusively as the DPO. Choosing the IT manager or someone in HR to fulfill this role might create a conflict of interest and therefore defeat the DPO’s purpose.

Data Protection

Two mechanisms can serve as enablers of lawful data processing under certain conditions when it comes to data protection. The first provides a two-fold benefit: it serves the data subject’s privacy interests while releasing organizations from certain provisions of the Regulation. That mechanism is anonymization. Recital 26 presents the principles of data protection applied to any information related to an identified or identifiable natural person, which do not apply to anonymous data. In other words, personal data that has been anonymized is exempt from the requirements of the GDPR.

Anonymization is described as a process that produces information that no longer relates to an identified or identifiable natural person. Therefore, it is an interesting mechanism since the Regulation allows the general processing of this type of data.

A corollary of the above is that any other mechanism that undeniably produces information no longer related to an identified or identifiable natural person enables the same benefits associated with anonymized data.

The second data protection mechanism is pseudonymization. Article 4 describes pseudonymization as the process that produces information that can no longer identify a natural person without considering additional information. Pseudonymization is allowed by the GDPR as long as such additional information is kept and handled separately and subject to security and administrative measures to prevent the re-identification of a natural person.

The main distinction between anonymization and pseudonymization is that the likelihood of re-identification is higher in the case of pseudonymization.
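
A deliberately trivial sketch can make the distinction concrete (illustrative only; the record fields are hypothetical). Pseudonymization keeps a separate mapping that can re-identify the person, while anonymization discards direct identifiers and generalizes the rest so that no such mapping exists:

import uuid

pseudonym_map = {}   # the "additional information"; must be stored separately

def pseudonymize_record(record: dict) -> dict:
    token = str(uuid.uuid4())
    pseudonym_map[token] = record["email"]    # re-identification stays possible
    return {"subject": token, "birth_year": record["birth_year"]}

def anonymize_record(record: dict) -> dict:
    # Direct identifiers are dropped and quasi-identifiers generalized;
    # no mapping back to the individual is retained anywhere.
    return {"birth_decade": record["birth_year"] // 10 * 10}

person = {"email": "jane.doe@example.com", "birth_year": 1984}
print(pseudonymize_record(person))
print(anonymize_record(person))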

Trivial techniques for anonymization and pseudonymization might not provide the results one would expect from data protection mechanisms. It is not news that anonymized information can be “deanonymized”; in fact, some schemes for deanonymization have been available since the late 2000s. Depending on the type of data, deanonymization attacks can be tailored to a particular niche.

When it comes to security and privacy measures, the GDPR recommends that controllers choose methods for securing personal data that take into account, among other aspects, the state of the art.

Data Protection Impact Assessment

Data processing can be performed in many different ways. Sometimes it involves standard procedures within well-known scenarios; in these cases the risks and benefits are known, and further analysis before data processing might not be needed. In other cases, data processing can involve new technologies applied in contexts and for purposes that have not yet been comprehensively vetted or even properly understood. In these cases, the controller, advised by the DPO, shall execute a data protection impact assessment before the data processing; a single assessment may address a set of similar operations that present similarly high risks. Article 35 states that an impact assessment is required, among other cases, when automated processing produces legal effects concerning data subjects and when a publicly accessible area is systematically monitored on a large scale.

With respect to qualifying data processing, a data protection impact assessment shall include:

  • At least a description of the involved operations.
  • An assessment of the needs and purposes of each operation.
  • An assessment of the risks to the rights and freedoms of data subjects induced by each operation.
  • The measures to mitigate the risks.

Data Protection by Design and by Default

Conducting a risk assessment prior to any new data processing routine is undoubtedly recommended for any organization working towards GDPR compliance. However, it might not be enough. Article 25 establishes the notion of data protection by design and by default within the scope of the GDPR. This notion involves executing assessments and data protection measures while designing and developing data processing operations. This approach greatly increases the chances of an organization meeting the GDPR requirements.

An organization that implements data protection by design and by default may advertise self-proclaimed compliance with this notion or, as provisioned in Article 42, voluntarily submit to a transparent third-party certification process.

Data Breach

Article 4 defines a personal data breach as “a breach of security leading to the accidental or unlawful destruction, loss, alteration, unauthorized disclosure of, or access to, personal data transmitted, stored or otherwise processed.” A breach must be reported to the supervisory authority no later than 72 hours after its detection, and the affected data subjects must be notified without undue delay if the breach is likely to pose a high risk to the rights and freedoms of natural persons.

Contrary to what many people think, not all data breaches result from a sophisticated cyberattack. With many companies currently adopting the “bring your own device” culture, data breaches might occur if an employee loses a device (external drive, phone, laptop, etc.). If an organization does not have a policy for disposing of documents, it might be a target of an attack known as dumpster diving, which can lead to unauthorized access to data under the protection of the GDPR.

Among other duties, the DPO must work to increase security and privacy awareness, educate the members of the organization they work for, and ensure that the required procedures are being observed. Unintentional actions of an apparently harmless nature can cause severe damage to someone’s rights and freedoms. This can happen with an email containing sensitive information sent to the wrong recipient, or a text message sent to a group of people that exposes the association of those people with matters that were supposed to be kept private.

Consent

What might not be apparent to the casual observer is that the processing of personal data is generally prohibited, with two exceptions: 1) when it is expressly allowed by law, and 2) when the data subject has consented to the processing. Consent is thus one of six legal bases for processing personal data; besides consent, the GDPR also allows contract, legal obligations, vital interests of the data subject, public interest, and legitimate interest.

One of the most common ways of obtaining consent is via web forms that request the user to opt-in for receiving marketing-related information, which is typically optional. However, some services will require the user to agree to the organization’s terms and services and privacy policies to proceed (where proceeding could mean signing up to a web service, as an example).

GDPR and IAM

Biometric data is sensitive data. If biometric data is required for the purpose of authentication, consent must be requested for that purpose and nothing more. The same applies to other “what you are” types of information, such as behavior patterns, even if they do not sound as risky as others. Sensitive data can take the form of physical data (such as personal information in ID cards, physical documents, certificates, etc.) or digital data (videos, audio, images, etc.). These are often used for authentication procedures that leverage multiple different factors.

Even when used for security purposes, consent for processing sensitive information must be requested. If the processing is necessary to protect the data subject’s vital interests, the GDPR allows it. Any organization unsure about what qualifies as “vital” should not assume compliance, in order to avoid potential violations of the Regulation.

Some authentication techniques analyze images of the user. Each organization must review the type of data being used, for what purpose, and the associated provisions in the Regulation. For instance, GDPR distinguishes the processing of different types of personal data. Recital 51 states that “The processing of photographs should not systematically be considered to be processing of special categories of personal data as they are covered by the definition of biometric data only when processed through a specific technical means allowing the unique identification or authentication of a natural person.”

IAM plays multiple roles with respect to data privacy. In order to use personal and sensitive data from a user for authentication and authorization purposes, consent must be given from that user for the particular purpose expressed in the consent request. Once consent is given and personal and sensitive data are collected, IAM mechanisms can be used to protect access to that data.

DPOs can request the implementation of IAM policies, methodologies, tools, and procedures to prevent violations caused by negligent behavior, bad practices, ignorance, and malicious activities that might occur. In particular, strong IAM measures can prevent intentional and even accidental data breaches (such as those mentioned previously). An organization can refer to its IAM framework as part of providing evidence of GDPR compliance. User authentication is one component of the “privacy by design” notion. Recital 57 presents procedures for receiving additional data for identification purposes, which must be used to support the exercise of the data subject’s rights.

Summary

Even breaking down an overview of GDPR into four articles and dedicating one to examples and applications, it is virtually impossible to cover all (or any reasonably large amount of) examples and applications. In this article, we initiated a conversation about connecting some highlights of the GDPR text and associated concepts with their practical applications. An organization seeking compliance with the Regulation should take advantage of key points discussed in this article, such as exploring data processing over data that cannot identify data subjects, embracing the “data protection by default and by design” approach, properly requesting consent whenever the processing of personal data is needed, properly establishing a DPO, undergoing an external certification, and using IAM as one of the key GDPR compliance enablers.

About the Author

David William Silva is a Senior Research Scientist at Symetrix Corporation and Algemetric and is responsible for the research and development of innovative products related to security, privacy, and efficient computation powered by applied mathematics. David started his career as a Software Engineer focused on web services and agile software development, which led him to be involved with several projects from startups to government and large corporations. After 17 years of conducting R&D in Brazil, David moved to the US to engage in scientific research applied to a global industry of security and privacy, which has been his focus for the past seven years. 

Did you know about World Password Day? It takes place every year on the first Thursday in May and is meant to encourage people to consider their password practices and adopt some new – and healthy – digital security habits. 

We asked the IDPro community to share their thoughts on password safety and they didn’t hold back! 

“Use a different password for each site and use a password manager to generate and keep track of them all.” – Greg Smith

“When using passwords: self-service password reset is a must have. If MFA is not available, the ‘password forgotten’ email reset is a low-budget version of MFA.” – Andre Koot (@meneer)

“Don’t generate your own passwords. People are bad at being random. Have a computer generate it and either memorize it or use a password manager. If you can – especially if you need to memorize it – use a wordlist generator to create a very long but human-memorable password. Pro tip: if a site lets you have a long password with spaces but still has archaic complexity requirements, create a long wordlist password then append ‘Aa1!’ to the end of it to hit all the character classes.” – Justin Richer (@justin__richer)
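
For illustration, a wordlist passphrase of the kind described above can be generated in a few lines of Python; the eight-word list here is a tiny stand-in for a real diceware-style list of several thousand entries:

import secrets

WORDLIST = ["correct", "battery", "horse", "staple",
            "orbit", "velvet", "quartz", "meadow"]

# Six randomly chosen words give a long but human-memorable password.
passphrase = " ".join(secrets.choice(WORDLIST) for _ in range(6))
print(passphrase)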

“If you must use passwords, one trick is to use the hash of your password instead, salted with the domain. That way, it’s reproducible but still reasonably ‘random.’ It’s reproducible given your unique knowledge of the passphrase and uniquely salted for the particular website. This way you don’t have to store it in a password manager. If there is a character limit, use either the largest portion that the website will allow or some standard number of characters, or follow an algorithm. For example: google.com is 10 characters, so use the first 10… 

$ openssl passwd -6 -salt 'google.com' 'correct battery horse staple' | cut -d'$' -f4 | cut -c 1-10

Be sure to consider command line history if you adopt this method, though.” – Shannon Roddy

“When possible, don’t use passwords at all. With the imminent introduction of FIDO’s multi-device credentials, it will be easier than ever to leave those relics behind. This time, it’s really happening!” – Vittorio Bertocci (@vibronet)

“If it was up to me, I would introduce a minute of silence on World Password Day for all the forgotten passwords as part of breaches – followed by a demonstration of hate for passwords organized by the MFA (Movement For ‘better’ Authentication). I would finish the day by unsubscribing to a service provider I no longer use to reduce the storage needs for my password manager…and celebrate Cinco de Mayo!” – Elie Azerad (@ElieAzerad)

Learn more about World Password Day and share your thoughts with us on Twitter. And be sure to #LayerUp!  

Welcome to Identity Management Day 2022!

Identity management is the term that describes how organizations maintain effective security to prevent unauthorized users from obtaining access to secure systems. Good identity management keeps systems and people secure, enhances privacy, and enables efficient digital experiences for both businesses and individuals.

Identity Management Day was first hosted on April 12, 2021 by the Identity Defined Security Alliance and the National Cybersecurity Alliance to spread awareness about the importance of proper identity management and the dangers of improperly managing digital identities. 

We asked our members to share their best IAM practices for protecting digital identity. Learn from the best by following these 9 tips:

  1. Only collect the data you absolutely need to provide your product or service. The more data you have, the more attractive you become to attackers, and the more risk you take on.
  2. Bad data quality will kill every IAM approach. For example: people suddenly without managers, missing required data or having it disappear from a source overnight. Plan to keep the bad data out and when it creeps in (because it will) make sure you have tested  the unhappy path before you accidentally fire the CEO.
  3. Follow the ‘principle of least privilege.’ Meaning, don’t assign too many privileges to those who don’t need them; instead only assign what is needed to do their jobs.
  4. Prune and clean your account list and remove your “leavers”. It should be a no-brainer, but is actually an often-neglected control measure.
  5. Any MFA is better than no MFA (Multi-Factor Authentication). (see #6)
  6. If you’re using MFA, use Adaptive MFA. Don’t carpet-bomb every transaction with laborious authentication requirements, because other parts of your business could suffer (e.g., signup funnels). Have clear policies when you require stronger authentication and only present those prompts when necessary.
  7. Encrypt personally identifiable information (PII) and personal data (PD) at rest and in transit. Things like emails and phone numbers should never be stored or sent in cleartext.
  8. Block the use of known breached passwords / credentials (see the sketch just after this list).
  9. Adopt SSO (Single Sign-on) as a default practice. Friends don’t let friends connect things directly to LDAP for sign-on or local user ID/password pairs — they adopt SSO. You don’t know who wrote and tested a given application, much less what they actually contain for code or their patching practices. They do NOT need to handle clear text user ID and password pairs. Local accounts pose the risk of ghosting credentials, jeopardizing them, or handling them without the same duty of care needed for good security hygiene. SSO is vastly more helpful than trying to remember all the touch points on local credentials when revoking them. 
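
As a minimal sketch of tip #8, assuming use of the public Pwned Passwords range API (its k-anonymity lookup means only the first five characters of the password’s SHA-1 hash ever leave your system); any other breached-credential corpus could be substituted:

import hashlib
import urllib.request

def is_breached(password: str) -> bool:
    sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]
    req = urllib.request.Request(
        f"https://api.pwnedpasswords.com/range/{prefix}",
        headers={"User-Agent": "breached-password-check-example"},
    )
    with urllib.request.urlopen(req, timeout=5) as resp:
        body = resp.read().decode("utf-8")
    # Each response line is "<hash suffix>:<count>"; any match means the
    # password has appeared in a known breach and should be rejected.
    return any(line.split(":")[0] == suffix for line in body.splitlines())

Rejecting such passwords at creation and reset time is far more effective than merely asking users to avoid them.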

Now it’s YOUR turn to participate! 

Identity practitioners are encouraged to share their best security practices during the 2022 Identity Management Day Virtual Conference, inspiring others to employ effective strategies for securing their digital identities and helping leadership understand the importance of a strong identity management team. 
Want to learn more? Check out this 2022 RSAConference presentation by IDPro members – Vittorio Bertocci and Sarah Cecchetti – Securing Your Direct to Consumer Identity Strategy.

One of the core values of IDPro is “Transparency”, and with that in mind the Board has a well-established tradition of publicly publishing the annual plan for our organization.  The planning tool we use is the ‘V2MOM’ — vision, values, methods, obstacles and measures — and you can read more about that here.

Our plan for 2022 builds on last year’s successful launch of the CIDPRO. We want to make the exam available to as many professionals as possible. We aim to do just that by promoting the CIDPRO directly and working to significantly expand our membership, which will naturally increase the diversity of our already vibrant community of practitioners, leading to more meaningful support and mentorship.

Growth requires that our operations be as efficient and future-proof as possible, so we have included goals focused on improving the foundations of the organization and ensuring that we have the right people and processes in place to support our aspirations.

Finally, you’ll see a continued focus on the core work of IDPro, such as:

  • Consolidating the CIDPRO with a second form of the test,
  • Extending the Body of Knowledge,
  • Promoting the identity profession and supporting our peers at conferences and meetups, 
  • And understanding the state of our industry through our annual Skills, Programs and Diversity survey.

We are excited to see what 2022 brings and encourage members to participate in the community. You make this all possible!

IDPro Members can access the full 2022 V2MOM via the IDPro Slack channel. If you are not a member but are interested in reading the V2MOM and taking advantage of other IDPro membership benefits, we hope you’ll consider joining us.

by Heather Flanagan

This month marks the second anniversary of our first articles published to the IDPro Body of Knowledge (BoK). In the past two years, we’ve published 28 articles on foundational identity topics, from Authentication and Authorization to Practical Implications of Public Key Infrastructure for Identity Professionals.

The first two years of BoK topics were bootstrapped: any foundational identity topic was welcome. This has led to full coverage in some areas, like workforce identity topics, but less coverage in others, like access governance. Looking forward, the BoK Committee is developing an editorial calendar to focus on specific topics. In particular, we’re soliciting articles on:

The list goes on and is updated on our publication status page. 

I am incredibly proud of and grateful to a community that is willing to not just write about what they know best, but also take time to review material and offer the feedback that makes the IDPro BoK an incredible collection of information. Our industry is constantly evolving and maturing, and we have already refreshed 18 of our foundational articles to ensure the material is current.

But wait, there’s more! As the BoK evolves, so too does the program committee behind it all. We have recently updated the program charter and need new, member-driven leadership to help guide our strategic direction. This is an opportunity to lead and contribute to the professional knowledge base needed to build best-of-breed identity solutions.

Get in touch on Slack or at editor@idpro.org to volunteer. I look forward to hearing from you!

Heather Flanagan

Principal and IDPro BoK Editor

Spherical Cow Consulting

Heather Flanagan, Principal at Spherical Cow Consulting, comes from a position that the Internet is led by people, powered by words, and inspired by technology. She has been involved in leadership roles with some of the most technical, volunteer-driven organizations on the Internet, including IDPro as Principal Editor, the IETF, the IAB, and the IRTF as RFC Series Editor, ICANN as Technical Writer, and REFEDS as Coordinator, just to name a few. If there is work going on to develop new Internet standards, or discussions around the future of digital identity, she is interested in engaging in that work.

by Ian Glazer

Women In Identity (WiD) is a volunteer-run, international not-for-profit organization that promotes diversity and inclusion across the identity industry, aiming to “promote universal access which enables civic, social and economic empowerment around the world.”

As part of its ongoing work supporting the industry, WiD recently developed an Identity Inclusion Code of Conduct. The first research phase focused on a financial services use case stemming from interviews with individuals impacted by identity exclusion in the UK and Ghana as well as experts in the field. The goal was to better understand how these issues might be addressed by seeking responses to key questions:

1. Who are the key demographics excluded in digital identification to access financial services and products? How might they differ in mature and emerging markets? (Markets selected for this work were UK and Ghana)

2. What form does this exclusion usually take? What do users recommend in terms of inclusion?

3. What measures are product designers and policymakers taking to ensure inclusion? How can these be strengthened? How do those buying ID systems see how inclusion has been built in?

4. What might an Identity Code of Conduct for inclusion and diversity in identification for financial services look like? 

Output from their work to date includes video interviews, the report “The Human Impact of Identity Exclusion,” and a project outline. Much like our annual Skills, Programs & Diversity survey, WiD’s research is essential to ensuring that the digital identity and IAM communities are considering and incorporating perspectives and skills that might otherwise be overlooked.

“The lack of diversity and inclusion in identity systems and how that affects access to even basic financial services is a widely discussed problem, but the actual human impact is often far less well understood. This groundbreaking work highlights the stories and struggles of those who have faced exclusion firsthand.” — Louise Maynard-Atem, Research Lead Women in Identity

We are excited to see what will come of their ongoing research and will be eagerly awaiting updates. For more information and regular updates, visit the Women In Identity website and follow them on Twitter.

Ian Glazer

IDPro Co-Founder & Board Member

by Martin Sandren

If you have been working in the IAM space for a while it is quite interesting to see how some trends are born, gather momentum, and break through to the mainstream, while other trends fizzle out at some point in their lifecycle. 

Back in 2015, one strong emerging trend was social registration and login. The basic concept was to make it easier for potential customers to sign up for your product by leveraging the fact that customers had already provided key information to their social network of choice. Instead of typing the same information into your interface, the customer could simply share what they had already provided. The customer could also leverage their social network to facilitate the login through social logins, which meant that they did not have to remember a separate password. The most important social data providers varied by market, but Google, Facebook, and Twitter were significant in most European markets.

In 2015, many enterprises bought entire CIAM platforms whose core functionality was social registration and social login. The conventional CIAM players struggled to incorporate social features in their products to compete with the newer platforms and there were even projects where social logins were built as custom additions to conventional CIAM platforms by professional services teams.

A few years later, the lure of social login and registration was significantly diminished. Consumers are less interested in sharing information between platforms, and in many markets, such as Germany, businesses may feel that sharing information with the American FAANGs has dangerous privacy implications.

Meanwhile, there has been a budding movement for self-sovereign data, where the individual consumer controls their own data in some form of data wallet on their smartphone. The consumer chooses what data they want to share with whom through consent flows.

This movement did not really take off due to a classic chicken-and-egg challenge: to make it attractive for providers to support the setup, you needed a significant consumer population, and to make it attractive for consumers to bother installing and populating the wallet, you needed a significant service catalogue.

In some markets there were digital identity solutions that were successful, such as the BankID solution in Sweden and Norway and the DigiD solution in the Netherlands. These solutions managed to achieve significant penetration of the consumer market and critical mass amongst service providers.

Over the last couple of years, the self-sovereign identity movement has morphed into the decentralized identity approach and has gained support from a number of important regional and global players. One example of an important regional player is Datakeeper from Rabobank in the Netherlands, and the strongest global proponent is probably Microsoft. The European Union is also a strong proponent of an interoperable European Digital ID.

Over the next year, we will see if the decentralized approach manages to reach critical mass in any significant markets and become an interesting proposition for consumers, and therefore a must-have integration for service providers and CIAM vendors.

Martin Sandren

Domain Architect IAM, AholdDelhaize

Martin Sandren is a security architect and delivery lead with over twenty years of experience in various information security roles. He is primarily focused on security architecture and digital identity, including global-scale customer, privileged, and employee IAM systems using the Microsoft Azure Active Directory, SailPoint, Saviynt, ForgeRock, IBM, and Oracle security stacks.

Experience includes architect, onshore and offshore team lead as well as individual developer. Wide international experience gained through having lived and worked in Sweden, Germany, UK, USA and the Netherlands. Martin is a frequent speaker at international conferences such as Consumer Identity World, MyData and European Identity and Cloud Conference.

In my role as IAM engineering manager, I lead our global team of IAM engineers and BAs who continuously strive to provide quality IAM services to our 750 000 associates in 20+ opcos.

Martin Sandren is a board member of the IdNext foundation, founder of the Digital Identity Amsterdam meetup and active within IDPro.

Learn more and sign up at: https://www.meetup.com/Amsterdam-Digital-Identity-Meetup-Group/

by David William Silva, PhD

This is the second of four posts about the General Data Protection Regulation (GDPR) according to a proposed scheme for inspecting the Regulation, which starts by examining its context, motivations, and goals. In the first post, we saw that the GDPR protects natural persons concerning the processing of personal data, which is considered by the European Union (EU) a fundamental right that every EU citizen has. The Regulation is about establishing enforced standards for improving security and privacy mechanisms associated with the collection and use of personal data.

Now it is time to move to the second layer of understanding of the GDPR by discussing highlights of its terminology and basic definitions. Our goal in this post is to go beyond a dictionary-style list of terms and definitions. Instead, the building blocks of the Regulation’s terminology will be presented within a narrative that naturally continues the initial discussion about context, motivations, and goals.

Organization

When we look at the GDPR, we see some terms repeating more frequently than others, and we see many terms being defined in terms of fundamental ones. We refer to these terms as the main objects. These main objects are associated with main actions via a main tool, which is accessed or somehow explored by main actors. We will also single out what we describe as a main event. We will see that these labels are all related, directly or indirectly, to data. Therefore we will also discuss the main types of data covered by the Regulation. The pattern “the main _____” indicates that although there are other elements in each of these categories, the ones discussed in this post are clearly the most representative in the Regulation.

The Main Objects

When reading the GDPR, it is clear what the main actors of the Regulation are; we will talk about them later in this post. First, we will look at the two foundational terms we refer to here as the main objects: natural person and personal data.

A natural person, or data subject, is anyone who can be directly or indirectly associated with an identifier such as a name, an identification number, location data, or an email address, or with factors related to the identity of a person, whether physical, physiological, genetic, economic, cultural, or social. All the data that can lead to identifying a natural person is referred to as personal data.

The Main Actions

The main objects are the foundation for the remainder of the discussion in this post. Virtually everything in the Regulation is related to a natural person, personal data, or one of their derivatives. We refer to the portion of the Regulation that covers how to appropriately interact with the main objects as the main actions.

Personal data can be collected, generated, structured, adapted, consulted, organized, transmitted, altered, stored, and deleted. Whether or not performed by automated means, any of these actions or operations is a form of data processing. Personal data can be processed in many ways to achieve many purposes. To prevent unauthorized use of personal data, a restriction of processing can be invoked, which consists of marking stored data to limit its processing in the future, according to some well-defined scope.

The automated processing of personal data to analyze or predict aspects of a natural person associated with their performance at work, economic situation, health, personal preferences, interests, behavior, among others, is known as profiling. Sometimes personal data can be organized and processed so that it is no longer attributed to a natural person without additional information, often kept separately and subject to administrative measures that ensure that it is not used for identifying a natural person. This is referred to as pseudonymization.
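
To make pseudonymization concrete, here is a minimal, hypothetical sketch (not drawn from the Regulation itself) of the idea in code: direct identifiers are replaced with random tokens, and the token-to-person mapping is kept in a separate store under its own safeguards. The record structure and field names are assumptions made purely for illustration.

```python
import secrets

# Hypothetical customer record containing a direct identifier (the email address).
record = {"email": "anna@example.com", "purchases": 12, "country": "NL"}

# The lookup table that re-links tokens to identifiers. Under GDPR-style
# pseudonymization this "additional information" must be kept separately,
# with its own technical and organisational safeguards.
token_vault = {}

def pseudonymize(rec: dict, identifier_field: str) -> dict:
    """Replace the direct identifier with a random token and store the
    mapping in the separate vault."""
    token = secrets.token_hex(16)
    token_vault[token] = rec[identifier_field]
    out = dict(rec)
    out[identifier_field] = token
    return out

pseudonymous = pseudonymize(record, "email")
print(pseudonymous)  # the record alone no longer identifies the person
# Re-identification is only possible with access to token_vault,
# which is exactly the additional information the Regulation requires to be held apart.
```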

Consent is a freely given, specific, informed, and unambiguous declaration of the data subject’s wishes concerning the collection and processing of their personal data. It can be given through a formal statement or through any clear affirmative action indicating their understanding of, and agreement to, the access and processing of their personal data.

The Main Tool

There are many tools associated with the GDPR in some capacity. But one tool stands out by itself for its generality and central role in the Regulation: a filing system.

Personal data is typically located in what is known as a filing system, which can be described as any structured set of personal data, whether centralized, decentralized, or dispersed in terms of functional or geographical criteria.

The Main Actors

Certain actors in the GDPR can be generally described as entities, that is, natural or legal persons, public authorities, agencies, or any other bodies. In this sense, the GDPR discusses the attributes and responsibilities of the following entities: controller, processor, recipient, and third party.

A controller is an entity that determines the purposes and means of processing personal data. The controller can act either alone or jointly in ruling over what type of data can be used, how it can be used, via what means, and for what purposes. Where the purposes and means of personal data processing are determined by Union or Member State law, the controller may also be designated by that law. An entity that processes personal data on behalf of the controller is a processor.

When a controller or processor is directly involved in more than one Member State, the main establishment refers to the place of its central administration in the Union.

A recipient is an entity that receives personal data, regardless of whether the recipient is a third party. A public authority that receives personal data in the framework of a particular inquiry is not regarded as a recipient.

A third party is an entity that is not the data subject, controller, processor, or any other person authorized to process data under the authority of the controller.

A representative is a natural or legal person designated by the controller or processor to represent them concerning their obligations under the Regulation. An enterprise is a natural or legal person engaged in economic activity.

The Main Event

Similar to the notion of highlighting a single tool while acknowledging the existence of several tools in the GDPR, we also single out one event in the Regulation due to its criticality (and it is not a good one): a personal data breach.

A personal data breach refers to a security incident that leads to the accidental or unlawful destruction, loss, alteration, or unauthorized disclosure of (or access to) personal data that is transmitted, stored, or otherwise processed.

The Main Types of Data

Personal data related to a natural person’s inherited or acquired genetic characteristics are called genetic data. This type of data can provide unique information about a person’s physiology or health, typically obtained via examining biological samples from that natural person.

When personal data is more specifically related to physical or mental health, including the provision of healthcare services, it is referred to as data concerning health. This type of data can reveal information about a person’s health status.

When personal data results from specific technical processing relating to physical, physiological, or behavioral characteristics, it is called biometric data. Biometric data is typically used to confirm the identification of a natural person, which can be done by inspecting fingerprints, facial characteristics, or body movement, among many other examples.

The Main Concepts

From the subject matter and objectives of the GDPR, it is clear that the Regulation establishes rules to protect natural persons with respect to their rights and freedoms while also providing for the free movement of personal data. Personal data can, in many cases and for many reasons, undergo the pseudonymization mentioned earlier, that is, processing performed in such a way that the personal data can no longer be attributed to a specific data subject without the use of additional information. Rights under the Regulation include the right to privacy, data protection, data portability, erasure (the right to be forgotten), and the restriction of data processing.

The rules in the Regulation determine that personal data can only be accessed with consent, which must be freely given, specific, informed, and unambiguous. Consent can also be withdrawn.

Overall, rules are defined to enforce the security and privacy of personal data processing, which must be lawful, fair, and transparent; accurate; limited in purpose and storage; and must ensure integrity and confidentiality and involve data minimization. Rules also serve to regulate controllers, which must be accountable. Figure 1 provides a visualization of how some of the main concepts in the GDPR are related to each other.

Figure 1: The Main Concepts in the GDPR and Their Connections

Summary

There are many terms, concepts, and definitions in the GDPR, and they are all connected in some way. The GDPR can be described as a set of rules for protecting natural persons and their personal data across a variety of scenarios, with the objective of protecting their rights, including the right to privacy. Although there is clearly much more that can be said about terminology and definitions in the GDPR, hopefully this post contributes to a better appreciation of the official text of the Regulation and related materials.

David William Silva, PhD

Senior Research Scientist at Symetrix & Algemetric

IDPro Member, CIDPRO

About the Author

David William Silva is a Senior Research Scientist at Symetrix Corporation and Algemetric and is responsible for the research and development of innovative products related to security, privacy, and efficient computation powered by applied mathematics. David started his career as a Software Engineer focused on web services and agile software development, which led him to be involved with several projects from startups to government and large corporations. After 17 years of conducting R&D in Brazil, David moved to the US to engage in scientific research applied to a global industry of security and privacy, which has been his focus for the past seven years.

by Greg Smith

Only three months to go! Identiverse is IDPro’s home event, and it will be taking place in Denver as an in-person conference on June 21-24, 2022. The content committee has been busy reviewing and selecting proposals. It’s shaping up to be another excellent agenda. Together with fellow IDPro member Lorrayne Auld at MITRE, I’m excited to be co-leading the Deployments and Leading Practices (D&LP) topic once again. In this blog, I’d like to share some of the upcoming highlights for our track.

D&LP is the place where you can come to learn how some of our larger enterprises deal with identity at scale, how they manage large rollouts, and the challenges they face. These could be workforce identity implementations, CIAM programs, or any combination thereof. In short, expect some war stories from the real world, and some great advice for avoiding some of the pitfalls of large IAM programs.

Our speakers in this track will be coming from a healthy mix of global enterprise identity practitioners, international government agencies, consulting companies, financial institutions, and identity solution vendors. And more than a few of our fellow IDPro members!

This year’s theme is “Trust”, which offers plenty of latitude for our topics. You’ll hear from companies like Target and J&J about their trust journeys with FIDO2 adoption and managing Single Sign-On at scale. We’ll hear from PayPal about the frameworks they developed for Connected Identity across the PayPal ecosystem. HSBC will be talking about building trusted identity frameworks using open-source software. The Norwegian Labour and Welfare Administration will explain why agility should be considered when evaluating identity products for your organization.

Our Trust theme wouldn’t be complete without a session on Zero Trust. We have at least four, from Uberether, ProofID, Ping Identity, and Easy Dynamics. And two of those are real-world deployments for the US federal government and the US Department of Agriculture. The ProofID session will dive into customer experience from an omni-channel perspective, which can be immensely challenging. Nok Nok will be sharing five real-world deployment stories for passwordless authentication, and our speaker from Gluu will remind us that the password isn’t quite dead yet. Microsoft will share advice on getting to strong authentication on your passwordless journey while showing a positive ROI to your senior leadership. Customer experience is trending and gaining attention, not only within the Federal Government, but also here in our track where the FIDO Alliance will provide an update on optimizing the user experience for FIDO security keys.

We’ll learn more about verifiable credentials from Avast. Curity’s speaker will explain how applying OIDC profiles for Open Banking can benefit the financial services industry as well as the rest of us. We’ll hear from Authlete about real-world examples of configuring OAuth and OIDC correctly to avoid data breaches. Last, but certainly not least, Wavestone will provide invaluable advice on how not to fail at your IAM project.

Over the next couple of months, our speakers have a lot of work to do to turn those topics into full-fledged sessions. I am really looking forward to seeing what they come up with, and then sharing it with all of you at Identiverse in Denver! If you haven’t already registered to attend, what are you waiting for?

Stay tuned for more Identiverse updates in the weeks to come.

Greg Smith

Chair, IDPro Editorial

Radiant Logic

Greg Smith is a Solutions Architect with Radiant Logic. He has been implementing Identity & Access Management solutions for over 35 years. He holds BSEG and MSBA degrees from Bucknell University, where he also began his professional career before moving into the Pharmaceutical industry in 1996. After a 25 year career there, he recently retired from Johnson & Johnson, where he led the engineering team for J&J’s single sign-on, risk based authentication, multi-factor authentication, access governance, directory synchronization and virtualization, provisioning automation, and PKI services. He has spoken at Identiverse® and other industry events on numerous occasions. He was recently CIDPRO™ certified and is also a founding member of IDPro, where he currently chairs the editorial committee.

by David William Silva, PhD

The General Data Protection Regulation (GDPR) is considered the most comprehensive security and privacy law worldwide. Drafted and passed by the European Union (EU), it imposes obligations on organizations anywhere in the world that target or collect data associated with people in the EU.

The full text of the GDPR is organized into 99 articles across 11 chapters and 88 pages. It is clearly a substantial amount of information that cannot be exhaustively covered in a single blog post.

You have certainly read or heard about the GDPR many times in the past few years; in one way or another, the chances are high that the GDPR and related subjects have been brought to your attention. But even if you have never heard about the GDPR (however unlikely), I would like to provide a closer look at what is considered the world’s strictest security and privacy law. For that, I propose a simple technique I use when approaching any new subject, which consists of a representation of four layers of understanding, as shown in the figure below.

Our first step is to understand the context in which the GDPR came on the scene, the motivations, and its goals. This first layer of understanding is typically the minimum required to get the conversation started around any given subject. Next, we examine terminology and basic definitions. 

Getting into the second layer of understanding equips one to read and retain information from documents related to the topics at hand, which would be cumbersome without an established foundation of terms, acronyms, and definitions. 

The third layer is about examples and applications. In other words, it is about understanding terms and definitions in action in specific scenarios. Understanding how the building blocks of a subject under consideration relate to each other, how they are activated, and/or how they impact any given sequence of ideas or actions is paramount for solidifying the practical applications of the information gathered thus far. 

The fourth layer refers to observing arbitrary events and identifying the notions associated with the previous layers, relating actors and their roles, and classifying them according to the terms and definitions in the second layer. It also involves applying critical thinking to what could be “gray areas” in the fundamentals of the subject and being able to propose new practical ideas, measures, and methods that are strongly aligned with its guiding principles. According to this simple four-layer scheme, understanding all layers well amounts to having a good overview of the topic.

Next, we will take a quick look at some of the context, motivations, and goals of the GDPR.

Context

In November 1950, in Rome, Italy, the Convention for the Protection of Human Rights and Fundamental Freedoms took place. Better known as the European Convention on Human Rights (ECHR), it established the first instrument to enforce some of the rights stated in the Universal Declaration of Human Rights. The ECHR was adopted by the Council of Europe to guard the fundamental freedoms and human rights of the people of Europe. The original text, signed in 1950, took effect on September 3, 1953, and has since been amended by 11 additional protocols. The official original text is available online.

Despite its age, this initiative from over 70 years ago is considered “the most advanced and successful international experiment in the field to date.” Part of the 1950 ECHR was a profound discussion on the right to privacy. The debate around privacy had to adjust to advances in society and technology, to the point that in 1995 the EU passed the Data Protection Directive (DPD), officially known as Directive 95/46/EC, establishing a minimum set of data security and privacy standards sufficient to enable each member state to implement its own law. In 2011, after a series of incidents involving personal data privacy violations, the EU recognized the need for a more comprehensive approach to personal data protection. Since 1995, the DPD has been updated to address new issues and needs.

The fact that each member state had its own way of implementing laws to protect the security and privacy of personal data worked only up to a certain point. In 2012, the European Commission submitted a draft proposal for a substantial reform of the data protection rules in the EU. On December 15, 2015, the European Parliament, in conjunction with the Council and Commission, agreed upon what was called the new data protection rules: the EU General Data Protection Regulation. The final text of the GDPR was approved on April 14, 2016.

Motivations

The underlying concept of the right to privacy is that “everyone has the right to respect for his private and family life, his home and his correspondence.” This was the driving notion that led the EU to ensure the right to personal data protection via legislation.

There was also hope that an EU-wide law would solve several problems directly related to the fragmentation caused by member states enforcing data security and privacy laws somewhat independently. The idea was to facilitate cooperation in fighting crime and any form of violation of the right to privacy.

Therefore, the GDPR supersedes the DPD, building on top of crucial components of the DPD while adding more specific requirements concerning data protection. The GDPR adds more rigorous enforcement of security and privacy laws with harsh penalties and substantial fines.

Goals

The main goal of the GDPR is to create and enforce standards for data protection legislation that apply to all EU members and to anyone handling data associated with EU citizens. The GDPR also aims to ensure that EU residents know and understand their right to privacy, the resources available to them, where to look for help and support, and what to expect from organizations requesting any form or volume of personal data.

The GDPR establishes specific rules for accessing and processing personal data, together with responsibilities and penalties for those who violate any aspect of data protection under the Regulation.

When examining the full text of the GDPR, it is crystal clear that the Regulation is all about protecting people: their privacy, their right to privacy, and their right to own and protect their data and to choose what can be shared, with whom, under which conditions, for how long, and to what extent.

Summary

The cornerstone of the GDPR is the protection of natural persons concerning the processing of personal data, which is regarded as a fundamental right that everyone in the EU has. Protecting data is just one direct consequence of protecting the privacy of the individual, which can be violated through unlawful, unsolicited, or incorrect manipulation of personal data. The GDPR addresses modern concerns with data privacy, but its principles go back to 1950. Since then, the EU has been actively improving its protections for personal data, moving from individual member state implementations of privacy-preserving measures to unified, EU-wide security and privacy standards and laws that enforce, by all means necessary, the protection of personal data. As anticipated, we are just scratching the surface of the GDPR, having only entered the first layer of understanding the Regulation according to our proposed simple scheme for organizing information. In the second part of this series, we will look at significant highlights of terminology and basic definitions and how they relate to each other in the grand scheme of all things GDPR.

David William Silva, PhD

Senior Research Scientist at Symetrix & Algemetric

IDPro Member, CIDPRO

About the Author

David William Silva is a Senior Research Scientist at Symetrix Corporation and Algemetric and is responsible for the research and development of innovative products related to security, privacy, and efficient computation powered by applied mathematics. David started his career as a Software Engineer focused on web services and agile software development, which led him to be involved with several projects from startups to government and large corporations. After 17 years of conducting R&D in Brazil, David moved to the US to engage in scientific research applied to a global industry of security and privacy, which has been his focus for the past seven years.

After what feels like an eternal period of COVID waves, we are finally moving into what we hope is the post-COVID world, and what better way to invoke that spring feeling than a few IAM conferences and events?

The main EU event this spring will be KuppingerCole EIC 2022 which will take place in Berlin May 10-13. Thanks to Ian’s hard work we have gotten a rebate code for IDPro members. More info on this can be found on the IDPro slack in the #EIC channel.

In addition to EIC there is also the Heliview IAM conference in Den Bosch in the Netherlands on May 19 as well as IDM Dach on May 24 in Frankfurt. Please reach out through the #conferences slack channel if you need registration codes.

The Amsterdam Digital Identity Meetup group is up and running again and hopes to transition to a physical meetup during the spring. Identi Beer Oslo and Identi Beer Copenhagen are other good alternatives for low key identity meetups.

Have a great spring and I hope to see you at one of the events!

by Simon Moffatt

Well, as we enter 2022 – and a good way into 60 years of using commercial computer technology of some sort – the password is very much alive and kicking. For example:

  • This article is being written in Google Docs, which requires my username, password + MFA.  
  • It will be promoted on Twitter: Username, password + MFA.
  • Shared on LinkedIn. Username, password + MFA.  

Note the pattern? Yes, MFA is absolutely in the mix for me personally, but a) that doesn’t necessarily hold for all users, and b) the underlying requirement for a shared secret still exists.

The “cost” to a service provider or application developer of reaching for the username and password pattern is very low. Libraries exist, and many password storage approaches now rely heavily on techniques using salts and hashes. Choosing something different has some pretty big impacts – namely changes to usability, and hoops to jump through regarding security change management if some new and funky passwordless approach is selected.
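
As a rough illustration of why that cost is so low, the sketch below shows the familiar salt-and-hash pattern using only Python’s standard library (PBKDF2 via hashlib). The function names and the iteration count are illustrative assumptions, not a recommendation for any particular deployment.

```python
import hashlib
import hmac
import os
from typing import Optional

def hash_password(password: str, salt: Optional[bytes] = None) -> tuple[bytes, bytes]:
    """Derive a salted hash for storage; the password itself is never stored."""
    salt = salt or os.urandom(16)  # a unique random salt per user
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return salt, digest

def verify_password(password: str, salt: bytes, stored: bytes) -> bool:
    """Re-derive the hash with the stored salt and compare in constant time."""
    _, candidate = hash_password(password, salt)
    return hmac.compare_digest(candidate, stored)

salt, stored = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, stored))  # True
print(verify_password("guess123", salt, stored))                      # False
```

The point of the sketch is simply that a handful of lines gets a developer to “good enough”, which is exactly why the shared-secret pattern is so sticky.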

Drivers Towards Passwordless

However, there are emerging shoots of hope for those who wish to see a password-free world. A quick Crunchbase search reveals that a tasty $700+ million has been poured into startups with the word “passwordless” in their description in the last 36 months – a chunk of change (admittedly heavily influenced by Transmit Security’s $543 million last summer) that is funding new approaches to the age-old problem of authentication.

The interesting aspect is that authentication is the main pinch-point of both B2E and B2C interactions. B2E identity is having to contend with distributed working, migrations to zero trust, secure service edges, and data security, whilst the continued drive for B2C consumer identity sees a need for secure yet usable user verification, driven by retail and financial services and the increasing need for secure PII sharing.

All in all, user interruptions during the authentication process are increasing hugely. The volume is increasing, and the context surrounding each transaction is becoming more complex and subtle, too. Usernames and passwords just won’t cut it, even with a decent MFA overlay leveraging one-time passwords (generated client-side, of course, not sent via SMS or email…) or push notifications.

Passwordless Requirements

Passwordless adoption requirements for both B2C and B2E will be subtly different.  It can be quite interesting to analyze requirements of passwordless just as you would any other credential – via a life cycle model.

A basic example would see steps such as enroll, use, add, migrate, reset, and remove.

Each step in the life cycle can then be broken down into the capabilities needed.  A consistent theme would seem to be a need for increased end user self-sufficiency – especially around enrollment and reset, where the dreaded call to the helpdesk instantly increases cost and reduces end user happiness.  (Obligatory sales nudge, I worked on a buyer guide for passwordless in 2021…)
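
As a purely illustrative way of structuring that analysis, the lifecycle can be written down as a simple model, with each stage annotated with the self-sufficiency questions you might probe a vendor on. The stage names mirror the basic example above; the capability questions are assumptions invented for this sketch, not a formal framework.

```python
from enum import Enum, auto

class CredentialStage(Enum):
    """Lifecycle stages for a passwordless credential, per the basic example above."""
    ENROLL = auto()
    USE = auto()
    ADD = auto()      # add a second device or authenticator
    MIGRATE = auto()  # move to a new device
    RESET = auto()
    REMOVE = auto()

# Illustrative capability questions per stage, with an eye on end-user
# self-sufficiency (avoiding that dreaded helpdesk call).
capability_questions = {
    CredentialStage.ENROLL:  "Can the user self-enroll without a helpdesk ticket?",
    CredentialStage.USE:     "Does authentication work across all target apps and channels?",
    CredentialStage.ADD:     "Can a second authenticator be added by the user alone?",
    CredentialStage.MIGRATE: "What happens when the user buys a new phone?",
    CredentialStage.RESET:   "Is recovery self-service, and how is it secured?",
    CredentialStage.REMOVE:  "Can lost or stolen authenticators be revoked immediately?",
}

for stage, question in capability_questions.items():
    print(f"{stage.name:<8} {question}")
```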

B2E

From a B2E perspective, concerns for a passwordless model seem to focus upon replacing existing MFA components. Many organisations have numerous disconnected MFA modalities, perhaps focused on specific user communities or applications. Any consolidated passwordless approach must provide a range of application integration options, from SDKs and standards integration to out-of-the-box native integrations. It is also worth considering orthogonal authentication use cases for PAM and even physical building access: can those be integrated into a mobile-centric passwordless approach? The buzzwords of zero trust and contextual and adaptive access need to be shoe-horned into this landscape too, likely with an approach to authentication decoupled from the identity provider and network infrastructure plumbing.

B2C

Consumers are a different beast. The focus is often upon rapid user onboarding, with transparency and usability being important. Can KYC and identity proofing be augmented into the credential issuance process? Can those processes also be used during any reset activities? Clearly fraud – I’m thinking ATO, phishing, credential stuffing, and basic brute-force attacks – is a huge issue for any Internet-facing service, so any passwordless service needs to be resilient against it. Compliance initiatives such as the Strong Customer Authentication aspect of PSD2 are also driving a need for an authentication method that is secure yet can be operated at high scale by the end user.

What Are The Options?

So we all hate passwords. Service providers are getting hacked daily – the HaveIBeenPwned site is nearly at 12 billion breached accounts – and end users pick easy-to-break passwords that they re-use. But numerous startups are coming to the rescue, typically with a local, mobile-focused biometric (aka FaceID or fingerprint) that unlocks a private key on a device in order to respond to a challenge set by a service that requires an authentication result. Many do this in a proprietary way, and many now leverage the W3C WebAuthn approach as a standards-based model.
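
The cryptographic core of those approaches, whether proprietary or WebAuthn-based, can be sketched in a few lines: the device holds a private key (unlocked locally by the biometric), and the service only ever sees a public key and a signed challenge, so no shared secret exists to be phished or breached. The toy model below uses the third-party cryptography package and Ed25519 purely for illustration; it is not the WebAuthn protocol itself, and the variable names are assumptions.

```python
# pip install cryptography  (illustrative only; real deployments use WebAuthn libraries)
import os
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# --- Enrollment: the device generates a key pair; only the public key leaves it.
device_private_key = Ed25519PrivateKey.generate()        # stays on device, ideally in a secure enclave
registered_public_key = device_private_key.public_key()  # stored by the service

# --- Authentication: the service issues a random challenge...
challenge = os.urandom(32)

# ...the device signs it after a local biometric unlock (not modelled here)...
signature = device_private_key.sign(challenge)

# ...and the service verifies the signature with the registered public key.
try:
    registered_public_key.verify(signature, challenge)
    print("Authenticated: no shared secret ever crossed the wire.")
except InvalidSignature:
    print("Authentication failed.")
```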

A few other subtleties start to emerge. How is the private key stored? If on device, does it leverage the trusted execution environment or secure enclave? If off-device, is it stored in a distributed manner so that no single point of failure exists? If on device, what happens if the device is lost or stolen? Does the end user have to re-enroll? These are questions that all emerge once rollout starts to hit big numbers.

Another aspect to consider, away from just the technicalities, is end user training and awareness. Whilst many service providers aim for “frictionless” experiences and transparency, a user journey that is too seamless may actually make the end user suspicious – they want to see some aspect of security. It’s the classic “security theatre” scenario. As with any mass rollout, not all users are the same: behaviour, geographical differences, device preferences, and the like will result in the need for a broad array of usage options and coverage. Can the new passwordless models cope with this?

Summary

Passwords aren’t dead, but they’re definitely quite ill. The options for moving to something new are becoming broad and numerous. However, authentication doesn’t exist in a silo, and on its own it carries little use. It would seem that before-authentication (think proofing) and after-authentication (think session integration coverage) use cases will likely emerge as the biggest competitive battlegrounds in the next 24 months. Those suppliers that can create authentication ecosystems that integrate with a range of different devices, users, and systems will likely see success.


Simon Moffatt

Founder & Industry Analyst, The Cyber Hut

Simon Moffatt is Founder & Industry Analyst at The Cyber Hut. He is a published author with over 20 years experience within the cyber and identity and access management sectors. His most recent book, “Consumer Identity & Access Management: Design Fundamentals”, is available on Amazon. He is a CISSP, CCSP, CEH and CISA. He is also a part-time postgraduate on the GCHQ certified MSc. Information Security at Royal Holloway University, UK. His 2022 research diary focuses upon “How To Kill The Password”, “Next Generation Authorization Technology” and “Identity for Hybrid Cloud”.

by André Koot

I’m not sure when I heard it first. It must have been a while back, but it still seems a valid statement. Accountability, the mandate and responsibility of an owner, is not an easy concept to grasp. Or rather: easy to grasp, hard to realize. Ask any professional if he or she wants to be accountable, and the answer is most probably no: for the candidate, it’s not clear what accountability means (both positive and negative) or what the consequences will be in the end. And if someone is only hesitant, the next question will be: what’s my mandate? And that’s only logical: accountability and ownership need a mandate, because one has to be able to enforce the responsibilities that accountability brings.

In most IAM programs we try to find owners, who are accountable for access control decisions, like defining SoD rules and responding to access requests from self-service portal users. It’s a daunting task to find any business owner, a person with the mandate to make access control decisions. And you will probably agree with my finding that without a business owner, any IAM program is doomed…

And then, in any RBAC project, one of the deliverables is appointing Role Owners. And that’s odd. What is a role owner?

In RBAC the core concept is to define a business role or application role. The role connects user accounts to entitlements. For instance, the business role of Accounts Receivable contains the entitlements, or permissions, to application functions and resources in one or more platforms or services. Then, when the AR role is assigned to a person, the corresponding entitlements are granted automatically. The role owner is the responsible party for maintaining the contents of the role and changing the role’s entitlements if required by relevant stakeholders.
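
As an illustration of that mechanic (and of what a role “owner” is actually looking after), here is a minimal sketch of a business role bundling entitlements, with role assignment expanding automatically into grants. The role name, entitlement strings, and user identifier are invented for the example.

```python
from dataclasses import dataclass, field

@dataclass
class Role:
    """A business role bundles entitlements across one or more platforms or services."""
    name: str
    entitlements: set[str] = field(default_factory=set)

# Hypothetical Accounts Receivable role and its entitlements.
ar_role = Role(
    name="Accounts Receivable",
    entitlements={"erp:invoice:read", "erp:invoice:post", "crm:customer:read"},
)

# Assigning the role to a person grants the bundled entitlements automatically.
role_assignments = {"j.doe": [ar_role]}

def effective_entitlements(user: str) -> set[str]:
    """Union of the entitlements from every role assigned to the user."""
    grants: set[str] = set()
    for role in role_assignments.get(user, []):
        grants |= role.entitlements
    return grants

print(sorted(effective_entitlements("j.doe")))
# ['crm:customer:read', 'erp:invoice:post', 'erp:invoice:read']
```

Maintaining the contents of that entitlement set, in consultation with every affected stakeholder, is precisely the job the article goes on to discuss.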

Yes, that’s odd. The role owner is not accountable for the role content. The role owner doesn’t own anything, the role owner has no mandate to change the role in whatever way. The role owner is, however, obliged to discuss role content changes with all relevant stakeholders. And that’s all too logical: suppose the finance manager wants to add an HR entitlement to the AR role. That’s not just a decision for the finance manager to make; the HR manager or the HR data owner should have a say in the request to add the HR information entitlements to the AR role. The AR role owner is obliged to find out what other obligations are there, and which other stakeholders must approve the request.

So here we have an owner without any mandate but with many obligations. 

And then what struck me was the similarity with Data Governance. In data governance a specific ‘role’ was created to show the difference in tasks, responsibilities and accountabilities, between mandates and duties. It’s the data steward, a person responsible for maintaining data integrity on behalf of multiple stakeholders. This figure has no mandate to manage the data, no ‘CRUD’-like authorizations, just guarding the well-being of the data. Nice term, a steward.

In my opinion we should change the term role owner to role steward. Stewardship is a better fit than ownership. So… let’s appoint role stewards for maintaining the role contents. Don’t use the term role owner anymore; it’s an empty word.


André Koot

IAM Strategist, co-founder of SonicBee

Member BoK committee IDPro

About the Author

André Koot is a principal IAM consultant at the Dutch IAM consultancy and managed services company SonicBee (an IDPro partner) and a member of the Advisory Board of IdNext.eu. He has over 30 years of infosec experience and over 20 years of experience as an IAM expert, acting as architect, auditor, and program lead. For the last nine years he has taught a four-day IAM training course. André contributes to the IDPro BoK as a committee member, author, and reviewer.


In November 2021, the world lost Kim Cameron, a champion of digital identity. In honor of this esteemed thought leader, trailblazer, and beloved friend, members of the identity community have shared their fond memories of Kim and continue to celebrate his legacy.

Vittorio Bertocci, Auth0

Kim might no longer update his blog, nudge identity products toward his vision or give inspiring, generous talks to audiences large and small, but his influence looms large in the identity industry – an industry Kim changed forever. A lot has been written about Kim’s legacy to the industry, by people who write far better than yours truly, hence I won’t attempt that here.

I owe a huge debt of gratitude to Kim: I don’t know where I’d be or what I’d be doing had it not been for his ideas and direct sponsorship. That’s something I have firsthand experience with, so I can honor his memory by writing about that.

Back in 2005, still in Italy, I was one of the few Microsoft employees with hands-on, customer deployment experience in WS-STAR, the suite of protocols behind the SOA revolution. That earned me a job offer in Redmond, to evangelize the .NET stack (WCF, workflow, CardSpace) to Fortune 500 companies. That CardSpace thing was puzzling. There was nothing like it, it was difficult to develop for, and few people appeared to understand what it was for. One day I had facetime with Kim. He introduced me to his Laws of Identity, and that changed everything. Suddenly the technology I was working on had a higher purpose, something directly connected to the rights and wellbeing of everyone – and a mission, making user-centric identity viable and adopted. I gave myself to the mission with abandon, and Kim helped in every step of the way:

  • He invested time in developing me professionally, sharing his master negotiator and genuinely compassionate view of people to counter my abrasive personality back then.
  • He looped me into important conversations, inside and outside the company – conversations way above my paygrade or actual experience at that point. He introduced me to all sorts of key people, and helped me understand what was going on. Perhaps the most salient example is the initiative he led to bring together the different identity products Microsoft had in the late 2000s (and culminating in a joint presentation we delivered at PDC2008). Back then, the company was a very different place, and his steely determination coupled with incredible consensus building skills forever changed my perception of what’s possible and how to influence complex, sometimes adversarial organizations. 
  • He really taught me to believe in myself and in a mission. It’s thanks to his encouragement that I approached Joan Murray (then acquisition editor at Addison-Wesley) on the expo floor of some event, pitching her a book that the world absolutely needed about CardSpace and user-centric identity, and, once it was accepted, finding the energy to learn everything (putting together a ToC, recruiting coauthors, writing in English…) as an evenings-and-weekends project. Kim generously wrote the foreword for us and relentlessly promoted the book. His sponsorship continued even after the CardSpace project, promoting my other books and activities (like those U-Prove videos now lost in time). 

Those are just the ones top of mind. I am sure that if I dig through his or my blog, I’d find countless more. It’s been a huge privilege to work so closely with Kim, and especially to benefit from his mentorship and friendship. I never, ever took that privilege for granted. Although Kim always seemed to operate under the assumption that everyone had something of value to contribute, and talking with him made you feel heard, he wasn’t shy in calling out trolls or people who, in his view, would stifle community efforts.

Kim’s temporary retirement from Microsoft and eventually my move to Auth0 made my interactions with Kim less frequent. It was always nice to run into him at conferences; we kept backchanneling whenever industry news called for coordinated responses; and he reached out to me once to discuss SSI, but we never had a chance to do so. As cliche as it might be, I now deeply regret not having reached out more myself.

Last time I heard from him, it was during a reunion of the CardSpace team. It was a joyous occasion, seeing so many people that for a time all worked to realize his vision, and touched in various degrees by his influence. His health didn’t allow him to attend in person, but he called in and we passed the phone around, exchanging pleasantries without knowing we were saying our goodbyes. I remember his “Hello Vittorio” as I picked up the phone from Mike – his cordial, even sweet, tone as he put his usual care in pronouncing my name just right – right there to show the kindness this giant used with us all.  

Dan Hitchcock, AWS

I was working on the same team as Kim at Microsoft, around 2010. Kim was working from France, and we’d set up a big-screen TV (still relatively rare at that time) so that we could have meetings with him as a floating head on a screen (also rare at that time). He came on camera with his usual cheerful self and tinted glasses, and one of us asked, “Hey Kim, you’re in France – where’s the wine?” Kim smiled, reached off-camera, and continued to smile as he slowly slid a full wine glass across his desk, into the frame, then slowly slid it back off again.

I’d never met Kim before, and I was at an event at Microsoft where he was speaking. In my youthful zest and awkwardness, I made a comment that accidentally called out the fact that he hadn’t attributed one of his main slides to its author. Kim took it totally in stride and thanked me, and after the meeting he graciously spent a few minutes talking with me 1:1 as we waited for our rides – sharing his characteristically thoughtful insights, and not once mentioning my gaffe from the meeting. Ever the gentleman!

I still cite Kim’s Laws of Identity semi-frequently in my role. Most often it’s Law #1, to hold the line for the empowerment of the user in a digital world that relentlessly conspires, intentionally or otherwise, to lay us all bare.

André Koot, SonicBee

At EIC in Munich, Kim was, of course, a member of a panel. The room was packed, but at the start of the session Kim was not there – probably held up by someone in the conference building, as was often the case.

The panel discussion did not start until Kim entered, some five minutes late. Everyone understood that a panel without Kim cannot be called a “panel.”

Ian Glazer, IDPro

Reification. I learned that word from Kim. In the immediate next breath he said from the stage that he was told not everyone knew what reify meant and that he would use a more approachable word: “thingify.” And therein I learned another lesson from Kim about how to present to an audience.

My memories of Kim come in three phases: Kim as Legend, Kim as Colleague, and Kim as Human, and with each phase came new things to learn.

My first memories of Kim were of Kim as Legend. I think the very first was from IIW 1 (or maybe 2 – the one in Berkeley) at which he presented InfoCard. He owned the stage; he owned the subject matter. He continued to own the stage and the subject matter for years…sometimes the subject matter was more concrete, like InfoCard, and sometimes it was more abstract, like the metaverse. But regardless, it was enthralling.

At some point something changed…Kim was no longer an unapproachable Legend. He was someone with whom I could talk, disagree, and more directly question. In this phase of Kim as Colleague, I was lucky enough to have the opportunity to ask him private follow-up questions to his presentation. Leaving aside my “OMG he’s talking to me” feelings, I was blown away by his willingness to go into depth of his thought process with someone who didn’t work with him. He was more than willing to be challenged and to discuss the thorny problems in our world.

Somewhere in the midst of the Kim as Colleague phase something changed yet again and it is in this third phase, Kim as Human, where I have my most precious memories of him. Through meeting some of his family, being welcomed into his home, and sharing meals, I got to know Kim as the warm, curious, eager-to-laugh person that he was. There was seemingly always a glint in his eye indicating his willingness to cause a little trouble. 

The last in-person memory I have of him was just before the pandemic lockdowns in 2020. I happened to be lucky enough to be invited to an OpenID Foundation event at which Kim was speaking. He talked about his vision for the future and identity’s role therein. At the end of his presentation, I and others helped him down the steep stairs off of the stage. I held onto one of his hands as we helped him down. His hand was warm.

by Martin Sandren

One of the lessons learned from 2021 is that ransomware can target any and all companies. In my home country of meatballs and flatpack furniture, some of the most prominent ransomware attacks were made against the biggest food retailers and one of the smaller municipalities.

Each attack contains important lessons learned—like the ability to turn the complete outage of all point-of-sale systems into an opportunity for driving digital change and rolling out the new smart-phone-based checkout system while managing essential communications during the attack—but the unifying trend is that any and all organisations can be targeted. This fact presents a new challenge for enterprises who have traditionally focused their supply chain cybersecurity efforts on partners that handle sensitive data or provide services that involve information technology. In the new world, you may no longer have any cheese to sell as the firm that slices and ships cheese has been taken out in a ransomware attack…and trust that this is very upsetting, particularly if you are Dutch.

So, how can enterprises support partners who have very small or even nonexistent cybersecurity systems in place? It probably depends quite a bit on what kind of company you are, but as a retailer, we have found that although many of our main partners only have a small cybersecurity or IT department, they are very aware that they need to improve security or risk losing their entire IT infrastructure. We frequently field questions about cybersecurity from partners wanting to learn more about the topic. In many cases, relatively small efforts—such as community building—are also resulting in much improved resilience. The format that we have found works best is a virtual roadshow for a relatively small audience of similar partner companies.

Our experience has proven the importance of anchoring the discussions around identity topics such as privileged access management, multi-factor authentication, and basic account hygiene in a simple end-to-end ransomware attack model. Another important factor is to provide hands-on examples and to leave plenty of time for questions and discussion. If possible, it also helps to provide the information in the local language rather than in English.

We also recommend partnering with local cybersecurity organisations—such as your local IDPro chapter, OWASP, and other like-minded organisations—as well as attending local cybersecurity trade shows. With a little additional effort and attention to the needs of your partners, you can provide the protection against cyberattacks that they so desperately need. And, of course, save the cheese.

Martin Sandren

Domain Architect IAM, AholdDelhaize

Martin Sandren is a security architect and delivery lead with over eighteen years of experience in various information security roles. He is primarily focused on security architecture and digital identity, including global-scale customer, privileged, and internal IAM systems using the Microsoft Azure Active Directory, SailPoint, Saviynt, ForgeRock, IBM, and Oracle security stacks.

Experience includes architect, onshore and offshore team lead as well as individual developer. Wide international experience gained through having lived and worked in Sweden, Germany, UK, USA and the Netherlands. Martin is a frequent speaker at international conferences such as Consumer Identity World, MyData and the European Identity and Cloud Conference.

In my role as domain architect IAM at AholdDelhaize I am responsible for IAM architecture and delivery of IAM services to our 450 000 users globally. Martin Sandren is a board member of the IdNext foundation, founder of the Digital Identity Amsterdam meetup and active within IDPro.

During the season of giving, there’s always at least one person on the list who seems impossible to buy for. This year, skip the last-minute dash for yet another overpriced candle or gourmet popcorn tin and give them something they likely hadn’t thought to give themselves. Or, if you’re ahead of the game and have already finished your holiday shopping, treat yourself!

We’ve compiled a list of digital identity-themed gifts that can advance an identity practitioner’s career, and also provide some unique experiences for anyone interested in the identity and access management industry.  

We may be a bit biased, but the #1 recommendation on our list is the CIDPRO™ – Certified Identity Professional – program that launched earlier this year. The CIDPRO certification offers a range of benefits, such as credibility and employability within the industry, by demonstrating proven, vendor-neutral, foundational IAM knowledge. Developed with the help of a diverse team of global identity professionals, CIDPRO is a long-awaited, rigorous, vendor-neutral certification program. The CIDPRO exam is available globally, and registering to earn the CIDPRO designation and badge is easy!

As in-person events make their comeback, what better way to get back into the swing of things than attending Identiverse®? Join IAM peers in Denver, Colorado next June to experience unparalleled education, collaboration, and insight into the future of Identity. Pros and novices won’t want to miss this unique event experience and the opportunity to participate alongside experts and fellow identity practitioners in information-rich sessions on the latest technologies, best practices, and industry trends. Register to attend and, if you’re feeling generous, register a friend and introduce them to the wide world of digital identity. (Reminder: The Call for Papers is still open until January 7, 2022.) 

The identity community is made up of a diverse group of individuals, working to promote privacy and information security across the industry. One of IDPro’s esteemed partners, the FIDO Alliance, is an identity organization that is dedicated to safeguarding digital presence and strengthening authentication. Other distinguished IDPro partners include the Kantara Initiative and the Digital ID & Authentication Council of Canada (DIACC). Kantara actively works to ensure secure, identity-based, online interactions while preventing misuse of personal information so that networks will become privacy-protecting and trustworthy environments. Finally, the DIACC advocates for an inclusive, privacy-enhancing digital ID strategy that works for all Canadians. Becoming a member of any of these organizations is an excellent way to further immerse yourself in the growth of digital identity and expand your network. Learn more about membership opportunities for the FIDO Alliance, the Kantara Initiative and the DIACC to discover how these organizations are making strides in the identity community. And don’t forget, membership with IDPro is always an impeccable option.

There are plenty of ways to celebrate the holiday with digital identity programs, so make your list and check it twice! Set your friend, family member, or yourself up for success in 2022 by investing in the future and increasing expertise by participating in any or all of the programs outlined in this list. IDPro has a plethora of exciting activities coming up in the new year, so be sure to stay tuned for new opportunities to get involved. Happy holidays from IDPro!

by Greg Smith

Are you taking full advantage of all that your IDPro membership has to offer? One of the best resources we have is our Slack space, where we all have easy access to each other to discuss any topics or problems we face in our daily jobs. There are many channels of identity-related conversation available, from working from home (#wfh) to available #gigs to #certification. There’s a generally humorous #random channel, and even a #pit-boss channel for all things barbecue and smoked meat. Who knew? 

Just last month, there was a very good conversation in the #general channel on the value proposition for Identity & Access Management. The initial question was (of course) “Why is it so hard in 2021 to push Identity and Access Management initiatives within the enterprise?” The ensuing discussion among the 10 of us who participated was enlightening. In general, we found that unless we actually work at an identity company, our leadership most certainly does not understand identity, or the value it can bring to the enterprise. Identity and security, and IT in general, are viewed as costs to be minimized. In order to free up investment dollars, we need to make a business case to our leadership and address their many misconceptions. It also doesn’t help our case that identity has always been invisible to our leaders – it’s just there in the background and works. All the non-IT decision makers ever encounter is a password prompt and periodic MFA ceremonies (which they often complain about). Oh, and access reviews. They really love those, right?

A significant piece of the solution is education. I saw this myself in my last job, where we ran a months-long education campaign among senior IT and business leaders to get them on board with our own program. It behooves us as Identerati not to bore those leaders with dry, complex, identity-specific jargon. We need to adjust our entire pitch “to align with the listener’s comprehension ability” and experience. Being able to automate role assignments to reduce the need to request access, go through approvals, and conduct subsequent periodic reviews was a goal they liked, but getting HR to realize they had skin in the game, in terms of providing meaningful attributes about people to enable those automated assignments, was a huge challenge. When we finally got through to them with real-world examples, it was like watching a metaphorical light bulb flicker to life above their heads.

We also need to be sure our pitch includes things our leaders are going to be interested in, such as passwordless authentication, better user experiences, and the business outcomes that will (positively) affect key objectives: better customer experience, better customer (or employee) retention, and reduced risk, for example. For an enterprise, anything that improves the first-day onboarding experience for new hires is a great enticement. Another strong selling point is illustrating how your IAM program can create consistency in aid of digital transformation. IAM is so much more than a compliance box to be checked. Once an organization’s process and control objectives are documented, IAM will likely be critical to meeting those objectives; and not meeting them can affect the organization’s reputation and bottom line.
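To make the automated-assignment point a little more concrete, here is a minimal, hypothetical sketch of birthright role assignment driven by HR attributes. The attribute names, rules, and role names are illustrative assumptions for this article, not a description of any particular product or of the program discussed above.

```python
# Minimal sketch: birthright role assignment driven by HR attributes.
# All attribute names, rules, and role names below are illustrative assumptions.

from dataclasses import dataclass, field

@dataclass
class Employee:
    employee_id: str
    department: str                 # sourced from HR
    job_title: str                  # sourced from HR
    location: str                   # sourced from HR
    roles: set = field(default_factory=set)

# Each rule pairs a predicate over HR attributes with a role to grant.
BIRTHRIGHT_RULES = [
    (lambda e: True, "base-employee-access"),
    (lambda e: e.department == "Finance", "finance-reporting-reader"),
    (lambda e: e.job_title.startswith("Engineer"), "source-control-user"),
    (lambda e: e.location == "NL", "nl-facilities-badge"),
]

def assign_birthright_roles(employee: Employee) -> set:
    """Return the roles an employee should hold based on HR attributes alone."""
    return {role for predicate, role in BIRTHRIGHT_RULES if predicate(employee)}

if __name__ == "__main__":
    new_hire = Employee("E1234", department="Finance",
                        job_title="Engineer II", location="NL")
    new_hire.roles = assign_birthright_roles(new_hire)
    # No access request, approval, or periodic review needed for these roles.
    print(sorted(new_hire.roles))
```

The sketch is only meant to show why HR has skin in the game: rules like these are only as good as the attributes HR maintains, and when those attributes are accurate, the right roles arrive on day one without access requests, approvals, or later review churn.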

In addition to educating those leaders who hold the purse strings, we also need to educate other key stakeholders within the organization. Specifically, our application owners and developers, whose priorities are focused on application features, performance and uptime, and user experience. Getting them aligned to use SSO is typically easier than having them adopt centrally managed roles and permissions, but either way, putting IAM into an application owner’s critical path as a potential point of failure is a tough sell. You need to be able to show them the advantages of adopting your IAM service to get them on board.

The other topic that came up was mandates to adopt IAM services and how that fits into the overall application lifecycle. For an existing application, a mandate to adopt is a tough sell because it means retrofitting the IAM integration into the app; there will be resistance. For apps that are earlier in their lifecycle, there’s greater latitude to integrate your IAM service into the design. The key thing is to get to app owners as early as possible, starting now.

To wrap up, the recurring theme throughout was that it’s really about selling your IAM program to everyone, whether it’s the IT leadership, the finance department, HR, app owners, or end users. And to be good at selling, you have to position it as something people will want. That means you really need to make an effort to understand your stakeholders and what is valuable to them! 

By the way, make sure you check out the Body of Knowledge for an upcoming article on the “Business Case for IAM”; in the meantime, you can find a link to the current draft article in the #bok channel of our Slack space.

My thanks to members Michael Jean-Jacques, Brian Simoni, Mark Russell, Ted Tanner, Marc Boorshtein, Ian Glazer, Lance Peterman, Matt Topper, and André Koot for the engaging discussion on Slack. Additional thanks to Jon Lehtinen, James Dodds, and Andi Hindle for their invaluable assistance in crafting this article.

Greg Smith

Chair, IDPro Editorial

Greg Smith has been implementing Identity & Access Management solutions for over 35 years. He holds BSEG and MSBA degrees from Bucknell University, where he also began his professional career before moving into the pharmaceutical industry in 1996. After a 25-year career there, he recently retired from Johnson & Johnson, where he led the engineering team for J&J’s single sign-on, risk-based authentication, multi-factor authentication, access governance, directory synchronization and virtualization, provisioning automation, and PKI services. He has spoken at Identiverse® and other industry events on numerous occasions. He was recently CIDPRO™ certified and is a founding member of IDPro, where he currently chairs the editorial committee.