80%+ of breaches involve identity attacks, yet visibility alone won't stop them. Learn why identity intelligence and AI agents are now essential for enforcing least-privilege access at scale.


I just came back from a private conference with 90 cybersecurity CEOs, leaders from some of the largest security companies in the world. One point of near-universal agreement: identity is the top priority for their customers in 2026.
The consequences of getting identity wrong are severe: over 80% of breaches involve an identity attack, because identity is the weakest link in the chain. We saw it with the Stryker attack, where a compromised identity led to 200,000 devices being wiped. Identity is the attack surface.
So, how do we deal with this? When I talk with CISOs, they tell me they struggle with granular visibility across all their identities. It is hard to know, across humans, contractors, service accounts, and bots, for all apps, down to fine-grained permissions, who actually has access to what. The scale is staggering. The surface area keeps growing.
In 2025, Gartner defined an entirely new category for this: Identity Visibility and Intelligence Platforms (IVIP). Today I want to talk about the more non-obvious part of that category: Intelligence.
In this piece, I will cover why visibility alone is not enough, what intelligence actually means in the context of identity, how to think about AI features vs. Agents, and why the shift to agentic identity management is not optional if you want to actually solve least-privilege access.
The identity problem is compounding. Every new app your company adopts does not just add one integration. It adds dozens or hundreds of permission levels and role combinations that need to be understood, reviewed, and governed. A 5,000-person company with 400 apps is not managing 400 access decisions. They are managing hundreds of thousands of fine-grained permission and user combinations, and that number grows every week.
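That scale claim is easy to sanity-check with back-of-the-envelope arithmetic. The per-app and per-user figures below are illustrative assumptions, not data from this article:

```python
# Illustrative back-of-the-envelope estimate of access-decision scale.
# Every per-app and per-user figure here is an assumption for the sketch.
employees = 5_000
apps = 400
avg_permissions_per_app = 50   # roles, scopes, fine-grained grants (assumed)
avg_apps_per_employee = 25     # apps a typical employee touches (assumed)
avg_grants_per_app = 5         # permissions held per app per user (assumed)

distinct_permissions = apps * avg_permissions_per_app
total_user_permission_pairs = (
    employees * avg_apps_per_employee * avg_grants_per_app
)

print(distinct_permissions)         # 20000 distinct permissions to classify
print(total_user_permission_pairs)  # 625000 user-permission grants to govern
```

Even with conservative assumptions, a mid-sized company lands in the hundreds of thousands of user-permission combinations, and every new app multiplies the total.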
Meanwhile, the identity and security teams responsible for governing all of this have stayed the same size. Every identity team I talk to has a massive backlog of hygiene tasks they know they should be doing but do not have the hours to get to: rotating credentials for service accounts, removing dormant accounts, making sure every permission has a description and a risk level, reviewing access for employees who changed roles six months ago. The knowledge of what needs to happen is not the bottleneck. The capacity to do it is. You cannot recruit your way out of an exponential problem with a linear solution.
And the problem is getting harder. With AI agents entering the enterprise, users are delegating their permissions to those agents (think MCP integrations where an agent inherits the user's access). The permissions themselves have not changed, but the blast radius has. An over-privileged human who never actually uses their admin access is one thing. An AI agent with those same permissions executing autonomously at machine speed is a completely different risk profile. Least-privilege just became 10x more important.
On the productivity side, identity matters just as much. You want to make sure users have the right access the moment they join, without even asking for it. You might know this as RBAC or persona-based provisioning. It sounds clean in theory, but in practice most organizations cannot tell you what the right baseline access for a given role actually looks like. In the past, you interviewed every team leader to ask what access their team needs, but that process does not scale anymore, and the output is outdated by the time you finish the exercise. New hires wait days for the access they need. The access they eventually get is a best guess cobbled together from what the last person in a similar role had, not a deliberate decision grounded in what the role actually requires.
Gartner's IVIP category captures something real: you need a unified view of who has access to what, across every app, for every identity type. But here is what I have learned over the last 12 months, after giving hundreds of CISOs that visibility for the first time: it is not enough.
Here is the question I keep coming back to: If you could see every identity and every permission across your environment right now, what would you actually do about it?
Most security leaders pause. Knowing that an engineer has access to 26 apps and 485 permissions does not give you anything actionable when your identity team is already running on fumes. The actual hard part is knowing which of those permissions represent real risk, which you can safely ignore, and for which users, all in the context of how your organization actually works.
That leap, from "what exists" to "what matters," is intelligence. True identity intelligence means the system can reason about your environment. It learns what baseline access looks like for a given role. It flags anomalies that a human reviewer would catch if they had unlimited time, but they never do. It reads a permission called "Developer Role" and tells you that what it actually grants is far more privileged than the name suggests. It connects signals across systems: an employee transferred departments three months ago, still has their old access, and that access includes sensitive financial data they no longer need.
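That last kind of cross-system signal is a good example of work that is trivial for a machine to run continuously and tedious for a human to run at all. A toy sketch, with hypothetical field names joined from an HR system and an access system:

```python
from datetime import date, timedelta

# Hypothetical records joined from HR and access systems; field names are
# illustrative assumptions, not a real schema.
user = {
    "name": "jdoe",
    "department": "Engineering",
    "department_changed_on": date.today() - timedelta(days=90),
}
grants = [
    {"permission": "finance:ledger:read",
     "granted_for_department": "Finance", "sensitive": True},
    {"permission": "github:repo:write",
     "granted_for_department": "Engineering", "sensitive": False},
]

# Flag access granted for a department the user has left,
# surfacing sensitive grants first.
stale = [g for g in grants
         if g["granted_for_department"] != user["department"]]
for g in sorted(stale, key=lambda g: not g["sensitive"]):
    tag = " (sensitive)" if g["sensitive"] else ""
    print(f"stale grant for {user['name']}: {g['permission']}{tag}")
```

The rule itself is simple; the hard part is maintaining the joined, current view of both systems that makes the rule answerable, which is exactly what the intelligence layer provides.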
Now, people hear "intelligence" and immediately say: AI is the answer. I think it is more nuanced than that. We have to differentiate between levels of AI, because not all of them solve the capacity problem.
Every vendor is adding "AI" to their product. But there is a meaningful difference between bolting an AI feature onto an existing workflow and building agents that take identity work off your plate entirely.
AI features are what most vendors ship today. They generate a summary. They auto-fill a field. They classify something with a confidence score. Of course, these are useful. But they add things to your plate. More findings, more alerts, more data points that still require a human to review, prioritize, and act on. The process does not change. The bottleneck of human attention stays exactly where it was. In fact, it might get even worse. Vendors call their chatbots "agents," but they are not agentic unless they take action.
Identity agents take work off your plate. An agent owns a job. It does two things that AI features cannot: it helps you prioritize what actually matters in your environment, and then it takes action on what it finds. You give it a scope, instructions in plain language, and a level of autonomy. It runs continuously. It learns from your feedback. When you dismiss a finding and tell it why, it remembers. Over time, the agent gets better at your environment specifically, not environments in general.
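One way to picture the difference concretely: an agent is configured with a scope, plain-language instructions, and an autonomy level, and it accumulates your feedback as standing context. A minimal sketch with an entirely hypothetical interface, not a real product API:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of an identity agent's configuration and feedback loop.
@dataclass
class IdentityAgent:
    job: str                      # the work the agent owns
    scope: list[str]              # apps and identity types it may touch
    instructions: str             # plain-language guidance from the operator
    autonomy: str = "recommend"   # "recommend" -> "act_with_approval" -> "act"
    feedback: list[str] = field(default_factory=list)

    def dismiss(self, finding: str, reason: str) -> None:
        # Dismissals become standing context for future runs, so the agent
        # stops re-surfacing the same class of finding.
        self.feedback.append(f"dismissed '{finding}': {reason}")

agent = IdentityAgent(
    job="dormant account cleanup",
    scope=["okta", "github"],
    instructions="Flag accounts unused 60+ days; never touch break-glass admins.",
)
agent.dismiss("svc-backup unused 70d", "scheduled job, runs quarterly")
print(len(agent.feedback))  # 1 remembered correction
```

The autonomy field is the trust dial: you start at "recommend," review the agent's work, and widen its mandate as it earns delegation, the same way you would with a new hire.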
Think of it less like configuring software and more like managing a direct report. You onboard them. You review their early work closely. You give feedback. You build trust. Eventually, you delegate routine work entirely and focus your attention on the exceptions.
The end state is an army of agents, one for each identity job that is table stakes but that no team has the bandwidth to do well manually: dormant account cleanup, separation of duties analysis, non-human identity (NHI) ownership tracking, threat analysis, credential rotation, entitlement classification, role mining, access reviews. All of these things need to happen. Agents do the heavy lifting so your team can focus on the decisions that require human judgment.
Let me make this concrete with three examples:
Most organizations have RBAC policies on paper, but the actual access landscape has drifted far from those policies over years of ad-hoc provisioning and exceptions. The Role Mining Agent analyzes actual access patterns across your workforce, identifies peer groups, checks whether access has actually been used and surfaces what baseline access should look like for each role. It finds the engineer who has access to 30 apps when every other engineer in their group has access to 12, and tells you which of those 18 extra permissions were granted for a project that ended.
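Under stated assumptions (a flat user-to-permissions mapping and a simple "held by most peers" definition of baseline), the core of that peer-group comparison is not exotic; the data and threshold below are illustrative:

```python
from collections import Counter

# Toy peer group: a permission is "baseline" for the role if at least
# half of peers hold it. All data is made up for illustration.
peers = {
    "alice": {"github", "jira", "aws-dev"},
    "bob":   {"github", "jira", "aws-dev"},
    "carol": {"github", "jira", "aws-dev", "prod-db-admin", "billing-admin"},
}

counts = Counter(p for perms in peers.values() for p in perms)
baseline = {p for p, n in counts.items() if n / len(peers) >= 0.5}

for user, perms in sorted(peers.items()):
    extra = perms - baseline
    if extra:
        print(f"{user} holds non-baseline permissions: {sorted(extra)}")
```

A real role mining agent layers on usage history, grant provenance, and organizational structure, but the output is the same shape: a proposed baseline per role plus a ranked list of deviations to explain or revoke.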
A typical enterprise has tens of thousands of permissions. Most have no clear owner, no risk classification, and no plain-language description of what they actually grant. With employees constantly joining, changing roles, and leaving, and new permissions appearing all the time, keeping ownership and descriptions accurate is impossible to do manually. The Entitlement Analyst identifies and assigns ownership by analyzing which teams use each permission, which apps they belong to, and who has historically managed access. It flags unowned permissions, generates plain-language descriptions, and tags sensitivity levels, continuously across your entire environment.
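The ownership heuristic can be sketched the same way: attribute each permission to the team that actually exercises it most, and flag it unowned when usage is too thin to decide. The usage log and cutoff below are invented for illustration:

```python
from collections import Counter

# Usage log of (permission, team of the user who exercised it).
# All entries are illustrative data, not a real schema.
usage = [
    ("netsuite:journal:post", "Finance"),
    ("netsuite:journal:post", "Finance"),
    ("netsuite:journal:post", "Engineering"),
    ("pagerduty:schedule:edit", "Engineering"),
]

MIN_EVENTS = 2  # below this, usage is too thin to infer an owner (assumed)

by_permission: dict[str, Counter] = {}
for perm, team in usage:
    by_permission.setdefault(perm, Counter())[team] += 1

for perm, teams in by_permission.items():
    team, n = teams.most_common(1)[0]
    owner = team if n >= MIN_EVENTS else None
    print(f"{perm}: owner={owner or 'UNOWNED - needs review'}")
```

The agent's value is not the heuristic; it is running this continuously as people join, move, and leave, and pairing each assignment with a generated plain-language description and sensitivity tag.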
Once a quarter, managers get a list of permissions they do not understand, for employees whose day-to-day work they may not be close to, and they are asked to approve or revoke. Most approve everything because they lack the context. The Access Review Agent pre-analyzes the access: here is what is normal for this role, here is what is anomalous, here is what has not been used in 90 days. Reviewers assess the agent’s recommendations, not a wall of data.
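That pre-analysis amounts to partitioning each grant into buckets before the reviewer ever sees it. In this sketch the peer baseline, last-used dates, and the 90-day cutoff are all assumptions:

```python
from datetime import date, timedelta

today = date.today()
peer_baseline = {"github", "jira"}  # what peers in the role hold (assumed)

# One employee's grants with last-used dates (illustrative data).
grants = {
    "github": today - timedelta(days=3),
    "jira": today - timedelta(days=10),
    "prod-db-admin": today - timedelta(days=200),
}

for perm, last_used in grants.items():
    if (today - last_used).days > 90:
        verdict = "recommend revoke: unused in 90+ days"
    elif perm in peer_baseline:
        verdict = "normal for this role"
    else:
        verdict = "anomalous: review manually"
    print(f"{perm}: {verdict}")
```

The reviewer's job changes from "stare at a wall of permission names" to "confirm or override three labeled recommendations," which is a decision a manager can actually make well.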
The models are powerful enough. LLMs today can reason, classify, and generate at impressive levels. The bottleneck is not the model. It is the context you feed it.
A generic model with no knowledge of your environment will produce generic output. We call this discipline context engineering: structuring identity data, policies, and organizational knowledge so that AI can reason over it effectively.
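In practice this means assembling structured context before the model is asked anything. A hypothetical sketch of a prompt built from identity context, echoing the "Developer Role" example above (the fields and policy text are invented for illustration):

```python
# Hypothetical context assembly for an LLM asked to classify one entitlement.
# The schema, grants, and policy excerpt are all illustrative assumptions.
entitlement = {
    "app": "Salesforce",
    "name": "Developer Role",
    "raw_grants": ["ModifyAllData", "AuthorApex"],
    "used_by_teams": ["Platform Eng"],
}
policy_excerpt = "ModifyAllData is treated as admin-equivalent access."

prompt = f"""You are classifying an entitlement for risk.
Entitlement: {entitlement['name']} in {entitlement['app']}
Underlying grants: {', '.join(entitlement['raw_grants'])}
Used by: {', '.join(entitlement['used_by_teams'])}
Relevant policy: {policy_excerpt}
Respond with a risk level and a one-sentence plain-language description."""

print(prompt)
```

The same model, asked about "Developer Role" with no context, can only guess from the name; with the underlying grants and policy attached, it can say what the permission actually is.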
Models are commoditizing. Context is the secret ingredient.
Here is the uncomfortable truth about identity today: the industry has not actually solved least-privilege access. Most identity solutions are workflow systems. They can route an access request to an approver and log the decision. That is valuable plumbing. But it does not answer the question that actually matters: should this person have this access?
That question has gone unanswered for years because answering it is genuinely hard. It requires understanding what the permission actually grants, what the person's role requires, what their peers have, what they have actually used, and whether the access still makes sense given how the organization has changed. No human team can do that analysis across hundreds of apps and tens of thousands of permissions on an ongoing basis. So in practice, most access gets approved, most reviews are rubber-stamped, and least-privilege remains an aspiration.
That was an acceptable trade-off when the main risk was a human who had access they did not use. It is not acceptable when AI agents inherit those same permissions and operate autonomously at machine speed. The blast radius of over-privileged access just changed by orders of magnitude.
This is why the shift to agentic identity management matters right now. The companies that wait will spend the next three years drowning in a backlog that grows faster than they can work through it. The companies that move will 10x their identity team's capacity without adding headcount.
When you are evaluating identity platforms today, the right question is not which product has the longest feature list. The question is whether the system can actually get smarter about your environment over time. That comes down to three things: the context it builds about your organization, whether it learns from your feedback, and whether it acts on what it finds rather than just reporting it.
This is what we are building at Lumos. You can use Lumos on top of any identity system you currently have, whether it is a PAM, IGA, or SSO solution. Check us out; we'd love to show you.
Book a 1:1 demo with us and enable your IT and Security teams to achieve more.