
The End of Rubber Stamping: Introducing Agentic User Access Reviews

Discover how Lumos Agentic User Access Reviews eliminate rubber‑stamping with AI‑driven context, usage verification, anomaly detection, and auto‑remediation to shrink review cycles, tighten least privilege, and deliver audit‑ready evidence.

December 9, 2025
Janani Nagarajan
Product Marketing @Lumos

For the last decade, User Access Reviews (UARs) have been one of the most painful rituals in security. Every quarter, teams export tens of thousands of rows of entitlements into spreadsheets, email them to hundreds of managers, and brace for the churn.

But we know the reality. Managers quickly get overwhelmed. They lack context. They don’t know why a user has access or whether they even use it. So they do the only thing they can to survive the audit and meet the deadline: Select All > Approve.

This "Rubber Stamping" epidemic isn’t just operational waste; it is a massive security failure. It leaves dormant accounts open, privilege creep unchecked, and toxic access hiding in plain sight.

Today, we are changing that. We are thrilled to launch Lumos Agentic User Access Reviews (UARs): the industry’s first review system powered by autonomous AI agents.

By eliminating guesswork and manual "row-checking," we are moving to a new era of agent-led verification. AI agents recommend the right decision with a rationale, surface what’s risky or changed, and help turn review decisions into real enforcement with audit-ready evidence.

Here is exactly how it works and why we built it. 

The Shift: From Raw Data to Reason-Backed Decisions

The fundamental flaw in traditional IGA tools is that they present raw data and ask humans to interpret it. To make a safe decision, a manager has to evaluate a complex set of signals for every single line item:

Identity Context + Usage History + Peer Baseline + Permission Sensitivity = Decision

Expecting a human to do this accurately across hundreds of items in a matrix of apps and identities is not realistic. So we built Albus, our AI Identity Agent, to do that reasoning for them. 

Under the Hood: 3 Phases of Agentic UARs 

How does an agentic approach ease the manual process? By following a flow that mimics a human security analyst while bringing machine speed and scale. Every campaign breaks down into three phases: total visibility, deep AI-powered analysis, and automated enforcement.

To see how Lumos runs user access reviews with AI-powered review insights and suggested actions, watch this short demo video:

Phase 1: Get Complete Visibility Across Humans and Non-Human Identities

UARs break down when reviews only cover employees and ignore the accounts that actually power your systems. Service accounts, bots, and other non-human identities often sit outside the normal review flow, and they quietly accumulate privilege.

Lumos brings humans and NHIs into a single review experience, so you can certify access consistently across your environment. This closes the classic blind spot where over-privileged automation accounts hide, and it makes the campaign scope match the real attack surface.
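To make that unified scope concrete, here is a minimal Python sketch of how human and non-human identities could be modeled in a single campaign. The record shape, field names, and routing rule are illustrative assumptions for this post, not the Lumos data model.

```python
from dataclasses import dataclass
from typing import Literal, Optional

# Illustrative only: one identity record shape for humans, service accounts,
# and bots, so all of them land in the same review campaign.
@dataclass
class Identity:
    identity_id: str
    display_name: str
    kind: Literal["human", "service_account", "bot"]  # NHIs included by design
    owner: Optional[str] = None  # manager for humans, accountable owner for NHIs

def reviewer_for(identity: Identity) -> str:
    """Route humans to their manager and NHIs to their accountable owner,
    so automation accounts can't slip outside the campaign scope."""
    if identity.kind == "human":
        return identity.owner or "manager-of-record"
    return identity.owner or "security-team"
```

The point of the single record shape is that campaign scope is defined once, across every identity, rather than maintained separately for employees and automation accounts.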

Phase 2: Turn “Risk Scores” into Plain-English Explanations

Traditional tools hand managers an abstract score and a giant table of entitlements. Albus does the opposite. It explains the "why" in natural language, right next to the decision. Unlike basic automation that just routes tickets, Albus acts as a Tier 1 Security Analyst. Before a human ever sees the review campaign, Albus ingests data from your IdP, HRIS, and downstream apps to run a "First Pass" analysis and produce approve/reject recommendations.

Under the hood, this is powered by three engines:

  1. The Activity Engine: Usage Verification
  2. The Peer Group Engine: Anomaly Detection
  3. The Sensitivity Engine: Risk Scoring

Let's take a closer look at each.

1. The Activity Engine: Usage Verification

Most tools stop at "Last Login." Albus goes deeper and queries downstream application APIs to determine granular usage.

  • The Logic: "Has this user logged into this specific app or utilized this specific permission set in the last 90 days?"
  • The Output: If the answer is no, Albus flags it as Dormant Access and pre-fills the decision to REMOVE with a clear explanation: "User hasn't logged in for 90+ days."
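As a rough illustration of that logic, here is a minimal Python sketch of a dormancy check. It assumes a `last_activity` timestamp fetched from the downstream app’s API; the 90-day window and the return shape are assumptions for the example, not Lumos internals.

```python
from datetime import datetime, timedelta, timezone
from typing import Optional

DORMANCY_WINDOW = timedelta(days=90)  # assumed threshold from the example above

def classify_usage(last_activity: Optional[datetime],
                   now: Optional[datetime] = None) -> dict:
    """Suggest a decision with a plain-English reason based on granular usage,
    not just the IdP's last-login timestamp. Expects timezone-aware datetimes."""
    now = now or datetime.now(timezone.utc)
    if last_activity is None or now - last_activity > DORMANCY_WINDOW:
        return {"suggestion": "REMOVE",
                "reason": "User hasn't logged in for 90+ days."}
    return {"suggestion": "KEEP",
            "reason": "Active within the last 90 days."}
```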

2. The Peer Group Engine: Anomaly Detection

This is where the agentic capability shines. Albus dynamically clusters users based on HRIS attributes (Department, Title, Manager) to build a baseline of "normal" access.

  • The Logic: "User A is a Marketing Manager. 95% of Marketing Managers utilize HubSpot, but User A also has write-access to the Production Database."
  • The Output: Albus flags this as a Role Anomaly—a deviation from the peer group—and prioritizes it for immediate scrutiny.
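The same peer-group idea can be sketched in a few lines of Python. This assumes one row per (user, entitlement) with HRIS attributes attached, and a simple majority threshold to define the baseline; the real clustering is certainly richer than this.

```python
from collections import Counter, defaultdict

def peer_baseline(assignments: list, min_share: float = 0.5) -> dict:
    """Baseline per (department, title) peer group: entitlements held by at
    least `min_share` of that group's users. Rows look like
    {"user": ..., "dept": ..., "title": ..., "entitlement": ...}."""
    groups = defaultdict(list)
    for row in assignments:
        groups[(row["dept"], row["title"])].append(row)

    baseline = {}
    for key, rows in groups.items():
        users = {r["user"] for r in rows}
        counts = Counter(r["entitlement"] for r in rows)
        baseline[key] = {e for e, c in counts.items() if c / len(users) >= min_share}
    return baseline

def role_anomalies(assignments: list, baseline: dict):
    """Yield entitlements that fall outside a user's peer-group baseline."""
    for row in assignments:
        if row["entitlement"] not in baseline.get((row["dept"], row["title"]), set()):
            yield {**row, "flag": "Role Anomaly"}
```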

3. The Sensitivity Engine: Risk Scoring

Not all access is equal. Albus analyzes permission metadata to distinguish between safe (Read/Member) and toxic (Admin/Delete) scopes.

  • The Logic: "Does this combination of entitlements violate Separation of Duties (SoD)?"
  • The Output: High-risk permissions are surfaced at the top of the queue, ensuring managers spend their energy on the 5% of access that actually matters.
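A toy version of sensitivity scoring might look like the sketch below. The scope weights and the SoD conflict list are placeholder policy for illustration, not a shipped ruleset; the point is ordering the queue so toxic access surfaces first.

```python
# Placeholder policy for illustration only.
SCOPE_WEIGHTS = {"read": 1, "member": 1, "write": 3, "delete": 4, "admin": 5}
SOD_CONFLICTS = [{"payments:create", "payments:approve"}]  # hypothetical SoD rule

def risk_score(entitlement: dict) -> int:
    """Score an entitlement by how destructive its scope is."""
    return SCOPE_WEIGHTS.get(entitlement.get("scope", "read"), 1)

def sod_violations(user_entitlements: set) -> list:
    """Return any conflicting entitlement combinations the user holds in full."""
    return [pair for pair in SOD_CONFLICTS if pair <= user_entitlements]

def prioritize(entitlements: list) -> list:
    """Sort the review queue so the riskiest items sit at the top."""
    return sorted(entitlements, key=risk_score, reverse=True)
```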

We designed Albus to speak human. Instead of a score, we inject Natural Language Explanations directly into the review interface.

  • Safe: "Matches peer group usage. Active within the last 30 days."
  • Caution: "New access assigned since last review."
  • Risk: "Unused for 90 days. Misaligned with Peer Group (Engineering)."
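Tying the three engines together, the label and explanation could be composed roughly like this; the field names (`dormant`, `anomaly`, `newly_assigned`, `peer_group`) are assumptions standing in for the engine outputs above.

```python
def explain(item: dict) -> tuple:
    """Collapse engine signals into a reviewer-facing label and explanation."""
    if item.get("dormant") and item.get("anomaly"):
        peer_group = item.get("peer_group", "Unknown")
        return ("Risk",
                f"Unused for 90 days. Misaligned with Peer Group ({peer_group}).")
    if item.get("newly_assigned"):
        return ("Caution", "New access assigned since last review.")
    return ("Safe", "Matches peer group usage. Active within the last 30 days.")
```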

This allows humans to focus entirely on the 'last mile': verifying the high-risk exceptions and anomalies that actually require investigation and judgment. With the noise removed, our customers have compressed their review times from months to weeks.

Phase 3: Close the Loop with Automated Remediation

A review only reduces risk if the decision is enforced. In the legacy world, “revoke” often means “create a ticket and hope it gets done later.” That gap is where risk lingers, auditors get skeptical, and teams end up doing manual cleanup after the fact, or worse, facing failed audits and security breaches.

Lumos Agentic UARs close the loop immediately. When a reviewer rejects access, Lumos triggers automated remediation through downstream integrations to deprovision access, downgrade roles, or remove licenses right away. Every action is logged with audit trails, so there is no ambiguity about what changed, when it changed, and who approved it.
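In code terms, closing the loop amounts to reacting to the reject decision immediately and writing the evidence as you go. The sketch below assumes a hypothetical `connector` object with a `revoke()` method standing in for the downstream integration; it is illustrative, not the Lumos API.

```python
from datetime import datetime, timezone

def remediate(decision: dict, connector, audit_log: list) -> None:
    """On a rejection, revoke access right away and append an audit record of
    what changed, when it changed, and who approved the change."""
    if decision["outcome"] != "reject":
        return
    result = connector.revoke(decision["user"], decision["entitlement"])
    audit_log.append({
        "user": decision["user"],
        "entitlement": decision["entitlement"],
        "action": "revoked",
        "approved_by": decision["reviewer"],
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "result": result,
    })
```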

End your campaigns with Lumos-generated access review reports that package decisions, rationale, and remediation evidence into audit-ready documentation. The result is simple: no tickets, no lag time, proof of compliance, and a tighter attack surface.

Ready to Stop Guessing?

We are entering an era where the volume of access, driven by the explosion of apps and identities, is far outpacing the human ability to govern it manually. By moving to an agentic approach, we aren't just making reviews faster (though our customers are seeing 6x faster completion times). We are making our customers safer and more operationally productive.

We are moving from "Rubber Stamping" to "Verified Security."

Agentic User Access Reviews are available today for all Lumos customers.

New to Lumos? Request a Demo to see it in action.
