Find out how to eliminate 96% of identity breaches without tearing down and replacing your existing systems. Wade Ellery and John Ross Petrutiu from Radiant Logic show you how to secure access, reduce identity-related risks and automate IAM controls. Concrete solutions, Zero Trust security, proven results you can count on.
Wade: Well, I think we can probably kick off. Are we recording? Make sure that's happening. It looks like we're good to go.
All right. Well, thank you everyone for joining us today. This is Radiant Logic's webinar on Trusted Identities in the Age of Zero Trust. What's interesting to me — actually, a very positive experience for me — is that I think we are now finally in the age of Zero Trust. I saw it coming for quite a while, and I'm very excited now to see it getting tremendous traction in the business world.
So today we're going to talk about the criticality of identities when you start talking about Zero Trust — or really anything you do in the identity management space — where identity becomes the foundation and the linchpin of your complete security platform.
I'm joined today by JR, who is our Senior Solutions Consultant with Radiant Logic. My name is Wade Ellery. I am the Field CTO at Radiant Logic, and we'll be presenting insights today into the impact of identity on the security model, and how to build out your platform to increase overall security with identity, again, as the linchpin of that model.
So what we're realizing now — and we've seen an evolution in identity management over the decades: we started out with single sign-on, we went to provisioning, we went to governance, all to solve problems within the identity infrastructure and make business more effective. But what's happening now, and we see it in the news every day, is that breaches are the number one threat to organizations today. It's no longer about business efficiency. It's no longer about access to applications. It's the threat that an outsider is going to come in and either ransomware your environment and lock it up so you can't use it, steal your proprietary information, or steal your identity data. This is the number one challenge now.
Ninety percent of the organizations out there right now will be breached or already have been. And these breaches can currently be traced back to the compromise of an account — the compromise of an identity: a user identity, a service account, a local account, some machine account or non-human entity. These are the entryways into the network now. Where we used to have to find our way through a firewall, as someone said recently, an attacker doesn't have to break into your network — they just have to log in. Because everything is so distributed now, compromising an account is the way into the system.
So how do you counter something like that? How do you deal with the fact that compromising your environment has become so easy? JR, how would you approach that?
JR: Absolutely. That’s a good question. I think, just to your point Wade, what’s happened here is an evolution of the security perimeter. When we’re talking about the traditional security perimeter, initially you were talking about network boundaries — the idea that you have a clear distinction between inside or outside the org.
Over the last few years, we’ve seen an evolution away from that model to, like you said, a distributed model where resources exist in many different locations, both physically and in different cloud repositories. And users themselves aren’t always working from within a physical building — a lot of work from home, work from anywhere.
And so what we see here is a change in the threat landscape, away from the traditional perimeter towards identity as the perimeter itself. The idea being that, like you said, if a user account, a service account, or an admin account is compromised, that will lead to the ability to compromise additional services across the organization.
The real shift here is that threat actors have also realized that it’s actually much easier to compromise an account than it is to compromise software. If you have a lot of different tools in place like single sign-on, MFA, or IGA to secure access to accounts, it then becomes a question of: is it easier to find a flaw in a piece of software, or is it easier to social engineer your way into access to an account which has legitimate access rights? By impersonating a user, you gain access to critical resources within the company rather than having to find a flaw within the software. It’s really about what’s simplest for the threat actor — the easiest, quickest approach with the least amount of effort. And generally speaking nowadays, that’s compromising the identity.
Wade: It sounds like that adage of you don’t have to run faster than the bear — you just have to run faster than your buddy next to you when the bear is chasing you. So you want to look and see what your weakest point is in your network. And over time, that weakest point has evolved to be the account itself.
So it sounds like we’re hearing about this not just from our own experiences, as you and I have had quite a few in this area, but we’re hearing it from across the industry — from large customers, from analysts, from governing bodies. They’re all coming back and basically validating the same concern: that identity is now the only place left — or the critical piece — that you have to secure in order to secure your environment.
And unfortunately, over time, we have created a tremendous amount of technical debt, a lot of it centered around identity itself, and that has caused a tremendous increase in the vulnerability of identity — especially when you talk about service accounts. Service accounts tend to carry a higher level of access while operating more anonymously within the network, and they are much less likely to be governed and managed, which makes them very susceptible. And as a couple of quotes from leaders in the industry point out, the advent of the cloud, and the distribution of our identities outside our organization into other trusted organizations, creates a real challenge: how do I secure my own identities when they live within a third party?
JR: Just to add to that — because of the evolving landscape, what we see increasingly is that traditional measures fall short: the traditional network perimeter, and even traditional tools. Look at the classic IGA approach. In essence, IGA is glorified automation to a large degree, where you're automating onboarding and offboarding tasks. There is some level of analytics included in most traditional IGA platforms. But what you end up with a lot of the time is deferred change — it may take a long time for changes to propagate across the org. Additionally, you end up with a somewhat static authorization model, where through your governance engine you're building things like static groups and roles which, even if they're dynamically populated based on a set of conditions, are necessarily making a decision before the time of access. We're talking about role-based or group-based access control.
Wade: So it sounds like we have a lot of inherent challenges just in the way things have been engineered and evolved over time. Let’s take a look and see what we can do to try and heal this wound.
Clearly, what happens when identity fails? Well, this is the classic compromise of a network. In fact, I do a lot of commentary for our organization on current events. There's breaking news of a breach somewhere — at NIH or someplace else — and people look for commentary from industry professionals to highlight the causes, the effects, or the continuous nature of some of these. And I'm now getting requests from our marketing team on a daily basis to comment on a breach. It is becoming ubiquitous.
What was interesting to me was a quote I read recently that said two years ago, it used to take about a month for an attacker to get into your network, compromise enough resources to gain some level of control, and be able to take over access to vital information, put identity data on the dark web, or lock up the platform with a ransomware model. But now with the advent of AI, with the democratization of attacks, with platforms for rent on the dark web to do these attacks, you’re down to less than thirty minutes to compromise an account and gain control in an environment.
So if you've got a platform that's doing your audit, review, and access verification on a twenty-four-hour cycle, because it takes that long to load the data every night, you're going to miss something. I come in at eight in the morning, work for a few hours in your environment, grab everything I need, hide myself, clear the logs, and I'm gone by noon. You never saw me. And I can do that over and over again, because I'm undetected by the systems in place today.
So really, JR, what do we do in a situation like that when our existing eyes on glass can’t see the actual negative events taking place in the background?
JR: That’s a good question. I would actually extend that a little bit and say: what do we do in a situation where the tools in place do detect a threat but aren’t able to take action quickly enough? And that’s really the scenario we’re illustrating in this slide.
This is an actual use case we saw with a customer of ours before deploying. They were in a situation where they had a breach. Their SOC did detect that there was an event happening, and it took them under thirty minutes to cut access for the compromised accounts. But the real issue was that within those thirty minutes, the bad actor was able to exfiltrate multiple terabytes' worth of data onto their own machines, and was then able to sell some of that data and leak the rest.
But back to your question, Wade — what can you do about this? It highlights the need for a more preventative approach to security, because real-time response is simply not fast enough. You need to detect, ahead of an attack, any vulnerabilities or issues with accounts — over-allocated access, things like that — and rectify those issues so that even when an account does get compromised (because it inevitably will), you've minimized ahead of time the impact that compromise or breach will have. So it's really about focusing on a more preventative approach rather than just responding to an event as it's happening.
Wade: So it sounds like, even though you’ve got excellent measures to counter a burglar once he’s in your house, you’d really rather live in a world where you prevent the burglar from getting in. The fact that you have to react to them and fight them off is the last resort. If you can do something that actually prevents that and cuts down on the capability of that person to compromise your network, then you’re really ahead of the game. You’re no longer reactive. You’re being preventative.
That seems like a great direction to take this conversation. But is that something we can actually do? Because identity right now is apparently highly compromised. If this problem were easy to solve, it wouldn't be so ubiquitous today — with three out of four organizations compromised, and eighty-six percent of those compromises involving some kind of overprivileged account.
So what is the step forward? What is the method we want to take to try and harden our identities? What can we do? Are we ripping and replacing everything we have, or is there another way to make this a more successful endeavor?
JR: That's a good question. I think to begin to answer it, we need to look at why this is possible — why these types of attacks happen. It's about taking a step back and reassessing the current state within organizations: figuring out, first of all, what the attack surface is in terms of identities across the org, and doing some initial cleanup and analysis of the existing identities and accesses to very quickly pinpoint ghost accounts, over-privileged accesses, orphan accounts, illegitimate accounts, things like that. And as you're doing that, it's about trying to establish a risk level across the org — looking at all of these accesses and associating risk, so that other applications downstream can also leverage those risk scores.
For example, you might have accounts with legitimate access to a highly risky application — one that handles financial transactions, say. A user might require that access, but if you label it properly, other applications downstream — like the SOC — can watch behavior there and lock things down more quickly if you do detect a breach. And before a breach ever happens, you can reduce the level of access that accounts hold but don't actually require.
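To make that concrete, here is a minimal sketch of what such an initial cleanup pass might look like. The account records, field names, sensitivity weights, and staleness threshold are all invented for illustration; a real deployment would source these from its aggregated identity data and from application owners.

```python
from datetime import datetime, timedelta

# Hypothetical inventory entries; real data would come from aggregated sources.
accounts = [
    {"id": "svc-backup", "owner": None, "last_login": "2023-01-10",
     "entitlements": ["db:admin"]},
    {"id": "jdoe", "owner": "emp-1042", "last_login": "2025-06-01",
     "entitlements": ["crm:read", "finance:approve"]},
]

# Invented sensitivity weights; in practice these come from a risk catalog.
SENSITIVITY = {"db:admin": 10, "finance:approve": 8, "crm:read": 1}
STALE_AFTER = timedelta(days=90)

def assess(account, now=None):
    """Flag orphan/stale accounts and attach a simple additive risk score."""
    now = now or datetime.utcnow()
    findings = []
    if account["owner"] is None:
        findings.append("orphan: no linked human owner")
    if now - datetime.fromisoformat(account["last_login"]) > STALE_AFTER:
        findings.append("stale: no recent login")
    score = sum(SENSITIVITY.get(e, 1) for e in account["entitlements"])
    return {"id": account["id"], "risk_score": score, "findings": findings}

for acct in accounts:
    print(assess(acct))
```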
The steps to move towards this are, first of all, consistent data management processes across the entirety of the organization — making sure that identity data is aggregated into one place, maintained properly, and that everyone has the right level of access. Then preventative controls on top of that, which help you quickly detect and remediate any deviations from basic security principles like least-privilege access. And then, from there, using this aggregated data and these controls to perform in-depth assessments of accesses and security within the organization in a continuous way — continuously ingesting and observing any changes to accesses, new accounts created, new permissions allocated, things like that.
I'll move on to the next slide and we'll start to dig into how you can do this in a concrete way. But first, let's talk about the pillars of Zero Trust. Do you want to talk to this one, Wade?
Wade: Yeah. Building on what you just said — it sounds like observability. Visibility into your identity data is a critical first step. You can't manage what you can't see. And blind spots are what the bad actors are exploiting — the areas that you haven't locked down and cleaned up. And because of the amount of IT debt we have, there's a lot of work to do there.
It really goes back down to the identities at the attribute level: what have they been provisioned to, what attributes do they have. Because when you start doing Zero Trust, moving from right to left, you're starting with network access: can I even get on the network? Am I on a secure device? Am I coming from a secure location? Am I a trusted user? And then, where can I go within the network — network segmentation, controlling flow and security there?
Data access at the actual row and column level within databases is now policy-controlled in a Zero Trust model. Then there's application access and application operation — east-west operations between applications within the environment, service accounts and those models, devices — all the way up to user access to the applications themselves, all of it moving towards the Zero Trust model.
Every one of those policy decision points — at each of those areas that says “yes, this user can access this resource” — is evaluating identity data attributes about that user against the policy. If the policy says it’s okay, they gain access. If the policy says it’s a violation, they don’t. It’s the identity data that makes those decisions.
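As an illustration of that point, here is a toy policy decision point. The attribute names and the policy itself are made up for this sketch; the point is simply that the decision is computed entirely from identity attributes, so corrupted attributes mean corrupted decisions.

```python
# Toy policy decision point (PDP): the access decision is derived entirely
# from identity attributes. Attribute names and rules are invented.
policy = {
    "resource": "payroll-db",
    "deny_if": {"account_status": {"disabled", "orphaned"}},
    "require": {"department": {"Finance"}, "mfa_verified": {True}},
}

def decide(identity_attrs: dict, policy: dict) -> str:
    # Deny rules win first; then every required attribute must match.
    for attr, bad in policy["deny_if"].items():
        if identity_attrs.get(attr) in bad:
            return "DENY"
    for attr, allowed in policy["require"].items():
        if identity_attrs.get(attr) not in allowed:
            return "DENY"
    return "PERMIT"

user = {"department": "Finance", "mfa_verified": True, "account_status": "active"}
print(decide(user, policy))  # PERMIT, but only as trustworthy as the attributes
```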
And today, can you trust your identity data one hundred percent — to be accurate, to be complete, to be uncorrupted by a bad operator, to be something you trust to authorize all the access from the edge of your network all the way to your applications? That's where I think we have to focus our attention. And it highlights even more why identities become so critical here.
JR: Absolutely. So we've established some of the challenges today around identity-first security, especially around Zero Trust — where identity serves as the foundation for all of the other decisions being made within the organization, from a data perspective and a security perspective.
At its core, the identity data itself is what needs to be cleaned up. Accesses need to be restricted to the minimum necessary for users to accomplish their job. And there needs to be a proactive approach to making sure that the data stays in a good state over time.
The way we approach this with our customers is through what we call "get clean, stay clean" — and then, from there, use that clean data.
For the get clean piece, the concept is to first gain full visibility — full insight into the data that exists across the organization. This is basically creating a full inventory of user accounts, non-human accounts, and entitlements that those accounts can access.
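For illustration, here is a minimal sketch of that inventory-building step, correlating accounts from two hypothetical silos into one profile per person. Using email as the join key is an assumption made purely for this example; real correlation logic is typically multi-attribute and handles mismatches.

```python
from collections import defaultdict

# Hypothetical extracts from two silos; in practice these could come from
# AD, an HR system, cloud directories, application account stores, etc.
ad_accounts = [{"sAMAccountName": "jdoe", "mail": "jdoe@example.com",
                "groups": ["VPN-Users"]}]
hr_records = [{"employee_id": "emp-1042", "email": "jdoe@example.com",
               "status": "active"}]

def correlate(ad_accounts, hr_records):
    """Join accounts from two silos into one profile per person.

    Email is the join key purely for illustration."""
    profiles = defaultdict(dict)
    for rec in hr_records:
        profiles[rec["email"]].update(rec)
    for acct in ad_accounts:
        profiles[acct["mail"]].setdefault("linked_accounts", []).append(acct)
    return dict(profiles)

for email, profile in correlate(ad_accounts, hr_records).items():
    print(email, "->", profile)
```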
From there, once we've created this full catalog, the goal is to apply controls and fix any issues with accesses — remove extraneous access and over-allocated privileges, identify and deactivate orphan accounts or attach them to a user if they still need to be there. Really reduce the scope of accounts first of all, and then the accesses that they have.
In order to stay clean, the idea is to continuously detect and ingest any changes that occur to those identities — newly created identities, new permissions allocated to users, things like that — then identify any quality issues, like permissions that were added but shouldn't be there, and take the results of that identification to automatically remediate: either revoke accesses, or validate them with line managers or application managers who can confirm that the access should indeed be there. And then enforce regular access reviews to validate the legitimacy of those permissions — consciously confirming with your managers that the accesses in place are indeed good and shouldn't be revoked.
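As a rough sketch of that remediation step, the triage logic could look something like the following; the risk threshold and the shape of the change event are invented for the example, and real products would be far richer.

```python
# Invented threshold: high-risk policy violations are revoked automatically,
# everything else goes to a human (line or application manager) to validate.
AUTO_REVOKE_RISK = 8

def triage(change):
    """Route a detected access change to auto-remediation or manual review."""
    if change["violates_policy"] and change["risk"] >= AUTO_REVOKE_RISK:
        return ("auto-revoke", change["account"], change["entitlement"])
    return ("queue-for-manager", change["account"], change["entitlement"])

print(triage({"account": "jdoe", "entitlement": "finance:approve",
              "violates_policy": True, "risk": 9}))
# ('auto-revoke', 'jdoe', 'finance:approve')
```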
And then finally, allowing applications to leverage that data in order to make just-in-time access decisions. This is what moves you towards the Zero Trust security model — having clean, accurate data; having a mechanism through which you can guarantee that it stays clean and accurate; and then being able to present it to your downstream applications so they can make policy-based decisions in real time as effectively and as quickly as possible.
Wade: Excellent. And it seems to me there are three major functional areas here for this to operate.
One is the introduction of AI, because with the idea of getting clean, the sheer scale — the number of identities multiplied by the number of attributes across different platforms, different formats, and different identifiers — is more than any human can wrap their head around. If you're a large organization with thousands, tens of thousands, or hundreds of thousands of employees, you have a massive amount of data to go through. AI is built for exactly that kind of task: looking at massive amounts of data, finding commonality, finding anomalies, looking for patterns, and learning as it goes. So implementing AI in this platform has tremendous value at a number of points.
The access review and stay-clean capability is also AI-driven: you can build context around the review process so that the human reviewer has more information to make a more valid decision. Because the worst thing you can do is clean up your environment and then go through rubber-stamping exercises to verify it's staying clean while it slowly gets dirtier and entropy creeps into the system.
And then the ability to recognize change in real time is critical — to recognize an attempt to change information in a way that compromises an account, whether by a bad operator, by inadvertent activity, or by something out of band. Historically, admins go in and make changes directly, because that's the way they've operated for twenty years. But if you've got a controlled, locked-down system, changes should only come from authorized sources of truth — not from anyone arbitrarily making changes. So you have to recognize those changes and back them out in real time.
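A minimal sketch of that idea, assuming a hypothetical change-event feed and an invented list of authorized sources: writes from anywhere else get rolled back to the last known-good value.

```python
# Invented set of systems allowed to write identity data; anything else
# (for example, a direct admin edit) is treated as out-of-band and reverted.
AUTHORIZED_SOURCES = {"hr-feed", "iga-provisioning"}

# Last known-good state for one account (hypothetical).
baseline = {"jdoe": {"memberOf": ["VPN-Users"]}}

def handle_change(event):
    """Accept writes from sources of truth; roll back everything else."""
    account, attr = event["account"], event["attribute"]
    if event["origin"] in AUTHORIZED_SOURCES:
        baseline[account][attr] = event["new_value"]
        return "accepted"
    return f"reverted {attr} to {baseline[account][attr]}"

print(handle_change({"origin": "manual-admin-edit", "account": "jdoe",
                     "attribute": "memberOf",
                     "new_value": ["VPN-Users", "Domain Admins"]}))
```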
And then the use-clean piece is the capability of delivering this data to other applications. A lot of the components in the identity data management stack function solely for themselves — they gather identity data together, but only to do provisioning activities; they don't make that data available for authorization. Or they do it just for PAM operations but don't make it available for Zero Trust. The ability to make this information — now pristine and a source of truth — available to all the different applications that need to consume identity data is critical, so that you're working off the same set of information when you're authorizing access with your SAML token, when you're onboarding a user with certain access rights, and when you're granting someone Zero Trust permission to access a resource. You want to do all of this off the same clean data — one place to audit, one place to manage, and one place to trust.
JR: Absolutely. So the next thing we can look at is a few ways to approach this challenge — how do we actually get clean, stay clean, and then use that data?
We have three main things we wanted to look at. The first one is around creating a single pane of glass. The idea here is that if you don’t have a single place to see all these accesses and all these accounts, how can you go about the task of cleaning things up and reducing accesses? This is the core, most important upfront part — creating the single pane of glass. Do you want to comment on this one, Wade?
Wade: Yeah. Just along the lines of what we said a moment ago — it’s critical that everyone’s working from the same piece of sheet music. If you’re trying to have a symphony that’s playing together and making harmonious sounds, they’ve all got to be using the same information. They all have a little bit different way that the notes are constructed for them, but they’re playing the same symphony.
And that’s critical because if you let each section in your orchestra write their own interpretation of a symphony, you’re going to get a bunch of crazy, very uncomfortable noise and not music. This is the critical nature of identity management — we need to really understand this is a holistic model. This is a complete posture across the full span of my identity environment that we’re trying to manage. And that starts with everyone using the same information to make the decisions that they make at their level.
JR: Absolutely. And part of this — the idea is to end up with a single pane of glass for visibility. But if you end up with a single source of truth as a result, it also helps reduce the number of credential checks that have to happen within the org. There's less need for password duplication and synchronization, which inherently reduces your attack surface.
Also, by centralizing authentication and authorization, you have one place to control those authorizations downstream. This also helps simplify audits and reduce risk on the compliance side. If you can quickly understand what's going on, audit it, and track it, you can very quickly produce reports for auditors and prove you're in a compliant state.
And then finally, if you have trusted data — if you know it’s in a single place and clean — from there you can easily automate a lot of tasks with the confidence that those tasks will be executed on that clean data, and that the results they produce will be clean as well.
Wade: And I think anyone that’s deployed an identity management platform — whether it’s a governance platform, a provisioning platform, single sign-on, or a PAM platform — understands what a heavy lift it is to get all the data into the system. That’s the first big hurdle you have to get over when you’re deploying a platform.
If you can do this once, do it correctly, and then make that data easily available to every system that needs it — especially as you start to roll out more and more policy decision points closer and closer to the resources you’re protecting — you don’t want to go through that process of reconnecting, aggregating, normalizing, and cleaning the data seven, eight, nine times across your organization. This is where you should focus your effort once and then reuse it as much as possible. That’s a tremendous boost in efficiency. In fact, Gartner indicated you’d double the ROI on your IGA deployment if you did the data hygiene, data cleanup, and single source of identity data upfront before you started to embark on that project. So it’s definitely something to invest in because the dividends are tremendous.
JR: Absolutely. And closely related to this is a second task. As you’re going through the task of creating this single source of truth — a single pane of glass — what becomes apparent very quickly is that there are most likely a large number of sources within an organization, a large number of systems which contain identity-related information. Traditionally these would be directories; these could also be cloud repositories. But creating this single pane of glass highlights a need around potentially consolidating and modernizing a lot of this infrastructure.
There are two main challenges to this task. One is technical in nature, the other is more qualitative.
On the technical side, when we talk about infrastructure consolidation, we're talking about cleaning up both on-prem and cloud resources to reduce the need to sync fragments of identity across different places. It also has to do with decommissioning, and with performing modernization that's long overdue. For example, any out-of-support legacy directories need to be replaced anyway. The same goes for other types of identity and access management solutions — MIM is a great example: it's been out of mainstream support for years, and extended support ends in 2029. So there's a need to replace a lot of these tools, and part of a Zero Trust modernization project can be to clean up a lot of this tech debt.
The other challenge is qualitative in nature — it's about a deeper understanding beyond just the data itself: understanding what the data means and the accesses it provides. This is everything around orphan account removal, streamlining or realigning access, potentially changing your access model. As you shift towards a policy-based access model, you still have roles in place that define a base level of access, plus additional policies based on attributes that look at additional conditions. But to manage this more effectively, you might also look at changing your role model behind the scenes. There's a lot of cleanup and streamlining that can happen as part of this modernization and Zero Trust deployment.
Wade: And I think what’s critical to recognize here is something we as an industry finally started admitting to our customers a few years ago with Zero Trust — that this is a journey. This is not one product. This is not one project. This is an effort you undertake to continuously chip away at this iceberg. There’s a lot of tech debt in most organizations. You want to focus on the low-hanging fruit early. You want to focus on the high risk early. But you want to put in processes that help you move down the line and eat that elephant one bite at a time — and get it all consumed.
Equally critical is that you have to put in measures to maintain that data, to keep it clean — because otherwise you’re going to be in a continuous loop of cleaning information that gets dirty as soon as you let go of it.
And what's critical to recognize is that we're not talking about just cleaning up the data in the master user record, in the unified data. We're talking about writing back to the sources and remediating those errors in the original sources of truth. Because no matter how hard you work, there are still going to be applications in your environment talking to the original data sources that aren't able to redirect themselves to a federated access or Zero Trust model. So you need to make sure that the data — everywhere it exists, wherever it's been synchronized, wherever it's been distributed — picks up the cleanup changes. That data needs to reflect the quality of the data in your master record.
JR: Absolutely. And there’s something you alluded to in there that leads to our last point — this is our second-to-last slide. It’s the idea of taking reviews performed for compliance purposes and turning them into living controls. As you said Wade, a lot of the time people perform a cleanup but immediately once that cleanup is done, it’s no longer effective — because the second after you say something is valid, a condition changes and suddenly it’s no longer valid.
This is especially the case with compliance reviews or audits. A lot of the time this is a once or twice a year task where the audit team goes through, creates a list of suggestions and improvements, and those changes are applied — but immediately afterwards things have changed even since the audit was finished. And now you have six months or a year until the next recertification.
Wade, do you have comments on this one?
Wade: Yeah, just to reiterate — we explained at the very beginning how fast a bad operator can move in your environment. So when something is being altered in real time, you need policies in place that will recognize it in real time — that can block it, alert on it, or remediate it. Because a lot of the time, ITDR systems are looking for behavioral patterns or network traffic. They're not looking at the identity data itself, which a bad operator may be manipulating to escalate their own privileges and move within the organization. So you need to be watching the front door and the back door at the same time. And the closer you can get to real time, the closer you can get to preventing something from going bad — or getting worse, if it is starting to go sideways.
And that, again, becomes critical — the ability to maintain that data is the insurance you're buying on top of all the work you're doing. It's like going to an appliance store, buying a brand-new oven, and being offered a three-year warranty. This stay-clean methodology is your warranty. Is all the work I put into cleaning up going to stay valuable? Yes, because I'm maintaining it, and it becomes a lifestyle. Once you've implemented these processes, once you have these systems in place, they run automatically and help you stay clean. But you have to take that extra step and see it all the way through.
JR: Absolutely. And one way of keeping things clean, especially around reviews, is the concept of micro-recertification campaigns. The idea there is using a piece of software that looks at the changes that have occurred between review periods. This could be daily snapshots, or increasingly — when we're talking about identity security posture management — even real time.
The idea is to have a tool that will ingest any change that occurs and immediately evaluate it against the previously known clean state. If there's a new access that's anomalous in some way, it can alert users so they can verify and check off that yes, this is indeed expected. In other scenarios, it might prompt a larger access review, where you go through a batch of changes once a week — maybe the ten riskiest ones. But again, the idea is to have software in place that lets you very quickly pick up on those changes and turn these reviews into more frequent micro-recertification reviews — much less painful for users and administrators — to guarantee that you stay in a good, clean state over time.
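As a rough illustration of that snapshot-comparison idea (the entitlement data and risk weights below are invented), a micro-recertification pass might boil down to a diff like this:

```python
# Yesterday's certified-clean snapshot vs. today's, both hypothetical.
clean = {"jdoe": {"crm:read"}, "asmith": {"crm:read"}}
today = {"jdoe": {"crm:read", "finance:approve"},
         "asmith": {"crm:read", "db:admin"}}

RISK = {"finance:approve": 8, "db:admin": 10}  # invented weights

def micro_certification_items(clean, today, top_n=10):
    """Surface only the riskiest newly granted accesses for review."""
    items = []
    for account, entitlements in today.items():
        for added in entitlements - clean.get(account, set()):
            items.append((RISK.get(added, 1), account, added))
    return sorted(items, reverse=True)[:top_n]

for risk, account, entitlement in micro_certification_items(clean, today):
    print(f"review: {account} gained {entitlement} (risk {risk})")
```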
And so this leads us to the last slide, which is: what kind of timeline are you looking at when you’re performing this kind of get clean, stay clean approach?
The idea here is to look at this in terms of short, medium, and long term.
In terms of the short term — within the first month of this kind of modernization project moving towards Zero Trust — what you want to do is first map out all the different systems within your enterprise that manage identities. That’s identifying the different silos and repositories where identities reside, performing some initial analysis to assess the quality of that data — identifying some of those orphan accounts, making sure you can attach those to people, things like that. And then starting to define your master user record, which will be the one place that all applications go to gain access to data they need for authorization, as well as the single place to perform analytics on as changes come in from the backend sources.
Medium term — roughly the three-month mark — the idea is to start cleaning and consolidating the data itself: bringing it into one directory, or at least fewer directories than exist today, and if you're moving some of the data to the cloud as well, consolidating down to one tenant. It's also about starting to trigger access reviews on key events on a more regular basis — risky accesses granted to users, new privileges, things like that should trigger a micro-recertification so that somebody signs off on the change to make sure it's good. And from there, designing and implementing some of the governance policies and policy-based access control that will be in the final product.
And then finally, within the twelve-month span, it’s working on the maturity of the model and the deployment. The idea here being to automate reviews and detect anomalies in real time as they occur, but also to integrate the results — this master user record that we started defining within the first month — with other applications within the organization. Enterprise risk management applications, things like that. Basically taking all this work done to unify data and associate risk scores with it, and allowing other security-focused applications — within the identity team, SOC team, or security team — to take that data and make better decisions, flag anomalous behavior, and so on very quickly.
And then finally, it's about having systems in place — dashboards and analytics, continuous controls — that are applied in order to observe your identity data security posture and its evolution over time, with real-time changes taken into account.
Wade, anything to add?
Wade: Yeah. I just want to highlight a couple of things. One, again restating that this is a journey. You want to make sure you do the process in the right order. If you’re going on a trip from Los Angeles to New York, or from London to Paris, there are things you do first and things you do later. You don’t pick up your bags at the baggage claim in Paris before you’ve taken off from London.
It’s almost analogous to building a house. I have a lot of things to consider in building a house — heating and air conditioning, windows, the kitchen, the number of bathrooms. But all of this is built on a foundation. That foundation is identity data. And if you don’t have that foundation correct — if you don’t have it sized properly for the house you’re building, if you don’t have it strong enough to hold the second floor — then everything else you do later is not going to stand up over the test of time or the test of external attackers on your environment. So that foundation is critical.
It's like eating your vegetables, or cleaning the garage. No one likes to deal with cleaning up identity data. It's not the most popular part. We want to get to the dessert — dashboards and access requests granted. But you need to lay that foundation first, and then everything else will fall into place.
Even something like a merger and acquisition — absorbing a whole other organization into yours — starts with laying that foundation at the organization you're acquiring. What kind of data do they have? What's the condition of the information there? What kind of open doors am I inviting into my environment if I simply connect our two organizations together? I need to take that acquired organization through the same process — get it to a level of maturity where I'm comfortable joining it to my environment. And when you've built all these processes internally and you're using them consistently, it's much easier to add on another organization and get the business value out of a merger much more quickly, without introducing a whole new layer of security risk.
JR: Absolutely. Thank you, Wade. That’s all we have for today. Thank you everyone for attending. If you have any questions, please feel free to reach out. We’re a bit over time, so unfortunately we probably can’t do a Q&A today. We look forward to hearing from you. We will have some other sessions coming up as follow-ons to this that will focus more in-depth on some of the specific actions you can take in order to move towards Zero Trust. Wade, do you have any additional information on that?
Wade: Just to let you know, everyone who registered and attended today will get a copy of the slides and a recording of our session. We will be sending out invitations to the next set of sessions a little after the start-of-summer break. As JR mentioned, those will offer more in-depth analysis and recommendations around taking the next steps to make this work.
And lastly — JR, thank you very much. It's always a pleasure working with you. I appreciate the depth of your insights and your knowledge, and I look forward to continuing this series with you.
JR: Thank you, Wade. Same to you. Take care.
Wade: All right. Thank you, everybody.