Apple Device Hardening in an AI‑Driven Threat Landscape
AI is changing how attackers target Apple devices — and the pace is accelerating. Watch this SudoTalks session with Addigy’s Joel Cedano and Nicolas Ponce to learn what’s changed, what’s at risk, and what you can do about it right now.
hope that by the end of this presentation, you'll have some new insights and some actionable takeaways that you can start implementing in your work right away. Before we dive in, I just wanna take a moment to welcome any new attendees to SudoTalks. This is our monthly webinar series, designed specifically for IT admins like you. Whether it's new features, strategies, or best practices, SudoTalks is here to keep you informed and to keep you prepared. Today, you'll be joined by Nicolas Ponce, who's our VP of operations and security, and myself, Joel Cedano, from the product team. Collectively, we have twenty years of Addigy experience, and we're gonna be happy to answer any of the questions you have along the way. We will have a Q&A option on the webinar; please submit your questions through there, since things do get lost in the chat. We'll also have other members of the product team here answering questions, so feel free to fire away. If you see that little chat bubble on the top right of the slide, that means we're gonna send out a poll, so that's just a little heads-up there. Alright, let's get started. Today, we're gonna be discussing Apple device hardening within this new AI landscape. So what is Apple device hardening anyways? It's the process of securing your devices by reducing their attack surface. It's making sure we keep the doors locked, the windows shut, and the gates closed. It's really about making it as difficult as possible for attackers to find a way in. Now, we've all heard that Apple devices are secure out of the box, that they never get any viruses and never have any issues. And this is not entirely wrong. Apple devices do come with a large suite of security protocols and mechanisms: SIP, Gatekeeper, TCC, XProtect, notarization, and many more.
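As a quick aside for anyone following along at a terminal: a couple of those built-in protections can be spot-checked with standard macOS admin commands. This is just a minimal sketch; `csrutil` and `spctl` only exist on macOS, so the script prints a note instead of erroring out anywhere else.

```shell
#!/bin/sh
# Spot-check two of the built-in macOS protections mentioned above.
# Each check degrades to a note when the tool isn't present (non-Mac).
check() {
  label="$1"; shift
  if command -v "$1" >/dev/null 2>&1; then
    printf '%s: %s\n' "$label" "$("$@" 2>&1)"
  else
    printf '%s: (tool not present; run on a Mac)\n' "$label"
  fi
}

{
  check "SIP"        csrutil status   # System Integrity Protection
  check "Gatekeeper" spctl --status   # app assessment subsystem on/off
} | tee protections.txt
```

On a healthy Mac you'd expect SIP reported as enabled and Gatekeeper assessments enabled; anything else is worth investigating before layering on additional hardening.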
And while these are all great for personal computers, we often need additional hardening to make them enterprise ready, and the threat landscape is evolving. At the end of twenty twenty-four, we saw a hundred-and-one percent increase in macOS infostealers in a single quarter, and that's only increasing. Hackers are using existing valid developer IDs to notarize nefarious applications, which basically allows them to undermine things like Gatekeeper altogether. macOS hacking tools are being commercialized and are readily available for bad actors, really making it easy for anybody to jump in and start exploiting unprotected devices. And last year, we saw nine zero-days exploited in the wild, and I believe there's already one this year that dates back all the way to the iPhone 1. These are only a handful of the exploits we're seeing in the Apple ecosystem, but the trend is clear: bad actors are increasingly attacking Apple devices. And as Apple gains more adoption in the enterprise space, which is likely to accelerate with the release of things like Neo, which come at a lower price point, we expect this trend to continue. Now, AI is really reshaping and advancing that threat landscape in a few different ways, and the three ways we're gonna look at today are the economics of an attack (things are just getting cheaper to execute), the speed of those exploitations (AI just makes things so much faster), and the new and advanced threats that are now possible with AI. So the first thing we're seeing is that AI drops the cost of software development, and it's becoming a lot cheaper to run these large-scale attacks. There's also a lot more commercialized attack software being made. We're seeing phishing-as-a-service tools like Tycoon 2FA that bypass MFA at scale, and they only cost a hundred and twenty dollars. I think Microsoft reported that sixty-two percent of phishing attempts were coming from this type of tool.
We're seeing infostealers like Atomic Stealer, which are readily available for a thousand dollars a month and even come with support. This is the most popular malware on macOS right now. There's also ransomware-as-a-service, programs that come with affiliate programs to incentivize arbitrary attacks. We're also seeing individuals leverage tools like Claude Code, which can be accessed for as low as twenty dollars a month, to attack governments. There's a good case study from Anthropic on this, where they showed that a solo operator was able to jailbreak their tool, Claude Code, with a few thousand prompts. Then, over one month, they were able to breach ten different Mexican government agencies and extract a hundred and ninety-five million taxpayer records, something like a hundred and fifty gigabytes of data. And this was a single person with an AI tool. IBM also published a paper citing that what used to take sixteen hours of work to build sophisticated phishing campaigns now only takes five minutes. And that paper was, I believe, from last year, so these things are constantly accelerating. Overall, the cost of an attack has been slashed dramatically, and what would have required pretty much state funding can now be done with just a handful of AI engineers and a lot of AI agents. There's also a lot of competition in the AI space, so we can expect the cost of compute to drop as these companies compete, and that's really just gonna reduce the cost of these types of attacks. I think one of the tipping points here is gonna be when local models get strong enough, and then the cost basically just becomes the hardware and the electricity required to run these AI agents. Now, the next thing we're seeing is that the speed of these attacks is increasing too. There's research out of UC Santa Barbara that showed that CVEs can be exploited for one dollar of compute within fifteen minutes. The AI can read, design, and execute the exploit without any prior knowledge of that CVE in its training data.
And this is why it's more important than ever to scan for CVEs and patch them immediately. We're also seeing changes in spear phishing. Building these campaigns used to require lots of time and research; now AI agents can craft super sophisticated, personalized campaigns in record time. The best AI spear phishing campaigns are seeing a fifty-four percent click-through rate, which is insane to think about (a little over half), compared to just twelve percent for traditional ones. There's also data showing that eighty-two percent of all phishing emails are now using AI. So we'll likely see a movement away from generic phishing attempts toward something very specific, where they're mining your LinkedIn data, parsing your social media, and creating very targeted emails that look exactly like the ones you're getting from your existing vendors, using AI to bring all this information together and craft those email templates. And because they're able to execute these attacks so quickly and cheaply, we're just seeing an explosion of AI-powered scams: a one-thousand-two-hundred-and-ten percent increase in twenty twenty-five alone. It's gonna be difficult to be reactive with the speed at which these attacks are moving, which is why we need to make sure that our devices are hardened and our users are informed and ready to brace themselves against this onslaught of incoming attacks. And last but not least, we're seeing a whole new wave of attack types. Attackers aren't just targeting the humans in your organization, but also your AI agents. There are over two thousand two hundred malicious skills and agent packages (and these are just the ones they found) uploaded to GitHub with the intent of getting your AI agent to install them.
Once your AI agent installs them, this grants the attacker access to your systems, networks, and any other authorizations that you granted that agent. I'm sure you guys have heard about MCP servers and how they're the best thing ever. Well, if your agent installs one of these malicious skills, then the attackers also get access to all the authorizations that you granted through those various MCP servers. We're also seeing... yep, there's the poll that went up. I'll give it a second here so people can answer the poll, and then I'll continue. Joel, there's actually a question in the chat, and I may have missed it, but Brandon asked: what was the stealer you just mentioned right before Claude Code? I think it was on the slide right before this one, or two slides ago. Sorry, what was the question? I have to look at the chat. Oh, the stealer? Atomic Stealer. Yeah, somebody in the chat answered it. Yep. That one's pretty scary because you can just kinda buy it online, right? And then you can start exploiting, no problem. And sorry about that, guys, I'm not very good at giving the presentation while watching the chat, but I will look at the chat after I pass it on to Ponce here. Alright, and going back to some of the new threats we have: deepfakes are getting increasingly sophisticated. Bad actors can now clone your likeness (your look, how you talk, how you move, how you sound) with just three seconds of a recording. It's quite scary. There's a case here of a company where attackers were able to run an entire live video conference populated entirely by AI-generated deepfakes of the CFO and the executives on the call. On that call, they actually got some of the employees to wire over twenty-five million dollars to the attackers.
You know, if your boss is telling you to do something on a live call, you're likely to do it. Luckily, in this case, I believe they were able to revert the transaction, but these things are happening all the time. Just yesterday morning, I read about a similar attempt where they cloned an executive's voice and then started sending WhatsApp voice notes using it. So this is just the type of stuff we need to be aware of in this day and age as AI continues to improve. Malware is evolving too, and it's doing so as often as every fifteen seconds. Attackers are now developing polymorphic malware, which updates itself every few seconds. This means it rotates its name, its processes, its variables; it's rewriting itself live, in a way, trying to stay under the radar of static EDR signature databases. That's why it's more important than ever to have an EDR with behavioral detection. AI is advancing quickly, and attackers are the first adopters. We're gonna see plenty of new attacks surface in the coming years, so we need to be ready. Some of these attacks are gonna be built and executed entirely by AI agents, and that's the sort of threat we should be expecting. So what can we do about this? We need to harden our devices, primarily with the tools we have available to us, and we also need to keep our users informed. And this is where something like the Addigy security suite can come in handy. The Addigy security suite includes SentinelOne with EDR and MDR, like I was mentioning before, and what I like about SentinelOne is that it has that behavioral detection included. So even if there's polymorphic malware, as soon as it tries to, say, curl something out of your system, that's gonna get flagged. It has the MDR component as well.
So if you're sleeping, on vacation, or just busy, there's a second set of eyes that's gonna look over your environment for you, flag threats, and take action on your behalf. It also has CVE scanning. Again, these AI agents are acting very quickly once CVEs become available, so it's good to know which vulnerabilities are out in your environment, so you can patch that third-party software or those systems as needed. The other component of this is device compliance. With Addigy, you get CIS, NIST, CMMC, and DISA benchmarks out of the box. Within a few minutes, you can have your devices fully compliant with these frameworks. This includes hundreds of different controls that you can apply to those devices to really harden them and keep them safe. This is the locking the doors, closing the windows, putting up the gates around your devices. And I heard something that I thought was very interesting: hackers aren't hacking in, they're logging in. That's where something like conditional access comes into play. If somebody pulls off one of these advanced deepfake phishing attempts, or steals somebody's credentials through spear phishing, but they're not signing in from a managed device, then conditional access will prevent that login altogether and essentially protect your corporate data. These are the things we have inside the security suite, but we also provide ThreatDown, which has some of these other capabilities as well. And then through Addigy, you have your monitoring and remediation, maintenance windows, and other tools you can leverage to protect and harden your devices. Now I'm gonna pass it over to our chef. This is SudoTalks, so we like to do some live cooking. I'll pass it over to Ponce, and he's gonna give us a quick demo of things you can do inside the product to protect your environment. Awesome. Thank you, Joel. Let me share my screen. Alright.
Can everybody see my screen? Yes? Okay. Alright. So I'm just picking up where Joel left off, on the security suite page. As you can see, it's enabled here. Everybody may have some different iteration or variation, or may have some of these functions but not everything, so we'll try to break it down as much as possible and walk through each one of these in as much depth as we can in the remaining forty-five minutes we have left. So, as Joel mentioned, we have the SentinelOne integration, and it's point and click. In this case, we've already had the security suite enabled, and I'm just gonna hop over to my policies. At the Addigy SudoTalks parent policy level, I have SentinelOne already enabled, so the point-and-click configuration has already been set up. If you've never done this before, or you're just curious how that works, you can also go to third-party integrations (sorry, Addigy add-ons), then threat detection, and you turn it on. Once you turn it on, you'd go to the policies where you want it configured. I have it at the parent policy, so all my devices inherit it. And then right here, it's enabled. You can map it to multiple sites: policies can map to a single site, or you can have multiple sites mapped to multiple policies for unique differentiation. Once that's set up, it's automatically gonna deploy. We automatically whitelist it, so there's really very little for you to do in terms of managing the deployment. You also get a threats and CVEs function here, so you can see the threats detected by SentinelOne directly in the Addigy console. Here are a couple of EICAR test samples we generated before this webinar, just so you have an example. You can press view details, you can look at the device and go live directly from here, or you can look at it directly in SentinelOne, which just takes you via a direct link to that threat so you can see it.
You can action on it there, or you can action on it directly from Addigy if you wanted to, so you don't have to context-shift to another console; you can just handle it directly from here. This is pretty close to real time, I would say, Joel, right? It's really fast. Just to give you an example, here's a device that has the Banshee malware zipped up, and I'm just gonna open it up. Do this on a VM if you're gonna test stuff like this. Oops, I've got a password here. And if you're wondering how to get malware samples like this, this is from Objective-See, which is Patrick Wardle's site. He has a malware sample collection that you can use to test efficacy on different solutions. Obviously, do it on a VM, not a production machine. Alright, and you see, just like that: threat detected, malicious file, file named Banshee. Let's see if it shows up here already. And it's already showed up here; it's mitigated. In this case, you're gonna want to set up MDR. Just to quickly explain MDR, the acronym stands for managed detection and response: twenty-four-seven SentinelOne security engineers who will help triage these incidents for you. So if you get a threat and you're not sure what to do about it, or maybe you only work eight hours of the day like a normal human being and you're not around twenty-four-seven to respond to these things, this team will triage these reports for you, giving you direct visibility into them, adding notes about why, and then raising only the issues that cannot be resolved. You're gonna give them the ability to respond by killing, quarantining, and remediating (rollback is Windows only), and if it's a false positive, unquarantining and whitelisting. They will perform these actions on your behalf, and if something goes wrong in this process, they will contact you to action on it. So it's a very nice feature.
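If you want to reproduce the EICAR samples shown on the threats page yourself, the industry-standard test string is easy to generate. It's assembled in two halves here so the script itself doesn't trip a scanner; the resulting file will (and should) get flagged and quarantined on a protected endpoint, which is exactly the point.

```shell
# Assemble the standard 68-byte EICAR anti-virus test file. It's inert,
# but AV/EDR vendors detect it by convention, which makes it a safe way
# to verify that detection and quarantine are working end to end.
p1='X5O!P%@AP[4\PZX54(P^)7CC)7}$EIC'
p2='AR-STANDARD-ANTIVIRUS-TEST-FILE!$H+H*'
printf '%s%s' "$p1" "$p2" > eicar.com
wc -c eicar.com   # the canonical test file is exactly 68 bytes
```

On a machine running the agent, expect this file to disappear into quarantine and a threat event to appear in the console, much like the samples in the demo.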
It's a little bit more expensive, but that way a lot of us can sleep at night. The suite is also gonna determine vulnerabilities in applications; Joel talked about that with CVEs and how that works. Basically, these devices are showing they have two applications with CVEs: one with a hundred and twelve, which is Safari (no big surprise there), and then Boot Camp Assistant with one, along with the number of endpoints affected. We can actually drill down into them. All this information is populated through SentinelOne. You can configure the vulnerability scanning in the SentinelOne console; it runs once a week, and I don't think you can configure it any more aggressively than that. Then you can drill down into the details, look at the device and go live, and actually see what the CVEs are here, and we have a link directly to the MITRE and NVD pages for the actual CVE report so you can get a better understanding of it. Alright? So at a high level, that's what you're gonna get directly out of the box, just by turning it on and not doing much. We basically just turned it on here and haven't really configured anything, including the threat services MDR contacts. If you are using MDR, definitely configure the contacts, because they will not start triaging the reports until you have an MDR contact. Okay. We also allow you to configure per policy, so you can configure them, as I mentioned, at the parent level or in child policies for unique customers, etcetera; you can get pretty creative there. We also show the activity logs, so you can see exactly what's going on in real time if people change things. As an example here, I logged in, the Banshee malicious file was detected, and they quarantined that file for us. So that's SentinelOne: EDR and MDR, and soon to be XDR as well, which is yet another acronym.
So: endpoint detection and response, then managed detection and response, and then we're gonna have extended detection and response. If you want that triage team to triage more than just alerts on your MacBooks (say there's something on the network you want them to look at, or events from AWS, Google Cloud, or some infrastructure-as-a-service or platform-as-a-service platform), you can send those events in, and they will help you triage those incidents twenty-four-seven, which is very nice. Okay, let's move on here. So, security: we just talked about SentinelOne EDR and MDR (soon to be XDR) and CVE management. We also have Malwarebytes; I think that was on Joel's slide here. That is a separate integration that's not tied in with the security suite, but you can use it as well. It's ThreatDown, formerly known as Malwarebytes, and it's also very powerful; they also offer EDR and MDR. From what we've seen, ThreatDown will actually flag some suspicious activity from AI tools, especially with things like Claude Code. Those tend to be false positives for the most part. So there are pros and cons to all these solutions, and we can go into them after the fact if we have time, but let's stay on track here. The next step is device compliance. Joel mentioned the devices themselves need to be hardened; we wanna make sure we lock the door, just as best practice. These devices come out of the box designed for personal computing, not enterprise ready, if you will. So if we take a look here at dangerous-salamander, this device has an AI compliance benchmark, it has CIS Level 1 for SudoTalks, and then it has the unmodified CIS Level 1 with fourteen rules that are failing. So we have a custom benchmark that we built where we took out fourteen rules.
The default benchmark has fourteen rules additional to that custom one we built, and those are failing because we're not managing them. But if we look at the arbitrary human VM here, it's an out-of-the-box one. We just built this machine; it's brand new, twenty-six dot four. This is what you'll get directly from Apple out of the box: CIS Level 1 has sixty-two failing controls, and CMMC Level 2 has a hundred and sixty-four failing controls. Meaning, out of the box, these devices are missing half the compliance controls needed to make them, quote unquote, secure and compliant. That's where these compliance benchmarks come into place, because that's a lot of settings to manage and track on your own. And if you've tried to do it without these benchmarks, drop a note in the chat, because I'm sure you've had quite the headache; it's quite the task, and the benchmarks make it really easy to manage. This is also a good way of showing clients, users, executives, or other stakeholders in the organization why this is important: out of the box, I think this is maybe eighty-something or ninety rules, and sixty-two are failing, so there are only about thirty that are in compliance. So let's take a look at what those compliance benchmarks look like here. Alright, we have quite a few. You see CIS Level 1 for macOS twenty-six has ninety-four rules, so with sixty-two failing, thirty-two are compliant out of the box. This is important for showing auditors that you're tracking this for whatever framework you're following, whether that's SOC 2, ISO 27001, CMMC, or some other compliance framework, like ISO 42001 for AI. Right?
A lot of those compliance frameworks, I should say, fall back to these benchmarks, because these benchmarks are what teams like NIST have determined keeps a device secure for enterprise organizations, meaning for business use. The key reason I'm harping on that is because some of these settings, if you look at them, will change the user experience. On a personal device, someone may want to use AirDrop, as an example, and I think some of the rules disable the webcam. Those rules can be disruptive for personal computing tasks, but in terms of the business, and following business process and compliance, they're very important to follow. So what we usually recommend is cloning the benchmark and then deciding what's best for your organization. We go through the rules: these are logging rules, say, and you're not gonna really disrupt anybody with logging. That said, the disable rules you're definitely gonna wanna take a look at, because they're gonna disable some function or feature for the end user, and they may want to use it. I think with CMMC and the DISA STIGs, they get a lot more aggressive, especially with things like disabling iCloud, and the users may wanna use those features. It's up to the business, or the compliance frameworks you're trying to follow, to determine whether an exception can be made for a rule or not. Nevertheless, you can easily apply them all. But, again, we recommend pruning out the ones that will be disruptive for your organization. Or better yet, if you're already managing these settings with existing MDM profiles, you do not wanna duplicate them or cause conflicting settings. I think the ones that are the most problematic are the password settings. The way macOS writes the password settings to a binary, they stick in memory.
If you remove one profile, those settings stick, and you'll have to clear out the password policy settings with a terminal command. Otherwise you may just have a bunch of conflicting rules, and it'll say failed compliance and you won't be sure why. I see some questions on the benchmarks, so let me show them here. For each rule, if you're not sure what it does, you can expand it and look at the actual test: this is what it's checking for, and this is the expected response. Just to show you, this one uses a profile, so it needs MDM; we're gonna install an MDM profile to manage it. The fix is adding this key, and the script that we run checks for that key on the device via the MDM profile. If you wanna know what that rule is doing, the rule description will say why AirDrop must be disabled and what the security context is: to prevent file transfers from unauthorized devices, since it's a way of exfiltrating data. Now, let's say you wanna show an auditor all the stuff you're managing, or show a client all the stuff you can manage. You can actually download these specs: there's a PDF for each one of these benchmarks, CMMC and so on. I'll pull this up and show you that PDF right now. You can see it's quite a long PDF; let me see how many pages, about a hundred pages, detailing every single rule, why those rules are being enforced, and what they're doing. Better yet, it will show references to where they align with other frameworks: NIST 800-53 r5, the CIS benchmark, the CIS Controls v8, CCE. So this is where they map to in other frameworks, which is very important. Alright. So that is how you look at the rules and what you can show.
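On that stuck-password-policy cleanup mentioned a moment ago: a minimal sketch of the fix is below. `pwpolicy` is macOS-only and needs admin rights on the affected Mac, so the guard here just prints a note on other platforms.

```shell
# Clear password-policy rules that linger in the local account database
# after their MDM profile is removed, then confirm nothing is left.
# pwpolicy exists only on macOS, so note that instead of erroring out.
{
  if command -v pwpolicy >/dev/null 2>&1; then
    sudo pwpolicy -clearaccountpolicies   # wipe the stuck global policy
    pwpolicy -getaccountpolicies          # verify: should report none set
  else
    echo "pwpolicy not found; run this on the affected Mac"
  fi
} | tee pwpolicy_check.txt
```

After clearing, re-apply whichever single source of truth (benchmark or MDM profile) you actually want managing passwords, so the conflict doesn't come back.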
Once you set it up and deploy it, you're gonna wanna assign it to a policy like we did here. You can assign monitor-only, or you can assign the full remediation benchmark. Just know that when you assign the full remediation benchmark, it is gonna go do those things, and you wanna at least give the team or company notice before you deploy, because it will change the user experience for some people, depending on what you're deploying. Alright. If you look at it from a high level, there's a compliance status and go live view, which will show you a breakdown per device. You can also go to the devices page and see whether the device is compliant at a holistic level, and you can click that compliance status to see the same view we just saw in go live; we expose that view in multiple places. You can also give your auditors an export of these reports in CSV format. That's usually what they're looking for: the compliance status for these devices, whether they're compliant or not, device name, serial number, agent ID. There are also a few other exports here, with each benchmark broken down by failed rules, the entire benchmark breakdown, or the full report. These are all CSV formats, so you can hand them over to an auditor: deploy the framework, make sure everybody's compliant, export the CSV, give the auditors the information. You can also build your own custom dashboards, like this. At this point, we have zero compliant devices; we just set up this dashboard, and it's probably gonna take twenty-four hours to populate all the information, but we do have zero compliant devices at the moment. If we show data here, we have the two devices that are noncompliant, and we can see them here.
Again, you can export this data as CSV from this page as well if you need to show auditors. You can also get a historical graph over time, which we don't have here just because we don't have historical data yet, but you can put a specific date range on those values. Okay. And I think the extensible part here is that I built an AI compliance benchmark. We put up a poll previously on how you're managing AI sprawl: everybody has their own AI tools, people are downloading AI tools, and as a company policy, are you blocking them, and so on. It's a big endeavor, and we are doing research on an AI compliance benchmark that we can provide out of the product, but you can do it today. Here's one that we built with Claude AI compliance and OpenAI compliance. Basically, we built a device fact asking: are they using Claude? And if they are using Claude, whether it's the Claude app or Claude Code, do they have the Claude managed settings? So basically, if you have a team or enterprise account in Claude, you can deploy a managed settings JSON file that restricts Claude from being able to do things it maybe shouldn't do. What we mean by that is, by default, Claude can do anything on your device: delete files, change permissions, run tasks. With a managed settings JSON file, you can restrict that. I'll just show you what we did here instead of talking about it. We basically built these two rules, and if you look at them: we want Claude compliance to be true, and if not, we run the Claude Code compliance settings remediation. What that does is it creates our managed settings JSON file on the device if we detect Claude Code is installed in some capacity, whether it's the binary or the app itself. And then OpenAI compliance is similar.
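To make that Claude remediation concrete, here's a sketch of the kind of managed settings file it could drop. Treat the deploy path and the permission-rule syntax as assumptions to verify against Anthropic's current managed-settings documentation, and the deny patterns as illustrative, not a vetted policy.

```shell
# Write a sample Claude Code managed-settings file. On a managed Mac this
# would go to /Library/Application Support/ClaudeCode/managed-settings.json
# (path per Anthropic's docs; verify for your version). We write to the
# working directory here purely for illustration.
cat <<'EOF' > managed-settings.json
{
  "permissions": {
    "deny": [
      "Bash(sudo *)",
      "Bash(rm -rf *)",
      "Bash(chmod *)",
      "Bash(shutdown *)"
    ],
    "ask": [
      "Bash(curl *)"
    ]
  }
}
EOF
# Sanity-check the JSON before pushing it out via MDM.
python3 -m json.tool managed-settings.json > /dev/null && echo "valid JSON"
```

Pairing this with a device fact that checks the file exists and matches the expected content is essentially what the custom compliance rule in the demo does.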
We're just not doing any remediation on it at the moment, because if we detect ChatGPT or OpenAI or some iteration of those installed, what do we want to do? Do we wanna delete it because maybe it's not our approved application, or do we wanna prompt the user saying, no, hey, you should not be using this? We can get as creative or as aggressive as we want with the custom benchmarks and rules. I don't know if we have this question in the polls, guys, but I see Ace just asked it: if you're interested in an AI compliance benchmark of this nature, to track and block or remediate or apply some configuration to restrict what AI tools can do, please let us know. We're always interested in what will be of value to you all in the organizations where you're managing AI. So I'm just gonna take a second there. Joel, is there anything else I'm missing that you wanna add to that AI benchmark conversation? Yeah, I guess I could answer a question live. Someone asked: are these AI detectors gonna be something that Addigy releases? There's nothing on the roadmap today; it's something we're exploring and are very interested in. We also wanna know if this is something that you're interested in. If you are, please let us know at product at Addigy dot com. I'd love to hear your feedback around this, and I'd love to see what you're doing today. But yeah, it's not on the roadmap, but it is something we're exploring. Alright, and then there was a question there, sorry to interrupt: can the AI compliance benchmark be made public? I guess we can look at what we can share afterwards and send that over; I know we always send a follow-up email after webinars. I did see that Eddie posted in the chat some documentation around how you can start pushing out managed settings for Claude Code, so that could be a good start if you're using that in your organization. Yeah.
No, definitely. The detectors for sure; I think the remediation is going to vary per organization, so we’ll put some documentation together on it. Here’s an example of the JSON file that we’re creating: we’re basically deny-listing things like chmod, shutdown, reboot, removing the root directory, and so on. But you can also have it prompt. Instead of deny, there’s also prompt and allow, so if you don’t want to deny outright, you can prompt the user first. A lot of the time Claude will do things on its own, and that’s kind of the intention, so this puts a gate on Claude’s ability to act on its own, especially if it gets poisoned or does something we didn’t expect. There are also some Claude controls at the team or organization level that let you prevent it from pulling in whatever repos and things like that. Okay, so where were we? We talked about device compliance and the compliance dashboard, and then there’s conditional access. I think we even had a question on conditional access that the team answered before I could really read it; I think it was in the chat. Yeah, Fred asked: one of my hesitations with the compliance benchmarks and enforcing compliance is that we’re using Intune, which is Microsoft conditional access, and even when a single benchmark rule isn’t reporting back correctly, meaning it’s a false positive, or it’s failing but isn’t really an issue, or it isn’t being remediated, it can trigger the conditional access policy and restrict someone who might actually be compliant. That is definitely something that has to be considered in the benchmark rules. Right?
For those who aren’t Microsoft users or just aren’t familiar with it, conditional access is how this whole stack reinforces the zero-trust nature of security. Zero trust was a big buzzword for a long time, but basically, to Joel’s point, attackers want credentials so they can log in to your Microsoft environment. If you’re paying attention to the news, Stryker and maybe another organization got attacked: someone breached their Intune and Microsoft 365 admin credentials with phishing or something similar, logged in, executed I don’t know how many thousands of remote wipes, deleted their Active Directory, and wreaked all sorts of havoc in Microsoft. Conditional access basically says there are indicators or signals on your machine that confirm you are who you say you are; we’re not just trusting that you are, and thus zero trust. If the device is not compliant, that trust is broken, and you cannot access company resources. So to Fred’s point, you’ve got to be careful when you enable these things; start slowly with the compliance benchmarks in that capacity, because you don’t want to lock out the entire company when you turn this on. Same with the benchmarks themselves: you want to prune out rules that are problematic for you, because you don’t want to deploy the benchmark blindly without really understanding what’s being used in that fleet. Like we mentioned, some rules are safer than others, such as auditing and logging. But by and large you want to be careful deploying this, because it can stop users from doing certain things, and if it trickles down to conditional access, it will stop them from logging in. If there are problematic rules, let us know.
Or to Fred’s point, if a rule is reporting false and it’s not actually an issue, maybe just take it out of the benchmark and write an exception: we’re managing this setting in a different way. That can be done too. For example, you could be managing your password settings differently. If you’re using Addigy Identity and tying it into Microsoft, your Addigy Identity password is your Microsoft password, so those password settings and rules should be coming from Microsoft. They shouldn’t necessarily come from the compliance framework; they should align to some capacity, but macOS password settings don’t perfectly align with what Microsoft may have, which causes a conflict and the device never being compliant. So you really have to understand what you’re doing today. If you’re using multiple systems in the stack, like Addigy Identity, the compliance benchmarks, and Microsoft conditional access, you’ve got to make sure everything is aligned properly. And I think one of the questions we have is, can you do it per policy? I think you can, right, Joel? Yeah, I put some more information on how to access those settings in the chat. It’s under Integrations and Settings, then Integrated Software, and then Intune; you can set it up at the policy level. Awesome. What else are we missing here? I think the only aspect we’re missing is that this will also show your compliance status in Self Service if you want to expose that to your end users. This is also where they can register for Intune conditional access through the Company Portal if you’re using Microsoft; that’s where it would appear as well. So they can actually see it in real time and understand whether they’re compliant and which rules they’re failing.
And, yeah, I think that covers most of the topics we wanted to get through today. Again, if they’re interested, Joel, what’s the best way to reach out about an AI compliance benchmark? I would say product at Addigy dot com is a great place to go. Yep. Okay, I’m going to put that in the chat for everybody. If you’re interested in a personalized walkthrough of this stuff, fill out that survey. And some of the other identity providers have similar capabilities; I think Google has Context-Aware Access, and there are ways to do it, but we don’t have a native integration yet. Again, if you’re interested in things like that, please contact product at Addigy dot com. Alright, well, thank you, Ponce, for taking us through that. Definitely a lot of information. If you want to take it slow and have somebody walk you through the specific things you’re interested in, that’s why we put out that poll; we’d love to hear from you. I did want to give you a sneak peek of the things I thought were most relevant to you here, so thank you for joining. There are two features coming out later this year that will definitely be helpful. We talked about how fast these AI agents are able to exploit CVEs: it was something like one dollar of compute, and they do it in fifteen minutes, and that was using some of the older models like GPT-4. I can only imagine it’s a lot more dangerous with the latest models like Opus. So what we’re releasing is CVE auto-remediation. As soon as those CVEs get detected, we’ll be able to patch that third-party software and those systems right away, in less than fifteen minutes. We really don’t want to give those AI agents a chance to compromise those devices. And this will not be dependent on SentinelOne, so all of you should have access to this feature.
If this is something you’re interested in and you want to talk more about CVE auto-remediation specifically, reach out to product at Addigy dot com. Our product manager, Selena, will be happy to talk to you and walk through the use cases we’re thinking about. So if you want to influence this and have a say in how it rolls out, please reach out. The second component here is the long-awaited iOS conditional access. We have conditional access for macOS today, and we want to extend that to iPhones and iPads as well, so that’s something you’ll also see roll out later this year. That’s a great point, Joel. I didn’t even talk about the benchmarks for iOS, but I think everybody understands how this stuff works. Yep. Well, it seems like we have a little bit of time left, so I’ll open it up to Q&A. I know we’ve been answering questions as we go; let me bring that up and see what we’ve got. Ponce, if you want to answer any questions along the way, feel free to do so. So Ross had a question: is it possible to demo a rule in a benchmark that has configurable options, like x minutes until screen lock? Let me see; I’m not sure if I understand the question. So, session lock after the screen saver has started: this is the ask-for-password rule with a delay of five. In the context of the benchmarks, these rules come predefined by the NIST team; there are a couple of other people who work on it too, but that repo is public. If you ever want to see it, you can find it, and I’ll put it in the chat here. This is where that repo comes from, and the rules are predefined to align with the frameworks. That said, they aren’t meant to be modified, and we don’t allow them to be modified. You can clone them and remove them from your benchmark if you don’t want to enforce a rule.
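If you do clone out a rule like the screen-lock one, the replacement can be an ordinary configuration profile. Below is a sketch of the relevant payload fragment. The key names come from Apple’s `com.apple.screensaver` payload; the values are examples, and the surrounding PayloadContent/PayloadIdentifier scaffolding of a full mobileconfig is omitted here:

```xml
<!-- Example com.apple.screensaver payload fragment: require a password
     five seconds after the screen saver starts. Values are illustrative;
     deploy through your normal MDM profile workflow. -->
<dict>
    <key>PayloadType</key>
    <string>com.apple.screensaver</string>
    <key>askForPassword</key>
    <true/>
    <key>askForPasswordDelay</key>
    <integer>5</integer>
</dict>
```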
Yeah. Okay, so Ross is saying that you can’t customize the numbers in the rule. That is right; these aren’t made to be customizable, and most of them aren’t. Joel, I think we have some work planned to customize some of the messaging in them, but the numbers themselves, what those values should be, are dictated by that team and by how the benchmarks map to the compliance frameworks. So if you want, you can remove a rule and add your own; in this case, I think it’s a password profile, so just deploy your own password profile. Yep. And just to add on to that, we are looking at having some sort of variable support for some of these benchmarks. There are other things too, like the login window message; I think it’s a static login window message today, and I know people want to be able to customize those as well, so it is something we’re looking into. There’s a question about Okta conditional access based on Addigy’s compliance policies. I don’t believe we have a built-in integration with Okta’s conditional access, but all the compliance data is available through the API, so you could always pull that. You can pull it through the device audits, and we also have independent compliance endpoints where you can pull the individual controls and their statuses. I’m answering some questions here. Someone earlier asked about a remediation with SentinelOne not being available on their Mac; remediation should be available, let me check. There are a lot of questions around CrowdStrike and whether that’s a planned integration. It’s not something we have on the roadmap today, but it does seem very popular, and we’re happy to look into it; that seems like a great addition. And I’ll go through some of the questions we answered here too, Ponce. Someone asked, can we bring our own SentinelOne, or does it need to be purchased through Addigy?
Currently, all the integrated features of SentinelOne are through Addigy. We are exploring ways for you to bring your own SentinelOne account. We can always help you migrate if you need to; we do have a process for that, but yes, we’re still working on letting you bring your own account entirely. Let me see if there’s anything else. Yeah, some of the settings, like remediate, might not be available on Mac at this time. I think we flag somewhere that certain features are Windows-only; I forget exactly where. I know the rollback feature, Ponce? Yeah, rollback is only for Windows devices; that’s just not supported on Apple devices. SentinelOne is usually pretty good at flagging that for you if you jump into the portal. If you set up SentinelOne through Addigy, you also have access to the full SentinelOne platform, so you can always jump in there and look at their dashboards as well. Yeah, I think we got a question on how long it takes their MDR team to triage. I’m not sure exactly on the SLA; let me see if I can find that. What other questions have we got, Joel? I think those were the open ones; I was checking whether we answered the others here. I think that’s about it. Are there any other questions anybody wants to ask before we end? Alright, well, if we don’t have any more questions, I just want to close by thanking everybody for joining. If you want any more of the information we talked about today, like those stats on the different threats AI is introducing to our environments, you’re going to get the recording; we send that out to everybody registered. So if you joined late, you’ll be able to see the full presentation. And again, if you want a personalized demo to talk specifically about the things you care about, feel free to schedule one.
You can reach out to sales at Addigy, or you can just sign up for a demo on the website. And with that, we’ll end here. Thank you, everybody, for joining, and we will see you at the next SudoTalks. Again, if you have any questions, product at Addigy dot com; I read every single email that comes in there, so I’m looking forward to hearing from you. Awesome. Thank you, Joel. And here are the service level objectives; I sent them in the chat. That’s the last remaining question we had that didn’t get answered. So if you didn’t get a response on a threat detection within these time frames, let us know, and we can talk to them about what happened. Alright. Thank you, Joel. Take care. Bye-bye.
Want to go deeper? Explore the details below.
Apple devices are no longer considered inherently safe in enterprise environments. While Apple ships devices with built-in security mechanisms — including SIP, Gatekeeper, TCC, and XProtect — these protections are designed for personal computing, not enterprise-grade threat environments.
The threat landscape has shifted significantly:
- macOS hacking tools are now commercially available and easy to access
- Hackers are using valid developer IDs to notarize malicious applications, bypassing Gatekeeper entirely
- AI has dramatically reduced the cost, skill, and time required to launch sophisticated attacks
- As Apple devices gain more enterprise adoption, they are increasingly becoming a primary target
The key insight: hardening is no longer optional. Out of the box, a brand new macOS device is not enterprise-ready — and the gap between default settings and compliance requirements is larger than most IT teams realize.
These figures reflect the current state of the macOS and AI-driven threat landscape as shared during this webinar:
Threat growth
- 101% increase in macOS info stealers in a single quarter at the end of 2024
- 9 zero-day vulnerabilities exploited in the wild on Apple devices in the past year
- 1,210% increase in AI-powered scams in 2025 alone
AI-accelerated attacks
- CVEs can now be exploited for $1 of compute within 15 minutes — without any prior knowledge of the vulnerability (UC Santa Barbara research)
- Phishing campaigns that previously took 16 hours to build now take 5 minutes with AI tools (IBM)
- AI-powered spear phishing campaigns achieve a 54% click-through rate, compared to 12% for traditional phishing
- 82% of all phishing emails now use AI to craft content
Cost of attack tools available today
- Phishing-as-a-service tools that bypass MFA at scale cost as little as $120
- macOS info stealers like Atomic Stealer are available for $1,000/month — with customer support included
- AI coding tools can be accessed for as low as $20/month and have been used to breach government systems at scale
Compliance gaps on new Apple devices
- A brand new macOS 26 device fails 62 out of 94 CIS Level 1 compliance controls out of the box
- The same device fails 164 CMMC Level 2 controls — meaning over half of the required compliance controls are missing from day one
SentinelOne EDR & MDR Addigy’s Security Suite includes a native SentinelOne integration that deploys with a single click at the policy level. Unlike static signature-based tools, SentinelOne uses behavioral detection — meaning it can identify and flag polymorphic malware that rewrites itself every few seconds. The MDR component provides 24/7 monitoring and triage by SentinelOne security engineers, so threats are actioned even when your team is offline. All detections are visible directly inside the Addigy console — no switching between platforms.
Compliance Benchmarks Addigy provides CIS, NIST, CMMC, and DISA compliance frameworks out of the box. Within minutes, admins can deploy these benchmarks across their entire fleet. Each rule is documented with a description of what it checks, why it matters, and how it maps to other frameworks. Compliance status is exportable as CSV for auditors and visible in real-time custom dashboards.
Conditional Access Addigy integrates with Microsoft Intune for conditional access — ensuring only managed, compliant devices can access company resources. Even if an attacker successfully steals valid credentials, they cannot log in from an unmanaged device. Conditional access for iOS devices is coming later this year.
CVE Auto-Remediation — Coming Soon Given that AI agents can exploit a CVE within 15 minutes of it becoming public, Addigy is releasing CVE auto-remediation — automatically patching third-party software and systems as soon as vulnerabilities are detected. This feature will be available independently of SentinelOne, making it accessible to all Addigy customers.
AI Compliance Benchmarks Addigy has built custom compliance benchmarks to detect and manage AI tool usage on devices — including Claude Code and ChatGPT. These benchmarks can detect whether AI tools are installed, enforce managed settings to restrict what they can do on the device, or prompt the end user. An out-of-the-box AI compliance benchmark is currently being explored for a future product release.
Frequently Asked Questions
Are Apple devices really being targeted more than before?
Yes — significantly. At the end of 2024, macOS info stealers increased by 101% in a single quarter. As Apple devices gain more enterprise adoption, they are increasingly targeted by attackers. Commercial macOS hacking tools are now readily available online, lowering the barrier for anyone to launch an attack.
What is Apple device hardening and why does it matter for MSPs?
Device hardening is the process of reducing a device’s attack surface by applying security configurations and compliance controls. Out of the box, a new Apple device fails over 60 CIS Level 1 compliance controls — meaning it is not enterprise-ready without additional configuration. For MSPs managing Apple fleets across multiple clients, having a scalable way to harden and monitor those devices is essential.
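To make the idea of a compliance control concrete, here is a minimal sketch of one CIS-style check: whether the macOS application firewall is enabled. This is an illustration, not Addigy's implementation; the audit logic is separated from the system read so it can run anywhere, and on a real Mac you would feed it the output of `defaults read /Library/Preferences/com.apple.alf globalstate`:

```shell
# check_firewall: classify the application-firewall state value.
#   0 = off (the out-of-the-box default), 1 = on, 2 = block all incoming.
check_firewall() {
  case "$1" in
    1|2) echo "pass" ;;
    *)   echo "fail" ;;
  esac
}

# On a managed Mac, a benchmark rule would do something like:
#   check_firewall "$(defaults read /Library/Preferences/com.apple.alf globalstate)"
```

A full benchmark is simply dozens of checks of this shape, each paired with a documented remediation (a profile or script) that moves the device from "fail" to "pass".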
How does AI change the threat landscape for IT admins?
AI reduces the cost, time, and expertise required to launch sophisticated attacks. Phishing campaigns that once took 16 hours to build now take 5 minutes. CVEs can be exploited within 15 minutes for $1 of compute. Deepfakes can clone an executive’s voice or likeness from just 3 seconds of a recording. IT teams need to be proactive — hardened devices and automated detection are no longer optional.
What compliance frameworks does Addigy support out of the box?
Addigy provides CIS, NIST, CMMC, and DISA compliance benchmarks natively. Each benchmark includes pre-built rules mapped to the corresponding framework controls. Admins can deploy the full benchmark, clone and customize it, or assign it in monitor-only mode before enforcing remediation.
Can Addigy detect and manage AI tools installed on devices?
Yes — Addigy has built custom compliance benchmarks to detect whether AI tools like Claude Code and ChatGPT are installed on managed devices. From there, admins can enforce managed settings to restrict what those tools can do, prompt end users, or remove unauthorized applications entirely. An out-of-the-box AI compliance benchmark is being explored for a future release.
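Detection of this kind can be approximated in a few lines of shell. The sketch below only reports what it finds, leaving the remediation decision to policy; the application names are illustrative examples, not Addigy's actual benchmark logic:

```shell
# detect_ai_apps: list known AI desktop apps present under a given
# applications directory (e.g. /Applications). The names checked here
# are placeholders to adapt to your organization's approved-app list.
detect_ai_apps() {
  apps_dir="$1"
  for name in "ChatGPT.app" "Claude.app"; do
    if [ -d "$apps_dir/$name" ]; then
      echo "$name"
    fi
  done
}

# Report-only usage on a managed Mac:
#   detect_ai_apps /Applications
```

A custom benchmark rule could surface this output as a compliance status first, then layer on remediation (prompting the user or removing the app) once the organization has decided on a policy.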
How do I get started with the Addigy Security Suite?
The Security Suite can be enabled directly from the Addigy platform under Add-ons → Threat Detection. SentinelOne deploys automatically to any policy it is assigned to — no manual device-by-device configuration required. A 14-day free trial is available at addigy.com/free-trial and personalized demos can be scheduled at addigy.com/live-product-demo.