Microsoft last week announced that, just as it did with .NET years ago, it will be putting generative AI into everything, including security.
Back in the .NET days, I joked that Microsoft was so over the top with .NET that the bathrooms were renamed Men.net and Women.net. Many of those efforts didn't make a ton of sense. However, given that generative AI impacts most of what Microsoft does (except the bathrooms), it makes more sense for the company to do this now than it did then.
Let’s explore how generative AI will impact security.
The Biggest Security Exposure … Is You
We're often overly excited about all the technology we have to mitigate breaches. But after layer upon layer of security software to identify and correct breaches, the one constant is that the most common cause of a breach is a person. Ransomware attacks, identity theft, data theft, and any number of additional problems mostly trace back to someone who was tricked into giving out information that was then used to do harm.
The industry talks about regular employee training, security drills and audits, and extreme penalties, all of which have had minimal impact on the problem because companies don’t consistently and effectively practice any of them. I include security companies, particularly their executives, in that group, who often seem to think the rules they helped create don’t apply to them.
Back when I was doing security audits at a company known for security, I audited a CEO who often bragged that he knew more about security than anyone else in my division. I was able to access his most sensitive information, kept in a locked safe, within 10 minutes. Not by using some super-secret James Bond hacking technology, but by looking in his secretary's unlocked drawer, where all the keys were stored.
Human error is the most significant and prevalent cause of some of our most painful security problems, and it has been this way for decades.
HP PC Security Solutions
I'm writing this at HP's Amplify partner event, where HP just launched its security solution. HP's Wolf Security is arguably the best PC security solution on the market.
HP highlighted that the security business generates $8 trillion in revenue, which is a fraction of the money it protects. Yet all this technology is worthless if you can’t prevent an employee from doing something stupid.
The HP technology includes VMs, BIOS protection, and some of the most impressive security solutions I've seen, but that only addresses someone who accidentally misplaces or loses a PC. It doesn't deal with an employee who voluntarily or accidentally breaches their own protection.
One exception is HP Sure Click, which helps prevent a user from clicking on a link they shouldn't. Sure Click isolates risky actions in a virtual environment so that any damage stays contained within an isolated VM and cannot escape to cause harm. This effort goes a long way. However, while HP does more than most, it still isn't enough.
Examples of Why We Need AI Security
One of the biggest problems I’ve ever covered was a CIO who was fired via email. He was so angry that he used his credentials to reformat all his ex-company’s hard drives, effectively putting them out of business. Yes, he was sued into poverty and went to jail, but that didn’t help the company that he shut down.
In another massive breach, an attacker used purloined credentials with uncontested access to a company’s HR system and crafted a global email that went to every non-management employee telling them the firm had been sold and that to get their final checks, employees needed to provide their banking information.
Nearly every employee gave their information before someone thought to ask a manager about it. By the time the effort was shut down, the attacking servers were offline, and the thieves were long gone.
These examples showcase successful exploits that would have bypassed HP’s Wolf Security. One because it was a physical breach with no laptop involved and the other because of a phishing attack that resulted in access to, and compromise of, an HR system that Wolf Security wouldn’t protect.
I’m not picking on HP here because neither HP nor any other tech company can address an employee-sourced problem effectively yet. But that “yet” is where AI potentially comes in.
AI to the Rescue: BlackBerry to Microsoft
Microsoft’s Security Copilot is initially focused on providing security professionals with information on current and potential breaches in real time so they can be rapidly mitigated. It should help address the ongoing problem of security being understaffed and under-resourced. This is the initial focus of most of these generative AI efforts: to increase productivity and reduce employee burdens.
However, the real promise for generative AI is that it can learn from employee behavior, and by learning from that behavior, it can mitigate it. At scale, the one company that has aggressively moved against this employee exposure with older AI technology is BlackBerry’s Cylance unit.
BlackBerry's technology monitors employees and will move to block anyone who seems to be behaving unusually, like a service professional who suddenly starts downloading the firm's employee or product development files, an indication that an attacker is using their credentials.
Generative AI can go much further and potentially move more quickly. Using huge models, it can predict future behavior, identify employees who regularly violate company policies (and are therefore more likely to act improperly), and recommend remediation ranging from recurring automated training to termination for the employees most likely to cause a breach, eliminating potential problems before an event occurs.
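The baseline-monitoring idea described above can be illustrated with a deliberately simplified sketch. This is not BlackBerry's or Microsoft's actual method; it is a toy z-score test, using only the Python standard library, that flags a user whose daily download count deviates sharply from their own history. The function name, threshold, and sample data are all hypothetical:

```python
import statistics

def flag_anomalous_downloads(history, today, threshold=3.0):
    """Flag a user whose download count today deviates sharply from
    their own historical baseline, using a simple z-score test.
    `history` is a list of that user's past daily download counts."""
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history)
    if stdev == 0:
        # No variation in history: any change at all is suspicious
        return today != mean
    z = (today - mean) / stdev
    return abs(z) > threshold

# Hypothetical example: a support engineer who normally downloads
# a handful of files a day, then suddenly pulls hundreds.
baseline = [3, 5, 4, 2, 6, 4, 3, 5]
print(flag_anomalous_downloads(baseline, 4))    # typical day -> False
print(flag_anomalous_downloads(baseline, 250))  # bulk download -> True
```

Real systems model far more signals (time of day, file sensitivity, peer-group behavior) with learned models rather than a single statistic, but the principle is the same: compare each action against an established baseline and intervene on outliers.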
Now, before you get upset about the “termination” part, realize that if these employees do cause a breach, the remedies include not only termination but also financial costs to the employee or even jail time based on the nature and size of the breach. So, even for the terminated employee, this remedy is better than what otherwise would have likely occurred.
Wrapping Up: Generative AI and the Future of Security
AI is being brought to security, starting with BlackBerry and continuing with Microsoft's most recent effort. The result is the potential, at last, to eliminate our most significant security exposure: people. As generative AI and other future forms of AI advance into security, we will finally have the opportunity to mitigate the one security problem that continues to bite us in the butt: ourselves.
As with other technologies, I expect IT will be slow to adopt these tools, and the resulting avoidable breaches will forever change many of our career paths and much of our financial security.
AI will help keep not only our companies safe but also those we love, including ourselves. Note that the individuals who need this protection most are our aging population, whom bad actors often trick into giving up their retirement funds through scams like these.
The only question is whether AI security will be deployed before this same technology is used against us. AI is neither good nor evil; it's a tool. Sadly, in cybersecurity, new technologies are too often used against us more quickly than for us.