With the cost of insider risk at an all-time high, cyber leaders are failing to direct their budgets towards the problem, spending less than 10% of their security budgets on measures that could address a threat now costing an average of $16.2m (£13.25m) every year.
This is the eighth edition of the annual report, which covers organisations in EMEA, North America and APAC. For the 2023 edition, Ponemon and DTEX spoke to 1,075 IT and cyber security professionals at 309 organisations that had experienced a total of 7,343 insider incidents among them, an average of 24 per organisation, with each taking on average 86 days to contain, up from 85 last year.
The report defines insider risk based on MITRE's Human Focused Insider Threat Types as either malicious or non-malicious. A malicious insider is someone who proactively seeks to do harm, through espionage, IP theft, unauthorised data disclosure, sabotage, fraud or workplace violence.
A non-malicious insider is someone who causes harm through negligence, carelessness or inattentiveness; through a genuine mistake; or by being outsmarted by a threat actor via a cyber attack or social engineering.
The report found that non-malicious insiders account for 75% of incidents, but while malicious insider incidents were rarer, they cost more, up to $701,500 per incident.
The largest costs associated with breaches arising from insider actions centred on containment and remediation, at $179,209 and $125,221 per incident respectively.
But in spite of the growing cost and frequency of insider breaches, 88% of respondents are spending less than 10% of their security budgets on the issue, an average of just 8.2%. The remaining 91.8% of security budgets is being directed towards external threats, despite more than half of respondents citing social engineering as the leading cause of outside attacks.
“The upward trends associated with incident costs, frequency and time to contain demonstrate that current approaches to insider risk are simply not working,” wrote the report’s authors. “In fact, the numbers clearly show we are going backwards.
“Funding is being inadvertently misdirected due in part to a widespread misunderstanding of insider risks and how they manifest based on early warning behaviours. A whole-of-industry approach is required to educate and find common ground on how we define and discuss insider risks with enterprise and government entities.
“On a positive note, more and more organisations are building insider risk programs and seeking budget and executive buy-in to fund and champion them,” they added.
“Our research echoes similar findings from other leading analysts and research organisations, notably Forrester, Gartner, MITRE Corporation and Verizon. The human is unquestionably at the centre of most data breaches – and increasingly, that human risk is an insider, right under our noses. By homing in on insider risk management, organisations have a powerful opportunity to proactively identify and mitigate insider risks well before a costly incident occurs.”
Change is coming
However, the report did find that this needed change may be coming, with almost 60% of respondents acknowledging that their current spending was inadequate and 46% actively planning to spend more on proactively addressing insider risk in 2024.
In terms of technology spending to address the issue, respondents are exploring purchases of user behaviour-based tools, considered essential or very important in detecting insider risk by 64%, and artificial intelligence and machine learning (AI and ML) options, likewise considered essential or very important by 64% in preventing, investigating, escalating, scaling and remediating insider incidents.
Meanwhile, 61% of respondents said automation technologies were essential or very important in managing insider risk.
“It is encouraging that most organisations consider AI and ML ‘essential’ to preventing insider incidents,” wrote the report’s authors. “Understanding why people become insider risks means understanding human behaviour and why people do the things they do – and AI can help achieve this in spades.
“Using AI and ML, analysts can capture early warning signals and apply analysis quickly, easily and at scale. In the case of non-malicious insiders, AI can also help drive automated education and awareness communications to provide teachable moments to risky employees in near real time.
“Given non-malicious insiders are behind most incidents, this is a powerful way for organisations to proactively exercise proportionality when resolving insider risks in a way that is both cost effective and fair,” they added.
Respondents indicated they would judge the success of their insider risk efforts and programmes chiefly by a reduction in incident volumes (50%), followed by assessment of insider risk (40%) and the length of time taken to resolve incidents (38%).