Yearly Archives: 2023

Cyber Insecurity in Healthcare: The Cost and Impact on Patient Safety and Care 2023

A strong cybersecurity posture in healthcare organizations is important not only to safeguard sensitive patient information but also to deliver the best possible medical care. This study was conducted to determine whether the healthcare industry is making progress in achieving these two objectives.

With sponsorship from Proofpoint, Ponemon Institute surveyed 653 IT and IT security practitioners in healthcare organizations who are responsible for cybersecurity activities such as setting IT security priorities, managing budgets and selecting vendors and contractors.

The report, “Cyber Insecurity in Healthcare: The Cost and Impact on Patient Safety and Care 2023,” found that 88% of surveyed organizations experienced at least one cyber attack in the past 12 months, with an average of 40 attacks per affected organization. The average total cost of a cyber attack experienced by healthcare organizations was $4.99 million, a 13% increase from the previous year.

Among the organizations that suffered the four most common types of attacks—cloud compromise, ransomware, supply chain, and business email compromise (BEC)—an average of 66% reported disruption to patient care. Specifically, 57% reported poor patient outcomes due to delays in procedures and tests, 50% saw an increase in medical procedure complications, and 23% experienced increased patient mortality rates. These numbers reflect last year’s findings, indicating that healthcare organizations have made little progress in mitigating the risks of cyber attacks on patient safety and wellbeing.

According to the research, 88 percent of organizations surveyed experienced at least one cyberattack in the past 12 months. For organizations in that group, the average number of cyberattacks was 40. We asked respondents to estimate the single most expensive cyberattack experienced in the past 12 months from a range of less than $10,000 to more than $25 million. Based on the responses, the average total cost for the most expensive cyberattack was $4,991,500, a 13 percent increase over last year. This included all direct cash outlays, direct labor expenditures, indirect labor costs, overhead costs and lost business opportunities.

At an average cost of $1.3 million, disruption to normal healthcare operations because of system availability problems was the most expensive consequence of the cyberattack, an increase from an average of $1 million in 2022. Users’ idle time and lost productivity because of downtime or system performance delays cost an average of $1.1 million, the same as in 2022. The cost of the time required to ensure the impact on patient care was corrected increased from an average of $664,350 in 2022 to $1 million in 2023.

“While the healthcare sector remains highly vulnerable to cybersecurity attacks, I’m encouraged that industry executives understand how a cyber event can adversely impact patient care. I’m also more optimistic that significant progress can be made to protect patients from the physical harm that such attacks may cause,” said Ryan Witt, chair, Healthcare Customer Advisory Board at Proofpoint. “Our survey shows that healthcare organizations are already aware of the cyber risks they face. Now they must work together with their industry peers and embrace governmental support to build a stronger cybersecurity posture—and consequently, deliver the best patient care possible.”

The report analyzes four types of cyberattacks and their impact on healthcare organizations, patient safety and patient care delivery:

Cloud compromise. The most frequent attacks in healthcare are against the cloud, making it the top cybersecurity threat, according to respondents. Seventy-four percent of respondents say their organizations are vulnerable to a cloud compromise. Sixty-three percent say their organizations have experienced at least one cloud compromise. In the past two years, organizations in this group experienced an average of 21 cloud compromises. Sixty-three percent say they are concerned about the threat of a cloud compromise, an increase from 57 percent in 2022.

Business email compromise (BEC)/spoofing phishing. Concerns about BEC attacks have increased significantly. Sixty-two percent of respondents say their organizations are vulnerable to a BEC/spoofing phishing incident, an increase from 46 percent in 2022. In the past two years, the frequency of such attacks increased as well, from an average of four attacks to five.

Ransomware. Ransomware has declined as a top cybersecurity threat. Sixty-four percent of respondents believe their organizations are vulnerable to a ransomware attack. However, concern about ransomware has decreased from 60 percent in 2022 to 48 percent in 2023. In the past two years, organizations that had ransomware attacks (54 percent of respondents) experienced an average of four such attacks, an increase from three. While fewer organizations paid the ransom (40 percent in 2023 vs. 51 percent in 2022), the average ransom paid increased nearly 30 percent, from $771,905 to $995,450.

Supply chain attacks. Organizations are vulnerable to a supply chain attack, according to 63 percent of respondents. However, only 43 percent say this cyber threat is of concern to their organizations. On average, organizations experienced four supply chain attacks in the past two years.

As in the previous report, an important part of the research is the connection between cyberattacks and patient safety. Following are trends in how cyberattacks have affected patient safety and patient care delivery.

  • It is more likely that a supply chain attack will affect patient care. Sixty-four percent of respondents say their organizations had an attack against their supply chains. Seventy-seven percent of those respondents say it disrupted patient care, an increase from 70 percent in 2022. Patients were primarily impacted by delays in procedures and tests that resulted in poor outcomes such as an increase in the severity of an illness (50 percent) and a longer length of stay (48 percent). Twenty-one percent say there was an increase in mortality rate.
  • A BEC/spoofing attack can disrupt patient care. Fifty-four percent of respondents say their organizations experienced a BEC/spoofing incident. Of these respondents, 69 percent say a BEC/spoofing attack against their organizations disrupted patient care, a slight increase from 67 percent in 2022. And of these 69 percent, 71 percent say the consequences caused delays in procedures and tests that have resulted in poor outcomes while 56 percent say it increased complications from medical procedures.
  • Ransomware attacks can cause delays in patient care. Fifty-four percent of respondents say their organizations experienced a ransomware attack. Sixty-eight percent of respondents say ransomware attacks have a negative impact on patient care. Fifty-nine percent of these respondents say patient care was affected by delays in procedures and tests that resulted in poor outcomes and 48 percent say it resulted in longer lengths of stay, which affects organizations’ ability to care for patients.
  • Cloud compromises are least likely to disrupt patient care. Sixty-three percent of respondents say their organizations experienced a cloud compromise, but less than half (49 percent) say cloud compromises disrupted patient care. Of these respondents, 53 percent say these attacks increased complications from medical procedures and 29 percent say they increased mortality rate. 
  • Data loss or exfiltration disrupts patient care and can increase mortality rates. All organizations in this research had at least one data loss or exfiltration incident involving sensitive and confidential healthcare data in the past two years. On average, organizations experienced 19 such incidents in the past two years and 43 percent of respondents say they impacted patient care. Of these respondents, 46 percent say it increased the mortality rate and 38 percent say it increased complications from medical procedures. 

Other key trends in cyber insecurity

Concerns about threats related to employee behaviors increased significantly. Substantially more organizations are now worried about the security risks created by employee-owned devices (BYOD), an increase from 34 percent of respondents in 2022 to 61 percent in 2023. Concerns about BEC/spoof phishing increased from 46 percent to 62 percent in 2023.

Accidental data loss is the second highest cause of data loss and exfiltration. Accidental data loss can occur in many ways, such as employees misplacing or losing devices that contain sensitive information, or mistakes made when employees are emailing documents with sensitive information. Almost half of respondents (47 percent) say their organizations are very concerned that employees do not understand the sensitivity and confidentiality of documents they share by email.

More progress is needed to reduce the risk of data loss or exfiltration. All healthcare organizations in this research have experienced at least one data loss or exfiltration incident involving sensitive and confidential healthcare data. The average number of such incidents is 19.

Cloud-based user accounts/collaboration tools that enable productivity are most often attacked. Fifty-three percent of respondents say project management tools and video conferencing tools such as Zoom and Skype were attacked at some point.

Organizations continue to deploy a combination of approaches to user access and identity management in the cloud (56 percent of respondents). These include separate identity management interfaces for the cloud and on-premises environments, unified identity management interfaces for both the cloud and on-premises environments, and deployment of single sign-on.

The lack of preparedness to stop BEC/spoof phishing and supply chain attacks puts healthcare organizations and patients at risk. While BEC/spoof phishing is considered a top cybersecurity threat, only 45 percent of respondents say their organizations include steps to prevent and respond to such an attack as part of their cybersecurity strategy. Similarly, only 45 percent say their organizations have documented the steps to prevent and respond to attacks in the supply chain. Malicious insiders are seen as the number one cause of data loss and exfiltration — however, only 32 percent say they are prepared to prevent and respond to the threat.

The primary deterrents to achieving an effective cybersecurity posture are a lack of in-house expertise, staffing and budget. Fifty-eight percent of respondents, an increase from 53 percent in 2022, say their organizations lack in-house expertise and 50 percent say insufficient staffing is a challenge. Those citing insufficient budget increased from 41 percent to 47 percent in 2023.

Security awareness training programs continue to be the primary step taken to reduce the insider risk. Negligent employees pose a significant risk to healthcare organizations. More organizations (65 percent in 2023 vs. 59 percent of respondents in 2022) are taking steps to address the risk of employees’ lack of awareness about cybersecurity threats. Of these respondents, 57 percent say they conduct regular training and awareness programs. Fifty-four percent say their organizations monitor the actions of employees.

The use of identity and access management solutions to reduce phishing and BEC attacks has increased from 56 percent of respondents in 2022 to 65 percent in 2023. The use of Domain-based Message Authentication, Reporting and Conformance (DMARC) increased from 38 percent in 2022 to 43 percent in 2023.
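For readers unfamiliar with DMARC: a domain publishes its email-authentication policy as a DNS TXT record at `_dmarc.<domain>`, using tag/value pairs defined in RFC 7489. The sketch below — not part of the Ponemon report, and using a hypothetical example record — shows how such a record breaks down into its policy fields.

```python
# Minimal sketch of parsing a DMARC policy record (RFC 7489 tag=value
# syntax). The example record and the reporting address are hypothetical.

def parse_dmarc(record: str) -> dict:
    """Split a DMARC TXT record into tag/value pairs."""
    tags = {}
    for part in record.split(";"):
        part = part.strip()
        if "=" in part:
            key, _, value = part.partition("=")
            tags[key.strip()] = value.strip()
    return tags

record = "v=DMARC1; p=reject; rua=mailto:dmarc-reports@example.org; pct=100"
policy = parse_dmarc(record)
print(policy["p"])  # enforcement policy: "reject"
```

The `p` tag is what matters for spoofing defense: `none` only monitors, while `quarantine` and `reject` instruct receivers to act on mail that fails authentication.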

To read the full report, download it from Proofpoint’s website.

Someone made a deepfake of my voice for a scam! (With permission…)

Bob Sullivan

“I need help. Oh my God! I hit a woman with my car,” the fake Bob says. “It was an accident, but I’m in jail and they won’t let me leave unless I come up with $20,000 for bail …Can you help me? Please tell me you can send the money.”

It’s fake, but it sounds stunningly real. For essentially $1, using an online service available to anyone, an expert was able to fake my voice and use it to create telephone-ready audio files that would deceive my mom.  We’ve all heard so much about artificial intelligence – AI – recently. For good reason, there have long been fears that AI-generated deepfake videos of government figures could cause chaos and confusion in an election.  But there might be more reason to fear the use of this technology by criminals who want to create confusion in order to steal money from victims.

Already, there are various reports from around North America claiming that criminals are using AI-enhanced audio-generation tools to clone voices and steal from victims. So far, all we have are isolated anecdotes, but after spending a lot of time looking into this recently, and allowing an expert to make deepfakes out of me, I am convinced that there’s plenty of cause for concern.

Reading these words is one thing; hearing voice clones in action is another. So I hope you’ll listen to a recent episode of The Perfect Scam that I hosted on this for AARP. Professor Jonathan Anderson, an expert in artificial intelligence and computer security at Memorial University in Canada, does a great job of demonstrating the problem — using my voice — and explaining why there is cause for … concern, but perhaps not alarm. His main suggestion: All consumers need to raise their digital literacy and become skeptical of everything they read, everything they see, and everything they hear. It’s not so far-fetched; many people now realize that images can be ‘photoshopped’ to show fake evidence. We all need to extend that skepticism to everything we consume. Easier said than done, however.

Still, I ponder, what kind of future are we building? Also in the episode, AI consultant Chloe Autio offers up some suggestions about how industry, governments, and other policymakers can make better choices now to avoid the darkest version of the future that I’m worried about.

I must admit I am still skeptical that criminals are using deepfakes to any great extent. Still, if you listen to this episode, you’ll hear Phoenix mom Jennifer DeStefano describe a fake child abduction she endured, in which she was quite sure she heard her own child’s voice crying for help.  And you’ll hear about an FBI agent’s warning, and a Federal Trade Commission warning.  As Professor Anderson put it, “based on the technology that is easily, easily available for anyone with a credit card, I could very much believe that that’s something that’s actually going on.”

Preparing for a Safe Post Quantum Computing Future: A Global Study

Sponsored by DigiCert, the purpose of this research is to understand how organizations are addressing the post quantum computing threat and preparing for a safe post quantum computing future. Ponemon Institute surveyed 1,426 IT and IT security practitioners in the United States (605), EMEA (428) and Asia-Pacific (393) who are knowledgeable about their organizations’ approach to post quantum cryptography.

Quantum computing harnesses the laws of quantum mechanics to solve problems too complex for classical computers. However, it also makes cracking today’s encryption much easier, which poses an enormous threat to data security.

That is why 61 percent of respondents say they are very worried about not being prepared to address these security implications. Another threat of significance is that advanced attackers could conduct “harvest now, decrypt later” attacks, in which they collect and store encrypted data with the goal of decrypting the data in the future (74 percent of respondents). Despite these concerns, only 23 percent of respondents say they have a strategy for addressing the security implications of quantum computing.
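To make the threat concrete, the rough rules of thumb in the cryptographic community are that Shor’s algorithm breaks today’s public-key schemes (RSA, elliptic curves) outright, while Grover’s search roughly halves the effective strength of symmetric ciphers. The sketch below is an illustration of those rules of thumb, not material from the study; the security-bit figures are conventional estimates.

```python
# Illustrative sketch: approximate effective security of common primitives
# against a large-scale quantum adversary. Classical bits are conventional
# estimates; Shor's algorithm breaks RSA/ECC, Grover halves symmetric strength.

PRIMITIVES = {
    # name: (classical security bits, broken by Shor?)
    "RSA-2048":  (112, True),
    "ECC P-256": (128, True),
    "AES-128":   (128, False),
    "AES-256":   (256, False),
}

def quantum_security_bits(name: str) -> int:
    classical, shor_breaks = PRIMITIVES[name]
    if shor_breaks:
        return 0             # public-key schemes fall to Shor's algorithm
    return classical // 2    # Grover roughly halves symmetric strength

for name, (classical, _) in PRIMITIVES.items():
    print(f"{name:10s} classical={classical:3d} quantum={quantum_security_bits(name):3d}")
```

This is why “harvest now, decrypt later” matters: RSA- or ECC-protected ciphertext recorded today drops to zero effective security once a sufficiently large quantum computer exists, while AES-256 retains a comfortable margin.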

The following findings illustrate the challenges organizations face as they prepare to have a safe post quantum computing future. 

Security teams must juggle the pressure to keep ahead of cyberattacks targeting their organizations while preparing for a post quantum computing future. Only 50 percent of respondents say their organizations are very effective in mitigating risks, vulnerabilities and attacks across the enterprise. One reason for this lack of effectiveness is that almost all respondents say cyberattacks are becoming more sophisticated, targeted and severe. According to the research, ransomware and credential theft are the top two cyberattacks experienced by organizations in this study.

The clock is ticking on PQC readiness. Forty-one percent of respondents say their organizations have less than five years to be ready. The biggest challenges are not having enough time, money and expertise to be successful. Currently, only 30 percent of respondents say their organizations are allocating budget for PQC readiness. One possible reason for the lack of necessary support is that almost half of respondents (49 percent) say their organization’s leadership is only somewhat aware (26 percent) or not aware (23 percent) of the security implications of quantum computing. Forty-nine percent of respondents are also uncertain about the implications of PQC.

Resources are available to help organizations prepare for a safe post quantum computing future. In the last few years, industry groups such as ANSI X9’s Quantum Risk Study Group and NIST’s post-quantum cryptography project have been initiated to help organizations prepare for PQC. Sixty percent of respondents say they are familiar with these groups. Of these respondents, 30 percent say they are most familiar with ANSI X9’s Quantum Risk Study Group and 28 percent are most familiar with NIST’s industry group.

Many organizations are in the dark about the characteristics and locations of their cryptographic keys. Only slightly more than half of respondents (52 percent) say their organizations are currently taking an inventory of the types of cryptography keys used and their characteristics. Only 39 percent of respondents say they are prioritizing cryptographic assets and only 36 percent of respondents are determining if data and cryptographic assets are located on-premises or in the cloud.
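The inventory work the study describes — cataloging key types, characteristics, and on-premises vs. cloud locations, then prioritizing cryptographic assets — can be pictured with a minimal sketch. This is an assumption about how such an inventory might be structured, not a tool from the report; the asset names and fields are hypothetical.

```python
# Illustrative sketch: a minimal cryptographic-asset inventory that records
# each key's algorithm and location, and flags assets whose algorithms are
# quantum-vulnerable (i.e., breakable by Shor's algorithm) for migration.

from dataclasses import dataclass

QUANTUM_VULNERABLE = {"RSA", "ECDSA", "DSA", "DH"}

@dataclass
class CryptoAsset:
    name: str
    algorithm: str   # e.g. "RSA", "AES"
    key_bits: int
    location: str    # "on-premises" or "cloud"

def flag_quantum_risk(inventory):
    """Return the assets that need migration to post-quantum algorithms."""
    return [a for a in inventory if a.algorithm in QUANTUM_VULNERABLE]

# Hypothetical inventory entries for illustration only.
inventory = [
    CryptoAsset("tls-server-cert",   "RSA",   2048, "cloud"),
    CryptoAsset("db-encryption-key", "AES",   256,  "on-premises"),
    CryptoAsset("code-signing-key",  "ECDSA", 256,  "on-premises"),
]

at_risk = flag_quantum_risk(inventory)
print([a.name for a in at_risk])  # ['tls-server-cert', 'code-signing-key']
```

Even a simple table like this answers the two questions the study says most organizations cannot: what cryptographic keys exist, and which ones must be replaced first.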

Very few organizations have an overall centralized crypto-management strategy applied consistently across the enterprise. Sixty-one percent of respondents say their organizations only have a limited crypto-management strategy that is applied to certain applications or use cases (36 percent) or they do not have a centralized crypto-management strategy (25 percent).

Without an enterprise-wide cryptographic management strategy, organizations are vulnerable to security threats, including those leveraging quantum computing methods. Only 29 percent of respondents say their organizations are very effective in the timely updating of their cryptographic algorithms, parameters, processes and technologies and only 26 percent are confident that their organization will have the necessary cryptographic techniques capable of protecting critical information from quantum threats.

While an accurate inventory of cryptographic keys is an important part of a cryptography management strategy, organizations are overwhelmed keeping up with their increasing use. Sixty-one percent of respondents say their organizations are deploying more cryptographic keys and digital certificates. As a result, 65 percent of respondents say this is increasing the operational burden on their teams and 58 percent of respondents say their organizations do not know exactly how many keys and certificates they have.

The misconfiguration of keys and certificates and difficulty adapting to changes in cryptography prevent a cryptographic management program from being effective. Sixty-two percent of respondents say they are concerned about the ability to adapt to changes in cryptography such as algorithm deprecation and quantum computing. Another 62 percent are concerned about the misconfiguration of keys and certificates. Fifty-six percent are concerned about the increased workload and risk of outages caused by shorter SSL/TLS certificate lifespans.

To secure information assets and the IT infrastructure, organizations need to improve their ability to effectively deploy cryptographic solutions and methods. Most respondents say their organizations do not have a high ability to drive enterprise-wide best practices and policies, detect and respond to certificate/key misuse, remediate algorithm deprecation or breach, and prevent unplanned certificate outages.

Crypto Centers of Excellence (CCOEs) can support organizations’ efforts to achieve a safe post quantum computing future. A CCOE can help improve operational cryptographic processes and increase an organization’s trust environment. CCOEs do require advanced technologies and expertise in cryptography to maintain secure operations and comply with applicable regulations. Most organizations in this research plan on having a CCOE. However, currently only 28 percent of respondents say their organizations have a mature CCOE that provides crypto leadership, research, implementation strategy, ownership and best practices. Another 28 percent of respondents say they have a CCOE, but it is still immature.

Hiring and retaining qualified personnel is the most important strategic priority for digital security (55 percent of respondents). This is followed by achieving crypto-agility (51 percent of respondents), which is the ability to efficiently update cryptographic algorithms, parameters, processes and technologies to better respond to new protocols, standards and security threats, including those leveraging quantum computing methods.

To read the key findings in this report, visit DigiCert’s website.

Robot fear is approaching 1960s levels; that might be a distraction

Bob Sullivan

When I logged onto a Zoom meeting recently, I was offered the chance to let the company use some kind of whiz-bangy AI magic that would summarize the meeting for me. Cool? Maybe. Artificial intelligence? Not by my definition. New? Not really. New name? Absolutely.

I’m sure you’ve had this experience a lot lately.  “AI-powered” marketing-speak is everywhere, sweeping the planet faster than dot com mania ever did.  In fact, it’s come so fast and furious that the White House issued an executive order about AI on Monday. AI hasn’t taken over the planet, but AI-speak sure has. It’s smart to worry about computers taking over the world and doing away with humanity, but I think marketing hype might be AI’s most dangerous weapon.

Look, chatbots are kind of cool and impressive in their own way. New?  Well, as consumers, we’ve all been hijacked by some “smart” computer giving us automated responses when we just want a human being at a company to help us with a problem. The answers are *almost* helpful, but not really.  And chatbots … are…not really new.

I like to remind people who work in this field — before “the Cloud” there were servers.  Before Web 3.0 there was the Internet of Things, and before that, cameras that connected to your WiFi.  Before analytics, and AI, there was Big Data.  Many of these things work better than they did ten or twenty years ago, but it was the magic label — not a new Silicon Valley tech, but a new Madison Avenue slogan — that captured public imagination.  Just because someone calls something AI does not make it so.  It might just be search, or an updated version of Microsoft Bob that isn’t terrible.

I don’t at all mean to minimize concern about tech being used for evil purposes.  Quite the opposite, really. If you read the smartest people I can find right now, this is the concern you’ll hear.  It’s fine to fret about ChatGPT Jr., or ChatGPT’s evil half-brother, making a nuclear bomb, or making it substantially easier to make a nuclear bomb. We’ve been worried about something like that since the 1950s and 60s.  And we should still be concerned about it. But that’s not happening today.

Meanwhile, tech (aka “AI”) is being used to hurt people right now. There’s real concern that all the focus on a sci-fi future is taking attention away from what needs to be done to rein in large technology companies right now.

Big databases have been used to harm people for a long time. Algorithms help decide prison sentences — often based on flawed models and data. (Yes, that is real!) Credit scores rule our lives as consumers. The credit reports on which they are built are riddled with errors. And as people seem to forget, credit scores did virtually nothing to stop the housing bubble. I just read that credit scores are at an all-time high, despite the fact that consumer debt is at very high levels — and, in a classic toxic combination — interest rates are also very high. So just how predictive are credit scores?

I know this — Folks looking to regulate AI/Big Data/algorithmic bias haven’t done nearly enough research into the decades-long battle among credit reporting agencies, consumer advocacy groups, and government regulators.  Hint: It’s not over.

There is a lot to like in the recent AI executive order.  I’ve long been an advocate that tech companies should include “abuse/harm testing” into new products, the way cybersecurity teams conduct penetration testing to predict how hackers might attack. Experienced, creative technologists should sit beside engineers as they dream up new products and ponder: “If I were a bad person, how might I twist this technology for dark uses?”  So when a large tech firm comes up with a great new tool for tracking lost gadgets, someone in the room will stop them and ask, “How do we prevent enraged ex-partners from using this tech to stalk victims?” Those conversations should be had in real-time, during product development, not after something is released to the world and is already being abused.

Today’s executive order calls for red-teaming and sharing of results with regulators.   In theory, such reports would head off a nuclear-bomb-minded bot at the pass.  Good.  I just hope we don’t race past algorithmic-enhanced racial discrimination in housing decisions — which happens today, and has been happening for decades.

The best piece I read on the executive order appeared in MIT Technology Review — a conversation with Joy Buolamwini, who has a new book out titled Unmasking AI: My Mission to Protect What Is Human in a World of Machines. She’s been ringing the alarm bell on current-day risks for nearly a decade.

For something that’s a more direct explanation of the order, read this piece on Wired.  

This is a link to the White House fact sheet about the executive order. 


Cost Of Insider Risks Global Report — 2023

Ponemon Institute is pleased to present the findings of the 2023 Cost of Insider Risks: Global study. Sponsored by DTEX, this is the fifth benchmark study conducted to understand the financial consequences of insider threats caused by careless or negligent employees or contractors, criminal or malicious insiders, or credential thieves. As revealed in this research, organizations face increasing costs to respond to insider security incidents. Moreover, the time to contain an incident has not improved: it takes an average of 86 days, compared with 85 days in 2022. Only 13 percent of incidents were contained in less than 31 days.

This cost study is unique in addressing the core systems and business process-related activities that drive a range of expenditures associated with a company’s response to insider negligence and criminal behaviors. In this research, we define an insider-related incident as one that results in the diminishment of a company’s core data, networks or enterprise systems. It also includes attacks perpetrated by external actors who steal the credentials of legitimate employees/users (i.e., imposter risk).

The first study was conducted in 2016 and focused exclusively on companies in North America. Since then, the research has been expanded to include organizations in Europe, Middle East, Africa and Asia-Pacific with a global headcount of 500 to more than 75,000. In this year’s study, we interviewed 1,075 IT and IT security practitioners in 309 organizations that experienced one or more material events caused by an insider. A total of 7,343 insider incidents are represented in this research.

The most prevalent insider security incident is caused by careless or negligent employees.

According to the findings, 55 percent of incidents experienced by organizations represented in this research were due to employee negligence, and the average annual cost to remediate these incidents was $7.2 million. Less frequent are incidents involving criminal or malicious insiders (25 percent of incidents) and credential theft (20 percent of incidents). However, these incidents are more costly, at an average of $701,500 and $679,621 per incident, respectively.

As shown in this research, the cost of insider risk varies significantly based on the type of incident. The activities that drive costs are monitoring & surveillance, investigation, escalation, incident response, containment, ex-post analysis and remediation.

The following are the most salient findings from this research.

The negligent insider is the root cause of most incidents. The average number of negligent insider incidents is 14 in this year’s study. There are a variety of reasons employees can put their organizations at risk. These include not ensuring their devices are secured, not following the company’s security policy, and forgetting to patch and upgrade software to the latest version.

Malicious insiders accounted for an average of 6.2 incidents, at an average cost per incident of $701,500. In the context of this research, malicious insiders are employees or authorized individuals who use their data access for harmful, unethical or illegal activities. Because of these insiders’ potentially wider access to an organization’s sensitive and confidential data, incidents they cause are harder to detect than those caused by external attackers or hackers.

 Credential theft incidents average $679,621 per incident. The intent of the credential thief is to steal users’ credentials that will grant them access to critical data and information. These attackers commonly use phishing.

Insider security incidents are increasing. According to the 2023 research, 71 percent of companies experience more than 20 incidents per year, an increase from 67 percent in 2022.

Privileged access management (PAM) and user training and awareness are shown to reduce the cost of insider risk. The research analyzed the impact security technologies and activities can have on reducing costs. PAM can save an average of $5.9 million. User training and awareness programs can save $5.4 million and SIEM reduces the cost by $4.3 million.

Disruption or downtime and direct and indirect labor represent the most significant costs when dealing with insider threats. Investments in technology, which includes the amortized value and licensing for software and hardware that are deployed in response to insider-related incidents is the third most significant cost.

Companies spend the most on containment of the insider security incident. An average of $179,209 is spent to contain the consequences of an insider threat. The lowest average costs are for escalation ($29,794) and monitoring and surveillance ($33,596). Incidents that took less than 30 days to contain had the lowest average total cost of activities at $11.92 million. In contrast, the average activity cost for incidents that take more than 90 days to contain is $18.33 million.

North American companies spend more than the global average on activities that deal with insider threats. The total average cost of activities to resolve insider threats over a 12-month period is $16.2 million. Companies in North America experienced the highest total cost at $19.09 million. European companies had the next highest cost at $17.47 million.

Financial services and services have the highest average activity costs. The average activity cost for financial services is $20.68 million and services is $19.63 million.

Organizational size affects the cost. The cost of incidents varies according to organizational size. Large organizations with a headcount of more than 75,000 spent an average of $24.60 million over the past year to resolve insider-related incidents. To deal with the consequences of an insider incident, smaller-sized organizations with a headcount below 500 spent an average of $8 million.

Interviews with participants in this research revealed the following insights into insider threats.

In addition to determining the cost of insider threats for companies in this research, we interviewed participants about their experiences with the threat and what they are doing to reduce risks.

The insider threat continues to pose the greatest risk to organizations. Fifty-five percent of insider incidents were caused by employee negligence. Of these organizations, 75 percent of respondents say the most likely cause was a negligent insider who caused harm through carelessness or inattentiveness (15 percent), a mistaken insider who caused harm through a genuine mistake (35 percent), or an outsmarted insider who was exploited by an external attacker or adversary (25 percent).

Sales and customer service are the roles or functions that pose the greatest insider risk (48 percent and 47 percent, respectively). The functions that pose the least risk are IT and legal third-party contractors (23 percent and 29 percent, respectively).

Malicious insiders were most likely to email sensitive data to outside parties (67 percent). They were also very likely to access sensitive data not associated with their role or function (66 percent) and to scan for open ports and vulnerabilities (63 percent).

Cloud and IoT devices are the channels where insider-driven data loss is most likely to occur (59 percent and 56 percent, respectively). Less likely are corporate-owned endpoints (41 percent) and BYOD endpoints (43 percent). IoT and cloud are also the channels of most concern to organizations (65 percent and 61 percent, respectively).

Malware and social engineering attacks were the most likely non-insider attacks to cause a data breach (56 percent and 53 percent, respectively). In the past 12 months, 58 percent of organizations had a minimum of two non-insider attacks that caused a data breach. Malware is considered the most important attack to prevent (65 percent of organizations).

More organizations believe the use of AI and machine learning is important to reducing insider threats. Sixty-four percent of respondents believe AI and machine learning are essential (33 percent) or very important (31 percent) to preventing, investigating, escalating, containing and remediating insider incidents, a significant increase from 54 percent of organizations in 2022. Sixty-one percent say automation is essential (38 percent) or very important (23 percent) to managing insider risks.

Reduction in incidents is the top metric for measuring the success of insider risk efforts and programs (50 percent). This is followed by assessment of insider risks (40 percent) and length of time to resolve the incident (38 percent).

Five signs that your organization is at risk

  • Employees are not trained to fully understand and apply laws, mandates, or regulatory requirements related to their work and that affect the organization’s security.
  • Employees are unaware of the steps they should take to ensure that the devices they use, both company-issued and BYOD, are secured at all times.
  • Employees are sending highly confidential data to an unsecured location in the cloud, exposing the organization to risk.
  • Employees break your organization’s security policies to simplify tasks.
  • Employees expose your organization to risk if they do not keep devices and services patched and upgraded to the latest versions at all times.

To read the full report, visit the DTexSystems.com website.

A warning from historians: AI is going to create a lot of ‘sh*& jobs’

Bob Sullivan

We’ve all had the maddening experience of being shuttled off to a mindless chatbot when we need real customer service help. Few things can raise your blood pressure like a nonsensical automated response designed as a stall tactic when you have a real crisis on your hands.

I hope you all realize this is the world we are hurtling madly toward with all the mindless promotion of AI we’ve seen lately. Since the advent of automated voice response systems, consumers have been swearing into their phones while corporations have engaged in a cynical race to the bottom. Make cost centers like customer service cheaper, and profits increase. Make consumers capitulate because of artificial, frustrating hurdles, and profits increase. That’s unchecked, broken-market capitalism in action. Gotcha Capitalism, fueled by bots. That’s not an opinion, it’s a fact. Take a flight pretty much anywhere in America now and you’ll see what Big Data, and advanced analytics, and AI, or whatever other fancy tech marketing term you throw at it, has done to us. The race to the bottom is so real that we’ve become numb even to airline crash near-misses.

And so I want to talk today about another crash-landing that’s coming – one that’s predictable, but preventable if we act.

A billion useless people.

Yes, AI is coming for our (good) jobs. And no, despite what some ivory tower economists like to say, it’s not a given that we’ll simply replace those jobs with even better jobs.

“A billion useless people” was the headline of one of my favorite stories. In it, I discussed a simple dinner-party question: What job do you hope your son or daughter trains for? If your sensible parental goal was a comfortable life, that would have been a fairly easy question to answer a generation or so ago. But today? Doctor? Lawyer? Pilot? Professor? Software engineer? Ask a few of them and you might be surprised.

The real cause of tension should be a wide-ranging study conducted by Oxford University I wrote about in a similar piece. The study ranked 700 jobs in terms of their potential for automated replacement or “computerization.” How likely is this kind of worker to be replaced by a robot?  The results were stunning.  Many people like to think fast food workers have the most to lose from robots. In reality, it might be lawyers.  Plenty of today’s high-paying, white-collar jobs are filled with rote tasks that robots are very good at.  Knowledge workers have long convinced themselves they aren’t as replaceable as hamburger chefs. They’re wrong. If you’ve ever been bored at work, there’s a robot coming for your job.

On a micro scale, you should feel personally threatened. On a macro scale, this is a real threat to social order.  A billion useless people are going to get very angry, and the resulting unrest should scare everyone.  We must start thinking now about what to do with people when a large percentage of adults don’t have anything productive to do with their lives.

I wrote all that seven years ago, before ChatGPT was a twinkle in a programmer’s eye. Maybe you think I’m exaggerating, but Goldman Sachs published a report earlier this year estimating that “generative AI could expose the equivalent of 300 million full-time jobs to automation.”

So, this concerning future is coming — fast.  CNBC published a story this week with similar speculation about the future of the job market, and I recommend you read it.  There are some great comparisons in the story from labor market historians. Sure, there will still be some kind of higher-order jobs in a world of AI — someone has to “prompt” ChatGPT, right? — but the scale of job destruction could be immense.  Here’s one comparison offered by  Felix Koenig, assistant professor of economics at Carnegie Mellon University: Just about 100 years ago, audio tracks were introduced into silent movies, putting a generation of local musicians out of work — until then, orchestras would accompany the moving pictures.  Once “talkies” were invented, a single musician could play a soundtrack, that audio could be recorded, and replayed millions of times in theatres around the world — eliminating 99.9% of the jobs for movie musicians.  That one musician is still paid pretty well, but the rest are now out of luck.

And so it will be with AI. On the one hand, there will be a race to get a plum job as a robot programmer. There could be a few big winners and then a lot of losers. Our economy currently favors that structure, and this is a great risk.

What worries me more is that people will be left to do jobs that are considered too expensive for robot labor. Garbage collection in messy cities, for example. Or maybe flipping hamburgers. After all, robots require free health care — they have to be repaired. People don’t. Let’s just call them “sh&t jobs,” which is what Jason Resnikoff, a history professor, told CNBC.

This future is not yet written.  Yes, in the past society endured — thrived, really — when physically challenging farm jobs turned into urban paper-pushing jobs.  That kind of retraining doesn’t happen by accident, however.  In between those two events in America, we created a thriving college system and passed the GI Bill so millions could be trained without going into debt.  Today, such a massive initiative seems out of the question.

I believe the coming future offers great possibilities. I think the future will bring the revenge of the artists, and the revenge of the caring souls. AI will not create great original art (though it will certainly rip off existing art). And it will not inspire people recovering from strokes to endure the challenges of physical therapy. Therapists and artists have a bright future. Human inspiration and originality might finally get their just rewards. That’s what the Oxford study suggested.

But not everyone can do those things. And we must prepare now for this reality. AI is as much hype as it is reality — every time you hear AI, just think “Cloud” or “Internet 3.0” or whatever marketing term you like. But one thing will happen, I promise: corporations will figure out how to drive costs down in the name of artificial intelligence, and the race to the bottom will continue unabated, unless we do something to stop it. Airlines will keep ripping off consumers until there are rules against it, and until there is real competition. TV studios will generate boring shows based on search queries unless creativity is protected by labor rules. And so on.

Each time you have an insane interaction with an automated customer service tool, you are seeing a glimpse of the future.  Trust me, this is a future we want to stop before it’s too late.

State of API Security: 2023 Global Findings

The purpose of this research is to understand organizations’ awareness and approach to reducing application programming interface (API) security risks.  Ponemon Institute surveyed 1,629 IT and IT security practitioners in the United States (691) and the United Kingdom and EMEA (938) who are knowledgeable about their organizations’ approach to API security. “The Growing API Security Crisis: A Global Study,” is sponsored by Traceable.


I (Larry Ponemon), and Richard Bird, the Chief Security Officer of Traceable, will present and explain these findings at a webinar Sept. 27 at 9 a.m. You can register for it at this website.

For more details on the study, you can also visit Traceable’s microsite, with additional charts, graphs, and key findings


An API is a set of defined rules that enables different applications to communicate with each other. Organizations are increasingly using APIs to connect services and to transfer data, including sensitive medical, financial and personal data.
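To make the definition above concrete, here is a minimal, purely illustrative sketch of the request/response contract an API defines. It models, in-process, a hypothetical healthcare endpoint that returns sensitive data only when the caller presents a valid credential; the names (`get_patient_record`, `API_TOKENS`, the `/patients/<id>` path) are assumptions for the example and not drawn from the study.

```python
# Hypothetical sketch of an API contract: a structured request hits an
# endpoint, the handler enforces authentication, and a structured response
# comes back. All names and the token scheme are illustrative assumptions.

API_TOKENS = {"secret-token-123"}  # credentials issued to authorized callers

def get_patient_record(request: dict) -> dict:
    """Handle a request to a hypothetical /patients/<id> endpoint."""
    token = request.get("headers", {}).get("Authorization")
    if token not in API_TOKENS:
        # Reject unauthenticated access to sensitive data -- the kind of
        # check the study finds many API endpoints lack.
        return {"status": 401, "body": {"error": "unauthorized"}}
    patient_id = request["path"].rsplit("/", 1)[-1]
    return {"status": 200,
            "body": {"patient_id": patient_id, "record": "redacted-for-example"}}

ok = get_patient_record({"path": "/patients/42",
                         "headers": {"Authorization": "secret-token-123"}})
denied = get_patient_record({"path": "/patients/42", "headers": {}})
print(ok["status"], denied["status"])  # 200 401
```

The point of the sketch is the defined rules: both sides agree on the shape of the request and response, which is exactly what makes unauthenticated endpoints handling sensitive data such an attractive target.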

According to 57 percent of respondents, APIs are highly important to their organizations’ digital transformation programs. However, APIs with vulnerabilities put organizations at risk of a significant security breach. Sixty percent of respondents say their organizations have had at least one data breach caused by an API exploitation. Many of these breaches resulted in the theft of IP and financial loss.

 A key takeaway from the research is that while the potential exists for a major security incident due to API vulnerabilities, many organizations are not making API security a priority. Respondents were asked to rate how much of a priority it is to have a security risk profile for every API and to be able to identify API endpoints that handle sensitive data without appropriate authentication on a scale from 1 = not a priority to 10 = a very high priority.

According to our research, slightly more than half of respondents (52 percent) say it is a priority to understand those APIs that are most vulnerable to attacks or abuse based on a security risk profile. Fifty-four percent say the identification of API endpoints that handle sensitive data without appropriate authentication is a high priority.

The average IT security budget for organizations represented in this research is $35 million, of which an average of $4.2 million is allocated to API security activities. The IT and IT security functions are most often responsible for the API security budget (35 percent of respondents).

The following findings are evidence that the API security crisis is growing:

  • Organizations are losing the battle to secure APIs. One reason is that organizations do not know the extent of the risk. Specifically, on average only 40 percent of APIs are continually tested for vulnerabilities. As a result, organizations are only confident in preventing an average of 26 percent of attacks and an average of only 21 percent of API attacks can be effectively detected and contained.  
  • APIs expand the attack surface across all layers of the technology stack. Fifty-eight percent of respondents say APIs are a growing security risk because they expand the attack surface across all layers of the technology stack and are now considered organizations’ largest attack surface.
  • The increasing volume of APIs makes it difficult to prevent attacks. Fifty-seven percent of respondents say traditional security solutions are not effective in distinguishing legitimate from fraudulent activity at the API layer. Further, the increasing number and complexity of APIs makes it difficult to track how many APIs exist, where they are located and what they are doing (56 percent of respondents). 
  • Organizations struggle to discover and inventory all their APIs. Fifty-three percent of respondents say their organizations have a solution to discover, inventory and track APIs. These respondents say on average their organizations have an inventory of 1,099 APIs. Fifty-four percent of respondents say it is highly difficult to discover and inventory all APIs. The challenge is many APIs are being created and updated so organizations can quickly lose control of the numerous types of APIs used and provided.
  • Solutions are needed to reduce third-party risks and detect and stop data exfiltration events happening through APIs. An average of 127 third parties are connected to organizations’ APIs and only 33 percent of respondents say they are effective in reducing the risks caused by these third parties’ access to their APIs. Only 35 percent of respondents say they are effective in identifying and reducing risks posed by APIs outside their organizations and 40 percent say they are effective in identifying and reducing risks within their organizations. One reason is that most organizations do not know how much data is being transmitted through the APIs and need a solution that can detect and stop data exfiltration events happening through APIs. 
  • To stop the growing API security crisis, organizations need visibility into the API ecosystem and consistency in API design and functionality. Only 35 percent of respondents have excellent visibility into the API ecosystem, only 44 percent of respondents are very confident in being able to detect attacks at the API layer and 44 percent of respondents say their organizations are very effective in achieving consistency in API design and functionality. Because APIs expand the attack surface across all vectors, it is possible to exploit an API and obtain access to sensitive data without having to exploit the other solutions in the security stack. Before APIs, hackers would have to learn how to attack each layer they were trying to get through, learning different attacks for different technologies at each layer of the stack.
  • Inconsistency in API design and functionality increases the complexity of the API ecosystem. As part of API governance, organizations should define standards for how APIs should be designed, developed and displayed, as well as establish guidelines for how they should be used and maintained over time.
  • Organizations are not satisfied with the solutions used to achieve API security. As shown in the research, most organizations are unable to prevent and detect attacks against APIs. It’s no surprise, therefore, that only 43 percent of respondents say their organizations’ solutions are highly effective in securing their APIs. The primary solution used is encryption and signatures (60 percent of respondents), followed by 51 percent of respondents who say they identify vulnerabilities and 51 percent of respondents who say they use basic authentication. Solutions considered effective but not frequently used are API lifecycle management tools (41 percent), tokens (32 percent) and quotas and throttling (20 percent).  
  • Despite the growing API security crisis, threats to APIs are underestimated by management. Almost one-third of respondents say API security is only somewhat of a priority (17 percent) or not a priority (14 percent). The reasons for not making it a priority are managements’ underestimation of the risk to APIs (49 percent), other security risks are considered more of a threat (42 percent) and the difficulty in understanding how to reduce the threats to APIs (37 percent).

Part 2. Key findings

In this section, we provide an analysis of the global findings. The complete findings are presented on this website. The report is organized according to the following topics.

  • Understanding the growing API security crisis
  • Challenges to securing the unmanageable API sprawl
  • API security practices and the state of API security practices
  • API budget and governance

Understanding the growing API security crisis

Organizations have had multiple data breaches caused by an API exploitation in the past two years. Two well-publicized API security breaches include the Cambridge Analytica breach caused by a Facebook API loophole that exposed the personal information of more than 50 million individuals and a Venmo public endpoint unsecured API that allowed a student to scrape 200 million users’ financial transactions.

Sixty percent of respondents say their organizations had a data breach caused by an API exploitation, and 23 percent of these respondents say their organizations had at least six exploits in the past two years. The top three root causes of the API exploits are DDoS (38 percent of respondents), fraud, abuse and misuse (29 percent of respondents) and attacks with known signatures (29 percent of respondents).

Organizations are losing the battle to secure APIs. One reason is that organizations do not know the extent of the risk. Specifically, on average only 40 percent of APIs are continually tested for vulnerabilities. As a result, organizations are only confident in preventing an average of 26 percent of attacks and an average of only 21 percent of API attacks can be effectively detected and contained.

API exploits can severely impact an organization’s operations. Organizations mainly suffered IP theft and financial loss (52 percent of respondents). Other serious consequences were brand value erosion (50 percent of respondents) and failures in company operations (37 percent of respondents).

APIs expand the attack surface across all layers of the technology stack. Some 58 percent of respondents say APIs are a security risk because they expand the attack surface across all layers of the technology stack and are now considered organizations’ largest attack surface. Fifty-seven percent of respondents say traditional security solutions are not effective in distinguishing legitimate from fraudulent activity at the API layer. The increasing number and complexity of APIs makes it difficult to track how many APIs exist, where they are located and what they are doing. As a result, 56 percent of respondents say the volume of APIs makes it difficult to prevent attacks.

Challenges to securing the unmanageable API sprawl

Open and public APIs are most often used and/or provided by organizations. Thirty-two percent of respondents say their organizations use/provide open APIs and 31 percent of respondents say their organizations use/provide public APIs.

Organizations struggle to discover and inventory all their APIs. Fifty-three percent of respondents say their organizations have a solution to discover, inventory and track APIs. These respondents say on average their organizations have an inventory of 1,099 APIs.

Fifty-four percent of respondents say it is highly difficult to discover and inventory all APIs. The challenge is many APIs are being created and updated so organizations can quickly lose control of the numerous types of APIs used and provided.

An average of 127 third parties are connected to organizations’ APIs, and only 33 percent of respondents say they are effective in reducing the risks caused by these third parties’ access to their APIs. Respondents say they are effective in identifying and reducing risks posed by APIs outside (35 percent) and within (40 percent) their organizations. One reason is that most organizations do not know how much data is being transmitted through the APIs and need a solution that can detect and stop data exfiltration events happening through APIs.
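The kind of exfiltration check respondents say they lack can be sketched very simply: track the volume of data each API client moves out in a time window and flag clients that exceed a baseline. This is a minimal illustration only; the threshold, the log format, and the client names are all assumptions, not anything described in the report.

```python
from collections import defaultdict

# Hypothetical exfiltration-detection sketch: sum outbound bytes per API
# client over one window and flag clients above a baseline. The 10 MB
# threshold and the (client_id, bytes_out) log format are assumptions.

BASELINE_BYTES = 10_000_000  # illustrative per-window limit

def flag_exfiltration(transfer_log):
    """transfer_log: iterable of (client_id, bytes_out) tuples for one window.
    Returns a sorted list of client IDs whose total outbound volume exceeds
    the baseline."""
    totals = defaultdict(int)
    for client, nbytes in transfer_log:
        totals[client] += nbytes
    return sorted(c for c, total in totals.items() if total > BASELINE_BYTES)

log = [("partner-api-7", 9_500_000),   # a third-party integration
       ("partner-api-7", 4_000_000),   # pushes it over the baseline
       ("mobile-app", 120_000)]
print(flag_exfiltration(log))  # ['partner-api-7']
```

A production system would baseline per client and per endpoint from historical traffic rather than use one fixed constant, but the core idea, knowing how much data moves through each API connection, is the visibility the respondents describe as missing.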

To stop the growing API security crisis, organizations need visibility into the API ecosystem and consistency in API design and functionality. However, only 35 percent of respondents have excellent visibility into the API ecosystem, only 44 percent of respondents are very confident in being able to detect attacks at the API layer and 44 percent of respondents say their organizations are achieving consistency in API design and functionality.

Because APIs expand the attack surface across all vectors, it is possible to exploit an API and obtain access to sensitive data without having to exploit the other solutions in the security stack. Before APIs, hackers would have to learn how to attack each layer they were trying to get through, learning different attacks for different technologies at each layer of the stack.

Inconsistency in API design and functionality increases the complexity of the API ecosystem. As part of API governance, organizations should define standards for how APIs should be designed, developed and displayed, as well as establish guidelines for how they should be used and maintained over time.
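Governance standards of this kind only help if they are checked mechanically. As a minimal sketch, assuming two made-up rules (versioned paths and a declared authentication scheme), a tiny lint over endpoint definitions might look like this; the rule names and the `lint_api` helper are hypothetical, not part of the study.

```python
# Hypothetical governance lint: flag API endpoint definitions that violate
# simple, assumed design standards. The rules and spec format are illustrative.

STANDARDS = {
    "path_prefix": "/v1/",   # assumed rule: every endpoint path is versioned
    "require_auth": True,    # assumed rule: every endpoint declares auth
}

def lint_api(spec: dict) -> list:
    """Return a list of standards violations for one endpoint spec."""
    problems = []
    path = spec.get("path", "")
    if not path.startswith(STANDARDS["path_prefix"]):
        problems.append(f"{path}: missing version prefix")
    if STANDARDS["require_auth"] and not spec.get("auth"):
        problems.append(f"{path}: no authentication scheme declared")
    return problems

endpoints = [
    {"path": "/v1/accounts", "auth": "oauth2"},  # conforms
    {"path": "/transactions", "auth": None},     # violates both rules
]
for ep in endpoints:
    for problem in lint_api(ep):
        print("violation:", problem)
```

Run in CI against the API inventory, a check like this makes design consistency a gate rather than a guideline, which is the spirit of the governance recommendation above.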

To download and read the rest of this report, visit Traceable’s website.

Forced into fraud: Scam call centers staffed by human trafficking victims

Bob Sullivan

Who’s on the other end of the line when you get a scam phone call? Often, it’s a victim of human trafficking whose safety — and perhaps their life — depends on their ability to successfully steal your money. A recent UN report suggests there are hundreds of thousands of trafficking victims forced to work in sweatshops in Southeast Asia devoted to one thing: Stealing money. If they don’t, they go hungry, or they are beaten … or worse.

In other words, there are often victims on both ends of scam phone calls.

Americans report they are inundated with scam phone calls, emails and text messages, and FBI data shows losses are skyrocketing.  Crypto scams alone increased more than 125% last year, with $3.3 billion in reported losses.  These numbers are so large that they are meaningless to most; and you’ve probably heard before that this or that crime is skyrocketing, so perhaps that alarmist-sounding statement doesn’t penetrate.  But let me say this: I spend all week talking to victims of scams and law enforcement officials about tech-based crimes, and by any measure I can observe, there is a very concerning spike in organized online crime.

A recent report published by the United Nations helps explain why — for some “criminals,” stealing money from you is a matter of life and death.

The recent surge of activity dates to the pandemic, the report says. Public health measures forced the abrupt closing of casinos in places like Cambodia, which sent operators — including some controlled by criminal networks — looking for alternative revenue streams.  The toxic combination of out-of-work casino employees and a new tool that made international theft easy — cryptocurrency — led to an explosion in “scam centers” devoted to romance crimes, fake crypto investment schemes, and so on.

The scam centers have an endless need for “workers.” Many are lured from other Asian nations by help-wanted ads with promises of big salaries and work visas. Instead, new arrivals are often faced with violence, their passports taken, their families left wondering what happened — or, called with ransom demands.

There have been plenty of horror stories with anecdotes about forced scam center labor. Here’s one account from a Malaysian man who went to Thailand for what he thought was a legitimate job.

  • “Ah Hong soon found out that he was to carry out online scams for a call center that targeted people living in the United States and Europe. Everyone working there was given a target and those who failed to achieve it would be punished. Punishment included being forced to run in the hot sun for two hours, being beaten with sticks or being asked to carry heavy bricks for long hours. ‘If we made a mistake, we were tasered,’ he said. Ah Hong added that he was once punished by having to move bricks from 7 a.m. to 5 p.m., besides being beaten multiple times. A typical working day, he said, would begin at midnight and end at 5 p.m.”

He was only released when his family paid his ransom.

The recent United Nations report attempted to estimate how many Ah Hongs there are.  The report’s conclusion is terrifying.

  • The number of people who have fallen victim to online scam trafficking in Southeast Asia is difficult to estimate because of its clandestine nature and gaps in the official response. Credible sources indicate that at least 120,000 people across Myanmar may be held in situations where they are forced to carry out online scams, while credible estimates in Cambodia have similarly indicated at least 100,000 people forcibly involved in online scams.

We only know what happens to these trafficking victims from the stories of those, like Hong, who have escaped.  The UN report details more horrors about conditions in these scam centers.

  • Reports have also been received of people being chained to their desk. Many victims report that their passports were confiscated, often along with their mobile phones or they were otherwise prohibited from contacting friends or family, a situation that UN human rights experts have described as ‘detention incommunicado’.
  • In addition, there is reportedly inadequate access to medical treatment, with some disturbing cases of victims who have died as a result of mistreatment and lack of medical care. Reports commonly describe people being subjected to torture, cruel and degrading treatment and punishments including the threat or use of violence (as well as being made to witness violence against others), most commonly beatings, humiliation, electrocution and solitary confinement, especially if they resist orders, disobey compound rules or do not meet expected scamming targets. Reports have also been received of sexual violence, including gang rape, as well as trafficking into the sex sector, most usually as punishment, for example for failing to meet their targets.

When I hear stories from the victim’s point of view, I am often amazed at how relentless the criminals can be. Some spend months, even years, grooming victims with faux attention and love.  Understanding how high the stakes are for the people on the other end of the phone helps explain why they can be so determined.

From a self-preservation point of view, I think it’s crucial we understand just why scam criminal activity is thriving right now. But from a human rights point of view, it’s critical we call out this hideous behavior and work to stop it. The UN paper blames several factors, but one that caught my eye is the existence of Special Economic Zones (SEZs), designed to help support new industries. Ideally, SEZs encourage entrepreneurship by cutting red tape. But in some cases, they have become synonymous with “opaque regulation and the proliferation of multiple illicit economies, including human trafficking, illegal wildlife trade, and drug production,” the report says.

It’s also interesting to think about the implications for trafficking victims. Even after they are released or they manage to escape, many face challenges back home for being involved in criminal operations. The UN report stresses that scam center victims — like other human trafficking victims — are not legally responsible for crimes they were forced to commit against their will.  They should not face prosecution; doing so only prevents more victims from coming forward.

“People who are coerced into working in these scamming operations endure inhumane treatment while being forced to carry out crimes. They are victims. They are not criminals,” said UN High Commissioner for Human Rights Volker Türk. “In continuing to call for justice for those who have been defrauded through online criminality, we must not forget that this complex phenomenon has two sets of victims.”

The report also makes clear who likely victims are:

  • Most people trafficked into the online scam operations are men, although women and adolescents are also among the victims…Most are not citizens of the countries in which the trafficking occurs. Many of the victims are well-educated, sometimes coming from professional jobs or with graduate or even post-graduate degrees, computer-literate and multilingual. Victims come from across the ASEAN region (from Indonesia, Lao PDR, Malaysia, Myanmar, Philippines, Singapore, Thailand and Vietnam), as well as mainland China, Hong Kong and Taiwan, South Asia, and even further afield from Africa and Latin America.

Every thoughtful adult should read the UN report, and make sure your friends and family understand why the stakes in the scam world have become so high.

Half of Breached Organizations Unwilling to Increase Security Spend Despite Soaring Breach Costs

The global average cost of a data breach reached $4.45 million in 2023 – another record high and a 15% increase over the last 3 years, according to this year’s Cost of a Data Breach study, just published by IBM and conducted by The Ponemon Institute. Detection and escalation costs jumped 42% over this same time frame, representing the highest portion of breach costs, and indicating a shift towards more complex breach investigations.

According to the 2023 IBM report, businesses are divided in how they plan to handle the increasing cost and frequency of data breaches. The study found that while 95% of studied organizations have experienced more than one breach, breached organizations were more likely to pass incident costs onto consumers (57%) than to increase security investments (51%).

The 2023 Cost of a Data Breach Report is based on in-depth analysis of real-world data breaches experienced by 553 organizations globally between March 2022 and March 2023. The research, sponsored and analyzed by IBM Security, was conducted by Ponemon Institute and has been published for 18 consecutive years. Some key findings in the 2023 IBM report include:

  • AI Picks Up Speed – AI and automation had the biggest impact on an organization’s speed of breach identification and containment. Organizations with extensive use of both AI and automation experienced a data breach lifecycle that was 108 days shorter compared to studied organizations that have not deployed these technologies (214 days versus 322 days).
  • The Cost of Silence – Ransomware victims in the study that involved law enforcement saved nearly half a million ($470,000) in average breach costs compared to those that chose not to involve law enforcement. Despite these savings, 37% of ransomware victims studied chose not to bring law enforcement in.
  • Detection Gaps – Only one third of studied breaches were detected by organizations’ own security teams, compared to 27% that were disclosed by an attacker. Data breaches disclosed by the attacker cost nearly $1 million more on average compared to studied organizations that identified the breach themselves.

“Time is the new currency in cybersecurity both for the defenders and the attackers. As the report shows, early detection and fast response can significantly reduce the impact of a breach,” said Chris McCurdy, General Manager, Worldwide IBM Security Services. “Security teams must focus on where adversaries are the most successful and concentrate their efforts on stopping them before they achieve their goals. Investments in threat detection and response approaches that accelerate defenders’ speed and efficiency – such as AI and automation – are crucial to shifting this balance.”

Every Second Costs

According to the 2023 report, organizations that fully deploy security AI and automation saw 108-day shorter breach lifecycles on average compared to organizations not deploying these technologies – and experienced significantly lower incident costs. In fact, organizations that deploy security AI and automation extensively saw nearly $1.8 million less in average breach costs than organizations that didn’t deploy these technologies – the biggest cost saver identified in the report.

At the same time, adversaries have reduced the average time to complete a ransomware attack. And with 40% of studied organizations not yet deploying security AI and automation, there is still considerable opportunity for organizations to boost detection and response speeds.

To read the full report, visit IBM’s website — click here

The Frances Haugen interview. Two years after Facebook, now what?

Bob Sullivan

Nearly two years after focusing the world’s attention on Big Tech’s big problems, Frances Haugen remains a powerful force in the technology industry. I recently interviewed Haugen for the Debugger podcast I host at Duke University.

In this interview, Haugen tells me how Covid lockdowns played a key role in her difficult decision to come forward and criticize one of the world’s most powerful companies, what she’s doing now to keep the pressure on tech firms, and how she handles the slow pace of change.

For a new book she’s just published, Haugen researched Ralph Nader’s battle against the automotive industry in the 1960s — her fight is like his in some ways, very different in others. She’s created a non-profit to pursue research into harms that tech companies cause — some of that will be conducted this summer by Duke University students — and she offers up some simple things companies like Facebook could do immediately to mitigate those harms.

I hope you’ll listen to the episode. Haugen is an engaging speaker. But if podcasts aren’t your thing, a full transcript is at this link.

Click the play button below to listen. You can also subscribe to Debugger on Spotify or wherever you find podcasts.

A brief excerpt from our conversation:

“One of the things I talk about in my book is… why was it when Ralph Nader wrote a book called Unsafe at Any Speed, that within a year … there were Congressional inquiries, laws were passed, a Department of Transportation was founded. Suddenly seat belts were required in every car in the United States. Why was that able to move so fast? And we’re still having very, very basic conversations about things like even transparency in the United States.”

Bob: So we’ve talked a lot about platform accountability on this podcast, the worry that Big Tech doesn’t have to answer to anyone, not even governments. A recent report by the Irish Council for Civil Liberties found that two thirds of the cases brought before Ireland’s Data Protection Commissioner, which essentially serves as the enforcement agency for the whole EU, resulted in just a reprimand. Frances, as someone who’s done a lot to try to make at least one big tech company accountable, how do you react to that?

Frances Haugen: One of the largest challenges regarding tech accountability is … legislation and democracy takes a lot more time than technical innovation. Pointing at things like adoption curves … you know, how long did it take us to all get washing machines? How long did it take for us to get telephones? What about cell phones? How many years do these processes take? And they’re accelerating. The process of adoption gets condensed. And when it comes to things like the data protection authority, it’s one of these interesting …  quirks, I would say, of how we learn to pass laws. Because when GDPR was passed, it was a revolutionary law. It was a generational law in terms of how it impacted how tech companies around the world operated. But we have seen over the years that the Irish Data Protection Authority is either unable or unwilling to act, and that pattern is consistent. One of the stats I was trying to find before I came on today was the fraction of complaints that they’ve even addressed is very, very small. So yes, they’ve only acted on a handful of cases in the last few years. It’s something like 95% of all the complaints that have been brought, they’ve never responded to. So I’m completely unsurprised by the recent report.

Bob: Is it frustrating that we’re still in this place?

Frances Haugen: Oh, no. This is one of these things where perspective is so important, trying to change the public’s relationship with these tech companies. And that’s fundamentally what the core of my work is — the idea that we should have an expectation that we have the right to ask questions and get real answers. That’s a fundamental culture shift … coming at a project like that from a place like Silicon Valley, where if you can’t accomplish something in two years, it’s not really considered valuable, right? Things get funded in Silicon Valley based on expectations two years out. If it takes five years or 10 years, that’s considered way too slow. And so I come at it assuming that it’ll take me years, like years and years, to get where I want the world to get. And that means that when there are hiccups like this, they’re not nearly as upsetting. And so I think it’s unfortunate. I think it’s unacceptable. But I think it’s also one of these things where I’m not surprised by it.