
The Second Annual Global Study on the Growing API Security Crisis

Application Programming Interfaces (APIs) benefit organizations by connecting systems and data, driving innovation and enabling new and personalized products and services. As organizations realize the benefits of APIs, they are also becoming aware of how vulnerable APIs are to exploitation. In fact, 61 percent of participants in this research believe API risk will significantly increase (21 percent) or increase (40 percent) in the next 12 to 24 months.

As defined in the research, an API is a set of rules that enables different applications to communicate with each other. Organizations are increasingly using APIs to connect services and to transfer data, including sensitive medical, financial and personal data. As a result, the API attack surface has grown dramatically.

Sponsored by Traceable, the purpose of this research is to understand organizations’ awareness and approach to reducing API security risks. In this year’s study, Ponemon Institute surveyed 1,548 IT and IT security practitioners in the United States (649), the United Kingdom (451) and EMEA (448) who are knowledgeable about their organizations’ approach to API security.

Why APIs continue to be vulnerable to exploitation. This year, 54 percent of respondents say APIs are a security risk because they expand the attack surface across all layers of the technology stack, and APIs are now considered organizations’ largest attack surface. Because APIs cut across every layer, an attacker can exploit a single API to reach sensitive data without having to defeat the other defenses in the security stack. Before APIs, hackers had to learn how to attack each technology they were trying to get through, mastering different attacks for each layer of the stack.

Some 53 percent of respondents say traditional security solutions are not effective in distinguishing legitimate from fraudulent activity at the API layer. The increasing number and complexity of APIs make it difficult to track how many APIs exist, where they are located and what they are doing. As a result, a majority of respondents in both years of the study (55 percent and 56 percent) say the volume of APIs makes it difficult to prevent attacks.

The following findings illustrate the growing API crisis and the steps that should be taken to improve API security:

  • Organizations have had multiple data breaches caused by API exploitation in the past two years, resulting in financial and IP losses. These data breaches are likely to occur because, on average, only 38 percent of APIs are continually tested for vulnerabilities. As a result, organizations are confident in preventing an average of only 24 percent of attacks. To prevent API compromises, APIs should be monitored for risky traffic, performance and errors.
  • Targeted DDoS attacks continue to be the primary root cause of the data breaches caused by an API exploitation. Another root cause is fraud, abuse and misuse. When asked to rate the seriousness of fraud attacks, almost half of respondents (47 percent) say these attacks are very or highly serious. 
  • Organizations have a very difficult time discovering and inventorying all APIs, and as a result they do not know the extent of the risks to their APIs. Because so many APIs are being created and updated, organizations can quickly lose control of the numerous types of APIs used and provided. Once all APIs are discovered, it is important to maintain an inventory that provides visibility into the nature and behavior of those APIs.
  • According to the research, the most challenging aspects of securing APIs, and therefore the ones that should be the focus of any security strategy, are preventing API sprawl, stopping the growth in API security vulnerabilities and prioritizing APIs for remediation.
  • Third-party APIs expose organizations to cybersecurity risks. In this year’s research, an average of 131 third parties are connected to organizations’ APIs. Recommendations to mitigate third-party API risk include creating an inventory of third-party APIs, performing risk assessments and due diligence and establishing ongoing monitoring and testing. Third-party APIs should also be continuously analyzed for misconfiguration and vulnerabilities.
  • To prevent API exploitations, organizations need to make identifying API endpoints that handle sensitive data without appropriate authentication more of a priority. An API endpoint is a specific location within an API that accepts requests and sends back responses. It is how different systems and applications communicate with each other, sending and receiving information and instructions via the endpoint (a minimal sketch appears after this list).
  • Bad bots impact the security of APIs. A bot is a software program that operates on the Internet and performs repetitive tasks. While some bot traffic is from good bots, bad bots can have a huge negative impact on APIs. Fifty-three percent of respondents say their organizations experienced one or more bot attacks involving APIs. The security solutions most often used to reduce the risk from bot attacks are web application firewalls, content delivery network deployment and active traffic monitoring on an API endpoint.
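
To make the endpoint concept concrete, here is a minimal, illustrative sketch in Python (using the Flask framework) of an endpoint that refuses to return sensitive data without an authentication check. The route, token store and field names are hypothetical; none of this comes from the study.

```python
# Minimal sketch of an authenticated API endpoint (illustrative only).
# Assumes Flask is installed; the token store and route are hypothetical.
from flask import Flask, jsonify, request, abort

app = Flask(__name__)

# In production this would be a real identity provider, not a dict.
VALID_TOKENS = {"example-token": "alice"}

def authenticate(req):
    """Return the caller's identity, or None if the token is missing or invalid."""
    token = req.headers.get("Authorization", "").removeprefix("Bearer ").strip()
    return VALID_TOKENS.get(token)

@app.route("/api/v1/patients/<patient_id>")
def get_patient(patient_id):
    # Endpoints that return sensitive data must reject unauthenticated calls.
    user = authenticate(request)
    if user is None:
        abort(401)  # No valid credential: refuse before touching data.
    return jsonify({"patient_id": patient_id, "requested_by": user})
```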

Generative AI and API security 

  • Generative artificial intelligence is being adopted by many organizations for benefits such as business intelligence, content development and coding. In this research, 67 percent of respondents say their organizations have already adopted generative AI (21 percent), are in the process of adopting it (30 percent) or plan to adopt it in the next year (16 percent). As organizations embrace generative AI, they should also be aware of the security risks that negatively affect APIs.
  • The top concerns about how generative AI applications affect API security are the increased attack surface due to additional API integrations, unauthorized access to sensitive data, potential data leakage through API calls to generative AI services and difficulty in monitoring and analyzing traffic to and from generative AI APIs.
  • The main challenges in securing APIs used by generative AI applications are the rapid pace of generative AI technology development, the lack of in-house expertise in generative AI and API security and the lack of established best practices for securing generative AI APIs.
  • The top priorities for securing APIs used by generative AI applications are real-time monitoring and analysis of traffic to and from generative AI APIs, implementing strong authentication and authorization for generative AI API calls and comprehensive discovery and cataloging of generative AI API integrations.
  • Organizations invest in API security for generative AI-based applications and APIs to identify and block sensitive data flows to generative AI APIs and safeguard critical data assets, to improve the efficiency of technologies and staff and to enable real-time monitoring and analysis of traffic to and from LLM APIs so they can quickly detect and respond to emerging threats.

To read key findings in this report, visit the Traceable.com website.


Facebook acknowledges it’s in a global fight to stop scams, and might not be winning

Bob Sullivan

Facebook publicly acknowledged recently that it’s engaged in a massive struggle with online crime gangs who abuse the service and steal from consumers worldwide. In a blog post, the firm said it had removed two million accounts just this year that had been linked to crime gangs, and was fighting on fronts across the world, including places like Myanmar, Laos, Cambodia, the United Arab Emirates and the Philippines. But in a nod to how difficult the fight is, the firm acknowledged it needs help.

“We know that these are extremely persistent and well-resourced criminal organizations working to evolve their tactics and evade detection, including by law enforcement,” the firm wrote. “We’ve seen them operate across borders with little deterrence and across many internet platforms in an attempt to ensure that any one company or country has only a narrow view into the full picture of scam operations. This makes collaboration within industries and countries even more critical.”

I’ve been writing about the size and scope of scam operations for years, but lately, I’ve tried to ring the alarm bell about just how massive these crime gangs have become (see “They’re finding dead bodies outside call centers”). If you haven’t heard about a tragic victim in your circle of friends recently, I’m afraid you will soon. There will be millions of victims and perhaps $1 trillion in losses by the time we count them all … and behind each one, you’ll find a shattered life.

Facebook’s post focused on a crime that is commonly called “pig butchering” — a term I shun and will not use again because it is so demeaning to victims. Often, the crime involves the long-term seduction of a victim, followed by an eventual invitation to invest in a made-up asset like cryptocurrency.  The scams are so elaborate that they include real-sounding firms, with real-looking account statements. They can stretch well into a year or two.  Behind the scenes, an army of criminals works together to keep up the relationship and to manufacture these realistic elements. As I’ve described elsewhere, hundreds of thousands of these criminals are themselves victims, conscripted into scam compounds via some form of human trafficking.

Many victims don’t find out what’s going on until they’ve sent much of their retirement savings to the crime gang.

“Today, for the first time, we are sharing our approach to countering the cross-border criminal organizations behind forced-labor scam compounds under our Dangerous Organizations and Individuals (DOI) and safety policies,” Facebook said. “We hope that sharing our insights will help inform our industry’s defenses so we can collectively help protect people from criminal scammers.”

It’s a great development that Facebook is sharing its behind-the-scenes work to combat this crime. But the firm can and must do more. Its private message service is often a critical tool for criminals to ensnare victims; its platform full of “friendly” strangers in affinity groups is essential for victim grooming. It would be unfair to say Facebook is to blame for these crimes; but I also know no one works there who wants to go home at night thinking the tool they’ve built is being used to ruin thousands of lives.

How could Facebook do more? One version of the scam begins with the hijacking of a legitimate account that already enjoys trust relationships.  In one typical fact pattern, a good-looking soldier’s account is stolen, and then used to flirt with users.  The pictures and service records are often a powerful asset for criminals trying to seduce victims.

Victims who’ve had their accounts hijacked say it can take months to recover their accounts, or to even get the service to take down their profiles being used for scams. As I’ve written before, when a victim tells Facebook that an account is actively being used to steal from its members, it’s hard to understand why the firm would be slow to investigate.  Poor customer service is our most serious cyber vulnerability.

In another blog post from last month, Facebook said it has begun testing better ways to restore hijacked accounts.  That’s good, too. But I’m here to tell you the new method the firm says it’s using — uploaded video selfies — has been in use for at least two years.  You might remember my experience using it. So, what’s the holdup? If we are in the middle of an international conflict with crime gangs stealing hundreds of millions of dollars, you’d think such a tool would be farther along by now.

Still, I take the publication of today’s post — in which Facebook acknowledges the problem — as a very positive first step.  I’d hope other tech companies will follow suit, and will also cooperate with the firm’s ideas around information sharing.  Meta, Facebook’s parent, is uniquely positioned to stop online crime gangs; its ample resources should be a match even for these massive crime gangs.

The 2024 Study on the State of Identity and Access Management (IAM) Security

Keeping enterprise and customer data secure, private, and uncorrupted has never been more important to running a business. Data is the greatest asset in our information-driven world, and keeping it secure allows your organization to maintain a healthy operation and reduce operational, financial, legal, and reputational risk.

The purpose of this report is to understand how organizations are approaching Identity and Access Management (IAM), to what extent they are adopting leading security practices, and how well they are mitigating identity security threats. Sponsored by Converge Technology Solutions, Ponemon Institute surveyed 571 IT and IT security practitioners in the US to hear what they are currently practicing in IAM.

Keeping information safe has gotten more complex as technology has advanced, the number of users has grown, and the devices and access points they use have proliferated beyond the walls of the enterprise. Attackers see their opportunities everywhere.

Threat actors have also changed. It’s no longer the “lone wolf” hacker that is the threat; organized crime groups and bad-actor nation states are now a constant threat to our data security. They have more sophisticated tools, expanding compute power, and AI. They’ve also had decades to hone their methods and are innovating daily.

Not a week goes by without a new data breach hitting the news cycle. A single successful attack can be painfully expensive. In the United States the average cost per data breach was $9.48 million in 2023. And this is just the financial impact which may not include reputational harm, loss of customers and other hidden costs.

Surprisingly, stolen or compromised credentials are still the most common cause of a data breach. While there is an entire industry devoted to identifying and remediating breaches as or after they happen, the best defense is to prevent credential theft in the first place.

At the heart of prevention are the practices of Identity and Access Management or IAM. IAM ensures that only trusted users are accessing sensitive data, that usernames and passwords aren’t leaked or breached, and that the enterprise knows precisely who, where and when their systems are being accessed. Keeping the bad guys from stealing credentials severely limits their ability to cause harm. Good IAM and awareness training does that.

The State of the Art of IAM

Like all technology practices, IAM has evolved over the years to become more sophisticated and robust as new techniques have been developed in keeping data and systems secure. Organizational adoption and enforcement vary greatly.

While some advanced businesses are already using endpoint privileged management and biometrics, there are still organizations with policies loose enough that using a pet’s name with a rotating digit as a password is still possible, or where credentials sit on sticky notes stuck to employee monitors.

For most companies, it all begins with the basics of authentication. A username and password alone are no longer enough for the “primary” login to mission-critical systems. In legacy systems, where sophistication beyond usernames and passwords is not available, best practices must be taught and enforced rigorously. Practices such as very long passwords or passphrases and checking passwords against a blacklist must be put in place. These password basics are a starting point that many, many users still don’t universally adhere to.
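
As a rough illustration of those basics, here is a minimal Python sketch of a password check that enforces a long-passphrase minimum and a blacklist lookup. The length threshold and blacklist entries are illustrative policy assumptions, not recommendations from the report.

```python
# Minimal sketch of basic password hygiene checks. The minimum length and
# the blacklist contents are illustrative policy assumptions.
MIN_LENGTH = 16  # favor long passphrases over short "complex" passwords

# In practice the blacklist would come from a breached-password corpus.
BLACKLIST = {"password123", "letmein", "qwertyuiop", "fluffy2024"}

def password_is_acceptable(candidate: str) -> bool:
    """Reject short passwords and anything on the known-bad list."""
    if len(candidate) < MIN_LENGTH:
        return False
    if candidate.lower() in BLACKLIST:
        return False
    return True

print(password_is_acceptable("fluffy2024"))                    # False: too short and blacklisted
print(password_is_acceptable("correct horse battery staple"))  # True: long passphrase
```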

The next critical step is adding Multi-Factor Authentication (MFA). Many cyberattacks are initiated by phishing where credentials and personal information are obtained from susceptible users. Others are brute force attacks where the password is eventually guessed. Using MFA introduces a second level of authentication that isn’t password-based to thwart attackers who may have discovered the right password. If your organization hasn’t yet implemented MFA, it is past time to act. This additional layer of security can dramatically reduce the risk of credential compromise.
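
For readers who want to see what the second factor actually computes, here is a minimal Python sketch of a time-based one-time password (TOTP, RFC 6238) using only the standard library. The shared secret shown is a placeholder; in practice it is established during MFA enrollment.

```python
# Minimal sketch of a TOTP second factor (RFC 6238), standard library only.
# The shared secret here is illustrative; real secrets come from enrollment.
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, step: int = 30, digits: int = 6) -> str:
    """Derive the current time-based one-time password from a shared secret."""
    counter = int(time.time()) // step
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                      # dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def verify_second_factor(secret: bytes, submitted: str) -> bool:
    # Compare in constant time to avoid leaking digits via timing.
    return hmac.compare_digest(totp(secret), submitted)

shared_secret = b"example-enrollment-secret"  # hypothetical placeholder
print(verify_second_factor(shared_secret, totp(shared_secret)))  # True
```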

If you’ve already deployed basic MFA, the next logical steps include Adaptive Authentication or Risk-Based Authentication. This technique adds intelligence to the authentication flow, providing strong security while reducing friction by tailoring authentication requirements to the risk and sensitivity of each specific request rather than presenting the same MFA prompt every time. This reduces MFA response fatigue for end users.
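
A minimal sketch of how such a risk-based decision might look in code appears below. The signals, weights and thresholds are invented for illustration; real adaptive-authentication products use far richer telemetry.

```python
# Minimal sketch of risk-based (adaptive) authentication. The signals,
# weights and thresholds are illustrative assumptions, not a product's logic.
from dataclasses import dataclass

@dataclass
class LoginContext:
    known_device: bool        # device previously enrolled by this user
    usual_location: bool      # geo-IP matches the user's normal region
    sensitive_resource: bool  # request touches regulated or critical data

def risk_score(ctx: LoginContext) -> int:
    """Accumulate risk points from contextual signals."""
    score = 0
    if not ctx.known_device:
        score += 40
    if not ctx.usual_location:
        score += 30
    if ctx.sensitive_resource:
        score += 30
    return score

def required_challenge(ctx: LoginContext) -> str:
    """Low risk: password only. Medium: add MFA. High: block and review."""
    score = risk_score(ctx)
    if score < 30:
        return "password"
    if score < 70:
        return "password+mfa"
    return "deny"

# A familiar device in a familiar place gets the low-friction path.
print(required_challenge(LoginContext(True, True, False)))   # password
print(required_challenge(LoginContext(False, False, True)))  # deny
```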

On the leading edge, organizations may choose to forgo using passwords altogether and go passwordless to nearly eliminate the risk of phishing attacks. This method uses passkeys that may leverage biometrics (e.g., fingerprint, retina scan), hardware devices or PINs with cryptographic key pairs assigned and integrated into the access devices themselves.
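
The following Python sketch shows only the cryptographic core of that idea: a challenge signed by a device-held private key and verified against a registered public key. It uses the third-party cryptography package, and it is not the WebAuthn/FIDO2 protocol itself, just the key-pair principle behind it.

```python
# Minimal sketch of the challenge-response core behind passwordless login.
# Real passkeys use the WebAuthn/FIDO2 protocol; this only shows the
# key-pair idea, using the third-party 'cryptography' package.
import os
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Enrollment: the device generates a key pair and registers the public half.
device_key = Ed25519PrivateKey.generate()
registered_public_key = device_key.public_key()

# Login: the server sends a random challenge; the device signs it.
challenge = os.urandom(32)
signature = device_key.sign(challenge)

# The server verifies the signature; no shared password ever exists to phish.
try:
    registered_public_key.verify(signature, challenge)
    print("login accepted")
except InvalidSignature:
    print("login rejected")
```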

A layer on top of these methods is Identity Threat Detection and Response (ITDR). This technology gathers signals across the ecosystem to automatically deal with a credential breach (or the risk of one) as it happens, limiting lateral movement. ITDR uses analytics and AI to monitor access points and authentication, identify anomalies that may represent attacks, and force re-authentication or terminate sessions before further damage can be done. These systems have sophisticated reporting and analytics to identify areas of risk across the environment.
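
Here is a deliberately simplified Python sketch of the ITDR pattern: gather authentication events, flag an anomaly, and respond by cutting the session. The event fields, thresholds and response action are assumptions made for illustration.

```python
# Minimal sketch of an ITDR-style check: watch authentication events for
# anomalies and cut the session before lateral movement. The event fields,
# thresholds and responses are illustrative assumptions.
from collections import Counter

def detect_anomalies(events: list[dict]) -> list[str]:
    """Return user IDs whose recent failures suggest credential abuse."""
    failures = Counter(e["user"] for e in events if not e["success"])
    new_geo = {e["user"] for e in events if e.get("new_country")}
    flagged = []
    for user, count in failures.items():
        # Many failures plus a never-seen location is a classic takeover sign.
        if count >= 5 and user in new_geo:
            flagged.append(user)
    return flagged

def respond(user: str) -> None:
    # In a real ITDR tool this would call the identity provider's session APIs.
    print(f"terminating sessions and forcing re-authentication for {user}")

events = [{"user": "svc-backup", "success": False, "new_country": True}] * 6
for user in detect_anomalies(events):
    respond(user)
```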

Regulatory Compliance: Identity Governance and Administration (IGA)

Regulatory non-compliance is another risk of failed IAM. Since regulations such as GDPR (General Data Protection Regulation), SOX (Sarbanes-Oxley), and HIPAA (Health Insurance Portability and Accountability Act) all set standards for data privacy, it is imperative that organizations identify, approve, and monitor access to critical data and systems.

The authoritative source of identity information for most organizations should be their HR system(s). A properly configured IGA solution utilizes this authoritative source as the starting point for determining access to an organization’s critical systems based upon the person’s role.

Beyond providing access, a viable IGA solution should also allow you to catalog and attest to user entitlements associated with mission-critical systems and systems with regulated data, creating an audit trail. Periodic reviews of access (e.g., quarterly, annually), in addition to Separation of Duty (SoD) policies and event-driven micro-reviews, should be part of an IGA solution to ensure that compliance requirements are continually met.

Another avenue that is often exploited is over-privileged user accounts, where a user has access to data or systems that they don’t need, creating unneeded risks. User accounts can gain too much privilege in many ways, such as the retention of past privileges as individuals’ roles within the organization change. By managing lifecycle events with an IGA solution, organizations can minimize the risks of overprivileged accounts being compromised.

IGA solutions can enforce a policy of “least privileged access” where users are only assigned the necessary privileges to perform the duties required of them. This approach combined with SoD policy enforcement can help to greatly reduce your data security risk profile.

Similarly, Role-Based Access Control (RBAC) can be a valuable methodology for managing the evolving access requirements of an organization. RBAC assigns the required access based on the role an employee plays within the organization instead of using mirrored account privileges, thereby limiting the scope of what they can access to what is necessary. RBAC can greatly reduce the timeline necessary to roll out large changes to systems and data, allowing your organization to adapt quickly to the market and new requirements.
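
A minimal Python sketch of the RBAC idea follows: permissions attach to roles, users hold roles, and every access check consults that mapping. The role and permission names are invented examples.

```python
# Minimal sketch of Role-Based Access Control: permissions attach to roles,
# users get roles, and access checks consult the role map. Role and
# permission names are illustrative assumptions.
ROLE_PERMISSIONS = {
    "nurse":     {"read_chart"},
    "clinician": {"read_chart", "write_chart"},
    "billing":   {"read_invoice", "write_invoice"},
}

USER_ROLES = {
    "alice": {"clinician"},
    "bob":   {"billing"},
}

def is_allowed(user: str, permission: str) -> bool:
    """A user may act only if one of their roles carries the permission."""
    return any(
        permission in ROLE_PERMISSIONS.get(role, set())
        for role in USER_ROLES.get(user, set())
    )

print(is_allowed("alice", "write_chart"))  # True: clinicians may chart
print(is_allowed("bob", "read_chart"))     # False: billing has no chart access
```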

In addition to improving security, an IGA solution should also make life easier for users and administrators. An integrated IGA solution can take time- and labor-intensive manual provisioning operations and move them to automated request and fulfillment processes. The IGA solution not only performs the actions faster than manual provisioning activities, but it also ensures that the right resource is granted to the right person with the right approvals at the right time.

Privileged Access Management (PAM): The Rise of Enterprise Password Vaults

PAM systems control access and passwords to highly sensitive data and systems, such as those controlled by IT to access root systems, administrator access, command-line access on vital servers, machine user IDs or other applications where a breach could put the entire IT footprint in jeopardy. The key component of a PAM system is an enterprise password vault that monitors access activity on highly sensitive accounts.

The password vault does more than just safely store passwords. It updates them, rotates them, disposes of them, tracks their usage and more. Users “borrow” privileged accounts temporarily for time-bound sessions, creating an abstraction between the person’s typical user account and the privileged account and minimizing the potential for privileged credential compromise. Once a vault is established, the next level is to automatically rotate passwords after they are borrowed. This ensures that, for the duration of the session, nobody but the current user knows the password.
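
The borrow-and-rotate cycle can be sketched in a few lines of Python, as below. Storage, encryption and approval workflows are omitted, and the account names and lease durations are illustrative assumptions.

```python
# Minimal sketch of an enterprise password vault's borrow/rotate cycle.
# Storage, crypto and approval workflows are omitted; names are assumptions.
import secrets
import time

class Vault:
    def __init__(self):
        self._passwords = {"db-admin": secrets.token_urlsafe(24)}
        self._leases = {}  # account -> (borrower, expiry timestamp)

    def borrow(self, account: str, user: str, ttl: int = 900) -> str:
        """Lend the credential for a time-bound session and log the access."""
        self._leases[account] = (user, time.time() + ttl)
        print(f"audit: {user} borrowed {account} for {ttl}s")
        return self._passwords[account]

    def check_in(self, account: str) -> None:
        """Rotate on return so the borrowed password is immediately stale."""
        self._passwords[account] = secrets.token_urlsafe(24)
        self._leases.pop(account, None)
        print(f"audit: {account} returned and rotated")

vault = Vault()
secret = vault.borrow("db-admin", "alice")  # use for the session...
vault.check_in("db-admin")                  # ...then the old value is useless
```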

For highly regulated systems with extremely sensitive data, such as those found in healthcare and finance, security can go one step further and automatically proxy the privileged session so that even the admin doesn’t know the username and password being used. These sessions can also be recorded to provide auditable forensic evidence of the work performed under privilege.

Privileged Identity Management (PIM) is another approach, based on the concept of zero standing privileges, that can work in conjunction with traditional PAM. Privileged access is granted “just-in-time” for temporary use and removed after use. In PIM, each session is provisioned, subject to approval, based on the requester’s justification for needing access. Sessions are time-bound and an audit history is recorded. This makes the most sensitive systems extremely difficult to compromise.
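
Here is a minimal Python sketch of that just-in-time pattern: no standing grants, an approval gate with a recorded justification, and elevation that expires on its own. The approval mechanism and durations are assumptions for illustration.

```python
# Minimal sketch of PIM-style just-in-time privilege: no standing access,
# each grant needs a justification and approval, and it expires on its own.
# The approval rule and durations are illustrative assumptions.
import time

class JitPrivilege:
    def __init__(self):
        self._grants = {}  # (user, role) -> expiry timestamp

    def request(self, user: str, role: str, justification: str,
                approved: bool, minutes: int = 60) -> bool:
        """Grant a time-bound elevation only if an approver signed off."""
        if not approved:
            print(f"audit: denied {role} for {user}: {justification}")
            return False
        self._grants[(user, role)] = time.time() + minutes * 60
        print(f"audit: granted {role} to {user} for {minutes}m: {justification}")
        return True

    def has_privilege(self, user: str, role: str) -> bool:
        # Expired grants simply stop working; nothing to remember to revoke.
        return time.time() < self._grants.get((user, role), 0.0)

pim = JitPrivilege()
pim.request("alice", "prod-db-admin", "ticket INC-1234", approved=True)
print(pim.has_privilege("alice", "prod-db-admin"))  # True until expiry
```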

Adoption and Use are Key to IAM

IAM best practices and new technologies don’t work if they are not fully implemented. To understand the current prevalence, adoption and impact of IAM practices, Converge Technology Solutions sponsored the Ponemon Institute to study organizations’ approach to IAM and how they are working to mitigate security threats targeting their user credentials, sensitive information, and confidential data.

Ponemon Institute surveyed 571 IT and IT security practitioners in the US who are involved in their organizations’ IAM program. The top three areas of respondents’ involvement are evaluating IAM effectiveness (51 percent of respondents), mitigating IAM security risk (46 percent of respondents) and selecting IAM vendors and contractors (46 percent of respondents).

The key takeaway from this research is how vulnerable organizations’ identities are to attacks. While organizations seem to know they need to improve the security posture of their IAM practices, they are not moving at the necessary speed to thwart the attackers. According to the research, organizations are slow to adopt processes and technologies that could strengthen the security posture of IAM programs.

Only 20 percent of respondents say their organizations have fully adopted zero trust. Only 24 percent of respondents say their organizations have fully implemented passwordless authentication, which uses more secure alternatives like possession factors, one-time passwords, registered smartphones, or biometrics.

Following are research findings that reveal the state of IAM insecurity.

Less than half of organizations represented in this research are prepared to protect identities and prevent unauthorized access. Only 45 percent of respondents say their organizations are prepared to protect identities when attackers have AI capabilities. Less than half (49 percent) use risk-based authentication to prevent unauthorized access and only 37 percent of respondents say their organizations use AI security technology to continuously monitor authenticated user sessions to prevent unauthorized access.

Organizations lack the ability to respond quickly to next-generation attacks. Forty-six percent of respondents say that if a threat actor used a stolen credential to log in to their organization, it could take 1 day to 1 week (18 percent) or more than 1 week (28 percent) to detect the incident. Eight percent of respondents say they would not be able to detect the incident.

IAM security is not a priority. As evidence, only 45 percent of respondents say their organizations have an established or formal IAM program, steering committee and/or internally defined strategy and only 46 percent of respondents say IAM programs compared to other security initiatives are a high or very high priority.

IAM platforms are not viewed by many organizations as effective. Only 46 percent of respondents say their IAM platform(s) are very or highly effective for user access provisioning, lifecycle and termination. Only 44 percent of respondents rate their IAM platform(s) for authentication and authorization as very or highly effective. Similarly, only 45 percent of organizations that have a dedicated PAM platform say it is very or highly effective.

More organizations need to implement MFA as part of their IAM strategy. Thirty percent of respondents say their organizations have not implemented MFA. Only 25 percent of respondents say their organizations have applied MFA to both customer and workforce accounts.

Few organizations have fully integrated IAM with other technologies such as SIEM. Only 30 percent of respondents say IAM is fully integrated with other technologies and another 30 percent of respondents say IAM is not integrated with other technologies. Only 20 percent of respondents say practices to prevent unauthorized usage are integrated with the IAM identity governance platform.

As evidence that IAM security is not a priority for many organizations, many practices to prevent unauthorized usage are ad hoc and not integrated with the IAM platform. To perform periodic access review/attestation/certification of user accounts and entitlements, 31 percent of respondents say they use custom in-house-built workflows, 23 percent say the process is manual using spreadsheets, and 20 percent of respondents say it is executed through an IAM identity governance platform. Twenty-six percent of respondents say no access review/attestation/certification is performed.

Organizations favor investing in improving end-user experience. Improved user experience (48 percent of respondents) is the number one driver for IAM investment. Forty percent of respondents say constant changes to the organization due to corporate reorganizations, downsizing and financial distress are a reason to invest.

To read the rest of the findings in this report, visit the Converge Technology Solutions website. 

Suicide after a scam; one family’s story

Bob Sullivan

I’ve been saying for a while that the two halves of my journalism career — consumer protection and cybersecurity — are merging together. I will tell anyone who listens that poor customer service is our greatest cybersecurity vulnerability. Consumers often trust criminals more than the institutions designed to protect them. And when you listen to some customer service interactions, that’s not as surprising as it sounds.

So this month, I’m sharing a story we covered on The Perfect Scam podcast, which I host for AARP.  It makes clear that the consequences of unpatched vulnerabilities, including inadequate customer service, can be deadly. On the other hand, I want those of you who work to protect people to hear this story as a reminder that what you do is incredibly important and valuable and….sometimes a matter of life or death.  Keep that in mind on the hard days.

This month, we interviewed an adult daughter and son whose father took his own life after becoming embroiled in a crypto/romance scam.

“When he had to accept that this is a world where this happened, he was no longer able to be in this world,” his daughter told me.

As I interviewed Dennis’ children, I really connected with him. He was a single dad; he encouraged his son to join multiple rock bands (even when they were terrible, I was told). Dennis even spent years photographing his son making music.  And today, he’s a successful musician. Dennis spent summers at the lake in Minnesota with his daughter and her kids.

He was a great guy who wanted one more bit of love, affection, excitement, and purpose in his life. He thought he’d found that with Jessica, and with crypto. He wasn’t looking to get rich. He was looking to leave something for his family.

Instead, every dollar he had saved to that point in his life was stolen. And when the very last dollar was gone, the criminals talked him through opening up an LLC so he could borrow more money, which they stole.  Even after the kids lovingly stepped in, and dad was persuaded he’d been defrauded, he still believed in Jessica. He figured she was a victim, too.  And whoever Jessica was, Dennis was probably right. As we’ve chronicled before, many scam callers are victims of human trafficking, forced to steal money online against their will.

And when Dennis just couldn’t wrap his head around everything that had happened, he ended his life.

“I heard a story of someone in a book, and the way it was talked about in that story was knowing that he took his own life, but also feeling like he was killed by a crime,” his daughter told me.

(This story and accompanying podcast include extensive discussion of suicide. If you or someone you love is in crisis, call 9-8-8, a free hotline staffed by professionals who can provide immediate help.)

Readers of my newsletter know this is not the first time I’ve talked about the scam/suicide connection. Last year we told the story of Kathy Book, who survived a suicide attempt and bravely talked with me about her experience. The stakes for scams have risen so much in the past couple of years, even since I started working on The Perfect Scam. I’m hardly the only one who thinks so. 

Also, please don’t be fooled into thinking this malady impacts only the elderly. Everyone can be a victim under the right circumstances. The pain, fear and shame of being a victim have driven many to contemplate self-harm, often with tragic results. Teenagers.  Women.  Anyone. 

Look, nobody wants to have this conversation. I will be eternally grateful to Laura and Matt for speaking to me about their father — all because they want to help others. I can’t imagine how difficult that was for them, and what a gift it is to the rest of us. I can assure you I don’t want to talk with any more family members about their loved ones’ pain, suffering, and suicide. And I know I sound like a broken record when I talk about scams being more sophisticated, more prevalent, and more dangerous. But please, talk with one person you love about the dangers posed by crypto, and online dating, and online job hunting, and even online games. Tell them the Internet is full of liars who know how to stir our emotions and make us click on something we’d “never” click on, or do something we’d “never” do. It’s ok to repeat yourself.

But most of all, be a person who can be talked to under any circumstances. Cultivate a non-judgmental, open spirit so they know you can be trusted. Tell them that no matter how bad things might suddenly seem — an IRS audit, an arrest warrant, accusations of child pornography — they can always talk with you; there’s always another way.

If you’d like,  listen to this week’s episode, Suicide After a Scam: One Family’s Story.  Especially if you still have that nagging feeling like, “This could never happen to me or anyone I know.”


The dark deepfake version I’m most worried about

Bob Sullivan

Everyone should be concerned about deepfakes and voice cloning; what’s difficult is deciding how concerned to be.  When it comes to the use of AI in crime, I think we have a 5-alarm fire on our hands.

I just came back from a week of talks at the University of Georgia journalism school, and I can tell you students there are very worried about the use of artificial intelligence in the generation of fake news.  I’m less worried about that, perhaps betraying naivete on my part. This is the second presidential election cycle where much has been made about the potential to create videos of candidates saying things they’d never say; so far, there are no high-profile examples of this swaying an electorate. There was a high-profile cloning of then-candidate Joe Biden’s voice during the New Hampshire primary, when an operative paid for robocalls designed to suppress the vote (he said, later, just to make a point).  That fake call was exposed pretty quickly.

We can all imagine a fake video that a candidate’s opponents might want to spread, but my sense is that such a deepfake wouldn’t persuade anyone to change their vote — it would merely reinforce an existing opinion.  I could be wrong; in an election this close, a few votes could make all the difference.  But there are far easier ways to suppress votes — such as long voting lines — and those should get at least as much attention as deepfakes.

I am far more concerned about more mundane-sounding AI attacks, however. Research I’ve done lately confirms what I have long feared — AI will be a boon for scam criminals. Imagine a crime call center staffed by robots armed with generative conversational skills.  AI bot armies really can replace front-line scam call center operators, and can be more effective at finding targets.  They don’t get tired, they have no morals, and perhaps most significantly — they hallucinate.  That means they will change their story randomly (say from, ‘your child’s been kidnapped’ to ‘your child is on the operating table’), and when they hit on a story that works, they’ll stick with it. This might allow a kind of dastardly evolution at a pace we’ve not seen before.  And while voters might see through attempts to put words in the mouths of presidential candidates, a hysterical parent probably won’t detect a realistic-sounding imitation of their kid after a car accident.

As with all new tech, we risk blaming too much fraud and scandal on the gadgets, without acknowledging these very same schemes have always been around.  Tech is a tool, and tools can always be used for both good and bad.  The idea of scaling up crime should concern everyone, however.  Think about spam. It’s always been a numbers game. It’s always been an economic battle.  There’s no way to end spam.  But if you make spam so costly for criminals that the return on investment drops dramatically – if spammers make $1 from every 100,000 emails, rather than $1 from every 1,000 emails — criminals move on.

That’s why any tech which lets criminals scale up quickly is so concerning. Criminals spend their time looking for their version of a hot lead — a victim who has been sent to a heightened emotional state so they can be manipulated. Front-line crime call center employees act as filtering mechanisms. Once they get a victim on the line and show that person can be manipulated into performing a small task, like buying a $50 gift card or sharing personal information, these “leads” are passed on to high-level closers who escalate the crime. This process can take months, or longer. Romance scam criminals occasionally groom victims for years. Now, imagine AI bots performing these front-line tasks. They wouldn’t have to be perfect. They’d just have to succeed at a higher rate than today’s callers, who are often victims of human trafficking working against their will.

This is the dark deepfake future that I’m most worried about.  Tech companies must lead on this issue. Those who make AI tools must game out their dark uses before they are released to the world.  There’s just too much at stake.

The 2024 Study on Cyber Insecurity in Healthcare: The Cost and Impact on Patient Safety and Care

An effective cybersecurity approach centered around stopping human-targeted attacks is crucial for healthcare institutions, not just to protect confidential patient data but also to ensure the highest quality of medical care.

This third annual report was conducted to determine if the healthcare industry is making progress in reducing human-centric cybersecurity risks and disruptions to patient care. With sponsorship from Proofpoint, Ponemon Institute surveyed 648 IT and IT security practitioners in healthcare organizations who are responsible for participating in such cybersecurity strategies as setting IT cybersecurity priorities, managing budgets and selecting vendors and contractors.

According to the research, 92 percent of organizations surveyed experienced at least one cyberattack in the past 12 months, an increase from 88 percent in 2023. For organizations in that group, the average number of cyberattacks was 40. We asked respondents to estimate the single most expensive cyberattack experienced in the past 12 months from a range of less than $10,000 to more than $25 million. Based on the responses, the average total cost for the most expensive cyberattack was $4,740,000, a 5 percent decrease over last year. This included all direct cash outlays, direct labor expenditures, indirect labor costs, overhead costs and lost business opportunities.

At an average cost of $1.47 million, disruption to normal healthcare operations because of system availability problems continues to be the most expensive consequence from the cyberattack, a 13 percent increase from an average $1.3 million in 2023. Users’ idle time and lost productivity because of downtime or system performance delays decreased from $1.1 million in 2023 to $995,484 in 2024. The cost of the time required to ensure the impact on patient care is corrected also decreased from an average of $1 million average in 2023 to $853,272 in 2024.

Cloud/account compromise. The most frequent attacks in healthcare are against the cloud, making it the top cybersecurity threat for the third consecutive year. Sixty-three percent of respondents say their organizations are vulnerable or highly vulnerable to a cloud/account compromise. Sixty-nine percent say their organizations have experienced a cloud/account compromise. In the past two years, organizations in this group experienced an average of 20 cloud compromises.

Supply chain attacks. Organizations are very or highly vulnerable to a supply chain attack, according to 60 percent of respondents. Sixty-eight percent say their organizations experienced an average of four attacks against their supply chains in the past two years.

Ransomware. Ransomware remains an ever-present threat to healthcare organizations, even though concerns about it have declined. Fifty-four percent of respondents believe their organizations are vulnerable or highly vulnerable to a ransomware attack, a decline from 64 percent. In the past two years, organizations that had ransomware attacks (59 percent of respondents) experienced an average of four such attacks. While fewer organizations paid the ransom (36 percent in 2024 vs. 40 percent in 2023), the ransom paid spiked 10 percent to an average of $1,099,200 compared to $995,450 in the previous year.

Business email compromise (BEC)/spoofing/impersonation. Concerns about BEC/spoofing/impersonation attacks have decreased. Fifty-two percent of respondents say their organizations are vulnerable or highly vulnerable to a BEC/spoofing/impersonation incident, a decrease from 61 percent in 2023. Fifty-seven percent of respondents say their organizations experienced an average of four attacks in the past two years.

Cyberattacks can cause poor patient outcomes due to delays in procedures and tests.
As in the previous report, an important part of the research is the connection between cyberattacks and patient safety. Among the organizations that experienced the four types of cyberattacks in the study, an average of 69 percent report disruption to patient care.

Specifically, an average of 56 percent report poor patient outcomes due to delays in procedures and tests, an average of 53 percent saw an increase in medical procedure complications and an average of 28 percent say patient mortality rates increased, a 21 percent spike over last year.

The following are additional trends in how cyberattacks have affected patient safety and patient care delivery. 

  • Supply chain attacks are most likely to affect patient care. Sixty-eight percent of respondents say their organizations had an attack against their supply chains. Of this group, 82 percent say it disrupted patient care, an increase from 77 percent in 2023. Patients were primarily impacted by an increase in complications from medical procedures (51 percent) and delays in procedures and tests that resulted in poor outcomes (48 percent).
  • A BEC/spoofing/impersonation attack causes delays in procedures and tests. Fifty-seven percent of respondents say their organizations experienced a BEC/spoofing/impersonation incident. Of these respondents, 65 percent say a BEC/spoofing/impersonation attack disrupted patient care. Sixty-nine percent say the consequences caused delays in procedures and tests that have resulted in poor outcomes and 57 percent say it increased complications from medical procedures.
  • Ransomware attacks cause delays in patient care. Fifty-nine percent of respondents say their organizations experienced a ransomware attack. Of this group, 70 percent say ransomware attacks had a negative impact on patient care. Sixty-one percent say patient care was affected by delays in procedures and tests that resulted in poor outcomes and 58 percent say it resulted in longer lengths of stay, which affects organizations’ ability to care for patients.
  • Cloud/account compromises are least likely to disrupt patient care. Sixty-nine percent of respondents say their organizations experienced a cloud/account compromise. In this year’s study, 57 percent say the cloud/account compromises resulted in disruption to patient care operations, an increase from 49 percent in 2023. Fifty-six percent of respondents say cloud/account compromises increased complications from medical procedures and 52 percent say they resulted in a longer length of stay.
  • Data loss or exfiltration has had an impact on patient mortality. Ninety-two percent of organizations had at least two data loss incidents involving sensitive and confidential healthcare data in the past two years. On average, organizations experienced 20 such incidents in the past two years. Fifty-one percent say the data loss or exfiltration resulted in a disruption in patient care. Of these respondents, 50 percent say it increased the mortality rate and 37 percent say it caused delays in procedures and tests that resulted in poor outcomes. 

Other key trends 

Employee negligence in not following policies caused data loss or exfiltration. The top root cause of data loss or exfiltration, according to 31 percent of respondents, was employee negligence. Such policies include employees’ responsibility to safeguard sensitive and confidential information and the practices they need to follow. As shown in the research, more than half of respondents (52 percent) say their organizations are very concerned about employee negligence or error.

Cloud-based user accounts/collaboration tools that enable productivity are most often attacked. Sixty-nine percent of respondents say their organizations experienced a successful cloud/account compromise and experienced an average of 20 cloud/account compromises over the past two years. The tools most often attacked are text messaging (61 percent of respondents), email (59 percent of respondents) and Zoom/Skype/Videoconferencing (56 percent of respondents).

The lack of clear leadership is a growing problem and a threat to healthcare’s cybersecurity posture. While 55 percent of respondents say their organizations’ lack of in-house expertise is a primary deterrent to achieving a strong cybersecurity posture, the lack of clear leadership as a challenge increased significantly since 2023, from 14 percent to 49 percent of respondents. Not having enough budget decreased from 47 percent to 40 percent of respondents in 2024. Survey respondents note that their annual budgets for IT increased 12 percent from last year ($66 million in 2024 vs. $58.26 million in 2023), with 19 percent of that budget dedicated to information security. Based on the findings, the healthcare industry seems to recognize that cyber safety is patient safety.

Organizations continue to rely on security awareness training programs to reduce risks caused by employees, but are they effective? Negligent employees pose a significant risk to healthcare organizations. More organizations (71 percent in 2024 vs. 65 percent of respondents in 2023) are taking steps to address the risk of employees’ lack of awareness about cybersecurity threats. Fifty-nine percent say they conduct regular training and awareness programs. Fifty-three percent say their organizations monitor the actions of employees.

To reduce phishing and other email-based attacks, most organizations are using anti-virus/anti-malware (53 percent of respondents). This is followed by patch & vulnerability management (52 percent of respondents) and multi-factor authentication (49 percent of respondents).

Concerns about insecure mobile apps (eHealth) increased. Organizations are less worried about employee-owned mobile devices or BYOD (a decrease from 61 percent in 2023 to 53 percent of respondents in 2024), BEC/spoofing/impersonation (a decrease from 62 percent in 2023 to 46 percent in 2024) and cloud/account compromise (a decrease from 63 percent in 2023 to 55 percent in 2024). However, concerns about insecure mobile apps (eHealth) increased from 51 percent to 59 percent of respondents in 2024.

AI and machine learning in healthcare

For the first time, we include in the research the impact AI is having on security and patient care. Fifty-four percent of respondents say their organizations have embedded AI in cybersecurity (28 percent) or in both cybersecurity and patient care (26 percent). Fifty-seven percent of these respondents say AI is very effective in improving their organizations’ cybersecurity posture.

AI can increase the productivity of IT security personnel and reduce the time and cost of patient care and administrators’ work. Fifty-five percent of respondents agree or strongly agree that AI-based security technologies will increase the productivity of their organization’s IT security personnel. Forty-eight percent of respondents agree or strongly agree that AI simplifies patient care and administrators’ work by performing tasks that are typically done by humans but in less time and cost.

Thirty-six percent of respondents use AI and machine learning to understand human behavior. Of these respondents, 56 percent of respondents say understanding human behavior to protect emails is very important, recognizing the prevalence of socially-engineered attacks. 

While AI offers benefits, there are issues that may deter widespread acceptance. Sixty-three percent of respondents say it is difficult or very difficult to safeguard confidential and sensitive data used in their organizations’ AI.

Other challenges to adopting AI are the lack of mature and/or stable AI technologies (34 percent of respondents), interoperability issues among AI technologies (32 percent of respondents) and errors and inaccuracies in data inputs ingested by AI technology (32 percent of respondents).

Click here to read the entire report at Proofpoint.com

The State of Cybersecurity Risk Management Strategies

What are the cyber-risks — and opportunities — in the age of AI? The purpose of this research is to determine how organizations are preparing for an uncertain future because of the ever-changing cybersecurity risks threatening their organizations. Ponemon Institute surveyed 632 IT and IT security professionals in the United States who are involved in their organizations’ cybersecurity risk management strategies and programs. The research was sponsored by Balbix.

Frequently reviewed and updated cybersecurity risk strategies and programs are the foundation of a strong cybersecurity posture. However, many organizations’ cybersecurity risk strategies and programs are outdated, jeopardizing their ability to prevent and respond to security incidents and data breaches.

When asked how far in the future their organizations plan their cybersecurity risk strategies and programs, 65 percent of respondents say it is for two years (31 percent) or for more than two years (34 percent). Only 23 percent of respondents say the strategy is for only one year because of changes in technologies and the threat landscape, and 12 percent of respondents say it is for less than one year.

The following research findings reveal the steps that should be included in cybersecurity risk strategies and programs.

Identify unpatched vulnerabilities and patch them in a timely manner. According to a previous study sponsored by Balbix, only 10 percent of respondents were very confident that their organizations have a vulnerability and risk management program that helps them avoid a data breach. Only 15 percent of respondents rated their organizations’ ability to identify vulnerabilities and patch them in a timely manner as highly effective.

In this year’s study, 54 percent of respondents say unpatched vulnerabilities are the greatest concern to their organizations. This is followed by outdated software (51 percent of respondents) and user error (51 percent of respondents).

Frequent scanning to identify vulnerabilities should be conducted. In the previous Balbix study, only 31 percent of respondents said their organizations scan daily (12 percent) or weekly (19 percent). In this year’s research, scanning has not increased in frequency. Only 38 percent of respondents say their organizations scan for vulnerabilities more than once per day (25 percent) or daily (13 percent).

The prioritization of vulnerabilities should not be limited to a vendor’s vulnerability scoring. Fifty-one percent of respondents say their organizations’ vendor vulnerability scoring is used to prioritize vulnerabilities. Only 33 percent of respondents say their organizations use a risk-based vulnerability management solution and only 25 percent of respondents say it is based upon a risk scoring system within their vulnerability management tools.

Take steps to reduce risks in the most common attack vectors. The risks of greatest concern are software vulnerabilities (45 percent of respondents), ransomware (37 percent of respondents), poor or missing encryption (36 percent of respondents) and phishing (36 percent of respondents). An attack vector is a path or method that a hacker uses to gain unauthorized access to a network or computer to exploit system flaws.

Inform the C-suite and board of directors of the threats against the organization to obtain the necessary funding for cybersecurity programs and strategies. In the previous Balbix study, the research revealed that the C-suite and IT security functions operate in a communications silo. Only 29 percent of respondents said their organizations’ executives and senior management clearly communicate their business risk management priorities to the IT security leadership, and only 21 percent of respondents said their communications with the C-suite are highly effective. Respondents who said they were very effective attributed it to presenting information in a way that was understandable and to keeping their leaders up to date on cyber risks rather than waiting until the organization had a data breach or security incident.

In this year’s study, 50 percent of respondents rate their organizations’ effectiveness in communicating the state of their cybersecurity as very low or low. The primary reasons are that negative facts are filtered before being disclosed to senior executives and the CEO (56 percent of respondents), communications are limited to only one department or line of business (silos) (44 percent of respondents) and the information can be ambiguous, which may lead to poor decisions (41 percent of respondents).

The IT and IT security functions should provide regular briefings on the state of their organizations’ cybersecurity risks. In addition to making their presentations understandable and unambiguous, briefings should not be limited to only when a serious security risk is revealed or if senior management initiates the request.

To address the challenge of meeting SLAs, organizations need to eliminate the silos that inhibit communication among project teams. Forty-nine percent of respondents say their organizations track SLAs to evaluate their cybersecurity posture. Of these respondents, only 44 percent say their organization is meeting most or all SLAs to support its cybersecurity posture.

If AI is adopted as part of a cybersecurity strategy, the risks created by AI need to be managed. Fifty-four percent of respondents say their organizations have fully adopted (26 percent) or partially adopted (28 percent) AI. Risks include poor or misconfigured systems due to over-reliance on AI for cyber risk management, software vulnerabilities due to AI-generated code, data security risks caused by weak or no encryption, incorrect predictions due to data poisoning and inadvertent infringement of privacy rights due to the leakage of sensitive information.

Steps to reduce cybersecurity risks include providing regular user training and awareness about the security implications of AI, developing data security programs and practices for AI, identifying and mitigating bias in AI models for safe and responsible use, implementing a tool for software vulnerability management, conducting regular audits and tests to identify vulnerabilities in AI models and infrastructure, deploying risk quantification of AI models and their infrastructure and considering tools to validate AI prompts and their responses.

To read more key findings from this research, please visit the Balbix website.

Appeals court rules TikTok could be responsible for algorithm that recommended fatal strangulation game to child

Bob Sullivan

TikTok cannot use federal law “to permit casual indifference to the death of a ten-year-old girl,” a federal judge wrote this week.  And with that, an appeals court has opened a Pandora’s box that might clear the way for Big Tech accountability.

Silicon Valley companies have become rich and powerful in part because federal law has shielded them from liability for many of the terrible things their tools enable and encourage — and, it follows, from the expense of stopping such things. Smart phones have poisoned our children’s brains and turned them into The Anxious Generation; social media and cryptocurrency have enabled a generation of scam criminals to rob billions from our most vulnerable people; advertising algorithms tap into our subconscious in an attempt to destroy our very agency as human beings. To date, tech firms have made only passing attempts to stop such terrible things, emboldened by federal law which has so far shielded them from liability …  even when they “recommend” that kids do things which lead to death.

That’s what happened to 10-year-old Tawainna Anderson, who was served a curated “blackout challenge” video by TikTok on a personalized “For You” page back in 2021. She was among a series of children who took up that challenge and experimented with self-asphyxiation — and died. When Anderson’s parents tried to sue TikTok, a lower court threw out the case two years ago, saying tech companies enjoy broad immunity because of the 1996 Communications Decency Act and its Section 230.

You’ve probably heard of that. Section 230 has been used as a get-out-of-jail-free card by Big Tech for decades; it’s also been used as an endless source of bar fights among legal scholars.

But now, with very colorful language, a federal appeals court has revived the Anderson family lawsuit and thrown Section 230 protection into doubt.  Third Circuit Judge Paul Matey’s concurring opinion seethes at the idea that tech companies aren’t required to stop awful things from happening on their platforms, even when it’s obvious that they could.  He also takes a shot at those who seem to care more about the scholarly debate than about the clear and present danger facilitated by tech tools. It’s worth reading this part of the ruling in full.

TikTok reads Section 230 of the Communications Decency Act… to permit casual indifference to the death of a ten-year-old girl. It is a position that has become popular among a host of purveyors of pornography, self-mutilation, and exploitation, one that smuggles constitutional conceptions of a “free trade in ideas” into a digital “cauldron of illicit loves” that leap and boil with no oversight, no accountability, no remedy. And a view that has found support in a surprising number of judicial opinions dating from the early days of dialup to the modern era of algorithms, advertising, and apps. But it is not found in the words Congress wrote in Section 230, in the context Congress acted, in the history of common carriage regulations, or in the centuries of tradition informing the limited immunity from liability enjoyed by publishers and distributors of “content.” As best understood, the ordinary meaning of Section 230 provides TikTok immunity from suit for hosting videos created and uploaded by third parties. But it does not shield more, and Anderson’s estate may seek relief for TikTok’s knowing distribution and targeted recommendation of videos it knew could be harmful.

Later on, the opinion says, “The company may decide to curate the content it serves up to children to emphasize the lowest virtues, the basest tastes. But it cannot claim immunity that Congress did not provide.”

The ruling doesn’t tear down all Big Tech immunity. It makes a distinction between TikTok’s algorithm specifically recommending a blackout video to a child after the firm knew, or should have known, that it was dangerous… as opposed to a child seeking out such a video “manually” through a self-directed search.  That kind of distinction has been lost through years of reading Section 230 in the way most generous to Big Tech.  I think we all know where that has gotten us.

In the simplest of terms, tech companies shouldn’t be held liable for everything their users do, any more than the phone company can be held liable for everything callers say on telephone lines — or, as the popular legal analogy goes, a newsstand can’t be held liable for the content of the magazines it sells.

After all, that newsstand has no editorial control over those magazines.  Back in the 1990s, Section 230 added just a touch of nuance to this concept, which was needed because tech companies occasionally dip into their users’ content and restrict it; tech firms remove illegal drug sales, or child porn, for example.  While that might seem akin to the exercise of editorial control, we want tech companies to do this, so Congress declared that such occasional meddling does not turn a tech firm from a newsstand into a publisher. A firm assumes no additional liability because of such moderation — it enjoys immunity.

This immunity has been used as permission for all kinds of undesirable activity. Using another mildly strained metaphor, a shopping mall would never be allowed to operate if it ignored massive amounts of crime going on in its hallways… let alone if it supplied a series of tools that enable elder fraud, impersonation or money laundering. But tech companies do that all the time. In fact, we know from whistleblowers like Frances Haugen that tech firms are fully aware their tools help connect anxious kids with videos that glorify anorexia.  They lead lonely and grief-stricken people right to criminals who are expert at stealing hundreds of thousands of dollars from them. And they allow ongoing crimes like identity theft to occur without so much as answering the phone for desperate victims, like American service members who must watch as their official uniform portraits are used for romance scams.

Will tech companies have to change their ways now? Will they have to invest real money into customer service to stop such crimes, and to stop their algorithms from recommending terrible things?  You’ll hear that such an investment is an overwhelming demand. Can you imagine if a large social media firm were forced to hire enough customer service agents to deal with fraud in a timely manner? It might put the company out of business.  In my opinion, that means it never had a legitimate business model in the first place.

This week’s ruling draws an appropriate distinction between tech firms that passively host undesirable content and firms that actively promote such content via algorithm. In other words, algorithmic recommendations are akin to editorial control, and Big Tech must answer for what its algorithms do.  You have to ask: why wouldn’t these companies welcome that kind of responsibility?

The Section 230 debate will rage on.  Since both political parties have railed against Big Tech, and there is an appetite for change, it does seem like Congress will get involved. Good. Section 230 is desperate for an update.  Just watch carefully to make sure Big Tech doesn’t write its own rules for regulating the next era of the digital age, because it didn’t do so well with the current era.

If you want to read more, I’d recommend Matt Stoller’s Substack post on the ruling. 

 

2024 Global PKI, IoT and Post Quantum Cryptography Study

Public Key Infrastructure (PKI) is considered essential to keeping people, systems and things securely connected. According to this research, in their efforts to achieve PKI maturity, organizations need to address the challenges of establishing clear ownership of the PKI strategy and of having sufficient skills.

The 2024 Global PKI, IoT and Post Quantum Cryptography research is part of a larger study — sponsored by Entrust — published in May involving 4,052 respondents in 9 countries. In this report, Ponemon Institute presents the findings based on a survey of 2,176 IT and IT security practitioners who are involved in their organizations’ enterprise PKI in the following 9 countries: United States (409 respondents), United Kingdom (289 respondents), Canada (245 respondents), Germany (309 respondents), Saudi Arabia (162 respondents), United Arab Emirates (UAE) (203 respondents), Australia/NZ (156 respondents), Japan (168 respondents) and Singapore (235 respondents).

“With the rise of costly breaches and AI-generated deepfakes, synthetic identity fraud, ransomware gangs, and cyber warfare, the threat landscape is intensifying at an alarming rate,” said Samantha Mabey, Director Solutions Marketing at Entrust. “This means that implementing a Zero Trust security practice is an urgent business imperative – and the security of organizations’ and their customers’ data, networks, and identities depends on it.”

The following is a summary of the most important takeaways from the research.

The use of software to orchestrate PKI increased from 42 percent of respondents to 50 percent. However, 59 percent of respondents say orchestration is very or extremely complex, an increase from 43 percent of respondents.

Responsibility for the PKI strategy is being assigned to IT security and IT leaders. As PKI becomes increasingly critical to an organization’s security posture, the CISO and CIO are most responsible for their organization’s PKI strategy. The percentage of respondents who say an IT manager is most responsible for the PKI strategy has declined from 26 percent to 14 percent.

Fifty-two percent of respondents say they have PKI specialists on staff who are involved in their organizations’ enterprise PKI. Of the 48 percent of respondents who say their organizations do not have PKI specialists, 45 percent rely on consultants and 55 percent rely on service providers.

A certificate authority (CA) provides assurance about the parties identified in a PKI certificate. Each CA maintains its own root CA for use only by the CA. The most popular methods for deploying enterprise PKI continue to be an internal corporate certificate authority (CA) and an externally hosted private CA (managed service), according to 60 percent and 47 percent of respondents, respectively.
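As a rough illustration of what standing up an internal root CA involves, here is a minimal sketch using Python’s cryptography library. The organization name, curve and validity period are assumptions, and in practice the root key would be generated and kept in a hardware security module, not written to disk.

```python
# Minimal sketch: create a self-signed internal root CA certificate.
# Names, curve and lifetime are illustrative; protect the key in an HSM.
import datetime

from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.x509.oid import NameOID

key = ec.generate_private_key(ec.SECP384R1())
name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "Example Corp Internal Root CA")])
now = datetime.datetime.now(datetime.timezone.utc)

cert = (
    x509.CertificateBuilder()
    .subject_name(name)
    .issuer_name(name)  # self-signed: subject and issuer are the same
    .public_key(key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(now)
    .not_valid_after(now + datetime.timedelta(days=3650))
    .add_extension(x509.BasicConstraints(ca=True, path_length=None), critical=True)
    .sign(key, hashes.SHA256())
)

with open("root_ca.pem", "wb") as f:
    f.write(cert.public_bytes(serialization.Encoding.PEM))
```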

No clear ownership, insufficient skills and fragmented or inconsistent requirements are the top three challenges to enabling applications to use PKI. No clear ownership continues to be the top challenge to deploying and managing PKI, according to 51 percent of respondents, followed by insufficient skills (43 percent of respondents) and requirements that are too fragmented or inconsistent (43 percent of respondents).

Challenges that are declining significantly include the lack of resources (from 64 percent of respondents to 41 percent of respondents) and lack of visibility of the applications that will depend on PKI (from 48 percent to 33 percent of respondents).

As organizations strive to achieve greater PKI maturity, they anticipate the most change and uncertainty in PKI technologies and with vendors. Forty-three percent of respondents expect the most change and uncertainty in PKI technologies, and 41 percent expect it in vendors’ products and services.

Cloud-based services continue to be the number one trend driving the deployment of applications using PKI (46 percent of respondents). However, the percentage of respondents who say IoT is the most important trend driving deployment has declined from 47 percent to 39 percent, while BYOD and internal mobile device management has increased significantly, from 24 percent to 34 percent of respondents.

More organizations are deploying certificate revocation techniques. In addition to verifying the CA’s signature on a certificate, application software must also be sure that the certificate is still trustworthy at the time of use. Certificates that are no longer trustworthy must be revoked by the CA. The percentage of organizations that do not deploy a certificate revocation technique has declined significantly, from 32 percent to 13 percent.

The certificate revocation technique most often deployed continues to be Online Certificate Status Protocol (OCSP), according to 45 percent of respondents. For the first time, the manual certificate revocation list is the second technique most often deployed.
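For readers curious what an OCSP check actually looks like, here is a minimal sketch using Python’s cryptography and requests libraries. It assumes cert.pem and issuer.pem exist locally and that the certificate carries an OCSP responder URL in its Authority Information Access extension.

```python
# Minimal sketch: ask a CA's OCSP responder whether a certificate is revoked.
import requests
from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.x509 import ocsp
from cryptography.x509.oid import AuthorityInformationAccessOID, ExtensionOID

with open("cert.pem", "rb") as f:
    cert = x509.load_pem_x509_certificate(f.read())
with open("issuer.pem", "rb") as f:
    issuer = x509.load_pem_x509_certificate(f.read())

# Pull the responder URL out of the Authority Information Access extension
aia = cert.extensions.get_extension_for_oid(ExtensionOID.AUTHORITY_INFORMATION_ACCESS).value
url = next(d.access_location.value for d in aia
           if d.access_method == AuthorityInformationAccessOID.OCSP)

# Build the request and POST it to the responder
req = ocsp.OCSPRequestBuilder().add_certificate(cert, issuer, hashes.SHA256()).build()
resp = requests.post(url, data=req.public_bytes(serialization.Encoding.DER),
                     headers={"Content-Type": "application/ocsp-request"})

print(ocsp.load_der_ocsp_response(resp.content).certificate_status)
# -> OCSPCertStatus.GOOD, OCSPCertStatus.REVOKED or OCSPCertStatus.UNKNOWN
```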

Smart cards are used by 41 percent of respondents to manage the private keys for their root/policy/issuing CAs. Thirty-one percent of respondents say removable media is used for CA/root keys.

Organizations’ primary root CA strategies have shifted significantly since 2021. A root certificate is a public key certificate that identifies a root certificate authority (CA). Both offline, self-managed and offline, externally hosted root CAs increased to 29 percent of respondents. Online, self-managed decreased from 31 percent to 25 percent of respondents, and online, externally hosted decreased from 21 percent to 17 percent of respondents.

Organizations with internal CAs use an average of 6.5 separate CAs, managing an average of 31,299 internal or externally acquired certificates. An average of 9.5 distinct applications, such as email and network authentication, are managed by an organization’s PKI. Both the number and the nature of the applications that depend on PKI indicate that it is a strategic part of the core IT backbone.

Conflict with other apps is becoming a bigger challenge to enabling applications to use the same PKI. While the number one challenge remains insufficient skills, that figure has decreased from 43 percent to 37 percent of respondents.

Common Criteria Evaluation Assurance Level 4+ and Federal Information Processing Standards (FIPS) 140-2 Level 3 continue to be the most important security certifications when deploying PKI infrastructure and PKI-based applications. Fifty-seven percent of respondents say Common Criteria EAL 4+ is the most important security certification when deploying PKI. Evaluation at this level includes a comprehensive security assessment encompassing design, testing and code review.

Fifty-five percent say FIPS 140-2 Level 3 is an important certification when deploying PKI. In the US, FIPS 140 is the standard called out by NIST in its definition of a “cryptographic module”, which is mandatory for most US federal government applications and a best practice in all PKI implementations.

SSL certificates for public-facing websites and services remain the application most often using PKI credentials (64 percent of respondents), though their share has declined since 2022. Meanwhile, mobile device authentication and private cloud-based applications have increased as applications using PKI credentials (60 percent and 56 percent of respondents, respectively).

Scalability to millions of managed certificates continues to be the most important PKI capability for IoT deployments. While scalability is the most important, support for Elliptic Curve Cryptography (ECC) is the second most important PKI capability. ECC is an alternative to RSA for public key cryptography; it derives its security from the mathematics of elliptic curves and delivers strength comparable to RSA with much smaller keys, an advantage on resource-constrained IoT devices.
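As a small illustration of why ECC suits constrained IoT devices, the sketch below signs and verifies a message with a compact 256-bit P-256 key using Python’s cryptography library; the message itself is illustrative.

```python
# Minimal sketch: ECDSA sign-and-verify with a compact 256-bit key.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

private_key = ec.generate_private_key(ec.SECP256R1())  # small key, strong security
message = b"device firmware manifest"

signature = private_key.sign(message, ec.ECDSA(hashes.SHA256()))

# verify() raises InvalidSignature if the message or signature was altered
private_key.public_key().verify(signature, message, ec.ECDSA(hashes.SHA256()))
print("signature verified")
```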

Today and in the next 12 months, the most important IoT security capabilities are delivering patches and updates to devices and monitoring device behavior. Device authentication will become more important in the next 12 months.

Post Quantum Cryptography

For the first time, this 2024 global study features organizations’ approach to achieving migration to Post Quantum Cryptography (PQC). As defined in the research, quantum computing is a rapidly emerging technology that harnesses the laws of quantum mechanics to solve problems too complex for classical computers.

Sixty-one percent of respondents plan to migrate to PQC within the next five years. The most popular path to PQC is implementing pure PQC (36 percent of respondents), followed by a hybrid approach combining traditional cryptography with PQC (31 percent of respondents) and testing PQC with the organization’s systems and applications (26 percent of respondents).
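The hybrid approach is easy to picture: derive a single session key from both a classical secret and a post-quantum secret, so an attacker must break both algorithms to recover it. In the illustrative Python sketch below, pqc_encapsulate is a hypothetical stand-in for a real ML-KEM implementation.

```python
# Conceptual sketch of hybrid key establishment: classical ECDH plus a
# (stubbed) post-quantum KEM, combined with HKDF into one session key.
import os

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

def pqc_encapsulate() -> bytes:
    """Hypothetical placeholder for a post-quantum KEM (e.g. ML-KEM)."""
    return os.urandom(32)

# Classical contribution: ECDH between two parties
ours = ec.generate_private_key(ec.SECP256R1())
theirs = ec.generate_private_key(ec.SECP256R1())
classical_secret = ours.exchange(ec.ECDH(), theirs.public_key())

# Post-quantum contribution (stubbed here)
pq_secret = pqc_encapsulate()

# One session key derived from both secrets: breaking the session now
# requires breaking both the classical and the post-quantum algorithm
session_key = HKDF(algorithm=hashes.SHA256(), length=32,
                   salt=None, info=b"hybrid session key").derive(classical_secret + pq_secret)
```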

Many organizations are not prepared to achieve the migration because they lack visibility and the right technologies. Only 45 percent of respondents say their organizations have full visibility into their cryptographic estate, and only 50 percent say they have the right technology to support the larger key lengths and computing power required by PQC.

To prepare for migration, organizations need to know what cryptographic assets and algorithms they have and where they reside. It is also important to know data flows and where the organization’s sensitive, long-life data that must remain confidential resides. To achieve full visibility, organizations need a full and clear inventory of all cryptographic assets (keys, certificates, secrets and algorithms across the environment) and of what is being secured.
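A first pass at that inventory can be automated. The sketch below walks a directory of PEM certificates with Python’s cryptography library and records each certificate’s subject, public key algorithm and expiry; the ./certs path is an assumption.

```python
# Minimal sketch: inventory PEM certificates under ./certs, noting which
# public key algorithms are in use (RSA and EC are quantum-vulnerable).
from pathlib import Path

from cryptography import x509
from cryptography.hazmat.primitives.asymmetric import ec, rsa

inventory = []
for path in Path("./certs").rglob("*.pem"):
    cert = x509.load_pem_x509_certificate(path.read_bytes())
    key = cert.public_key()
    if isinstance(key, rsa.RSAPublicKey):
        algo = f"RSA-{key.key_size}"
    elif isinstance(key, ec.EllipticCurvePublicKey):
        algo = f"EC-{key.curve.name}"
    else:
        algo = type(key).__name__
    inventory.append((str(path), cert.subject.rfc4514_string(), algo, cert.not_valid_after))

for entry in inventory:
    print(entry)
```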

Organizations are slow to prepare for the post-quantum threat. The quantum threat, sometimes referred to as “post quantum”, is the inevitability that within the decade a quantum computer will be capable of breaking traditional public key cryptography. Experts surveyed by the Global Risk Institute predict quantum computing will compromise cybersecurity as early as 2027.

Most respondents are not preparing for the post-quantum threat. Twenty-seven percent of respondents say their organizations have not yet considered the impact of the threat, 23 percent are aware of the potential impact but haven’t started to create a strategy and 9 percent are unsure if their organizations are preparing for the post-quantum threat.

To prepare for the post-quantum threat, 44 percent of respondents say their organizations are building a post-quantum cryptography strategy. Although it is recommended as a best practice, only 38 percent of respondents say their organization is taking an inventory of its cryptographic assets and/or ensuring it is crypto agile. Crypto agility is the capacity for an information security system to adopt an alternative to the original encryption method or cryptographic primitive without significant change to system infrastructure.

To protect against the post-quantum threat, organizations need an inventory of their cryptographic assets and a fully crypto agile approach so they can easily transition from one algorithm to another. Improving the ability to maintain a complete inventory of cryptographic assets (43 percent of respondents) and to achieve crypto agility (40 percent of respondents) are the top two concerns.

Crypto agility is critical to the migration to PQC, yet only 28 percent of respondents say their organizations have a fully implemented crypto agile approach.
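Crypto agility is easier to see in code: if every signing call flows through a registry keyed by an algorithm name, a migration becomes a configuration change rather than a rewrite. The registry and algorithm names below are hypothetical.

```python
# Illustrative sketch of crypto agility: swap signing algorithms by
# changing one configuration value, not the calling code.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec, padding, rsa

def sign_rsa(key, data: bytes) -> bytes:
    return key.sign(data, padding.PKCS1v15(), hashes.SHA256())

def sign_ecdsa(key, data: bytes) -> bytes:
    return key.sign(data, ec.ECDSA(hashes.SHA256()))

SIGNERS = {
    "rsa-2048": (lambda: rsa.generate_private_key(public_exponent=65537, key_size=2048), sign_rsa),
    "ecdsa-p256": (lambda: ec.generate_private_key(ec.SECP256R1()), sign_ecdsa),
    # a PQC signer (e.g. ML-DSA) slots in here once an implementation is adopted
}

ALGORITHM = "ecdsa-p256"  # the only line that changes during a migration

keygen, signer = SIGNERS[ALGORITHM]
key = keygen()
signature = signer(key, b"audit log entry")
```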

To read more key findings and the full report, please visit Entrust.com’s website.

Social media hack attack relies on the kindness of friends; I was (almost) a victim

Bob Sullivan

You might think your humble social media account would be of no use to multinational crime gangs, but you’d be wrong. Computer criminals have dozens of ways to turn hijacked Facebook, Instagram, TikTok or Twitter accounts into cash…or worse. You’d be stunned how quickly friend-of-friend attacks escalate into massive crime scenes, so it’s essential you protect your account. Be suspicious of ANY odd or surprising incoming messages. Your best bet in most cases is to do nothing.

I offer this reminder because I’ve just learned about a new(ish) way criminals steal social accounts. It relies only on the kindness of friends. It’s so simple, it almost got me, and it did get a friend of mine. And because there’s a bit of “truth” to the ask, you can see why victims might comply with the single, brief request the criminals make —  and inadvertently enable the hacker to use the change password / password recovery feature to hijack their account.

I’ll describe it.  It’s a bit confusing, but a picture is worth 1,000 words. I recently got this instant message on Instagram from a friend.

And, indeed, I had recently received an email from Facebook that looked like this:

The kicker is this message came from a long-time friend of mine — or at least from his account. So I was inclined to help him. He’d lost access to his account, which I know is essential to his small business. Also, the message came late at night, when I didn’t really have on my cybersecurity journalist hat. So, I opened the message and thought about responding by sending him the code.

I also recalled that Facebook uses friends to assist with account recovery when a criminal hijacks an account. At least, that was true until about a year ago.  An innovative feature called “trusted contacts” used to be available when victims were working to recover access to their accounts. In essence, Facebook/Meta would write to people in this trusted contact list and ask them to vouch for someone who was locked out of their account. Hackers learned how to exploit the feature, however, so Facebook discontinued it sometime in 2023. 

Still, since I had some vague recollection about it, I entertained my friend’s request.   Fortunately, instead of sending him the code I’d received in email from Facebook, I chose to send him a message using another piece of software owned by another company — not Facebook or Instagram or WhatsApp — to ask him what was going on.

And there, a few hours later, he told me he’d been hacked…just because he was trying to help out a friend regain access to his account. And now, like so many account hijacking victims I’ve written about, he’s lost in the hellscape that is trying to restore account access using Meta’s backlogged process.

It’s no secret I think companies like Facebook could do a lot more to protect users, beginning with better customer service to deal with problems when they arise. Recall, it took me half a year to regain access to my dog’s Instagram account after my cell phone was stolen.  In this case, I have an additional beef with Facebook. Look again at the email I received. The subject line really works in the criminal’s favor. It just says “XXXX is your account recovery code.” That’s all you see in an email preview, and it would be easy to just read that off to someone who asked for it.  The *body* of the email indicates that the code was sent in response to “a request to reset your Facebook password.”  But if a recipient were to quickly try to help out a friend in distress, they might not read that far.

By now, you’ve figured out the “game” the hackers are playing. They were trying to get a code that would have allowed them to reset my Facebook account and hijack it.  I was lucky; my friend was not.

What could a criminal do with access to his account, or mine? They could soon start offering fraudulent cryptocurrency “opportunities.”  Or run a convincing “I need bail money” scam.  Or they might bank the account with thousands of other hijacked accounts for use in some future scam or disinformation campaign.  An account could be used to spread a fake AI video of a presidential campaign, for example. Pretty awful stuff you’d never want to be a part of.

This attack is not new; I see mentions of it on Reddit that date back at least two years.  So I hope this story feels like old news to you and you are confident you’d see through this scam. But it feels very persuasive to me, so I wanted to get a warning to you as soon as possible.

Let me know if you’ve seen this attack, or anything similar, operating out there in the wild.  Meanwhile, please take this as a reminder that criminals want to steal your digital identity, even if you believe your corner of the Internet universe is so small that no one would ever want to steal it.