Monthly Archives: December 2024

The Second Annual Global Study on the Growing API Security Crisis

Application Programming Interfaces (APIs) benefit organizations by connecting systems and data, driving innovation in the creation of new products and services and enabling personalized offerings. As organizations realize the benefits of APIs, they are also becoming aware of how vulnerable APIs are to exploitation. In fact, 61 percent of participants in this research believe API risk will significantly increase (21 percent) or increase (40 percent) in the next 12 to 24 months.

As defined in the research, an API is a set of defined rules that enables different applications to communicate with each other. Organizations are increasingly using APIs to connect services and to transfer data, including sensitive medical, financial and personal data. As a result, the API attack surface has grown dramatically.

Sponsored by Traceable, the purpose of this research is to understand organizations’ awareness and approach to reducing API security risks. In this year’s study, Ponemon Institute surveyed 1,548 IT and IT security practitioners in the United States (649), the United Kingdom (451) and EMEA (448) who are knowledgeable about their organizations’ approach to API security.

Why APIs continue to be vulnerable to exploitation. This year, 54 percent of respondents say APIs are a security risk because they expand the attack surface across all layers of the technology stack. The API layer is now considered organizations’ largest attack surface. Because APIs expand the attack surface across all vectors, an attacker can exploit a single API and obtain access to sensitive data without having to defeat the other solutions in the security stack. Before APIs, hackers had to learn how to attack each technology they were trying to get through, learning different attacks for different technologies at each layer of the stack.

Some 53 percent of respondents say traditional security solutions are not effective in distinguishing legitimate from fraudulent activity at the API layer. The increasing number and complexity of APIs make it difficult to track how many APIs exist, where they are located and what they are doing. As a result, 55 percent of respondents this year (vs. 56 percent last year) say the volume of APIs makes it difficult to prevent attacks.

The following findings illustrate the growing API security crisis and the steps that should be taken to improve API security.

  • Organizations have had multiple data breaches caused by API exploitation in the past two years, resulting in financial and IP losses. These data breaches are likely to occur because, on average, only 38 percent of APIs are continually tested for vulnerabilities. As a result, organizations are confident in preventing an average of only 24 percent of attacks. To prevent API compromises, APIs should be monitored for risky traffic, performance and errors. 
  • Targeted DDoS attacks continue to be the primary root cause of the data breaches caused by an API exploitation. Another root cause is fraud, abuse and misuse. When asked to rate the seriousness of fraud attacks, almost half of respondents (47 percent) say these attacks are very or highly serious. 
  • Organizations have a very difficult time discovering and inventorying all APIs, and as a result they do not know the extent of the risks to their APIs. Because so many APIs are being created and updated, organizations can quickly lose track of the numerous types of APIs used and provided. Once all APIs are discovered, it is important to have an inventory that provides visibility into the nature and behavior of those APIs.
  • According to the research, the most challenging areas in securing APIs, and the ones that should be a focus of any security strategy, are preventing API sprawl, stopping the growth in API security vulnerabilities and prioritizing APIs for remediation.
  • Third-party APIs expose organizations to cybersecurity risks. In this year’s research, an average of 131 third parties are connected to organizations’ APIs. Recommendations to mitigate third-party API risk include creating an inventory of third-party APIs, performing risk assessments and due diligence and establishing ongoing monitoring and testing. Third-party APIs should also be continuously analyzed for misconfiguration and vulnerabilities.
  • To prevent API exploitation, organizations need to make identifying API endpoints that handle sensitive data without appropriate authentication more of a priority. An API endpoint is a specific location within an API that accepts requests and sends back responses; it is how different systems and applications communicate with each other, sending and receiving information and instructions via the endpoint (a minimal sketch of such an endpoint follows this list).
  • Bad bots impact the security of APIs. A bot is a software program that operates on the Internet and performs repetitive tasks. While some bot traffic is from good bots, bad bots can have a huge negative impact on APIs. Fifty-three percent of respondents say their organizations experienced one or more bot attacks involving APIs. The security solutions most often used to reduce the risk from bot attacks are web application firewalls, content delivery network deployment and active traffic monitoring on an API endpoint.
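To make the idea of an authenticated endpoint concrete, here is a minimal sketch, not taken from the report: it assumes the Flask framework, and the route name, API-key store and customer data are hypothetical. It shows the kind of authentication check that, per the finding above, is often missing on endpoints that return sensitive data.

```python
# Minimal sketch (assumption: Flask). The route, API keys and data store
# are hypothetical and exist only for illustration.
from flask import Flask, jsonify, request, abort

app = Flask(__name__)

VALID_API_KEYS = {"example-key-123"}                        # hypothetical key store
CUSTOMERS = {42: {"name": "Alice", "ssn": "***-**-1234"}}   # hypothetical sensitive data

def require_api_key():
    """Reject any request that lacks a valid API key header."""
    key = request.headers.get("X-API-Key")
    if key not in VALID_API_KEYS:
        abort(401)  # unauthenticated callers never reach the sensitive data

@app.route("/customers/<int:customer_id>")
def get_customer(customer_id):
    require_api_key()                    # the check that is often missing in practice
    customer = CUSTOMERS.get(customer_id)
    if customer is None:
        abort(404)
    return jsonify(customer)

if __name__ == "__main__":
    app.run()
```

An endpoint written like this but without the require_api_key call would be exactly the kind of exposure the study says organizations should prioritize finding.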

Generative AI and API security 

  • Generative artificial intelligence is being adopted by many organizations for its benefits in areas such as business intelligence, content development and coding. In this research, 67 percent of respondents say their organizations have already adopted generative AI (21 percent), are in the process of adopting it (30 percent) or plan to adopt it in the next year (16 percent). As organizations embrace generative AI, they should also be aware of the security risks that negatively affect APIs.
  • The top concerns about how generative AI applications affect API security are the increased attack surface due to additional API integrations, unauthorized access to sensitive data, potential data leakage through API calls to generative AI services and difficulty in monitoring and analyzing traffic to and from generative AI APIs.
  • The main challenges in securing APIs used by generative AI applications are the rapid pace of generative AI technology development, the lack of in-house expertise in generative AI and API security and the lack of established best practices for securing generative AI APIs.
  • The top priorities for securing APIs used by generative AI applications are real-time monitoring and analysis of traffic to and from generative AI APIs, implementing strong authentication and authorization for generative AI API calls and comprehensive discovery and cataloging of generative AI API integrations.
  • Organizations invest in API security for generative AI-based applications and APIs to identify and block sensitive data flows to generative AI APIs and safeguard critical data assets, to improve the efficiency of technologies and staff, and to monitor and analyze traffic to and from LLM APIs in real time so that emerging threats can be quickly detected and responded to (a brief sketch of such an authenticated, monitored call follows this list).
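As a rough illustration of the "strong authentication plus real-time monitoring" priorities above, the sketch below wraps an outbound call to a generative AI API so that every request carries a credential, obvious sensitive-data patterns are redacted before leaving the organization, and traffic is logged for analysis. The endpoint URL, header scheme, token and redaction rule are assumptions for illustration, not details from the report or any particular vendor's API.

```python
# Minimal sketch (assumptions: the requests library, a hypothetical LLM
# endpoint and bearer-token scheme). Authenticates, redacts and logs
# traffic to a generative AI API.
import logging
import re
import requests

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("genai-api-monitor")

LLM_API_URL = "https://llm.example.com/v1/generate"   # hypothetical endpoint
API_TOKEN = "example-token"                           # hypothetical credential

SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")    # one example of sensitive data

def redact(text: str) -> str:
    """Strip obvious sensitive patterns before the prompt leaves the organization."""
    return SSN_PATTERN.sub("[REDACTED]", text)

def call_llm(prompt: str) -> str:
    safe_prompt = redact(prompt)
    log.info("Outbound generative AI call, %d chars after redaction", len(safe_prompt))
    response = requests.post(
        LLM_API_URL,
        headers={"Authorization": f"Bearer {API_TOKEN}"},  # authentication on every call
        json={"prompt": safe_prompt},
        timeout=30,
    )
    log.info("Generative AI API responded with status %s", response.status_code)
    response.raise_for_status()
    return response.json().get("text", "")
```

In practice the logging and redaction would typically live in an API gateway or proxy rather than in application code, but the division of responsibilities is the same.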

To read key findings in this report, visit the Traceable.com website.

 

Facebook acknowledges it’s in a global fight to stop scams, and might not be winning

Bob Sullivan

Facebook publicly acknowledged recently that it’s engaged in a massive struggle with online crime gangs who abuse the service and steal from consumers worldwide. In a blog post, the firm said it had removed two million accounts just this year that had been linked to crime gangs, and was fighting on fronts across the world, including places like Myanmar, Laos, Cambodia, the United Arab Emirates and the Philippines. But in a nod to how difficult the fight is, the firm acknowledged it needs help.

“We know that these are extremely persistent and well-resourced criminal organizations working to evolve their tactics and evade detection, including by law enforcement,” the firm wrote. “We’ve seen them operate across borders with little deterrence and across many internet platforms in an attempt to ensure that any one company or country has only a narrow view into the full picture of scam operations. This makes collaboration within industries and countries even more critical.”

I’ve been writing about the size and scope of scam operations for years, but lately, I’ve tried to ring the alarm bell about just how massive these crime gangs have become (see “They’re finding dead bodies outside call centers”).  If you haven’t heard about a tragic victim in your circle of friends recently, I’m afraid you will soon.  There will be millions of victims and perhaps $1 trillion in losses by the time we count them all, and behind each one you’ll find a shattered life.

Facebook’s post focused on a crime that is commonly called “pig butchering” — a term I shun and will not use again because it is so demeaning to victims. Often, the crime involves the long-term seduction of a victim, followed by an eventual invitation to invest in a made-up asset like cryptocurrency.  The scams are so elaborate that they include real-sounding firms, with real-looking account statements. They can stretch well into a year or two.  Behind the scenes, an army of criminals works together to keep up the relationship and to manufacture these realistic elements. As I’ve described elsewhere, hundreds of thousands of these criminals are themselves victims, conscripted into scam compounds via some form of human trafficking.

Many victims don’t find out what’s going on until they’ve sent much of their retirement savings to the crime gang.

“Today, for the first time, we are sharing our approach to countering the cross-border criminal organizations behind forced-labor scam compounds under our Dangerous Organizations and Individuals (DOI) and safety policies,” Facebook said. “We hope that sharing our insights will help inform our industry’s defenses so we can collectively help protect people from criminal scammers.”

It’s a great development that Facebook is sharing its behind-the-scenes work to combat this crime. But the firm can and must do more. Its private message service is often a critical tool criminals use to ensnare victims; its platform full of “friendly” strangers in affinity groups is essential for victim grooming.  It would be unfair to say Facebook is to blame for these crimes, but I also know that no one who works there wants to go home at night thinking the tool they’ve built is being used to ruin thousands of lives.

How could Facebook do more? One version of the scam begins with the hijacking of a legitimate account that already enjoys trust relationships.  In one typical fact pattern, a good-looking soldier’s account is stolen, and then used to flirt with users.  The pictures and service records are often a powerful asset for criminals trying to seduce victims.

Victims who’ve had their accounts hijacked say it can take months to recover them, or even to get the service to take down profiles that are being used for scams. As I’ve written before, when a victim tells Facebook that an account is actively being used to steal from its members, it’s hard to understand why the firm would be slow to investigate.  Poor customer service is our most serious cyber vulnerability.

In another blog post from last month, Facebook said it has begun testing better ways to restore hijacked accounts.  That’s good, too. But I’m here to tell you the new method the firm says it’s using — uploaded video selfies — has been in use for at least two years.  You might remember my experience using it. So, what’s the holdup? If we are in the middle of an international conflict with crime gangs stealing hundreds of millions of dollars, you’d think such a tool would be farther along by now.

Still, I take the publication of today’s post — in which Facebook acknowledges the problem — as a very positive first step.  I’d hope other tech companies will follow suit, and will also cooperate with the firm’s ideas around information sharing.  Meta, Facebook’s parent, is uniquely positioned to stop online crime gangs; its ample resources should be a match even for these massive crime gangs.