State of API Security: 2023 Global Findings

The purpose of this research is to understand organizations’ awareness of and approach to reducing application programming interface (API) security risks. Ponemon Institute surveyed 1,629 IT and IT security practitioners in the United States (691) and the United Kingdom and EMEA (938) who are knowledgeable about their organizations’ approach to API security. “The Growing API Security Crisis: A Global Study” is sponsored by Traceable.


I (Larry Ponemon) and Richard Bird, the Chief Security Officer of Traceable, will present and explain these findings in a webinar Sept. 27 at 9 a.m. You can register for it at this website.

For more details on the study, you can also visit Traceable’s microsite, which offers additional charts, graphs, and key findings.


An API is a set of defined rules that enables different applications to communicate with each other. Organizations are increasingly using APIs to connect services and to transfer data, including sensitive medical, financial and personal data.
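As a minimal illustration of that definition, the sketch below runs a tiny HTTP API and a client that consumes it, using only the Python standard library. The endpoint path, the "patients" resource and the record contents are all invented for the example; they are not drawn from the study.

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

# Hypothetical API: one application exposes a record as JSON over HTTP.
class PatientAPI(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/api/patients/42":
            body = json.dumps({"id": 42, "name": "Jane Doe"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):
        pass  # keep the example's output quiet

server = HTTPServer(("127.0.0.1", 0), PatientAPI)  # port 0 = pick a free port
threading.Thread(target=server.serve_forever, daemon=True).start()

# A second application communicates with the first through the API's
# defined rules: an agreed URL and an agreed JSON response shape.
with urlopen(f"http://127.0.0.1:{server.server_port}/api/patients/42") as resp:
    record = json.load(resp)
print(record["name"])  # Jane Doe
server.shutdown()
```

The point of the sketch is the contract: the client never touches the server's internals, only the agreed endpoint and response format, which is also why an exposed or unauthenticated endpoint like this one becomes a direct path to the data behind it.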

According to 57 percent of respondents, APIs are highly important to their organizations’ digital transformation programs. However, APIs with vulnerabilities put organizations at risk of a significant security breach. Sixty percent of respondents say their organizations have had at least one data breach caused by an API exploitation. Many of these breaches resulted in the theft of IP and financial loss.

A key takeaway from the research is that while the potential exists for a major security incident due to API vulnerabilities, many organizations are not making API security a priority. Respondents were asked to rate how much of a priority it is to have a security risk profile for every API and to be able to identify API endpoints that handle sensitive data without appropriate authentication, on a scale from 1 = not a priority to 10 = a very high priority.

According to our research, slightly more than half of respondents (52 percent) say it is a priority to understand those APIs that are most vulnerable to attacks or abuse based on a security risk profile. Fifty-four percent say the identification of API endpoints that handle sensitive data without appropriate authentication is a high priority.

The average IT security budget for organizations represented in this research is $35 million, of which an average of $4.2 million is allocated to API security activities. Thirty-five percent of respondents say the IT and IT security functions are most responsible for the API security budget.

The following findings are evidence that the API security crisis is growing:

  • Organizations are losing the battle to secure APIs. One reason is that organizations do not know the extent of the risk. Specifically, on average only 40 percent of APIs are continually tested for vulnerabilities. As a result, organizations are confident they can prevent an average of only 26 percent of attacks, and only an average of 21 percent of API attacks can be effectively detected and contained.
  • APIs expand the attack surface across all layers of the technology stack. Fifty-eight percent of respondents say APIs are a growing security risk because they expand the attack surface across all layers of the technology stack, which is now considered organizations’ largest attack surface.
  • The increasing volume of APIs makes it difficult to prevent attacks. Fifty-seven percent of respondents say traditional security solutions are not effective in distinguishing legitimate from fraudulent activity at the API layer. Further, the increasing number and complexity of APIs make it difficult to track how many APIs exist, where they are located and what they are doing (56 percent of respondents).
  • Organizations struggle to discover and inventory all their APIs. Fifty-three percent of respondents say their organizations have a solution to discover, inventory and track APIs. These respondents say on average their organizations have an inventory of 1,099 APIs. Fifty-four percent of respondents say it is highly difficult to discover and inventory all APIs. The challenge is that so many APIs are being created and updated that organizations can quickly lose control of the numerous types of APIs used and provided.
  • Solutions are needed to reduce third-party risks and detect and stop data exfiltration events happening through APIs. An average of 127 third parties are connected to organizations’ APIs and only 33 percent of respondents say they are effective in reducing the risks caused by these third parties’ access to their APIs. Only 35 percent of respondents say they are effective in identifying and reducing risks posed by APIs outside their organizations and 40 percent say they are effective in identifying and reducing risks within their organizations. One reason is that most organizations do not know how much data is being transmitted through the APIs and need a solution that can detect and stop data exfiltration events happening through APIs. 
  • To stop the growing API security crisis, organizations need visibility into the API ecosystem and must ensure consistency in API design and functionality. Only 35 percent of respondents have excellent visibility into the API ecosystem, only 44 percent of respondents are very confident in being able to detect attacks at the API layer and 44 percent of respondents say their organizations are very effective in achieving consistency in API design and functionality. Because APIs expand the attack surface across all vectors, it is possible to exploit a single API and obtain access to sensitive data without having to defeat the other solutions in the security stack. Before APIs, hackers had to learn how to attack each layer they were trying to get through, learning different attacks for different technologies at each layer of the stack.
  • Inconsistency in API design and functionality increases the complexity of the API ecosystem. As part of API governance, organizations should define standards for how APIs should be designed, developed and deployed, as well as establish guidelines for how they should be used and maintained over time.
  • Organizations are not satisfied with the solutions used to achieve API security. As shown in the research, most organizations are unable to prevent and detect attacks against APIs. It’s no surprise, therefore, that only 43 percent of respondents say their organizations’ solutions are highly effective in securing their APIs. The primary solution used is encryption and signatures (60 percent of respondents), followed by 51 percent of respondents who say they identify vulnerabilities and 51 percent of respondents who say they use basic authentication. Solutions considered effective but not frequently used are API lifecycle management tools (41 percent), tokens (32 percent) and quotas and throttling (20 percent).  
  • Despite the growing API security crisis, threats to APIs are underestimated by management. Almost one-third of respondents say API security is only somewhat of a priority (17 percent) or not a priority (14 percent). The reasons for not making it a priority are management’s underestimation of the risk to APIs (49 percent), the perception that other security risks are more of a threat (42 percent) and the difficulty of understanding how to reduce the threats to APIs (37 percent).
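Two of the controls the findings above mention, token-based authentication and quotas/throttling, can be sketched in a few lines. This is an illustrative toy only: the token value, the rate limits and the in-memory request log are all invented for the example, and a real deployment would use short-lived signed credentials and a shared rate-limit store rather than process-local state.

```python
import time
from collections import defaultdict

# Hypothetical values for illustration only.
VALID_TOKENS = {"secret-token-abc"}
RATE_LIMIT = 5        # max requests per token per window
WINDOW_SECONDS = 60   # sliding-window length

_request_log = defaultdict(list)  # token -> timestamps of recent requests

def handle_request(token, now=None):
    """Return an HTTP-style status code for an incoming API request."""
    now = time.time() if now is None else now
    if token not in VALID_TOKENS:
        return 401  # reject unauthenticated callers
    # Keep only requests that fall inside the current window.
    recent = [t for t in _request_log[token] if now - t < WINDOW_SECONDS]
    if len(recent) >= RATE_LIMIT:
        return 429  # throttle: quota for this window is exhausted
    recent.append(now)
    _request_log[token] = recent
    return 200

print(handle_request("bad-token"))  # 401
for _ in range(RATE_LIMIT):
    print(handle_request("secret-token-abc", now=1000.0), end=" ")  # 200 x5
print(handle_request("secret-token-abc", now=1000.0))  # 429
```

Even this toy shows why the survey distinguishes the two controls: authentication decides *who* may call the API at all, while quotas and throttling bound *how much* an authenticated (or stolen) credential can extract before being cut off.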

Part 2. Key findings

In this section, we provide an analysis of the global findings. The complete findings are presented on this website. The report is organized according to the following topics.

  • Understanding the growing API security crisis
  • Challenges to securing the unmanageable API sprawl
  • API security practices and the state of API security
  • API budget and governance

Understanding the growing API security risk

Organizations have had multiple data breaches caused by an API exploitation in the past two years. Two well-publicized API security breaches include the Cambridge Analytica breach caused by a Facebook API loophole that exposed the personal information of more than 50 million individuals and a Venmo public endpoint unsecured API that allowed a student to scrape 200 million users’ financial transactions.

Sixty percent of respondents say their organizations had a data breach caused by an API exploitation, and 23 percent of these respondents say their organizations had six or more exploits in the past two years. The top three root causes of the API exploits are DDoS (38 percent of respondents), fraud, abuse and misuse (29 percent of respondents) and attacks with known signatures (29 percent of respondents).

Organizations are losing the battle to secure APIs. One reason is that organizations do not know the extent of the risk. Specifically, on average only 40 percent of APIs are continually tested for vulnerabilities. As a result, organizations are confident they can prevent an average of only 26 percent of attacks, and only an average of 21 percent of API attacks can be effectively detected and contained.

API exploits can severely impact an organization’s operations. Organizations mainly suffered IP theft and financial loss (52 percent of respondents). Other serious consequences were brand value erosion (50 percent of respondents) and failures in company operations (37 percent of respondents).

APIs expand the attack surface across all layers of the technology stack. Some 58 percent of respondents say APIs are a security risk because they expand the attack surface across all layers of the technology stack, which is now considered organizations’ largest attack surface. Fifty-seven percent of respondents say traditional security solutions are not effective in distinguishing legitimate from fraudulent activity at the API layer. The increasing number and complexity of APIs make it difficult to track how many APIs exist, where they are located and what they are doing. As a result, 56 percent of respondents say the volume of APIs makes it difficult to prevent attacks.

Challenges to securing the unmanageable API sprawl

Open and public APIs are most often used and/or provided by organizations. Thirty-two percent of respondents say their organizations use/provide open APIs and 31 percent of respondents say their organizations use/provide public APIs.

Organizations struggle to discover and inventory all their APIs. Fifty-three percent of respondents say their organizations have a solution to discover, inventory and track APIs. These respondents say on average their organizations have an inventory of 1,099 APIs.

Fifty-four percent of respondents say it is highly difficult to discover and inventory all APIs. The challenge is that so many APIs are being created and updated that organizations can quickly lose control of the numerous types of APIs used and provided.

An average of 127 third parties are connected to organizations’ APIs, and only 33 percent of respondents say they are effective in reducing the risks caused by these third parties’ access to their APIs. Respondents say they are effective in identifying and reducing risks posed by APIs outside (35 percent) and within (40 percent) their organizations. One reason is that most organizations do not know how much data is being transmitted through the APIs and need a solution that can detect and stop data exfiltration events happening through APIs.

To stop the growing API security crisis, organizations need visibility into the API ecosystem and must ensure consistency in API design and functionality. However, only 35 percent of respondents have excellent visibility into the API ecosystem, only 44 percent of respondents are very confident in being able to detect attacks at the API layer and 44 percent of respondents say their organizations are achieving consistency in API design and functionality.

Because APIs expand the attack surface across all vectors, it is possible to exploit a single API and obtain access to sensitive data without having to defeat the other solutions in the security stack. Before APIs, hackers had to learn how to attack each layer they were trying to get through, learning different attacks for different technologies at each layer of the stack.

Inconsistency in API design and functionality increases the complexity of the API ecosystem. As part of API governance, organizations should define standards for how APIs should be designed, developed and deployed, as well as establish guidelines for how they should be used and maintained over time.

To download and read the rest of this report, visit Traceable’s website.

Forced into fraud: Scam call centers staffed by human trafficking victims

Bob Sullivan

Who’s on the other end of the line when you get a scam phone call? Often, it’s a victim of human trafficking whose safety — and perhaps their life — depends on their ability to successfully steal your money. A recent UN report suggests there are hundreds of thousands of trafficking victims forced to work in sweatshops in Southeast Asia devoted to one thing: Stealing money. If they don’t, they go hungry, or they are beaten … or worse.

In other words, there are often victims on both ends of scam phone calls.

Americans report they are inundated with scam phone calls, emails and text messages, and FBI data shows losses are skyrocketing.  Crypto scams alone increased more than 125% last year, with $3.3 billion in reported losses.  These numbers are so large that they are meaningless to most; and you’ve probably heard before that this or that crime is skyrocketing, so perhaps that alarmist-sounding statement doesn’t penetrate.  But let me say this: I spend all week talking to victims of scams and law enforcement officials about tech-based crimes, and by any measure I can observe, there is a very concerning spike in organized online crime.

A recent report published by the United Nations helps explain why — for some “criminals,” stealing money from you is a matter of life and death.

The recent surge of activity dates to the pandemic, the report says. Public health measures forced the abrupt closing of casinos in places like Cambodia, which sent operators — including some controlled by criminal networks — looking for alternative revenue streams.  The toxic combination of out-of-work casino employees and a new tool that made international theft easy — cryptocurrency — led to an explosion in “scam centers” devoted to romance crimes, fake crypto investment schemes, and so on.

The scam centers have an endless need for “workers.” Many are lured from other Asian nations by help-wanted ads with promises of big salaries and work visas. Instead, new arrivals are often faced with violence, their passports taken, their families left wondering what happened — or, called with ransom demands.

There have been plenty of horror stories with anecdotes about forced scam center labor. Here’s one account from a Malaysian man who went to Thailand for what he thought was a legitimate job.

  • “Ah Hong soon found out that he was to carry out online scams for a call center that targeted people living in the United States and Europe. Everyone working there was given a target and those who failed to achieve it would be punished. Punishment included being forced to run in the hot sun for two hours, beaten by sticks or asked to carry heavy bricks for long hours. ‘If we made a mistake, we were tasered,’ he said. Ah Hong added that he was once punished by having to move bricks from 7 a.m. to 5 p.m., besides being beaten multiple times. A typical working day, he said, would begin at midnight and end at 5 p.m.”

He was only released when his family paid his ransom.

The recent United Nations report attempted to estimate how many Ah Hongs there are.  The report’s conclusion is terrifying.

  • The number of people who have fallen victim to online scam trafficking in Southeast Asia is difficult to estimate because of its clandestine nature and gaps in the official response. Credible sources indicate that at least 120,000 people across Myanmar may be held in situations where they are forced to carry out online scams, while credible estimates in Cambodia have similarly indicated at least 100,000 people forcibly involved in online scams.

We only know what happens to these trafficking victims from the stories of those, like Hong, who have escaped.  The UN report details more horrors about conditions in these scam centers.

  • Reports have also been received of people being chained to their desk. Many victims report that their passports were confiscated, often along with their mobile phones or they were otherwise prohibited from contacting friends or family, a situation that UN human rights experts have described as ‘detention incommunicado’.
  • In addition, there is reportedly inadequate access to medical treatment with some disturbing cases of victims who have died as a result of mistreatment and lack of medical care. Reports commonly describe people being subjected to torture, cruel and degrading treatment and punishments including the threat or use of violence (as well as being made to witness violence against others) most commonly beatings, humiliation, electrocution and solitary confinement, especially if they resist orders or disobey compound rules or if they do not meet expected scamming targets. Reports have also been received of sexual violence, including gang rape as well as trafficking into the sex sector, most usually as punishment, for example for failing to meet their targets.

When I hear stories from the victim’s point of view, I am often amazed at how relentless the criminals can be. Some spend months, even years, grooming victims with faux attention and love.  Understanding how high the stakes are for the people on the other end of the phone helps explain why they can be so determined.

From a self-preservation point of view, I think it’s crucial we understand just why scam criminal activity is thriving right now. But from a human rights point of view, it’s critical we call out this hideous behavior and work to stop it. The UN paper blames several factors, but one that caught my eye is the existence of Special Economic Zones (SEZs) designed to help support new industries. Ideally, SEZs encourage entrepreneurship by cutting red tape. But in some cases, they have become synonymous with “opaque regulation and the proliferation of multiple illicit economies, including human trafficking, illegal wildlife trade, and drug production,” the report says.

It’s also interesting to think about the implications for trafficking victims. Even after they are released or they manage to escape, many face challenges back home for being involved in criminal operations. The UN report stresses that scam center victims — like other human trafficking victims — are not legally responsible for crimes they were forced to commit against their will.  They should not face prosecution; doing so only prevents more victims from coming forward.

“People who are coerced into working in these scamming operations endure inhumane treatment while being forced to carry out crimes. They are victims. They are not criminals,” said UN High Commissioner for Human Rights Volker Türk. “In continuing to call for justice for those who have been defrauded through online criminality, we must not forget that this complex phenomenon has two sets of victims.”

The report also makes clear who likely victims are:

  • Most people trafficked into the online scam operations are men, although women and adolescents are also among the victims … Most are not citizens of the countries in which the trafficking occurs. Many of the victims are well-educated, sometimes coming from professional jobs or with graduate or even post-graduate degrees, computer-literate and multilingual. Victims come from across the ASEAN region (from Indonesia, Lao PDR, Malaysia, Myanmar, Philippines, Singapore, Thailand and Vietnam), as well as mainland China, Hong Kong and Taiwan, South Asia, and even further afield from Africa and Latin America.

Every thoughtful adult should read the UN report, and make sure your friends and family understand why the stakes in the scam world have become so high.

Half of Breached Organizations Unwilling to Increase Security Spend Despite Soaring Breach Costs

The global average cost of a data breach reached $4.45 million in 2023 – another record high and a 15% increase over the last 3 years, according to this year’s Cost of a Data Breach study, just published by IBM and conducted by The Ponemon Institute. Detection and escalation costs jumped 42% over this same time frame, representing the highest portion of breach costs, and indicating a shift towards more complex breach investigations.

According to the 2023 IBM report, businesses are divided in how they plan to handle the increasing cost and frequency of data breaches. The study found that while 95% of studied organizations have experienced more than one breach, breached organizations were more likely to pass incident costs onto consumers (57%) than to increase security investments (51%).

The 2023 Cost of a Data Breach Report is based on in-depth analysis of real-world data breaches experienced by 553 organizations globally between March 2022 and March 2023. The research, sponsored and analyzed by IBM Security, was conducted by Ponemon Institute and has been published for 18 consecutive years. Some key findings in the 2023 IBM report include:

  • AI Picks Up Speed – AI and automation had the biggest impact on an organization’s speed of breach identification and containment. Organizations with extensive use of both AI and automation experienced a data breach lifecycle that was 108 days shorter compared to studied organizations that have not deployed these technologies (214 days versus 322 days).
  • The Cost of Silence – Ransomware victims in the study that involved law enforcement saved nearly half a million ($470,000) in average breach costs compared to those that chose not to involve law enforcement. Despite these savings, 37% of ransomware victims studied chose not to bring law enforcement in.
  • Detection Gaps – Only one-third of studied breaches were detected by organizations’ own security teams, compared to 27% that were disclosed by an attacker. Data breaches disclosed by the attacker cost nearly $1 million more on average compared to studied organizations that identified the breach themselves.

“Time is the new currency in cybersecurity both for the defenders and the attackers. As the report shows, early detection and fast response can significantly reduce the impact of a breach,” said Chris McCurdy, General Manager, Worldwide IBM Security Services. “Security teams must focus on where adversaries are the most successful and concentrate their efforts on stopping them before they achieve their goals. Investments in threat detection and response approaches that accelerate defenders’ speed and efficiency – such as AI and automation – are crucial to shifting this balance.”

Every Second Costs

According to the 2023 report, organizations that fully deploy security AI and automation saw 108-day shorter breach lifecycles on average compared to organizations not deploying these technologies – and experienced significantly lower incident costs. In fact, organizations that deploy security AI and automation extensively saw nearly $1.8 million less in average breach costs than organizations that didn’t deploy these technologies – the biggest cost saver identified in the report.

At the same time, adversaries have reduced the average time to complete a ransomware attack. And with 40% of studied organizations not yet deploying security AI and automation, there is still considerable opportunity for organizations to boost detection and response speeds.

To read the full report, visit IBM’s website.

The Frances Haugen interview. Two years after Facebook, now what?

Bob Sullivan

Nearly two years after focusing the world’s attention on Big Tech’s big problems, Frances Haugen remains a powerful force in the technology industry. I recently interviewed Haugen for the Debugger podcast I host at Duke University.

In this interview, Haugen tells me how Covid lockdowns played a key role in her difficult decision to come forward and criticize one of the world’s most powerful companies, what she’s doing now to keep the pressure on tech firms, and how she handles the slow pace of change.

For a new book she’s just published, Haugen researched Ralph Nader’s battle against the automotive industry in the 1960s — her fight is like his in some ways, very different in others. She’s created a non-profit to pursue research into harms that tech companies cause — some of that will be conducted this summer by Duke University students — and she offers up some simple things companies like Facebook could do immediately to mitigate those harms.

I hope you’ll listen to the episode. Haugen is an engaging speaker. But if podcasts aren’t your thing, a full transcript is available at this link.

Click the play button below to listen. You can also subscribe to Debugger on Spotify or wherever you find podcasts.

A brief excerpt from our conversation:

“One of the things I talk about in my book is… why was it when Ralph Nader wrote a book called Unsafe at Any Speed, that within a year … there were Congressional inquiries, laws were passed, a Department of Transportation was founded. Suddenly seat belts were required in every car in the United States. Why was that able to move so fast? And we’re still having very, very basic conversations about things like even transparency in the United States.”

Bob: So we’ve talked a lot about platform accountability on this podcast, the worry that Big Tech doesn’t have to answer to anyone, not even governments. And this recent report by the Irish Council for Civil Liberties says that two-thirds of the cases brought before Ireland’s Data Protection Commissioner, which basically serves as the enforcement agency for the whole EU, resulted in just a reprimand. Frances, as someone who’s done a lot to try to make at least one big tech company accountable, how do you react to that?

Frances Haugen: One of the largest challenges regarding tech accountability is … legislation and democracy takes a lot more time than technical innovation. Pointing at things like adoption curves … you know, how long did it take us to all get washing machines? How long did it take for us to get telephones? What about cell phones? How many years do these processes take? And they’re accelerating. The process of adoption gets condensed. And when it comes to things like the data protection authority, it’s one of these interesting …  quirks, I would say, of how we learn to pass laws. Because when GDPR was passed, it was a revolutionary law. It was a generational law in terms of how it impacted how tech companies around the world operated. But we have seen over the years that the Irish Data Protection Authority is either unable or unwilling to act, and that pattern is consistent. One of the stats I was trying to find before I came on today was the fraction of complaints that they’ve even addressed is very, very small. So yes, they’ve only acted on a handful of cases in the last few years. It’s something like 95% of all the complaints that have been brought, they’ve never responded to. So I’m completely unsurprised by the recent report.

Bob: Is it frustrating that we’re still in this place?

Frances Haugen: Oh, no. This is one of these things where perspective is so important, trying to change the public’s relationship with these tech companies. And that’s fundamentally what the core of my work is — the idea that we should have an expectation that we have the right to ask questions and get real answers. That’s a fundamental culture shift … coming at a project like that from a place like Silicon Valley, where if you can’t accomplish something in two years, it’s not really considered valuable, right? Things get funded in Silicon Valley based on expectations two years out. If it takes five years or 10 years, that’s considered way too slow. And so I come at it assuming that it’ll take me years, like years and years, to get where I want the world to get. And that means that when there are hiccups like this, they’re not nearly as upsetting. And so I think it’s unfortunate. I think it’s unacceptable. But I think it’s also one of these things where I’m not surprised by it.


Closing the IT security gap: What are high performers doing differently?

This year, 2023, marks the beginning of a new age of data-driven transformation. Security and IT teams must scale to keep pace with the needs of business to ensure the protection of any data, anywhere. Modern hybrid cloud landscapes present complex environments and daunting security challenges for security and IT teams who are responsible for the protection of data, apps and workloads operating across a heterogeneous landscape of data centers, hybrid clouds and edge computing devices. As the volume of data generated by IoT devices and systems grows exponentially, the ability to close the IT security gap is proving to be elusive and frustrating.

The 2023 Global Study on Closing the IT Security Gap: Addressing Cybersecurity Gaps from Edge to Cloud, now in its third year, is sponsored by Hewlett Packard Enterprise (HPE) to look deeply into the critical actions needed to close security gaps and protect valuable data. In this year’s research, Ponemon Institute surveyed 2,084 IT and IT security practitioners in North America, the United Kingdom, Germany, Australia, Japan, and for the first time, France. All participants in this research are knowledgeable about their organizations’ IT security and strategy and are involved in decisions related to the investment in technologies.

Security and IT teams face the challenge of trying to manage operational risk without preventing their organizations from growing and being innovative. In this year’s study, only 44 percent of respondents say they are very effective or highly effective in keeping up with a constantly changing threat landscape. However, as shown in this research there are strategies security and IT teams can implement to defend against threats in complex edge-to-cloud environments.

The IT security gap is not shrinking because of the lack of visibility and control into user and device activities. As the proliferation of IoT devices continues, respondents say identifying and authenticating IoT devices accessing their network is critical to their organizations’ security strategy (67 percent of respondents). However, 63 percent of respondents say their security teams lack visibility and control into all the activity of every user device connected to their IT infrastructure.

How high-performing teams are closing the IT security gap

Seventy percent of respondents self-reported that their organizations are highly effective in keeping up with a constantly changing threat landscape and closing their organizations’ IT security gap (9+ responses on a scale of 1 = not effective to 10 = highly effective). We refer to these organizations as “high performers”. In this section, we analyze what these organizations are doing differently to achieve a more effective cybersecurity posture and close the IT security gap as compared to the respondents in the other organizations represented in this research.

As evidence of their effectiveness, high-performing organizations had fewer security breaches in the past 12 months that resulted in data loss or downtime. Almost half of respondents in other organizations (46 percent) say their organizations had between 7 and more than 10 incidents in just the past 12 months. In contrast, only 35 percent of high performers say their organizations had between 7 and more than 10 security incidents.

High-performing organizations have a larger IT security function. Fifty-four percent of respondents in high-performing organizations say their organizations have a minimum of 21 to more than 50 employees in their IT security function. Only 44 percent of respondents in other organizations had the same range of employees in IT security.

High performers are more likely to control the deployment of zero trust within a Network as a Service (NaaS) environment. Of those familiar with their organization’s zero-trust strategy, more high performers (36 percent of respondents) than others (28 percent of respondents) say their organization is responsible for implementing zero trust within a NaaS. Only 20 percent of high performers say it is the responsibility of the NaaS provider and 10 percent say a third-party managed service provider is responsible.

High performers centralize decisions about investments in security solutions and architectures. Sixty percent of high performers say either the network team (30 percent) or the security team (30 percent) is the primary decision maker about security solutions and architectures. Only 15 percent say both functions are responsible.

 More high performers have deployed or plan to deploy the SASE architecture. Forty-nine percent of high performers have deployed (32 percent) or plan to deploy (17 percent) the SASE architecture. In contrast only 39 percent of respondents in the other organizations have deployed (24 percent) or plan to deploy (15 percent) the SASE architecture.

 More high performers have achieved visibility of all users and devices. High performers are slightly more confident (38 percent of respondents) than other respondents (30 percent of respondents) that their organizations know all the users and devices connected to their networks all the time.

 Far more high performers are positive about the use of Network Access Control (NAC) solutions and their importance to proving compliance. These respondents are more likely to use these solutions for IoT security. Fifty-one percent of high performers say NAC solutions are an essential tool for proof of compliance vs. 42 percent of respondents in other organizations. Fifty-five percent of high performers vs. 38 percent of other respondents say NAC solutions are best delivered by the cloud.

 High performers recognize the importance of the integration of NAC functionality with the security stack. Respondents were asked to rate the importance of the integration of NAC functionality with other elements of the security stack on a scale from 1 = not important to 10 = highly important. Sixty-two percent of high performers vs. 54 percent of other respondents say such integration is important.

High performers are more likely to believe continuous monitoring of network traffic and real-time solutions will reduce IoT risks. Sixty-two percent of high performers vs. 52 percent of other respondents say continuous monitoring of network traffic for each IoT device to spot anomalies is required. Forty-seven percent of high performers vs. 38 percent of other respondents say real-time solutions to stop compromised or malicious IoT activity are required.

 High performers are more likely to require current security vendors to supply new security solutions as compute and storage moves from the data center to the edge. Forty percent of high performers vs. 30 percent of other respondents say their organizations will require current security vendors to supply new security solutions. Respondents in other organizations say their infrastructure providers will be required to supply protection (45 percent vs. 34 percent in high performing organizations).

High performers are more likely to require servers that leverage security certificates and infrastructures that leverage chips and/or certificates. The research reveals significant differences regarding compute and storage requirements. Specifically, high performers require servers that leverage security certificates to identify that the system has not been compromised during delivery (67 percent vs. 60 percent in other organizations). High performers are more likely to require infrastructure that leverages chips and/or certificates to determine if the system has been compromised during delivery (64 percent vs. 56 percent in other organizations). High performers also are more likely to believe data protection and recovery are key components of their organizations’ security and resiliency strategy (58 percent vs. 50 percent in other organizations).

Conclusion: Recommendations to close the IT security gap

According to the research, the most effective steps to minimize stealthy or hidden threats within the IT infrastructure are the adoption of technologies that automate infrastructure integrity verification and the implementation of network segmentation. The research also reveals growing adoption of zero trust and Secure Access Service Edge (SASE) architectures to manage vulnerabilities and user access. Important activities for achieving a stronger level of IoT security, according to the research, are the continuous monitoring of network traffic for each IoT device to spot anomalies and real-time solutions to stop compromised or malicious IoT activity.

Other actions to be considered in the coming year include the following:

  • Require servers that leverage security certificates and infrastructures that leverage chips and/or certificates.
  • Invest in having a fully staffed and well-trained IT security function. Such expertise is critical to ensuring data protection and recovery are key components of an organization’s security and resiliency strategy. A lack of skills and expertise is also the primary deterrent to adopting a zero-trust framework.
  • Consider centralizing decisions about investments in security solutions and architectures as high performers in this research tend to do. A concern of respondents is the inability of IT and IT security teams to agree on the activities that should be prioritized to close the IT security gap. This concern is exacerbated by the siloed or point security solutions in organizations.
  • Deploy Network Access Control (NAC) solutions to improve IoT and BYOD security. These solutions support network visibility and access management through policy enforcement for the devices and users of computer networks. NAC solutions can improve visibility and verify the security of all apps and workloads.

Click here to download the full report from Hewlett Packard

Hundreds of supplement companies warned about ads; is this any way to protect consumers?

Bob Sullivan

I’m often asked, “Isn’t there a truth in advertising law?!!??” by consumers who feel cheated by a company that embedded a gotcha in its advertisements.  My sad answer is often some variation of “No, not really.” At least that’s been the on-the-ground reality for some time.  There’s a glimmer of hope that things might be changing, however. The Federal Trade Commission recently sent out hundreds of letters warning companies that sell OTC drugs, homeopathic products, or dietary supplements that they’re being watched for potentially bogus ads — which is both a hopeful sign and a demonstration of just how weak consumer protection efforts are in the USA.

First, to get this out of the way, I’m not a lawyer, and there are actually many, many laws that govern advertising — some generic, some very industry specific. But as I say with only a hint of sarcasm, everything is legal until there’s a lawsuit or an arrest, and that’s the reality most consumers face every day.  Basically, TV and radio wouldn’t exist if it weren’t for aggressive snake-oil pitches from companies claiming their lab-tested products will make you younger, or stronger, or more focused — most backed by junk “science,” if at all.  But these firms have been given the tacit green light for decades by understaffed federal agencies that could hardly pick one in 1,000 battles to fight. And even worse, they’ve often seen a wink and a nod from agencies controlled by a hands-off philosophy derived from a perverted notion of how free markets are supposed to operate.

That’s why I’m encouraged by the recent announcement that the FTC had sent out a pile of so-called “Notice of Penalty Offenses” letters about “substantiation of product claims.” The approximately 700 recipients — large and small firms alike — have been put on notice that the FTC is worried they might be making claims that deceive consumers. The letters do not constitute a legal finding, but they do include warnings that, should such a finding occur, the penalty could be about $50,000 per incident.  And the letters include reminders of what potential violations look like. Like this:

“Failing to have adequate support for objective product claims; claims relating to the health benefits or safety features of a product; or claims that a product is effective in the cure, mitigation, or treatment of any serious disease. These unlawful acts and practices also include: misrepresenting the level or type of substantiation for a claim, and misrepresenting that a product claim has been scientifically or clinically proven.”

A particular pet peeve of mine in the age of social media is the deceptive use of consumer reviews and other endorsements.  Apparently, that’s a pet peeve of the current FTC too, because the warning letters also include reminders about that:

“Such unlawful acts and practices include: falsely claiming an endorsement by a third party; misrepresenting that an endorsement represents the experience or opinions of product users; misrepresenting that an endorser is an actual, current, or recent user of a product or service; continuing to use an endorsement without good reason to believe that the endorser continues to hold the views presented; using an endorsement to make deceptive performance claims; failing to disclose an unexpected material connection with an endorser; and misrepresenting that the experience of endorsers are typical or ordinary. Note that positive consumer reviews are a type of endorsement, so such reviews can be unlawful if they are fake or if a material connection is not adequately disclosed.”

“Everyone gets sick, and most of us will experience the infirmities that accompany aging,” wrote FTC Commissioner Rebecca Slaughter about the orders. “That shared vulnerability leaves us all susceptible to health-claim scams and to plausible-sounding treatments that promise to alleviate pain, to restore lost virility, or to help cure the most deadly and tragic of illnesses. At best, many of these product claims are unreliable and waste tens of billions of consumer dollars a year, and, even worse, they can cause serious health problems requiring acute medical attention.”

Advertising is a touchy area and a tough business.  There is a centuries-old tradition of sellers doing what they can to get buyers’ attention, with ad-makers walking up to and over the line of what’s deemed legal.  That’s to be expected.  With attention so divided in our time, those lines have become even more blurry, and the attempts to get consumers’ attention even more desperate.  Warning letters sent before dramatic fines certainly seem like a positive way to clean up a murky marketplace before doling out what might be death penalties to smaller companies.

However, the list of warning notice recipients certainly includes companies that could afford to do better research before publishing their ads.  Kellogg, AstraZeneca, BASF and Bausch and Lomb are on the list. So are Amazon, Goop, and Kourtney Kardashian’s Lemme, Inc. Again, there is no finding of illegality in these letters. You can see the list yourself.

This isn’t the first set of such warning notices sent out by the FTC recently. In October of 2021, a batch of 70 letters went to for-profit colleges, focused on alleged exaggerated claims about the future workplace success of graduates. Later that month, another 700 letters went to advertising firms about potentially illegal testimonials and endorsements. And still another 1,000-plus notices went out to companies advertising get-rich-quick offerings to freelancers.

To my knowledge, none of the firms mentioned in the letters have faced fines or penalties, or been found guilty of anything related to the letters.

It might seem uncontroversial to have the nation’s federal watchdog for consumers send out warning letters to companies that could be engaging in deceptive conduct.  After all, I’d sure like a warning letter when I’m illegally parked.  However, all things have a context, and the strategy of FTC notice of penalty offenses has a deep past.

They were added to the FTC’s toolkit in the 1970s in an effort to more swiftly deal with potential consumer harms. Suing a company takes a long time, and the FTC’s authority to obtain penalties from law-breaking companies is severely limited.  In many cases, the FTC can only claw back ill-gotten gains from misbehaving firms — allowing them a so-called first bite of the apple.  In these cases, only after a firm agrees to a settlement with the FTC, then engages in the bad behavior AGAIN, can civil penalties be assessed. In a fast-changing world, this is an ineffective tool for making sure consumer harm is quickly stopped.

Notices of penalty offenses were added to let the FTC skip to that second step. By telling companies that *other* companies had engaged in the same behavior and been penalized, that one-bite-of-the-apple step could be skipped. The FTC could go after misbehaving companies straight away, after this warning notice, skipping what I think of as the “FTC two-step.”

This effort is not uncontroversial, however. Use of the letters fell out of practice in the 1980s, and instead FTC lawyers used a different legal strategy (the so-called Section 13(b) authority — here’s a history lesson) to obtain penalties or seize and freeze assets belonging to companies engaged in deceptive behavior.  That strategy was challenged by a payday lender, and in 2021 the U.S. Supreme Court sided with the lender, eliminating this route. So FTC staff resurrected the warning letters.

(Again, I’m not a lawyer. For a different version of this history lesson, visit Venable’s website).

It’s not hard to find lawyers who think the FTC is on weak legal ground using the warning letters as this first step in the FTC two-step. Cases cited in some of these letters are decades old.  I don’t think anyone disagrees this is a workaround, and a less ideal solution than a new law passed by Congress that makes clear the FTC can freeze assets and penalize misbehaving companies on the first offense, the treatment that consumers expect from their local police officer.

If you’ve made it this far, you’ve come to understand my first point, which is how convoluted our efforts are to protect consumers in America — and how we still lay out the welcome mat to scammers and deceptive companies.  And I haven’t even delved into all the lame ways advertisers can shield themselves from federal (and state) “truth in advertising” laws.  Like “This product is not intended to diagnose, treat, cure or prevent any disease.” Or by our liberal use of the concept of “puffery,” which is legally protected. (It’s ok to say this is the “world’s favorite blog” but it’s not ok to say “4 out of 5 readers prefer this blog” unless I have something to back up that data.)

As I’m fond of saying, free markets are not free-for-all markets. True free markets require perfect information. We don’t have that. And the more imperfect our information is, the more markets require rules to protect the vulnerable.  Warning letters take us a step closer to that.  Armies of lawyers arguing about Section 13(b) authority for many years do not.

Understanding the Serious Risks to Executives’ Personal Cybersecurity & Digital Lives

Organizations are allocating millions of dollars to protecting their information assets and employees but are neglecting to take steps to safeguard the very vulnerable digital assets and lives of key executives and board members. Sponsored by BlackCloak, Ponemon Institute surveyed 553 IT and IT security practitioners who are knowledgeable about the programs and policies used to prevent cybersecurity threats against executives and their digital assets.

The purpose of this research is to understand the risks created by the cybersecurity gap between the corporate office and executives’ protection at home. According to 42 percent of respondents, their key executives and family members have already experienced at least one attack by a cybercriminal.

In the context of this research, digital executive protection extends cybersecurity to outside the office domain by safeguarding the personal digital lives of company executives, board members and key personnel to mitigate the risks of cybercriminals targeting them for hacking, IP theft, reputational risks, doxxing/swatting and financial attacks.

Digital assets include all aspects of an executive’s personal life: home address, cell number and email addresses; personal cell phone, tablet, computer and accounts (email, social media, etc.); home network; and any scams targeting them (doxxing, swatting, personal exposure, etc.).

A key takeaway from this research is that while it is likely that executives’ digital assets and lives will be targeted by cybercriminals, organizations are not responding with much-needed strategies, budget and staff. We found 58 percent of respondents say the prevention of cyberthreats against executives and their digital assets is not covered in their cyber, IT and physical security strategies and budgets. Moreover, only 38 percent of respondents say there is a team dedicated to preventing and/or responding to cyber or privacy attacks against executives and their families.

The following findings are evidence of the risk to executives’ physical security and digital assets.

Executives are experiencing multiple cyberattacks. According to the research, 42 percent of respondents say their executives and family members were attacked by cybercriminals and 25 percent of respondents say in the past two years executives experienced seven to more than 10 cyberattacks. In addition to doxxing and malware infections, other attacks include personal email attacks or compromises (42 percent) and online impersonation (34 percent).

Attacks against executives have the same serious consequences as a data breach. Cyberattacks against executives resulted in the theft of sensitive financial data (47 percent of respondents), loss of important business partners (45 percent of respondents) and theft of intellectual property/company information (36 percent of respondents). More than one-third of respondents (35 percent of respondents) say the consequence was improper access to the executive’s home network, which is not secured or patched to the level an organization would require in its offices and facilities.

 The finance and marketing departments are most likely to send sensitive data to executives’ personal emails, according to 23 percent and 22 percent of respondents respectively. However, the executive suite (21 percent of respondents) and board members (19 percent of respondents) are also guilty of sending sensitive information to personal emails.

 Staff time and the steps taken to detect, identify and remediate the breach are the most costly following an incident.  Thirty-nine percent of respondents say their organizations measure the potential financial consequences from such an attack. Fifty-nine percent of these respondents say their organizations measure the cost of staff time involved in responding to the attack and 55 percent of respondents say they measure the cost to detect, identify and remediate the breach.

It’s not if but when key executives will be targeted by organized criminals. Sixty-two percent of respondents say attacks against digital assets are highly likely and 50 percent of respondents say future physical threats against executives are highly likely.

Criminals are sophisticated and stealthy when targeting executives and other high-profile individuals. Executives are most likely to unknowingly reuse a compromised password from their personal accounts inside their company (71 percent of respondents) and 67 percent say it is highly likely that an imposter would send a text message to another employee at their company. Fifty-one percent of respondents say it is highly likely that an executive’s significant other or child receives an unsolicited email and clicks on a link taking them to a third-party website.

Organizations are not determining the extent of the threat to executives’ physical safety and security of personal digital devices. Only 41 percent of respondents say their organizations are assessing the physical risk to executives and their families and only 38 percent of respondents say organizations assess the risk to executives’ digital assets.

Executives are the weakest link in the ability to protect their lives and digital assets. Only 16 percent of respondents say their organizations are highly confident that a CEO or executives’ personal email or social media accounts are protected with dual factor authentication. Respondents are most confident (48 percent) that CEOs and other executives would know how to secure their personal email. Twenty-eight percent of respondents are highly confident that executives would know how to determine if an email is phishing and 26 percent of respondents say they are highly confident that executives would know how to set up their home network securely.

Only 32 percent of respondents say executives take some personal responsibility for the security of their digital assets and safety and only 38 percent of respondents say executives understand the threat to their personal digital assets.

As executives switch to their home networks and personal devices, visibility critical to detecting attacks is diminished. According to the research, it is very difficult to have visibility into the following areas when working outside the office: personal devices (74 percent of respondents), executives’ personal email accounts (66 percent of respondents), the executive’s home network to prevent cyberattacks (64 percent of respondents), executives’ privacy footprint (61 percent of respondents) and password hygiene (57 percent of respondents).

Executives working outside the office increase the attack surface significantly. Fifty-nine percent of respondents say ensuring executive protection is more difficult due to the increasing attack surface. However, only about half of respondents (53 percent) say attacks against the digital assets of executives outside the office domain are as much a priority as preventing such attacks when they are in the office. Only 50 percent of respondents say their organizations track potential attacks against executives, such as doxxing, phishing and malware attempts.

To reduce the risk, executives should be trained to secure their devices and physical safety. Most organizations are not doing the basics in enabling executives to protect themselves and their personal digital devices. Training executives to secure devices in and outside the workplace is conducted by only 37 percent and 36 percent of organizations, respectively. More organizations (53 percent of respondents) are providing self-defense training but only 42 percent of respondents say their organizations conduct tabletop exercises specific to the threats against executives.

Steps taken to protect executives’ lives and digital devices are ineffective. According to 56 percent of respondents, organizations are mainly focused on updating executives’ personal devices. Fifty-two percent of respondents say their organizations patch vulnerabilities and 51 percent of respondents say they use password managers. Only 45 percent of respondents say they are using dual factor authentication, 39 percent of respondents say they use botnet scanning and 36 percent of respondents say they analyze network connectivity on personal devices to detect malicious WiFi hotspots.

 Read the full white paper at BlackCloak’s website

 

Two-thirds use tech to avoid face-to-face interactions; the truth we don’t want to face

Click to watch this Amazon driver (heroically) deliver packages in the rain

Bob Sullivan

Machines dehumanize people.  I’ve long had a mental experiment in mind that I’d love to pull off one day — force people to walk at a grocery store the way they drive on a highway.  You know: cut each other off, flip the bird, breathe (literally) down someone’s neck on line.  It would all look and feel absurd, at least for most. All this to show people that we do things when we are in control of machines that we’d never do in “real” life. In other words, the machines control us, not the other way ’round.

Another easy thought experiment: a real-life mall where everyone says the things they’ve said (or heard) on Instagram or TikTok comments.   If you don’t know what I’m talking about, consult a woman.

This is bad for our souls.  When you treat another person like an object, you’re a jerk. But I believe it also rebounds into you, and a piece of your humanity dies every time you dehumanize another person, even if it “feels” good at the moment.  And this is how humans lose the robot war, without ever firing a shot.  We just surrender our humanity and take the robots’ side.  So if you are worried about ChatGPT, I think we have a lot more to worry about.

Cars, naturally, were just the beginning of this underhanded “invasion.” Smartphones have become a far more potent weapon in this dehumanization effort.  I don’t have to work hard to make my case – we’ve all seen someone staring down hypnotically at a handheld screen while a store clerk asks, “Can I help you? CAN I HELP YOU!?” a dozen times.

I saw a survey this week that provides a bit more evidence for my concern. It was sponsored by a website named PlayUSA.com, which describes itself as a news service that provides independent information about the legal U.S. gambling industry. The survey was designed to examine the impact of tech products on loneliness and it found:

  • 62% of respondents like that tech is replacing social interactions
  • 60% use self-service kiosks and mobile apps to skip talking with people
  • 75% report a decrease in social skills due to tech
  • 74% made a delivery driver leave food outside even though they could have opened the door to grab the delivery
  • 30% say they give drivers better ratings for not talking

As always, there’s a host of caveats to this survey.  It was conducted online, using Google forms, which does not produce the best random sample. The company told me it conducted four different surveys from four different age groups to ensure a balanced generational perspective — so it tried. That doesn’t give you a sample that’s truly as diverse as the U.S. population, of course.  Doing so is tricky even under the best of circumstances.

Still, the results ring true. They do not necessarily prove my thesis — that tech is making us more lonely – or worse, dehumanizing us. After all there are plenty of other explanations for this behavior.  It can feel safer to avoid meeting in person with a delivery driver; plenty of women will tell you chit-chatting with a driver can turn into something more uncomfortable very quickly; and self-checkout is often quicker than waiting for a cashier.  Plenty of people with crippling social anxiety now have an avenue for living that has made their lives infinitely better, and I don’t mean to discount that.

Still, for most, our lives are designed to be full of human interactions large and small, or at least I believe they should be. I’ve written before about Eric Berne’s theory of transactional analysis — that the sum of your everyday hellos and goodbyes and “how-are-yous” really does add to or subtract from your mental health.  The pandemic severely limited our ability to engage in such daily niceties, and technology is keeping us that way.  There are plenty of studies suggesting younger Americans are suffering from depression and social anxiety at rates we’ve not seen before.  Tech clearly enables isolation.

But I worry about something more.

Tech tends to put a great distance between powerful people and weak people. It enables abuse because it can make abuse invisible. You would never yell at an older person in a grocery store for taking an extra moment to be sure-footed while stepping forward in a line.  You probably wouldn’t hesitate to scream at that same person when behind the wheel in a car.

One more thought experiment: The next time someone drives or cycles dinner to you, imagine if you would do the same for them.   I venture to guess you’d never directly ask someone you knew to cycle in the pouring rain for 15 minutes to bring you ice cream – but it’s sure easy to click “deliver” on an app and have the goodies left by the door.

I’m not saying food delivery is evil, or even bad. But I am saying that it’s unhealthy to avoid looking another human being in the eye when you make them do something for you.  And my real fear about artificial intelligence? It’ll put yet another layer of 1s and 0s between powerful people and weak people. Another victory for robots in this war we are losing.

The Hidden Cybersecurity Threat in Organizations: Nonfederated Applications

Nonfederated applications pose an unseen and severe threat because in most organizations there is a lack of visibility into who has access to what and how accounts are secured. Sponsored by Cerby, Ponemon Institute surveyed 595 IT and IT security practitioners in the United States who are involved in their organization’s identity and access management strategy. The study aims to determine organizations’ level of understanding of the risks created by nonfederated applications and the steps that can be taken to mitigate the risk.

(Click here to download the full report immediately from Cerby’s website.)

A key takeaway from the research is that organizations don’t know what they don’t know when it comes to nonfederated applications. Less than half (49 percent) of organizations track the number of nonfederated applications that are not managed and accessed through their identity provider. Of those respondents who track nonfederated applications, 23 percent say they have between 101 and 250. The average number is 96. Despite efforts to have an accurate inventory, only 21 percent of these respondents are highly confident that they know all the nonfederated applications used throughout the enterprise.

Nonfederated applications are risky because they cannot be centrally managed using the organization’s IdP (59 percent of respondents). Fifty-one percent of respondents say they are risky because they do not support industry identity and security standards such as Security Assertion Markup Language (SAML) for single sign-on or System for Cross-domain Identity Management (SCIM) for the user onboarding and offboarding process. As defined in this research, nonfederated applications lack support for the security standards organizations need to manage at scale. Whether in the cloud or on premises, these applications do not support common industry security standards.

NOTE: An IdP is a service that stores and manages digital identities. The use of an IdP can simplify the process of managing user identities and access, as it allows users to use a single set of credentials across multiple systems and applications. Many organizations use IdPs to manage user access to internal and external systems, such as cloud-based applications or partner networks.
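To make the federation gap concrete, the SCIM standard mentioned above is essentially JSON over HTTP. The sketch below shows the kind of SCIM 2.0 create-user payload an IdP sends to a federated application; the user name and attribute values are illustrative, not taken from the study.

```python
import json

# Minimal SCIM 2.0 "create user" payload (RFC 7643 core User schema).
# A federated application exposes an endpoint such as POST /scim/v2/Users so
# the IdP can provision and deprovision accounts automatically; nonfederated
# applications offer no such interface, so access must be managed by hand.
payload = {
    "schemas": ["urn:ietf:params:scim:schemas:core:2.0:User"],
    "userName": "jdoe@example.com",  # illustrative identity
    "name": {"givenName": "Jane", "familyName": "Doe"},
    "active": True,                  # flipped to False at offboarding to revoke access
    "emails": [{"value": "jdoe@example.com", "primary": True}],
}

# The IdP would send this body with Content-Type: application/scim+json
body = json.dumps(payload)
print(body)
```

Offboarding is the mirror image: the IdP updates `active` to `false` (or deletes the resource), which is why deprovisioning is fast and centralized for federated applications and slow, manual work for nonfederated ones.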

The following findings are evidence of the risk posed by nonfederated applications. 

  • The cost and time of provisioning and deprovisioning access to applications quickly adds up. Before analyzing the risks, it is important to understand the costs. Seven hours is the average time spent provisioning access to a standard set of applications for one employee. At an average $62.50 hourly pay rate the cost is $437.50 per employee. To deprovision one employee takes an average of 8 hours costing $500 per employee. Organizations can use this benchmark to calculate the process’s impact based on the annual turnover in employees and contractors.
  • Salaries also need to be considered. An average of 8 people are involved in the provisioning and deprovisioning process in addition to their other responsibilities. The average annual salary per staff member is $81,000. Consequently, the total annual staff cost amounts to $648,000, with a significant portion allocated to the time-consuming manual work of provisioning and deprovisioning, which could be better utilized elsewhere.
  • The total average annual cost to investigate and remediate cybersecurity incidents involving nonfederated applications is $292,500. This is based on 47 hours each week (2,444 annually) to investigate potential unauthorized access and 43 hours weekly (2,236 annually) to investigate and remediate cybersecurity incidents caused by unauthorized access to nonfederated applications.
  • Nonfederated applications are represented across all application categories and are not limited to a single business unit. As discussed previously, only 49 percent of organizations are tracking the use of nonfederated applications. Only 21 percent of these respondents say their organizations are confident in knowing all the nonfederated applications being used. Nonfederated application use across business units underscores the difficulty in managing them.
  • Fifty-two percent of respondents say their organizations have experienced a cybersecurity incident caused by the inability to secure nonfederated applications. Sixty-three percent of these respondents say their organizations had four or more incidents. Loss of customers and business partners are the primary consequences of a cybersecurity incident caused by the inability to secure nonfederated applications, according to 43 percent and 36 percent of respondents, respectively.
  • Security and identity teams are often left out of managing and manually controlling access to nonfederated applications. According to the research, shared management of nonfederated applications leads to a decentralized approach. Business units (63 percent of respondents) are most likely to manage these applications followed by IT teams (54 percent of respondents). Only 45 percent of respondents say the security and/or identity teams are responsible for managing these applications. Moreover, 54 percent of respondents say the granting and revoking of access are controlled by business units.
  • Organizations are using inefficient manual processes to grant and revoke access to applications. An average of 84 applications in organizations represented in this research require an admin to manually log in to add, remove or update access, meaning the application doesn’t support SCIM and the organization cannot leverage automation through its IdP. The primary reasons for not automating the process are that SCIM is not supported (33 percent of respondents) and cost (31 percent of respondents).
  • Organizations rely upon business units to report their use of nonfederated applications. While there are several methods used to collect information about current nonfederated applications, business units are most likely to self-report their use of nonfederated applications (62 percent of respondents) followed by the use of a cloud access security broker (CASB) (48 percent of respondents) and endpoint detection tools (47 percent of respondents). Only 39 percent of respondents say business units complete a form to confirm the nonfederated applications used.
  • An average of more than half of tracked nonfederated applications do not support single sign-on (SSO). As discussed previously, there is an average of 96 nonfederated applications in organizations that track their use, and respondents estimate that an average of 50 of these do not support SSO. As described in the research, the benefit of SSO is that it permits a user to have one set of login credentials, for example a username and password, to access multiple applications. Thus, SSO eases the management of multiple credentials.
  • Organizations lack an effective process to prevent employees from putting data in nonfederated applications at risk. Few organizations report that they are effective in preventing employees’ reuse of passwords, retaining access to critical systems after they leave or change roles and preventing the disabling of MFA.
  • There is a desire to prioritize nonfederated application security, but the risk is underestimated due to a lack of awareness. While only 34 percent of respondents say their organizations do not make the security of nonfederated applications a priority, 44 percent of respondents say management underestimates the cybersecurity risks. When educated on the risks, 82 percent of respondents say the importance of securing nonfederated applications increased.
  • Employees are sharing their account login credentials, making it critical to have the proper security safeguards in place. Seventy-six percent of respondents say employees are sharing account login credentials: with both other employees and external collaborators (35 percent), with other employees only (21 percent) and with external collaborators only (20 percent).
  • Exposing, failing to rotate passwords and being unable to track who is accessing a shared account are top security concerns. Forty-one percent of respondents say employees or collaborators share accounts without concealing the password and another 41 percent say passwords are not rotated. Reused or weak credentials also create risk (36 percent of respondents).
  • Organizations are not able to reduce the cybersecurity risks caused by shared accounts. Half of respondents (50 percent) say their organizations’ access management strategy enables employees to share login credentials securely when required by the application. However, only 27 percent of respondents say their organizations are very or highly effective in reducing cybersecurity risks from shared accounts. Of those respondents (73 percent) who rank their organization’s effectiveness as low, 56 percent are motivated to reduce the cybersecurity risk.
  • Organizations lack processes and policies to make nonfederated applications secure. Only 41 percent of respondents have a process to make nonfederated applications secure and compliant with their organizations’ policies and only 35 percent of respondents say they have a policy that prevents the trial use of new nonfederated applications. Thirty-nine percent of respondents say the use of nonfederated applications is limited. As shown in this research, organizations do not like to limit the use of nonfederated applications because it can affect employee morale and productivity.
  • The challenge for organizations is that they don’t know what they don’t know. The top two challenges to securing nonfederated applications are the inability to know and manage all nonfederated applications because of a lack of visibility and the absence of an accurate inventory. These are followed by the inefficient use of manual processes to secure nonfederated applications. Budget and in-house expertise are considered less of a challenge.
  • Most organizations do not follow up to ensure adherence to password and MFA policies. Fifty-seven percent of respondents say employees are required and reminded to turn on MFA and about half (48 percent of respondents) say employees are required and reminded to rotate passwords regularly. However, only 40 percent of respondents say they follow up with every account to make sure MFA is turned on and passwords are rotated in accordance with their policies.
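The cost benchmarks in the bullets above can be reproduced with simple arithmetic. The sketch below uses the study’s reported averages (the $62.50 hourly rate, per-employee provisioning hours, and weekly investigation hours); the variable and function names are illustrative, not from the report.

```python
# Reproduce the study's cost benchmarks from its reported averages.
HOURLY_RATE = 62.50        # average hourly pay rate reported in the study
PROVISION_HOURS = 7        # average hours to provision one employee
DEPROVISION_HOURS = 8      # average hours to deprovision one employee

def per_employee_cost(hours: float, rate: float = HOURLY_RATE) -> float:
    """Cost of a manual access-management task for one employee."""
    return hours * rate

provision_cost = per_employee_cost(PROVISION_HOURS)      # $437.50
deprovision_cost = per_employee_cost(DEPROVISION_HOURS)  # $500.00

# Annual incident cost: 47 hours/week investigating potential unauthorized
# access plus 43 hours/week remediating incidents, over 52 weeks.
annual_hours = (47 + 43) * 52                 # 2,444 + 2,236 = 4,680 hours
incident_cost = annual_hours * HOURLY_RATE    # $292,500

print(provision_cost, deprovision_cost, incident_cost)
```

Multiplying the per-employee figures by annual turnover, as the report suggests, gives an organization-specific estimate of the manual provisioning burden.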

 To read the full report, visit the Cerby website.

Rules for Whistleblowers: a Handbook for Doing What’s Right

Bob Sullivan

Ever see something at work that you just knew wasn’t right, but felt like there was nothing you could do? Maybe there is something you can do. And maybe you can do it … anonymously.

When whistleblower Frances Haugen came forward and testified before Congress about what she thought was going wrong inside Facebook, she changed big tech forever. Or did she?

I recently talked about this with Stephen Kohn, author of the book Rules for Whistleblowers: A Handbook for Doing What’s Right. He’s also one of the nation’s leading whistleblower attorneys. We discussed the lasting impact Haugen did (or didn’t) have on the tech industry. But more important, he offered a roadmap for people who work in tech to come forward if they think something terribly wrong is happening at their company. And he explained how workers can do this without putting their livelihoods at risk.

“What we’ve seen is for every one whistleblower who’s willing to go public and really risk a lot, there’s a thousand who would go non-public and provide supporting information,” he said to me on the Duke Debugger podcast that I host. But those who go public often get “crushed” by well-funded legal teams.

“That’s why Congress in 2010 with the Dodd-Frank Act created these… what I call super anonymity laws. When I discussed those with the Senate banking committee, when the law was being debated …  I’ll never forget it, the Senate staffer said to me, ‘Steve, if Wall Street knows who you are, you will be crushed no matter what, and your career will be destroyed. You know, we have to create procedures to prevent that.’ And I said, ‘Hallelujah!’ ”

Whistleblowers can come forward without making a big public display, and in fact, government investigators often prefer that, he said.

“Anonymous means you don’t have to set your hair on fire. You don’t have to burn your bridges,” he said. “And the government wants you to stay working in the company so you can provide additional information about violations. Once you have filed, sometimes the government agencies will share your information or you’re aware of other agencies that might be interested, and  … say, tell the SEC to share your information. So it begins a process. The bottom line is these laws make it easier to do the right thing to report misconduct and not necessarily lose your job and career.”

Provisions in the Dodd-Frank bill have changed the nature of whistleblowing and they include large financial incentives.

“The SEC alone has paid whistleblowers about $1.5 billion in rewards, and in almost every one of those cases, no one even knows who the whistleblower is. They don’t receive big press reports. It’s almost all under the radar,” Kohn said.

Readers can listen to the entire interview, or read a transcript, at this site. Kohn’s book, Rules for Whistleblowers: A Handbook for Doing What’s Right, will be available at the National Whistleblower Center and bookstores on June 1.