
Investing in the Cybersecurity Infrastructure to Reduce Third-party Remote Access Risks

The purpose of this second annual research study is to understand how organizations are investing in their cybersecurity infrastructure to minimize third-party remote access risk, and what factors they weigh when making those investments. In this year’s report, we include the best practices of organizations that are more effective in establishing a strong third-party risk management security posture.

Sponsored by SecureLink, Ponemon Institute surveyed 632 individuals who are involved in their organization’s approach to managing remote third-party data risks and cyber risk management activities. According to the research, 54 percent of respondents say their organizations experienced one or more cyberattacks in the past 12 months and the financial consequences of these attacks during this period averaged $9 million.

The average annual investment in the cybersecurity infrastructure is $50.8 million. According to the research, incentives to invest in the infrastructure include reducing system complexity, improving effectiveness (reducing high false positives) and increasing in-house expertise.

Since last year’s research, no progress has been made in reducing third-party remote access risks: the security of third-party remote access is not improving. Making the right investments in the cybersecurity infrastructure to reduce these third-party risks is therefore becoming increasingly important. Respondents were asked to rate the effectiveness of their response to third-party incidents, detection of third-party risks and mitigation of remote access third-party risks on a scale of 1 = not effective to 10 = highly effective.

Only 40 percent of respondents say mitigating remote access risks is very effective, 53 percent say detecting remote access risks is very effective and 52 percent say responding to these risks and controlling third-party access to their network is highly effective.

The risks of third-party remote access

In the past 12 months, organizations that had a cyberattack (54 percent) spent an average of more than $9 million to deal with the consequences. The largest share of that $9 million ($2.7 million) was spent on remediation and technical support activities, including forensic investigations, incident response activities, help desk and customer service operations. This is followed by damage or theft of IT assets and infrastructure ($2.1 million).

Investments in the cybersecurity infrastructure should focus on improving governance and oversight practices and deploying technologies to improve visibility of people and business processes. Investment in oversight is important because of the uncertainty about third parties’ compliance with security and privacy regulations. Less than half (48 percent) of respondents say their third parties are aware of their industry’s data breach reporting regulations. Only 47 percent of respondents rate the effectiveness of their third parties in achieving compliance with security and privacy regulations that affect their organization as very high.

Data breaches caused by third parties may be underreported. Respondents reporting that their organization had a third-party data breach increased from 51 percent to 56 percent. However, organizations may not have an accurate understanding of the number of data breaches because only 39 percent of respondents are confident that a third party would notify them if a data breach originated in the third party’s organization.

In the past 12 months, 49 percent of respondents say their organizations experienced a data breach caused by a third party either directly or indirectly, an increase from 44 percent in 2021. Of these respondents, 70 percent say the breach was the result of giving too much privileged access to third parties, a slight decrease from 74 percent in 2021.

Organizations are having to deal with an increasing volume of cyberthreats. Fifty-four percent of respondents say their organizations experienced one or more cyberattacks in the past 12 months. Seventy-five percent of respondents say the volume of cyberthreats in the past 12 months increased significantly (25 percent), increased (27 percent) or stayed the same (23 percent). The security incidents most often experienced in the past 12 months were credential theft, ransomware, DDoS and lost or stolen devices.

Managing remote access to the network continues to be overwhelming, but the security of third parties’ remote access is not an IT/IT security priority. Sixty-seven percent of respondents say managing third-party permissions and remote access to their networks is overwhelming and a drain on their internal resources. Consequently, 64 percent of respondents say remote access is becoming their organization’s weakest attack surface. Despite the risks, less than half (48 percent) of respondents say the IT/IT security function makes ensuring the security of third parties’ remote access to the network a priority.

Remote access risks are created because only 43 percent of respondents say their organizations can provide third parties with just enough access to perform their designated responsibilities and nothing more. Further, only 36 percent of respondents say their organizations have visibility into the level of access and permissions for both internal and external users.
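The report stops at the survey numbers and does not say how “just enough access” should be implemented. Purely as an illustration of the idea, the following is a minimal Python sketch of one way to model a scope-limited, time-boxed grant for a third-party vendor; the class, vendor name and system names are hypothetical and are not drawn from the study.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import FrozenSet, Optional

# Hypothetical illustration only: a time-boxed, scope-limited grant for a
# third-party vendor, in the spirit of "just enough access and nothing more."

@dataclass(frozen=True)
class VendorGrant:
    vendor: str
    allowed_systems: FrozenSet[str]   # the only systems this vendor may touch
    expires_at: datetime              # access lapses automatically after this time

    def permits(self, system: str, now: Optional[datetime] = None) -> bool:
        """Allow a request only if it is in scope and the grant has not expired."""
        now = now or datetime.utcnow()
        return system in self.allowed_systems and now < self.expires_at

# Example: a contractor gets 48 hours of access to one console and nothing else.
grant = VendorGrant(
    vendor="hvac-contractor",
    allowed_systems=frozenset({"building-mgmt-console"}),
    expires_at=datetime.utcnow() + timedelta(hours=48),
)

print(grant.permits("building-mgmt-console"))  # True while the window is open
print(grant.permits("billing-database"))       # False: out of scope, denied by default
```

The point of the sketch is the default-deny shape: anything not explicitly in the grant, or requested after the window closes, is refused, which is the property the 43 percent figure above measures.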

The ability to secure remote access requires an inventory of third parties that have this access. Only 49 percent of respondents say their organizations have a comprehensive inventory of all third parties with access to their network. Of the 51 percent of respondents who say their organizations don’t have an inventory or are unsure, 60 percent say it is because there is no centralized control over third-party relationships and 48 percent cite the complexity of third-party relationships.

Organizations continue to rely upon contracts to manage the third-party risk of those vendors with access to their sensitive information. Only 41 percent of respondents say their organizations evaluate the security and privacy practices of all third parties before allowing them to have access to sensitive and confidential information.

Of these respondents, 56 percent say their organizations acquire signatures on contracts that legally obligate the third party to adhere to security and privacy practices, followed by 50 percent who say written policies and procedures are reviewed. Only 41 percent of respondents say their organizations assess the third party’s security and privacy practices.

A good business reputation is the primary reason not to evaluate the security and privacy practices of third parties. Fifty-nine percent of respondents say their organizations are not evaluating third parties’ privacy and security practices or they are unsure if they do. The top two reasons are confidence in the third party’s business reputation (60 percent) and the fact that the third party is subject to contractual terms (58 percent).

Ongoing monitoring of third parties is not occurring in many organizations, and a possible reason is that few organizations have automated the process. Only 45 percent of respondents say their organizations monitor on an ongoing basis the security and privacy practices of third parties with whom they share sensitive or confidential information.

Of these organizations, only 36 percent of respondents say the monitoring process of third parties is automated. These organizations spend an average of seven hours per week automatically monitoring third-party access. Those organizations that manually monitor access (64 percent of respondents) say they spend an average of eight hours each week monitoring access. The primary reasons for not monitoring third parties’ access are reliance on the business reputation of the third party (59 percent of respondents), the third party being subject to contractual terms and not having the internal resources to monitor (both 58 percent of respondents).

Poorly written security and privacy policies and procedures are the number one indicator of risk. Only 41 percent of respondents say their third-party management program defines and ranks levels of risk. Sixty-three percent of respondents say poorly written security and privacy policies and procedures are a primary indicator of risk, followed by a history of frequent data breach incidents (59 percent of respondents). Only 35 percent say they view the third party’s use of a subcontractor that has access to their organizations’ information as an indicator.

To read the full report, including charts and graphs, visit SecureLink’s website.

‘Data broker’ Oracle misleads billions of consumers, lawsuit alleges, enables privacy end-arounds

Bob Sullivan

At least one Big Tech firm has glided mostly under the radar during the recent techlash — Oracle — but that relative obscurity might be coming to an end. A class-action lawsuit filed against the data giant by some heavy-hitters in the privacy world alleges that Oracle combines some of the worst qualities of Google and Facebook, at a scale even those firms have trouble matching.  Oracle has incredibly intimate information on 5 billion people around the planet — and the lawsuit alleges that the firm trades on that information largely without anyone’s consent.

Oracle combines a variety of data it collects through its own cookies, data it buys from third parties, and data it acquires from real-world retailers, to harmonize billions of data points into single identities that can be targeted with political or commercial messages, the lawsuit says.  This “onboarding” of offline with online data creates uniquely detailed profiles of consumers.

“The regularly conducted business practices of defendant Oracle America amount to a deliberate and purposeful surveillance of the general population,” the lawsuit alleges. “In the course of functioning as a worldwide data broker, Oracle has created a network that tracks in real-time and records indefinitely the personal information of hundreds of millions of people.”

Oracle holds data on 300 million Americans, or about 80% of the population, according to the suit. Those individual consumers can be tracked “seamlessly across devices.” In a video posted by the plaintiffs, Oracle founder Larry Ellison boasts that Oracle data can track consumers into stores, micro-target them right to the location where they stand in an aisle, and connect that to store inventory in that very aisle.

“By collecting this data and marrying it to things like micro-location information, Internet users’ search histories, website visits and product comparisons along with their demographic data, and past purchase data, Oracle will be able to predict purchase intent better than anyone,” Ellison boasts in the video.

The firm also builds extensive profiles of individuals, then places them into marketable categories.

“Oracle then infers from this raw data that, for example, a person isn’t sleeping well, or is experiencing headaches or sore throats, or is looking to lose weight, and thousands of other invasive and highly personalized inferences,” the suit says.

One of the plaintiffs is Johnny Ryan, Senior Fellow of the Irish Council for Civil Liberties, who I interviewed extensively for our recent “Too Big to Sue” podcast with Duke University.

“Oracle has violated the privacy of billions of people across the globe. This is a Fortune 500 company on a dangerous mission to track where every person in the world goes, and what they do. We are taking this action to stop Oracle’s surveillance machine,” Ryan said in a statement about the lawsuit.

One serious claim the lawsuit makes: Oracle goes to great trouble to avoid consumers’ stated preferences *not* to be tracked — the firm combines various cookies to avoid third-party cookie blocking tools, for example.

“Data brokers participating in Oracle’s Data Marketplace freely portray themselves as able to defeat users’ anti-tracking precautions, a pitch at odds with Oracle’s privacy policies and its professed respect for the right of individuals to opt out,” the suit alleges. It cited a study that found “even when users specifically decline consent to be tracked, various adtech participants—including Oracle—ignore those expressions of consent and place trackers on users’ devices. The same study discovered that Oracle places tracking cookies on a user’s device before the user even has a chance to decline consent.”

The lawsuit also claims that Oracle uses categories with clever names as an evasive maneuver to sell data the firm claims not to share.

“Oracle segments people based on intimate information, including a person’s views on their weight, hair type, sleep habits, and type of insurance,” it says. “Other categories appear to be proxies for medical information that Oracle purports not to share, like “Emergency Medicine,” “Imaging & Radiology,” “Nuclear Medicine,” “Respiratory Therapy,” “Aging & Geriatrics” “Pain Relief,” and “Allergy & Immunology.” ”

Oracle’s data marketplace also enabled racially-targeted advertising, even after Facebook took steps to stop it, the suit claims: “Oracle facilitates the creation of proxies for protected classes like race, and allows its clients to exclude on that basis. For example, one Oracle customer website describes how, after Facebook made it more difficult to target ads based on race in the employment and credit areas, Oracle helped it achieve the same result.”

Oracle’s data marketplace also permits activity that many would find a threat to democracy, the suit claims: “During the summer of 2020, Mobilewalla tracked mobile devices to collect data on 17,000 Black Lives Matter protesters including their home addresses and demographics. Mobilewalla also released a report entitled ‘George Floyd Protester Demographics: Insights Across 4 Major US Cities,’ which prompted a letter and investigation by Senator Elizabeth Warren and other Congress members.”

Some categories sold by data partners are incredibly intimate:

“OnAudience, a ‘data provider’ that profiles Internet users by ‘observing user activity based on websites visited, content consumed and history paths to find clear behavior patterns and proper level of intent,’ lets customers target individuals categorized as interested in ‘Brain Tumor,’ ‘AIDS & HIV,’ ‘Substance Abuse’ and ‘Incest & Abuse Support.’ ”

The suit alleges violation of California’s Unfair Competition Law and various other counts. A good analysis of the plaintiffs’ legal strategy can be found in this Twitter thread by Robert Bateman.

It’s good that Oracle’s time under the radar might be ending; the firm should be standing next to Google, Facebook, Apple, Microsoft and the other Big Tech names finally getting the scrutiny they deserve.

Email Data Loss Prevention: The Rising Need for Behavioral Intelligence

The purpose of this study is to learn what practices and technologies are being used to reduce one of the most serious risks to an organization’s sensitive and confidential data. The study finds that email is the top medium for data loss and the primary pathways are employees’ accidental and negligent data exfiltration through email. According to the research, 59 percent of respondents say their organizations experienced data loss and exfiltration that involved a negligent employee or an employee accidentally sending an email to an unintended recipient. On average, organizations represented in this research had 25 of these incidents each month.

To reduce these risks, organizations should consider technologies that leverage machine learning and behavioral capabilities. This approach enables organizations to proactively prevent data loss vulnerabilities so organizations can stop email data loss and exfiltration before they happen. Thirty-six percent of respondents say their organizations use behavior-based machine learning and artificial intelligence technology. Seventy-seven percent of these respondents report that it is very effective.
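The report does not describe how any particular vendor’s technology works. Purely as an illustration of what a “behavioral” check can mean in practice, here is a minimal Python sketch of one such signal: flagging an outgoing message whose recipient domain the sender has rarely or never emailed before. The function names, addresses and threshold are hypothetical, not drawn from the study.

```python
from collections import Counter
from typing import List

# Illustrative sketch only, not a description of any vendor's product.
# One simple "behavioral" signal: warn on an outgoing email when the sender has
# little or no history with the recipient's domain, a common misdirected-email clue.

def recipient_domain(address: str) -> str:
    return address.rsplit("@", 1)[-1].lower()

def is_anomalous(sender_history: List[str], recipient: str, min_count: int = 3) -> bool:
    """Flag the message if the sender has emailed this recipient's domain fewer than min_count times."""
    domain_counts = Counter(recipient_domain(a) for a in sender_history)
    return domain_counts[recipient_domain(recipient)] < min_count

# Example: the sender routinely emails partner.com but has never emailed partnerr.com (a likely typo).
history = ["alice@partner.com"] * 20 + ["bob@partner.com"] * 5
print(is_anomalous(history, "alice@partner.com"))   # False: well-established pattern
print(is_anomalous(history, "alice@partnerr.com"))  # True: unfamiliar domain, worth warning the sender
```

A production system would learn each sender’s normal patterns and combine many signals rather than rely on a single hand-set threshold, but even this toy rule shows why the approach is described as behavioral rather than purely rule-based.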

Sponsored by Tessian, Ponemon Institute surveyed 614 IT and IT security practitioners who are involved in the use of technologies that address the risks created by employees’ negligent email practices and insider threats. They are also familiar with their organizations’ data loss protection (DLP) solutions.

Current solutions and efforts to minimize risks caused by employees’ misuse of emails are ineffective. Respondents were asked to rate the effectiveness of their organizations’ ability to prevent data loss and exfiltration caused by vulnerabilities in employees’ use of email. Only 41 percent of respondents say their current data loss prevention solutions are effective or very effective in preventing data loss caused by misdirected emails. As one consequence of not having the right solutions, only 32 percent of respondents say their organizations are effective or very effective in preventing these incidents.

The following recommendations are based on the research findings. 

  • Data is most vulnerable in email. Employee negligence when using email is the primary cause of data loss and exfiltration. According to the research, 65 percent of respondents say data is most vulnerable in emails. In the allocation of resources, organizations should consider technologies that reduce risk in this medium. On average, enterprises have 13 full-time IT and IT security personnel assigned to securing sensitive and confidential data in employees’ emails.
  • Organizations should assess the ability of their current technologies to address employee negligence risks related to email. Forty percent of respondents say email data loss and exfiltration incidents were due to employee negligence or by accident. Additionally, 27 percent of respondents say it was due to a malicious insider. As revealed in this research, many current email data loss technologies are not considered effective in mitigating these risks. Accordingly, organizations should consider investing in technologies that incorporate machine learning and artificial intelligence to understand data loss vulnerabilities through a behavioral intelligence approach.
  • Identify the highest risk functions in the organization. According to respondents, the practices of the marketing and public relations functions are most likely to cause data loss and exfiltration (61 percent of respondents). Accordingly, organizations need to ensure they provide training that is tailored to how these functions handle sensitive and confidential information when emailing. As shown in this research, organizations are most concerned about data loss involving customer and consumer data, which is very often used by marketing and public relations as part of their work. Other high-risk functions are production and manufacturing (58 percent of respondents) and operations (57 percent of respondents). Far less likely to put data at risk are client services and relationship management functions (19 percent of respondents).
  • Despite the risk, many organizations do not have training and awareness programs with a focus on the sensitivity and confidentiality of data transmitted in employees’ email. Sixty-one percent of respondents say their organizations have training and awareness programs for employees and other stakeholders who have access to sensitive or confidential personal information. Only about half (54 percent of the 61 percent of respondents with programs) say the programs address the sensitivity and confidentiality of data in employees’ emails.
  • Sensitive and confidential information is at risk because of the lack of visibility and the inability to detect employee negligence and errors. Fifty-four percent of respondents say the primary barrier to securing sensitive data is the lack of visibility into sensitive data that is transferred from the network to personal email. Fifty-two percent of respondents say the greatest DLP challenges are the inability to detect anomalous employee data handling behaviors and the inability to identify legitimate data loss incidents.
  • On average, it takes more than 18 months to deploy and realize value from a DLP solution. Organizations spend an average of slightly more than a year (12.3 months) to complete deployment of the DLP solution and more than half a year (6.5 months) to realize its value. The length of time to deploy and realize value can affect the ability of organizations to achieve a more mature approach to preventing email-related compromises by employees.
  • The length of time spent detecting and remediating email compromises puts sensitive and confidential data at risk. According to the research, security and risk management teams spend an average of 72 hours to detect and remediate a data loss and exfiltration incident caused by a malicious insider on email, and an average of almost 48 hours to detect and remediate an incident caused by a negligent employee. This places a heavy burden on these teams, who must triage and investigate these incidents and are then unavailable to address other security issues and incidents.

Other takeaways

  • Regulatory non-compliance is the number one consequence of a data loss and exfiltration incident followed by a decline in reputation. These top two consequences can be considered interrelated because non-compliance with regulations (57 percent of respondents) will impact an organization’s reputation (52 percent of respondents). Regulatory non-compliance is considered to have the biggest impact on organizations’ decision to increase the budget for DLP solutions.
  • Organizations consider end-user convenience very important. Seventy-five percent of respondents say end-user convenience in DLP solutions is very important.

To read the full report, please visit Tessian’s website.

Data brokers, in bed with scammers, aimed their algorithms at millions of elderly, vulnerable

Bob Sullivan

Several large data brokers profited for years by selling what are known, cruelly, as “suckers lists” to criminals who used them to fine-tune scams designed to cheat elderly and vulnerable people, a new report on LawfareBlog explains. It’s a stomach-churning analysis which shines a harsh light on an open secret about many industries: Stealing from the elderly is good business, and rarely comes with much risk.

The Lawfare story, written by Justin Sherman and Alistair Simmons, describes the prosecution of three large data brokers (Epsilon, Macromark, and KBM Group) during the past couple of years. Details in the guilty pleas are harrowing. Much more below, but first, a quick step onto the soap box:

Medium-sized crime gangs, or even small-time criminals, are usually behind the scams I’ve written about for several decades — fake sweepstakes, fraudulent grant programs, and so on.  Many are life-altering for the victims. Often, their entire life savings is stolen. For the elderly, there is no time to recover from such a scam.  Some get sick, or even commit suicide after a bout with a scam like this.  The criminals who take their money should be vigorously prosecuted, of course. But for many years, I have seen that a slate of legitimate, multi-national companies facilitate these crimes. Sometimes, they even profit from these crimes.  And sometimes, their very business model depends on this dirty business. Yet, these companies that remain an arm’s length from the victims often suffer little to no consequence. That has to change.  Matt Stoller, a loud advocate for antitrust reforms, has a habit of yelling “Jail Time!” when obvious corporate malfeasance is largely ignored by our judicial system.  It’s a cry more should join. Stealing from the elderly and vulnerable should not be an acceptable business model, or even an acceptable by-product of a business model. People who help criminals steal from the elderly should go to jail.

Onto the details. Readers might remember Epsilon from an incident that’s a decade old, when the then-obscure data hoarding firm suffered what some called the largest data breach in history. Starting before that incident, and lasting through July 2017 — for more than a decade — Epsilon employees helped criminals send mail stuffed with all manner of obvious scams, according to court documents. There were fake sweepstakes, alleged personal astrology invitations, auto-warranty solicitations, dietary-supplement scams, and fraudulent government grant offers. Epsilon employees knew these were scams.  Clients would occasionally get arrested. In one case, a worker lamented that one client, “brought us rev[enue] for 5 years but the law caught up with them and shut them down.”

The solicitations were fraudulent on their face. Sweepstakes mailer recipients were told they were one of a kind; it was obviously impossible they could all be winners. Yet Epsilon continued to work with such firms. It earned money from selling targeted lists of those who were most likely to respond.  In fact, it had special names for the characters in this scam: targeted consumers were called euphemistically “opportunity seekers,” before they were victims. Clients who sent the fraudulent mailers were called “opportunistic.” The Justice Department leaves no doubt what these terms really meant — “opportunity seekers frequently fell within the same demographic pool: elderly and vulnerable Americans.”

During this decade, Epsilon helped criminals attack 30 million American consumers by selling these companies data that was used to facilitate “fraudulent mass-mailing schemes,” according to the Department of Justice.

Meanwhile, there was also a devilish feedback loop. Data from the criminal enterprises was used to hone Epsilon’s algorithms, as Sherman and Simmons explain in their piece:

“Two employees ‘collaborated on a model in February 2016 ‘for clients engaged in fraud that used data from’ one of Epsilon’s clients. They expanded Epsilon’s databases by getting information back from scammers, and then used that information to determine which people would be most susceptible to future targeting. In other words, those who fell for a scam once would be documented in Epsilon’s database, so it could provide other scammers with lists of people who were identified to be … receptive to that kind of marketing.”

Epsilon agreed to “deferred prosecution” in its case, which means it essentially pled guilty and agreed to pay $150 million in fines and restitution. Separately, two former Epsilon employees have been charged criminally, a welcome development. One year later, their federal cases are slowly making their way through a Colorado federal court. The most recent action in the case involved Epsilon trying to quash a subpoena issued by the defendants, who seem to believe corporate documents could exonerate them by showing they were just following orders. Epsilon denies that and says the defendants are on an evidentiary fishing expedition.

Macromark’s prosecution followed similar lines, court documents say.   That firm also spent more than a decade helping criminals steal millions of dollars from thousands of victims who were targeted because they were likely to respond to a fraudulent psychic scam.

“In general, the most effective mailing lists for any particular fraudulent mass mailing were lists made up of victims of other mass-mailing campaigns that used similarly deceptive letters,” the Macromark guilty plea says.

There was no doubt Macromark knew what clients were doing, according to the plea document: “A Macromark executive sent a client a link to a newspaper article with the headline ‘Feds: Mail fraud schemes scam seniors,’ together with materials connecting the client’s own letters to the subject of the newspaper article.” The guilty plea says a Macromark employee actually helped a client change names to evade law enforcement.

“List brokers and service providers such as Macromark who facilitate these schemes are especially dangerous,” said Inspector in Charge Delany DeLeon-Colon of the U.S. Postal Inspection Service’s Criminal Investigations Group, which investigated the crime. “Data firms such as this have extraordinary access to consumer’s personal information, not just their mailing address. The sale and distribution of this data exponentially magnifies the scale and impact of these schemes.” Macromark pleaded guilty to wire fraud, and admitted that the lists it provided to scammers led to losses of $9.5 million from victims. The company was sentenced to three years of probation and a $1 million fine.

Two Macromark executives were also indicted for mail and wire fraud as part of that investigation.

At KBM Group, an employee enjoyed a laugh at the expense of victims, court documents say. One solicitation sent using KBM data said recipients were entitled to $45,000 from an old dormant account, which would be released if a small fee was paid. A general manager at KBM said in an email, “Who responds to this stuff?? Obviously, we have those people.” Later, that same manager fought for a client that another employee had flagged as fraudulent, leading to the sale of 100,000 consumers’ data.

KBM pled guilty and agreed to pay victim compensation penalties totaling $42 million.

Fines are fine. Occasionally, victims of these frauds do get some money back thanks to restitution funds, and that’s fine, too, though often years late and many dollars short. Still, these examples show how brazen companies can be when providing a platform for criminals to connect with vulnerable people. Platform accountability calls for swift justice and jail time.  Each week as host of The Perfect Scam, I listen to people talk about their lives torn apart by crimes like these.

When your actions logically begin a chain of events that leads to ruined lives, well, your life should be ruined, too.

I’ll let Sherman and Simmons have the last word:

“Data brokers are extremely profitable and can overcome imposed fines while continuing their operations. The more money they make, the more money they will have to spend on legal defenses. In the three mentioned cases, the data brokers’ internal compliance measures were ineffective, because these companies already knew that they were partnering with scammers and continued to do so because they saw it as financially advantageous. If controls were in place, they were ignored. And in the one case where controls were enforced, the controls were overridden by data broker employees pushing for profit above all else. This raises a series of critical policy questions about the effectiveness of company controls today and how much company controls should be prioritized as part of a policy solution when there is evidence that they can be overridden.

“Comprehensive legislation, at the federal if not state level, to regulate data brokerage and prevent and mitigate its harms is necessary to protect all Americans. This should include a focus on stopping the algorithmic revictimization of people who fall for scams. It should also include a focus on controlling the sale and licensing of data on vulnerable Americans—particularly when data brokers knowingly use that information to help scammers prey on the elderly, cognitively impaired, and otherwise vulnerable.”

 

The secrets of high-performing security organizations

As the threat landscape becomes more sinister, the ability to close the IT security gap is more critical than ever. Sponsored by HPE, this study has since 2018 been tracking organizations’ efforts to close gaps in their IT security infrastructure that allow attackers to penetrate their defenses.

The IT security gap is defined as the inability of an organization’s people, processes and technologies to keep up with a constantly changing threat landscape. It diminishes the ability of organizations to identify, detect, contain and resolve data breaches and other security incidents. The consequences of the gap can include financial losses, diminishment in reputation and the inability to comply with privacy regulations such as the EU’s General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA). Only 30 percent of respondents say their organizations are highly effective in keeping up with a constantly changing threat landscape and closing the IT security gap.

Ponemon Institute surveyed 1,848 IT and IT security practitioners in North America, the United Kingdom, Germany, Australia and Japan. This report presents the global findings and compares them to the 2020 global findings.  All respondents are knowledgeable about their organizations’ IT security and strategy and are involved in decisions related to the investment in technologies.

Only 35 percent of respondents are confident that their organizations can prevent a persistent threat below the platform that would result in data being stolen, modified or viewed by unauthorized entities. Similar to the last study, 48 percent of respondents believe attacks that have reached inside the network have the potential to do the greatest damage. Forty-two percent of respondents say that attacks inside the IT infrastructure can be detected quickly before they break out and cause a cybersecurity breach resulting in data stolen, modified, or viewed by unauthorized entities.

Best practices from organizations that are effective in closing the IT security gap

Thirty percent of respondents self-reported that their organizations are highly effective in keeping up with a constantly changing threat landscape. We refer to these organizations as “high performers” and compare their responses to those of the non-high performers, which we refer to as “other” respondents.

Following are the nine best practices of high-performing organizations.

High performers are more likely to have visibility and control into users’ activities and devices. Only 33 percent of high performers believe their security teams lack visibility and control into all activity of every user and device. In contrast, 80 percent of those in the other category say their teams lack visibility and control. High performers are also more likely to get value from their security investments (59 percent vs. 42 percent of respondents). However, both groups agree that the IT infrastructure has gaps that allow attackers to penetrate its defenses (60 percent of high performers and 61 percent of respondents in the other category).

High performers are more likely to agree that attacks that have reached inside the network have the potential to do the greatest damage. Fifty-six percent of high performers recognize the potential damage from attacks that have reached inside the network vs. 45 percent of respondents in the other category. Forty-seven percent of high performers are confident that their organizations have not experienced a persistent threat below the platform software that has resulted in data stolen, modified or viewed by unauthorized entities vs. 30 percent in the other category.

High-performing organizations are more likely to implement a Zero Trust Model. Sixty-four percent of high-performing organizations have a Zero Trust Model because government policies required it (25 percent), have a Zero Trust Model for other reasons (24 percent) or have selected elements from the Zero Trust framework to improve security (15 percent). In the other category, 36 percent of organizations are not interested in a Zero Trust approach (25 percent) or have chosen not to implement one (11 percent).

High performers are more likely to say that as compute and storage move from the data center to the edge, a combination of traditional security solutions and secure infrastructure is required (61 percent). The other respondents are more likely to say a new type of security will be required (59 percent).

IoT security is more of a concern for high performers. Eighty-five percent of high performers say identifying and authenticating IoT devices accessing the network is critical to their organization’s security strategy. Only slightly more than half (55 percent) of other respondents agree. In addition, high performers are more likely to say legacy IoT technologies are difficult to secure (80 percent vs. 69 percent of respondents in the other category). Forty percent of high-performer respondents say their IoT devices are appropriately secured with a proper security strategy in place vs. 15 percent of respondents in the other sample.

High-performing organizations say security technologies are very important for their digital transformation strategy. Seventy-seven percent of high-performing organizations say it is important (35 percent of respondents) or highly important (42 percent of respondents) to have security technologies to support digital transformation. In contrast, 53 percent of the other respondents say it is important or highly important. 

High performers take a different approach to server security and backup and recovery. Eighty-eight percent of high performer respondents say backup and recovery is a key component of their security strategy and 68 percent of high performers say their organizations make server decisions based on the security inherent within the platform.

 High-performing organizations are more aware of the benefits of automation. The most important benefits are the ability to find attacks before they do damage or gain persistence (78 percent of high performers) and reduction in the number of false positives that analysts must investigate (74 percent of high performers). They also say automation is critical when implementing an effective Zero Trust Security Model (71 percent of respondents).

High-performing organizations are more likely to see the important connection between privacy and security. Ninety-four percent of respondents in high-performing organizations say it is not possible to have privacy without a strong security posture. Eighty-seven percent of high performers believe a strong cybersecurity posture reduces the privacy risk to employees, business partners and customers. High performers are less likely to believe human error is a risk to privacy.

To read the rest of this report, download it from HPE.com

A million appeals for justice, and 14 cases overturned — Facebook Oversight Board off to a slow start

Bob Sullivan

A million appeals for justice, and 14 reversals.  That’s the scorecard from the Facebook Oversight Board’s first annual report, released this week. The creative project has plenty going for it, and I think some future oversight board can benefit greatly from the experience of this experiment, launched by Facebook parent Meta in 2020. Still, it’s hard to see how this effort is making a big impact on the problems dogging Facebook and Instagram right now.

A few months ago, I interviewed Duke University law student Alexys Ogorek about her ongoing research into the Oversight Board for our podcast, “Defending Democracy from Big Tech.”  Her conclusion: There are plenty of interesting ideas in the organization, but in practice, it’s not accomplishing much.  Only a tiny fraction of cases are considered, she found, and decisions take many months. Not very practical for people who feel like their innocent comment about a political candidate was wrongly removed a month before an election.  You can hear our discussion of this on Apple Podcasts, or by clicking play below.  The Oversight Board’s annual report confirmed most of Ogorek’s research, but there are plenty of interesting nuggets in it. I’ve cobbled them together below.

Facebook removes user posts all the time — perhaps it’s happened to you — with little or no explanation.  After years of public frustration with this practice, the firm launched an innovative project called the Facebook Oversight Board. It’s billed as an independent, outside entity that can make binding decisions — mainly, tasked with telling Facebook to restore posts it has removed incorrectly.  Most of the time, these takedown decisions are made by automated tools designed to detect hate speech, harassment, violence, or nudity.  In a typical scenario, a user posts a comment that contains language that is judged to include racial slurs, or language that encourages violence, or adult content, or medical misinformation, and the post is removed. Users who disagree can file an appeal, which might be judged by a person at Facebook.  If that appeal fails, users now have the option to appeal to this outside Oversight Board.

This is a good idea. We should all be uncomfortable that a large corporation like Facebook gets to make decisions about what stays and what goes in the digital public square. Yes, the First Amendment doesn’t apply to Facebook in most of these cases, but because Meta is such a powerful entity, when it acts as judge and jury it offends our notions of free speech. So the experiment is worthwhile and, like Ogorek, I’ve tried to look at it with an open mind.

One big problem revealed in the report is the tiny, tiny fraction of cases the board can take up, combined with the 83 days it took to decide cases.  About 1.1 million people appealed to the board from  October 2020 to December 2021, and only 20 cases were completed. Of them, the board overturned Facebook’s choice 14 times. To be fair, the board says it tried to choose cases that had wider impact, and could set precedent.  Still, the numbers show the board process, to put it politely,  doesn’t scale.

“I am struggling with this due to a cognitive disconnect. They had 1.1 million requests but only examined 20 cases. In those 20 cases they found that Meta was wrong 70% of the time. So, is it likely that over 700,000 mistakes by Meta have gone unexamined,” said Duke professor David Hoffman.  “The small number of decisions when compared to the demand indicates to me that the (board) is at best a sampling mechanism to see how Meta is doing, and based on this sample it appears that Meta’s efforts at enforcing their own policies are a dismal failure. It all begs the question, what additional structure is necessary so that all 1.1 million claims can be analyzed and resolved.”

Reading through the cases Facebook did pick, one can gain sympathy for the complexity of the task at hand. I’ve pasted a chart above to show a sample of cases that rose to the top of the heap. But here’s one example of competing interests that require nuanced decisions: in one case, a video of political protestors in Colombia included homophobic slurs in some chants. Facebook initially removed the video; the board restored it because it was newsworthy. In another case, an image involving a woman’s breast was removed for violating nudity rules, but the image was connected to health care advocacy. It was also restored.

Other items in the report I found interesting: the board openly criticized Facebook’s lack of transparency in many situations.  It urges the firm to explain initial takedown decisions, and notes that moderators “are not required to record their reasoning for individual content decisions.”

There are other critical comments:

  • “It is concerning that in just under 4 out of 10 shortlisted cases Meta found its decision to have been incorrect. This high error rate raises wider questions both about the accuracy of Meta’s content moderation and the appeals process Meta applies before cases reach the board.”
  • “The board continues to have significant concerns, including around Meta’s transparency and provision of information related to certain cases and policy recommendations.”
  • “We have raised concerns that some of Meta’s content rules are too vague, too broad, or unclear, prompting recommendations to clarify rules or make secretive internal guidance on interpretation of those rules public.”
  • “We made one recommendation to Meta more times than any other, repeating it in six decisions: when you remove people’s content, tell them which specific rule they broke.” Facebook has partly addressed this suggestion.

The board also briefly took up the issues raised by Facebook whistleblower Frances Haugen. Among her revelations, she exposed a practice by the company to “whitelist” certain celebrities, making them exempt from most content moderation rules.  The board mentions this issue, and its demands for more information from Facebook about it, but only in passing. Combine this issue with other references to secret or unknown internal moderation policies that Facebook maintains, and it’s easy to see how the Oversight Board has a very difficult job to do. One wonders if its work might end one day with members resigning in frustration. Until then, it’s still worth learning whatever lessons this experiment might teach.  There are plenty of good ideas being tested.

 

How Covid-19 pushed more organizations into the cloud

During Covid-19, many organizations began or accelerated efforts to migrate applications to public cloud environments. The purpose of this study is to learn important information about how COVID-19 changed the migration of applications and the effect it has had on organizations’ cloud security practices and costs. As defined in this research, the post-COVID cloud boom refers to the impact of the pandemic on corporate cloud migrations and deployment.

According to the research, the use of public cloud resources for securing critical applications outpaced on-premises deployment because of the need to maintain a higher level of agility, flexibility and resilience during the pandemic. Further, the “boom” refers to the innovations made by cloud users and providers to respond to threats and vulnerabilities that have emerged during the pandemic.

Sponsored by Anitian, Ponemon Institute surveyed 643 IT and IT security practitioners in the United States whose organizations use all or mostly public clouds. A key takeaway from the research is that 61 percent of respondents say migration or expansion of cloud resources significantly increased (31 percent) or increased (30 percent) their organizations’ ability to achieve business goals such as revenue growth, expansion into new markets, retention and hiring of in-house expertise, and innovation.

Our study confirms that organizations’ migration and expansion of cloud resources during the COVID pandemic significantly increased their ability to achieve their business goals. Enterprise objectives such as revenue growth, expansion into new markets, retention and hiring of in-house expertise, and innovation were all prominent findings in our research.

The following findings reveal how the Post-Covid-19 boom is supporting three equally important objectives for organizations: business growth, security posture and financial strength.

 Business growth:

  • Despite the challenges of dealing with COVID, migration and transition to public clouds resulted in a boom. During this period, many organizations realized greater agility and innovation in responding to threats and vulnerabilities that emerged during the pandemic.
  • The use of all or mostly public clouds increased significantly in the post-Covid-19 era, resulting in many organizations benefiting from the boom. The boom significantly increased or increased the ability of organizations to achieve their business growth despite risks due to a remote workforce, according to 61 percent of respondents.
  • The primary benefits from the boom are to support business goals. According to the research, 62 percent of respondents say the migration or transition to the public cloud was to reduce cost, 53 percent of respondents say it is to increase efficiency and 41 percent of respondents say it is to support business growth.

Security posture:

  • Organizations’ cloud security improves in the post-Covid-19 boom. Pre-Covid-19, before transition or migration to the public cloud, 35 percent of respondents say their organizations had a very effective cloud security posture. Post-Covid-19, about half (49 percent) of respondents say their organizations’ security posture is very effective. Further, business risk did not significantly increase or increase during migration or transition to the public cloud.
  • Remote worker productivity increased while security was supported in the cloud. The number of employees working remotely increased significantly during the pandemic, and organizations moved their applications to the cloud for both productivity and security reasons.

Special analysis: Financial strength

Ponemon Institute, as part of this research, conducted a benchmark study of 158 senior-level CISOs in companies that primarily transitioned or migrated to the public cloud during the pandemic (81 respondents) vs. companies that did not significantly transition or migrate from the on-premises environment (77 respondents) during this period.

As revealed, companies that primarily migrated or transitioned to the cloud have lower costs to secure the cloud and respond to the financial consequences of data breaches in the cloud. These organizations also made greater investments in security technologies because of the ability to reduce costs.

  • Lower costs to secure cybersecurity operations in the cloud. On average, organizations that primarily migrated or transitioned to the public cloud during the pandemic had lower costs to ensure the security of the cloud ($14.5 million) than organizations that primarily performed cybersecurity practices on-premises ($16.1 million), a net benefit of $1.6 million.
  • Lower data breach costs. Organizations that migrated or transitioned all or most of their cybersecurity practices to the public cloud had significantly lower data breach costs ($13.3 million vs. $18 million), a net benefit of $4.7 million.
  • Higher annual investments in cybersecurity operations in the public cloud. Because of the lower costs described above, organizations that performed cybersecurity operations in the public cloud were able to increase their annual investments ($16.8 million vs. $12.2 million), a net increase of $4.6 million in annual investment.

Visit Anitian’s website to download the full report. In it, you’ll find a complete analysis of the research findings. The report is organized according to the following themes.

  • The benefits of the post-COVID-19 cloud boom
  • Managing security risks in the cloud
  • Special analysis: The financial benefits of the post-COVID cloud boom
  • Steps taken to secure remote workers’ access to the cloud

Tim Hortons tracked when customers went to Starbucks … and much more

Bob Sullivan

How many sugars do you want with that coffee? And how much surveillance? If you were “cheating” on your favorite coffee shop with a different one, would you mind if an app told on you?

Earlier this month, Canada’s Privacy Commissioner found that the Tim Hortons chain violated the law when it surveilled app users, who were “tracked and recorded every few minutes of every day, even when their app was not open.” That sounds bad enough, but the story behind the investigation reveals far more creepy surveillance capitalism was going on. Two years ago, Financial Post journalist James McLeod used Canadian law to demand every piece of information Tim Hortons had collected on him, and spun it into a dramatic narrative.

“I had no idea how extensive the tracking data was until I saw it. There were readings taken at all hours of the day and night, and (the app) kept tabs on me every time the app thought I was visiting one of its competitors,” he wrote.

The app, McLeod found, “identified where he lived and worked…and noted when it believed he entered a Starbucks, Second Cup, McDonald’s, Pizza Pizza, A&W, KFC or Subway,” according to the Canadian investigation.  It also knew when he went to a Toronto Blue Jays baseball game, when he went to Manitoba for a wedding, even when he arrived at Amsterdam’s Schiphol Airport.

The full investigation is worth reading; so is the original news report from 2020.

As conversation around a federal privacy law in the U.S. seems to be suddenly reignited, much to the delight of many who thought efforts to pass any legislation during this testy political season were doomed, there are still plenty of lingering questions. Have tech industry insiders had too much to say about the proposed language in the American Data Privacy and Protection Act? Will consumers really acquire new protections, or will the law entrench existing (bad) behaviors?  And how many exceptions will be made for law enforcement, for employers, even for data brokers?  Shoshana Wodinsky at Gizmodo offers a level-headed, skeptical analysis of the bill in its current form here. And a summary of its provisions is here (PDF).

But I think the timing of the Tim Hortons investigation is helpful, because however icky the story is, it also points to a couple of things that worked well. McLeod only had a hunch something was wrong because Google added a new privacy feature to his smartphone  — the option to limit sharing of location information with apps only when they are open. The Tim Hortons app was requesting more access than that, which led McLeod to file a so-called PIPEDA request. Under Canada’s Personal Information Protection and Electronic Documents Act (PIPEDA), users can ask companies to divulge all the data that’s been collected about them.  When McLeod got his response, he had his story, and Canada’s privacy commissioner had an investigation.

Under California’s state privacy law, consumers can now file what are known as DSARs — Data Subject Access Requests — and get reports similar to the one McLeod got from Tim Hortons. This disclosure right should be an essential tool for all Americans, made as easy as possible, and advertised broadly as a feature. In its current form, the American Data Privacy and Protection Act calls for such disclosure, and critically, for it to be made available “in a human-readable and downloadable format that individuals may understand without expertise.” Sure, most consumers won’t take advantage of the opportunity, but a few will. And who knows what stories might be uncovered as a result.

Architecting the Next Generation for OT Security

Ponemon Institute is pleased to have conducted the research behind the recent report Architecting the Next Generation for OT Security, sponsored by Applied Risk. I’ve included the executive summary of the report in this month’s column. The full report can be downloaded from the Applied Risk website.

“This is a time of change and challenges,” the Applied Risk report begins. “It’s an era that is both transformative and disruptive, shaped by digital technologies that are improving billions of lives around the world, even as they make us vulnerable in ways we never anticipated.

“This digitalization has been a fact of life for quite some time, but it is also becoming a factor in the operation of critical infrastructure and other industrial environments at an accelerating speed. At the same time, the Operational Technology (OT) systems that monitor and control industrial equipment, assets, processes and events in critical infrastructure are facing more and more threats from increasingly sophisticated malicious actors, including nation-states.

“In this dynamic environment, it is important to understand the thoughts and concerns that drive organizations to take action to keep their OT domains safe, secure and resilient. Applied Risk has undertaken the research needed to gain that understanding and to take a forward-looking approach to crucial questions about how to architect the next generation of OT Security solutions.

“The report, entitled “Architecting the Next Generation for OT Security,” is based on data collected by the Ponemon Institute from more than 1,000 IT and OT Security practitioners in the United States and Europe. The research was then complemented by input from the knowledge and experience that Applied Risk’s team has accumulated over the years, as well as analysis from the company’s own subject matter experts (SMEs).

“In this document, we present the results of that research. We use these data to assess current trends in the OT Security space, paying special attention to people-, process-, and technology-related issues, and offer recommendations on responses to these trends. Additionally, we describe current conditions in the OT Security realm and offer insight into the OT Security trends that are likely to emerge over the next two to four years.

“Respondents to the survey were asked to answer questions about how to architect the next generation of OT Security solutions. All respondents have responsibility for securing or overseeing cyber risks in the OT environment and understand how these risks impact the state of cyber security within their organizations. The research was then complemented by input from Applied Risk’s own engagements and assessments as well as analysis from our subject matter experts.

“Maximizing safety and minimizing unplanned outages are the top operational priorities for the organizations represented in this research. Reducing inefficiencies and minimizing operating costs are also high priorities, as is the ability to maintain plant connectivity. Respondents see the convergence of IT and OT systems as one of the primary drivers toward meeting these organizational targets. At the same time, though, they note that attackers are focusing more and more on industrial environments and are quickly developing OT skills – and that this shift has resulted in more sophisticated and clandestine attacks.

“The results of the survey indicate that companies are struggling to develop their OT Security maturity at a pace comparable to the speed with which attackers are developing their own skill sets. Meanwhile, the OT landscape is becoming more complex due to IT/OT convergence and to the introduction of Industrial Internet of Things (IIoT) devices, virtualization, and cloud computing in these environments. The overall sense of the respondents is that they need to do more to ensure that the business benefits of these new technological developments can be realized in a secure manner.

“More than half of the respondents believe that their cyber readiness is not at the right level yet and that they are not able to adequately minimize the risk of cyber exploits and breaches in the OT environment. As such, it is clear that there is still work to be done in general and across the board. The respondents are aware that they need to upskill their staff and that of their service providers and that they need better procedures. But above all, they understand that they will need enabling technologies to accelerate OT Security maturity. In summary, a combination of people-, process-, and technology-centric controls will remain key.”

Click here to obtain the full report.

‘Don’t Break My Prime?’

Bob Sullivan

Amazon Prime is *incredibly* popular with Americans. How popular? There are more than 150 million members in the U.S., with many (most?) there to enjoy “free”* two-day shipping. That’s roughly equal to the number of Americans who VOTED in 2020. And Amazon is betting all 150 million will do almost anything to keep that “free” shipping — including abandoning any pretense that they prefer to live in a free country that supports capitalism and free-market principles.

“Don’t Break Our Prime” is a deeply cynical ad campaign that’s being thrust onto your TVs and Internet space as pro-competition legislation makes its way through Congress. Amazon’s monopoly power has deeply hurt American small businesses for years — and made Jeff Bezos so much money that his hobby is going to space — but lately, the tech giant’s efforts to crush competitors have kicked up a notch.

Amazon’s customer base is so large that many small companies *have* to sell products on their platform. That’s weird, because it makes Amazon both a fulfillment service *and* a retailer.  There have long been accusations that Amazon’s data nerds study all these competitors on their platform, rip off their products, and then advantage Amazon brands when consumers search for items. More than accusations, actually. Congress recently made a criminal referral about this practice to the Department of Justice.

The kind, gentle term for this — minus the sneaky data harvesting — is “self-preferencing.” And yes, some supermarkets put their own brand of toilet paper on the best shelf, right there next to the Charmin. If there were a few dozen Amazon-like services out there, self-preferencing wouldn’t be so bad. But since Amazon has 150 MILLION U.S. MEMBERS on Prime, it’s deeply anti-competitive. Kind of like owning most of the gas refineries, and most of the gas trucks, and most of the gas stations….

So the U.S. Senate is considering legislation that would ban the practice of a platform advantaging its own products over competitors’. Both Democrats and Republicans support the idea.

Amazon has now come out swinging.  As is tradition with such campaigns, it is not attacking the premise of the bill. It’s hitting consumers in an emotional spot, with the message that — given all these concerns about inflation, and supply chains, and the pandemic — now would be a terrible time to lose Amazon Prime’s free shipping! Don’t Break Our Prime!

What are the particulars of this argument? Please read up on it. Here’s a position piece from Project Disco (?) that attempts to explain why Amazon NEEDS its anti-competitive behavior in order to provide two-day shipping. And here’s a Wired piece that does a good job of debunking that press release.

It should be clear that this isn’t about two-day shipping.  Rather, Amazon is hoping the popularity of Prime gives it enough clout to beat back, or at least delay, reasonable efforts at reform.

This is just the tip of the spear, however. Tech industry lobbyists are using this “Don’t Break Our Tech” model to defend the status quo in the face of various reform efforts. Google serves up the most self-serving links rather than the best links; it has engaged in ad bid-rigging; its business has been called the biggest data breach of all time. But if Congress messes with any of that, maps won’t work! Your privacy will be violated. Also, China will become more powerful!

These are emotionally compelling arguments; they just might work.  But as you begin to see all these “Don’t Break Our …” messages, please keep something in mind.  Silicon Valley invented the phrase “move fast and break things.” So it’s deeply ironic that Big Tech firms are suddenly afraid of trying new things that might break something.

The techlash is real. Human beings are realizing that for all the great gifts tech has given us, there are serious costs. The pendulum has swung too far. Big Tech companies aren’t the only source of trouble, but they are a good place to start. Tech companies are so large they don’t really have to answer to anyone right now. Facebook paid a $5 billion fine for ignoring a consent decree with the Federal Trade Commission and…didn’t really change anything. It’s time to draw some boundaries around the monoliths. As Harvard’s Francella Ochillo said to me in my recent docu-podcast, Defending Democracy from Big Tech, while we keep arguing about the details, these firms are making billions of dollars. We want techland to be free for competition. To do that, we’re going to have to break (up) a few things. So what? We’re on iPhone 13. The beta version of tech reform might not be perfect. That shouldn’t stop us. The cost of inaction is far too high. I’m here for v2.0, and 3.0 and … 13 Pro! You should be, too.

*Free shipping isn’t free, of course.  Prime costs money!  The cost is built into the products you buy.  As with Uber, the price is being (temporarily) subsidized by investment money, which is another way of saying it’s a Ponzi scheme. Also, Amazon drivers live awful lives because of Prime.  There’s all that cardboard. “Free” is never free.