Bouncing back from a breach: it’s getting better

As the threat landscape continues to worsen, it is more important than ever for organizations to be able to withstand and recover quickly from the inevitable data breach or security incident. Ponemon Institute and IBM Security are pleased to release the findings of the fifth annual study on the importance of cyber resilience to ensuring a strong security posture. In this year’s study, we look at the positive trends in organizations improving their cyber resilience, but also at the persistent barriers to achieving it.

The use of cloud services supports cyber resilience. As part of this research, we identified respondents who self-reported that their organizations have achieved a high level of cyber resilience and are better able to mitigate risks, vulnerabilities and attacks. We refer to these organizations as high performers.

As shown in Figure 1, 74 percent of respondents in high performing organizations vs. 58 percent of respondents in other organizations understand the importance of cyber resilience to achieving a strong cybersecurity posture. High performing organizations are also more likely to recognize the value of cloud services to achieving a high level of cyber resilience (72 percent of high performing respondents vs. 55 percent of respondents in the other organizations). According to these high performing organizations, cyber resilience is improved because of the cloud services’ distributed environment and economies of scale.

[NOTE: We define cyber resilience as the alignment of prevention, detection and response capabilities to manage, mitigate and move on from cyberattacks. This refers to an enterprise’s capacity to maintain its core purpose and integrity in the face of cyberattacks. A cyber resilient enterprise is one that can prevent, detect, contain and recover from a myriad of serious threats against data, applications and IT infrastructure.]

In this section of the report, we provide an analysis of the global consolidated key findings. Ponemon Institute surveyed more than 4,200 IT and IT security professionals in the following countries and regions: the United States, Australia, the United Kingdom, France, Germany, the Middle East (UAE/Saudi Arabia), ASEAN, Canada, India and Japan. Most of them are involved in securing systems, evaluating vendors, managing budgets and ensuring compliance.

The complete audited findings are presented in the Appendix of the full report, published on IBM’s website. The findings there are organized into the following topics:

  1. Since 2015, organizations have significantly improved their cyber resilience
  2. The steps taken that support the improvement in cyber resilience
  3. More work needs to be done to become cyber resilient
  4. Lessons from organizations with a high degree of cyber resilience
  5. Country differences

Here is a sample of section 1:

Since 2015, organizations have significantly improved their cyber resilience

More organizations have achieved a high level of cyber resilience. In 2016, less than one-third of respondents said their organizations had achieved a high level of cyber resilience; in this year’s research, the majority say they have. With the exception of the ability to contain a cyber attack, significant improvements have been made in the ability to prevent and detect an attack.

Cyber resilience has steadily improved since 2016. The percentage of respondents who say their organizations’ cyber resilience has significantly improved or improved has risen from 27 percent in 2016 to almost half (47 percent) in 2020.

The number of cyber attacks prevented is how organizations measure improvement in cyber resilience. Of the 47 percent of respondents who say their organizations’ cyber resilience has improved, 56 percent say improvement is measured by the ability to prevent a cyber attack.

As discussed previously, since 2015 the ability to prevent such an incident has increased significantly from 38 percent of respondents who said they have a high ability to 53 percent in this year’s research. Another indicator of improvement is the time to identify the incident, according to 51 percent of respondents. Similar to prevention, the ability to detect an attack has improved since 2015.

Expertise, governance practices and visibility are the reasons for improvement. To achieve a strong cyber resilience posture, the most important factors are hiring skilled personnel (61 percent of respondents), improved information governance practices (56 percent of respondents) and visibility into applications and data assets (56 percent of respondents). These are followed by such enabling technologies as analytics, automation and use of AI and machine learning.

Least important are C-level buy-in and support for the cybersecurity function and board-level reporting on the organization’s cyber resilience. Currently, only 45 percent of respondents say their organizations issue a formal report on the state of cyber resilience to C-level executives and/or the board.

Having skilled staff is the number one reason cyber resilience improves, and the loss of such expertise prevents organizations from achieving a high level of cyber resilience, according to 41 percent of respondents. Respondents also cite an adequate budget, the ability to overcome silo and turf issues, visibility into applications and data assets and properly configured cloud services as essential to improving their organizations’ cyber resilience.

To read the full report, visit IBM’s website.

Tracking the Covid tracker apps — dangerous permissions and ‘legitimizing surveillance’

Bob Sullivan

One app requires permission to disable users’ screen locks. Another claims it doesn’t collect detailed location information, but accesses GPS data anyway. Still another breaks its own privacy policy by sharing personal information with outside companies. And nearly all of them request what Google defines as “dangerous permissions.”

Is this the latest cache of hacker apps sold in the computer underground? No. These stories arise from the 121 Covid-19 apps that governments around the world have released in an attempt to track and control the virus. Security researchers are worried the apps can be used to track and control populations — long after the pandemic has passed. And even if governments have the best intentions in mind, cybercriminals might be able to access the treasure trove of data collected by these apps. After all, they’ve been built hastily, under pressure as Covid-19 has raged around the globe.

It makes sense to use technology to fight the virus. Contact tracing — identifying anyone a sick patient might have infected — is a staple technique to stem outbreaks. It’s easy to imagine a system that uses smartphones to ease this complicated task. But balancing public health with privacy concerns is tricky, if not impossible.


Volunteers who are worried about these dark possibilities recently launched Covid19Tracker. Contributors keep track of security analyses completed of each app and have made their database available for free download. Qatar’s Ehteraz app, which is mandatory and has already been downloaded 1 million times, allows the developer to unlock users’ smartphones, according to the organization’s database. Amnesty International’s analysis discovered a vulnerability in Qatar’s app that would have allowed hackers to access highly sensitive information collected by the app.

“The speed at which this technology is being deployed … should terrify people,” said Megan DeBlois, Covid19Tracker’s volunteer product manager. “I would argue in a lot of cases (this is) legitimizing surveillance with the lens of a public good, but without a lot of transparency.”

Most of the apps in Covid19Tracker’s database are made by governments outside the U.S. Contact tracers have been released rapidly across the E.U. and in places like Saudi Arabia and India. In the U.S., states have been slow to push out tracker apps, partly out of privacy and security concerns.

DeBlois recently presented the group’s findings at the virtual DefCon hacker convention in a talk titled “Who Needs Spyware When You Have Covid-19 Apps?”

There were some obvious patterns. While EU apps were less invasive than apps generated by other governments, nearly all of them requested permissions that Google defines as “dangerous,” such as precise location information; in fact, 74% of the apps in the database ask for GPS data. Sixteen of the apps request microphone access and 44 ask for camera access. Seven try to access phone contacts.
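The kind of static check behind numbers like these can be sketched in a few lines of Python: parse an app’s AndroidManifest.xml and flag any requested permissions that appear on Android’s dangerous-permission list. This is a simplified illustration, not the group’s actual tooling; the DANGEROUS set below is a small, illustrative subset of the real list, and the sample manifest is hypothetical.

```python
import xml.etree.ElementTree as ET

# A partial, illustrative subset of the permissions Android classifies
# as "dangerous" (runtime permissions). Not exhaustive.
DANGEROUS = {
    "android.permission.ACCESS_FINE_LOCATION",  # precise GPS location
    "android.permission.RECORD_AUDIO",          # microphone
    "android.permission.CAMERA",
    "android.permission.READ_CONTACTS",
}

def dangerous_permissions(manifest_xml: str) -> set:
    """Return the dangerous permissions an app's manifest requests."""
    ns = "{http://schemas.android.com/apk/res/android}"
    root = ET.fromstring(manifest_xml)
    requested = {
        elem.attrib.get(f"{ns}name", "")
        for elem in root.iter("uses-permission")
    }
    return requested & DANGEROUS

# Hypothetical manifest for demonstration
sample = """<manifest xmlns:android="http://schemas.android.com/apk/res/android">
  <uses-permission android:name="android.permission.INTERNET"/>
  <uses-permission android:name="android.permission.ACCESS_FINE_LOCATION"/>
  <uses-permission android:name="android.permission.CAMERA"/>
</manifest>"""

print(sorted(dangerous_permissions(sample)))
# → ['android.permission.ACCESS_FINE_LOCATION', 'android.permission.CAMERA']
```

Run across a corpus of downloaded manifests, a check like this is enough to produce the percentages quoted above.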

The group’s database includes purely informational apps, symptom trackers and contact tracing apps. It’s not going to be easy to build a contact tracing app that respects people’s privacy, DeBlois cautioned.

“It’s really about the nature of contact tracing … The whole point is to track people, to associate linkages,” she said. “That makes it difficult to build and engineer something that works in the way everyone needs it to work.”

Contact tracing apps fall roughly into two categories — those that share all users’ location with a central, government-controlled database, and those that work by merely allowing phones to talk to each other through Bluetooth. In that model, data is only shared with a government agency after a confirmed infection. Google and Apple have recently tweaked their smartphone operating systems to encourage development of this kind of app.
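The decentralized, Bluetooth-based model can be sketched in a few lines of Python. This is a toy illustration only, not Google and Apple’s actual Exposure Notification protocol (which adds rolling cryptographic key schedules, signal-strength thresholds and time windows); the Phone class and the token scheme here are invented for demonstration.

```python
import secrets

class Phone:
    """Toy model of a decentralized (Bluetooth-style) exposure notifier.

    Each phone broadcasts short-lived random tokens; nothing leaves the
    device unless its owner reports a confirmed infection.
    """
    def __init__(self):
        self.sent_tokens = []      # tokens this phone has broadcast
        self.heard_tokens = set()  # tokens heard from nearby phones

    def broadcast(self):
        token = secrets.token_hex(16)  # rotating random identifier
        self.sent_tokens.append(token)
        return token

    def hear(self, token):
        self.heard_tokens.add(token)

    def report_infection(self, health_server):
        # Only on a confirmed diagnosis are this phone's tokens uploaded.
        health_server.extend(self.sent_tokens)

    def check_exposure(self, health_server):
        # Matching happens locally; the server never learns who was nearby.
        return bool(self.heard_tokens & set(health_server))

# usage
server = []               # published tokens of confirmed cases
alice, bob, carol = Phone(), Phone(), Phone()
bob.hear(alice.broadcast())     # Alice and Bob were in proximity
carol.hear(bob.broadcast())     # Bob and Carol were in proximity
alice.report_infection(server)  # Alice tests positive
print(bob.check_exposure(server))    # True  (was near Alice)
print(carol.check_exposure(server))  # False (was not)
```

The privacy property is visible in the sketch: the government-run server only ever sees random tokens from confirmed cases, never a map of who met whom.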

“I’m cautiously optimistic about this minimalistic approach — that model has a lot of potential,” DeBlois said.

View the presentation

Still, she has other concerns.

“I’m a little bit nervous about the way the technology decisions were made,” she said. “A lot of the technology has been dictated by companies. They aren’t part of our democratically-elected government.”

The proliferation of such apps around the world should concern U.S. citizens, too, even those who don’t plan to download a U.S. tracker app, she said. The Qatar app is mandatory even for visitors, for example. That could have implications for business travelers for years to come.

“There absolutely will be implications that cross national boundaries,” she said. “For folks who do international travel, this should be on their radar.”

In the U.S. and western democracies, where use of tracker apps is expected to be voluntary, the apps will be useless unless a large percentage of citizens download them. That’s going to require a lot of trust – a trust that seems lacking in the U.S. right now. DeBlois cited revelations made by Edward Snowden as one reason: Snowden confirmed some of Americans’ worst fears about government abuse of surveillance technology, she said.

How could U.S. health agencies overcome this lack of trust?

“It starts with transparency,” she said. “Making clear who has access to the information, and for how long. All those questions need to be answered, and those answers need to be verified.”

Consumers very worried about privacy, but disagree on who’s to blame

Privacy and Security in a Digital World: A Study of Consumers in the United States was conducted to understand the concerns consumers have about their privacy as more of their lives become dependent upon digital technologies. Based on the findings, the report also provides recommendations for how to protect privacy when using sites that track, share and sell personal data. With sponsorship from ID Experts, Ponemon Institute surveyed 652 consumers in the US. For the majority of these consumers, the privacy of their personal information does matter.

Consumers are very concerned about their privacy when using Facebook, Google and other online tools. Consumers were asked to rate their privacy concerns on a scale of 1 = not concerned to 10 = very concerned when using online tools, devices and online services. Figure 1 presents the very concerned responses (7+ responses).

The survey found that 86 percent of respondents say they are very concerned when using Facebook and Google, 69 percent of respondents are very concerned about protecting privacy when using devices and 66 percent of respondents say they are very concerned when shopping online or using online services.

When asked if they believe that Big Tech companies like Google, Twitter and Facebook will protect their privacy rights through self-regulation, 40 percent of consumers say industry self-regulation will suffice. However, 60 percent of consumers say government oversight is required (34 percent) or a combination of government oversight and industry self-regulation (26 percent) is required.

Following are the most salient findings:

  • The increased use of social media and awareness of the potential threats to their digital privacy have made consumers more concerned about their privacy. In fact, social media websites are the least trusted (61 percent of consumers), followed by shopping sites (52 percent of consumers).

  • Consumers are most concerned about losing their civil liberties and having their identity stolen if personal information is lost, stolen or wrongfully acquired by outside parties (56 percent and 54 percent of respondents, respectively). Only 25 percent of consumers say they are concerned about marketing abuses if their personal information is lost or stolen.
  • Seventy-four percent of consumers say they rarely (24 percent) or never (50 percent) have control over their personal data. Despite this belief, 54 percent of consumers say they do not limit the data they provide when using online services. Virtually all consumers believe their email addresses and browser settings & histories are collected when using their devices, according to 96 percent and 90 percent of consumers, respectively.
  • Home is where the trust is. Forty-six percent of consumers, when asked the one location they trust most for shopping, banking and doing other financial activities online, say it is their home. Only 10 percent of consumers say it is when using public WiFi.
  • Consumers believe search engines, social media and shopping sites are sharing and selling their personal data, according to 92 percent, 78 percent and 63 percent of consumers, respectively. To increase trust in online sites, consumers want to be explicitly required to opt in before the site shares or sells their personal information, according to 70 percent of consumers.
  • Consumers reject advertisers’ use of their personal information to market to them. Seventy-three percent of consumers say advertisers should allow them to “opt out” of receiving ads on any specific topic at any time, and 68 percent of consumers say advertisers should not be able to serve ads based on their conversations and messaging. Sixty-four percent of consumers say they do not want to be profiled unless they grant permission.
  • Online ads and the “creepy” factor. Sixty-six percent of consumers say they have frequently (41 percent) or rarely (25 percent) received online ads that are relevant but not based on their online search behavior or publicly available information. Sixty-four percent of consumers say they think it is “creepy” when that happens.
  • Forty-five percent of consumers are not aware that their devices have privacy controls they can use to set their level of information sharing. Of the 55 percent of consumers who are aware, 60 percent say they review and update settings on their computers and 56 percent say they review and update settings on their smartphones.
  • Fifty-four percent of consumers say online service providers should be held most accountable for protecting consumers’ privacy rights when going online. Forty-five percent of consumers say they themselves should be most accountable.

Download the full report at the ID Experts website.

Is smartphone contact tracing doomed to be a privacy killer? Or can tech really help?

An app that tells you if you were exposed to someone with Covid? Sounds great. But, as usual, tech-as-silver-bullet ideas come full of booby-traps. There’s been a lot of scattershot discussion around smartphone contact tracing during the past several months, with privacy advocates saying the harms far outweigh the benefits, but many governments and technology companies are plowing ahead anyway.

But if tech *could* make us safer during this crisis, shouldn’t we try? Under what conditions might it actually be feasible, and fair? Prof. Jolynn Dellinger (Duke and UNC law professor, @MindingPrivacy) has put it all together in a thoughtful analysis, creating a 5-part test that could be considered before implementing contact tracing. Will it *really* work? Will it do more harm than good? Is there enough trust in institutions to ensure it won’t be abused later? Her structure would be useful for the launch of almost any new technology, and it deserves a careful reading on its own. It also deserves more discussion, so I reached out to Prof. Dellinger and Prof. David Hoffman at Duke’s Sanford School of Public Policy and invited them to a brief email dialog with me. I hope you’ll find it illuminating.

Disclosure: I was recently a visiting scholar at Duke, invited by Prof. Hoffman.

FROM: Bob
TO: David
CC: Jolynn

David: Jolynn’s piece is such an excellent state-of-play analysis. Not to put words in her mouth, but I read it as a polite and smart “this’ll never work.” We can’t even get Covid test results in less than a week, so why are we even talking about some kind of sci-fi solution like smartphones that warn each other (or, gulp, tell on each other)? Every dollar and moment of attention spent on contact tracing apps should be redirected to finding more testing reagents, if you ask me. Still, this discussion is inevitable, because the apps – working or not – are coming. So I really welcome her criteria for use.

One thing I’ve thought a lot about, which she mentions in passing: Alert fatigue. I’d *definitely* want a text message if someone I spent time with got Covid, were that possible. But if I got five of these in one day I’d turn it off, especially if they proved to be false alarms. Or if I got none in the first 10 days, I’d probably turn it off, or it would age off my smartphone. Fine-tuning the alert criteria will be a hell of a job.

Meanwhile, my confidence level that data like this would *never* be used to hunt for immigrants, or deadbeat dads, or terrorists, or journalists, is about zero. It’s hard to imagine a technology more ripe for unintended consequences than an app that makes such liberal use of location information.

That being said, I sure wish something like this *could* work. Let’s imagine an alternative universe where the trust, law, and technology were already in place when Covid hit, so tech was ready and willing to ride in and save the day. How do we create that world, if not for now, but at least in time for the next pandemic/terrorist attack/asteroid strike/etc. ? We might have to reach back to the days after 9/11, as Jolynn hints, and start a 20-year effort at lawmaking and trust building. The best way to start a journey of 1,000 miles is with a single step. How would we get started?

FROM: David
TO: Bob
CC: Jolynn

Thanks Bob, with any of these uses of technology the first question that should be asked is “what problem are we trying to solve?”  Are we using the technology to trace infections? Or are we allowing people to increase their chances that they will be notified if they have had exposure to the virus? Or are we using the technology to have individuals track whether they are having symptoms? Or to enforce a quarantine? Or to have people volunteer to donate plasma? Or just to provide people with up to date information about the virus? Depending on the problem we are attempting to solve, we will want to design very different technology implementations. For many of these problems we will likely need to merge other data with whatever data is collected through the technology. Based on what we have seen done in other countries these other data feeds can include information from manual contact tracers, credit card data, CCTV camera feeds and clinical health care data. Once we define what problem we are trying to solve and what data is necessary to solve it, then we can conduct a privacy assessment to determine the level of the risks.

Many of the smartphone apps that have been created have been described as “contact tracing apps”, but it is not clear to me that they will actually help much with contact tracing. To properly do contact tracing through manual efforts, with technology, or using a combination of both, we will need to have enough data about whether people have contracted COVID-19 (this presumes broad and quick testing) and a mechanism to accurately measure whether people have been in close contact with each other for long enough to warrant a recommendation that they quarantine themselves, get tested, or both. Unfortunately, solutions that rely just on Bluetooth data from smartphones are likely to result in a large number of both false negatives and false positives. However, a system that integrates Bluetooth data with information learned from manual contact tracers has a higher likelihood of success. Manual contact tracing, though, suffers from a lack of centralized guidance, is under-resourced, and in most areas has not made clear what privacy protections will be put in place for the collected data. The US urgently needs a national strategy on contact tracing, with clear recommendations on what data to collect, what technology to use, and what cybersecurity and privacy protections to put in place.

FROM: Jolynn
TO: Bob
CC: David

Bob, thank you so much for reading the post and for your thoughtful comments and questions. The Covid crisis highlights the numerous ways data and emerging tech could be used to benefit society. Benefitting society while preventing harm to individuals is not an unobtainable goal, but it will take concerted effort. We have long recognized as a society the sensitivity of health information, and we are getting there (slowly but surely) on location data. Acting on what we know by taking proactive (as opposed to merely reactive) steps to protect the privacy of personal information – through design, policy and law – is the place to start. A reactive step at this moment is passing a limited law dealing with the privacy of information collected for Covid-19 purposes, and this is absolutely better than nothing. A proactive step would be passing comprehensive privacy legislation that circumscribes collection and use of data more generally and contributes to the creation of an environment in which people can trust companies and governments not to repurpose, exploit or misuse their personal data. (Arguably, because we have waited so long to take obvious necessary legislative action, even a comprehensive privacy law could be broadly characterized as “reactive” at this point, but that is a topic for another post.)

Regarding the original post, my personal view is that voluntary digital contact tracing apps are not likely to be worth the existing privacy and security risks at this time, given our failure to implement the other necessary elements of a comprehensive, holistic response to the health crisis and the likelihood that they will not be used by sufficient numbers of citizens to make the notifications helpful or reliable. You mentioned in your introductory comments “feasibility” and the relevance of the dollars spent on contact tracing. I did not cover this topic adequately in my original post but certainly think it is a crucial consideration. Budgets are limited and strained, and every response we choose to invest in necessarily represents another option we do not pursue. So the question of whether to pursue digital contact tracing apps should not be considered in a vacuum but rather should be analyzed in terms of bang for the buck, so to speak. Is an investment in such apps the best, most effective use of our limited funds? And what potentially more useful responses are we foregoing? This question further highlights one of the downsides of the state-by-state approach the US is currently taking. How much more economically efficient might it be to have regional approaches or, sigh, leadership at the federal level? I strongly agree with David’s comment that the US needs a national strategy with clear recommendations on what data to collect, what technology to use, and what cybersecurity and privacy protections to put in place. I would add that these guidelines, like the 5-question analysis proposed in the blog post, should be applied to any and all personal data collected for the purposes of managing the Covid crisis.

FROM: Bob
TO: Jolynn
CC: David

So is there one thing that readers might urge their leaders to do, or urge technology companies to do, during the next couple of months that might bring us closer to these goals? It seems like a federal privacy law is probably off the table between now and election day, so that won’t come in time to help with Covid. Is there something else that might? Could a state pass a law? Could a tech firm adopt a model privacy policy around contact tracing apps? What steps might any of these interested parties take that would at least move us a bit in the right direction? Sadly, I’m quite sure we’ll be dealing with Covid long after November.


FROM: Jolynn
TO: Bob
CC: David

State legislatures could pass laws or, in the alternative, Governors might issue executive orders to accomplish immediate goals. States can work to ensure that all local and state level health departments are on the same page and are employing similar privacy and security protections for data collected by manual contact tracers and any digital contact tracing apps or other technologies designed to manage Covid issues.

Tech firms and app developers should certainly have privacy policies in place, but those entities should also make explicit, affirmative guarantees that any data collected for purposes of responding to the Covid crisis (health, location or other personal data) will not be used for any other purpose or monetized, and will not be sold to or shared with any third parties, including law enforcement of any kind. Google and Apple could also bar apps from inclusion in the Google Play Store or App Store if they do not make such explicit commitments.

Want to participate in this dialog? Leave your comments below. We’ll keep the conversation going.

Digital transformation & cyber risk: what you need to know to stay safe

Larry Ponemon

CyberGRX and Ponemon Institute surveyed 581 IT security and 302 C-suite executives to determine what impact digital transformation is having on cybersecurity and how prepared organizations are to deal with that impact. All 883 respondents are involved in managing digital transformation and cybersecurity activities within their organizations. The results show that while digital transformation is widely accepted as critical, the rapid adoption of it is creating significant vulnerabilities for most organizations—and these are only exacerbated by misalignment between IT security professionals and the C-suite.

The full report can be downloaded from the CyberGRX website.

Our research think tank is dedicated to advancing privacy and data protection practices—and these report findings underscore a growing need for such mitigation tools, at a time when we see rapid digital transformation across industries. We chose to study both IT security professionals and C-suite executives to tap into the intersection of two groups making the biggest impact on organizations as they adopt new digital practices.

Here are the key themes that will be reviewed in this report.

Digital transformation is increasing cyber risk.

  • IT security has very little involvement in directing efforts to ensure a secure digital transformation process. Only 37 percent of respondents say the CIO is most involved and only 24 percent of respondents say the CISO is most involved. Both roles trail behind general managers, line-of-business managers and data scientists.
  • Eighty-two percent of respondents believe their organizations experienced at least one data breach as a result of digital transformation. Forty-two percent of respondents believe they experienced between two and five cyber events, and 55 percent of respondents say with certainty that at least one of these breaches was caused by a third party.

Digital transformation has significantly increased reliance on third parties, specifically cloud providers, IoT and shadow IT.

  •  Fifty-eight percent of respondents say the primary change to their organizations is increased migration to the cloud, which relies upon third parties. This is followed by the increased use of IoT and increased outsourcing to third parties. Despite the increasing risk, 58 percent of respondents say their organizations do not have a third-party cyber risk management program.

Conflicting priorities between IT security teams and the C-suite create vulnerabilities and risk.

  • Seventy-one percent of IT security respondents say the rush to achieve digital transformation increases the risk of a data breach and/or a cybersecurity exploit compared to 53 percent of the C-level respondents. Sixty-three percent of C-level respondents vs. only 41 percent of IT security respondents do not want the security measures used by IT security to prevent the free flow of information and an open business model.

Unless things change, the future doesn’t look any more secure

  • Currently only 29 percent of respondents say their organizations are very prepared to address top threats related to digital transformation in two years. Only 43 percent of respondents are very optimistic their organizations will be prepared to reduce the risk of these threats.
  • Organizational size and industry differences have an impact on the consequences of digital transformation. Most industries do not have a security budget for protecting data assets during the digital transformation process.

“If there’s one major takeaway from our research, it’s that digital transformation is not going anywhere. In fact, organizations should expect—and plan for—digital transformation to become more of an imperative over time,” says Dave Stapleton, Chief Information Security Officer, CyberGRX. “For this reason, organizations must consider the security implications of digital transformation and shift their strategy to build in resources that mitigate risk of cyberattacks. Based on these findings, we recommend involving organizations’ IT security teams in the digital transformation process, identifying the essential components for a successful process, educating colleagues on cyber risk and prevention, and creating a strategy that protects what matters most.”

Key findings overview:

The rush towards digital transformation has increased cyber risks.

IT security respondents who are in the trenches are far more cognizant than C-level respondents of the risk if not enough time and resources are allocated to the digital transformation process. Most respondents say their corporate leaders are not aware of how the inability to secure digital assets could significantly hurt their organization’s brand and reputation. Less than half of C-level respondents (49 percent) say senior management recognizes the potential harm to brand and reputation.

Conflicting priorities between IT security teams and the C-suite create vulnerabilities and risk. Only 16 percent of respondents say IT security and lines of business are fully aligned with respect to achieving security during the digital transformation process. As a result, there are gaps in perceptions about risk to the digital transformation process. Specifically, far more IT security respondents (64 percent) than C-level respondents (41 percent) say that the digital economy significantly increases risk to high value assets such as IP and trade secrets. Sixty-three percent of C-level respondents vs. only 41 percent of IT security respondents do not want the security measures used by IT to prevent the free flow of information and an open business model.

Organizations are not protecting what matters most. Analytics and private communications are the digital assets most difficult to secure according to 51 percent and 44 percent of respondents, respectively. However, only 35 percent of respondents say analytics is appropriately secured and only 38 percent of respondents say private communications are secured. Surprisingly, only 25 percent of respondents say consumer data, which is considered highly sensitive and confidential, is appropriately secured. Yet the reported difficulty of securing this data is very low: only 10 percent of respondents say such data is difficult to secure.

A secure digital transformation process is affected by a lack of expertise and a lack of visibility. Fifty-three percent of respondents say a lack of in-house expertise is the most significant barrier to achieving a secure digital transformation process, followed by insufficient visibility into people and business processes (51 percent of respondents).

Organizations have experienced multiple data breaches as a result of digital transformation. Eighty-two percent of respondents believe their organizations experienced at least one data breach during the digital transformation process. Forty-two percent of respondents say their organizations could have experienced between two and five data breaches, and 22 percent say their organizations could have experienced between six and ten. Fifty-five percent of respondents say with certainty that at least one of these breaches was caused by a third party.

Digital transformation has significantly increased reliance on third parties, specifically cloud providers, IoT and shadow IT.

Current tools or solutions to manage third-party risk are still not considered effective. Slightly more than half (51 percent) of organizations represented in this research have a strategy for achieving digital transformation, and of these, 73 percent of respondents say their strategy involves assessing third-party relationships and vulnerabilities. Forty-two percent of respondents say their organizations have a third-party risk management program, and assessments are the most commonly used solution. However, when asked if they are effective, 53 percent say the tools and solutions used are only somewhat effective (28 percent) or not effective (25 percent).

A secure cloud environment is a significant challenge to achieving a secure digital transformation process. Sixty-three percent of respondents say their organizations have difficulty in ensuring there is a secure cloud environment, and 54 percent of IT security respondents say the ability to avoid security exploits is a challenge. Fifty-six percent of C-level executives say their organizations find it a challenge to ensure third parties have policies and practices that ensure the security of their information.

Challenges for securing the future of digital transformation

Budgets are and will continue to be inadequate to secure the digital transformation process. Only 35 percent of respondents say their organizations have a budget dedicated to securing the digital transformation process, and even those budgets fall short. Because of the risks created by digital transformation, respondents believe the percentage of the IT security budget allocated to digital transformation should nearly double from today’s average of 21 percent to 37 percent. In two years, the average allocation is expected to reach only 37 percent, while respondents say it should ideally be 45 percent.

More progress needs to be made in the ability to mitigate cyber threats. The top three threats respondents are most concerned about are system downtime, cybersecurity attacks and data breaches caused by third parties. Currently, only 29 percent say they are very prepared to address these threats. In two years, only 43 percent are very optimistic their organizations will be very prepared to reduce the risk of these threats.

A secure digital transformation process depends upon the expertise of the IT security team, yet that team is not very influential. Today, only 35 percent of respondents say IT security is very influential, and in two years that figure rises only slightly, to 43 percent.

Digital transformation impacts industries differently.

Across industries digital transformation has significantly increased reliance on third parties, specifically cloud providers, IoT and shadow IT. Respondents in healthcare, industrial and retail say the most significant change caused by digital transformation is the increased migration to the cloud. The public sector and healthcare industries are less likely to say the increased use of IoT has changed their organizations. Retail and financial services respondents are most likely to say increased outsourcing to third parties as a result of digital transformation has had an impact.

Industrial manufacturing is most likely to have a strategy for achieving digital transformation. Healthcare is least likely to have a strategy. As part of their strategy, retailers are most likely to include assessing third-party relationships and vulnerabilities, including supply chain partners.

Perceptions of digital transformation risk vary among industries. Leaders in services and financial services are most likely to recognize that digital transformation creates IT security risk. Respondents in the industrial manufacturing sector are least likely to say their leaders recognize the risk.

Retail, public sector and services are the industries most concerned about the rush to achieve digital transformation. Sixty-eight percent of respondents in retail and 65 percent of respondents in both services and the public sector say the rush to achieve digital transformation increases the risk of a data breach and/or a cybersecurity exploit.

A successful digital transformation process requires IT security to secure digital assets without stifling innovation. Because digital transformation is considered essential, most industries say that IT security should support innovation with minimal impact on the goals of digital transformation. Eighty-three percent of respondents in financial services say such a balance is essential.

Most industries do not have a security budget for protecting data assets during the digital transformation process. Despite the need to have the necessary expertise and technologies to ensure a secure digital transformation process, industries are not allocating funds specifically to digital transformation. Healthcare organizations are most likely to have funds for protecting data assets during the digital transformation process.

Organizational size affects the digital transformation process

Following are the most salient differences according to organizational size. Our analysis compared organizations with fewer than 5,000 employees to those with more than 10,000.

During digital transformation, the increased migration to the cloud and the use of IoT are having the greatest impact on smaller organizations. Larger organizations are seeing the greatest impact from increased outsourcing to third parties.

Larger organizations are more likely to have a strategy for digital transformation. Larger organizations (54 percent of respondents) are more likely than smaller organizations (43 percent of respondents) to have a strategy for achieving digital transformation. As part of that strategy, 80 percent of respondents in larger organizations vs. 69 percent of respondents in smaller organizations are assessing third-party relationships and vulnerabilities, including supply chain partners.

Larger organizations are far more likely to recognize the risk of digital transformation. Seventy-nine percent of respondents in larger organizations vs. 61 percent of respondents in smaller organizations believe the rush to achieve digital transformation increases the risk of a breach and/or cybersecurity exploit. Larger organizations are less likely to say that it is important to balance security with the need to enable the free flow of information. Seventy-two percent of respondents in larger organization say digital transformation increases risk to high value assets such as intellectual property, trade secrets and so forth.

Smaller organizations are more likely to be vulnerable to a cyberattack or data breach following digital transformation. Seventy-one percent of respondents in smaller organizations and 64 percent of respondents in larger organizations believe the risk of digital transformation makes it more likely to have a data breach or cyberattack. Larger organizations are more likely to say the rush to produce and release apps, the increased use of shadow IT and increased migration to the cloud have made their organizations more vulnerable following digital transformation.

Characteristics of organizations with mature digital transformation programs

In this study, we analyzed the responses from those organizations that self-reported they have achieved a mature digital transformation process. Twenty-three percent, or 131 respondents, self-reported that their organizations’ core digital transformation activities are deployed, maintained and refined across the enterprise. We compare the findings from this group to those of the remaining 77 percent (450 respondents).

Mature organizations are more likely to have strategies to protect data assets and assess third-party relationships. Fifty-six percent of the most mature organizations have a strategy for achieving digital transformation. In contrast, 47 percent of the other respondents say they have such a strategy. Those in mature organizations say their strategies are more likely to protect data assets and assess third-party relationships and vulnerabilities, including supply chain partners.

Mature organizations are more likely to understand and anticipate the risks associated with digital transformation. Respondents in mature organizations are far more likely to make reducing third-party risk a priority than the other organizations (78 percent vs. 51 percent). Mature organizations are also more likely to recognize that the digital economy increases the risk to high value assets such as intellectual property, trade secrets and so forth (78 percent vs. 60 percent). Mature organizations are also more likely to believe in the importance of balancing the security of their high value assets while enabling the free flow of information and an open business model.

Digital transformation is considered essential to the company’s business. More mature organizations are likely to believe in the importance of IT security to supporting innovation with minimal impact on the goals of digital transformation (90 percent vs. 81 percent) and that digital transformation is essential to the company’s business (84 percent vs. 79 percent).

All organizations struggle with having an adequate budget for protecting data assets during the digital transformation process. Forty-three percent of respondents of mature organizations vs. 34 percent of other organizations say their budgets are adequate for protecting data assets during the digital transformation process.

For more detailed findings, please download the full report from the CyberGRX website.

How to detect fake anything in a zero trust world

Bob Sullivan

Fake news is stoking violence and helping destroy our democracy. Fake pills make people sick and can even kill them. Fake foods, like fake olive oil or mislabeled fish, rip consumers off and steal profits from honest companies. The world is becoming overrun by fake everything, says Avivah Litan, renowned fraud analyst at the research and advisory firm Gartner. Counterfeit products are a $3 trillion problem, she says. But today’s topic is even bigger than fraud. It’s about a threat to reality itself.

In a new paper, called How to Detect Fake Anything in a Zero Trust World, Litan argues that a mix of technology and human intelligence can beat back this problem of fake everything. But only if someone — consumers? government regulators? corporations? — is willing to pay the price. I spoke with her recently: You can listen to our conversation at the link below.

A few highlights from our talk:

Imagine scanning a barcode on a piece of salmon at the supermarket and seeing the fish’s journey from the river where it was caught, to the port where it was dropped off, to the plane that took it to your city, to the truck that took it to your store.  That’s the promise of blockchain, which could help consumers decide they prefer fish caught from a specific place. They could also demand it be caught in a certain way, and report fraud or mislabeled products.  Litan gets most fired up talking about fake olive oil. She thinks blockchain public audit trails could help stop that, too.
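The mechanics behind that salmon-to-supermarket promise can be sketched in a few lines: each custody event is chained to the previous one by a cryptographic hash, so altering any earlier step invalidates everything after it. The sketch below is a toy illustration of that tamper-evident audit trail, not any real blockchain product; all function names and event fields are hypothetical.

```python
import hashlib
import json

# Toy hash-chained provenance trail. Each record stores the previous
# record's hash, so tampering anywhere breaks verification downstream.
# (Illustrative only -- not a real blockchain API.)

def record_event(chain, event):
    """Append a custody event, linking it to the previous record's hash."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    payload = json.dumps({"event": event, "prev": prev_hash}, sort_keys=True)
    chain.append({"event": event, "prev": prev_hash,
                  "hash": hashlib.sha256(payload.encode()).hexdigest()})

def verify(chain):
    """Recompute every link; any tampering makes the chain fail."""
    prev_hash = "0" * 64
    for rec in chain:
        payload = json.dumps({"event": rec["event"], "prev": prev_hash},
                             sort_keys=True)
        if rec["prev"] != prev_hash or \
           rec["hash"] != hashlib.sha256(payload.encode()).hexdigest():
            return False
        prev_hash = rec["hash"]
    return True

trail = []
record_event(trail, "caught: Copper River, AK")
record_event(trail, "landed: Port of Cordova")
record_event(trail, "flown: ANC -> SEA")
record_event(trail, "delivered: retail store #114")
assert verify(trail)                         # intact chain checks out

trail[1]["event"] = "landed: unknown port"   # tamper with one step
assert not verify(trail)                     # verification now fails
```

A public ledger adds distributed consensus on top of this chaining, which is what would let a shopper (rather than the supplier) trust the trail.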

Using these tools to cast a wider net — pun intended — Litan thinks tech could help consumers/citizens regain the grasp of reality they are losing. Fake news and fake cures have been a problem for years, but the Covid-19 pandemic has brought the issue into sharp relief.

“The hope is there is no shortage of innovation in this space,” she tells me. “The problem is (companies) won’t do it unless (consumers) pay a premium.”

Litan is perhaps the media’s most-quoted expert on credit card fraud and identity theft, dating back to the early years of credit card database hacking and the rise of fraud-fighting software.  She sees some parallels between the race for banks and retailers to stop credit card hacking — which costs the companies billions — and their relative indifference to identity theft — in which consumers bear a lot of the cost.

But the rise of fake everything, and the collapse of trust worldwide, is a far bigger problem. I’ve started calling it the trust market crash.  It’s an enormous challenge in a world of commerce that’s built on trust.

Innovation will be critical, she said, because government regulators — well-intentioned as they may be — won’t be able to keep up.

“The world is moving way too fast for our political systems,” she said.  Current solutions fall far short. What is an Italian olive oil consumer to do, outside of grow their own olives, Litan joked.   “Hopefully the technology will evolve where you have the solutions at your fingertips.”

Her paper offers the “Gartner Model for Truth Assessment,” with different blended technology and human solutions for the problem of fake.  But much more needs to be done.

“The best hope is a consumer revolution,” she said. “We’ve had enough of this fake news, (people) shoving all this fake stuff down our throats.”

Avivah has posted a short blog entry about her paper. The paper itself is behind Gartner’s paywall.

The 2020 Study on the State of Industrial Security

Larry Ponemon

Ponemon Institute is pleased to present the findings from The 2020 State of Industrial Security Study, sponsored by TÜV Rheinland. The purpose of the research is to understand cyber risks across a broad spectrum of industries and the steps organizations are taking to reduce cyber risk in the operational technology (OT) environment.

Ponemon Institute surveyed 2,258 cybersecurity practitioners in the following industries: automotive, oil and gas, energy and utilities, health and life science, industrial manufacturing and logistics and transportation. All respondents are responsible for securing or overseeing cyber risks in the OT environment and are aware of how cybersecurity threats could affect their organization.

In the context of this research, Operational Technology (OT) is the hardware and software dedicated to detecting or causing changes in physical processes through direct monitoring and/or control of physical devices. Simply put, OT is the use of computers to monitor or alter the physical state of a system, such as the control system for a power station. The term has become established to demonstrate the technological and functional differences between traditional IT systems and industrial control systems environment.

The OT environment is vulnerable to cyberattacks: 57 percent of respondents say their organizations’ security operations and/or business continuity management teams believe there will be one or more serious attacks within the OT environment. Almost half say it is difficult to mitigate cyber risks across the OT supply chain (49 percent) and that cyber threats present a greater risk in the OT than in the IT environment (48 percent).

The following findings reveal the cybersecurity vulnerabilities in the OT environment.  

  • OT and IT security risk management efforts are not aligned. Sixty-three percent of respondents say OT and IT security risk management efforts are not coordinated, making it difficult to achieve a strong security posture in the OT environment. The management of OT security is painful because of the lack of enabling technologies in OT networks, complexity and insufficient resources.
  • On average, organizations had four security compromises that resulted in the loss of confidential information or disruption to OT operations. Forty-seven percent of respondents say OT technology-related cybersecurity threats have increased in the past year. The top three cybersecurity threats are phishing and social engineering, ransomware and DNS-based denial of service attacks. One-third of respondents say such exploits have resulted in the loss of OT-related intellectual property. 
  • The majority of organizations have not achieved a high degree of cybersecurity effectiveness. Less than half of respondents say they are very effective in responding to and containing a security exploit or breach (48 percent), continually monitoring the infrastructure to prioritize threats and attacks (47 percent) and pinpointing sources of attacks and mobilizing the right set of technologies and resources to remediate the attack (47 percent of respondents). 
  • To minimize OT-related risks organizations need to replace outdated and aging connected control systems in facilities, according to 61 percent of respondents. More than half (52 percent of respondents) say vulnerable software is creating risks in the OT environment. 
  • Not enough expertise and budget are often cited as reasons for not having a strong security posture in the OT environment. Organizations represented in this research are spending annually an average of $64 million on cybersecurity operations and defense (OT and IT combined). An average of 26 percent of this budget or approximately $17 million is allocated to the security of OT assets and infrastructure and an average of 17 percent or approximately $10 million is allocated specifically to OT cybersecurity. Respondents say their OT budgets are inadequate to properly execute their cybersecurity strategy. 
  • Accountability for executing a successful cybersecurity strategy is unclear. Respondents were asked who is most accountable for executing a successful cybersecurity strategy. Only 20 percent of respondents say it is the OT security leader, followed by the CIO/CTO (18 percent) and the IT security leader (17 percent).
  • Organizations are lagging behind in adopting advanced security technologies. Only 38 percent of respondents say their organizations are using automation, machine learning and artificial intelligence to monitor OT assets. The majority of companies are not integrating security and privacy by design in the engineering of OT control systems.

To read the full report, visit TUV Rheinland’s website.

If we’re going to talk about Section 230, let’s get it right

Now we’ve started something

Bob Sullivan

With President Donald Trump threatening retribution against Twitter with an executive order, you’re going to hear a lot about Section 230 this week — and maybe for many weeks. The ensuing discussion could shake the Internet to its very roots.  That’s going to make legal scholars very happy, but it might seem like a dizzying discussion for most.  That’s by design. Interested parties are conflating all kinds of big ideas to muddy the waters here: the First Amendment, innovation, bias, abuse, millions of followers, billions of dollars.  I’m going to try to sort it out for you here.  Who am I to do that? Well, I’m old enough to remember when the Communications Decency Act and its Section 230 was passed into law.

But if you are going on this journey with me, here are the rules: Nothing is as simple or as absolute as it sounds.  Free speech isn’t limitless.  “Speech” isn’t even what you think it is. Immunity isn’t limitless. The First Amendment doesn’t generally apply to private companies…most of the time.  But in a rare confluence of events, there are reasons for both conservatives and liberals to take a good long look at updating and fixing Section 230, which has been the source of much profit for corporations and much pain for Internet users since it became law in 1996.

(And if you really want to understand Section 230, I recommend reading this very readable 25-page academic paper titled The Internet As a Speech Machine and Other Myths Confounding Section 230 Speech Reform. Authors Danielle Keats Citron and Mary Ann Franks do a great job explaining the history of the law and the myths that hold America back from reasonable reform. Or, even better, consider The Twenty-Six Words That Created the Internet, a book by Jeff Kosseff, all about Section 230.)

Section 230 was written at the time of Prodigy and Compuserve, when online services were mainly text-based chat tools, and virtually no consumers used websites.  These services had a problem: Were they liable for everything users said? Could they be sued for defamation, or charged criminally, if users misbehaved? To use the kind of shorthand that journalists love but lawyers hate, should they be treated like publishers of the content — akin to a newspaper editor or book publisher — or mere distributors, akin to a newsstand owner?  Courts were split on the matter, and that terrified tech firms. Imagine the liability a company like Google, or Facebook, or America Online, would face if it could be charged with every crime committed on its service.

The defensive shorthand I was taught at my startup, inaccurate as it might be, was this: When a tech company actively moderates user content, it becomes a publisher and increases liability. When a tech company just shoves the stuff automatically out into the world, it’s merely a newsstand, a distributor.  So: Don’t touch!

That free-for-all worked about as well as you might imagine (Porn! Stolen goods! Harassment!) so lawmakers tried to help by passing Section 230.  It sounds straightforward: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” The idea was to shield online service providers who tried to do the right thing (stop harassment and other crimes) from liability.  The law was actually meant to encourage content moderation. It gave service providers a shield against responsibility for third-party content.  But what a winding road it’s been since then.

First, the good: Plenty of folks see Section 230 as the First Amendment of the Internet. Scholar Eric Goldman actually argues that it’s better than the First Amendment. It’s inarguable that online services have thrived since then, and plenty of them credit Section 230.

However, this simple you’re-not-responsible-for-third-party-content rule has been extended by courts and corporations far beyond its original intention. Recall, it was written right about the time Amazon was invented.  The Internet was nearly 100% text-based speech, digital conversation, at the time.  Today, it’s Zoom and car buying and television and it elects a U.S. president.

So that leads us to the moment at hand. The Internet is awash in disinformation, harassment, crime, racism…the dark side of humanity thrives there.  Plenty of people have been driven from its various platforms through doxxing, gender abuse, or simple exhaustion from nasty arguments. I argue all the time that it has made us dumber as a people, offering the Flat Earth Movement as proof. In short, the Internet sucks (More than a decade later, this is still a great read). As Citron and Franks say:

“Those with political ambitions are deterred from running for office. Journalists refrain from reporting on controversial topics. Sexual assault victims are discouraged from holding perpetrators accountable…. An overly capacious view of Section 230 has undermined equal opportunity in employment, politics, journalism, education, cultural influence, and free speech. The benefits Section 230’s immunity has enabled surely could have been secured at a lesser price.”

For better and worse, this is a good time to reconsider what Section 230 hath wrought.

For a quick moment, here are some obstacles to the discussion, forged by confusion. You, and I, and President Trump, can’t have our First Amendment right to free speech suppressed by Twitter or Facebook or Instagram. Generally, the First Amendment applies to governments, not private enterprises.  Facebook, as any true conservative or libertarian would tell you, is free to do what it wants with its company, and the president is free to not use it.  In fact, the government compelling a social media company to say certain things or not say other things — to argue it could not add a link for fact-checking — is a rather obscene violation of the company’s First Amendment rights.

Even on this fairly clear point, there is some room for discussion, however.  In Canada, courts have ruled that social media is so ubiquitous that it can be akin to a public square, according to Sinziana Gutui, a Vancouver privacy lawyer.  So might the US some day feel that cutting off someone’s Twitter account is akin to cutting off their telephone line or electricity? Perhaps.  It sure seems less strained to suggest President Trump simply find another platform to use for his 280-character messages.

And even on this issue of speech, there is confusion. U.S. courts have broadly expanded the definition of speech far beyond talking, publishing pamphlets, or writing posts on an electronic bulletin board.  Commercial activity can be considered speech now.  And that expanded definition has helped websites argue for Section 230 immunity when their members are committing illegal acts — such as facilitating the sale of counterfeit goods, or guns to criminals known to be evading background checks.

Immunity often encourages bad behavior, a classic “moral hazard,” as Franks has written. Set aside fake autographs and illegally purchased domestic violence murder weapons for the moment — the Internet is drowning in antagonism, bots, and harassment that has made it inhospitable for women and men of good faith. It rewards extremism.  It is unhealthy for people and society. It’s not going to fix itself. Citron and Franks again:

“Market forces alone are unlikely to encourage responsible content moderation. Platforms make their money through online advertising generated when users like, click, and share. Allowing attention-grabbing abuse to remain online often accords with platforms’ rational self-interest. Platforms “produce nothing and sell nothing except advertisements and information about users, and conflict among those users may be good for business.” On Twitter, ads can be directed at users interested in the words “white supremacist” and “anti-gay.” If a company’s analytics suggest that people pay more attention to content that makes them sad or angry, then the company will highlight such content. Research shows that people are more attracted to negative and novel information. Thus, keeping up destructive content may make the most sense for a company’s bottom line.”

Facebook profits massively off all this social destruction. We learned this week that employees inside Facebook have come up with some very clever technological solutions to this problem, only to be kneecapped by Mark Zuckerberg, clearly drunk on conveniently-profitable take-no-responsibility libertarian ideals.

What’s the solution? For sure, that’s much harder.  Citron and Franks suggest adding a simple “reasonable” requirement on companies like Facebook, meaning they have to take reasonable steps to police users in order to maintain Section 230 immunity. Reasonable is a difficult standard, possibly leading to endless ’round-the-rosie’ debate, but it is a common standard in U.S. law. Facebook’s engineers came up with notions worth trying, detailed in this Wall Street Journal story, such as shifting extreme discussions into sub-groups.  The firm could also stop giving extra algorithm juice to obsessives who post 1,000 times a day.

As always, a mix of innovation and smart rules that balance interests is needed.

It won’t be easy, but we have to try. So, it’s good that President Trump has shined a light on Section 230. The discussion is long overdue, as is the will to act. Will the discussion be productive? Probably not if it happens on Twitter. Definitely not if it’s focused on an imaginary social media bias against Trump or Trump’s 80 million followers, who clearly have no trouble finding each other. Instead, let’s focus on making the world safe again for reasonable people.

The state of endpoint security risk: it’s skyrocketing

Larry Ponemon

The Third Annual Study on the State of Endpoint Security Risk, sponsored by Morphisec, reveals that organizations are not making progress in reducing their endpoint security risk, especially against new and unknown threats. In fact, in this year’s research, 68 percent of respondents report that their company experienced one or more endpoint attacks that successfully compromised data assets and/or IT infrastructure over the past 12 months, an increase from 54 percent of respondents in 2017.

A webinar on the report is available for free at Morphisec’s website.

“Corporate endpoint breaches are skyrocketing and the economic impact of each attack is also growing due to sophisticated actors bypassing enterprise antivirus solutions,” said Larry Ponemon, Chairman and Founder of Ponemon Institute. “Over half of cybersecurity professionals say their organizations are ineffective at thwarting major threats today because their endpoint security solutions are not effective at detecting advanced attacks.”

Ponemon Institute surveyed 671 IT security professionals responsible for managing and reducing their organization’s endpoint security risk. Companies represented in this research are very concerned about the significant increase in new and unknown threats against their organization (an increase from 69 percent of respondents in 2017 to 73 percent in 2019). On a positive note, since 2017 more respondents say their organizations have ample resources to minimize IT endpoint risk due to infection or compromise (an increase from 36 percent to 44 percent).

Following are 10 key findings from this research.

  1. The frequency of attacks against endpoints is increasing and detection is difficult. Sixty-eight (68) percent of respondents say the frequency of attacks has increased over the past 12 months. More than half of respondents (51 percent) say their organizations are ineffective at surfacing threats because their endpoint security solutions are not effective at detecting advanced attacks.
  2. The cost of successful attacks has increased from an average of $7.1 million to $8.94 million. Costs due to the loss of IT and end-user productivity and theft of information assets have increased. The cost of system downtime has decreased significantly since 2017.
  3. New or unknown zero-day attacks are expected to more than double in the coming year. The frequency of existing or known attacks is expected to decrease significantly from 77 percent to an anticipated 58 percent in the coming year. In contrast, the frequency of new or unknown zero-day attacks is expected to increase to 42 percent next year.
  4. An average of 80 percent of successful breaches are new or unknown “zero-day attacks.” These attacks either involved the exploitation of undisclosed vulnerabilities or the use of new or polymorphic malware variants that signature-based detection solutions do not recognize.
  5. Zero-day attacks continue to increase in frequency. In addition to being more successful, zero-day attacks have also become more prevalent. As a result, organizations are investing more budget to protect against these threats.
  6. Most organizations either use or plan to use the Microsoft Windows Defender antivirus solution. Eighty (80) percent of respondents say they currently have (34 percent) or plan to have in the near future (46 percent) the Microsoft Windows Defender antivirus solution. The top two reasons are to reduce the number of separate endpoint security tools and the belief that the solution is on par with other antivirus tools.
  7. The challenges with traditional antivirus solutions are a high number of false positives and security alerts, inadequate protection, and too much complexity. Fifty-six (56) percent of respondents say their organizations replaced their endpoint security solution in the past two years. Of these respondents, 51 percent say they kept their traditional antivirus solution but added an extra layer of protection, citing in particular the complexity of deploying and managing these solutions.
  8. Antivirus products missed an average of 60 percent of attacks. Confidence in traditional antivirus (AV) solutions continues to drop. On average, respondents estimate their current AV is effective at blocking only 40 percent of attacks. In addition to the lack of adequate protection, respondents cite high numbers of false positives and alerts as challenges associated with managing their current AV solutions.
  9. The average time to apply, test and fully deploy patches is 97 days. The findings reveal the difficulties in keeping endpoints effectively patched. Forty (40) percent of respondents say their organizations are taking longer to test and roll out patches in order to avoid issues and assess the impact on performance.
  10. Ineffectiveness and lack of in-house expertise are reasons not to use an EDR. Sixty-four (64) percent of respondents say their organizations do not have an endpoint detection and response (EDR) solution. Of these, 65 percent cite its ineffectiveness against new or unknown threats, followed by 61 percent who say they don’t have the staff to support it.
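Finding 4's point about signature-based detection can be shown with a toy sketch. Everything below is my own hypothetical illustration, not the report's methodology: the payload bytes are invented, and real polymorphic malware mutates far more elaborately. A scanner that matches file hashes against a signature database catches the known sample but misses a trivially mutated variant.

```python
# Toy illustration: why hash/signature matching misses polymorphic variants.
# The "payloads" here are invented placeholder bytes, not real malware.
import hashlib

# A tiny "signature database": SHA-256 hashes of known-bad payloads.
known_bad = {hashlib.sha256(b"EVIL_PAYLOAD_v1").hexdigest()}

def signature_scan(payload: bytes) -> bool:
    """Return True if the payload's hash matches a known-bad signature."""
    return hashlib.sha256(payload).hexdigest() in known_bad

original = b"EVIL_PAYLOAD_v1"
# A "polymorphic" variant: same behavior, but a single byte changed by a
# packer is enough to produce a completely different hash.
variant = b"EVIL_PAYLOAD_v2"

print(signature_scan(original))  # True  -- the known sample is caught
print(signature_scan(variant))   # False -- the mutated variant slips through
```

This is why behavior-based or heuristic detection layers are typically added on top of signature matching, as several of the findings above suggest.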

Go to Morphisec’s website to read the full report.


Covid-19: The Golden Age of Scams

Bob Sullivan

Nearly 100,000 scam-ready domains have been registered since the Covid-19 pandemic began. It’s the Super Bowl for digital criminals, the golden age of computer fraud. Why? Because a con artist’s best friend is urgency.

We are living through the golden age of scams right now, so I’m going to do an ongoing series about coronavirus crimes. First up: my conversation with Grace Brombach, who just wrote a report on scams (PDF) for the U.S. Public Interest Research Group.

“We are dealing with so much fear and confusion right now,” Brombach tells me. “People are being put in a very difficult situation where they don’t really know what to believe.”

Of particular worry: Homebound computer users are being told to download all kinds of new software and fill out forms full of personal information, doing things they would ordinarily never do. For example: Employees are working from home, Zooming everywhere. Think about how believable an email might seem if it appeared to come from an HR department, promising new video conference guidelines or requiring new software installation.

Making matters worse, as cybersecurity expert Harri Hursti has told me, a lot of corporate security software is designed to look for unusual patterns in network traffic — like massive downloads or a surprising number of remote logins. Everything is unusual now.
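Hursti's point about broken baselines can be sketched with a toy anomaly detector. The code below is my own hypothetical illustration, not any real security product: it flags days whose remote-login counts sit far above a pre-pandemic baseline, and a company-wide shift to remote work makes every day trip the alarm, drowning out genuine attacks.

```python
# Toy illustration: a naive statistical anomaly detector for remote logins,
# and why a sudden everyone-works-from-home shift breaks its baseline.
# All counts below are invented for illustration.
import statistics

# Hypothetical daily remote-login counts from a pre-pandemic baseline.
baseline = [12, 15, 11, 14, 13, 16, 12, 15, 14, 13]
mean = statistics.mean(baseline)    # 13.5
stdev = statistics.stdev(baseline)  # ~1.58

def is_anomalous(count: int, z: float = 3.0) -> bool:
    """Flag counts more than z standard deviations above the baseline mean."""
    return count > mean + z * stdev

print(is_anomalous(18))   # False -- within normal pre-pandemic variation
print(is_anomalous(450))  # True  -- but when the whole company goes remote,
                          # every day looks like this, so the alert is useless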

There’s also a lot of burden on parents (and grandparents) to help their kids do schoolwork from home, which opens up a big attack vector. Urgent messages claiming to be from schools, including assertions that children have been infected, are particularly insidious.

Brombach says most scams fall into two categories: sales of fake cures, and phishing scams designed to commit identity theft. Some of these emails are incredibly believable. There are email alerts from scammers posing as the CDC or WHO promising Covid alerts. Criminals benefit by trading on the trust that big brand names carry.

“There was a recent map that came out tracking coronavirus cases … posing [as] Johns Hopkins, and when people would click on the map it would actually download malware onto their computers to steal their personal information,” Brombach said. “It’s all across the board … They really are difficult to identify.”

NOTE: Organizations like WHO or the CDC will not send you unsolicited texts or emails unless you’ve already signed up for them. But given all the talk about contact tracing apps, it’s easy to understand why a consumer might fall for a text message warning them they’d been near someone who’d tested positive for Covid.
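One simple defensive idea implied by the brand-impersonation scams above: compare a message's sender domain against the domains of the organizations it claims to represent. The sketch below is only an illustrative heuristic of my own devising (the trusted list, threshold, and spoof domain are all invented), not a production phishing filter.

```python
# Toy heuristic: flag sender domains that look confusingly similar to,
# but are not exactly, a trusted brand's domain. Illustrative only.
from difflib import SequenceMatcher

# Hypothetical allow-list of legitimate domains.
TRUSTED = ["who.int", "cdc.gov", "jhu.edu"]

def looks_like_spoof(domain: str, threshold: float = 0.75) -> bool:
    """Flag a domain that is near, but not equal to, a trusted domain."""
    domain = domain.lower()
    for good in TRUSTED:
        if domain == good:
            return False  # exact match: the real domain
        if SequenceMatcher(None, domain, good).ratio() >= threshold:
            return True   # close-but-not-equal: likely lookalike
    return False

print(looks_like_spoof("who.int"))  # False -- the real domain
print(looks_like_spoof("wh0.int"))  # True  -- one-character swap
```

Real mail filters combine many stronger signals (SPF/DKIM/DMARC, URL reputation), but even this crude similarity check shows why "almost right" domains are the phisher's stock in trade.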

“There’s this misconception that people have of, ‘I would never fall for a scam,’ but some of them are so, so believable, so it’s really important to be on your guard as much as possible,” Brombach warned.

Here are the scams she’s most worried about in the near future:

  • Criminals offering help with economic impact payments. In some cases, only an SSN and a birthdate are needed to access government benefits. In other cases, criminals are promising frustrated aid recipients they can help get faster payments.
  • Fake Covid testing sites
  • Price gouging
  • Fake cures and treatments. “It’s so hard for the FDA to keep up with all these claims,” she said. Also, remember that it’s generally legal to sell supplements with broad claims like immune system boosting.

You can hear my conversation with PIRG’s Grace Brombach in the accompanying podcast episode.