Author Archives: BobSulli

Consumers very worried about privacy, but disagree on who’s to blame

Privacy and Security in a Digital World: A Study of Consumers in the United States was conducted to understand the concerns consumers have about their privacy as more of their lives become dependent upon digital technologies. Based on the findings, the report also provides recommendations for how to protect privacy when using sites that track, share and sell personal data. Sponsored by ID Experts, the study surveyed 652 consumers in the US. For the majority of these consumers, privacy of their personal information does matter.

Consumers are very concerned about their privacy when using Facebook, Google and other online tools. Consumers were asked to rate their privacy concerns on a scale of 1 = not concerned to 10 = very concerned when using online tools, devices and online services. Figure 1 presents the very concerned responses (7+ responses).
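The report's binning is simple to reproduce: ratings of 7 or higher on the 10-point scale count as "very concerned." Here is a minimal sketch in Python using invented ratings, not the survey's actual data:

```python
# Toy illustration of the report's binning: on a 1-10 scale,
# responses of 7 or higher are counted as "very concerned".
# The ratings below are invented for illustration only.
ratings = [9, 3, 7, 10, 6, 8, 2, 7, 5, 9]

very_concerned = sum(1 for r in ratings if r >= 7)
share = 100 * very_concerned / len(ratings)
print(f"{share:.0f}% very concerned")  # 6 of the 10 invented ratings are 7+
```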

The survey found that 86 percent of respondents say they are very concerned when using Facebook and Google, 69 percent of respondents are very concerned about protecting privacy when using devices and 66 percent of respondents say they are very concerned when shopping online or using online services.

When asked if they believe that Big Tech companies like Google, Twitter and Facebook will protect their privacy rights through self-regulation, 40 percent of consumers say industry self-regulation will suffice. However, 60 percent say either government oversight alone (34 percent) or a combination of government oversight and industry self-regulation (26 percent) is required.

Following are the most salient findings:

  • The increased use of social media and greater awareness of potential threats to digital privacy have made consumers more concerned about their privacy. In fact, social media websites are the least trusted (61 percent of consumers), followed by shopping sites (52 percent of consumers).

  • Consumers are most concerned about losing their civil liberties and having their identity stolen if personal information is lost, stolen or wrongfully acquired by outside parties (56 percent and 54 percent of respondents, respectively). Only 25 percent of consumers say they are concerned about marketing abuses if their personal information is lost or stolen.
  • Seventy-four percent of consumers say they rarely (24 percent) or never (50 percent) have control over their personal data. Despite this belief, 54 percent of consumers say they do not limit the data they provide when using online services. Virtually all consumers believe their email addresses and browser settings & histories are collected when using their devices, according to 96 percent and 90 percent of consumers, respectively.
  • Home is where the trust is. Forty-six percent of consumers, when asked the one location they trust most when shopping online, banking and other financial activities online, say it is their home. Only 10 percent of consumers say it is when using public WiFi.
  • Consumers believe search engines, social media and shopping sites are sharing and selling their personal data, according to 92 percent, 78 percent and 63 percent of consumers, respectively. To increase trust in online sites, consumers want to be explicitly required to opt-in before the site shares or sells their personal information, according to 70 percent of consumers.
  • Consumers reject advertisers’ use of their personal information to market to them. Seventy-three percent of consumers say advertisers should allow them to “opt-out” of receiving ads on any specific topic at any time, and 68 percent say advertisers should not be able to serve ads based on their conversations and messaging. Sixty-four percent of consumers say they do not want to be profiled unless they grant permission.
  • Online ads and the “creepy” factor. Sixty-six percent of consumers say they frequently (41 percent) or rarely (25 percent) receive online ads that are relevant to them but not based on their online search behavior or publicly available information. Sixty-four percent of consumers say they think it is “creepy” when that happens.
  • Forty-five percent of consumers are not aware that their devices have privacy controls they can use to set their level of information sharing. Of the 55 percent of consumers who are aware, 60 percent say they review and update settings on their computers and 56 percent say they review and update settings on their smartphones.
  • Fifty-four percent of consumers say online service providers should be held most accountable for protecting consumers’ privacy rights when going online. Forty-five percent of consumers say they themselves should be most accountable.

Download the full report at the ID Experts website.

Is smartphone contact tracing doomed to be a privacy killer? Or can tech really help?

An app that tells you if you were exposed to someone with Covid? Sounds great. But, as usual, tech-as-silver-bullet ideas come full of booby-traps. There’s been a lot of scattershot discussion around smartphone contact tracing during the past several months, with privacy advocates saying the harms far outweigh the benefits, but many governments and technology companies are plowing ahead anyway.

But if tech *could* make us safer during this crisis, shouldn’t we try? Under what conditions might it actually be feasible, and fair? Prof. Jolynn Dellinger (Duke and UNC law professor, @MindingPrivacy) has put it all together in a thoughtful analysis, creating a 5-part test that could be considered before implementing contact tracing. Will it *really* work? Will it do more harm than good? Is there enough trust in institutions to ensure it won’t be abused later? Her structure would be useful for the launch of almost any new technology, and it deserves a careful reading on its own. It also deserves more discussion, so I reached out to Prof. Dellinger and Prof. David Hoffman at Duke’s Sanford School of Public Policy and invited them to a brief email dialog with me. I hope you’ll find it illuminating.

Disclosure: I was recently a visiting scholar at Duke, invited by Prof. Hoffman.

TO: David
CC: Jolynn

David: Jolynn’s piece is such an excellent state-of-play analysis. Not to put words in her mouth, but I read it as a polite and smart “this’ll never work.” We can’t even get Covid test results in less than a week; why are we even talking about some kind of sci-fi solution like smartphones that warn each other (or, gulp, tell on each other)? Every dollar and moment of attention spent on contact tracing apps should be redirected to finding more testing reagents, if you ask me. Still, this discussion is inevitable, because the apps – working or not – are coming. So I really welcome her criteria for use.

One thing I’ve thought a lot about, which she mentions in passing: Alert fatigue. I’d *definitely* want a text message if someone I spent time with got Covid, were that possible. But if I got five of these in one day I’d turn it off, especially if they proved to be false alarms. Or if I got none in the first 10 days, I’d probably turn it off, or it would age off my smartphone. Fine-tuning the alert criteria will be a hell of a job.

Meanwhile, my confidence level that data like this would *never* be used to hunt for immigrants, or deadbeat dads, or terrorists, or journalists, is about zero. It’s hard to imagine a technology more ripe for unintended consequences than an app that makes such liberal use of location information.

That being said, I sure wish something like this *could* work. Let’s imagine an alternative universe where the trust, law, and technology were already in place when Covid hit, so tech was ready and willing to ride in and save the day. How do we create that world, if not for now, then at least in time for the next pandemic/terrorist attack/asteroid strike/etc.? We might have to reach back to the days after 9/11, as Jolynn hints, and start a 20-year effort at lawmaking and trust building. The best way to start a journey of 1,000 miles is with a single step. How would we get started?

FROM: David
TO: Bob
CC: Jolynn

David Hoffman

Thanks, Bob. With any of these uses of technology, the first question that should be asked is “what problem are we trying to solve?” Are we using the technology to trace infections? Or are we allowing people to increase their chances that they will be notified if they have had exposure to the virus? Or are we using the technology to have individuals track whether they are having symptoms? Or to enforce a quarantine? Or to have people volunteer to donate plasma? Or just to provide people with up-to-date information about the virus? Depending on the problem we are attempting to solve, we will want to design very different technology implementations. For many of these problems we will likely need to merge other data with whatever data is collected through the technology. Based on what we have seen done in other countries, these other data feeds can include information from manual contact tracers, credit card data, CCTV camera feeds and clinical health care data. Once we define what problem we are trying to solve and what data is necessary to solve it, then we can conduct a privacy assessment to determine the level of the risks.

Many of the smartphone apps that have been created have been described as “contact tracing apps,” but it is not clear to me that they will actually help much with contact tracing. To properly do contact tracing through manual efforts, with technology, or using a combination of both, we will need to have enough data about whether people have contracted COVID-19 (this presumes broad and quick testing) and a mechanism to accurately measure whether people have been in close contact with each other for long enough to warrant a recommendation that they quarantine themselves, get tested, or both. Unfortunately, solutions that rely just on Bluetooth data from smartphones are likely to result in a large number of both false negatives and false positives. However, a system that integrates Bluetooth data with information learned from manual contact tracers has a higher likelihood of success. Manual contact tracing, though, suffers from a lack of centralized guidance, is under-resourced, and in most areas has not made clear what privacy protections will be put in place for the collected data. The US urgently needs a national strategy on contact tracing, with clear recommendations on what data to collect, what technology to use, and what cybersecurity and privacy protections to put in place.
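To make the Bluetooth error problem concrete, here is a toy sketch of my own (not from David, and not an actual contact tracing implementation): a naive exposure rule that infers distance from Bluetooth signal strength (RSSI) and flags contacts within 2 meters for at least 15 minutes. The signal model, thresholds, and readings are all invented for illustration.

```python
# Toy illustration only: a naive exposure rule based on Bluetooth
# signal strength (RSSI). RSSI varies with walls, pockets, and phone
# models, so the same true distance yields very different readings.

def estimate_distance(rssi_dbm, tx_power_dbm=-59, path_loss_exp=2.0):
    """Crude log-distance path-loss model; real-world accuracy is poor."""
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * path_loss_exp))

def flags_exposure(samples, max_dist_m=2.0, min_minutes=15):
    """samples: list of (rssi_dbm, minutes) readings for one encounter."""
    close_minutes = sum(minutes for rssi, minutes in samples
                        if estimate_distance(rssi) <= max_dist_m)
    return close_minutes >= min_minutes

# Two people separated by a wall can still read "close" (false positive),
# while a phone in a bag can read "far" at arm's length (false negative).
through_wall = [(-62, 20)]   # strong signal, but 4 m apart through drywall
in_a_bag     = [(-80, 20)]   # heavily attenuated signal at 1 m
print(flags_exposure(through_wall), flags_exposure(in_a_bag))  # True False
```

The rule misclassifies both encounters, which is the shape of the problem David describes: proximity inferred from radio signals alone is noisy in exactly the conditions that matter.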

FROM: Jolynn
TO: Bob
CC: David

Jolynn Dellinger

Bob, thank you so much for reading the post and for your thoughtful comments and questions. The Covid crisis highlights the numerous ways data and emerging tech could be used to benefit society. Benefitting society while preventing harm to individuals is not an unobtainable goal, but it will take concerted effort. We have long recognized as a society the sensitivity of health information and we are getting there (slowly but surely) on location data. Acting on what we know by taking proactive (as opposed to merely reactive) steps to protect the privacy of personal information – through design, policy and law – is the place to start. A reactive step at this moment is passing a limited law dealing with the privacy of information collected for Covid-19 purposes — and this is absolutely better than nothing. A proactive step would be passing comprehensive privacy legislation that circumscribes collection and use of data more generally and contributes to the creation of an environment in which people can trust companies and governments not to repurpose, exploit or misuse their personal data. (Arguably, because we have waited so long to take obvious necessary legislative action, even a comprehensive privacy law could be broadly characterized as “reactive” at this point, but that is a topic for another post).

Regarding the original post, my personal view is that voluntary digital contact tracing apps are not likely to be worth the existing privacy and security risks at this time, given our failure to implement the other necessary elements of a comprehensive, holistic response to the health crisis and the likelihood that they will not be used by sufficient numbers of citizens to make the notifications helpful or reliable. You mentioned in your introductory comments “feasibility” and the relevance of the dollars spent on contact tracing. I did not cover this topic adequately in my original post but certainly think it is a crucial consideration. Budgets are limited and strained, and every response we choose to invest in necessarily represents another option we do not pursue. So the question of whether to pursue digital contact tracing apps should not be considered in a vacuum but rather should be analyzed in terms of bang for the buck, so to speak. Is an investment in such apps the best, most effective use of our limited funds? And what potentially more useful responses are we foregoing? This question further highlights one of the downsides of the state-by-state approach the US is currently taking. How much more economically efficient might it be to have regional approaches or, sigh, leadership at the federal level? I strongly agree with David’s comment that the US needs a national strategy with clear recommendations on what data to collect, what technology to use, and what cybersecurity and privacy protections to put in place. I would add that these guidelines, like the 5-question analysis proposed in the blog post, should be applied to any and all personal data collected for the purposes of managing the Covid crisis.

TO: Jolynn
CC: David
So is there one thing that readers might urge their leaders to do, or urge technology companies to do, during the next couple of months that might bring us closer to these goals?
It seems like a federal privacy law is probably off the table between now and election day, so that won’t come in time to help with Covid.
Is there something else that might? Could a state pass a law? Could a tech firm adopt a model privacy policy around contact tracing apps? What kind of steps might any of these interested parties take that would at least move us a bit in the right direction? Sadly, I’m quite sure we’ll be dealing with Covid long after November.


FROM: Jolynn
TO: Bob
CC: David

Jolynn Dellinger

State legislatures could pass laws or, in the alternative, Governors might issue executive orders to accomplish immediate goals. States can work to ensure that all local and state level health departments are on the same page and are employing similar privacy and security protections for data collected by manual contact tracers and any digital contact tracing apps or other technologies designed to manage Covid issues.

Tech firms and app developers should certainly have privacy policies in place, but those entities should also make explicit, affirmative guarantees that any data collected for purposes of responding to the Covid crisis (health, location or other personal data) will not be used for any other purpose or monetized, and will not be sold to or shared with any third parties, including law enforcement of any kind. Google and Apple could also bar apps from inclusion in the Google Play Store or App Store if they do not make such explicit commitments.

Want to participate in this dialog? Leave your comments below. We’ll keep the conversation going.

Digital transformation & cyber risk: what you need to know to stay safe

Larry Ponemon

CyberGRX and Ponemon Institute surveyed 581 IT security and 302 C-suite executives to determine what impact digital transformation is having on cybersecurity and how prepared organizations are to deal with that impact. All 883 respondents are involved in managing digital transformation and cybersecurity activities within their organizations. The results show that while digital transformation is widely accepted as critical, the rapid adoption of it is creating significant vulnerabilities for most organizations—and these are only exacerbated by misalignment between IT security professionals and the C-suite.

The full report can be downloaded from the CyberGRX website.

Our research think tank is dedicated to advancing privacy and data protection practices—and these report findings underscore a growing need for such mitigation tools, at a time when we see rapid digital transformation across industries. We chose to study both IT security professionals and C-suite executives to tap into the intersection of two groups making the biggest impact on organizations as they adopt new digital practices.

Here are the key themes that will be reviewed in this report.

Digital transformation is increasing cyber risk.

  • IT security has very little involvement in directing efforts to ensure a secure digital transformation process. Only 37 percent of respondents say the CIO is most involved and only 24 percent of respondents say the CISO is most involved. Both roles trail behind general managers, lines of business managers and data scientists.
  • Eighty-two percent of respondents believe their organizations experienced at least one data breach as a result of digital transformation. Forty-two percent of respondents believe they experienced between two and five cyber events, and 55 percent of respondents say with certainty that at least one of these breaches was caused by a third party.

Digital transformation has significantly increased reliance on third parties, specifically cloud providers, IoT and shadow IT.

  • Fifty-eight percent of respondents say the primary change to their organizations is increased migration to the cloud, which relies upon third parties. This is followed by the increased use of IoT and increased outsourcing to third parties. Despite the increasing risk, 58 percent of respondents say their organizations do not have a third-party cyber risk management program.

Conflicting priorities between IT security teams and the C-suite create vulnerabilities and risk.

  • Seventy-one percent of IT security respondents say the rush to achieve digital transformation increases the risk of a data breach and/or a cybersecurity exploit, compared to 53 percent of C-level respondents. Sixty-three percent of C-level respondents, vs. only 41 percent of IT security respondents, say security measures should not impede the free flow of information and an open business model.

Unless things change, the future doesn’t look any more secure

  • Currently, only 29 percent of respondents say their organizations are very prepared to address the top threats related to digital transformation, and only 43 percent of respondents are very optimistic their organizations will be prepared to reduce the risk of these threats in two years.
  • Organizational size and industry differences have an impact on the consequences of digital transformation. Most industries do not have a security budget for protecting data assets during the digital transformation process.

“If there’s one major takeaway from our research, it’s that digital transformation is not going anywhere. In fact, organizations should expect—and plan for—digital transformation to become more of an imperative over time,” says Dave Stapleton, Chief Information Security Officer, CyberGRX. “For this reason, organizations must consider the security implications of digital transformation and shift their strategy to build in resources that mitigate risk of cyberattacks. Based on these findings, we recommend involving organizations’ IT security teams in the digital transformation process, identifying the essential components for a successful process, educating colleagues on cyber risk and prevention, and creating a strategy that protects what matters most.”

Key findings overview:

The rush towards digital transformation has increased cyber risks.

IT security respondents who are in the trenches are far more cognizant than C-level respondents of the risk if not enough time and resources are allocated to the digital transformation process. Most respondents say their corporate leaders are not aware of how the inability to secure digital assets could significantly hurt their organization’s brand and reputation. Less than half of C-level respondents (49 percent) say senior management recognizes the potential harm to brand and reputation.

Conflicting priorities between IT security teams and the C-suite create vulnerabilities and risk. Only 16 percent of respondents say IT security and lines of business are fully aligned with respect to achieving security during the digital transformation process. As a result, there are gaps in perceptions about risk to the digital transformation process. Specifically, far more IT security respondents (64 percent) than C-level respondents (41 percent) say that the digital economy significantly increases risk to high value assets such as IP and trade secrets. Sixty-three percent of C-level respondents, vs. only 41 percent of IT security respondents, say security measures should not impede the free flow of information and an open business model.

Organizations are not protecting what matters most. Analytics and private communications are the digital assets most difficult to secure, according to 51 percent and 44 percent of respondents, respectively. However, only 35 percent of respondents say analytics is appropriately secured and only 38 percent of respondents say private communications are secured. Surprisingly, only 25 percent of respondents say consumer data, which is considered highly sensitive and confidential, is appropriately secured. Yet this data is considered relatively easy to secure: only 10 percent of respondents say it is difficult to secure.

A secure digital transformation process is hampered by a lack of expertise and a lack of visibility. Fifty-three percent of respondents say a lack of in-house expertise is the most significant barrier to achieving a secure digital transformation process, followed by insufficient visibility into people and business processes (51 percent of respondents).

Organizations have experienced multiple data breaches as a result of digital transformation. Eighty-two percent of respondents believe their organizations experienced at least one data breach during the digital transformation process. Forty-two percent of respondents say their organizations could have experienced between two and five data breaches, and 22 percent say their organizations could have experienced between six and ten data breaches. Fifty-five percent of respondents say with certainty that at least one of these breaches was caused by a third party.

Digital transformation has significantly increased reliance on third parties, specifically cloud providers, IoT and shadow IT.

Current tools or solutions to manage third-party risk are still not considered effective. Slightly more than half (51 percent) of organizations represented in this research have a strategy for achieving digital transformation and of these 73 percent of respondents say their strategy involves assessing third-party relationships and vulnerabilities. Forty-two percent of respondents say their organizations have a third-party risk management program and assessments are the most commonly used solution. However, when asked if they are effective, 53 percent say the tools and solutions used are only somewhat effective (28 percent) or not effective (25 percent).

A secure cloud environment is a significant challenge to achieving a secure digital transformation process. Sixty-three percent of respondents say their organizations have difficulty in ensuring there is a secure cloud environment, and 54 percent of IT security respondents say the ability to avoid security exploits is a challenge. Fifty-six percent of C-level executives say their organizations find it a challenge to ensure third parties have policies and practices that ensure the security of their information.

Challenges for securing the future of digital transformation

Budgets are and will continue to be inadequate to secure the digital transformation process. Only 35 percent of respondents say they have a budget dedicated to securing digital transformation, and even those budgets fall short. Because of the risks created by digital transformation, respondents believe the share of the IT security budget allocated to digital transformation should nearly double, from an average of 21 percent today to 37 percent. In two years, however, respondents expect the average allocation to reach only 37 percent, when ideally it should be 45 percent.

More progress needs to be made in the ability to mitigate cyber threats. The top three threats respondents are most concerned about are system downtime, cybersecurity attacks and data breaches caused by third parties. Currently, only 29 percent say they are very prepared to address these threats. In two years, only 43 percent are very optimistic their organizations will be very prepared to reduce the risk of these threats.

A secure digital transformation process depends upon the expertise of the IT security team, yet that team is not very influential. Today, only 35 percent of respondents say IT security is very influential. In the next two years, their influence increases only slightly: 43 percent of respondents say IT security will be very influential in two years.

Digital transformation impacts industries differently.

Across industries digital transformation has significantly increased reliance on third parties, specifically cloud providers, IoT and shadow IT. Respondents in healthcare, industrial and retail say the most significant change caused by digital transformation is the increased migration to the cloud. The public sector and healthcare industries are less likely to say the increased use of IoT has changed their organizations. Retail and financial services respondents are most likely to say increased outsourcing to third parties as a result of digital transformation has had an impact.

Industrial manufacturing is most likely to have a strategy for achieving digital transformation. Healthcare is least likely to have a strategy. As part of their strategy, retailers are most likely to include assessing third-party relationships and vulnerabilities, including supply chain partners.

Perceptions of digital transformation risk vary among industries. Leaders in services and financial services are most likely to recognize that digital transformation creates IT security risk. Respondents in the industrial manufacturing sector are least likely to say their leaders recognize the risk.

Retail, public sector and services are the industries most concerned about the rush to achieve digital transformation. Sixty-eight percent of respondents in retail and 65 percent of respondents in both services and the public sector say the rush to achieve digital transformation increases the risk of a data breach and/or a cybersecurity exploit.

A successful digital transformation process requires IT security to secure digital assets without stifling innovation. Because digital transformation is considered essential, most industries say that IT security should support innovation with minimal impact on the goals of digital transformation. Eighty-three percent of respondents in financial services say such a balance is essential.

Most industries do not have a security budget for protecting data assets during the digital transformation process. Despite the need to have the necessary expertise and technologies to ensure a secure digital transformation process, industries are not allocating funds specifically to digital transformation. Healthcare organizations are most likely to have funds for protecting data assets during the digital transformation process.

Organizational size affects the digital transformation process

Following are the most salient differences according to organizational size. Our analysis looked at organizations with a headcount of less than 5,000 and greater than 10,000.

For smaller organizations, the greatest impact of digital transformation comes from the increased migration to the cloud and the use of IoT. Larger organizations are seeing the greatest impact from increased outsourcing to third parties.

Larger organizations are more likely to have a strategy for digital transformation. Larger organizations (54 percent of respondents) are more likely than smaller organizations (43 percent of respondents) to have a strategy for achieving digital transformation. As part of that strategy, 80 percent of respondents in larger organizations vs. 69 percent of respondents in smaller organizations are assessing third-party relationships and vulnerabilities, including supply chain partners.

Larger organizations are far more likely to recognize the risk of digital transformation. Seventy-nine percent of respondents in larger organizations vs. 61 percent of respondents in smaller organizations believe the rush to achieve digital transformation increases the risk of a breach and/or cybersecurity exploit. Larger organizations are less likely to say that it is important to balance security with the need to enable the free flow of information. Seventy-two percent of respondents in larger organizations say digital transformation increases risk to high value assets such as intellectual property, trade secrets and so forth.

Smaller organizations are more likely to be vulnerable to a cyberattack or data breach following digital transformation. Seventy-one percent of respondents in smaller organizations and 64 percent of respondents in larger organizations believe digital transformation makes a data breach or cyberattack more likely. Larger organizations are more likely to say the rush to produce and release apps, the increased use of shadow IT and increased migration to the cloud have made their organizations more vulnerable following digital transformation.

Characteristics of organizations with mature digital transformation programs

In this study, we analyzed the responses from those organizations that self-reported they have achieved a mature digital transformation process. Twenty-three percent of respondents (131) self-reported that their organizations’ core digital transformation activities are deployed, maintained and refined across the enterprise. We compare the findings from this group to those of the remaining 77 percent (450 respondents).

Mature organizations are more likely to have strategies to protect data assets and assess third-party relationships. Fifty-six percent of the most mature organizations have a strategy for achieving digital transformation. In contrast, 47 percent of the other respondents say they have such a strategy. Those in mature organizations say their strategies are more likely to protect data assets and assess third-party relationships and vulnerabilities, including supply chain partners.

Mature organizations are more likely to understand and anticipate the risks associated with digital transformation. Respondents in mature organizations are far more likely than those in other organizations to make reducing third-party risk a priority (78 percent vs. 51 percent). Mature organizations are also more likely to recognize that the digital economy increases the risk to high value assets such as intellectual property, trade secrets and so forth (78 percent vs. 60 percent). Mature organizations are also more likely to believe in the importance of balancing the security of high value assets with enabling the free flow of information and an open business model.

Digital transformation is considered essential to the company’s business. Mature organizations are more likely to believe in the importance of IT security supporting innovation with minimal impact on the goals of digital transformation (90 percent vs. 81 percent) and that digital transformation is essential to the company’s business (84 percent vs. 79 percent).

All organizations struggle with having an adequate budget for protecting data assets during the digital transformation process. Forty-three percent of respondents of mature organizations vs. 34 percent of other organizations say their budgets are adequate for protecting data assets during the digital transformation process.

For more detailed findings, please download the full report from the CyberGRX website.

How to detect fake anything in a zero trust world

Bob Sullivan

Fake News is stoking violence and helping destroy our democracy. Fake pills make people sick and can even kill them. Fake foods, like fake olive oil, or mislabeled fish, rip consumers off and steal profits from honest companies. The world is becoming overrun by fake everything, says Avivah Litan, renowned fraud analyst at consultancy firm Gartner. Counterfeit products are a $3 trillion problem, she says…But today’s topic is even bigger than fraud. It’s about a threat to reality itself.

In a new paper, called How to Detect Fake Anything in a Zero Trust World, Litan argues that a mix of technology and human intelligence can beat back this problem of fake everything. But only if someone — consumers? government regulators? corporations? — is willing to pay the price. I spoke with her recently: You can listen to our conversation at the link below.

A few highlights from our talk:

Imagine being able to scan a barcode on a piece of salmon at the supermarket and see the fish’s journey from the river where it was caught, to the port where it was dropped off, to the plane that took it to your city, to the truck that took it to your store.  That’s the promise of blockchain, which could help consumers decide they prefer fish caught from a specific place. They could also demand it be caught in a certain way, and report fraud or mislabeled products.  Litan gets most fired up talking about fake olive oil. She thinks blockchain public audit trails could help stop that, too.
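The audit trail Litan describes can be pictured as a chain of hash-linked records. The minimal Python sketch below is purely illustrative (the record fields, place names, and function names are our own assumptions, not any real supply-chain system): because each step embeds the hash of the previous one, quietly rewriting any leg of the fish’s journey breaks the chain.

```python
import hashlib
import json

def record_hash(record: dict) -> str:
    """Hash a provenance record deterministically (sorted keys, SHA-256)."""
    return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

def append_step(chain: list, step: dict) -> list:
    """Append a supply-chain step, linking it to the previous record's hash."""
    prev = chain[-1]["hash"] if chain else None
    entry = {"step": step, "prev_hash": prev}
    entry["hash"] = record_hash({"step": step, "prev_hash": prev})
    chain.append(entry)
    return chain

def verify(chain: list) -> bool:
    """Re-derive every hash; any tampered step breaks the chain."""
    prev = None
    for entry in chain:
        if entry["prev_hash"] != prev:
            return False
        if entry["hash"] != record_hash({"step": entry["step"], "prev_hash": prev}):
            return False
        prev = entry["hash"]
    return True

# A hypothetical salmon's journey, one record per leg
chain = []
for step in [{"event": "caught", "place": "Copper River"},
             {"event": "landed", "place": "Cordova port"},
             {"event": "shipped", "place": "air cargo"}]:
    append_step(chain, step)

assert verify(chain)
chain[1]["step"]["place"] = "unknown farm"   # tamper with one leg of the journey
assert not verify(chain)                     # verification now fails
```

A real blockchain adds distributed consensus on top of this, so no single party (the fishery, the shipper, the store) can rewrite the chain and recompute the hashes alone.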

Using these tools to cast a wider net — pun intended — Litan thinks tech could help consumers/citizens regain the grasp of reality they are losing. Fake news and fake cures have been a problem for years, but the Covid-19 pandemic has brought the issue into sharp relief.

“The hope is there is no shortage of innovation in this space,” she tells me. “The problem is (companies) won’t do it unless (consumers) pay a premium.”

Litan is perhaps the media’s most-quoted expert on credit card fraud and identity theft, dating back to the early years of credit card database hacking and the rise of fraud-fighting software.  She sees some parallels between the race for banks and retailers to stop credit card hacking — which costs the companies billions — and their relative indifference to identity theft — in which consumers bear a lot of the cost.

But the rise of fake everything, and the collapse of trust worldwide, is a far bigger problem. I’ve started calling it the trust market crash.  It’s an enormous challenge in a world of commerce that’s built on trust.

Innovation will be critical, she said, because government regulators — well-intentioned as they may be — won’t be able to keep up.

“The world is moving way too fast for our political systems,” she said.  Current solutions fall far short. What is an Italian olive oil consumer to do, outside of growing their own olives, Litan joked.   “Hopefully the technology will evolve where you have the solutions at your fingertips.”

Her paper offers the “Gartner Model for Truth Assessment,” with different blended technology and human solutions for the problem of fake.  But much more needs to be done.

“The best hope is a consumer revolution,” she said. “We’ve had enough of this fake news, (people) shoving all this fake stuff down our throats.”

Avivah has posted a short blog entry about her paper. The paper itself is behind Gartner’s paywall.

The 2020 Study on the State of Industrial Security

Larry Ponemon

Ponemon Institute is pleased to present the findings from The 2020 State of Industrial Security Study, sponsored by TÜV Rheinland. The purpose of the research is to understand cyber risks across a broad spectrum of industries and the steps organizations are taking to reduce cyber risk in the operational technology (OT) environment.

Ponemon Institute surveyed 2,258 cybersecurity practitioners in the following industries: automotive, oil and gas, energy and utilities, health and life science, industrial manufacturing and logistics and transportation. All respondents are responsible for securing or overseeing cyber risks in the OT environment and are aware of how cybersecurity threats could affect their organization.

In the context of this research, operational technology (OT) is the hardware and software dedicated to detecting or causing changes in physical processes through direct monitoring and/or control of physical devices. Simply put, OT is the use of computers to monitor or alter the physical state of a system, such as the control system for a power station. The term has become established to demonstrate the technological and functional differences between traditional IT systems and industrial control system environments.

The OT environment is vulnerable to cyberattacks: 57 percent of respondents say their organizations’ security operations and/or business continuity management teams believe there will be one or more serious attacks within the OT environment. Almost half of respondents say it is difficult to mitigate cyber risks across the OT supply chain (49 percent) and that cyber threats present a greater risk in the OT than in the IT environment (48 percent).

The following findings reveal the cybersecurity vulnerabilities in the OT environment.  

  • OT and IT security risk management efforts are not aligned. Sixty-three percent of respondents say OT and IT security risk management efforts are not coordinated, making it difficult to achieve a strong security posture in the OT environment. The management of OT security is painful because of the lack of enabling technologies in OT networks, complexity and insufficient resources. 
  • On average, organizations had four security compromises that resulted in the loss of confidential information or disruption to OT operations. Forty-seven percent of respondents say OT technology-related cybersecurity threats have increased in the past year. The top three cybersecurity threats are phishing and social engineering, ransomware and DNS-based denial of service attacks. One-third of respondents say such exploits have resulted in the loss of OT-related intellectual property. 
  • The majority of organizations have not achieved a high degree of cybersecurity effectiveness. Less than half of respondents say they are very effective in responding to and containing a security exploit or breach (48 percent), continually monitoring the infrastructure to prioritize threats and attacks (47 percent) and pinpointing sources of attacks and mobilizing the right set of technologies and resources to remediate the attack (47 percent of respondents). 
  • To minimize OT-related risks organizations need to replace outdated and aging connected control systems in facilities, according to 61 percent of respondents. More than half (52 percent of respondents) say vulnerable software is creating risks in the OT environment. 
  • Not enough expertise and budget are often cited as reasons for not having a strong security posture in the OT environment. Organizations represented in this research are spending annually an average of $64 million on cybersecurity operations and defense (OT and IT combined). An average of 26 percent of this budget or approximately $17 million is allocated to the security of OT assets and infrastructure and an average of 17 percent or approximately $10 million is allocated specifically to OT cybersecurity. Respondents say their OT budgets are inadequate to properly execute their cybersecurity strategy. 
  • Accountability for executing a successful cybersecurity strategy. Respondents were asked who is most accountable for executing a successful cybersecurity strategy. Only 20 percent of respondents say it is the OT security leader followed by the CIO/CTO (18 percent) and the IT security leader (17 percent). 
  • Organizations are lagging behind in adopting advanced security technologies. Only 38 percent of respondents say their organizations are using automation, machine learning and artificial intelligence to monitor OT assets. The majority of companies are not integrating security and privacy by design in the engineering of OT control systems.

To read the full report, visit TUV Rheinland’s website.

If we’re going to talk about Section 230, let’s get it right

Now we’ve started something

Bob Sullivan

With President Donald Trump threatening retribution against Twitter with an executive order, you’re going to hear a lot about Section 230 this week — and maybe for many weeks. The ensuing discussion could shake the Internet to its very roots.  That’s going to make legal scholars very happy, but it might seem like a dizzying discussion for most.  That’s by design. Interested parties are conflating all kinds of big ideas to muddy the waters here: the First Amendment, innovation, bias, abuse, millions of followers, billions of dollars.  I’m going to try to sort it out for you here.  Who am I to do that? Well, I’m old enough to remember when the Communications Decency Act and its Section 230 was passed into law.

But if you are going on this journey with me, here are the rules: Nothing is as simple nor as absolute as it sounds.  Free speech isn’t limitless.  “Speech” isn’t even what you think it is. Immunity isn’t limitless. The First Amendment doesn’t generally apply to private companies…most of the time.  But in a rare confluence of events, there are reasons for both conservatives and liberals to take a good long look at updating and fixing Section 230, which has been the source of much profit for corporations and much pain for Internet users since it became law in 1996.

(And if you really want to understand Section 230, I recommend reading this very readable 25-page academic paper titled The Internet As a Speech Machine and Other Myths Confounding Section 230 Speech Reform. Authors Danielle Keats Citron and Mary Ann Franks do a great job explaining the history of the law and the myths that hold America back from reasonable reform. Or, even better, consider The Twenty-Six Words That Created the Internet, a book by Jeff Kosseff, all about Section 230.)

Section 230 was written at the time of Prodigy and Compuserve, when online services were mainly text-based chat tools, and virtually no consumers used websites.  These services had a problem: Were they liable for everything users said? Could they be sued for defamation, or charged criminally, if users misbehaved? To use the kind of shorthand that journalists love but lawyers hate, should they be treated like publishers of the content — akin to a newspaper editor or book publisher — or mere distributors, akin to a newsstand owner?  Courts were split on the matter, and that terrified tech firms. Imagine the liability a company like Google, or Facebook, or America Online, would face if it could be charged with every crime committed on its service.

The defensive shorthand I was taught at my startup, inaccurate as it might be, was this: When a tech company actively moderates user content, it becomes a publisher and increases liability. When a tech company just shoves the stuff automatically out into the world, it’s merely a newsstand, a distributor.  So: Don’t touch!

That free-for-all worked about as well as you might imagine (Porn! Stolen goods! Harassment!) so lawmakers tried to help by passing Section 230.  It sounds straightforward: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” The idea was to shield online service providers who tried to do the right thing (stop harassment and other crimes) from liability.  The law was actually meant to encourage content moderation. It gave service providers a shield against responsibility for third-party content.  But what a winding road it’s been since then.

First, the good: Plenty of folks see Section 230 as the First Amendment of the Internet. Scholar Eric Goldman actually argues that it’s better than the First Amendment. It’s inarguable that online services have thrived since then, and plenty of them credit Section 230.

However, this simple you’re-not-responsible-for-third-party-content rule has been extended by courts and corporations far beyond its original intention. Recall, it was written right about the time Amazon was invented.  The Internet was nearly 100% text-based speech, digital conversation, at the time.  Today, it’s Zoom and car buying and television and it elects a U.S. president.

So that leads us to the moment at hand. The Internet is awash in disinformation, harassment, crime, racism…the dark side of humanity thrives there.  Plenty of people have been driven from its various platforms through doxxing, gender abuse, or simple exhaustion from nasty arguments. I argue all the time that it has made us dumber as a people, offering the Flat Earth Movement as proof. In short, the Internet sucks (More than a decade later, this is still a great read). As Citron and Franks say:

“Those with political ambitions are deterred from running for office. Journalists refrain from reporting on controversial topics. Sexual assault victims are discouraged from holding perpetrators accountable…. An overly capacious view of Section 230 has undermined equal opportunity in employment, politics, journalism, education, cultural influence, and free speech. The benefits Section 230’s immunity has enabled surely could have been secured at a lesser price.”

For better and worse, this is a good time to reconsider what Section 230 hath wrought.

For a quick moment, here are some obstacles to the discussion, forged by confusion. You, and I, and President Trump, can’t have our First Amendment right to free speech suppressed by Twitter or Facebook or Instagram. Generally, the First Amendment applies to governments, not private enterprises.  Facebook, as any true conservative or libertarian would tell you, is free to do what it wants with its company, and the president is free to not use it.  In fact, the government compelling a social media company to say certain things or not say other things — to argue it could not add a link for fact-checking — is a rather obscene violation of the company’s First Amendment rights.

Even on this fairly clear point, there is some room for discussion, however.  In Canada, courts have ruled that social media is so ubiquitous that it can be akin to a public square, according to Sinziana Gutiu, a Vancouver privacy lawyer.  So might the US someday feel that cutting off someone’s Twitter account is akin to cutting off their telephone line or electricity? Perhaps.  It sure seems less strained to suggest President Trump simply find another platform to use for his 280-character messages.

And even on this issue of speech, there is confusion. U.S. courts have broadly expanded the definition of speech far beyond talking, publishing pamphlets, or writing posts on an electronic bulletin board.  Commercial activity can be considered speech now.  And that expanded definition has helped websites argue for Section 230 immunity when their members are committing illegal acts — such as facilitating the sale of counterfeit goods, or guns to criminals known to be evading background checks.

Immunity often encourages bad behavior, a classic “moral hazard,” as Franks has written. Set aside fake autographs and illegally purchased domestic violence murder weapons for the moment — the Internet is drowning in antagonism, bots, and harassment that has made it inhospitable for women and men of good faith. It rewards extremism.  It is unhealthy for people and society. It’s not going to fix itself. Citron and Franks again:

“Market forces alone are unlikely to encourage responsible content moderation. Platforms make their money through online advertising generated when users like, click, and share. Allowing attention-grabbing abuse to remain online often accords with platforms’ rational self-interest. Platforms “produce nothing and sell nothing except advertisements and information about users, and conflict among those users may be good for business.” On Twitter, ads can be directed at users interested in the words “white supremacist” and “anti-gay.” If a company’s analytics suggest that people pay more attention to content that makes them sad or angry, then the company will highlight such content. Research shows that people are more attracted to negative and novel information. Thus, keeping up destructive content may make the most sense for a company’s bottom line.”

Facebook profits massively off all this social destruction. We learned this week that employees inside Facebook have come up with some very clever technological solutions to this problem, only to be kneecapped by Mark Zuckerberg, clearly drunk on conveniently-profitable take-no-responsibility libertarian ideals.

What’s the solution? For sure, that’s much harder.  Citron and Franks suggest adding a simple “reasonable” requirement on companies like Facebook, meaning they have to take reasonable steps to police users in order to maintain Section 230 immunity. Reasonable is a difficult standard, possibly leading to endless ’round-the-rosie’ debate, but it is a common standard in U.S. law. Facebook’s engineers came up with notions worth trying, detailed in this Wall Street Journal story, such as shifting extreme discussions into sub-groups.  The firm could also stop giving extra algorithm juice to obsessives who post 1,000 times a day.

As always, a mix of innovation and smart rules that balance interests is needed.

It won’t be easy, but we have to try. So, it’s good that President Trump has shined a light on Section 230. The discussion is long overdue, as is the will to act. Will the discussion be productive? Probably not if it happens on Twitter. Definitely not if it’s focused on an imaginary social media bias against Trump or Trump’s 80 million followers, who clearly have no trouble finding each other. Instead, let’s focus on making the world safe again for reasonable people.

The state of endpoint security risk: it’s skyrocketing

Larry Ponemon

The Third Annual Study on the State of Endpoint Security Risk, sponsored by Morphisec, reveals that organizations are not making progress in reducing their endpoint security risk, especially against new and unknown threats. In fact, in this year’s research, 68 percent of respondents report that their company experienced one or more endpoint attacks that successfully compromised data assets and/or IT infrastructure over the past 12 months, an increase from 54 percent of respondents in 2017.

A webinar on the report is available for free at Morphisec’s website.

“Corporate endpoint breaches are skyrocketing and the economic impact of each attack is also growing due to sophisticated actors bypassing enterprise antivirus solutions,” said Larry Ponemon, Chairman and Founder of Ponemon Institute. “Over half of cybersecurity professionals say their organizations are ineffective at thwarting major threats today because their endpoint security solutions are not effective at detecting advanced attacks.”

Ponemon Institute surveyed 671 IT security professionals responsible for managing and reducing their organization’s endpoint security risk. Companies represented in this research are very concerned about the significant increase in new and unknown threats against their organization (an increase from 69 percent of respondents in 2017 to 73 percent in 2019). On a positive note, since 2017 more respondents say their organizations have ample resources to minimize IT endpoint risk due to infection or compromise (an increase from 36 percent to 44 percent).

Following are 10 key findings from this research.

  1. The frequency of attacks against endpoints is increasing and detection is difficult. Sixty-eight (68) percent of respondents say the frequency of attacks has increased over the past 12 months. More than half of respondents (51 percent) say their organizations are ineffective at surfacing threats because their endpoint security solutions are not effective at detecting advanced attacks.
  2. The cost of successful attacks has increased from an average of $7.1 million to $8.94 million. Costs due to the loss of IT and end-user productivity and theft of information assets have increased. The cost of system downtime has decreased significantly since 2017. 
  3. New or unknown zero-day attacks are expected to more than double in the coming year. The frequency of existing or known attacks is expected to decrease significantly from 77 percent to an anticipated 58 percent in the coming year. In contrast, the frequency of new or unknown zero-day attacks is expected to increase to 42 percent next year. 
  4. An average of 80 percent of successful breaches are new or unknown “zero-day attacks.” These attacks either involved the exploitation of undisclosed vulnerabilities or the use of new/polymorphic malware variants that signature-based detection solutions do not recognize.
  5. Zero-day attacks continue to increase in frequency. In addition to being more successful, zero-day attacks have also become more prevalent. As a result, organizations are investing more budget to protect against these threats. 
  6. Most organizations either use or plan to use the Microsoft Windows Defender antivirus solution. Eighty (80) percent of respondents say they currently have (34 percent) or plan to have in the near future (46 percent) the Microsoft Windows Defender antivirus solution. The top two reasons are to reduce the number of separate endpoint security tools and that the solution is on par with other antivirus tools. 
  7. The challenges in the use of traditional antivirus solutions are a high number of false positives and security alerts, inadequate protection and too much complexity. Fifty-six (56) percent of respondents say their organizations replaced their endpoint security solution in the past two years. Of these respondents, 51 percent say they kept their traditional antivirus solution but added an extra layer of protection. According to these respondents, the challenges with traditional antivirus solutions are a high number of false positives and security alerts, inadequate protection and too much complexity in the deployment and management of these solutions. 
  8. Antivirus products missed an average of 60 percent of attacks. Confidence in traditional antivirus (AV) solutions continues to drop. On average, respondents estimate their current AV is effective at blocking only 40 percent of attacks. In addition to the lack of adequate protection, respondents cite high numbers of false positives and alerts as challenges associated with managing their current AV solutions. 
  9. The average time to apply, test and fully deploy patches is 97 days. The findings reveal the difficulties in keeping endpoints effectively patched. Forty (40) percent of respondents say their organizations are taking longer to test and roll out patches in order to avoid issues and assess the impact on performance.
  10. Ineffectiveness and lack of in-house expertise are reasons not to use an EDR. Sixty-four (64) percent of respondents say their organizations do not have an EDR. These respondents cite its ineffectiveness against new or unknown threats (65 percent), followed by a lack of staff to support it (61 percent).

Go to Morphisec’s website to read the full report.


Covid-19: The Golden Age of Scams

Bob Sullivan

Nearly 100,000 scam-ready domains have been registered since the Covid-19 pandemic began. It’s the Super Bowl for digital criminals, the golden age of computer fraud. Why? Because a con artist’s best friend is urgency.

We are living through the golden age of scams right now, so I’m going to do an ongoing series about coronavirus crimes.  First up: My conversation with Grace Brombach, who just wrote a report on scams (PDF) for the U.S. Public Interest Research Group.

“We are dealing with so much fear and confusion right now,” Brombach tells me. “People are being put in a very difficult situation where they don’t really know what to believe.”

Of particular worry: Homebound computer users are being told to download all kinds of new software and fill out forms full of personal information, doing things that ordinarily they would never do. For example: Employees are working from home, Zooming everywhere.  Think about how believable an email might be that appeared to come from an HR department, promising new video conference guidelines or requiring new software installation.

Making matters worse, as cybersecurity expert Harri Hursti has told me, a lot of corporate security software is designed to look for unusual patterns in network traffic — like massive downloads or a surprising number of remote logins. Everything is unusual now.
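As a rough illustration of the pattern-based detection Hursti describes, the sketch below flags a day whose remote-login count deviates sharply from its historical baseline. This is a simplified z-score check of our own devising, not any vendor’s actual logic; the point is that when "everything is unusual," a baseline like this becomes useless overnight.

```python
from statistics import mean, stdev

def is_anomalous(history, today, threshold=3.0):
    """Flag today's count if it sits more than `threshold` standard
    deviations away from the historical mean."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return today != mu
    return abs(today - mu) / sigma > threshold

# Hypothetical pre-pandemic daily remote-login counts for one office
remote_logins = [12, 15, 11, 14, 13, 12, 16]

assert not is_anomalous(remote_logins, 14)   # a normal day passes
assert is_anomalous(remote_logins, 240)      # everyone suddenly working from home
```

The second assertion is exactly the lockdown scenario: a legitimate surge that looks identical, statistically, to an attack, which is why such tooling struggled in 2020.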

In addition, there’s also a lot of burden on parents (and grandparents) to help their kids do schoolwork from home. That opens up a big attack vector.   Urgent messages claiming to be from schools, including assertions that children have been infected, are particularly insidious.

Brombach says most scams fall into two categories: Sale of false cures; and phishing scams designed to commit ID theft. Some of these emails are incredibly believable. There are email alerts from scammers posing as the CDC or WHO promising Covid alerts. Criminals benefit from trading on the trust big brand names have.

“There was a recent map that came out tracking coronavirus cases … posing from Johns Hopkins and when people would click on the map it would actually download malware onto their computers to steal their personal information,” Brombach said. “It’s all across the board…They really are difficult to identify.”

NOTE: Organizations like WHO or the CDC will not send you unsolicited texts or emails unless you’ve already signed up for them.  But given all the talk about contact tracing apps, it’s easy to understand why a consumer might fall for a text message with an alert warning them they’d been near someone who’d tested positive for Covid.

“There’s this misconception that people have of, ‘I would never fall for a scam,’ but some of them are so, so believable, so it’s really important to be on your guard as much as possible,” Brombach warned.

Here are the scams she’s most worried about in the near future:

  • Criminals offering help with economic impact payments. In some cases, only an SSN and a birthdate are needed to access government benefits.  In other cases, criminals are promising frustrated aid recipients they can help get faster payments.
  • Fake Covid testing sites
  • Price gouging
  • Fake cures and treatments. “It’s so hard for the FDA to keep up with all these claims,” she said. Also, remember that it’s generally legal to sell supplements with broad claims like immune system boosting.

You can hear my conversation with PIRG’s Grace Brombach by clicking play below or by clicking on this link

The economic value of prevention in the cybersecurity lifecycle

Larry Ponemon

Ponemon Institute is pleased to present the findings of The Economic Value of Prevention in the Cybersecurity Lifecycle, sponsored by Deep Instinct. The cybersecurity lifecycle is the sequence of activities an organization experiences when responding to an attack. The five high-level phases are prevention, detection, containment, recovery and remediation.

We surveyed 634 IT and IT security practitioners who are knowledgeable about their organizations’ cybersecurity technologies and processes. Within their organizations, most of these respondents are responsible for maintaining and implementing security technologies, conducting assessments, leading security teams and testing controls.

“If we could quantify the cost savings of the prevention of attacks, we would be able to increase our IT security budget and debunk the C-suite’s myth that AI is a gimmick. I believe AI is critical to preventing attacks.” —  CISO, financial services industry.

The key takeaway from this research is that when attacks are prevented from entering and causing any damage, organizations can save resources, costs, damages, time and reputation.

To determine the economic value of prevention, respondents were first asked to estimate the cost of one of the following five types of attacks: phishing, zero-day, spyware, nation-state and ransomware. They were then asked to estimate what percentage of the cost is spent on each phase of the cybersecurity lifecycle, including prevention. Because there are fixed costs associated with the prevention phase of the cybersecurity lifecycle, such as in-house expertise and investments in technologies, there will be a cost even if the attack is stopped before doing damage. For example, the average total cost of a phishing attack is $832,500 and of that 82 percent is spent on detection, containment, recovery and remediation. Respondents estimate 18 percent is spent on prevention. Thus, if the attack is prevented the total cost saved would be $682,650 (82 percent of $832,500).
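The report’s savings arithmetic above can be reproduced in a few lines (the figures are the report’s; the function name is our own):

```python
def prevention_savings(total_cost: float, prevention_share: float) -> float:
    """Cost avoided when an attack is stopped at the prevention phase.
    Prevention costs (in-house expertise, tooling) are fixed and are
    spent whether or not the attack succeeds, so only the remaining
    phases -- detection, containment, recovery, remediation -- are saved."""
    return total_cost * (1 - prevention_share)

phishing_total = 832_500  # average total cost of a phishing attack (report figure)
savings = prevention_savings(phishing_total, prevention_share=0.18)
print(f"${savings:,.0f}")  # $682,650 -- matching the report's figure
```

The same formula applies to the other four attack types in the study; only the total cost and the prevention share change.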

Seventy percent of respondents (34 percent + 36 percent) believe the ability to prevent cyberattacks would strengthen their organization’s cybersecurity posture. However, 76 percent of respondents (40 percent + 36 percent) say they have given up on improving their ability to prevent an attack because it is too difficult to achieve.

The following are the most noteworthy findings from the research.

  • Organizations are most effective in containing cyberattacks. Fifty-five percent of respondents say their organizations are very or highly effective at containing attacks in the cybersecurity lifecycle. Less than half of respondents (46 percent) say their organizations are very or highly effective in preventing cyberattacks. Organizations are also allocating more of the IT security budget to technologies and processes in the containment phase than in the prevention phase. 
  • Prevention of a cyberattack is the most difficult to achieve in the cybersecurity lifecycle. Eighty percent of respondents say prevention is very difficult to achieve, followed by recovery from a cyberattack. The reason for the difficulty is that it takes too long to identify an attack. Other reasons are outdated or insufficient technologies and lack of in-house expertise. The technology features considered most important are the ability to prevent attacks in real-time and based on different types of files. 
  • Automation and advanced technologies increase the ability to prevent cyberattacks. Sixty percent of respondents say their organizations currently deploy AI-based technologies for cybersecurity or plan to deploy AI within the next 12 months. Sixty-seven percent of respondents believe the use of automation and advanced technologies would increase their organizations’ ability to prevent cyberattacks. Further, 67 percent of respondents expect to increase their investment in these technologies as they mature. 
  • Deep learning is a form of AI and is inspired by the brain’s ability to learn. In the context of this research, deep learning is defined as follows: once a human brain learns to identify an object, its identification becomes second nature. Deep learning’s artificial brains consist of complex neural networks and can process high amounts of data to get a profound and highly accurate understanding of the data analyzed. The top three reasons to incorporate a deep-learning-based solution are to lower false positive rates, increase detection rates and prevent unknown first-seen cyberattacks. 
  • Perceptions that AI could be a gimmick and a lack of in-house expertise are the two main challenges to deploying AI-based technologies. Fifty percent of respondents say that when they try to gain support for the adoption of AI, they encounter internal resistance because the technology is considered a gimmick. This is followed by the inability to recruit personnel with the necessary expertise (49 percent of respondents).
  • Organizations are making technology investments, based on the wrong metrics, that do not strengthen their cybersecurity posture. Fifty percent of respondents say their organizations are wasting limited budgets on investments that don’t improve their cybersecurity posture. The primary reasons for these failures are system complexity, personnel issues and vendor support issues. Another reason is that most organizations justify investments using return on investment (ROI) rather than the technology’s ability to increase prevention and detection rates. 
  • IT security budgets are considered inadequate. Only 40 percent of respondents say their budgets are sufficient to achieve a strong cybersecurity posture. The average total IT budget is $94.3 million, of which 14 percent, or approximately $13 million, is allocated to IT security. Nineteen percent of that security budget, or approximately $2.5 million, will be allocated to investments in enabling security technologies such as AI, machine learning, orchestration, automation, blockchain and more.
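The budget figures above follow from simple percentages; a quick sanity check of the reported numbers (derived from the survey's stated averages, not additional data):

```python
# Sanity-check the reported budget breakdown.
total_it_budget = 94.3e6   # average total IT budget: $94.3 million
security_share = 0.14      # 14% of the IT budget goes to IT security
enabling_share = 0.19      # 19% of the security budget goes to enabling technologies

security_budget = total_it_budget * security_share
enabling_budget = security_budget * enabling_share

print(f"IT security budget:   ${security_budget / 1e6:.1f}M")  # ~$13.2M
print(f"Enabling technologies: ${enabling_budget / 1e6:.1f}M") # ~$2.5M
```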

Sample finding:

With the exception of the exploitation phase, zero-day attacks are very difficult to prevent at every stage of the cyber kill chain. The cyber kill chain is a way to understand the sequence of events involved in an external attack on an organization’s IT environment. Understanding the kill chain model is considered helpful in putting strategies and technologies in place to “kill” or contain the attack at various stages and better protect the IT ecosystem. Following are the seven steps in the cyber kill chain:

  1. Reconnaissance: the intruder picks a target, researches it and looks for vulnerabilities
  2. Weaponization: the intruder develops malware designed to exploit the vulnerability
  3. Delivery: the intruder transmits the malware via a phishing email or another medium
  4. Exploitation: the malware begins executing on the target system
  5. Installation: the malware installs a backdoor or other ingress accessible to the attacker
  6. Command and Control (C2): the intruder gains persistent access to the organization’s systems/network
  7. Actions on Objective: the intruder initiates end-goal actions, such as data theft, data corruption or data destruction
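Because the phases are strictly ordered, the model lends itself to a simple data structure. A minimal sketch (the phase names follow the list above; the helper functions are illustrative, not part of any standard):

```python
# The seven kill-chain phases, in attack order (names follow the list above).
KILL_CHAIN = [
    "reconnaissance",
    "weaponization",
    "delivery",
    "exploitation",
    "installation",
    "command_and_control",
    "actions_on_objective",
]

def phase_index(phase: str) -> int:
    """Return the 1-based position of a phase in the kill chain."""
    return KILL_CHAIN.index(phase) + 1

def earlier_phase(a: str, b: str) -> str:
    """Return whichever of two phases occurs earlier in the chain,
    i.e. where a defender would prefer to stop the attack."""
    return min(a, b, key=KILL_CHAIN.index)

print(phase_index("delivery"))                           # 3
print(earlier_phase("command_and_control", "delivery"))  # delivery
```

Tagging detections this way makes it easy to ask, for any incident, how late in the chain the attack was caught.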

Respondents were asked to rate the difficulty of preventing a zero-day attack in each phase of the cyber kill chain on a scale of 1 = not difficult to 10 = very difficult. Figure 16 presents the very difficult responses (7+ on the 10-point scale). Zero-day attacks are most difficult to prevent in the command and control phase (80 percent), in which the intruder gains persistent access to the organization’s systems/network, followed by the delivery phase of the kill chain (78 percent).


Read the full report by visiting Deep Instinct’s website





New Podcast: Erin and Noah on the run, why Americans carry tracking devices everywhere now

Alia “followed” me around through cyberspace during one day in Los Angeles.

Bob Sullivan

Erin and her son Noah think they’ve finally found a safe place to live, in a quiet Ohio town, invisible to Erin’s abusive ex-husband.  But that life is shattered by a disturbing voice mail after a single photo of Noah accidentally appears on a school website. The call sends mother and child on the run again, but not before a near-disaster at Noah’s school.

Sometimes, privacy is a matter of life and death. And while Erin and Noah’s story is fiction, the story of how privacy advocate Brian Hofer ended up in a police car, with a gun pointed at his brother’s head, is chillingly real.

This week we begin the second season of No Place to Hide. You’re going to hear something very new, and very different: a combination of fiction and non-fiction storytelling. Each episode begins with a scene from the story of Erin and Noah, a mom and her son on the run from his abusive father. The story is designed to make listeners feel the way victims feel when they are stalked through cyberspace. Then, Alia and I take on the big privacy topics of our day, concluding with a look at the world in 2030 if nothing is done to manage the coming tsunami of privacy-invasive technologies.

No Place to Hide is sponsored by Intel Corp.

I’m really proud of this concept, and this series. Privacy isn’t some esoteric idea, or a first-world problem. Privacy is about freedom, and free will, and personal safety, and creativity.

This week’s episode is about location data, a topic that’s in the news a lot right now. Authoritarian regimes around the world are trying to stem the tide of coronavirus by tracking citizens’ movements via their cell phones. Well, every country is trying to do that. In places like China, there is no pretense of worry about civil liberties. In the U.S., Apple and Google have announced a system that uses Bluetooth to alert people who’ve been near a patient who’s tested positive. Theoretically, that limits the information to a small group who really needs it. Still, plenty of firms and governments are bragging about use of cell phone location data as a public health tool — the state of New Mexico, for example, is using data to see how well residents are social distancing. The data is also being examined nationally.

Only a fool wouldn’t try all available tools to beat back coronavirus. But what are the long-term implications of these more aggressive steps by governments to track citizens via their cell phones? And how did we all end up with tracking devices in our pockets in the first place?

This week’s episode of No Place to Hide delves deep into the history of location data; I hope it will help inform public discussion as we move forward to the next step in this crisis, which is sure to include a lot of arm wrestling between the good technology can do and the potential harms.

Ep 4: On Location (summary)

Erin and Noah — Dad finds them in Ohio because an errant photo ended up on a school website. They have to drop everything and flee, just as Noah’s father shows up at school.

Bob and Alia: Cell phones track our every move, in perhaps the biggest attack on privacy of our time. On location in Los Angeles, Bob and Alia discuss the past ten years of location-specific data hoarding by large companies. Then we hear why Oakland Privacy Commission chair Brian Hofer ended up in a police squad car, and his brother had a gun pointed at his head ‘executioner-style,’ all over a database error.


Partial transcript

BOB: A single piece of location information doesn’t seem that distressing. But when you can put it all on a map, over time, and build a picture of someone’s life, that’s when you’ve really, really invaded their privacy.

ALIA: You know, it kind of reminds me of this person I knew a long time ago, Bob. And I remember one day we were having coffee, and he was telling me about how, uh, assassinations worked. And I thought that was really creepy, but do you know what the first rule was to figure out how to assassinate someone? The rule was you get to know their habits, and you get to know their days, and you watch them. Where they go, how they get there, when they get there, every single day. Because if you know their habits, then you know where the holes are when you might do the deed.

BOB: …That’s what Liam Youens did to Amy Boyer…

ALIA: Yeah… that’s really scary. So what you’re talking about, in like learning someone’s habits– their daily whereabouts– you can look for opportunities to do something terrible potentially. And he was talking about it in like the old school sense of, you know, like stakeouts. You’re watching this person. And what you’re talking about is, basically, you don’t have to do that anymore, because Google does it for you.

BOB: And not just Google of course, any cellphone does this for you.

ALIA: Right. Ugh. 

BOB: Mobile devices are tracking devices, and so who has access to that information? Maybe through that Terms & Conditions box you checked? Your mobile provider.  Your apps. Hundreds of companies in between that are collecting these incredibly detailed profiles of your movements. You know, I recently wrote about a selfie app that teenagers love — it has 300 million downloads. And sends all their location information to the developer…in China.

ALIA: And there’s that NYTimes exposé on location data, that we’ve both been obsessed with. Someone gave the reporters at the Times a copy of a location database with a year’s worth of data.  Using that, they were able to track specific people, like a secret service agent, someone protecting the president, from their home to the White House to their church. And they had this location data for over 12 million people.

BOB: Just imagine what our fictional angry ex-husband from the opening could do with data like that.

ALIA: That’s so scary. 

BOB: When we talk about issues like privacy and data security, I get emotional and philosophical about civil liberties. And maybe you don’t care if Google knows what websites you visit or Amazon knows what kind of dog food you buy. But location data is next level. And as our little experiment showed, as the NYT story showed, something incredible happened in the past decade. The advent of smartphones means that most Americans, and about half the people on Earth, now carry small, incredibly accurate tracking devices with them at all times. And… I don’t remember anyone having a great, open, honest debate about the wisdom of that.

ALIA: Me neither. But I think we should.


BRIAN HOFER: Yeah, I, you know, I, I can’t get half of my friends to use like Signal or other encrypted software, or to, you know, have two factor authorization, cause you know, we’ll trade anything for convenience and speed. 

ALIA: That’s Brian Hofer. He’s a community activist in Oakland, California. We’ll hear a lot more about his activism later, but for now, he paints an amazing picture about the importance of location information.

BRIAN HOFER: It only takes four geospatial data points. So that’s time and location. Four different geospatial data points to identify over 95% of humans. Why? Cause I drive to work the same way, I drive to the gym the same way, I go to the same grocery store. We’re creatures of habit. So you know, whether it’s your scooter, uh, whether it’s even public transit that now mostly use like electronic payments, uh, obviously license plate readers, and obviously cell phones, you only need a couple of, you know, four or five data points and you, and your, you can map somebody out, you can figure out who it is.

The question usually is, Well, I have nothing to hide, so I have nothing to fear. And that, and that’s totally wrong. And I like how Edward Snowden, uh, flipped that on its head and said, you have something to protect. What if we just did have an abortion and there’s cameras right outside of that clinic and a license plate scanner, uh, and you’re tracking my phone calls, you know, to the clinic and my location? Or what if I keep driving and parking in front of the same cancer doctor’s office? Maybe I didn’t want to tell you I had cancer. Um, what if I am exploring my sexuality and there is facial recognition on the front of, uh, bars, you know, a same-sex bar that I wanted to walk into, but now I’m scared because there’s facial recognition. So all these little data points by themselves, probably not a civil liberty threat, probably not, uh, invading my privacy, but together because of the nature of all the commingled data and databases together, what we now call it, and you’ve seen it in, uh, Sotomayor’s, uh, uh, some of her opinions, we call it the Mosaic Theory, that there’s all these little tiles, these little pieces…

BOB: So, Mosaic theory. This is really important. It’s super creepy that in an instant, you could see everywhere I went all morning. But it should be even creepier to think that with just a few details, I could pretty much size up your whole life. I mean, imagine you are Erin and Noah, trying to get away from an angry ex-husband. In just a moment, with data like this, he would know exactly when to show up at school to snatch a child. You see, most people’s lives aren’t really that complex. We only go to a few places 95% of the time.

We talked to Marc Groman about this — he was the first-ever chief privacy officer at the FTC and senior advisor for privacy in the Obama White House.

MARC GROMAN – If you have my precise location over say a couple of weeks, you essentially can draw highly sensitive inferences about my entire life. You will understand my religious beliefs, my political beliefs, my health issues potentially. And by the way, it’s so precise now we know not just that you’re in the hospital, but if you’re in a 12 story building, we know what floor in the hospital

ALIA: Wow. 

ALIA:  Susan Grant of the Consumer Federation of America. We talked to her for a while about location data and I gotta say, when she talked about the creation of ‘megaprofiles’ I got chills.

SUSAN GRANT: Location is just one of the many, um, very revealing things about you that can be compiled into a mega profile about you. So it’s not just where you are at any given moment, but it’s where you go most frequently. Um, which can tell a lot about you. Um, you know, uh, uh, where you go to church reveals what your religion is, for instance. Um, these are things that people have a right to keep private if they want. Um, and uh, yet this information is being collected when it’s not needed.

ALIA: Ok, this all sounds pretty awful. Tracking gadgets in our pockets and purses. Really precise data being sent to companies we work with, all around the world, available to the government…but I have to ask a question I know you love, Bob. So, Bob …who’s making money off all this location information?

BOB:  That’s always the important question to ask. And, we have to credit Buzzfeed for a great story explaining how valuable location information is.

BOB: “Location-sharing agreements between app developers and app brokers – where apps can send your GPS coordinates up to 14,000 times per day – can bring in a lot of revenue. With just 1,000 users, app developers can get $4/month. If they have 1 million active users, they can get $4,000/month. And that’s from just one broker. If they work with two app brokers with similar payouts, and have at least 10 million active monthly users, they could stand to make $80,000/month.”

BOB: Quote: “With more dangerous permissions given by the user, they will get more sensitive data, which means they’ll make more money.” End Quote. 
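The quoted payouts imply a flat rate of $4 per month per 1,000 monthly active users, per broker. A quick check under that assumption (the linear-scaling function here is an illustration, not anything from the Buzzfeed story itself):

```python
# Revenue implied by the quoted figures, assuming a linear rate of
# $4 per month per 1,000 monthly active users, per broker.
RATE_PER_1000_USERS = 4.0  # dollars per month

def monthly_revenue(active_users: int, brokers: int = 1) -> float:
    """Estimated monthly payout to an app developer for location data."""
    return (active_users / 1000) * RATE_PER_1000_USERS * brokers

print(monthly_revenue(1_000))                   # $4/month
print(monthly_revenue(1_000_000))               # $4,000/month
print(monthly_revenue(10_000_000, brokers=2))   # $80,000/month
```

All three figures from the quote fall out of the same per-user rate.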

ALIA: So…that selfie app we talked about. It had 300 million downloads! OMG, how much money they must be making.

BOB: Exactly. But to me, it’s important to remember that big fish eat little fish metaphor from the first half of the series.

ALIA: Bob, I was waiting for a metaphor.

BOB: So a consumer group in Norway recently investigated dating apps like  Grindr, Tinder, OkCupid, and so on, and found they were selling sensitive data like location data into this ecosystem…but one of the buyers was a firm named MoPub. Which is owned by Twitter.

ALIA: Ahh Twitter. Because someone has to be writing those big checks, driving this whole ecosystem. And again, when did we decide as a society that we were ok with this? We didn’t. It just kind of…happened

BRIAN HOFER: And what is so scary, you know, back when we all read, like, 1984, we thought the government was just going to force everything on us–

ALIA: Here’s Brian Hofer again–

BRIAN HOFER: and what the American business genius was is nah, just ask people to do it voluntarily, you know, we’ll just offer them convenience and they’ll do all these things on their own and most people don’t look beneath the hood and don’t really look to see what the ramifications are.