The 2021 Global Encryption Trends Study

Ponemon Institute is pleased to present the findings of the 2021 Global Encryption Trends Study, sponsored by Entrust. We surveyed 6,610 individuals across multiple industry sectors in 17 countries – Arabian Cluster (which is a combination of respondents located in Saudi Arabia and the United Arab Emirates), Australia, Brazil, France, Germany, Hong Kong, Japan, Mexico, Netherlands, the Russian Federation, Spain, Southeast Asia, South Korea, Sweden, Taiwan, the United Kingdom, and the United States.

The purpose of this research is to examine how the use of encryption has evolved over the past 16 years and the impact of this technology on the security posture of organizations. The first encryption trends study was conducted in 2005 with a US sample of respondents. Since then, we have expanded the scope of the research to include respondents in all regions of the world.

Since 2015 the deployment of encryption has steadily increased. This year, 50 percent of respondents say their organizations have an overall encryption plan that is applied consistently across the entire enterprise and 37 percent of respondents say they have a limited encryption plan or strategy that is applied to certain applications and data types, a slight decrease from last year. Following are the findings from this year’s research:

Strategy and adoption of encryption

Enterprise-wide encryption strategies increase. Since this study began 16 years ago, there has been a steady increase in organizations with an encryption strategy applied consistently across the entire enterprise. In turn, there has been a steady decline in organizations without an encryption plan or strategy. The results have essentially reversed over the years of the study.

Certain countries have more mature encryption strategies. The prevalence of an enterprise encryption strategy varies among the countries represented in this research. The highest prevalence of an enterprise encryption strategy is reported in Germany, the United States, Japan and the Netherlands. Respondents in the Russian Federation and Brazil report the lowest adoption of an enterprise encryption strategy. The global average of adoption is 50 percent.

The IT operations function has been the most influential in framing the organization’s encryption strategy over the past 14 years. However, in the United States the lines of business are more influential (30 percent of respondents). IT operations is most influential in Sweden, Korea and France.

Trends in adoption of encryption

The use of encryption increases in all industries. Results suggest a steady increase in all industry sectors, with the exception of communications and service organizations. The most significant increases in extensive encryption usage occur in manufacturing, hospitality and consumer products.

 The extensive use of encryption technologies increases. Since we began tracking the enterprise-wide use of encryption in 2005, there has been a steady increase in the encryption solutions extensively used by organizations.

Threats, main drivers and priorities

Employee mistakes continue to be the most significant threat to the exposure of sensitive or confidential data.

In contrast, the least significant threats to the exposure of sensitive or confidential data include government eavesdropping and lawful data requests. Concerns over inadvertent exposure (employee mistakes and system malfunction) significantly outweigh concerns over actual attacks by temporary or contract workers and malicious insiders.

The main driver for encryption is the protection of customers’ personal information. Organizations use encryption to protect customers’ personal information (54 percent of respondents), to protect information against specific, identified threats (50 percent of respondents), and to protect enterprise intellectual property (49 percent of respondents).

A key barrier to a successful encryption strategy is discovering where sensitive data resides in the organization. Sixty-five percent of respondents say discovering where sensitive data resides in the organization is their number one challenge. Forty-three percent of all respondents cite initially deploying encryption technology as a significant challenge, and 34 percent cite classifying which data to encrypt as difficult.

Deployment choices

No single encryption technology dominates in organizations. Organizations have very diverse needs. Internet communications, databases and internal networks are the most likely to be deployed and correspond to mature use cases. For the fourth year, the study tracked the deployment of encryption for IoT devices and platforms. Sixty-one percent of respondents say encryption of IoT devices and 61 percent say encryption of IoT platforms have been at least partially deployed.

Encryption features considered most important

Certain encryption features are considered more critical than others. According to the consolidated findings, system performance and latency, management of keys and enforcement of policy are the three most important encryption features.

Which data types are most often encrypted? Payment-related data and financial records are most likely to be encrypted as a result of high-profile data breaches in financial services. The data types least likely to be encrypted are health-related and non-financial information, which is a surprising result given the sensitivity of health information.

Attitudes about key management

How painful is key management? Fifty-six percent of respondents rate key management as very painful, which suggests respondents view managing keys as a very challenging activity. The highest percentage pain threshold of 69 percent occurs in Spain. At 37 percent, the lowest pain level occurs in France. No clear ownership and lack of skilled personnel are the primary reasons why key management is painful.

Importance of hardware security modules (HSMs)

Organizations in the United States, Germany and Japan are more likely to deploy HSMs than those in other countries. The overall average deployment rate for HSMs is 49 percent.

How HSMs in conjunction with public cloud-based applications are primarily deployed today and in the next 12 months. Forty-one percent of respondents say their organizations own and operate HSMs on-premise, accessed real-time by cloud-hosted applications and 39 percent of respondents rent/use HSMs from a public cloud provider for the same purpose. The use of HSMs with Cloud Access Security Brokers and the ownership and operation of HSMs on premise are expected to increase significantly.

The overall average importance rating for HSMs, as part of an encryption and key management strategy in the current year, is 66 percent. The pattern of responses suggests the United States, Arabia (Middle East) and the Netherlands are most likely to assign importance to HSMs as part of their organization’s encryption or key management activities.

What best describes an organization’s use of HSMs? Sixty-one percent of respondents say their organization has a centralized team that provides cryptography as a service (including HSMs) to multiple applications/teams within their organization (i.e. private cloud model). Thirty-nine percent say each individual application owner/team is responsible for their own cryptographic services (including HSMs), indicative of the more traditional siloed application-specific data center deployment approach.

What are the primary purposes or uses for HSMs? The three top uses are application-level encryption and SSL/TLS, followed by container encryption/signing services. A significant increase in the use of HSMs for database encryption is expected 12 months from now.

Cloud encryption

 Sixty percent of respondents say their organizations transfer sensitive or confidential data to the cloud whether or not it is encrypted or made unreadable via some other mechanism such as tokenization or data masking. Another 24 percent of respondents expect to do so in the next one to two years. These findings indicate the benefits of cloud computing outweigh the risks associated with transferring sensitive or confidential data to the cloud.
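The tokenization mentioned above can be sketched in a few lines. This is an illustrative toy, not a description of any product in the study: the class name and the in-memory "vault" are our own, and real deployments use a hardened vault service rather than a dictionary.

```python
import secrets

# Minimal sketch of tokenization: sensitive values are replaced with random
# tokens before data leaves the organization, and the token-to-value mapping
# (the "vault") stays on-premise. The cloud only ever stores the tokens.
class TokenVault:
    def __init__(self):
        self._vault = {}               # token -> original value (kept on-premise)

    def tokenize(self, value: str) -> str:
        token = secrets.token_hex(8)   # random token; carries no information
        self._vault[token] = value
        return token                   # safe to send to the cloud

    def detokenize(self, token: str) -> str:
        return self._vault[token]      # only possible with access to the vault

vault = TokenVault()
t = vault.tokenize("4111-1111-1111-1111")
assert t != "4111-1111-1111-1111"      # the cloud sees only the token
assert vault.detokenize(t) == "4111-1111-1111-1111"
```

Unlike encryption, a token has no mathematical relationship to the original value, which is why the survey groups it with data masking as an alternative way to make data unreadable.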

How do organizations protect data at rest in the cloud? Thirty-eight percent of respondents say encryption is performed on-premise prior to sending data to the cloud using keys their organization generates and manages. However, 36 percent of respondents perform encryption in the cloud, with cloud provider generated/managed keys. Twenty-one percent of respondents are using some form of Bring Your Own Key (BYOK) approach.

What are the top three encryption features specifically for the cloud? The top three features are support for the KMIP standard for key management (59 percent of respondents), SIEM integration, visualization and analysis of logs (59 percent of respondents) and granular access controls (50 percent of respondents).

 Read the full Global Encryption Trends story at Entrust’s website.


Facebook earns billions from scam ads, lawsuit alleges

Bob Sullivan

Facebook profits from advertisements it knows, or should know, are fraudulent, a federal lawsuit filed in  California alleges. The social media giant makes it easy for criminals to target consumers who are not only likely to click on certain kinds of ads, but also likely to follow through with purchases, the case claims.  The firm is “actively soliciting, encouraging, and assisting scammers,” the suit claims.

Alleged frauds include ads for products that never ship, or are substantially different from what is advertised. Fraud rates for some types of ads are as high as 30%, the suit claims.

Not only does Facebook look the other way when such ads are placed, but it has actively recruited suspicious sellers through conferences and other means, the case claims. Lawyers for the plaintiffs seek class-action status for the case, and claim there are potentially millions of victims and Facebook has earned billions of dollars.

Facebook did not immediately respond to a request for comment about the lawsuit (I’ll update the story if needed).

Tech companies have faced allegations they profit off fraud enabled by their platforms for a long time. Journalists have been writing about fake Google Maps businesses for at least seven years.  Instagram fraud had its day in the sun back in 2018. The firms make money off disinformation, too. Recently, I searched for “Can I get the vaccine from my doctor” on Google and was presented with a long list of anti-vaxx links and products for sale.

There have long been questions about how hard these services work to correct these problems. “More than a third (34%) of people that reported a scam ad to Google said it was not taken down while just over a quarter (26%) said the same had happened with Facebook, according to a study published by British consumer group Which?” BusinessInsider has reported.

The recent Facebook case, filed in August, alleges negligence, breach of contract, and breach of covenant of good faith and fair dealing. It builds on the work of several journalists who have written about Facebook ad fraud in recent years — most notably Zeke Faux’s story in 2018, which includes details from a Facebook ad conference that Bloomberg attended; and a Buzzfeed story from last year, titled Facebook Gets Rich Off Of Ads That Rip Off Its Users. 

The California lawsuit claims that “Facebook’s sales teams have also been aggressively soliciting ad sales in China and providing extensive training services and materials to China-based advertisers, despite an internal study showing that nearly thirty percent (30%) of the ads placed by China-based advertisers — estimated to account for $2.6 billion in 2020 ad sales alone — violated at least one of Facebook’s own ad policies.”

It also cites increased social media advertising fraud complaints, driven most recently by stay-at-home orders during the pandemic. “In October 2020, the Federal Trade Commission (“FTC”) reported that about 94% of the complaints it collected concerning online shopping fraud on social media identified Facebook (or its Instagram site) as the source,” the case notes.

Facebook denied to Buzzfeed that it profits off fraud. It told the news site: “Bad ads cost Facebook money and create experiences people don’t want. Some of the things raised in this piece are either misconstrued or missing important context. We have every incentive — financial and otherwise — to prevent abuse and make the ads experience on Facebook a positive one. To suggest otherwise fundamentally misunderstands our business model and mission.”

But it’s hard to deny the incentives large tech companies have to look the other way when companies are paying them millions of dollars to get finely-tuned ads in front of users.

In the lawsuit, plaintiff Christopher Calise says he spent about $50 to buy a car engine assembly kit and never received it. He reported the ad as fraud to Facebook, and the social media company took it down, but the alleged scam firm was able to re-place the ad using a slightly different name soon after.   Plaintiff Anastasia Groschen says she responded to an ad for a child’s activity board. When a simple puzzle arrived instead, she complained to the company, only to be instructed that she’d have to pay to ship the puzzle back to China.

The lawsuit seeks monetary damages for all impacted members of the class, and wants the court to force Facebook to make immediate changes to the way it patrols ads.

Phishing costs have tripled since 2015

Ponemon Institute is pleased to present the results of The 2021 Cost of Phishing Study sponsored by Proofpoint. Initially conducted in 2015, the purpose of this research is to understand the risk and financial consequences of phishing. For the first time in this year’s study we look at the threats and costs created by business email compromise (BEC), identity credentialing and ransomware in the workplace.

The key takeaway from this research is that the costs have increased significantly since 2015. Moreover, with the difficulty many organizations have in securing a growing remote workforce due to COVID-19, successful phishing attacks are expected to increase.

We surveyed 591 IT and IT security practitioners in organizations in the United States. Forty-four percent of respondents are from organizations with 1,000 or more employees who have access to corporate email systems.

The following findings reveal that phishing attacks are having a significant impact on organizations not only because of the financial consequences but also because these attacks increase the likelihood of a data breach, decrease employee productivity and increase the likelihood of a business disruption.

The cost of phishing more than tripled since 2015. The average annual cost of phishing has increased from $3.8 million in 2015 to $14.8 million in 2021. The most time-consuming tasks to resolve attacks are the cleaning and fixing of infected systems and conducting forensic investigations. Documentation and planning represent the least time-consuming tasks.

Loss of employee productivity represents a significant component of the cost of phishing. Employee productivity losses are among the costliest to organizations and have increased significantly from an average of $1.8 million in 2015 to $3.2 million in 2021. Employees are spending more time dealing with the consequences of phishing scams. Based on the hours employees/users spend each year viewing and possibly responding to phishing emails, we estimate productivity losses averaging 7 hours annually per employee, an increase from 4 hours in 2015.

The cost of resolving malware infections has more than doubled, driving up the total cost of phishing. The average total cost to resolve malware attacks is $807,506 in 2021, an increase from $338,098 in 2015. Costs due to the inability to contain malware have more than doubled, from an average of $3.1 million to $5.3 million.

Credential compromises increased dramatically. As a result, organizations are spending more to respond to these attacks. The average cost to contain phishing-based credential compromises increased from $381,920 in 2015 to $692,531 in 2021. Organizations are experiencing an average of 5.3 compromises over the past 12-month period.

Credential compromises not contained have more than doubled in cost. The average total cost of credential compromises not contained is $2.1 million, a significant increase from $1 million in 2015.

BEC is a security exploit in which the attacker targets employees who have access to an organization’s funds or data. The average total cost of BEC exploits was $5.96 million (see Table 1a). Based on the findings, the extrapolated average maximum loss resulting from a BEC attack is $8.12 million. The average total amount paid to BEC attackers was $1.17 million.

What is the cost of business disruption due to ransomware? Ransomware is a sophisticated piece of malware that blocks the victim’s access to his/her files. The average total cost of ransomware last year was $5.66 million, and the average percentage rate of ransomware attacks from phishing was 17.6 percent.

Employee training and awareness programs on the prevention of phishing attacks can reduce costs. Phishing attacks are costing organizations millions of dollars. According to the research, the average annual cost of phishing scams is $14.8 million, an increase from $3.8 million in 2015.

Respondents were asked to estimate what percentage of phishing costs could be reduced through training and awareness programs that specifically address the risks of phishing attacks targeting the workforce. The cost can be reduced by an average of more than half (53 percent) if training is conducted.
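A quick back-of-the-envelope check of the figures above, using the study's reported $14.8 million average annual cost and 53 percent reduction:

```python
# Potential savings from anti-phishing training, per the study's figures.
avg_annual_cost = 14_800_000   # average annual cost of phishing (2021)
reduction_rate = 0.53          # share of cost reducible through training

potential_savings = avg_annual_cost * reduction_rate
remaining_cost = avg_annual_cost - potential_savings

print(f"Potential savings: ${potential_savings:,.0f}")   # ~$7.8 million
print(f"Remaining cost:    ${remaining_cost:,.0f}")      # ~$7.0 million
```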

Part 2. Key findings

Loss of employee productivity represents a significant component of the cost of phishing.
The average annual cost of phishing has increased from $3.8 million in FY2015 to $14.83 million in 2021. As shown, productivity losses have increased significantly from $1.8 million in 2015 to $3.2 million in FY2021. Please note that information about BEC and ransomware was not available in FY2015. In the current study, we estimate an annual cost of phishing for BEC at $5.97 million and ransomware at $996 thousand.

Employees are spending more time dealing with the consequences of phishing scams. The range of hours is less than 1 to more than 25 hours per employee each year. We estimate the productivity losses based on hours spent each year by employees/users viewing and possibly responding to phishing emails. As shown, each employee wastes an average of 7 hours annually due to phishing scams, an increase from 4 hours in 2015.

As discussed, the costliest consequence of a successful phishing attack is employees’ diminished productivity. Here we assume an average-sized organization with a headcount of 9,567 individuals with user access to corporate email systems.  Based on an average of 7 hours per employee we calculate 65,343 hours wasted because of phishing.  Assuming an average labor rate of $49.5 for non-IT employees (users) we calculate a total productivity loss of $3.2 million annually, an increase from $1.8 million in 2015.
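The productivity-loss arithmetic above can be reproduced directly from the study's figures. Note that 9,567 employees at 7 hours each would give 66,969 hours; the reported 65,343 hours implies an average closer to 6.8 hours, so the sketch below uses the reported hour total.

```python
# Productivity-loss calculation, using the study's reported inputs.
employees = 9_567         # headcount with access to corporate email
hours_wasted = 65_343     # total hours lost to phishing per year, per the study
labor_rate = 49.50        # average hourly labor rate for non-IT users ($)

productivity_loss = hours_wasted * labor_rate
print(f"Annual productivity loss: ${productivity_loss:,.0f}")  # ~$3.2 million
print(f"Implied hours per employee: {hours_wasted / employees:.1f}")
```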

An average of 15 percent of an organization’s malware infections are caused by phishing scams. Respondents were asked to estimate the percentage of malware infections caused by phishing scams. The estimated range is less than 1 percent to more than 50 percent. The extrapolated average rate is 15 percent. As discussed above, the cost to contain malware is estimated to be $353,582 (see Table 1).

The likelihood of a malware attack causing a material data breach due to data exfiltration has increased since 2015. In the context of this research, a material data breach involves the loss or theft of more than 1,000 records. Respondents were asked to estimate the likelihood of this occurring. The probability distribution ranged from less than .1 percent to more than 5 percent. The extrapolated average likelihood of occurrence is 2.3 percent over a 12-month period, an increase from 1.9 percent.

The total cost attributable to malware attacks caused by phishing scams more than doubles. The total cost to resolve malware attacks is $807,506 in 2021, an increase from $338,098 in 2015.

Phishing costs due to the inability to contain malware have more than doubled and represent 11 percent of the total cost of phishing. Malware not contained is malware at the device level that has evaded traditional defenses such as firewalls, anti-malware software and intrusion prevention systems. Two consequences of an active malware attack that are difficult to contain are (1) data exfiltration (a.k.a. a material data breach) and (2) business disruptions. The total cost of malware not contained has increased from $3.1 million to $5.3 million.

A malware attack resulting in a data breach due to data exfiltration could cost an organization an average of $137.2 million. This figure is derived from respondents’ estimates of the probable maximum loss (PML) and the likelihood of such an attack.

What is the cost of business disruption due to a malware attack? Respondents were asked to estimate the PML resulting from business disruptions caused by a malware attack. Business disruptions include denial of services, damage to IT infrastructure and revenue losses. The distribution of maximum losses ranges from less than $10 million to $500 million. The extrapolated average PML resulting from business disruptions is $117.3 million, an increase from $66.3 million.

How likely is it that business disruptions caused by a malware attack will affect your organization? Respondents were asked to estimate the likelihood of material business disruptions caused by malware. The probability distribution ranges from less than .1 percent to more than 5 percent. The extrapolated average likelihood of occurrence is 2.1 percent over a 12-month period, an increase from 1.6 percent in 2015.
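One way to read the PML figures above, and this is our assumption rather than a formula the report spells out, is as an annualized expected loss of roughly PML times the 12-month likelihood:

```python
# Hedged reading of the study's PML figures: expected loss ~= PML x likelihood.
# The multiplication is our interpretation, not a formula stated in the report.
pml_exfiltration = 137_200_000   # $137.2M probable maximum loss, data exfiltration
p_exfiltration = 0.023           # 2.3% likelihood over 12 months

pml_disruption = 117_300_000     # $117.3M PML, business disruption
p_disruption = 0.021             # 2.1% likelihood over 12 months

expected_exfiltration = pml_exfiltration * p_exfiltration   # ~$3.2 million
expected_disruption = pml_disruption * p_disruption         # ~$2.5 million

print(f"Expected exfiltration loss: ${expected_exfiltration:,.0f}")
print(f"Expected disruption loss:   ${expected_disruption:,.0f}")
```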

Visit Proofpoint’s website to download the entire 2021 Cost of Phishing Report

Hear how an FBI agent conned a con artist; got him to fly to the US for prosecution

Bob Sullivan

How do you catch Internet con artists? Well, you con them.

Alan, who lives near Washington D.C., had traveled to Dubai and to Ghana thinking he was helping a princess gain access to her multi-million dollar inheritance.  Before his fever was broken, Alan — not his real name — sent about $600,000 to a man named Eric and a woman who called herself “Precious.”  By the time the FBI got involved, it was too late for Alan’s money.  But Alan did have a photograph of the two criminals, taken in Dubai.  When a rookie FBI agent named Mike saw that, he decided there might be just enough evidence to pursue the criminals through cyberspace.

He had one big problem, however. Agent Mike — we’re protecting his identity — couldn’t fly to Dubai or Ghana and arrest them. He had to get them to fly, willingly, to the U.S.

You don’t often get to hear an FBI agent talk about chasing after online criminals. And rarely do stories involving $600,000 sent overseas to criminals have a happy ending. But in this recent episode of The Perfect Scam, I pull back the curtain on a remarkable piece of crime-fighting and a relentless pursuit by one very determined agent.

Listen to this episode by clicking here, or by clicking the play button below.  Below that, a partial transcript appears. It’s a two-part episode. You can hear part 2 at this link, or hit the second play button below.




[00:06:25] Mike: Yeah, it’s almost heartbreaking, because when you read the transcripts, you can see how the victim actually thinks it’s real. I mean he’s actually saying things like, “Well when can I meet you,” or “Can you send me more pictures?” The scammer usually almost always just returns to, “Oh I love you so much,” et cetera, et cetera, you know, “How are you?” that kind of thing. And it’s just so one-sided in terms of the victim is like actually trying to have a relationship, but the scammers are just uh clearly have an agenda on their mind. And then uh, it will, you know, usually transition then to, “Oh, hey something terrible just happened. My mother just got into a car accident,” “We’re overseas for the moment,” or “My, my dad’s uh late on his rent,” or “I’m late on my rent,” or something like that. “Can you just send me, MoneyGram me uh, 500 bucks, ” or something like that.

[00:07:13] Bob: But those smaller asks are just the beginning of the crime. Soon, Precious starts to tell a much bigger story to Alan.

[00:07:21] Mike: One thing that happened, which I’ve come to realize this might be a common thing for scammers from Ghana, is that the uh the women, in this case, Precious, eventually will let the victim know that, “I am actually an African princess, I’ve actually inherited millions of dollars’ worth of gold, it’s back overseas in Ghana, and here’s my lawyer,” you know. In, in this case Precious had a lawyer named Eric, and other, other scammers will introduce other like a, they’ll, they’ll almost always introduce a second player, um, like a second figure. And then the lawyer will come in, in this case, Eric, with a very formal sounding, uh you know, email signature block and very formal sounding, uh, language and write, you know, big, long paragraphs, with very lawyerly sounding text to say, “I understand, Alan, that you’re here to help Precious. Uh, that’s a great thing that you’re doing. And in order to have her, you know, receive her inheritance of millions of dollars which will help her and her family, you need to start paying,” you know, this and that for legal documents, for shipping fees, et cetera, et cetera.

[00:08:40] Bob: So far, this looks like a crime that FBI agents unfortunately see pretty often, but as Mike keeps reading, he confirms one of his chief suspicions. That trip to Dubai and Ghana, that means the crime went a whole lot farther.

[00:08:55] Mike: Eventually, I think the reason to get Alan on a plane was so that he could meet the supposed lawyer, Eric, in Dubai so that they could sign some legal documents towards the uh, the release of the gold.

[00:09:08] Bob: And what did he actually sign when he got to Dubai?

[00:09:11] Mike: It was just, you know, something that you could drum up on Microsoft Word in 10 minutes.

[00:09:16] Bob: So this, this was still all just a, a movie scene that they were playing out for him. Um, well did you see the part of the discussion where he said, yes, I’ll, I’ll fly, I’ll get on an airplane? I mean, that must be amazing to see in black and white.

[00:09:30] Mike: Yep, we saw that. I think it must have been several months into their, the scam where he actually got onto the plane, if I recall correctly. But uh yep, they met in Dubai. That was actually one of the reasons why I decided that we could probably take on this case, because he had actually gone overseas, and he actually met these people in person, at least Alan could pick them out from a lineup, for example.

[00:09:55] Bob: He could pick Precious and Eric out of a lineup, if ever there were a way to get them into a lineup. But maybe even more important, there’s pictures.

[00:10:06] Mike: Yeah, so they meet in Dubai. It’s Eric and Precious. It’s uh, an African man and a Caucasian woman claiming to be Eric and Precious, and they have Alan pay for the hotel, they have Alan pay for the meals, everything. In fact, I think there’s this picture of Alan with uh, Eric and Precious in like, it looks like a Chili’s or something in the Dubai airport. It was one of the first times that we actually saw the scammers for real when, uh Alan shared that picture with us.

[00:10:34] Bob: Okay, yeah. Now before we go on, you have a picture of them at, of the three of them at a, at a Chili’s in Dubai?

[00:10:40] Mike: I, I don’t know what restaurant it is, but it looked like, you know, uh there’s, there’s a few more pictures of them at the uh, the Dubai airport, so you know, it’s just pretty good uh proof that corroborates the story.

[00:10:53] Bob: That’s, I’m almost, I’m kind of amazed that they were brazen enough to pose for a picture like that.

[00:10:58] Mike: You know, it’s um, sometimes I think about that, too. So I think from the perspective of a scammer, it’s really a risk/reward calculation they have to make because uh, when you’re trying to scam these folks, if you, you know, obviously, you know, it’s a romance scam, so your victim wants to meet you because you guys are supposed to be in love. So if you never meet with the victim, obviously they will start to get suspicious after a while. And there’s only so much that you can keep the victim on the hook for, there’s only so, so much money you can squeeze out of them. However, if you take the risk and you actually meet with the victim, and you have the uh, I guess the props to show that this is actually a true story, then you’ve got the victim hooked for even more, right. Now he knows it’s real.


[00:11:43] Bob: Mike says the three of them looked pretty jolly in the photos, like they’re on vacation together.

[00:11:48] Mike: Eric and Alan, just kind of posing, big smiles somewhere, it must have been somewhere in Dubai, if I recall correctly. And then, when we saw pictures of Precious, she was indeed a Caucasian young woman. She must, she must have been in her mid–, she looked like she was in her mid-20s. Eric looked like he was a bit older, probably in his 40s, but you know, the pictures that Precious had been sending to Alan via Skype were, you know, pictures of just gorgeous women that you find on the internet, right, and it was pretty clear that the Precious in real life was not the same.

[00:12:26] Bob: Eventually, the group gets down to business. But they don’t stay in Dubai very long.

[00:12:31] Mike: After signing these documents for the uh, supposed gold, Eric kind of suddenly proposes to Alan, “Hey why don’t I take you to Ghana so that you could actually see the gold for yourself, and so that you can actually see all of Precious’s inheritance, so that you know it’s real.” And then um, the real reason that Eric’s doing this is because he wants to get Alan on some more scams that he has waiting for him back in Ghana. From there, Precious actually goes back to her home country; we found out later that was Ukraine. Eric, I think he just takes Alan’s credit cards, and he just buys tickets for them to go from Dubai to Ghana.

[00:13:08] Bob: And when they get to Ghana, Eric puts on quite a show for Alan.

[00:13:12] Mike: Pretty shortly after they landed, Eric takes Alan to what sounds like some sort of compound or some sort of building that he has, and inside is what Alan described as some sort of safety deposit box. Unfortunately, there was no pictures uh really from, that describe this, so I don’t, we don’t really have a good visual on it, but it looked pretty official. Uh, Alan, you know, Alan described that there was like a bank guard there, and there were some other folks there, and so, you know, Eric does the whole “bring forth the gold” kind of thing. Alan describes um, the guards bringing over I guess a chest of, you know, gold bars, and uh Alan picked one up, and, and he said it sure felt like they were pretty heavy, so it must be gold.

[00:14:05] Bob: Wow, and but to Alan’s estimation, it was maybe millions of dollars’ worth of gold?

[00:14:10] Mike: Yeah, that’s what Eric was claiming the whole time. That was part of the story, so…

[00:14:14] Bob: Of course, Eric has another reason to bring Alan to Ghana. He wants to introduce Alan to another criminal with another elaborate story.

[00:14:23] Mike: Eric kind of uses this opportunity to, to introduce a, another scam. It’s another scam that we’ve heard of before, it’s sometimes you call it like, uh I’ve heard it referred to as like a kind of a washing the money scam, or the black money scam. There’s different variations of it, but really what it amounts to is a magic trick that is really impressive in the moment and really uh hooks your victim. And what Eric does is he says, okay, great, now you’ve seen Precious’s gold. I’d like to introduce you to another person. This person here is Daniel. Daniel’s about 18 years old. You know, he’s also some sort of African nobility. And Daniel’s there, and he’s smiling and he’s, you know, playing the part of a, a poor 18-year-old kid, and Eric’s just trying to help him out too, just the way like he’s trying to help out Precious. And Daniel has inherited a large quantity of, of sheets, uh, you know, just like you’ve seen those sheets that are uncut at the Treasury Department. But these sheets are worthless unless you start cutting them, and once you cut them all, then they’ll be worth millions, but uh, the way to cut them is, you can’t just use scissors. You need a chemical that only Frank, the other character he introduces, has. Frank was kind enough to bring it, so let me show you how it works. Puts the chemicals in a bowl, pours water over it, mixes it all up, and then he dips one of these sheets into the bowl, and you know, before Alan’s eyes, the sheets separate into the separate individual $100 bills. So, just like that, we made 400 bucks. And so Eric says, you know, it’s as simple as that, so if we want to start getting Daniel’s uh money, then we need to start paying money for the rest of the chemicals from Frank.


[00:04:24] Mike: Plan C was, you know with Alan’s permission, and also his wife, too, we, we kept the wife in the loop the entire time. I didn’t want her to feel like she was being excluded, but I asked them if they’d be willing to, you know, take some pictures of Alan in the hospital undergoing, well, post, uh, his medical procedures, and to go back to Eric and Precious and say, “Hey, you know, my health is really declining. I really want to help you, Precious, so uh, here’s proof that I can’t go overseas and see you. Why don’t you guys come over here, and we can do things like, I’ll put you in my will,” and so Precious will have, you know, $10,000 a month in perpetuity or something like that, or, or uh, I got a, another, another ruse that we started coming up with was, we had Alan say, “I’ve got a really rich businessman, and he’s really looking to invest in uh, in Africa, you know, and Africa’s the next place to invest in, especially with uh, raw minerals, and you, you seem to know a lot about gold, so yeah, this, yeah this rich businessman wants to meet you. He wants to talk about gold.” So we kind of started coming up with stories to uh, you know, scam Eric and Precious and Daniel with.

[00:05:40] Bob: It strikes me that it’s a good thing you work for the FBI. Otherwise you’d have another career that might not be as wholesome.

[00:05:46] Mike: Oh, (laugh). Well, sometimes you’ve got to think like scammers to catch them, so.

[00:05:52] Bob: Mike has to really work to open the door to the US for Precious and Eric. One of the most important steps, getting the State Department to issue a visa.

[00:06:01] Mike: We were kind of playing multiple stories at the same time, and I think the pictures of Alan, uh, you know, in the hospital, they were effective, but what was even more effective was we were telling Alan to ask Eric like, “Hey look, go to the Embassy in Ghana, apply for a visa, just get that process started,” and then I actually started kind of going behind the scenes, and I started, uh, talking to some reps at the State Department to say, like, hey look, we’re going to, we’re trying to set this up now, you know. This person, Eric, which I, you know, by that time we had kind of identified who Eric really was, he’s a criminal, he’s a, he’s a subject of an investigation, there’s an active FBI investigation on him. Here’s what, here’s kind of what’s going on. Basically I said, “I need you to give him a visa.”


[00:06:51] Bob: Meanwhile, Mike is coaching Alan on what to say to Eric and Precious. They lay it on pretty thick.

[00:06:57] Mike: We had Alan say, “Hey, you got your visa, because my friend, my rich friend pulled some strings with the government.” That was the story that we were spinning towards him, and then we were saying, “Okay, well now my rich friend really wants to talk to you about gold, so he’s going to buy you a plane ticket.”

[00:07:11] Bob: Even after Mike gets the State Department to play along, there are still a whole lot of steps before Eric and Precious might actually get on a plane and land in a US airport where they can be apprehended. Would Eric even show up for his visa appointment? What if he got cold feet right before boarding the plane? You’d think Mike might be worried by these things, but he says he wasn’t.

[00:07:32] Mike: You know, it wasn’t really so much nervousness, I think, uh, just the way that we think here, we always have plan A, plan B, plan C, so this plan that we were setting forth, even though it was plan C, which is now plan A, we still had backup plans, you know, in place, so I knew that if Eric never really came here, or if uh, he never followed up on his visa appointment. We had other ways, you know, it’s just, you know at the FBI we just, we, time is on our side. So something would have come eventually. Like, for example, the UAE would have told us who these people really were. And so we could have, you know, these investigations drag on for a while, and eventually something would have broken, so I wasn’t um, part of me was like, there’s no way he’s actually going to do this. But even, and even if he didn’t, that would be okay, because this investigation will still be going forward.

[00:08:22] Bob: But, to the surprise of many agents involved, the plan works. Eric gets his visa and gets on a plane headed for Dulles Airport outside Washington DC.

[00:08:33] Bob: So it worked, okay. Are, are you there at the airport when they arrive?

[00:08:36] Mike: Yep, yes, it’s myself and a few more agents, and uh, um, you know we kind of confirmed with the Department of Homeland Security that he did indeed board the flight. We actually bought his plane ticket for him. And so we knew exactly when he was arriving. As he’s going through the immigration queues, one of the uh Customs and Border Protection officers was with us, had kind of taken us behind the scenes at the airport. We saw him just going through. I think he had like a blue suit on, and uh, he had like one of those neck pillows. He looked very tired, obviously. We kind of pulled him out of the queue. We told him to sit in a, a place that’s called secondary inspection, uh with CBP at the airport. We just kind of looked at his uh travel documents again, just to confirm that he really was the person we were looking for, and we kind of went to him, kind of broke the bad news.

[00:09:34] Bob: What was the, the expression on his face when you did that?

[00:09:36] Mike: I think he was very tired. He was very jetlagged. He was very like, uh just absolute resignation. No fight, no denial, just okay, sure. Take me.


Managing digital fraud risks in government — the 2021 study

The treasure trove of customer data and financial information that government generates, stores and processes on a daily basis makes it a rich target for hackers. The purpose of this study is to understand the steps government agencies are taking to mitigate digital fraud risks and protect customer data. In the context of this study, customers are individuals who receive and use services from federal, state and local governmental agencies.

Digital services enable government to conveniently deliver information and services to customers anytime, anywhere and on any platform or device. However, such convenience needs to be supported by a strong security posture.

Sponsored by TransUnion, Ponemon Institute surveyed 594 IT and IT security practitioners who work in federal, state and local/municipal government organizations (click for full study). All respondents are familiar with their agency’s efforts to prevent and detect fraud and are aware of their agency’s information security vulnerabilities and threats. In addition to studying the state of security in government agencies, the research reveals respondents’ awareness of the extreme dissatisfaction customers have with the security and convenience of agencies’ websites.

A key finding is that government agencies are not making the necessary investment in security technologies to protect customer data and make online access to accounts convenient. Only 43 percent of respondents say their agency has the security technologies necessary to provide customers with both a secure and convenient online experience when accessing their accounts. Another fraud risk is that only 37 percent of respondents say their agency makes it as easy as possible for customers to notify them if they believe their account has been compromised.

Following is a summary of digital fraud risks revealed in this research and the solutions needed to protect customer data and improve the online experience.

Similar to the commercial sector, government agencies are experiencing multiple data breaches involving the loss or theft of customers’ personal information. These data breaches are most often caused by employee carelessness (64 percent of respondents). This indicates the need for regular training and awareness programs and enforceable privacy and security policies. An important part of this training should be to prevent phishing and social engineering attacks. The other top root causes are hackers (56 percent of respondents) and lost or stolen devices (52 percent of respondents).

Without the necessary security technologies and in-house expertise, most agencies find it impossible or very difficult to detect attacks. The most difficult types of attacks to detect are social engineering (66 percent of respondents), credential stuffing (60 percent of respondents) and knowing the real customer from a criminal imposter using stolen credentials (60 percent of respondents).

Account takeovers (ATOs) are on the rise, and respondents believe mobile phones are most vulnerable to attacks. Mobile phones are ubiquitous, and 62 percent of respondents say they are the most vulnerable to ATO fraud. Further, not only are ATO attacks increasing, the severity of these attacks is also on the rise, according to 59 percent of respondents.

Most agencies’ senior leadership are not prioritizing the ATO risk. A barrier to managing the risk of ATO fraud is that only 41 percent of respondents say senior leadership makes it a priority to prevent ATOs, and only 38 percent of respondents say their agency regularly assesses the ability of its IT systems to prevent and detect fraud. As a consequence, only 37 percent of respondents say most ATO attacks are quickly detected and remediated and only 28 percent of respondents say their agency has a comprehensive view of its customers’ accounts.

To address the increase in account takeover attacks, agencies should consider a layered fraud management solution based on risk-based identity and device authentication. On average in the past two years, agencies have experienced 18 ATOs. More than half (53 percent) of respondents say ATO attacks have significantly increased (19 percent) or increased (34 percent). Sixty-five percent of respondents say their agencies use two-factor authentication to reduce ATOs.
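The layered, risk-based approach recommended here can be sketched in a few lines of Python. This is an illustrative toy, not any agency's actual system; the signal names, weights, and thresholds are all invented for the example:

```python
# Hypothetical sketch of risk-based authentication: weighted signals
# about a login attempt are combined into a score, and the score picks
# one of several layered responses.

RISK_WEIGHTS = {
    "new_device": 0.4,           # device fingerprint not seen before
    "new_location": 0.3,         # IP geolocation differs from history
    "credential_stuffing": 0.5,  # password appears in breach corpora
    "odd_hour": 0.1,             # login outside the user's usual hours
}

def risk_score(signals):
    """Sum the weights of the signals that fired, capped at 1.0."""
    score = sum(RISK_WEIGHTS[s] for s in signals if s in RISK_WEIGHTS)
    return min(score, 1.0)

def decide(signals):
    """Map a risk score to an authentication action."""
    score = risk_score(signals)
    if score >= 0.7:
        return "block"       # deny and alert the fraud team
    if score >= 0.3:
        return "step_up"     # require a second factor
    return "allow"           # low risk: password alone suffices

print(decide([]))                                     # allow
print(decide(["new_device"]))                         # step_up
print(decide(["new_device", "credential_stuffing"]))  # block
```

The point of the layering is that most legitimate logins pass with no added friction, while a second factor or a block is imposed only when the combined risk justifies it.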

Policies for the prevention and detection of fraud are not reviewed as frequently as they should be. The threat landscape is constantly evolving, but only 25 percent of respondents say these policies are reviewed monthly (12 percent) or quarterly (13 percent).

Accountability for managing fraud risks is dispersed throughout the organization. A possible reason for not prioritizing digital fraud risks is that no single function emerges as most accountable. Twenty-one percent of respondents say accountability rests with compliance/legal. Only 19 percent of respondents say the IT security leader is most accountable.

Without the necessary security technologies and support from senior leadership, it is difficult for agencies to be effective in protecting customers’ personal information.

Fifty percent of respondents say their agencies are very effective or effective in reducing customer fraud, which means the other half are not. Only 41 percent of respondents say their agencies are very effective in protecting customers’ personal information and only 38 percent are very effective in detecting and preventing account takeover fraud.

Customers are frustrated with the security and convenience of government websites. Only 27 percent of respondents say customers are satisfied with the convenience and 26 percent of respondents say customers are satisfied with the security of the website. As discussed previously, only 37 percent of respondents believe it is easy for customers to notify agencies if they believe their account has been compromised.

Artificial intelligence (AI) and improvements in identity authentication are considered important to improving the customer experience. Sixty-five percent of respondents say AI decision-making tools/technologies and interconnected devices will improve the ability to track customers’ status in order to improve the security and convenience when accessing online accounts. Sixty-one percent of respondents say improvement in identity authentication will improve the state of access governance and, therefore, improve customers’ user experience.


Debugger: The 1,000 companies that track you with every click and swipe

Bob Sullivan

I like a good challenge as a storyteller.  And let me tell you, making “third-party code” sound sexy is a challenge.  But that recent Apple ad showing an army of people following a cell phone user around, cataloging his every move — well, that’s real.  It’s hard to believe, but the commercial actually understates the problem.

Every day, with every click you make and every app you swipe, a virtual army of companies you’ve never heard of are tracking you.  Their 1×1 pixels and browser fingerprints lurk in every corner of the Internet, privacy landmines vacuuming up data about every aspect of your life.

I know you’ve heard this before, and I’ll bet you assume companies like Facebook and Google are tracking you to this degree — but you probably don’t realize, or don’t think about, the hundreds of small companies that attach themselves to name-brand websites and track you in the same way.

Why do they do that? So later, the billions of pieces of information collected about us can be married to billions of dollars being spent trying to get our attention.  Over and over and over: a mindless search for anxiety medicine or sexual dysfunction is auctioned off to the highest bidder, and shared with thousands of other firms. The result? A gigantic one-way mirror that not only intrudes on our most intimate thoughts but logs them forever, making them easy prey for murky data brokers and creepy hackers alike. For the rest of Internet time.

That’s what this podcast is about.  The Internet has a third-party problem — a number of third-party problems, really — and it’s time we talked about them.

To help me with this storytelling problem, I’ve enlisted the help of some real experts for this podcast — including Jeff Orlowski, director of the hit Netflix docu-drama The Social Dilemma. Orlowski certainly succeeded in making technology and privacy conflicts sexy. Also, you’ll hear from Chris Olson of The Media Trust; Jason Kint from Digital Content Next; and Jolynn Dellinger from Duke University. Debugger is brought to you by Duke University’s Sanford School of Public Policy and the Kenan Institute for Ethics.

Please listen to the podcast; if you prefer, a transcript of the show is below.





1. Chris Olson: Imagine this: it’s the beginning of Spring, and you’re excited to begin a long-needed home renovation project. You select Bob, a trusted and well-known contractor.


On the first day, Bob shows up with a handful of subcontractors, many of them expected. But as the days wear on, the number of subcontractors grows substantially, several of them questionable in skill and character. Even if you fire the bad ones, new ones appear. By the end of the following week, your property is littered with trash, the newly installed toilets aren’t working, and things are starting to go missing—silverware, jewelry, and even a painting.


By the end of the month, the unvetted subcontractors that Bob brought to your home are knocking on your door daily trying to sell you things. Even worse, they’ve sold your address to others who are also calling on you with unrelated offers. It’s clear that Bob hasn’t done a great job vetting his subcontractors ….

2. BOB SULLIVAN: Welcome to Debugger, a podcast where we ask the big questions about technology. It’s brought to you by Duke University’s Sanford School of Public Policy and the Kenan Institute for Ethics. I’m your host, Bob Sullivan. And we begin with that extended metaphor because today we’re going to talk about how web pages are built.  Sort of.
3. BOB : You’re not going to believe this but….most of the time you are visiting websites or firing up apps on your phone, you aren’t visiting the company you *think* you are.  Really.  Imagine if you were at the mall, and you walked into a Macy’s to buy some shoes….but the second you stepped in, you weren’t *really* at Macy’s. You were really at Ace’s Gender Information Collection Company. And Bob’s Income Estimators Inc. And Charlie’s One Stop Shop for Likely Diabetics. Pretty soon I’m going to run out of the alphabet here.  There might be hundreds of others.

That’s what your life as a consumer is really like. Really.  Just one example: When you visit a site like CNN, you imagine you are getting information from CNN…FROM a bunch of computers sitting somewhere in the CNN center in Atlanta, or at least controlled by someone in the CNN Center in Atlanta.

4. BOB  : But when I visited just now, I was really visiting….Outbrain. And Bounce Exchange. And Integral Ad Science. And Salesforce. And Onetag. And Sourcepoint. And appnexus. [speeding up] And the Rubicon Project. And Criteo. And Quantcast International. And Nati. And Acxiom. And the Trade Desk.  And Sharethrough Inc.  And Oracle.  And Pubmatic Inc. And Mediamath. [Can barely make it out here] And RTL Group. And Openx software. And dyonomi. And Rhytmone. And Iponweb. And Zeta Global.
5. BOB : And you thought you were just getting some headlines and checking on your stocks.
6. BOB : You might not have heard of any of these companies, but they sure have heard of you.  In fact every one of these companies knows you really well, maybe better than you know yourself, thanks to all these opportunities they get to learn about you.  But if you haven’t heard of them, don’t feel bad.  Most of the people who work for the brand-name websites you *think* you are visiting probably haven’t heard of them, either. Fully 90% — maybe more! — of the computer code that lands on your computer when you open up a website *isn’t* written by people at that website.  It comes from third party companies, like the ones I just listed. THAT…is how web pages are built. One strange company you’ve never heard of before at a time.


As you might imagine, that’s a big risk for the brand-name companies you *think* you are visiting, like CNN, just as it would be for that imaginary construction company at the beginning of this podcast. Chris Olson runs a company named The Media Trust, which tries to help companies deal with this third-party risk. That was him reading the opening metaphor about housing contractors. But in real life…real, virtual life…it’s his job to protect companies from getting in bed with the wrong subcontractors.  And he’s … seen a lot of things. The world he works in is….amazing. Here, he tries to explain the morass to me.

7. Bob: One of the first things you said, which I think would surprise most listeners, is that you help digital companies maintain control of their websites, their digital assets. How did they lose control of them?

Chris: Great question. When you, as a consumer, visit a website or an app, what renders on your device is not just what you see; it’s really the backend of that website, all of the source code. … The place where control has been lost, and it’s almost in its entirety, um, uh, for most companies, is that 20 years ago, roughly 20% of all of the source code that rendered on you was third or fourth or fifth party.

The transition, and where control has been lost, is that that 20% of third- and fourth-party code is now 90 to 95% on news or information websites.

It’s often 98 or 99% of what actually renders on the consumer. The application security teams remain the same. Meaning, they still cover the owned and operated portion of the consumer’s device experience. Um, but that leaves roughly 90 to 95% open to thousands of third parties that are looking to gain access to the consumer.

So that, that idea of third-party code, is where the control has been lost and really where that battle, I think is, is, is really raging today.

Bob: Uh, this sounds crazy. 99% of the stuff that’s on a webpage I look at when I go to a major news site is written by somebody else?

Chris: The source code.

8. BOB : So, let’s talk about what I think of as the third-party problem. Actually, on the Internet, you have a lot of third-party problems.  Third-party code can be behind all kinds of malicious attacks — ransomware for example – and we’ll talk about those in future episodes. But today we’re going to talk about third-party tracking, just to illustrate the problem.  You think you are visiting a news site, but you are really getting code from Secretive Data Collection Inc. on your computer.  You think you are buying from a retailer, but you are really telling BigPharma Inc. how likely it is that you are a diabetic.  You think you have a relationship with CNN, when you really have a relationship — a rather one-sided relationship — with Acxiom and all the rest. Every day, with every click, or every smartphone swipe, you are letting hundreds and hundreds of strange subcontractors into your life, into your digital home. And you have no idea who they are.
9. BOB : Of course, companies aren’t invading your space like this for fun. They’re doing it for data.  The Web is awash in mysterious companies that sneak onto your laptops and smartphones, watching your every move.
10. BOB : At this point, you might be thinking, ‘the joke is on them.’ You’re just reading stories with headlines like “29 Optical Illusions to Kill Time” or “Dealing with cat depression.”  Who cares?  Well, these companies don’t just follow your click-trail around CNN. They follow you all over the web. Your next click, and your next, and your next, building an ever more detailed profile about you.
11. BOB : They do this by implementing a series of trackers. Not just cookies, which you might have heard of, but all kinds of hidden tracking tools. Browser fingerprints. 1×1 pixel images. MAC addresses.  All because the most essential element of all this data collection is that it can be tied back to you, specifically…your device, anyway.  The Electronic Frontier Foundation has called this a “One Way Mirror” and in a recent report, argued that trackers are hiding in nearly every corner of today’s Internet.  Placed there, like privacy land mines, by companies you’ve never heard of.  The only time you might become aware of their presence? When you get an ad that seems….just perfect for you.
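The fingerprinting Bob mentions can be made concrete with a small sketch. This is illustrative only, not any real tracker's code; the attribute names are a typical but invented sample, and real scripts harvest many more signals (canvas rendering quirks, audio-stack behavior, and so on):

```python
# Illustrative browser-fingerprint sketch: hash attributes the browser
# freely reveals into a stable identifier -- no cookie required.
import hashlib

def fingerprint(attrs: dict) -> str:
    """Hash sorted browser attributes into a stable 16-hex-char ID."""
    canonical = "|".join(f"{k}={attrs[k]}" for k in sorted(attrs))
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

visitor = {
    "user_agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) ...",
    "screen": "1920x1080x24",
    "timezone": "America/New_York",
    "language": "en-US",
    "fonts": "Arial,Calibri,Georgia",  # installed fonts leak bits too
}

# Same attributes -> same ID on every site embedding the same script,
# which is what lets clicks on different sites be tied together.
print(fingerprint(visitor))
```

Because the identifier is derived from the device itself, clearing cookies doesn't reset it: the same hash simply reappears on the next page that runs the script.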
12. BOB : Ever wonder why you get that ad for a power nap pillow (with pockets for your hands!) right as you’re about to nod off at your desk?  Jeff Orlowski, Director of The Social Dilemma has noticed this phenomenon.
13. JEFF ORLOWSKI: I’m no longer on Facebook. I stopped using it while making this film, but when I was on Facebook, I was a very, very heavy user of it. And I remember seeing countless cinematography tools being targeted.

Specifically to me, some that I actively bought and regretted purchasing some that were like terrible waste of money, but were effective at convincing me to buy them. Um, for me personally, the ads were effective. Um, there was a, there’s a skateboard thing that I saw that was like, it totally figured me out.

It knew that I was into adventurous sports. It was this one wheel, um, a skateboard platform. You could take it on trails, you could take it off road. It looked like a really fun thing. And then when they started showing me ads of using it as a cinematography platform, there were people in the ads like using it to get really awesome shots in places that you couldn’t get like it totally reverse engineered me and I freaking cracked.

Right. I bought the thing and I don’t use it ever anymore.

14. BOB : These ads appear thanks to a rapacious ecosystem fueled by thousands of companies and billions of dollars that relentlessly tracks you everywhere you go. At the visible part of this iceberg are companies like Google and Facebook, and the money they make from your data is…..incredible.  Google generates $183 billion in revenue annually — about $150 billion of that from advertising. That’s more money…just from showing ephemeral, digital ads, than General Motors generates from selling cars all over the world. And Facebook, well, its annual revenue is $87 billion — nearly all of it, $84 billion, is from ads.
15. BOB : But under the surface of this iceberg are thousands of third-party companies which are playing the same game. Tracking you.  Most of them would fit neatly under the broad descriptor of “data brokers.” And the obvious problem is that if you’ve never heard of them, you have no way of knowing what they know about you.  Whether it’s right or wrong, whether it’s hurting your ability to rent a home, or get a job, or even get a fair price.
16. JEFF O: And once again, it’s not the ads themselves that are the problem. It’s everything that incentivizes them, leads up to the need to show you that ad at that moment in time, you know, every time you do a search on Google, an auction is being run on you. There are 40,000 Google searches that happen every second.

So there are 40,000 auctions of human interaction happening every single second on Google. And the question is like, who’s willing to spend the most money to put something in front of your eyes right now, in that particular case, like those cinematography, platforms were the thing that, that people were spending money to get in front of me.

And that could be something used very innocently and innocuously to sell a product to me, or it could be something that is very maliciously being used to put a political ideology in front of me. Like it’s the same tools that do that same work. Um, and so it becomes an incredibly slippery slope.

17. BOB : 40,000 Google searches a second….all generating ads hand-picked….or AI-picked…for you.  It sounds almost nonsensical, that there is a place in the world where the billions of pieces of information known and stored about us are married to the billions of dollars chasing our attention…well, there isn’t a place exactly, but there is a process. It’s called RTB, or real-time bidding.  Estimates are that a billion times a day — a billion times a day! — all this data and all those dollars are thrown into a virtual trading floor and out comes…a lot of personalized ads.  As Privacy International puts it, “If you’re reading an article about erectile dysfunction, depression or self-harm, chances are high, that this will be broadcast to thousands of companies. If you’re booking a table at a restaurant, purchased an item at an online retailer, searched for a flight, or read the news, this will also likely be shared with ad exchanges.”  Your most intimate information, shared thousands of times per second, with thousands of companies, all in an effort to sell you a desk nap pillow.
18. BOB : Underneath all this is advertising technology — AdTech — that makes order out of this chaos.  There’s DSPs and SSPs and DMPs and CMPs….and…suffice to say that people who want to place ads use DSP software — demand side platforms — to bid on people and places they want to put ads, while publishers use SSPs — supply side platforms — to hawk their available ad inventory.  Each platform might talk to thousands of brands and websites, constantly, marrying desk nap pillows with tired, high income office workers.

And for this we let them know every click we make and every app we open?
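The RTB flow described above can be reduced to a toy sketch. The bidder names and prices below are invented, and real exchanges add price floors, fees, and fraud checks, but the core clearing rule, highest bidder wins and pays roughly the runner-up's price, is this simple:

```python
# Simplified sketch of a real-time-bidding (RTB) auction. Real
# exchanges run this round trip in on the order of 100 milliseconds.

def run_auction(bid_request, bids):
    """Pick the highest bidder; charge just above the second-highest
    bid (a common second-price clearing rule)."""
    if not bids:
        return None  # no demand: the slot goes unfilled
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner, top = ranked[0]
    clearing = ranked[1][1] + 0.01 if len(ranked) > 1 else top
    return {"winner": winner, "price": round(clearing, 2),
            "user": bid_request["user_id"]}

# A bid request fans out to demand-side platforms (DSPs), each of
# which decides what this user, on this page, is worth to it.
request = {"user_id": "device-8f3a", "site": "news.example", "slot": "banner"}
bids = {"dsp_alpha": 2.40, "dsp_beta": 1.95, "dsp_gamma": 0.60}  # CPM dollars

print(run_auction(request, bids))
# -> dsp_alpha wins and pays 1.96, just above the runner-up's bid
```

What makes this privacy-relevant is the fan-out: every DSP that receives the bid request learns about the user and the page, whether or not it wins the auction.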

19. CHRIS: I would say that maybe the easiest way for a consumer to just kind of come to the final answer is that we are at a stage now where every single individual’s experience on the world wide web is different than every other individual’s. Every experience you have is tailored to you. Everywhere wants to know you and everywhere wants to bring you back to something, whether it’s convincing you of an idea, um, having you buy other products and services, making you stay there so you click, click, click and drive more ad impressions, maybe a purely economic reason. Uh, that is the predominance of source code that is executing on everyone’s devices.

And right now society is looking at this saying all of those terrible companies are doing these, you know, these things. Well, it’s happening under all of our noses on our stuff. Right. I think those companies are geared toward delivering what we’re asking them to give. Um, and it’s just now that, that people are really sort of starting to say, no, wait, what, what’s the bigger impact of this?

20. BOB :  What’s the bigger impact of this? It’s incredible, the number of companies involved in delivering this…”experience” … to you.
21. CHRIS: So essentially the ecosystem, um, has access to your device. And it’s bringing you things that make it money or that they think are interesting to you, basically, whatever it perceives to be the ideal experience, um, in context of dollars that they can make. Um, and at the same time, it’s learning all about you. So that happens when things like data management platforms run. Um, if you look at a large ad-supported website, you may have 15 different data management platforms rendering on one page view, because it’s not just the website’s DMP that’s arriving. It’s the DMP of each of the advertisers, or the mechanism that brings the advertiser, the targeting mechanism, whatever that might be.

So lots of different data transactions are occurring. This is how the digital ecosystem learns about you. From there as you start to move different parts of that ecosystem have access to your behavior on the actual site. If you click on shoes, maybe only the website itself has access to know that you like shoes, maybe third parties know that you like shoes, right?

So that is one of the places where, um, the website owner or operator should know who’s on the site. What are they allowed to access? Why are they here? Right. So it’s in their best interest to have this, by the way, not just your best interest. I think from there, as you move through a purchase funnel on a retail site, all sorts of other, um, executions are occurring as you run through, um, to the page. What I’d say is very, um, important, um, is that most companies today that are on the digital ecosystem, meaning they have a website or an app, um, for all intents and purposes, are media companies, they just haven’t admitted it yet, or they haven’t figured it out yet. So if you’re a retail company, just like an ad-supported newspaper, right, or a news site, or a social media platform, a significant part of your job and a major part of what executes on the consumer is you acting like a media company…more or less in real time.

22. BOB :  One thing I think about a lot….despite all this technology, billions of dollars changing hands…I still often get what I call the worst ads ever designed in the history of advertising.  When I search online for a new drum kit, and then buy, I am then stalked by ads for that very drum kit…the one I just bought…for weeks, or longer.  What’s worse than an ad for something I just bought!!! I have a friend who recently posted that her dad had died more than 10 years ago, but still, she gets pummeled with ads for Father’s Day every year. You’d think this system which delivers personalized ads for everything, all the time….wouldn’t get things so wrong.


I asked Jason Kint about this. He’s the CEO of Digital Content Next, a trade group that advocates for content companies. He’s also a frequent critic of Facebook and Google.

23. Bob: I mean, it’d be one thing if it was very, very effective, you know, maybe you could talk me into surrendering my privacy for that. But if people are just collecting all this data about me and storing it somewhere where it’s eventually gonna do no good, it’ll be hacked or something. I mean, well, the ads I get still stink, that seems like a really bad bargain.

Jason Kint: It is a bad bargain. And I think it’s why you’re seeing, both from policy changes, yeah, and technology changes, a shift towards what’s often called first-party data, a shift towards data that’s being collected in the context of what the user’s trying to do at the moment, the site that they’re visiting, the app that they’re using, and that data being allowed as part of the experience.

24. Bob: Billions of dollars is being spent tracking me. And yet I still get really stupid ads. So what’s the explanation for that?

Jason Kint: I mean, it is one of the fallacies, I think, you know, in some way we have a whole generational change that has defined relevance based on, you know, based on the individual that’s being targeted and whether or not they’re going to click on the ad or respond to it in that moment in time.

Rather than, you know, we have centuries of research on, on advertising and, and relevancy is much broader than that. It can be done based on the context of the, the entertainment or the news you’re looking at. Right. Um, or it can be done on, you know, on the context of where your state of mind is at that moment.

And, you know, just trying to create some desire, demand for the advertiser’s product that may be actionable down the road. And so, this idea that, that we need this kind of third-party data from tracking in order to be relevant is, I think, false. And I think the faster we get past that, it actually will probably be good for the advertisers too, um, who have kind of moved down this stream of using digital advertising almost entirely for kind of performance-based media, direct response.

The equivalent of junk mail in our mailbox rather than really shaping minds and creating desire for products. Like they, you know, they do on a lot of other means.


BOB: Okay. I have not heard someone say that before, and I think it’s quite brilliant. What we’re looking at is digital junk mail all the time.


Jason Kint: Yeah. A lot of it is now. Within the video space, you know, there’s advertising that resembles almost the commercials on television, that do make us stop and think and do create, you know, that relationship or consideration of a brand, that are more, um, I think, than the equivalent of junk mail. But a lot of the impressions on the internet are entirely focused on getting somebody to convert…that’s where creative is lost

25. BOB:  Great!  We’ve spent perhaps trillions of dollars building the world’s most powerful data collection system, the universe’s best tool for slinging fake news around the world…all so we can fill our every waking second with digital junk mail from thousands of companies we’ve never heard of.  Human achievement unlocked!!!


But, there is hope. I’ve already said I think a lot of this problem stems from the third-party nature of these relationships. After all, if a website did something that pissed you off, you could just stop visiting that website. That’s capitalism. But if the real culprit is Spooky Data Broker Inc….and Spooky Data Broker Inc gets information from every news website…what is a consumer supposed to do? That’s the difference between a first-party and a third-party relationship. But lately…there’s been a shift in the winds thanks to projects underway at Apple and Google. They are both taking measures to insulate users from third-party companies – Apple, by requiring that apps like Facebook now get explicit opt-in from users for some kinds of tracking… And Google with its program called Privacy Sandbox, which theoretically will prevent third-party cookies from stalking Chrome users. Now, these projects are by no means perfect…people are rightly worried they will entrench Big Tech companies and give them even more power… but, at least they are getting people talking about the third-party problem, and that’s great.

26. BOB : But I don’t think it’s a great idea to rely on the largesse of two or three large tech companies to fix this problem. If we want to get all these anonymous subcontractors swinging hammers out of our homes…I think we’re going to have to call the cops. We’re going to need a federal law, with teeth, that lets people take back control of who gets in their living rooms, and their hearts and minds. Here’s Duke University professor Jolynn Dellinger – we’ve heard from her a couple of times on Debugger.
27. Jolynn: I will say I’d like to underscore that we do have a system that in my view has been tilted in favor of business and companies for the past 20-some years. This notice and consent: each company basically makes its own decision about how it’s going to treat data and maybe, or maybe not, gives you the option to opt out.

And I feel that’s one way to look at it. We’ve done that. We’ve tried that out now, in my view, with not great consequences. And so what about the idea of opt-in? What if a company’s out there and they have a great idea and a million reasons why you should use their service or their product? Persuade us of that.

And then we’ll opt in and we’ll say, Oh, that sounds fun. I’d like to use that service. I think it’s possible that the internet won’t break. I think it’s possible that we won’t lose all innovators and technologists. I think maybe we will even find, I mean, there was a recent great article about, uh, a company like the BBC, I think it’s called the NPO, and another that’s gotten rid of behavioral advertising.

It’s gone back to contextual advertising and their revenues went up. Right. And maybe that will happen. Um, maybe there are other ways, there are other business models, there are other things that we could do that more accurately reflect our values as people. And that’s what I guess I just want to keep coming back to: all of this is choice.

Our legislation is choice. Our actions are choice, and we want to get back to a point where citizens are making choices. And the more I’m surveilled and the more my data is collected and used in ways I don’t understand, the less choice and the less agency I have. And that’s what we have to rely on.

Keeping this stuff …human.


28. BOB : But before we go too far down the ‘how do we fix this’ road, I think it’s worth spending some time better understanding how we got to this crazy situation in the first place. You know me, I love history lessons. So we’re going to dive into what I think might be the craziest history lesson of all, a story about where all these thousands of anonymous data collection companies come from. We’re going to dive into the history of data brokers. It’s a dark, secretive tale. I mean, what else would the history of data brokers be? That’s our next special feature, on Debugger. (And then, after that, we’ll talk about the risk to national security that data brokers pose….)


Survey: Managing third-party permissions is ‘overwhelming’

The purpose of this research is to understand organizations’ approach to managing third-party remote access risk and to provide guidance, based on the findings, on how to prepare for the future. A significant problem that needs to be addressed is third parties accessing an organization’s networks and, as a result, exposing it to security and non-compliance risks.

Sponsored by SecureLink, Ponemon Institute surveyed 627 individuals who have some level of involvement in their organization’s approach to managing remote third-party data risks. They were also instructed to focus their responses on only those outsourcing relationships that require the sharing of sensitive or confidential information and involve processes or activities that require providing access to such information.

Organizations are having data breaches caused by giving too much privileged access to third parties. Only 36 percent of respondents are confident that third parties would notify their organization if they had a data breach involving their sensitive and confidential information. More than half (51 percent) of respondents say their organizations have experienced a data breach caused by a third party that resulted in the misuse of its sensitive or confidential information either directly or indirectly. The actual number of data breaches could be higher, given respondents’ lack of confidence that their third parties would notify them of a breach.

In the past 12 months, 44 percent of respondents say their organizations experienced a data breach caused by a third party either directly or indirectly. Of these respondents, 74 percent of respondents say it was the result of giving too much privileged access to third parties.

The following findings reveal the risks created by third-party remote access

Organizations are at risk for non-compliance with regulations because third parties are not aware of their industry’s data breach reporting regulations.  On average, less than half of respondents (48 percent) say their third parties are aware of their industry’s data breach reporting regulations. Only 44 percent of respondents rate the effectiveness of their third parties in achieving compliance with security and privacy regulations that affect their organization as very high.

Managing remote access to the network is overwhelming. Seventy-three percent of respondents say managing third-party permissions and remote access to their networks is overwhelming and a drain on their internal resources. As a consequence, 63 percent of respondents say remote access is becoming their organization’s weakest attack surface.

It is understandable why 69 percent of respondents say cybersecurity incidents and data breaches involving third parties are increasing because only 40 percent of respondents say their organizations are able to provide third parties with just enough access to perform their designated responsibilities and nothing more. Further, only 37 percent of respondents say their organizations have visibility into the level of access and permissions for both internal and external users.

Many organizations do not know all the third parties with access to their networks. Only 46 percent of respondents say their organizations have a comprehensive inventory of all third parties with access to their network. The average number of third parties in this inventory is 2,368.

Fifty-four percent of respondents say they don’t have an inventory (50 percent) or are unsure (4 percent). Respondents say it is because there is no centralized control over third parties (59 percent) and 47 percent of respondents say it is because of the complexity in third party relationships.

Organizations are not taking the necessary steps to reduce third-party remote access risk. Instead of taking steps to stop third-party data breaches and cybersecurity attacks, organizations are mostly focused on collecting relevant and up-to-date contact information for each vendor.

Very few are collecting and documenting information about third-party network access (39 percent of respondents), confirmation security practices are in place (36 percent of respondents), identification of third parties that have the most sensitive data (35 percent of respondents), confirmation that basic security protocols are in place (32 percent of respondents) and past and/or current known vulnerabilities in hardware or software.

Most organizations are not evaluating the security and privacy practices of all third parties before they are engaged. Less than half of respondents (49 percent) say their organizations are assessing the security and privacy practices of all third parties before allowing them to have access to sensitive and confidential information.

Of these respondents, 59 percent of respondents say their organizations rely on signed contracts that legally obligate the third party to adhere to security and privacy practices. Fifty-one percent of respondents say they obtain evidence of security certifications such as ISO 27001/27002 or SOC. Only 39 percent of respondents say their organizations conduct an assessment of the third party’s security and privacy practices.

Reliance on reputation is why the majority of organizations are not evaluating the privacy and security practices of third parties. Fifty-one percent of respondents say their organizations are not evaluating third parties’ privacy and security practices, and the main reasons are reliance on their reputation (63 percent of respondents) and data protection regulations (60 percent of respondents). However, as discussed previously, only 48 percent of respondents say their third parties are aware of their industry’s data breach reporting regulations. Less than half of respondents (48 percent) have confidence in the third party’s ability to secure information.

Organizations are in the dark about third-party risk because most third parties are not required to complete security questionnaires. An average of only 35 percent of third parties are required to fill out security questionnaires and only an average of 26 percent are required to undergo remote on-site assessments.

If organizations monitor third-party security and privacy practices, they mostly rely upon legal or procurement review. Only 46 percent of respondents say their organizations are monitoring the security and privacy practices of third parties that they share sensitive or confidential information with on an ongoing basis. Fifty percent of respondents say the legal or procurement functions conduct the monitoring. Only 41 percent of respondents say they use automated monitoring tools.

Again, reliance on contracts is why 54 percent of respondents say their organizations are not monitoring the third parties’ security and privacy practices. Sixty-one percent of respondents say their organizations do not feel the need to monitor because of contracts and another 61 percent of respondents say they rely upon the business reputation of the third party.

Third-party risk is not defined or ranked in most organizations. Only 39 percent of respondents say their third-party management program defines and ranks level of risk. The top three indicators of risk are poorly written security and privacy policies and procedures, lack of screening or background checks for third-party key personnel and a history of frequent data breach incidents.

Organizations are ineffective in preventing third parties from sharing credentials in the form of usernames and passwords. Respondents were asked to rate their organizations’ effectiveness in knowing all third-party concurrent users, controlling third-party access to their networks and preventing third parties from sharing credentials in the form of usernames and/or passwords on a scale from 1 = not effective to 10 = highly effective. Only 41 percent of respondents say their organizations are very effective in controlling third-party access to their networks and preventing credential sharing.

Click here to access the full report at SecureLink’s website.


‘Please turn off your surveillance gadgets before dinner’

Bob Sullivan

What do you do if you think your friend is bugging you? I don’t mean bothering you. I mean…bugging you…using a device to listen to you, maybe even recording your conversations, when you visit their home. Well, that’s the world most of us live in now. Personal assistants, many modern TVs, smart doorbells…they all incorporate listening devices. What if you don’t want to be surveilled like that? Should you ask your friends to turn off their Alexa when you walk in the door? Should they offer?

Tech etiquette sounds like something maybe we should be talking to Ann Landers about, politeness or whatnot, but in reality, tech etiquette has a very, very serious side. When somebody visits a friend’s home right now, it’s quite likely they have some kind of electronic devices that could be listening: smart doorbells, smart televisions, personal assistants. What are the rules around these things? Social rules, legal rules, and what can be done about them?

For my latest episode of Debugger, I talked with Jolynn Dellinger, a privacy law professor at Duke University, where she is also a senior fellow at the Kenan Institute for Ethics. (Kenan and Duke’s Sanford School of Public Policy support the podcast.)

You can listen by clicking here, or clicking the play button below if it appears in your browser. A short transcript follows.

[00:02:05] Jolynn:  I’ve just been thinking about it more recently with the proliferation of personal assistants like Alexa, Google Assistant. But I had a very interesting conversation on Twitter the other day with a bunch of random people that I don’t know. Uh, and it seemed to spark a lot of interest and a lot of strong opinions about … what is the etiquette? And my question was: when you go to somebody’s house, do you need to ask them whether they have always-on listening devices? Or, when you have people over as guests, should a homeowner let those people know, I have listening devices on, is that okay? And this seems like a weird question because I don’t think anyone is engaging in this kind of etiquette right now.

And as a couple of people pointed out on the Twitter conversation, you know, this is just as important to ask with phones. Because folks who have their personal assistants activated on their phones, which I don’t, but some people do, whether it’s Siri or, or whatever on an Android, it presents the same issue of being always on.

And I think that raises some questions we should be talking about in terms of what’s appropriate in terms of potentially recording other people without their permission or consent

[00:03:27] Bob: Something I can’t even imagine right now. Welcome. Let me take your coat. Would you like a glass of wine, and oh, are you okay if I record this entire evening?

[00:03:36] Jolynn: Yes, exactly. Exactly. They’re not recording the entire evening and I don’t want to over-represent, but they are always listening and able to record. Of course.

But I think back in the day, well, at least when I was in college, it was almost people treated it almost like an imposition if you said you were a vegetarian. Right? Like they invite you to dinner and you’re like, okay, well I’m a vegetarian. Oh really? And so I think nowadays, when people invite someone over for dinner, they almost always say, are there any dietary restrictions? And I don’t know whether that’s because becoming vegetarian or vegan is more mainstream or there is a proliferation of allergies. You want to make sure somebody doesn’t have a nut allergy or, or something like that. So it’s gained acceptance over time and I’m kind of wondering if we should be heading in that direction with, surveillance technology.

[00:04:36] Bob: I wonder, mechanically, how it would even be accomplished. So let’s say someone says, I’m just really morally opposed to having listening devices around me activated, how would someone turn off all their smart devices in one swoop like that? Is that even possible?

[00:04:53] Jolynn: I was asking that on Twitter. I mean, there are certainly people far more technologically proficient than myself and many pointed that out. Somebody said, well, I have 20 of those devices around my house. It would take me half an hour to go around and dismantle everything. And another person made a really good point as well. This person had some physical challenges, um, and was using listening devices in a way that enabled him or her to, to live a safe life. So that turning off those devices would actually pose a serious problem for that person. And I think, you know, we always need to be aware of that and, and I would never ask somebody to turn something off that they were using in that way.

I think the more common situation though, is folks using these for convenience or fun. And so they’re cooking and they can say, Alexa, play my blah-blah-blah playlist. And in that circumstance, I think that, um, my preference or anyone’s preference to not have the device on, uh, should supersede. However, do you just unplug it or do you put it in the microwave or the refrigerator? I mean, these were all things that were discussed on this Twitter thread, which was hilarious. Some people said it wouldn’t be that hard to disconnect.

[00:06:08] Bob: I was surprised to see several people …seemed to say, if you have a problem with my Siri or my Alexa, then I’m, I’m uninviting you. You’re not allowed in my house anymore. That was a shocking reaction to me.

[00:06:21] Jolynn: Right. I thought so too. And that’s what I was seeing. You know, we need to keep this focused on the fact that these are usually our friends that we’re inviting over. If you invited somebody for dinner and they said, I’m a vegetarian, you likely would not say, well, tough. I’m having steak, right? I mean, that’s probably not what you would say. I think there are many, many technologies that we use. And part of the reason I pose this question is every time that we make personal decisions to use emerging technologies, surveillance technologies, and others, it may be something that affects only ourselves and our families, or maybe something that affects others.

And this even goes for a direct-to-consumer DNA test, right? That’s not protected. And your DNA gives away information about other people besides just you. And is that something where you should be asking your family before you send spit in a cup to 23andMe? Um, other things around the house besides Alexa, like Nest cameras.

You know, some folks have those around their homes so they can see in their children’s rooms when they’re out for the evening. Well, if I send my child over to play with that child, is my child then being observed by those cameras? Internet of Things Barbie dolls that talk to children or other toys, Ring cameras that may be doing audio and video recording of conversations that take place outside people’s apartments or homes.

I think there’s a lot of these questions that we should be talking about

[00:07:54] Bob: So I had a Ring experience. I sold a house not that long ago, and I put up a Ring camera because I was going to be traveling during some of the process. And I put it up with the default settings, and right away, a realtor and prospective buyer showed up at the house and I could hear them talking about my house. And I was horrified by this.

Of course, there was a piece of me that was tempted to listen in closely and say, oh, perhaps I should change the paint on that wall, everyone hates the color in the living room or whatnot. Right? But that would give me an unfair advantage potentially. And more than an unfair advantage: when I dug through the paperwork, uh, and the specifications on the Ring camera, it suggested to me that it might be an illegal wiretap to be recording people’s audio conversations without their knowledge. Aren’t a lot of these things potentially?

[00:08:44] Jolynn: I’m not really sure if there’s a lot of case law about that yet. I did see a case in New Hampshire where a judge said it was not a violation. And I think they might’ve had a two party consent rule there, where the judge said it was not a problem. I think it should be a problem.

Uh, I think that the basis of the decision there was, well, the conversation was taking place in a public space where you should expect … other people may be able to hear you. But I think when you’re standing on somebody’s porch, you don’t necessarily expect that. I think if you’re in an apartment building… say, and you live across the hall from someone who may have installed a Ring doorbell, you may know, you may not know if you come in with a friend and you’re unlocking your door and no one else is in the hallway.

[00:09:29] I think you naturally would expect privacy around that conversation. So you can disable the audio on Ring doorbells. And I think Amazon suggests that people may want to consider doing that if they feel the need, but I don’t think Amazon is providing legal advice.

[00:09:45] Bob: Yeah. And I disabled my audio immediately out of fear of just that. And it also, it seemed just wildly unfair, but that suggests that there’s perhaps more serious implications to some of this. And you raised one of the most serious of all, what might be the national security implications of all of these unintended overheard conversations.

[00:10:07] Jolynn: Well, that question didn’t get as much uptake on Twitter, but I’m very interested in it. I think there was a big deal about whether or not there could be a Peloton in the White House because of the various things it can hear with microphones or cameras. And I just wondered about all the people working from home who are dealing with classified or secure materials, confidential materials.

If they’re sitting in their office on a Zoom call, what devices do they have around them that may be picking things up, and should that not be permissible?

[00:10:44] Bob: It’s not hard to imagine a foreign adversary figuring out how to wake up all of these listening devices without people’s knowledge and using it as a sort of extensive spy network.

[00:10:55] Jolynn: It seems possible. I just think as we get more and more of these ubiquitous technologies, we should all at least be willing to discuss and have a conversation about what’s the right thing to do here for our neighbors, friends, and family.

[00:11:13] Bob: Have you ever walked into a house and said to someone, is anything listening to me?

[00:11:17] Jolynn: Yes, I have. I have

[00:11:19] Bob: I was hoping you would say that. And what was the reaction?

[00:11:23] Jolynn: These are friends of mine, so they were nice about it. And one time I said the refrigerator might be a good place for that model.

[00:11:34] Bob: You can always put your gadgets in the fridge if you’re worried about them. Professor Jolynn Dellinger from Duke University, thank you very much for your time.


The Fourth Annual Global Study on Exchanging Cyber Threat Intelligence: There Has to Be a Better Way

Since Ponemon Institute conducted the first study on threat intelligence sharing in 2014, organizations that use and exchange threat intelligence have been improving their security posture and their ability to prevent and mitigate the consequences of a cyberattack. As revealed in The Fourth Annual Study on Exchanging Cyber Threat Intelligence: There Has to Be a Better Way, some 74 percent of respondents that had a cyberattack believe that the availability of timely and accurate threat intelligence could have prevented or mitigated the consequences of such an attack.

According to the 1,432 IT and IT security practitioners surveyed in North America, EMEA and Asia Pacific, the consumption and exchange of threat intelligence continues to increase, as shown in Figure 1. Despite the increase in the exchange and use of threat intelligence, more work needs to be done to improve the timeliness and actionability of the threat intelligence.

Following are 11 trends that describe the current state of threat intelligence sharing.

1. Satisfaction with the ability to obtain threat intelligence decreases slightly. This year, 40 percent of respondents say they are very satisfied or satisfied with the way their organizations obtain threat intelligence. This is a slight decrease from 41 percent of respondents in 2017. To increase satisfaction, threat intelligence needs to be more timely, less complex and more actionable.

2. Organizations do not have confidence in free sources of threat intelligence. The main reasons for paying for threat intelligence are that it has proven effective in stopping security incidents and a lack of confidence in free sources of intelligence.

3. On a positive note, the accuracy of threat intelligence is increasing. However, the majority of organizations believe the timeliness and the actionability of threat intelligence is low.

4. The two main metrics used to assess threat intelligence are the ability to prioritize threat intelligence and its timely delivery. Other metrics are the ability to implement the threat intelligence and the number of false positives.

5. When it comes to measuring the ROI of their threat intelligence, 39 percent of respondents say their organizations calculate the ROI. The top ROI metrics organizations look at include the following factors: reduction in the dwell time of a breach, reduction in the number of successful breaches and faster, more effective incident response.

6. Timeliness of threat intelligence is critical but not achieved. Only 11 percent of respondents say threat intelligence is provided in real time and only 13 percent of respondents say threat intelligence is provided hourly.

7. Threat indicators provide valuable intelligence. Eighty-five percent of respondents say they use threat indicators. The most valuable types of indicators are malicious IP addresses and indicators of malicious URLs.

8. Most organizations either currently consolidate or plan to consolidate threat intelligence data from multiple solutions. However, 53 percent of respondents say their organizations mainly use manual processes to accomplish the consolidation.

9. With regard to how threat intelligence is used throughout the network, the majority of organizations are using it in intrusion detection systems (IDS). Unified Threat Management (UTM) is usually a single security appliance that provides multiple security functions at a single point on the network. The use of UTMs has increased significantly since 2017.

10. Internal silos prevent more effective collaboration and the exchange of threat intelligence with other organizations. Only 40 percent of respondents say the collaboration between their organization and other companies in the exchange of threat intelligence is either very effective or effective.

11. The use of automated processes to investigate threats is gaining traction. Fifty-four percent of respondents, an increase from 47 percent of respondents, are using automated processes to investigate threats. There also has been a significant increase in the use of machine learning and AI since 2017.

To read the full report visit the Infoblox website.

Plane crashes are investigated. Computer crashes should be, too

Bob Sullivan

When a plane crashes, a government agency rushes to the scene looking for answers…and lessons that might prevent the next plane crash. When computers crash — and the economy crashes, as we’ve seen this week — there is no such fact-finding mission. There should be. And now, perhaps, there will be.

The National Transportation Safety Board, while imperfect, has a remarkable track record for getting to the bottom of transportation disasters. Air travel is remarkably safe, in no small part because of all the public hearings and final reports issued by the NTSB through the years. Yes, wounds are exposed and companies take it on the chin after a crash. That’s the price of learning. Lives are at stake.

Cybersecurity could benefit dramatically from this kind of soul-searching after major attacks.

This week’s Colonial Pipeline ransomware incident and resulting run on gas stations is just the latest incident that screams for some kind of independent agency devoted to this kind of soul searching. And I do mean “just the latest.” A quick trip down memory lane had me re-reading essay after essay calling for a “Computer Network Safety Board” or a “National Cybersecurity Safety Board.” This 2016 report that was part of a NIST Commission cites a 1990(!) publication named Computers at Risk: Safe Computing in the Information Age which called for creation of an incident database, saying “one possible model for data collection is the incident reporting system administered by the National Transportation Safety Board.”

So, this is an idea whose time has come, and perhaps it finally will be realized. In the wake of the pipeline ransomware incident, President Biden issued an executive order this week addressing cybersecurity. These things can seem like pageantry, but they don’t have to be. The list of actions in the order is non-controversial and has been in the works for a while. Things like: raising government security standards, stronger supply chain/vendor oversight, and improved information sharing. But to me, this is the most critical part of the order:

Establish a Cybersecurity Safety Review Board. The Executive Order establishes a Cybersecurity Safety Review Board, co-chaired by government and private sector leads, that may convene following a significant cyber incident to analyze what happened and make concrete recommendations for improving cybersecurity. Too often organizations repeat the mistakes of the past and do not learn lessons from significant cyber incidents. When something goes wrong, the Administration and private sector need to ask the hard questions and make the necessary improvements. This board is modeled after the National Transportation Safety Board, which is used after airplane crashes and other incidents.


This… CSRB?… faces a lot of obstacles. Paul Rosenzweig, one of the essayists who has called for such a body in the past, laid these obstacles out well in his 2018 paper for R Street. There’s (usually) no wreckage after a computer crash, so investigations will be much harder. There are tens of thousands of significant computer hacks every year; the board can’t study them all, so how will it pick which ones to examine? Victim companies are notoriously hesitant to share details after an attack, fearing those details will end up in a lawsuit. Sometimes… often… the investigation will be inconclusive. And finally: the “flaw” found by such an investigation will often be a person, not software or hardware.

I’ve been to 100 conferences where security professionals spend a week talking about fancy new software, and then at the closing address someone says, “It all comes down to the human element.” I suspect a CSRB will find *many* incidents come down to a mistake made by a person. That’s a good start. Of course, no one person can really screw up something like this alone. That person is part of a team. S/he is nearly always overworked, part of a flawed system, walking a tightrope without a net, and acting on the wrong incentives. These are the kinds of real problems that CSRB reports can finally expose.

Having covered this industry for 25 years, I am suspicious of the idea that many investigations will be inconclusive. Yes, there are occasional zero-day hacks and nation-state-sponsored attacks that might elude investigators. But many, many hacks fall into the Equifax camp — they involve a cascade of errors that should have been caught, like a horror movie where the protagonists miss a dozen or more chances to avert the disaster.

Every one of those movies should be made, and studied, by the CSRB.

Perhaps one conclusion might be limitations on workload, the kind that now protect truck drivers, train engineers and pilots. Perhaps other innovative recommendations will arise from shining such a public light on hacking incidents. Perhaps there will be so many that we’ll move past shaming cybersecurity workers to solving the real problem. If we don’t, we’re going to see a lot more gas lines that result from malicious computer code.