Digital firms embrace the creepy, out users in ads — ‘the person who streamed ‘Issues’ over 3,152 times’

A Spotify ad spotted on the DC Metro

Bob Sullivan

If you have any doubts that the companies you trust with your data are indeed watching you closely, a few new creepy ads should disabuse you of that notion.  In fact, it seems digital firms are starting to lean into the creepy.

The ad above, and the ads below, were spotted Dec. 20 on the Washington D.C. Metro. Hopefully you weren’t the person who streamed the Julia Michaels song “Issues” on the streaming service Spotify 3,152 times this year. (NOTE: That’s Julia Michaels in the ad, not a Spotify user. The same applies to the ads below).

I’ve asked Spotify if these are real users, or just made-up for-the-fun-of-it factoids; if the firm answers, I’ll let you know.  Either choice seems bad, however.  If the facts are fake, Spotify seems to be taking a casual attitude toward the privacy of users, some of whom might not think it’s funny to divulge an individual user’s preferences in this way.

And if the facts are real, the privacy implications seem obvious.  While the actual human being with all those “issues” isn’t identified, he or she might very well find out about the ad and feel violated.  Or mocked.  Or put at risk for the disclosure of a serious mental health problem. Meanwhile, everyone else might wonder, “Am I next?” or, “How far might Spotify take this joke?”

For a sense of that, see the ads below.

This creepy ad trend was first spotted by Zack Whittaker at ZDNet last week after Netflix published a Tweet that bothered some users.

“To the 53 people who’ve watched A Christmas Prince every day for the past 18 days: Who hurt you?” the Tweet read.  It prompted swift backlash.

“Why are you calling people out like that, Netflix?” wrote one in reply.

“So much for privacy,” wrote another.

Corporations mine data like this all the time — any visit to Facebook will prove that.  But it’s unusual for them to call such attention to the data mining, let alone splatter it around in advertising.

To be clear, Metro riders have no idea who the “Issues” person is from the ad; he or she is not named. Corporations often say they carefully anonymize data before they study it or use it.  Studies by privacy scholars like Carnegie Mellon’s Alessandro Acquisti have shown that seemingly anonymized data can be combined with other data sets to reveal the identities of people in them, however.  I’ll not ruminate on how someone might “out” the subjects of these Spotify ads, but you can probably ponder that on your own.
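To see why “anonymized” data offers weaker protection than it sounds, consider a minimal sketch of the kind of linkage attack Acquisti’s research describes. Everything here is invented for illustration — the records, the attributes, and the idea that such a join would be this easy in practice — but the mechanism is real: two data sets that share quasi-identifiers (like ZIP code and age) can be joined to put names back on “nameless” records.

```python
# Hypothetical "anonymized" streaming stats: no names, just coarse attributes.
anonymized = [
    {"zip": "20001", "age": 29, "top_song": "Issues", "plays": 3152},
    {"zip": "20009", "age": 41, "top_song": "Sorry", "plays": 42},
]

# A separate, public data set (think voter rolls) carrying the same attributes.
public = [
    {"name": "A. Example", "zip": "20001", "age": 29},
    {"name": "B. Example", "zip": "20009", "age": 41},
]

# Joining on the shared quasi-identifiers (zip, age) re-identifies
# each "anonymous" record.
reidentified = [
    {**person, **record}
    for record in anonymized
    for person in public
    if (person["zip"], person["age"]) == (record["zip"], record["age"])
]

for r in reidentified:
    print(r["name"], "streamed", r["top_song"], r["plays"], "times")
```

In real data sets the join keys are noisier, but the research shows surprisingly few quasi-identifiers are needed to single out most individuals.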

Either way — even if the factoids are fake — the ads seem to show Spotify has no concerns about listeners knowing they are being observed to this degree. The firm might be right. Spotify did something similar last year, too.   (“Dear person who played ‘Sorry’ 42 times on Valentine’s Day, what did you do?”)

Here are a few more ads spotted on the D.C. Metro.


Data risk in the third-party ecosystem: second annual study

Larry Ponemon

We are pleased to present the findings of Data Risk in the Third-Party Ecosystem: Second Annual Study, sponsored by Opus. The purpose of this research is to understand trends in the challenges companies face in protecting sensitive and confidential information shared with third parties and their third parties (Nth-party risk). While the findings of this study reveal that the risk of sharing sensitive and confidential information with third parties is increasing, there are governance and IT security practices that, when implemented, can significantly reduce the likelihood of a third-party data breach.

Since the study was first conducted last year, companies have made little progress in improving the overall effectiveness of their third-party risk management programs. This includes understanding how many of their third and Nth parties have access to sensitive and confidential data, confirming the existence of adequate safeguards and security policies in third parties and reviewing third-party management policies and programs to ensure risks are addressed. A serious barrier to achieving these objectives is the lack of adequate resources to manage third-party risk, according to 60 percent of participants in this research.

We define the third-party ecosystem as the many direct and indirect relationships companies have with third parties and Nth parties. These relationships are important to fulfilling business functions or operations. However, the research underscores the difficulty companies have in detecting, mitigating and minimizing risks associated with third parties that have access to their sensitive or confidential information.

The study found strong correlations between certain best practices and a reduction in the likelihood of third-party data breaches. The two most effective practices that, when deployed, reduce the likelihood of a breach are the evaluation of the security and privacy practices of third parties (46 percent likelihood of a data breach vs. 66 percent likelihood) and an inventory of all third parties with whom the organization shares information (46 percent likelihood of a data breach vs. 65 percent likelihood).

Key Report Findings:

  • Data breaches caused by third parties are on the rise

Fifty-six percent of respondents confirm that their organizations experienced a data breach caused by one of their vendors, an increase of 7 percent over the last year.

Cyber attacks against third parties that resulted in the misuse of their company’s sensitive or confidential information also increased significantly from 34 percent to 42 percent of respondents.

  • The effectiveness of third party governance programs remains low

 Less than half of all respondents say managing outsourced relationship risks is a priority in their organization.

Only 17 percent of respondents rate their companies’ effectiveness in mitigating third party risk as highly effective.

Sixty percent of respondents feel unprepared to check or verify their third parties, down from 66 percent in 2016.

  • Accountability and board level involvement increased slightly

Accountability for the third-party risk management program is dispersed throughout the organization. However, 5 percent more respondents now have an owner of the third-party program compared to last year.

Forty-two percent of respondents strongly agree or agree that their companies’ board of directors requires assurances that third-party risk is being assessed, managed and monitored.

However, only one-third of all respondents say their companies regularly report to the boards of directors on the effectiveness of the third-party management program and potential risks to the organization.

  • Companies lack visibility into third party and Nth party relationships

 The average number of third parties with access to confidential or sensitive information has increased by 25 percent over last year from 378 to 471 third parties.

More than half of all respondents do not keep a comprehensive inventory of all third parties with whom they share sensitive information.

Visibility gets worse with Nth-party relationships: only 18 percent of respondents say their companies know how their information is being accessed or processed by Nth parties with whom they have no direct relationship.

Thirteen percent of all respondents could not determine if they had experienced a third-party data breach.

  • Today’s programs are insufficient to manage third party risks

Fifty-seven percent of respondents say they are not able to determine if vendors’ safeguards and security policies are sufficient to prevent a data breach.

Less than half of all respondents say that their company evaluates the security and privacy practices of all vendors before starting a business relationship that requires the sharing of sensitive or confidential information.

If they do conduct an evaluation, it is mostly to acquire signatures on contracts that legally obligate the third party to adhere to security and privacy practices.


To reduce the likelihood of a third-party data breach, the study recommends the following practices:

  1. Evaluation of the security and privacy practices of all third parties. In addition to contractual agreements, conduct audits and assessments to evaluate the security and privacy practices of third parties.
  2. Inventory of all third parties with whom you share information. Create an inventory of third parties who have access to confidential information and how many of these third parties are sharing this data with one or more of their contractors.
  3. Frequent review of third-party management policies and programs. The third-party risk management committee should create a formal process for, and regularly review, the security and privacy practices of their third and Nth parties to ensure they address new and emerging threats, such as unsecured Internet of Things devices.
  4. Formation of a third-party risk management committee. Create a cross-functional team to regularly review and update third-party management policies and programs.
  5. Visibility into third or Nth parties with whom you do not have a direct relationship. Increase visibility into the security practices of all parties with access to company sensitive information — even subcontractors.
  6. Accountability for proper handling of the third-party risk management program. Centralize and assign accountability for the correct handling of your company’s third-party risk management program and ensure that appropriate privacy and security language is included in all vendor contracts.
  7. Third-party notification when data is shared with Nth parties. Companies should include in their vendor contract requirements that third parties provide information about the Nth parties with whom they will be sharing sensitive information.
  8. Oversight by the board of directors. Involve senior leadership and boards of directors in third-party risk management programs. This includes regular reports on the effectiveness of these programs based on the assessment, management and monitoring of third-party security practices and policies. Such high-level attention to third-party risk may increase the budget available to address these threats to sensitive and confidential information.

To read the entire study, click here.

‘We don’t need Net Neutrality, we have a free market.’ Why that’s wrong

Bob Sullivan

Standard Oil committed many sins on the way to infamy, and Teddy Roosevelt’s s&*t list, but a big one was “vertical integration.” Rockefeller’s people owned oil refineries, and trucks that delivered gasoline, and the gas stations that sold it, and so on. It owned businesses up and down the supply chain, right down to the person who took the money from the consumer. A clever business model, that.  And believe it or not, it’s not necessarily illegal.  Companies purchase firms in their supply chain all the time.  IKEA famously purchased acres of forests in Romania, for example. That’s just smart; unless it leads to abusive monopoly power.  If IKEA were the only place to buy furniture, its purchase of forests would raise alarm bells.   Were IKEA to buy all the forests in Europe, well, now I’d hope someone would step in and stop them.  Better yet, I’d hope we’d have a rule to stop that kind of thing before it starts.  Or if we had one, I’d hope we wouldn’t rescind it because IKEA asked nicely.

That’s not precisely what happened today when FCC chairman Ajit Pai announced he would dump Net Neutrality, but it’s a pretty decent approximation.  If Net Neutrality goes down in flames, you better believe TV prices are going up. I’d bet my over-the-top SlingTV subscription on that. Let me explain.

Net neutrality sounds like a complicated concept. (So does vertical integration.)  It’s not.  The rule simply stops an Internet service provider from favoring some 1s and 0s over others.  It prevents some content providers from being charged extra to be on the fast lane, which in turn obviously means other companies would be relegated to the slow lane.

“That’s too much government interference,” neutrality opponents have said.  Then comes the Economics 101 argument that free markets, rather than the government, should decide such things.  If only these folks would take Econ 102, when monopolies come up.

See, there is no free market in Internet service.  How many options do you have for broadband at your house? If you have three, you’re lucky. Many Americans — 50 million!! — have no choice at all for internet provider; they are forced to pay the exorbitant price their single carrier requires.  So, immediately, stop with the free market cliche.  In a situation where choice is not naturally occurring, it’s just and necessary for government to step in.

Let’s add to this discussion the fact that broadband Internet is a necessity today. A quick quiz: Does Internet service have more in common with electricity, or with a subscription to a wine club? A: Internet service is a utility.

I’ll bet zero percent of those who’d argue Internet is somehow optional live without Internet at their homes. I do wish Ajit Pai had to live without home service from now until Dec. 12, when the final FCC vote will be held. Let him argue then that Internet service is not a utility.

Now, back to vertical integration, and your soon-to-be higher TV prices. Comcast is one of America’s largest Internet service providers. It also owns NBC.  That means it owns both the pipe that goes into your home, and some of the stuff that goes through that pipe.  That’s vertical integration.  After Dec. 12, Comcast will be within its rights to make NBC content look better than competitors’ services when viewed over its Internet service.  Maybe Saturday Night Live arrives in brilliant HD, but that Netflix movie you are trying to watch instead keeps pixelating and hiccuping.*

Maybe that wouldn’t be so bad if you had a dozen choices for Internet service, and you could easily say, “Screw Comcast!  I’m switching to Bob’s Internet, where Netflix always looks great.”  You already know what I’m going to say next. This magical world of ISP competition does not exist. Furthermore, as anyone who tried to intelligently purchase cell phone service in the past 15 years knows, there is no way to know how reliable your bandwidth will be when you switch services.  Even if there were options, would they really be better?  Throw on top all those anti-competitive habits like early termination fees and equipment contracts and you have a really broken market on your hands.  In that environment, competition doesn’t solve all ills.

The fear you usually hear from the mega-companies involved in this fight is that without Net Neutrality, Netflix will end up being extorted by ISPs, forced to pay extra to be in their fast lane.  Well, I’m sympathetic to ISPs on this one. At one point, Netflix and YouTube accounted for half of all Internet traffic in the evening.  Should those firms have to pay something to help build out the pipes they are using so much? Yes, I can see this argument.  I don’t care much; let the billion-dollar corporations bicker over that. They can hold their own; they are equal adversaries in a big marketplace dispute.

Here’s what I’m worried about.  Pay TV companies are in big trouble.  They are losing subscribers all the time — so-called cord-cutters.  Some 2 percent of pay TV subscribers drop cable or satellite every year. That doesn’t sound like the end of the world. There are still almost 100 million households in America who do pay. The real problem is the reality of “cord-nevers” — young people who’ve never paid for TV in their lives, and never will.  That group includes some 35 million young people.  Many of them just watch stuff on Amazon Prime, or Hulu, or Netflix, or Major League Baseball Advanced Media instead.  Or, they get basic TV from over-the-top services like SlingTV instead. That costs $25 a month, and it’s great. The presence of these alternatives has also forced TV providers like Verizon to get creative, and offer “skinny” bundles at much lower costs.   Ain’t competition great?

Even with all these great new options, cable user ARPU (average revenue per user) keeps setting records.  Comcast made about $150 per subscriber last year. But that revenue is under serious threat. In 2009, only 10 percent of Americans paid for a streaming service. Today, that number is 49 percent, and growing. Many of these over-the-top users live just fine without CNN, or NBC, or ESPN.

How can pay TV companies stop the bleeding?  Well, it’s easy.  Make the over-the-top services under-the-weather.  Make your service better than something you can buy from a competitor. If you own the pipe, and you can discriminate over traffic, you can do that. You can make your content look better than theirs.  You can drive out all the other gas stations — er, TV stations — to the point where your ARPU is no longer under pressure.

*Comcast, naturally, says it would never do this.  Perhaps it won’t.  Understand, however, that Comcast is far more responsible to its shareholders than its promises.

Ajit Pai says clear disclosures of fast lane / slow lane arrangements are all that’s needed to Make the Internet Great Again.  That’s hooey. What good is a notice saying your favorite shows won’t work so well on service A if you have no service B?

Here’s what would work: guaranteed minimum service standards that are real, change with the times, and are expediently enforceable. If the Net Neutrality rollback came with a real way to prove that there would be no slow lane, I’d listen.  Hey, I said I was sympathetic to the view that Netflix should pay a fair share for hogging the Internet.  Without such a real guarantee, however, everything you are hearing about Net Neutrality is a farce.  It’s an abdication of the responsibility to govern. It’s picking winners under the guise of “light-touch” regulation.  And, it’s going to hurt you.

We’ll get back to this, I promise.  The temptation ISPs will have to abuse their monopoly power will simply be too great. In fact, you’d almost believe these companies would be derelict to not exploit their newfound market power as soon as they can. That’s what companies are supposed to do.  Grow as big and powerful as possible. And governments are supposed to act as a counterbalance to that urge.

Without Net Neutrality in place, there is only one other option.  ISPs need to be broken up. There simply is no way we can allow Rockefeller to own the gasoline trucks and the gas stations… I mean, we can’t have single firms owning Internet pipes and the content that travels along them.  We can deal with this now, or deal with it later, when the problems are far more endemic, and a generation of innovation has suffered. I fear we are about to choose the latter, dumber path.

How data breaches affect reputation and share value

Larry Ponemon

How Data Breaches Affect Reputation & Share Value: A Study of U.S. Marketers, IT Practitioners and Consumers, conducted by Ponemon Institute and sponsored by Centrify, examines from the perspective of IT practitioners and marketers how a company’s reputation and share value can be affected by a data breach.  As part of this research, we surveyed consumers to learn their expectations about steps companies should take to safeguard their personal information and prevent data loss.

This study is unique because it presents the views of three diverse groups who have in common the ability to influence share value and reputation. Ponemon Institute surveyed 448 individuals in IT operations and information security (hereafter referred to as IT practitioners) and 334 senior level marketers and corporate communication professionals (hereafter referred to as CMOs).

Forty-three percent of IT practitioner respondents and 31 percent of CMOs in this study say their organization had a data breach involving the loss or theft of more than 1,000 records containing sensitive or confidential customer or business information in the past two years.  We also surveyed 549 consumers. Sixty-two percent of these respondents say in the past two years they have been notified by a company or government agency that their personal information was lost or stolen as a result of one or more data breaches.

The results of this study show how data loss affects shareholder value and customer loyalty.  To protect brand and reputation, it is critical the C-suite and boards of directors address consumers’ expectations about how their personal information is used and secured.  On a positive note, the study reveals the majority of both IT practitioners and CMOs believe their companies’ senior management understands the importance of brand management.

The effect of data breaches on stock price and customer losses

For the economic analysis of the stock price, we selected 113 publicly traded benchmarked companies that experienced a data breach involving the loss of customer or consumer data. We created a portfolio composed of the stock prices of these companies. We tracked the index value for 30 days prior to the announcement of the data breach and 90 days following the data breach.
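The portfolio method described above can be sketched in a few lines. This is a simplified illustration with invented prices, not the study’s actual model: each breached company’s share price is rebased to 100 on the day the breach is announced, and the portfolio index is the day-by-day average of those rebased series.

```python
def rebase(prices, event_day):
    """Rebase a daily price series so the announcement-day price equals 100."""
    base = prices[event_day]
    return [100.0 * p / base for p in prices]

def portfolio_index(companies, event_day):
    """Average the rebased series across all companies, day by day."""
    rebased = [rebase(prices, event_day) for prices in companies]
    n_days = len(rebased[0])
    return [sum(series[d] for series in rebased) / len(rebased) for d in range(n_days)]

# Two hypothetical breached companies tracked for 5 days;
# the breach is announced on day 2.
prices_a = [50.0, 49.0, 48.0, 47.0, 47.5]
prices_b = [20.0, 20.2, 19.0, 18.5, 19.2]

index = portfolio_index([prices_a, prices_b], event_day=2)
# index[2] is 100.0 by construction; later values show the post-breach drift.
```

The study’s actual window (30 days before, 90 days after) works the same way, just with longer series and 113 companies in the portfolio.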

The key takeaway from the analysis is that companies that achieve a strong security posture through investments in people, process and technologies are less likely to see a decline in their stock prices, especially over the long term. Because of their strong security posture, these companies are better able to quickly respond to the data breach. Following are conclusions from this analysis.

  • Following the data breach, companies’ share price declined soon after the incident was disclosed.
  • Companies that self-reported their security posture as superior and quickly responded to the breach event recovered their stock value after an average of 7 days.
  • In contrast, companies that had a poor security posture at the time of the data breach and did not respond quickly to the incident experienced a stock price decline that on average lasted more than 90 days.
  • The difference in the loss of share price between companies with a low security posture and a high security posture averaged 4 percent.
  • Organizations with a poor security posture were more likely to lose customers. In contrast, a strong security posture supports customer loyalty and trust.
  • The 113 companies in our sample that experienced a low customer loss rate (less than 2 percent) had an average revenue loss of $2.67 million. Organizations that lost more than 5 percent of their customers experienced an average revenue loss of $3.94 million.

 Other key takeaways

The loss of stock price is not the top concern of CMOs and IT practitioners. Reputation loss due to a data breach is the biggest concern to both IT practitioners and CMOs. Only 20 percent of CMOs and 5 percent of IT practitioners say they would be concerned about a decline in their companies’ stock price. In fact, in organizations that had a data breach, only 5 percent of CMOs and 6 percent of IT professionals say a negative consequence of the breach was a decline in their companies’ stock price.

Of those consumers affected by one or more breaches, 65 percent say they lost trust in the breached organization, and 31 percent say they discontinued their relationship with the company that had the data breach.

IT practitioners and CMOs both believe a data breach is a top threat to their companies’ reputation and brand value. A data breach is considered by participants in this research to be a top threat to their companies’ reputation and brand value. On a positive note, the majority of IT practitioners (55 percent) and 58 percent of CMOs do believe their companies’ senior-level executives take brand protection seriously.

More CMOs have confidence than IT practitioners in the resilience of their organizations to recover from a data breach involving high value assets. Only 44 percent of IT practitioners believe their organizations are highly resilient to the consequences of a data breach involving high value assets. However, 63 percent of CMOs are confident their company would be resilient to a data breach that results in the loss or theft of high value assets.

More CMOs believe the biggest cost of a security incident is the loss of brand value. Seventy-one percent of CMOs in this study believe the biggest cost of a security incident is the loss of reputation and brand value. In contrast, less than half of IT practitioners (49 percent) see brand diminishment as the biggest cost of a security incident.  

Following a data breach, the IT function comes under greater scrutiny. IT practitioners in organizations that had a data breach (43 percent of the IT sample) consider the most negative consequences of a breach to be greater scrutiny of the capabilities of the IT function (56 percent), significant financial harm (44 percent) and a loss of productivity (40 percent).

IT practitioners do not believe that brand protection is their responsibility. Sixty-six percent of IT respondents do not believe protecting their company’s brand is their responsibility. However, 50 percent of these respondents do believe a material cybersecurity incident or data breach would diminish the brand value of their company.

CMOs allocate more money in their budgets to brand protection than IT does. Thirty-seven percent of CMOs surveyed say a portion of their marketing and communications budget is allocated to brand preservation, and 65 percent of these respondents say their department collaborates with other functions in maintaining the brand. In contrast, only 21 percent of IT practitioners say they allocate a portion of the IT security budget to brand preservation, and only 19 percent collaborate with other functions on brand protection. This response is understandable because so many IT practitioners do not believe brand protection is the IT function’s responsibility.

Consumers’ expectation for the security of personal information they share with companies is much higher than CMOs’ and IT practitioners’ expectations. Eighty percent of consumers believe organizations have an obligation to take reasonable steps to secure their personal information. However, only 49 percent of CMOs and 48 percent of IT practitioners agree. The research reveals differences in perceptions between IT practitioners and CMOs on issues regarding reputation and brand management practices. However, more serious differences are the gaps between consumers’ expectations and the perceptions of IT practitioners and CMOs about how their personal information should be safeguarded.

CMOs and IT practitioners are less likely to believe their organizations have a responsibility to control access to consumers’ information. While 71 percent of consumers surveyed believe organizations have an obligation to control access to their information, 47 percent of CMOs and 46 percent of IT security practitioners believe this is an obligation.

Consumer trust in certain industries may be misplaced. Eighty percent of consumers say they trust healthcare providers to preserve their privacy and to protect personal information. In contrast, only 26 percent of consumers trust credit card companies. Yet, healthcare organizations account for 34 percent of all data breaches while banking, credit and financial organizations account for only 4.8 percent. Banking, credit and financial industries also spend two-to-three times more on cybersecurity than healthcare organizations.

IT practitioners and CMOs share the same concern about the loss of reputation as the biggest impact after a breach, but after that, the concerns are specific to their function. For CMOs, the impact to reputation is followed by a concern over loss of customers and decline in revenue (76 percent, 55 percent and 46 percent of respondents, respectively). For IT practitioners, the two biggest concerns are the loss of their jobs (56 percent) and that the time to recover decreases productivity (45 percent).

In Congress, Facebook, Twitter take more blame for Russian election meddling, but there’s more coming

Bob Sullivan

We’ve come a long way since Mark Zuckerberg famously said that it was “crazy” to think fake news on Facebook influenced the 2016 election.  How far? Not long ago, Facebook said it had identified only a few thousand suspicious accounts on its service that might have been linked to Russia.  Today, during Congressional testimony, the firm said 126 million people may have seen Russian propaganda on the service.

During a mostly civil hearing before a Senate intelligence committee on Tuesday, Facebook, Twitter and Google used the strongest language yet admitting their services were abused during the election, and vowed to work against further attacks by foreign governments.  The obstacles they face are enormous, however, ranging from the ease of obscuring the origins of such attacks to the problem of “false positives” — tighter controls on content will inevitably infringe on free speech.

Not long ago, Internet firms were content to hide behind their legal designations as agnostic platforms, as opposed to publishers that could be held responsible for content they spread.  The time for that has passed.

“All three companies here…no longer think whatever goes across your platform is not your concern, right?” said Sen. Sheldon Whitehouse (D-R.I.).

Facebook’s general counsel Colin Stretch called the Russian disinformation campaign “reprehensible.” Twitter acting general counsel Sean Edgett said the firm was acting “to ensure that experience of 2016 never happens again.”

Sen. Chris Coons (D-Del.) was unimpressed by the firms’ efforts so far, however.

“Why has it taken Facebook 11 months (to offer this information) when former President Obama cautioned your CEO 9 days after the election?” he asked.

During the hearing, Stretch explained how paid Russian ads were used to drive users toward Facebook pages, which were then used to spread propaganda through the service’s traditional network effects — they were shared and re-shared by users. That’s how a few thousand paid ads could ultimately reach potentially millions of users.

At one point, Coons held up one example — a Facebook page called Heart of Texas that ultimately collected about 225,000 followers.  Ads for the page were purchased in rubles. One Heart of Texas ad said Hillary Clinton was despised by an overwhelming number of veterans, and urged secession if she won the election.

“That ad has no place on Facebook. It makes me angry. It makes everyone on Facebook angry,” Stretch said.

But Sen. Al Franken (D-Minn.) challenged Stretch about why the firm didn’t spot the Russian influence problem sooner.

“These are American political ads (purchased) with Russian money…how could you not connect the dots?” he said. “People are buying ads on your platform with Rubles. You put billions of data points together all the time….You can’t put together rubles with political ads and go, ‘Hmmm. Those two data points spell out something bad.’ ”

“Senator, that’s a signal we should have been alert to and in hindsight, it’s one we missed,” Stretch said.

Twitter was targeted for similar criticism by Sen. Richard Blumenthal (D-Conn.). He held up an ad saying citizens could vote from home, allegedly shown to likely Hillary Clinton voters.  Twitter said the ads were ultimately removed as illegal voter suppression.

“But they kept reappearing,” Blumenthal complained.

Most of the fake Russian ads and posts — something Facebook calls “coordinated inauthentic activity” — were issue-based, the firms said. They didn’t necessarily support a candidate, but instead sought to cause fights among users.  In Internet lingo, it was a sophisticated troll campaign.

“Russia does not have loyalty to a political party. Their goal is to divide us,” Sen. Chuck Grassley (R-Iowa) said.

Much of the hearing focused on the potential for abuse that comes with social media targeting technology, which allows advertisers to be very selective about who sees the ads they purchase.  The tools are tailor-made for micro-targeting propaganda. Blumenthal questioned whether a Russian group could have made micro-targeting decisions without help from political consultants in the U.S., hinting the Russians had help from U.S. agents.

The most chilling part of the hearing occurred after Facebook, Google, and Twitter left, however. Clint Watts, an analyst with the Foreign Policy Research Institute, explained that no single firm could “fully comprehend” the influence that Russians had in 2016 — because Russian propagandists used a holistic plan of attack. A single post on the 4Chan message board would be discussed on Russian-backed Twitter accounts, then spread far and wide on Facebook, then land in news stories on Google, and so on. He called Russia’s 2016 disinformation campaign “the most successful in history,” and said it would certainly be copied.

“The Kremlin playbook will be adopted by others,” he said. Other foreign governments, dark political candidates, and even corporations would copy Russian techniques unless Congress managed to get control of the issue now, he warned.

Cybercrime costs up 23 percent in just two years; firms investing in wrong technologies

Larry Ponemon

Over the last two years, the accelerating cost of cyber crime means that it is now 23 percent more than last year and is costing organizations, on average, US$11.7 million. Whether managing incidents themselves or spending to recover from the disruption to the business and customers, organizations are investing on an unprecedented scale—but current spending priorities show that much of this is misdirected toward security capabilities that fail to deliver the greatest efficiency and effectiveness.

A better understanding of the cost of cyber crime could help executives bridge the gap between their own defenses and the escalating creativity—and numbers— of threat actors. Alongside the increased cost of cyber crime—which runs into an average of more than US$17 million for organizations in industries like Financial Services and Utilities and Energy—attackers are getting smarter. Criminals are evolving new business models, such as ransomware-as-a-service, which mean that attackers are finding it easier to scale cyber crime globally.

With cyber attacks on the rise, the number of successful breaches per company each year has risen more than 27 percent, from an average of 102 to 130. Ransomware attacks alone have doubled in frequency, from 13 percent to 27 percent, with incidents like WannaCry and Petya affecting thousands of targets and disrupting public services and large corporations across the world. One of the most significant data breaches in recent years has been the successful theft of 143 million customer records from Equifax—a consumer credit reporting agency—a cyber crime with devastating consequences due to the type of personally identifiable information stolen and its knock-on effect on the credit markets. Information theft of this type remains the most expensive consequence of a cyber crime. Among the organizations we studied, information loss represents the largest cost component, with a rise from 35 percent in 2015 to 43 percent in 2017. It is this threat landscape that demands organizations reexamine their investment priorities to keep pace with these more sophisticated and highly motivated attacks.

To better understand the effectiveness of investment decisions, we analyzed nine security technologies across two dimensions: the percentage spending level on each and its value in terms of cost savings to the business. The findings illustrate that many organizations may be spending too much on the wrong technologies. Five of the nine security technologies had a negative value gap, where the percentage spending level is higher than the relative value to the business. Of the remaining four technologies, three had a significant positive value gap and one was in balance. So, while maintaining the status quo on advanced identity and access governance, the opportunity exists to evaluate potential over-spend in areas which have a negative value gap and rebalance these funds by investing in the breakthrough innovations which deliver positive value.
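The value-gap comparison described above is easy to express concretely. The sketch below is purely illustrative: the technology names and percentages are made-up placeholders, not figures from the study.

```python
# Value gap = a technology's share of delivered cost savings minus its share
# of security spending. A negative gap suggests over-spend; a positive gap
# suggests under-investment. All numbers here are hypothetical placeholders.
spend_pct = {"perimeter controls": 20, "security intelligence": 15, "cyber analytics": 8}
value_pct = {"perimeter controls": 16, "security intelligence": 21, "cyber analytics": 12}

def value_gaps(spend, value):
    """Return each technology's value gap (value share minus spend share)."""
    return {tech: value[tech] - spend[tech] for tech in spend}

gaps = value_gaps(spend_pct, value_pct)
overspent = [tech for tech, gap in gaps.items() if gap < 0]  # rebalancing candidates
```

Under these invented numbers, perimeter controls carry a gap of minus 4 while the other two areas are positive, mirroring the rebalancing argument the report makes.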

Following on from the first Cost of Cyber Crime report launched in the United States eight years ago, this study, undertaken by the Ponemon Institute and jointly developed by Accenture, evaluated the responses from 2,182 interviews at 254 companies in seven countries—Australia, France, Germany, Italy, Japan, the United Kingdom and the United States. We aimed to quantify the economic impact of cyber attacks and observe cost trends over time to offer some practical guidance on how organizations can stay ahead of growing cyber threats.


Security intelligence systems (67 percent) and advanced identity and access governance (63 percent) are the two most widely deployed enabling security technologies across the enterprise. They also deliver the highest positive value gap, with organizational cost savings of US$2.8 million and US$2.4 million respectively. As the threat landscape constantly evolves, these investments should be monitored closely so that spend stays at an appropriate level and maintains effective outcomes. Aside from systems and governance, other investments show a lack of balance. Of the nine security technologies evaluated, the highest percentage spend was on advanced perimeter controls. Yet the cost savings associated with technologies in this area ranked only fifth overall, with a negative value gap of minus 4. Clearly, an opportunity exists here to assess spending levels and potentially reallocate investments to higher-value security technologies.

Spending on governance, risk and compliance (GRC) technologies is not a fast-track to increased security. Enterprise-wide deployment of GRC technology and automated policy management showed the lowest effectiveness in reducing cyber crime costs (9 percent and 7 percent respectively) out of nine enabling security technologies. So, while compliance technology is important, organizations must spend to a level that is appropriate to achieve the required capability and effectiveness, enabling them to free up funds for breakthrough innovations.

Innovations are generating the highest returns on investment, yet investment in them is low. For example, the two enabling security technology areas identified as “extensive use of cyber analytics and User Behavior Analytics (UBA)” and “automation, orchestration and machine learning” were the lowest ranked technologies for enterprise-wide deployment (32 percent and 28 percent respectively), and yet they provide the third and fourth highest cost savings among security technologies. By shifting investments from less rewarding technologies into these breakthrough innovation areas, organizations could improve the effectiveness of their security programs.

The foundation of a strong and effective security program is to identify and “harden” the higher-value assets. These are the “crown jewels” of a business— the assets most critical to operations, subject to the most stringent regulatory penalties, and the source of important trade secrets and market differentiation. Hardening these assets makes it as difficult and costly as possible for adversaries to achieve their goals, and limits the damage they can cause if they do obtain access.

By taking the following three steps, organizations can further improve the effectiveness of their cybersecurity efforts to fend off and reduce the impact of cyber crime:

  • Invest in the “brilliant basics,” such as security intelligence and advanced access management, while recognizing the need to innovate to stay ahead of the hackers.
  • Do not rely on compliance alone to enhance your security profile; undertake extreme pressure testing to identify vulnerabilities more rigorously than even the most highly motivated attacker.
  • Balance spend on new technologies, specifically analytics and artificial intelligence, to enhance program effectiveness and scale value.

Organizations need to recognize that spending alone does not always equate to value. Beyond prevention and remediation, if security fails, companies face unexpected costs from not being able to run their businesses efficiently to compete in the digital economy. Knowing which assets must be protected, and what the consequences will be for the business if protection fails, requires an intelligent security strategy that builds resilience from the inside out and an industry-specific strategy that protects the entire value chain. As this research shows, making wise security investments can help to make a difference.

To learn more about the study, visit


Q: Why would anyone at Equifax have access to 143 million SSNs? A: Greed

Click for Beyond Trust Five Deadly Sins white paper.

Bob Sullivan

There’s lots of juicy details about the Equifax hack in a story published today by Bloomberg. It makes the strongest case yet that the massive heist of American SSNs was probably pulled off by a nation-state. That’s likely true about the huge theft of federal employee data back in 2015, also, so it’s not a surprise.

One thing has been gnawing at me from the beginning about Equifax, however, and it should be gnawing at you, too: Why would anyone, anywhere, have access to 143 million Social Security numbers?

What business use would there ever be at a place like Equifax to access a database like that, or to access various data files and put them together?

The answer is: There isn’t one.

Equifax was never going to put money into each of our Social Security “accounts.”  It should never have even contemplated something like a mass mailing to every American that required our SSNs.  CEO Richard Smith was never going home at night and reading a “book” of American personal identification just to understand his business from a holistic point of view.

Nope. I can’t think of a reason. Well, except laziness and arrogance.

Bloomberg’s story provides food for thought on this count. It cites a LinkedIn post by Steve VanWieren, an executive who left Equifax in January 2012.

“It bothered me how much access just about any employee had to the personally identifiable attributes. I would see printed credit files sitting near shredders, and I would hear people speaking about specific cases, speaking aloud consumer’s personally identifiable information,” the post reads.  VanWieren was describing incidents at least five years old, as he left the firm in 2012. Still, they clearly paint the same picture I am painting here.

Too many privileges!

One basic premise of modern security is limiting employees to only those resources they need to do their jobs.  And when those jobs are over, the access must be cut off. For example, desktop support doesn’t need access to human resource files, unless there’s a specific problem — and when there is, access to salary data, etc., should be as limited and temporary as possible.  Access permitted on a need-to-know basis, and no more.
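That need-to-know model (grant only the minimum baseline access, and time-box any elevation so it cuts itself off) can be sketched in a few lines. This is a toy illustration; the user names, resource names, and functions are invented for the example, not any real access-control product's API.

```python
import time

# Toy least-privilege check: employees hold only baseline grants for their
# job; any extra access (e.g. desktop support viewing HR salary data for a
# specific problem) must be an explicit, temporary elevation that expires.
grants = {}  # (user, resource) -> expiry timestamp, or None for a baseline grant

def grant(user, resource, ttl_seconds=None):
    """Record a grant; with ttl_seconds it expires automatically."""
    expiry = time.time() + ttl_seconds if ttl_seconds else None
    grants[(user, resource)] = expiry

def can_access(user, resource):
    """Default deny: access exists only while an unexpired grant does."""
    if (user, resource) not in grants:
        return False
    expiry = grants[(user, resource)]
    return expiry is None or time.time() < expiry

grant("alice", "helpdesk-tickets")                   # baseline job duty
grant("alice", "hr-salary-data", ttl_seconds=3600)   # temporary, one incident only
```

The design choice worth noting is the default-deny check: anything not explicitly granted is refused, rather than anything not explicitly forbidden being allowed.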

Managing privileges is annoying, but it works.  Morey Haber, vice president of technology at security firm Beyond Trust, recently told me that fully 94 percent of vulnerabilities require administrative rights on targeted machines.  So, no admin rights, no problem.

Back to Equifax.  Who ever created an architecture that would allow anyone to peek at, let alone remove, 143 million SSNs? What account had the rights to do that? Why?

BeyondTrust recently tied up a bunch of security principles in a tidy narrative it called “Five Deadly Sins that Increase the Risks of a Data Breach.”  It includes Envy, Pride, Ignorance, and Apathy.  But I suspect the real blame for the Equifax hack is the first sin: Greed.


Greedy people, in the security sense, need access to as much data and resources as they can get. And when they get it, they don’t want to give it up.  In the tech world, privileges are like the old workplace concept of “turf.”  Heaven help someone trying to get a worker to give up tech turf.

I asked Haber about the role of greed in the Equifax case. He speculated that one could imagine a marketing use for pulling together that massive Equifax database, but even then, that data should be obfuscated immediately.

“Obviously, (someone) had to have full access to all that data,” he said. “There was no reason to.”

And now, a hacker — perhaps even a nation-state — has access to all that data. Forever.

VanWieren’s comments pretty much make the case here.  Clearly, a wide selection of employees had access to far more than “need-to-know” data. It was standard operating procedure.

Your workplace is probably like this, too.  Greed is common, but despite what you may have heard in the movies, it’s not good. Why is that? In part, Haber said, it’s because employees react very emotionally to having their network privileges restricted, and even worse to having them revoked.

“(It can be) like taking away someone’s guns,” he said.  Tech workers are used to having admin rights and “Doing what I want to do.”

The time for accommodating such greed is over, he warned.

“We live in a different set of times now,” he said. “We have to rethink how to be safe.”

Declining confidence in IT, but still reasons for optimism

Larry Ponemon

Public sector organizations are feeling the pains of digital transformation. Faced with modernization, data center upgrades and continuous cloud-first initiatives, this transformation of the IT environment is making it a challenge to deliver services, comply with service level agreements (SLAs), meet citizens’ expectations and achieve organizational missions.

The evidence is clear from a research study conducted by the Ponemon Institute and sponsored by Splunk of 736 decision makers in federal and Department of Defense IT operations.

Challenges & Trends in Federal & Department of Defense: the United States reveals that digital transformation is well underway with budgets shifting from traditional on premise investments to more cloud and agile development paradigms.

This shift in the IT environment, while being embraced, has led to an overall loss of confidence in federal and DoD operations and is evidenced in respondents’ lack of confidence in their organizations’ ability to accomplish the following:

  • Have the people with the right skills to “get the job done”
  • Ensure performance and availability of systems to meet SLAs consistently
  • Manage data center upgrades
  • Perform IT operations efficiently
  • Migrate workloads and applications to the cloud

Research findings explain the reasons for the loss of confidence. These include a skills gap among existing resources, according to 71 percent of respondents. Respondents also cite silos of IT systems and technologies and an inability to integrate them (71 percent of respondents) and complexity and diversity of IT systems and technology (67 percent of respondents).

Even with monitoring and data analytics in place, these tools are disconnected from each other and most respondents believe they are ineffective at helping quickly pinpoint issues and determine root cause (78 percent of respondents) because they do not offer end-to-end visibility.

Respondents also say that a lack of collaboration across teams and not enough data fidelity and
context are challenges to timely issue resolutions. Such challenges also affect organizations’ ability to quickly and efficiently respond to system outages and interruptions.

On average, it takes 42 hours and 12 staff members to restore the IT system to operational status following a system outage or interruption.

Despite the loss of confidence, respondents do see a silver lining in the transformation of their IT operations. According to respondents, the move to DevOps (development and operations) is making it easier to deliver quality services on time and within budget. To support the transformation, organizations are shifting spending from on premise to cloud computing, DevOps and new technologies.

Respondents also recognize that machine learning capabilities (27 percent), better network visibility across the entire organization (26 percent) and better enforcement of current policies and regulations (26 percent) can improve their organizations’ IT operations. Respondents are also increasingly aware of the types of data available and how such data can be used across
operational silos to reduce risks to their organizations.

Following are key findings from this research:

Confidence in current IT operations is lower than it was 12 months ago. The primary reasons for this change are not having the staff with the right skills “to get the job done”, the inability to ensure the performance and availability of systems to meet SLAs, and the inability to manage data center upgrades.

The confidence gap seems to stem from a skills gap, silos and complexity. Respondents believe that the greatest difficulties in carrying out their duties arise from a skills gap among existing resources (71 percent of respondents), silos of IT systems and technologies and a lack of ability to integrate them (71 percent of respondents) and complexity and diversity of IT systems and technology (67 percent of respondents).

Machine learning capabilities, visibility and enforcement of policies are seen as critical to improving IT operations. Out of a list of five options of the most effective way to strengthen IT operations, 27 percent of respondents believe that machine learning capabilities would be most effective. Better network visibility across the entire organization and better enforcement of current policies and regulations would strengthen IT operations (both 26 percent of respondents).

Spending on cloud operations and DevOps will grow significantly while on-premise spending dwindles. Almost one-half of respondents (49 percent) say that spending on cloud operations will grow over the next year, 48 percent say the same of DevOps, while only 31 percent say that on-premise spending will do the same.

Alerts still remain too numerous and erroneous, and current event monitoring tools are not solving the problem. More than half of respondents say they still receive too many alerts (52 percent) and that those alerts generate too many false positives (55 percent). Seventy-eight percent of respondents are unsure or do not think that their current crop of analytics and monitoring tools are helping them pinpoint problems and determine root causes because they lack end-to-end visibility.

The challenges and risks described in this research result in inefficient response to system outages and interruptions. According to 65 percent of respondents, their organizations lack a consistent and formal IT outage response process. On average, it takes 42 hours and 12 staff members to restore the IT system to operational status following a system outage and interruption.

Will IT and security converge? More than two-thirds of respondents (73 percent) do not believe or are unsure if their security and IT operations will converge in the future.

Is it possible to use the same data sets across the organization to solve problems? Sixty-four percent of respondents are unsure or don’t think the data sets they are using can solve multiple challenges such as IT troubleshooting, service monitoring, security and business/mission analytics. Similarly, 66 percent of respondents are doubtful the same data can be used throughout the organization.

For the full study, click here: 

‘Nothing … in the way except motivation’ — Report claims hackers have penetrated deep into energy sector networks

Bob Sullivan

It started off as a fake invitation to a New Year’s Eve party, emailed to energy sector employees. It ended with hackers taking screen shots of power grid control computer screens. Well, we can only hope it ended there.

Symantec Corporation released an alarming report this week claiming that a group of power grid hackers it calls “Dragonfly 2.0” have made their most successful raid into critical infrastructure computers in the U.S. and around the world.

“The energy sector in Europe and North America is being targeted by a new wave of cyber attacks that could provide attackers with the means to severely disrupt affected operations,” Symantec wrote in its report.

In a chilling statement to Wired, Symantec’s Eric Chien said the incident means the intruders are, at the moment, capable of causing disruptions and power outages as they wish. They are just waiting for the right moment.

“There’s a difference between being a step away from conducting sabotage and actually being in a position to conduct sabotage … being able to flip the switch on power generation,” Chien said. “We’re now talking about on-the-ground technical evidence this could happen in the US, and there’s nothing left standing in the way except the motivation of some actor out in the world.”

Security researchers have been watching Dragonfly for years, claiming the group has been probing energy sector machines since at least 2011. Symantec says it went dark until a reemergence in late December 2015, when the New Year’s Eve party invite went out. There has been “a distinct increase in activity in 2017,” Symantec said.

“The Dragonfly 2.0 campaigns show how the attackers may be entering into a new phase, with recent campaigns potentially providing them with access to operational systems, access that could be used for more disruptive purposes in future,” according to the report.

Symantec doesn’t say where Dragonfly is from — and its report shows the hackers might be intentionally trying to confuse investigators.  But late last year, the Department of Homeland Security claimed Dragonfly’s origins were Russian, and that it was one of several groups working to “compromise and exploit networks and endpoints associated with the U.S. election, as well as a range of U.S. Government, political, and private sector entities.”

Symantec says the most concerning evidence found during its analysis were the screen captures.

“In one particular instance the attackers used a clear format for naming the screen capture files, [machine description and location].[organization name]. The string “cntrl” (control) is used in many of the machine descriptions, possibly indicating that these machines have access to operational systems,” it said.

Symantec links the initial hacker campaign to this more recent spate of attacks because there are similarities in the malware used. The Dragonfly campaigns that began in 2011 “now appear to have been a more exploratory phase,” Symantec said.

“What (the group) plans to do with all this intelligence has yet to become clear, but its capabilities do extend to materially disrupting targeted organizations should it choose to do so,” the firm claims.

Omer Schneider, CEO and co-founder of security firm CyberX, said this type of attack is inevitable.

“Why is everyone so surprised?” Schneider said. “As early as 2014, the ICS-CERT warned that adversaries had penetrated our control networks to perform cyber-espionage. Over time the adversaries have gotten even more sophisticated and now they’ve stolen credentials that give them direct access to control systems in our energy sector. If I were a foreign power, this would be a great way to threaten the US while I invade other countries or engage in other aggressive actions against US allies.”

Cost of a data breach, 2017 — $225 per record lost, an all-time high

Larry Ponemon

IBM Security and Ponemon Institute are pleased to present the 2017 Cost of Data Breach Study: United States, our 12th annual benchmark study on the cost of data breach incidents for companies located in the United States. The average cost for each lost or stolen record containing sensitive and confidential information increased from $221 to $225. The average total cost experienced by organizations over the past year increased from $7.01 million to $7.35 million. To date, 572 U.S. organizations have participated in the benchmarking process since the inception of this research.

Ponemon Institute conducted its first Cost of Data Breach Study in the United States 12 years ago. Since then, we have expanded the study to include the following countries and regions:

  • The United Kingdom
  • Germany
  • Australia
  • France
  • Brazil
  • Japan
  • Italy
  • India
  • Canada
  • South Africa
  • The Middle East (including the United Arab Emirates and Saudi Arabia)
  • ASEAN region (including Singapore, Indonesia, the Philippines and Malaysia)

The 2017 study examines the costs incurred by 63 U.S. companies in 16 industry sectors after those companies experienced the loss or theft of protected personal data and the notification of breach victims as required by various laws. It is important to note that costs presented in this research are not hypothetical but are from actual data-loss incidents. They are based upon cost estimates provided by individuals we interviewed over a 10-month period in the companies that are represented in this research.

The number of breached records per incident this year ranged from 5,563 to 99,500 records. The average number of breached records was 28,512. We did not recruit organizations that have data breaches involving more than 100,000 compromised records. These incidents are not indicative of data breaches most organizations incur. Thus, including them in the study would have artificially skewed the results.

Why the cost of data breach fluctuates across countries

What explains the significant increases in the cost of data breach this year for organizations in the Middle East, the United States and Japan? In contrast, how did organizations in Germany, France, Australia, and the United Kingdom succeed in reducing the costs to respond to and remediate the data breach? Understanding how the cost of data breach is calculated will explain the differences among the countries in this research.

For the 2017 Cost of Data Breach Study: Global Overview, we recruited 419 organizations in 11 countries and two regions to participate in this year’s study. More than 1,900 individuals who are knowledgeable about the data breach incident in these 419 organizations were interviewed. The first data points we collected from these organizations were: (1) how many customer records were lost in the breach (i.e. the size of the breach) and (2) what percentage of their customer base did they lose following the data breach (i.e. customer churn). This information explains why the costs increase or decrease from the past year.

In the course of our interviews, we also asked questions to determine what the organization spent on activities for the discovery of and the immediate response to the data breach, such as forensics and investigations, and those conducted in the aftermath of discovery, such as the notification of victims and legal fees. A list of these activities is shown in Part 3 of this report. Other issues covered that may have an influence on the cost are the root causes of the data breach (i.e. malicious or criminal attack, insider negligence or system glitch) and the time to detect and contain the incident.

It is important to note that only events directly relevant to the data breach experience of the 419 organizations represented in this research and discussed above are used to calculate the cost. For example, new regulations, such as the General Data Protection Regulation (GDPR), ransomware and cyber attacks, such as Shamoon, may encourage organizations to increase investments in their governance practices and security-enabling technologies but do not directly affect the cost of a data breach as presented in this research.

The following are the most salient findings and implications for organizations:

The cost of data breach sets a record high. According to this year’s benchmark findings, data breaches cost companies an average of $225 per compromised record – of which $146 pertains to indirect costs, including abnormal turnover or churn of customers, and $79 represents the direct costs incurred to resolve the data breach, such as investments in technologies or legal fees.

The total average organizational cost of data breach reaches a new high. This year, we record the highest average total cost of data breach at $7.35 million. Prior to this year’s research, the most costly breach occurred in 2011 when companies spent an average of $7.24 million. In 2013, companies experienced the lowest total data breach cost at $5.40 million.

Measures reveal why the cost of data breach increases. The average total cost of data breach increased 4.7 percent, the average per capita cost increased by 1.8 percent and abnormal churn of existing customers increased 5 percent. In the context of this paper, abnormal churn is defined as a greater-than-expected loss of customers in the normal course of business. In contrast, the average size of a data breach (number of records lost or stolen) decreased 1.9 percent.

Certain industries have higher data breach costs. Heavily regulated industries such as health care ($380 per capita) and financial services ($336 per capita), had per capita data breach costs well above the overall mean of $225. In contrast, public sector organizations ($110 per capita) had a per capita cost of data breach below the overall mean.

Malicious or criminal attacks continue to be the primary cause of data breach. Fifty-two percent of incidents involved a malicious or criminal attack, 24 percent of incidents were caused by negligent employees, and another 24 percent were caused by system glitches, including both IT and business process failures.

Malicious attacks are the costliest. Organizations that had a data breach due to malicious or criminal attacks had a per capita data breach cost of $244, which is significantly above the mean. In contrast, system glitches or human error as the root cause had per capita costs below the mean ($209 and $200 per capita, respectively).

Four new factors are in this year’s cost analysis.  The following factors that influence data breach costs have been added to this year’s study. They are as follows: (1) compliance failures, (2) the extensive use of mobile platforms, (3) CPO appointment and (4) the use of security analytics. The use of security analytics reduced the per capita cost of data breach by $7.7 and the appointment of a CPO reduced the cost by $4.3. However, extensive use of mobile platforms at the time of the breach increased the cost by $6.5 and compliance failures increased the per capita cost by $19.3.
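Taking the study's per-record figures at face value, those four factors adjust the $225 baseline up or down. The sketch below treats the modifiers as simply additive, which is my simplification of how the report presents them, not a formula the study publishes.

```python
# Per-record cost baseline and factor modifiers from the 2017 study (US$).
BASELINE = 225.0
MODIFIERS = {
    "security_analytics": -7.7,   # use of security analytics lowers the cost
    "cpo_appointed": -4.3,        # appointing a CPO lowers the cost
    "extensive_mobile": +6.5,     # extensive mobile platforms raise the cost
    "compliance_failures": +19.3, # compliance failures raise the cost most
}

def per_record_cost(factors):
    """Estimate per-record breach cost, assuming the modifiers add linearly."""
    return round(BASELINE + sum(MODIFIERS[f] for f in factors), 1)

best = per_record_cost(["security_analytics", "cpo_appointed"])       # 213.0
worst = per_record_cost(["extensive_mobile", "compliance_failures"])  # 250.8
```

Even under this simplified additive view, the spread between the best and worst factor combinations is nearly $38 per record, which at the study's average breach size adds up quickly.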

The more records lost, the higher the cost of data breach. This year, for companies with data breaches involving less than 10,000 records, the average total cost of data breach was $4.5 million and companies with the loss or theft of more than 50,000 records had a cost of data breach of $10.3 million.

The more churn, the higher the cost of data breach. Companies that experienced less than 1 percent churn or the loss of existing customers, had an average total cost of data breach of $5.3 million and those that experienced churn greater than 4 percent had an average total cost of data breach of $10.1 million.

Certain industries are more vulnerable to churn. Financial, life science, health, technology and service organizations experience a relatively high abnormal churn rate and public sector and entertainment organizations experienced a relatively low abnormal churn rate.

Detection and escalation costs are at a record high. These costs include forensic and investigative activities, assessment and audit services, crisis team management, and communications to executive management and board of directors. Average detection and escalation costs increased dramatically from $0.73 million to $1.07 million, suggesting that companies are investing more heavily in these activities.

Notification costs increase slightly. Such costs typically include IT activities associated with the creation of contact databases, determination of all regulatory requirements, engagement of outside experts, postal expenditures, secondary mail contacts or email bounce-backs and inbound communication set-up. This year’s average notification costs increased slightly from $0.59 million in 2016 to $0.69 million in this year’s study.

Post data breach costs decrease. Such costs typically include help desk activities, inbound communications, special investigative activities, remediation activities, legal expenditures, product discounts, identity protection services and regulatory interventions. These costs decreased from $1.72 million in 2016 to $1.56 million in this year’s study.

Lost business costs increase. Such costs include the abnormal turnover of customers, customer acquisition activities, reputation losses and diminished goodwill. The current year’s cost increased from $3.32 million in 2016 to $4.03 million. The highest lost business cost over the past 12 years was $4.59 million in 2009.

Companies continue to spend more on indirect per capita costs than direct per capita costs. Indirect costs include the time employees spend on data breach notification efforts or investigations of the incident. Direct costs refer to what companies spend to minimize the consequences of a data breach and assist victims. These costs include engaging forensic experts to help investigate the data breach, hiring a law firm and offering victims identity protection services. This year, the indirect costs were $146 and direct costs were $79.

The time to identify and contain data breaches impacts costs. In this year’s study, it took companies an average of 206 days to detect that an incident occurred and an average of 55 days to contain the incident. If the mean time to identify (MTTI) was less than 100 days, the average cost to identify was $5.99 million. However, if the mean time to identify was greater than 100 days the cost rose significantly to $8.70 million. If the mean time to contain (MTTC) the breach was less than 30 days, the average cost to contain was $5.87 million. If it took 30 days or longer, the cost rose significantly to $8.83 million.
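Those threshold effects reduce to a simple lookup. The dollar figures come straight from the study as summarized above; the function itself is only an illustrative sketch, not something the report provides.

```python
# Average total breach cost (US$ millions) from the 2017 study, conditioned
# on mean time to identify (MTTI) and mean time to contain (MTTC).
def breach_cost_estimates(mtti_days, mttc_days):
    """Return (identify-phase, contain-phase) average total costs in US$M,
    using the study's 100-day MTTI and 30-day MTTC breakpoints."""
    identify = 5.99 if mtti_days < 100 else 8.70
    contain = 5.87 if mttc_days < 30 else 8.83
    return identify, contain

fast = breach_cost_estimates(90, 20)    # quick detection and containment
slow = breach_cost_estimates(206, 55)   # this year's actual averages
```

The gap between the two tiers (roughly $2.7 million on each dimension) is the study's core argument for investing in faster detection and response.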

To read the full report, click here.