Monthly Archives: September 2014

Do cloud breaches cost more?

Larry Ponemon

Can a data breach in the cloud result in a larger and more costly incident? In short, yes. The more places where data resides, the harder it is to control, and the more it costs to clean up a compromise. The cloud multiplier calculates the increase in the frequency and cost of data breach based on the growth in the use of the cloud and uncertainty as to how much sensitive data is in the cloud.

We surveyed 613 IT and IT security practitioners in the United States who are familiar with their company’s usage of cloud services. The majority of respondents (51 percent) say on-premise IT is equally or less secure than cloud-based services. However, 66 percent of respondents say their organization’s use of cloud resources diminishes its ability to protect confidential or sensitive information, and 64 percent believe it makes it difficult to secure business-critical applications.

As shown in more detail in this report, we consider two types of data breach incidents to determine the cloud multiplier effect. We found that if the data breach involves the loss or theft of 100,000 or more customer records, instead of an average cost of $2.37 million it could be as much as $5.32 million. Data breaches involving the theft of high value information could increase from $2.99 million to $4.16 million.

Faith in cloud providers is not what it should be.

A lack of knowledge about the number of computing devices connected to the network and enterprise systems, the software applications in the cloud, and the business-critical applications used in the cloud workplace could be creating a cloud multiplier effect. Other uncertainties identified in this research include how much sensitive or confidential information is stored in the cloud.

For the first time, we attempt to quantify the potential scope of a data breach based on typical use of cloud services in the workplace, or what can be described as the cloud multiplier effect. The report describes nine scenarios involving the loss or theft of more than 100,000 customer records and a material breach involving the loss or theft of high-value IP or business confidential information.

When asked to rate their organizations’ effectiveness in securing data and applications used in the cloud, the majority (51 percent) of respondents say it is low. Only 26 percent rate the effectiveness as high. Based on their lack of confidence, 51 percent say the likelihood of a data breach increases due to the cloud.

Key takeaways from this research include the following:
* Cloud security is an oxymoron for many companies. Sixty-two percent of respondents do not agree or are unsure that cloud services are thoroughly vetted before deployment. Sixty-nine percent believe there is a failure to be proactive in assessing information that is too sensitive to be stored in the cloud.
* Certain activities increase the cost of a breach when customer data is lost or stolen. An increase in the backup and storage of sensitive and/or confidential customer information in the cloud can cause the most costly breaches. The second most costly occurs when one of the organization’s primary cloud service providers expands operations too quickly.

* Certain activities increase the cost of a breach when high-value IP and business confidential information is lost or stolen. Bring Your Own Cloud (BYOC) results in the most costly data breaches involving high-value IP. The second most costly occurs when the backup and storage of sensitive or confidential information in the cloud increases. The least costly occurs when one of the organization’s primary cloud providers fails an audit concerning its inability to securely manage identity and authentication processes.

Why should Big Data have more right to privacy than people?

Bob

WASHINGTON, D.C. — What if we treated data with the same scrutiny as people? When a consumer applies for a loan or a job, firms use massive databases and can consider thousands of data points when they assess the integrity of that person. But what if consumers could, in equally painstaking detail, interrogate the integrity of the data? What if every single piece of data about you had to declare where it came from, where it was bought and sold, what it had been used for, and so on?

That was the provocative suggestion made by Carnegie Mellon professor Alessandro Acquisti in Washington D.C. today at a conference devoted to Big Data and its ability to treat consumers fairly.


As you might imagine, no industry representative jumped at the opportunity. In fact, his suggestion was entirely ignored.

It shouldn’t be. The technology certainly exists to give consumers this fair playing field when it comes to their data. After all, it is their data (despite industry groups’ claims that they own the data they collect and the inferences they draw).

Acquisti was simply offering an idea that would bring about more transparency in a world that is dogged by murky, shady operators. Firms don’t just collect data about consumers as they browse, or walk around stores, or use their credit cards. They do it secretly. They hate answering questions about it. In fact, they think the mystery surrounding the data is actually the value of the data.

Monday’s conference, titled “Big Data: Tool for Inclusion or Exclusion,” included a lot of the usual meaningless privacy dialog around policy and disclosure and best practices.  The discussions were lively, but this elephant in the room was rarely addressed.  Credit scores work, when they work, because consumers don’t understand them. Once consumers understand them, they can game them, and banks move on to something more obscure.  The data collection industry pays lip service about preventing consumer harm. But there is little to believe industry actors want anything more than to make as much money as they can by invading consumers’ privacy as much as they can get away with.

As Acquisti pointed out, the battle is asymmetric. Consumers can be interrogated with alarming tenacity, but they enjoy very little in the way of rights to face the digital 1s and 0s that constitute their accuser.

Not surprisingly, the idea of giving consumers more rights to control their information and its use was greeted with frosty newspeak. “Consumers hate dealing with cookie warnings when they browse the web! They don’t want more rights!” was the basic, cynical response.


FTC Commissioner Julie Brill was among the speakers who alluded to the excellent report published earlier this year by the agency explaining the wide variety of invasive behavior committed by data broker companies you’ve likely never heard of  — but these firms know you. They have probably decided you are an “urban scrambler” with a “diabetes interest.” Brill called for data brokers to fess up about what they do and who they do it for.

The discussion generally felt a bit fatalistic, however. Big data is here to stay, and in fact it hurts both consumers whose privacy it violates and consumers who are invisible to it. The only thing worse than having a credit report is not having one, which can prevent you from participating in the American economy at all.

Pam Dixon wrote a report earlier this year called “The Scoring of America,” which described the hundreds of 3-digit numbers that can control every aspect of your life — we’ve moved waaayyyy beyond credit scores. On a panel, she urged a new, broader view of data usage that drew on a long history of data collection stretching back to World War II and the Nuremberg Principles, which call for meaningful, informed consent from people who are the subjects of experiments.

That would be hard to do, certainly. But we should try.

When I write about scams, gotchas, and company misbehavior — and often, when I bicker with companies who give some version of an excuse that comes down to, “it’s the consumer’s fault” — I have a simple test I give:

“Are people surprised you took their money? If they are surprised, then you did the wrong thing.”

With data collection, surprise isn’t just an element of a “gotcha.”  Surprise is the product itself.  That’s wrong, and that needs to change.  Without real, informed consent from the public, Big Data collection is a runaway train that is going to do a lot more harm than good.