Why should Big Data have more right to privacy than people?

WASHINGTON D.C. — What if we treated data with the same scrutiny as people? When a consumer applies for a loan or a job, firms use massive databases and can consider thousands of data points when they assess the integrity of that person. But what if consumers could, in equally painstaking detail, interrogate the integrity of the data? What if every single piece of data about you had to declare where it came from, where it was bought and sold, what it had been used for, and so on?

That was the provocative suggestion made by Carnegie Mellon professor Alessandro Acquisti in Washington D.C. today at a conference devoted to Big Data and its ability to treat consumers fairly.
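
What might an interrogable datum look like? Here is a minimal Python sketch of the kind of record Acquisti’s idea implies; every field name is invented for illustration, not drawn from any real broker’s schema:

```python
# A thought experiment, not any broker's real data model: a datum that
# carries its own pedigree, so its subject can interrogate it.
from dataclasses import dataclass, field

@dataclass
class DataPoint:
    subject: str                 # whom this datum describes
    value: str                   # the datum itself (e.g., a category label)
    origin: str                  # where it was first collected
    sold_to: list = field(default_factory=list)   # every buyer, in order
    used_for: list = field(default_factory=list)  # every decision it fed

datum = DataPoint(subject="consumer_123",
                  value="urban scramble",
                  origin="grocery loyalty card, 2013")
datum.sold_to.append("hypothetical_broker_A")
datum.used_for.append("loan application screening")

# The consumer's side of the interrogation: where did this come from?
print(datum.origin, datum.sold_to, datum.used_for)
```

The point of the sketch is the audit trail: the origin and every sale and use would travel with the value itself, instead of vanishing into the cloud.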

As you might imagine, no industry representative jumped at the opportunity. In fact, his suggestion was entirely ignored.

It shouldn’t be. The technology certainly exists to give consumers this fair playing field when it comes to their data. After all, it is their data (despite what industry groups might argue, namely that they own the data they collect and the inferences they draw).

Acquisti was simply offering an idea that would bring more transparency to a world dogged by murky, shady operators. Firms don’t just collect data about consumers as they browse, or walk around stores, or use their credit cards. They do it secretly. They hate answering questions about it. In fact, they think the mystery surrounding the data is the very value of the data.

Monday’s conference, titled “Big Data: Tool for Inclusion or Exclusion,” included a lot of the usual meaningless privacy dialog around policy and disclosure and best practices. The discussions were lively, but the elephant in the room was rarely addressed. Credit scores work, when they work, because consumers don’t understand them. Once consumers understand them, they can game them, and banks move on to something more obscure. The data collection industry pays lip service to preventing consumer harm. But there is little reason to believe industry actors want anything more than to make as much money as they can by invading consumers’ privacy as much as they can get away with.

As Acquisti pointed out, the battle is asymmetric. Consumers can be interrogated with alarming tenacity, but they enjoy very little in the way of rights to face the digital 1s and 0s that constitute their accuser.

Not surprisingly, the idea of giving consumers more rights to control their information and its use was greeted with frosty newspeak. “Consumers hate dealing with cookie warnings when they browse the web! They don’t want more rights!” was the basic, cynical response.

FTC Commissioner Julie Brill was among the speakers who alluded to the excellent report published earlier this year by the agency explaining the wide variety of invasive behavior committed by data broker companies you’ve likely never heard of  — but these firms know you. They have probably decided you are an “urban scrambler” with a “diabetes interest.” Brill called for data brokers to fess up about what they do and who they do it for.

The discussion generally felt a bit fatalistic, however. Big data is here to stay, and in fact it hurts both consumers whose privacy is thoroughly violated by it and consumers who are invisible to it. The only thing worse than having a credit report is not having a credit report, which can prevent you from participating in the American economy at all.

Pam Dixon wrote a report earlier this year called “The Scoring of America,” which described the hundreds of 3-digit numbers that can control every aspect of your life — we’ve moved waaayyyy beyond credit scores. On a panel, she urged a new, broader view of data usage, one that drew on a long history of data collection stretching back to World War II and the Nuremberg Principles, which call for obtaining meaningful, informed consent from people when they are the subjects of experiments.

That would be hard to do, certainly. But we should try.

When I write about scams, gotchas, and company misbehavior — and often, when I bicker with companies who give some version of an excuse that comes down to, “it’s the consumer’s fault” — I have a simple test I give:

“Are people surprised you took their money? If they are surprised, then you did the wrong thing.”

With data collection, surprise isn’t just an element of a “gotcha.”  Surprise is the product itself.  That’s wrong, and that needs to change.  Without real, informed consent from the public, Big Data collection is a runaway train that is going to do a lot more harm than good.


Light bulbs hacked; It’s funny, but it’s not

It’s a question I think about a lot: Are we moving toward a world that’s safer or more dangerous? More or less secure? This week, the “less secure” side scored another goal. Light bulbs can be hacked. Doing so seems like a rather silly science fair project until you think about what it really means.

London-based security firm Context has taken an interest in the fragility of the Internet of Things, as we all should. As a refresher, the Internet of Things simply means wireless chips will soon be placed in many items in your home, and these will all talk to the Internet and each other. It’s not science fiction; it’s more like George Jetson. Whiz-bangy light bulbs sold by a firm named LIFX are among the first Internet of Things products. The bulbs talk to each other, and can be controlled with a smartphone. Neat, I guess, in a Chia Pet sort of way. (Click on! Click off!)

Context took the things apart and found that a hacker could trick the bulbs into surrendering control to a stranger.  Essentially, bad guys can hop on the bulb users’ WiFi network and take control of the bulbs.  If you look at the firm’s website, you’ll see how much trouble it went to in order to turn a victim’s lights on and off.  Also neat, I guess.  The hack comes with a strong mitigating factor; the hacker must be within 30 meters of the target to start the surprise disco effect.  So state secrets are not at stake.

But here’s what you should think about.  LIFX seems like a responsible enough outfit. It isn’t Yo, that’s for sure.  The bulbs actually came loaded with AES (Advanced!) encryption. So the engineers actually thought about this problem. But the bulbs all shared the same underlying encryption key. Hack one, hack them all. That’s what Context did.
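
To see why one shared key is so dangerous, here is a minimal Python sketch. It uses the Fernet recipe from the `cryptography` package as a stand-in; this is an illustration of the flaw, not LIFX’s actual protocol:

```python
# Illustrative only: shows why a single shared, hardcoded key means
# compromising one device compromises every device that ships with it.
from cryptography.fernet import Fernet

# Hypothetical firmware-baked key, identical in every bulb shipped.
SHARED_FIRMWARE_KEY = Fernet.generate_key()

def bulb_send(command: bytes) -> bytes:
    """Any bulb encrypting a mesh command with the shared key."""
    return Fernet(SHARED_FIRMWARE_KEY).encrypt(command)

# An attacker extracts the key from ONE bulb's firmware dump...
attacker_key = SHARED_FIRMWARE_KEY

# ...and can now read and forge traffic for EVERY bulb on the network.
intercepted = bulb_send(b"lights_off")
print(Fernet(attacker_key).decrypt(intercepted))      # b'lights_off'
forged = Fernet(attacker_key).encrypt(b"lights_on")   # a valid forged command
```

Per-device keys avoid this: extracting one device’s secret would then tell an attacker nothing about its neighbors.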

LIFX, by all accounts, reacted quickly to the hack and has issued a fix. Great, I guess. Happy ending? Not by a long shot. I promise you, this pattern will repeat itself again, and again, and again. There is currently no model that requires firms inventing cool stuff to make it safe. Features first, safety last. If ever.

Therefore, our world will soon be full of really creative devices riddled with fatal flaws. It’s always been this way — features over safety — but when vulnerabilities were limited to personal computers, there were some real-world limits on how much trouble consumers could get into. When the threats are in everything, as they will be with the Internet of Things, watch out. Here’s a thought exercise: What happens when it’s not the light bulbs, but rather the power outlets, that are “smart” and can be hacked?

This is why I made much ado about the nothing piece of software called Yo that had its 15 minutes of fame a few weeks ago.  Quick refresh: Yo is Twitter in two characters. Participants send single, two-character messages using Yo. It got a flurry of attention, allegedly a flurry of investment, and then hackers figured out they could download all personal information anyone had given Yo.   The firm that made Yo bragged that it was programmed in a day. The Internet of Things will be full of gadgets programmed in a day, full of basic, serious flaws, unless something changes.

Attacks on healthcare systems have risen 100 percent

Larry Ponemon

News that millions of patient Social Security numbers were stolen recently from Community Health Systems Inc. computers should come as no surprise. Earlier this year, we published results from our Fourth Annual Benchmark Study on Patient Privacy and Data Security, and the headline result was this: Criminal attacks on healthcare systems have risen a startling 100 percent since we first conducted this study four years ago, in 2010.

Many other findings were equally sobering. Healthcare employees are fueling breach risks through increased use of their personal, unsecured devices (smartphones, laptops and tablets). Business associates — those that have access to protected health information (PHI) and work with healthcare organizations — are not yet in compliance with the HIPAA Final Rule.

Data breaches continue to cost some healthcare organizations millions of dollars every year.

While the cost can range from less than $10,000 to more than $1 million, we calculate that the average cost for the organizations represented in this year’s benchmark study is approximately $2 million over a two-year period. This is down from $2.4 million in last year’s report as well as from the $2.2 million reported in 2011 and $2.1 million in 2010. Based on the experience of the healthcare organizations in this benchmark study, we believe the potential cost to the healthcare industry could be as much as $5.6 billion annually.

The types of healthcare organizations participating in the study are hospitals or clinics that are part of a healthcare network (49 percent), integrated delivery systems (34 percent) and standalone hospitals or clinics (17 percent). This year 91 healthcare organizations participated in this benchmark research and 388 interviews were conducted. All organizations in this research are subject to HIPAA as a covered entity. Most respondents interviewed work in compliance, IT, patient services and privacy.

Other key research findings:

The number of data breaches decreased slightly. Ninety percent of healthcare organizations in this study have had at least one data breach in the past two years. However, 38 percent report that they have had more than five incidents, a decline from last year’s report, when 45 percent of organizations had more than five. This, coupled with an increase in organizations’ level of confidence in data breach detection, suggests that modest improvements have been made in reducing threats to patient data.

Healthcare organizations improve ability to control data breach costs. The economic impact of one or more data breaches for healthcare organizations in this study ranges from less than $10,000 to more than $1 million over a two-year period. Based on the ranges reported by respondents, we calculated that the average economic impact of data breaches over the past two years for the healthcare organizations represented in this study is $2.0 million. This is a decrease of almost $400,000 or 17 percent since last year.

ACA increases risk to patient privacy and information security. Respondents in 69 percent of the organizations represented believe the Affordable Care Act (ACA) significantly increases (36 percent) or increases (33 percent) the risk to patient privacy and security. The primary concerns are insecure exchange of patient information between healthcare providers and government (75 percent of organizations), patient data on insecure databases (65 percent) and patient registration on insecure websites (63 percent).

ACO participation increases data breach risks. Fifty-one percent of organizations say they are part of an Accountable Care Organization (ACO), and 66 percent say the risks to patient privacy and security due to the exchange of patient health information among participants have increased. When asked if their organization experienced changes in the number of unauthorized disclosures of PHI, 41 percent say it is too early to tell. Twenty-three percent say they noticed an increase.

Confidence in the security of Health Information Exchanges (HIEs) remains low. An HIE is defined as the mobilization of healthcare information electronically across organizations within a region, community or hospital system. The percentage of organizations joining HIEs increased only slightly: this year, 32 percent say they are members, up from 28 percent last year. One-third of organizations say they do not plan to become a member. The primary reason could be that 72 percent of respondents say they are only somewhat confident (32 percent) or not confident (40 percent) in the security and privacy of patient data shared on HIEs.

Criminal attacks on healthcare organizations increase 100 percent since 2010. Insider negligence continues to be at the root of most data breaches reported in this study, but a major challenge for healthcare organizations is addressing the criminal threat. These types of attacks on sensitive data have increased 100 percent since the study was first conducted in 2010, from 20 percent of organizations reporting criminal attacks then to 40 percent in this year’s study.

Employee negligence is considered the biggest security risk. Seventy-five percent of organizations say employee negligence is their biggest worry followed by use of public cloud services (41 percent), mobile device insecurity (40 percent) and cyber attackers (39 percent).

BYOD usage continues to rise. Despite the concerns about employee negligence and the use of insecure mobile devices, 88 percent of organizations permit employees and medical staff to use their own mobile devices, such as smartphones or tablets, to connect to their organization’s networks or enterprise systems such as email. Similar to last year, more than half of organizations are not confident that these personally owned (BYOD) devices are secure.

To find out more, or to download the entire report, click here.

THAT Facebook study; yes, you should be concerned

There was another round of confused Facebook outrage this month when a story in The Atlantic revealed the social media giant had intentionally toyed with users’ moods — allegedly for science, but in reality, for money. Facebook turned up the sadness dial on some users’ news feeds, and found the sadness was contagious (happiness was, too).

The study that’s produced the outrage did qualify as science. It was published in the prestigious journal Proceedings of the National Academy of Sciences.  It has plenty of folks scrambling.

What does the nation’s leading privacy researcher, and a frequent but even-handed Facebook critic, think of the Facebook mood manipulation study controversy?  It’s a case of “shooting the messenger,” Alessandro Acquisti of Carnegie Mellon University told me.  It’s also a rare peek “through the looking-glass” at the future of privacy.

Acquisti has been involved in designing hundreds of studies, and he has a deep research interest in privacy, so I asked him for his reaction to the Facebook research dust-up. His response might surprise you.

“The reaction to the study seems like a case of shooting the messenger,” he said. “Facebook (and probably many other online services) engages daily in user manipulation. There is no such thing as a ‘neutral’ algorithm; Facebook decides what to show you, how, and when, in order to satisfy an externally inscrutable objective function (Optimize user interactions? Maximize the time they spend on Facebook, or the amount of information they disclose, or the number of ads they will click? Who knows?). The difference is that, with this study, the researchers actually revealed what had been done, why, and with what results. Thus, this study offers an invaluable wake-up call – a peek through the looking glass of the present and future of privacy and social media.

“Those attacking the study may want to reflect upon the fact that we have been part of the Facebook experiment since the day any of us created an account, and that privacy is much more than protection of personal information —  it is about protection against the control that others can have over us once they have enough information about us.”
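
To make the “no neutral algorithm” point concrete, here is a toy Python feed ranker. The weights and the sentiment field are hypothetical, invented purely to show how a single tuning knob can tilt a feed’s mood:

```python
# Toy feed ranker, purely hypothetical: one weight decides whether
# sadder or happier posts float to the top of a user's feed.
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    engagement: float   # clicks, likes, comments (normalized 0..1)
    sentiment: float    # -1.0 (negative) .. +1.0 (positive)

def rank(posts, sentiment_weight=0.0):
    """Score = engagement plus a tunable bias toward a mood."""
    return sorted(posts,
                  key=lambda p: p.engagement + sentiment_weight * p.sentiment,
                  reverse=True)

feed = [Post("Great news!", 0.5, +0.9),
        Post("Everything is terrible.", 0.5, -0.9)]

print([p.text for p in rank(feed)])                         # no mood bias
print([p.text for p in rank(feed, sentiment_weight=-0.5)])  # sadness dialed up
```

Whatever value that weight takes, someone chose it, and the user never sees the choice; that is Acquisti’s point.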

Here’s my take on it.  It’s a bad idea to harm people for science.  In many cases it’s merely unethical, not illegal, but it’s a really bad idea.  When harm is unavoidable — say you are trying an experimental drug that might cure a person with cancer, or might kill them faster — scientists must obtain “informed consent.” Now, informed consent is a very slippery idea.  A patient who is desperate and ready to try anything might not really be in a position to give informed consent, for example. Doctors and researchers are supposed to go the extra mile to ensure study subjects *truly* understand what they are doing.

Meanwhile, in many social science studies, informed consent prior to the experiment would wreck the study.  Telling Facebook users, “We are going to try to manipulate your mood” wouldn’t work.  In those cases, researchers are supposed to tell subjects as soon as is feasible what they were up to. And the “harm” must be as minimal as possible.

Everyone who’s conducted research (including me!) faces a grand temptation to bend these rules in the name of science — but my research has the power to change the world! — so science has a solution to that problem. Study designs must be approved by an Institutional Review Board, or IRB. This independent body decides, for example, “No, you may not intentionally make thousands of people depressed in order to see if they will buy more stuff. At least if you do that, you can’t call it science.”

Pesky folks, those IRB folks. My science friends complain all the time that they nix perfectly good research ideas.  A friend who conducts privacy research, for example, can’t trick people into divulging delicate personal information like credit card numbers in research because that would actually cause them harm.

Facebook apparently decided it was above the pesky IRB. The journal’s editor seemed to say the research was IRB-approved, then later seemed to say only part of it was, all of which suggests no IRB really said, “Sure, make thousands of people depressed for a week.”

And while a billion people on the planet have clicked “Yes” to Facebook’s terms of service, which apparently includes language giving the firm the right to conduct research, it doesn’t appear Facebook did anything to get informed consent from the subjects. (If you argue that a TOS click means informed consent, send me $1 million. You consented to that by reading this story.)

Back to Facebook researchgate. The problem isn’t some new discovery that Facebook manipulates people. Really, if you didn’t realize that was happening all the time, you are foolish. The problem is the incredible disconnect between Facebook and its data subjects (i.e., people). Our petty concerns with the way it operates keep getting in Facebook’s way. We should all just pipe down and stop fussing.

Let’s review what’s happened here. Facebook:

1) Decided it was above the standard academic review process

2) Used a terms of service click, in some cases years old, to serve as “informed consent” to harm subjects

Think carefully about this: What wouldn’t Facebook do? What line do you trust someone inside Facebook to draw?

If you’d like to read a blow-by-blow analysis of what went on here – including an honest debate about the tech world’s “so-what” reaction – visit Sebastian Deterding’s Tumblr page.

Here’s the basic counter-argument, made with the usual I’m-more-enlightened-than-you sarcasm of Silicon Valley:

“Run a web site, measure anything, make any changes based on measurements? Congratulations, you’re running a psychology experiment!” said Marc Andreessen, Web browser creator and Internet founding father of sorts. “Helpful hint: Whenever you watch TV, read a book, open a newspaper, or talk to another person, someone’s manipulating your emotions!”

In other words, all those silly rules about treating study subjects fairly that academic institutions have spent decades writing – they must be dumb.  Why should Silicon Valley be subject to any such inconvenience?

My final point: When the best defense for doing something that many people find outrageous is you’ve been doing it for a long time, it’s time for some soul-searching.

Who’s in charge at power plants? Many don’t know

Larry Ponemon

An unnamed natural gas company hired an IT firm to test its corporate information system. POWER Magazine reported, “The consulting organization carelessly ventured into a part of the network that was directly connected [to] the SCADA system. The penetration test locked up the SCADA system and the utility was not able to send gas through its pipelines for four hours. The outcome was the loss of service to its customer base for those four hours.” (SCADA, or supervisory control and data acquisition, refers to the industrial control systems that run physical equipment.)

As stories like these become more common, we wanted to study how well utility firms are preparing for what seems like the inevitable: a major, successful attack.  The answer is a mixed bag.

This month, we release the results of Stealth Research: Critical Infrastructure, sponsored by Unisys. The purpose of this research is to learn how utility, oil and gas, alternate energy and manufacturing organizations are addressing cybersecurity threats.

Among the more alarming findings: 67 percent of those surveyed said they’d suffered at least one security compromise, yet one quarter don’t actually know who’s in charge of security.

As the findings reveal, organizations are not as prepared as they should be to deal with the sophistication and stealth of a cyber threat or the negligence of an employee or third party. In fact, the majority of participants in this study do not believe their companies’ IT security programs are “mature.” For purposes of this research, a mature stage is defined as having most IT security program activities deployed. Most companies have defined what their security initiatives are but deployment and execution are still in the early or middle stages.

Key findings of this research

Most companies have not fully deployed their IT security programs. Only 17 percent of companies represented in this research self-report that most of their IT security program activities are deployed. Fifty percent of respondents say their IT security activities have not yet been defined or deployed (7 percent) or have been defined but are only partially deployed (43 percent). A possible reason is that only 28 percent of respondents agree that security is one of the top five strategic priorities across the enterprise.

The risk to industrial control systems and SCADA is believed to have substantially increased. Fifty-seven percent of respondents agree that cyber threats are putting industrial control systems and SCADA at greater risk. Only 11 percent say the risk has decreased due to heightened regulations and industry-based security standards.

Security compromises are occurring in most companies. It is difficult to understand why security is not a top priority, because 67 percent of respondents say their companies have had at least one security compromise that led to the loss of confidential information or disruption to operations over the last 12 months. Twenty-four percent of respondents say these compromises were due to an insider attack or negligent privileged IT users.

Upgrading existing legacy systems may result in sacrificing mission-critical security. Fifty-four percent of respondents are not confident (36 percent) or unsure (18 percent) that their organization would be able to upgrade legacy systems to the next improved security state in cost-effective ways without sacrificing mission-critical security.

Many organizations are not getting actionable real-time threat alerts about security exploits. According to 34 percent of respondents, their companies do not get real-time alerts, threat analysis and threat prioritization intelligence that can be used to stop or minimize the impact of a cyber attack. Among those that do receive such intelligence, 22 percent of respondents say it is not effective. Only 15 percent of respondents say threat intelligence is very effective and actionable.

More than half, hit. The majority of companies have had at least one security compromise in the past 12 months. Sixty-seven percent of companies represented in this research have had at least one incident that led to the loss of confidential information or disruption to operations. Twenty-four percent of security incidents were due to a negligent employee with privileged access. However, 21 percent of respondents say they were not able to determine the source of the incident.

Who’s in charge? When asked if their company has dedicated personnel and/or departments responsible for industrial control systems and SCADA security, 25 percent say they do not have anyone assigned. The majority (55 percent) say they have one person responsible.

Out of control. Nearly one-third of respondents say that more than a quarter of their network components, including third-party endpoints such as smartphones and home computers, are outside the direct control of their organization’s security operations.

Snowden is sexy, but this privacy issue is more important

How do you think the big computers in the sky see you? Are you “Rural Everlasting” or a “Mobile Mixer”? Are you a “Married Sophisticate,” a “Senior Product Buyer,” a “dog owner,” a “winter activity enthusiast,” a “Bible Lifestyle” or an “Affluent Baby Boomer”? Maybe you are “Financially Challenged,” or “Plus size apparel,” or maybe “Exercise-spotty living.” Heaven forbid, you might have a “Diabetes Interest” or a “Cholesterol Focus.”

Data brokers with names you’ve never heard have decided which of these categories you fit into, and they use that information for everything from targeted online ads to denying purchases over fraud concerns to helping suspicious relatives check up on you.

How would these firms know all this about you?  The stores you shop at tell on you.  Not just the websites – the brick and mortar stores. Swipe your credit card at a department store, and you’ve become a profitable data point for retailers to sell and resell to the highest bidder.  Buy an organic pepper, and you might be dumped into “New Age/Organic Lifestyle.”  And who knows what other conclusions the Cloud might come to about you.

While America remains wrapped up in the personal story of NSA leaker Edward Snowden, more critical and personal privacy invasions go on every day, millions of times each day, without any seeming limitations. Even as NBC was drumming up attention for its (impressive) interview with Snowden, the Federal Trade Commission released a crucial report: “Data Brokers: A Call for Transparency and Accountability.” America yawned when it should have gasped.

Americans are obsessed with their credit scores, which is obvious from the number of advertisements you see from firms selling them. People understand that a simple mistake in a credit report might one day cost them the ability to buy a home, and they’ve fought credit report secrecy since at least the 1970s. The Fair Credit Reporting Act, and its updates, are hardly perfect, but the law at least creates a good amount of tension between credit reporting firms and consumers.

But there is another class of firms that log intimate details about our behavior. Basically any firm that’s not covered by the Fair Credit Reporting Act is known, not as a credit reporting agency, but rather, as a “data broker.” And among data brokers, virtually anything goes.

In the data world, it’s California in 1849. Lawless prospectors aren’t mining for gold, they are mining you.

Last year, the FTC sent records requests to nine of the more important data brokers in an attempt to get a grasp on the industry.  The assertions above and below are based entirely on what these nine firms told the FTC.  These are all actions these firms, among the more reputable, have admitted during a government investigation. Heaven knows what other data brokers are up to.

Imagine a database with information on 700 million consumers worldwide. How much information? Something near 3,000 data points on each person, or roughly two trillion data points in all. In other words, the data broker Cloud knows 3,000 things about you. Check that: Acxiom, just one of the nine firms the FTC consulted, knows 3,000 things about you. The other eight have similar rap sheets on you.

For some time, most consumers have held on to the naïve idea that offline shopping and online shopping are relatively distinct.  We might be very careful which websites we shop at, for example, but don’t think much about the brick and mortar stores where we swipe our plastic. Those days are over. Through a process called “onboarding,” data brokers are increasingly matching “real world” data with virtual data. They are linking in-store purchases with social media accounts, for example, and then changing the ads you see on Facebook based on stores you’ve shopped at recently.
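
How might that matching work mechanically? Here is a generic Python sketch of hashed-identifier matching, a common way to join offline and online records without passing raw identifiers around; it is an illustration of the technique, not any particular broker’s pipeline:

```python
# Generic sketch of "onboarding": joining offline purchase records to
# online profiles via a hashed shared identifier (here, an email).
import hashlib

def h(email: str) -> str:
    """Normalize, then hash, so both sides produce the same key."""
    return hashlib.sha256(email.strip().lower().encode()).hexdigest()

# Offline: loyalty-card purchases keyed by hashed email.
store_records = {h("jane@example.com"): ["organic peppers", "plus-size apparel"]}

# Online: an ad platform's users, also keyed by hashed email.
ad_platform_users = {h("jane@example.com"): "user_8675309"}

# The join: no raw email changes hands, yet the profiles still link up.
for hashed_email, purchases in store_records.items():
    if hashed_email in ad_platform_users:
        print(ad_platform_users[hashed_email], "->", purchases)
```

Note what the hashing buys the brokers: plausible deniability about sharing “personal” information, even though the linkage is exact.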

If credit scores worry you, I have news for you. Credit scores are just the tip of the iceberg.  Data brokers are inventing all kinds of scores used to categorize you and, on occasion, punish you. Fraud prevention is big business for brokers. Based on your real-life shopping activity, they create scores to predict the likelihood your online purchase will result in a chargeback.  You might be surprised to find a transaction rejected and – remember, this is the Wild West – you’ll never know why. You have no right to dispute incorrect facts in the data broker cloud. You don’t even have a right to know what’s there.
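
What might such a score look like? Here is a deliberately crude, hypothetical sketch; the features and weights are invented, purely to show how opaque a three-digit “chargeback risk” number can be to the person it describes:

```python
# Hypothetical chargeback-risk score. Every feature and weight here is
# invented for illustration; real broker models are proprietary and opaque.
import math

def chargeback_risk(txn: dict) -> int:
    """Return a 3-digit risk score (higher = riskier) from opaque inputs."""
    z = (1.8 * txn["new_shipping_address"]      # 0 or 1
         + 0.9 * txn["late_night_purchase"]     # 0 or 1
         + 0.004 * txn["order_amount_usd"]
         - 1.2 * txn["years_of_history"]
         - 0.5)
    probability = 1 / (1 + math.exp(-z))        # squash to 0..1
    return round(100 + 899 * probability)       # map onto a 100-999 scale

print(chargeback_risk({"new_shipping_address": 1, "late_night_purchase": 1,
                       "order_amount_usd": 450, "years_of_history": 0.5}))
```

The consumer never sees the inputs, the weights, or the output; a declined purchase is the only symptom.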

Meanwhile, the categorization of people into groups should make you feel immediately queasy. “Urban Scramble” and “Rural Everlasting” can easily be seen as code words for poor minorities and poor whites; the FTC found those groups were over-represented in these categories. Is it fair for certain groups to never receive offers that Married Sophisticates do? Or to face other, so far hidden, consequences?

That’s the main concern with all this data collection: it’s incredibly invisible. The FTC report does not pull punches:

“Many of these findings point to a fundamental lack of transparency about data broker industry practices,” it says. “Data brokers acquire a vast array of detailed and specific information about consumers; analyze it to make inferences about consumers, some of which may be considered sensitive; and share the information with clients in a range of industries. All of this activity takes place behind the scenes, without consumers’ knowledge.”

Sure, a couple of the firms involved offer scant access to the raw data they collect on consumers – sometimes for free, sometimes for a fee.  But raw data isn’t all that interesting.  A few past addresses, maybe an age or an affiliation. The gold is hiding in how this data is assembled and used for informed conjecture about you. The inferences firms make are of most value to the companies that want to know if you are an “expectant parent” or have a “diabetes interest” or you are a “resolute renter” or you are “handling single parenthood and the stresses of urban life on a small budget.”   Nobody’s telling you that.

It probably wouldn’t matter much if they did.  The vast majority of consumers have never heard of CoreLogic or DataLogix or eBureau or RapLeaf or PeekYou or Recorded Future.  All of them could make records available for free on their websites and it would do no good at all.  Heck, you’d have to tell them all sorts of things about you just so they could find you in their databases. I don’t want to tell RapLeaf about myself, do you?

No matter. It’s crazy how much they already know about you. Of course they have devoured everything Google knows about you, and everything you’ve put on your Facebook page, ever.  Most of them keep it forever, despite the obvious hacker concerns this raises. But even if you’ve kept your online profile low, that probably hasn’t accomplished much. Here’s what the FTC says brokers learn from your offline shopping:

“Data brokers obtain detailed, transaction-specific data about purchases from retailers and catalog companies. Such information can include the types of purchases (e.g., high-end shoes, natural food, toothpaste, items related to disabilities or orthopedic conditions), the dollar amount of the purchase, the date of the purchase, and the type of payment used.”

The data broker report ends with some important suggestions, such as legislation that creates some legal framework for data brokers. There’s even a call for a single web portal that lets consumers find out what all these various companies know. Great, so it’ll be California in 1860 then.   It would be a start, but it’s going to be really, really hard to shove all that data back into Pandora’s Box.

Here’s the most depressing finding in the report: Even if individual brokers allowed you to delete your data from their Cloud, that wouldn’t accomplish anything. Layers and layers of providers are constantly populating their clouds with new scrapes of data. And of course the brokers all sell data back and forth to each other. Whatever was deleted would almost certainly be replaced immediately with the next upload or “onboard.”  The Cloud might be too big already.

It’s important to note what an outlier America is in the world of data brokers. Europe is right now wrestling with its new legal privacy regime, which includes a consumer “Right to be Forgotten.” The right is murky and how it will be implemented is a very open question. But the highest EU court just ruled Google must honor requests that data about individuals be removed from its search engine on request. A Spaniard who was annoyed that a 1998 debt kept popping up in Google results about him had brought the case, and he won.

In America, we don’t even have the right to know what they know about us. When it comes to privacy rights, America is on Mars.

“Forget worrying about loyalty cards or programs: it’s the everyday purchases you make tied to your name with a debit or credit card that can land you on data brokers’ lists,” wrote the World Privacy Forum in an analysis of the FTC report. It called on Congress to go much further than the legislation suggested by the FTC, however.  While the FTC rightly talks about brokers coming out of the shadows and requiring consumer consent, it’s almost unfathomable how these firms could go about gaining meaningful, informed consent. After all, there’s a reason they operate so quietly: if we knew what they were doing, we’d try like holy Hell to stop it.

Or to use the language of the industry, consumers have not yet been persuaded about the value proposition of trading their privacy for… for what again? At least the NSA can claim it’s trying to keep us safe from terrorists. Data brokers are just trying to get money for nothing.

The Snowden effect: Insider threat protection remains elusive

Larry Ponemon

Well-publicized disclosures of highly sensitive information by WikiLeaks and former NSA contractor Edward Snowden have drawn attention to, and concern about, the insider threat posed by privileged users. We originally conducted a study on this topic in 2011 and decided it was time to see if the risk of privileged user abuse has increased, decreased or stayed the same. Unfortunately, companies have not made much progress in stopping this threat since then.

Our latest study, commissioned by Raytheon, “Privileged User Abuse & The Insider Threat,” looks at what companies are doing right and the vulnerabilities that need to be addressed with policies and technologies. One big problem is the difficulty of actually knowing whether an action taken by an insider is truly a threat: 69 percent of respondents say they don’t have enough contextual information from security tools to make this assessment, and 56 percent say security tools yield too many false positives. Here are a few other highlights from the report.

Despite the risks posed by insiders, 49 percent of respondents say they do not have policies for assigning privileged user access. However, slightly more organizations now use well-defined policies that are centrally controlled by corporate IT (35 percent in 2014 vs. 31 percent in 2011).

Is it really an insider threat? Companies often have difficulty actually knowing whether an action taken by an insider is truly a threat. The biggest challenges are a lack of contextual information provided by security tools (69 percent of respondents) and security tools that yield too many false positives (56 percent).

What’s most at risk? While respondents believe general business information and customer information are most at risk in their organizations due to the lack of proper access controls over privileged users (56 percent and 49 percent, respectively), fears about abuse of corporate intellectual property increased dramatically, from 12 percent of respondents to 33 percent.

While the establishment of privileged user access policies is lacking, processes are improving. The findings show a significant increase in the use of commercial off-the-shelf automated solutions for granting privileged user access, from 35 percent of respondents in 2011 to 57 percent in 2014. The use of manual processes, such as requests by phone or email, also increased, from 22 percent of respondents in 2011 to 40 percent in 2014.

Business unit managers are gaining influence in granting privileged user access and conducting privileged user role certification. Fifty-one percent of respondents say it is the business unit manager who most often handles granting access. This is an increase from 43 percent in 2011.

(You can obtain the full report by clicking here.)

Cost of data leaks rising, but there is a ray of hope

Larry Ponemon

Throughout the world, companies are finding that data breaches have become as common as a cold, but far more expensive to treat. With the exception of Germany, companies had to spend more on their investigations, notification and response when their sensitive and confidential information was lost or stolen. As revealed in the 2014 Cost of Data Breach Study: Global Analysis, sponsored by IBM, the average cost to a company was $3.5 million (in US dollars), 15 percent more than last year.

Will these costs continue to escalate? Are there preventive measures and controls that will make a company more resilient and effective in reducing the costs? Nine years of research about data breaches has made us smarter about solutions.

Critical to controlling costs is keeping customers from leaving. The research reveals that damage to reputation and the loss of customer loyalty do the most harm to the bottom line. In the aftermath of a breach, companies find they must spend heavily to regain their brand image and acquire new customers. Our report also shows that certain industries, such as pharmaceuticals, financial services and healthcare, experience high customer turnover. In the aftermath of a data breach, these companies need to be especially focused on the concerns of their customers.

As a preventive measure, companies should consider having an incident response and crisis management plan in place. Efficient response to the breach and containment of the damage has been shown to reduce the cost of breach significantly. Other measures include having a CISO in charge and involving the company’s business continuity management team in dealing with the breach.

In most countries, the primary root cause of the data breach is a malicious insider or criminal attack. It is also the most costly. In this year’s study, we asked companies represented in this research what worries them most about security incidents, what investments they are making in security and the existence of a security strategy.

Here are some bullet points from the study:

  • The cost of a data breach is on the rise. Most countries saw an uptick both in the cost per stolen or lost record and in the average total cost of a breach.
  • Fewer customers remain loyal after a breach, particularly in the financial services industry.
  • For many countries, malicious or criminal attacks have taken the top spot as the root cause of the data breaches experienced by participating companies.
  • For the first time, the research reveals that having business continuity management involved in the remediation of a breach can help reduce the cost.

An interesting finding is the important role cyber insurance can play in not only managing the risk of a data breach but in improving the security posture of the company. While it has been suggested that having insurance encourages companies to slack off on security, our research suggests the opposite. Those companies with good security practices are more likely to purchase insurance.

Global companies also are worried about malicious code and sustained probes, which have increased more than other threats. Companies estimate that they will be dealing with an average of 17 malicious code incidents and 12 sustained probes each month. Unauthorized access incidents have mainly stayed the same, and companies estimate they will be dealing with an average of 10 such incidents each month.

When asked about the level of investment in their organizations’ security strategy and mission, on average respondents would like to see it doubled, from the average of $7 million they think will be spent to the $14 million they would like to spend. This may be a tough sell in many companies. However, our cost of data breach research can help IT security executives make the case that a strong security posture can result in a financially stronger company.

To download the complete report, please use the following link:

www.ibm.com/services/costofbreach