
THAT Facebook study; yes, you should be concerned

There was another round of confused Facebook outrage this month when a story in the Atlantic revealed the social media giant had intentionally toyed with users’ moods, allegedly for science, but in reality, for money. Facebook turned up the sadness dial on some users’ news feeds and found that the sadness was contagious (happiness was, too).

The study that produced the outrage did qualify as science: it was published in the prestigious journal Proceedings of the National Academy of Sciences. And it has plenty of folks scrambling.

What does the nation’s leading privacy researcher, and a frequent but even-handed Facebook critic, think of the Facebook mood manipulation study controversy?  It’s a case of “shooting the messenger,” Alessandro Acquisti of Carnegie Mellon University told me.  It’s also a rare peek “through the looking-glass” at the future of privacy.

Acquisti has been involved in designing hundreds of studies, and he has a deep research interest in privacy, so I asked him for his reaction to the Facebook research dust-up. His response might surprise you.

“The reaction to the study seems like a case of shooting the messenger,” he said. “Facebook (and probably many other online services) engages daily in user manipulation. There is no such thing as a “neutral” algorithm; Facebook decides what to show you, how, and when, in order to satisfy an externally inscrutable objective function (Optimize user interactions? Maximize the time they spend on Facebook, or the amount of information they disclose, or the number of ads they will click? Who knows?) The difference is that, with this study, the researchers actually revealed what had been done, why, and with what results. Thus, this study offers an invaluable wake-up call – a peek through the looking glass of the present and future of privacy and social media.

“Those attacking the study may want to reflect upon the fact that we have been part of the Facebook experiment since the day any of us created an account, and that privacy is much more than protection of personal information —  it is about protection against the control that others can have over us once they have enough information about us.”
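Acquisti’s point that there is no “neutral” algorithm can be made concrete with a toy sketch. Everything below is invented for illustration; the field names and weights have nothing to do with Facebook’s actual systems. The point is only that any feed ranking is a scoring rule with weights someone chose:

```python
# A toy, invented caricature of a feed-ranking objective -- none of
# the field names or weights come from Facebook. Every ranking needs
# some objective, so there is no weight-free, "neutral" version of it.
posts = [
    {"id": 1, "predicted_clicks": 0.9, "predicted_negative_mood": 0.1},
    {"id": 2, "predicted_clicks": 0.4, "predicted_negative_mood": 0.8},
]

def rank(posts, w_clicks=1.0, w_mood=0.0):
    # The "algorithm" is just this scoring rule; change a weight and
    # you change what users see -- and potentially how they feel.
    score = lambda p: (w_clicks * p["predicted_clicks"]
                       + w_mood * p["predicted_negative_mood"])
    return sorted(posts, key=score, reverse=True)

# Optimize for clicks: post 1 wins. Turn up the "sadness dial": post 2 wins.
print([p["id"] for p in rank(posts)])                            # [1, 2]
print([p["id"] for p in rank(posts, w_clicks=0.0, w_mood=1.0)])  # [2, 1]
```

Whoever picks the weights picks the feed; the only question is whether anyone outside the company ever learns what the weights were for.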

Here’s my take on it.  It’s a bad idea to harm people for science.  In many cases it’s merely unethical, not illegal, but it’s a really bad idea.  When harm is unavoidable — say you are trying an experimental drug that might cure a person with cancer, or might kill them faster — scientists must obtain “informed consent.” Now, informed consent is a very slippery idea.  A patient who is desperate and ready to try anything might not really be in a position to give informed consent, for example. Doctors and researchers are supposed to go the extra mile to ensure study subjects *truly* understand what they are doing.

Meanwhile, in many social science studies, informed consent prior to the experiment would wreck the study.  Telling Facebook users, “We are going to try to manipulate your mood” wouldn’t work.  In those cases, researchers are supposed to tell subjects as soon as is feasible what they were up to. And the “harm” must be as minimal as possible.

Everyone who’s conducted research (including me!) has a grand temptation to bend these rules in the name of science — but my research has the power to change the world! — so science has a solution to that problem. Study designs must be approved by an Institutional Review Board, or IRB.  This independent body decides, for example, “No, you may not intentionally make thousands of people depressed in order to see if they will buy more stuff. At least if you do that, you can’t call it science.”


Pesky folks, those IRB members. My science friends complain all the time that they nix perfectly good research ideas.  A friend who conducts privacy research, for example, can’t trick people into divulging delicate personal information like credit card numbers, because that would actually cause them harm.

Facebook apparently decided it was above the pesky IRB. The journal’s editor first seemed to say the research was IRB-approved, and later seemed to say only part of it was, all of which suggests no IRB ever really said, “Sure, make thousands of people depressed for a week.”

And while a billion people on the planet have clicked “Yes” to Facebook’s terms of service, which apparently includes language that gives the firm the right to conduct research, it doesn’t appear Facebook did anything to get informed consent from the subjects. (If you argue that a TOS click means informed consent, send me $1 million. You consented to that by reading this story).

Back to Facebook researchgate.  The problem isn’t some new discovery that Facebook manipulates people. Really, if you didn’t realize that was happening all the time, you are foolish.  The problem is the incredible disconnect between Facebook and its data subjects (i.e., people).  Our petty concerns with the way it operates keep getting in Facebook’s way. We should all just pipe down and stop fussing.

Let’s review what’s happened here. Facebook:

1) Decided it was above the standard academic review process

2) Used a terms of service click, in some cases years old, to serve as “informed consent” to harm subjects

Think carefully about this: What wouldn’t Facebook do? What line do you trust someone inside Facebook to draw?

If you’d like to read a blow-by-blow analysis of what went on here – including an honest debate about the tech world’s “so-what” reaction – visit Sebastian Deterding’s Tumblr page.

Here’s the basic counter-argument, made with the usual I’m-more-enlightened-than-you sarcasm of Silicon Valley:

“Run a web site, measure anything, make any changes based on measurements? Congratulations, you’re running a psychology experiment!” said Marc Andreessen, Web browser creator and Internet founding father of sorts. “Helpful hint: Whenever you watch TV, read a book, open a newspaper, or talk to another person, someone’s manipulating your emotions!”

In other words, all those silly rules about treating study subjects fairly that academic institutions have spent decades writing – they must be dumb.  Why should Silicon Valley be subject to any such inconvenience?

My final point: When the best defense for doing something that many people find outrageous is you’ve been doing it for a long time, it’s time for some soul-searching.

 

Who's in charge at power plants? Many don't know

Larry Ponemon

An unnamed natural gas company hired an IT firm to test its corporate information system. POWER Magazine reported, “The consulting organization carelessly ventured into a part of the network that was directly connected [to] the SCADA system. The penetration test locked up the SCADA system and the utility was not able to send gas through its pipelines for four hours. The outcome was the loss of service to its customer base for those four hours.”

As stories like these become more common, we wanted to study how well utility firms are preparing for what seems like the inevitable: a major, successful attack.  The answer is a mixed bag.

This month, we release the results of Stealth Research: Critical Infrastructure, sponsored by Unisys. The purpose of this research is to learn how utility, oil and gas, alternate energy and manufacturing organizations are addressing cybersecurity threats.

Among the more alarming findings: 67 percent of those surveyed said they’d suffered at least one security compromise, yet one-quarter don’t actually know who’s in charge of security.

As the findings reveal, organizations are not as prepared as they should be to deal with the sophistication and stealth of a cyber threat or the negligence of an employee or third party. In fact, the majority of participants in this study do not believe their companies’ IT security programs are “mature.” For purposes of this research, a mature stage is defined as having most IT security program activities deployed. Most companies have defined what their security initiatives are but deployment and execution are still in the early or middle stages.

Key findings of this research

Most companies have not fully deployed their IT security programs. Only 17 percent of companies represented in this research self-report that most of their IT security program activities are deployed. Fifty percent of respondents say their IT security activities have not yet been defined or deployed (7 percent) or have been defined but are only partially deployed (43 percent). A possible reason: only 28 percent of respondents agree that security is one of the top five strategic priorities across the enterprise.

The risk to industrial control systems and SCADA is believed to have substantially increased. Fifty-seven percent of respondents agree that cyber threats are putting industrial control systems and SCADA at greater risk. Only 11 percent say the risk has decreased due to heightened regulations and industry-based security standards.

Security compromises are occurring in most companies. It is difficult to understand why security is not a top priority, because 67 percent of respondents say their companies have had at least one security compromise that led to the loss of confidential information or disruption to operations over the last 12 months. Twenty-four percent of respondents say these compromises were due to an insider attack or negligent privileged IT users.

Upgrading existing legacy systems may result in sacrificing mission-critical security. Fifty-four percent of respondents are not confident (36 percent) or unsure (18 percent) that their organization would be able to upgrade legacy systems to the next improved security state in cost-effective ways without sacrificing mission-critical security.

Many organizations are not getting actionable real-time threat alerts about security exploits. According to 34 percent of respondents, their companies do not get real-time alerts, threat analysis and threat prioritization intelligence that can be used to stop or minimize the impact of a cyber attack. Among those that do receive such intelligence, 22 percent say it is not effective. Only 15 percent of respondents say threat intelligence is very effective and actionable.

More than half, hit. The majority of companies have had at least one security compromise in the past 12 months. Sixty-seven percent of companies represented in this research have had at least one incident that led to the loss of confidential information or disruption to operations. Twenty-four percent of security incidents were due to a negligent employee with privileged access. However, 21 percent of respondents say they were not able to determine the source of the incident.

Who’s in charge? When asked if their company has dedicated personnel and/or departments responsible for industrial control systems and SCADA security, 25 percent say they do not have anyone assigned. The majority (55 percent) say they have one person responsible.

Out of control. Nearly one-third of respondents say that more than a quarter of their network components, including third-party endpoints such as smartphones and home computers, are outside the direct control of their organization’s security operations.


Snowden is sexy, but this privacy issue is more important

How do you think the big computers in the sky see you?   Are you “Rural Everlasting” or a “Mobile Mixer?” Are you a “Married Sophisticate,” a “Senior Product Buyer,” a “dog owner,” a “winter activity enthusiast,” a “Bible Lifestyle” or an “Affluent Baby Boomer?” Maybe you are “Financially Challenged,” or “Plus size apparel,” or maybe “Exercise- spotty living.”  Heaven forbid, you might have a “Diabetes Interest” or a “Cholesterol Focus.”

Data brokers with names you’ve never heard have decided which of these categories you fit into, and they use that information for everything from targeted online ads to denying purchases over fraud concerns to helping suspicious relatives check up on you.

How would these firms know all this about you?  The stores you shop at tell on you.  Not just the websites – the brick and mortar stores. Swipe your credit card at a department store, and you’ve become a profitable data point for retailers to sell and resell to the highest bidder.  Buy an organic pepper, and you might be dumped into “New Age/Organic Lifestyle.”  And who knows what other conclusions the Cloud might come to about you.

While America remains wrapped up in the personal story of NSA leaker Edward Snowden, more critical and personal privacy invasions go on every day, millions of times each day, without any seeming limitations. Even as NBC was drumming up attention for its (impressive) interview with Snowden, the Federal Trade Commission released a crucial report: “Data Brokers: A Call for Transparency and Accountability.” America yawned when it should have gasped.

Americans are obsessed with their credit scores, which is obvious from the number of advertisements you see from firms selling them. People understand that a simple mistake in a credit report might one day cost them the ability to buy a home, and they’ve fought credit report secrecy since at least the 1970s.  The Fair Credit Reporting Act, and its updates, are hardly perfect, but the law at least creates a good amount of tension between credit reporting firms and consumers.

But there is another class of firms that log intimate details about our behavior. Basically any firm that’s not covered by the Fair Credit Reporting Act is known, not as a credit reporting agency, but rather, as a “data broker.” And among data brokers, virtually anything goes.

In the data world, it’s California in 1849. Lawless prospectors aren’t mining for gold, they are mining you.

Last year, the FTC sent records requests to nine of the more important data brokers in an attempt to get a grasp on the industry.  The assertions above and below are based entirely on what these nine firms told the FTC.  These are all actions these firms, among the more reputable, have admitted during a government investigation. Heaven knows what other data brokers are up to.

Imagine a database with information on 700 million consumers worldwide. How much information?  Something near 3,000 data points on each person.  In other words, the data broker Cloud knows 3,000 things about you.  Check that: Acxiom, just one of the nine firms the FTC consulted, knows 3,000 things about you.  The other eight have similar rap sheets on you.

For some time, most consumers have held on to the naïve idea that offline shopping and online shopping are relatively distinct.  We might be very careful which websites we shop at, for example, but don’t think much about the brick and mortar stores where we swipe our plastic. Those days are over. Through a process called “onboarding,” data brokers are increasingly matching “real world” data with virtual data. They are linking in-store purchases with social media accounts, for example, and then changing the ads you see on Facebook based on stores you’ve shopped at recently.
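For the curious, here is a rough sketch of how that matching can work in principle. Hashing a shared identifier, often an email address, is one commonly described technique; every name and record below is hypothetical, not any broker’s actual pipeline:

```python
# A hypothetical sketch of "onboarding" via hashed-email matching.
# All names, emails and records here are invented for illustration.
import hashlib

def hashed_id(email: str) -> str:
    """Normalize and hash an email so two datasets can be joined
    without ever exchanging the raw address."""
    return hashlib.sha256(email.strip().lower().encode()).hexdigest()

# Offline: point-of-sale records, keyed by the email given at checkout
offline_purchases = {
    hashed_id("jane@example.com"): ["organic peppers", "plus-size apparel"],
}

# Online: an ad platform's accounts, keyed the same way
online_accounts = {
    hashed_id(" JANE@example.com "): "user-4412",
}

# The join: any hash present in both datasets links a store receipt
# to an online profile -- and the ads that profile sees can change.
for h, purchases in offline_purchases.items():
    if h in online_accounts:
        print(online_accounts[h], "bought", purchases)
```

Note that no name or address ever has to change hands for the link to be made; the matching key does all the work.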

If credit scores worry you, I have news for you. Credit scores are just the tip of the iceberg.  Data brokers are inventing all kinds of scores used to categorize you and, on occasion, punish you. Fraud prevention is big business for brokers. Based on your real-life shopping activity, they create scores to predict the likelihood your online purchase will result in a chargeback.  You might be surprised to find a transaction rejected and – remember, this is the Wild West – you’ll never know why. You have no right to dispute incorrect facts in the data broker cloud. You don’t even have a right to know what’s there.
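To illustrate how opaque such a rejection can be, here is a made-up, toy version of a chargeback-risk score. The features, weights and cutoff are pure invention, not any vendor’s real model:

```python
# A made-up "chargeback risk" score, for illustration only -- the
# features, weights and cutoff below are invented, not any broker's model.
def chargeback_risk(features: dict) -> float:
    weights = {
        "mismatched_billing_zip": 0.4,
        "new_account": 0.2,
        "high_value_order": 0.25,
        "prior_chargeback": 0.6,
    }
    # Sum the weights of whichever risk flags are present, capped at 1.0
    score = sum(w for f, w in weights.items() if features.get(f))
    return min(score, 1.0)

# A merchant rejects anything over an arbitrary cutoff. The shopper
# never sees the score, the features, or the reason.
order = {"mismatched_billing_zip": True, "high_value_order": True}
if chargeback_risk(order) > 0.5:
    print("Transaction declined.")  # and no one tells you why
```

The shopper in this sketch is declined at 0.65 on a 0.5 cutoff, and, just as in the real Wild West version, has no way to learn which flags fired or whether they were even accurate.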

Meanwhile, the categorization of people into groups should make you feel immediately queasy.  “Urban Scramble” and “Rural Everlasting” can easily be seen as code words for poor minorities and poor whites; the FTC found these categories were over-representative of those groups. Is it fair for certain groups to never receive the offers that Married Sophisticates do? Or to face other, so far hidden, consequences?

That’s the main concern with all this data collection: it’s almost entirely invisible. The FTC report does not pull punches:

“Many of these findings point to a fundamental lack of transparency about data broker industry practices,” it says. “Data brokers acquire a vast array of detailed and specific information about consumers; analyze it to make inferences about consumers, some of which may be considered sensitive; and share the information with clients in a range of industries. All of this activity takes place behind the scenes, without consumers’ knowledge.”

Sure, a couple of the firms involved offer scant access to the raw data they collect on consumers – sometimes for free, sometimes for a fee.  But raw data isn’t all that interesting.  A few past addresses, maybe an age or an affiliation. The gold is hiding in how this data is assembled and used for informed conjecture about you. The inferences firms make are of most value to the companies that want to know if you are an “expectant parent” or have a “diabetes interest” or you are a “resolute renter” or you are “handling single parenthood and the stresses of urban life on a small budget.”   Nobody’s telling you that.

It probably wouldn’t matter much if they did.  The vast majority of consumers have never heard of CoreLogic or DataLogix or eBureau or RapLeaf or PeekYou or Recorded Future.  All of them could make records available for free on their websites and it would do no good at all.  Heck, you’d have to tell them all sorts of things about you just so they could find you in their databases. I don’t want to tell RapLeaf about myself, do you?

No matter. It’s crazy how much they already know about you. Of course they have devoured everything Google knows about you, and everything you’ve put on your Facebook page, ever.  Most of them keep it forever, despite the obvious hacker concerns this raises. But even if you’ve kept your online profile low, that probably hasn’t accomplished much. Here’s what the FTC says brokers learn from your offline shopping:

“Data brokers obtain detailed, transaction-specific data about purchases from retailers and catalog companies. Such information can include the types of purchases (e.g., high-end shoes, natural food, toothpaste, items related to disabilities or orthopedic conditions), the dollar amount of the purchase, the date of the purchase, and the type of payment used.”

The data broker report ends with some important suggestions, such as legislation that creates some legal framework for data brokers. There’s even a call for a single web portal that lets consumers find out what all these various companies know. Great, so it’ll be California in 1860 then.   It would be a start, but it’s going to be really, really hard to shove all that data back into Pandora’s Box.

Here’s the most depressing finding in the report: Even if individual brokers allowed you to delete your data from their Cloud, that wouldn’t accomplish anything. Layers and layers of providers are constantly populating their clouds with new scrapes of data. And of course the brokers all sell data back and forth to each other. Whatever was deleted would almost certainly be replaced immediately with the next upload or “onboard.”  The Cloud might be too big already.

It’s important to note what an outlier America is in the world of data brokers.  Europe is right now wrestling with its new legal privacy regime, which includes a consumer “Right to be Forgotten.”  The right is murky, and how it will be implemented is a very open question. But the highest EU court just ruled that Google must honor requests to remove data about individuals from its search results. A Spaniard who was annoyed that a 1998 debt notice kept popping up in Google results about him had brought the case, and he won.

In America, we don’t even have the right to know what they know about us. When it comes to privacy rights, America is on Mars.

“Forget worrying about loyalty cards or programs: it’s the everyday purchases you make tied to your name with a debit or credit card that can land you on data brokers’ lists,” wrote the World Privacy Forum in an analysis of the FTC report. It called on Congress to go much further than the legislation suggested by the FTC, however.  While the FTC rightly talks about brokers coming out of the shadows and requiring consumer consent, it’s almost unfathomable how these firms could go about gaining meaningful, informed consent. After all, there’s a reason they operate so quietly: if we knew what they were doing, we’d try like holy Hell to stop it.

Or, to use the language of the industry, consumers have not yet been persuaded about the value proposition of trading their privacy for … for what again? At least the NSA can claim it’s trying to keep us safe from terrorists. Data brokers are just trying to get money for nothing.

 

 

The Snowden effect: Insider threat protection remains elusive

Larry Ponemon

Well-publicized disclosures of highly sensitive information by WikiLeaks and former NSA contractor Edward Snowden have drawn attention to the insider threat posed by privileged users. We originally conducted a study on this topic in 2011 and decided it was time to see whether the risk of privileged user abuse has increased, decreased or stayed the same.  Unfortunately, companies have not made much progress in stopping this threat since then. Our latest study, commissioned by Raytheon, “Privileged User Abuse & The Insider Threat,” looks at what companies are doing right and the vulnerabilities that need to be addressed with policies and technologies. One big problem is the difficulty of actually knowing whether an action taken by an insider is truly a threat: 69 percent of respondents say they don’t have enough contextual information from security tools to make this assessment, and 56 percent say security tools yield too many false positives.  Here are a few other highlights from the report. (You can obtain the full report by clicking here.)

Despite the risks posed by insiders, 49 percent of respondents say they do not have policies for assigning privileged user access. However, slightly more organizations now use well-defined policies that are centrally controlled by corporate IT (35 percent in 2014 vs. 31 percent in 2011).

Is it really an insider threat? Companies often have difficulty actually knowing whether an action taken by an insider is truly a threat. The biggest challenges are not having enough contextual information from security tools (69 percent of respondents) and security tools that yield too many false positives (56 percent of respondents).

What’s most at risk? While respondents believe general business and customer information are most at risk in their organizations due to the lack of proper access controls over privileged users (56 percent and 49 percent, respectively), fears about abuse of corporate intellectual property increased dramatically, from 12 percent of respondents to 33 percent.

While the establishment of privileged user access policies is lacking, processes are improving. The findings show a significant increase in the use of commercial off-the-shelf automated solutions from 35 percent of respondents in 2011 to 57 percent in 2014 in granting user access privilege. The use of manual processes such as by phone or email also increased from 22 percent of respondents in 2011 to 40 percent of respondents in 2014.

Business unit managers are gaining influence in granting privileged user access and conducting privileged user role certification. Fifty-one percent of respondents say it is the business unit manager who most often handles granting access. This is an increase from 43 percent in 2011.


Cost of data leaks rising, but there is a ray of hope

Larry Ponemon


Throughout the world, companies are finding that data breaches have become as common as a cold, but far more expensive to treat. With the exception of Germany, companies had to spend more on their investigations, notification and response when their sensitive and confidential information was lost or stolen. As revealed in the 2014 Cost of Data Breach Study: Global Analysis, sponsored by IBM, the average cost to a company was $3.5 million (in U.S. dollars), 15 percent more than last year.

Will these costs continue to escalate? Are there preventive measures and controls that will make a company more resilient and effective in reducing the costs? Nine years of research about data breaches has made us smarter about solutions.

Critical to controlling costs is keeping customers from leaving. The research reveals that reputation and the loss of customer loyalty does the most damage to the bottom line. In the aftermath of a breach, companies find they must spend heavily to regain their brand image and acquire new customers. Our report also shows that certain industries, such as pharmaceutical companies, financial services and healthcare, experience a high customer turnover. In the aftermath of a data breach, these companies need to be especially focused on the concerns of their customers.

As a preventive measure, companies should consider having an incident response and crisis management plan in place. Efficient response to the breach and containment of the damage has been shown to reduce the cost of breach significantly. Other measures include having a CISO in charge and involving the company’s business continuity management team in dealing with the breach.

In most countries, the primary root cause of the data breach is a malicious insider or criminal attack. It is also the most costly. In this year’s study, we asked companies represented in this research what worries them most about security incidents, what investments they are making in security and the existence of a security strategy.

Here are some bullet points from the study:

  • The cost of a data breach is on the rise. Most countries saw an uptick both in the cost per stolen or lost record and in the average total cost of a breach.
  • Fewer customers remain loyal after a breach, particularly in the financial services industry.
  • For many countries, malicious or criminal attacks have taken the top spot as the root cause of the data breaches experienced by participating companies.
  • For the first time, the research reveals that having business continuity management involved in the remediation of a breach can help reduce the cost.

An interesting finding is the important role cyber insurance can play in not only managing the risk of a data breach but in improving the security posture of the company. While it has been suggested that having insurance encourages companies to slack off on security, our research suggests the opposite. Those companies with good security practices are more likely to purchase insurance.

Global companies also are worried about malicious code and sustained probes, which have increased more than other threats. Companies estimate that they will be dealing with an average of 17 malicious code incidents and 12 sustained probes each month. Unauthorized access incidents have mainly stayed the same, and companies estimate they will be dealing with an average of 10 such incidents each month.

When asked about the level of investment in their organizations’ security strategy and mission, on average respondents would like to see it doubled, from the $7 million they expect will be spent to the $14 million they would like to spend. This may be a tough sell in many companies. However, our cost of data breach research can help IT security executives make the case that a strong security posture can result in a financially stronger company.

To download the complete report please use the following link:

www.ibm.com/services/costofbreach

Where trust is currency, we don't want a run on 'the bank'

Bob Sullivan

In the past few months, consumers have been deluged with one reason after another to fear technology and transactions. Target. Neiman Marcus. Michaels.  Millions of stolen credit cards. Millions of passwords leaked and lost by Adobe, and a little less recently, Yahoo. Net users are used to, and perhaps growing numb to, the constant bad news.

Then came Heartbleed.  The most recent scary Internet disaster is much worse than a compromised bank account. Heartbleed turns the very thing that was supposed to keep us safe into our worst technology nightmare. It’s a little like learning that every cop in your city is really working for the mob.  Perhaps better said, it’s like learning that every store you give your credit card to is really a hacker out to steal it.

What are we supposed to do now?  And I don’t mean reset your password, which is a lovely thing to do, but it may help and it may hurt you in this situation, and it doesn’t actually help with the real problem: Trust.  If consumers finally lose trust in our transaction systems, everybody loses. Even the hackers.

“This is the last thing consumers need in the wake of the Target breach and all the other security breaches we have been hearing about,” said Avivah Litan, a security analyst at Gartner who is the loudest voice you’ll hear when there is a big data leak.

To review, Heartbleed is a flaw in the encryption technology used to keep data safely scrambled while it flies around the Internet. You know of it mostly because of those little locks that appear next to web addresses in your browser. A feature designed to keep encrypted connections open over time — by sending a regular “heartbeat” message that lets one computer tell another “I’m still here” — was instead a hacker’s best friend.  Researchers figured out they could craft a heartbeat message that tricked a server into sending back whatever happened to be sitting in its memory. The heartbeat could be made to bleed data. That includes credit cards and passwords and, worse still, the servers’ encryption keys themselves.  A bit like the ominous hacker movie Sneakers, the Heartbleed bug truly meant an end to secrets online.
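The mechanics of the bug are simple enough to sketch in a few lines. The toy simulation below is a loose illustration, not the actual OpenSSL code: the “server” trusts the length field the client claims for its heartbeat payload, so asking for more bytes than the payload actually contains spills whatever sits next to it in memory.

```python
# Toy simulation of the Heartbleed over-read (illustrative, NOT OpenSSL code).
# Server memory is modeled as one buffer: the heartbeat payload ("bird")
# sits right next to unrelated secrets.
SERVER_MEMORY = bytearray(b"bird" + b"...password=hunter2;key=0xDEADBEEF...")

def buggy_heartbeat(claimed_len: int) -> bytes:
    # BUG: echoes claimed_len bytes, trusting the client's length field
    # even when the real payload ("bird") is only 4 bytes long.
    return bytes(SERVER_MEMORY[:claimed_len])

def fixed_heartbeat(payload: bytes, claimed_len: int) -> bytes:
    # FIX: silently drop heartbeats whose claimed length exceeds the
    # actual payload size, echoing only bytes the client really sent.
    if claimed_len > len(payload):
        return b""
    return payload[:claimed_len]

honest = buggy_heartbeat(4)    # -> b"bird": the echo working as intended
leak = buggy_heartbeat(40)     # "bird" plus 36 bytes of adjacent secrets
```

An honest client asks for the 4 bytes it sent and gets them back; a malicious one asks for 40 and walks away with the password and key fragments stored alongside. Repeat the request enough times and, as the researchers found, eventually the leaked memory includes the private keys guarding the whole connection.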

The Heartbleed code is now fixed, and companies are racing to install the fix, and consumers are stumbling through changing passwords and doing the usual “have I been robbed?” inventory on their bank accounts.  Crisis averted.  This time. (Aside: If you have already changed your passwords, you should really change them again in about a month, because there’s no way to know if you updated your security while a hacker still controlled the website you logged into.)

The question has to be asked: How many times can we warn consumers to check their bank account statements carefully? Hanging over the Heartbleed incident, and Target before it, and Yahoo before that, is a dark feeling that the whole thing might not be safe.  Consumers always react to large credit card hacks by saying they will now buy with cash.  Most of the time, data shows, they don’t mean it.  But Target had to admit last quarter that its revenue was materially impacted by the credit card incident.  This is getting serious.

In the credit card world, the response to Target was straightforward. Journalists discovered that U.S. credit cards were a decade behind the times, and folks started pushing to add computer chips to our old-fashioned plastic, using a technology known as EMV. Of course, if EMV were so great, U.S. card issuers would have installed the chips 10 or even 15 years ago. Folks who know credit card security will admit privately that moving to EMV isn’t really much of a solution — fraudsters can just move to other kinds of credit card fraud the chips can’t stop. But there is still a very good reason to add the chips.

Trust.

EMV will make shoppers feel better.  That’s not a placebo. Trust is a very real thing.  In fact, it’s the only thing.

If — when? — consumers finally get fed up by all the bad news, and a real trust gap arises, lots of people are going to lose lots of money.  When a consumer pays for something with a $20 bill instead of swiping a card, at least 4 different entities miss out on getting a cut of that transaction. Trust means you don’t think, you just pull out your plastic. A trust gap means, perhaps, you don’t bother logging into that website and changing your password, you simply go somewhere else.

In other words, trust is basically the currency of our time.  A tipping point on trust would create the equivalent of a run on a bank during a currency crisis.  Lack of trust can snowball.  With each “withdrawal,” the trust gap only grows.

In the credit card world, only comprehensive changes to the entire, end-to-end system of payments will really take a bite out of crime. I recently spoke to Visa’s Chief Risk Officer, Ellen Richey, who told me that a move to chip cards should be accompanied by new technology that makes online credit card fraud more difficult.

We don’t need to plug a hole in the dam with our thumb, we need a new dam.

This same thinking needs to govern online transactions, and privacy in general. It’s terrible that folks around the world are being told, in rather panicked tones, “CHANGE ALL YOUR PASSWORDS!”  But it’s even more terrible that most of our digital and financial lives are guarded only by 50-year-old technology involving eight upper- or lower-case letters and maybe a number or two. Two years ago, after a series of high-profile password list leaks from sites like LinkedIn, experts proclaimed the password dead.  Heartbleed proves it’s more like a vampire that seems to live forever and comes out to threaten us once in a while.
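To put rough numbers on why an eight-character password is such thin armor, here is a back-of-the-envelope sketch. The guess rate is an illustrative assumption for an offline cracking rig, not a measured figure; real speeds depend heavily on how the site hashed its passwords.

```python
def search_space(alphabet_size: int, length: int) -> int:
    """Total number of possible passwords for a given alphabet and length."""
    return alphabet_size ** length

lower_only = search_space(26, 8)            # 8 lowercase letters
mixed = search_space(26 + 26 + 10, 8)       # upper + lower + digits

# Assume 10 billion guesses per second (illustrative offline cracking rate).
GUESSES_PER_SEC = 10_000_000_000

seconds_lower = lower_only / GUESSES_PER_SEC        # roughly 21 seconds
hours_mixed = mixed / GUESSES_PER_SEC / 3600        # roughly 6 hours
```

Even the “strong” mixed-case-plus-digits variant falls in hours under these assumptions, which is why the push toward tokens, fingerprints and one-time passcodes isn’t hyperbole.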

Litan, the Gartner analyst, has some good news about Heartbleed.  Remember, this is a flaw discovered by good guys, not an active crime (like Target). That means the damage can be contained, and she thinks it will be. This time.

“I don’t think this is an uncontrollable disaster,” she said. “It’s manageable and as long as the companies who use this version of OpenSSL act responsibly – i.e. patch and secure their systems and ask users to change passwords – we are OK.  There is no evidence that the criminals have used this attack vector yet.  And if these security steps are taken and upgrades are made – they won’t be able to.”

So, there’s no run on the trust bank this time.  But I guarantee that consumer patience is not infinite.  We can only come up with so many variations of our pets’ names. Tokens? Fingerprints? Disposable passcodes?  Something needs to change before we ask users to invent new passwords one time too many, and the trust gap swallows up the whole thing.

Just how safe is Sochi?

Bob Sullivan

No doubt, you’ve already seen all the complaints from journalists in Sochi about sub-standard bathroom facilities.  Heck, a dear friend was locked *inside* her hotel room on her first day reporting there.  These are funny stories, but they can sound a bit like first-world problems.

I’m worried about something much more serious happening during the next three weeks, and I have enough friends there that it’s personal. Not surprisingly, we’ve already learned that visitors to Sochi should expect their entire lives to be hacked. Indeed, the Committee to Protect Journalists cites a Russian government decree published in the state newspaper in November which announces the government’s intention to collect metadata on all telecommunications. (Question: Is that better or worse than what the NSA does?)  And NBC’s Richard Engel demonstrated this week how his cell phones were hacked.

When Russians say they need to pry to keep Sochi safe, they aren’t inventing reasons. There are many credible threats of terrorism at the Games.

  • Chechen rebel leader Doku Umarov — some experts call him the Russian bin Laden — called for attacks on Sochi last summer.  Suicide bombings in Volgograd (formerly Stalingrad) in December that killed 40 people show the threats are real, even if the connection between the attacks and Umarov is tenuous.
  • This week, the U.S. Department of Homeland Security warned airlines flying into Russia that bombs might be concealed in toothpaste tubes or cosmetic cases.
  • U.S. athletes have been told not to wear U.S. logos outside the Olympic Village. Many athletes chose to leave their families at home.
  • And there are real threats of kidnappings, too — this week, two Austrian athletes were directly threatened in a letter sent to the Austrian Olympic Committee.

Until figure skating and hockey heat up, you will hear more and more about the threat of terrorism in Sochi. So for some level-headed analysis of the real threat, I turned to Charles Hecker, director of global research and Russia expert at Control Risks, a private global security consultancy.  Here’s what Hecker told me.

“There is this ‘cordon sanitaire’ (secure perimeter – Russians are calling it a Ring of Steel) around the area. There is extensive surveillance—including underwater sonar—and in the air and through the electronic waves, every single move that anybody makes in and around Sochi is going to be monitored and recorded,” he said. “There hasn’t been this sort of peacetime security effort in Russia—or in too many other places, frankly—as we’re seeing now down in the North Caucasus and Southern Russia. This is the ultimate test of Russia’s capability.”

Expect Russia to spare no expense — or at least no civil liberty — while monitoring for potential threats, he said. Any family or employee in Sochi should expect everything they do to be watched.

He did offer this comforting message to those worried about direct attacks on Sochi during the Games.

“The security of the games and the Olympic Games sites should be pretty well taken care of, barring something none of us can anticipate,” he said. “There is very little—in fact no—precedent in Russia for terrorist attacks being aimed specifically at tourists and visitors. Almost all of the terrorist activity in Russia has been aimed at government targets and at infrastructure targets.”

Islamic separatists believed to be loyal to Umarov have recently attacked train stations and an airport, for example. And while Umarov lifted an alleged ban on attacking civilians in July while calling for attacks on the Olympics, his ability to execute on such threats is unclear. A security report issued by Control Risks in January makes clear that Caucasus Emirate, the group Umarov leads, is “not a military organization with a reliable line of command.”  Any attacks would be planned and carried out “locally and autonomously.”

Russia and Vladimir Putin have every incentive to prevent an embarrassing attack, Hecker noted.

“Forget about it as a sporting event, the Olympics in Russia are far more than that. This is Russia’s attempt at imprinting an entire new image of itself on the world,” he said.

Attacks in other areas of Russia during the Games — in Moscow, St. Petersburg, or other large cities outside Sochi — are more likely, Control Risks says.

But even without an attack, the separatists might be able to claim victory anyway, argues Uval Mond, in an opinion piece that appeared this week in The Times of Israel.

“Before the games even begin, Umarov’s threats have succeeded in generating anxiety to the level of real panic, which has fueled an international debate over the security situation in Russia and the authorities’ ability to guarantee the safety of the visiting athletes and fans,” he wrote. “This arch-terrorist has positioned himself as a geostrategic player whose presence is definitely troubling the sleep of one of the most powerful world leaders. That alone is a victory for Doku Umarov.”

Congress: The risks at HealthCare.gov are real

Larry Ponemon

I have been asked to testify about the possibility of identity theft on the Healthcare.gov website and the potential consequences to the American public. Identity theft and medical identity theft are not victimless crimes and affect those who are most vulnerable in our society – such as the ill, elderly and poor.

Beyond the numerous empirical studies I have done on this topic, this is an issue that struck home personally. Last year my 88-year-old mother, who lives in Tucson, suffered a stroke. She was rushed to the hospital and admitted. Unbeknownst to her, an identity thief was on the premises and made photocopies of the driver’s license, debit card and credit card she had in her purse. The thief was able to wipe out her bank account, and there were charges on her credit card amounting to thousands of dollars. In addition to dealing with her serious health issues, she also had to cope with the stress of recovering her losses and worrying about more threats to her finances and medical records.

The situation with my mother in the hospital and those who are sharing personal information on the healthcare.gov website are not dissimilar. My mother had a reasonable expectation that the personal information she had in her wallet would not be stolen – especially by a hospital employee.  Those who visit and enroll in healthcare.gov also have an expectation that the people who are helping them purchase health insurance will not steal their identity. They also have a reasonable expectation that all necessary security safeguards are in place to prevent cyber attackers or malicious insiders from seizing their personal data.

In my opinion, the controversy regarding security of the healthcare.gov website is both a technical and emotional issue.  In short, security controls alone will not ease the public’s concerns about the safety and privacy of their personal information.  Based on our research, regaining the public’s trust will be essential to the ultimate acceptance and success of this important initiative.

Following are some key facts that we have learned from our consumer research on privacy, data protection and information security:

First, the public has a higher expectation of the protection of their personal information when using or browsing government websites such as the USPS or IRS than when accessing commercial websites such as Amazon.com or ebay.com.

Second, the loss of one’s identity can destroy a person’s wealth and reputation.  Further, the compromise of credit and debit cards drives the cost of credit up for everyone, thus making it more difficult for Americans to procure goods and services.

Third, medical identity theft negatively impacts the most vulnerable people in our nation. Beyond financial consequences, the contamination of health records caused by imposters can result in medical misdiagnosis and in extreme cases could be fatal. Because there are no credit reports to track medical identity theft, it is nearly impossible to know you have become a victim.

Based on our Institute’s research, I would like to recommend a three-part approach to raising the trust and confidence of Americans when using healthcare.gov.

  • First is accountability. It is important to demonstrate to the public that the government is accountable for the security of the information and can be trusted. This translates into standards that do not just meet basic practices but exceed them to ensure the website is safe and secure. As an example, one requirement should be to encrypt all personal data at rest in backend systems.
  • Second is ownership by the CEO. In this case it is the president of the United States who should take ownership of the website and ensure good security and privacy practices are met as a priority.
  • Third is independent verification, or audit, of the website to ensure all areas and underlying systems meet high security standards.

This is an excerpt of Congressional testimony Larry Ponemon recently gave before the House Committee on Science, Space and Technology