Author Archives: Bob Sullivan

The Sony hack, and why your email might be next

Bob Sullivan

The Sony hack reminds me of chaos theory in the hacking world. Yes, you should be very afraid of what’s happening at Sony right now. Here’s why.

Four years ago, I wandered the halls at the giant RSA security conference collecting scuttlebutt. Companies spend thousands, even millions of dollars, to make a splash at the annual geek-fest, but on this day, one company completely stole the spotlight. For free. And no one was jealous, because on that day, no one wanted to be government contractor HB Gary.

Hackers calling themselves members of the Anonymous group had hacked HB Gary servers, stolen the firm’s email, then made it public for all the world to see. Days of embarrassment and nightmarish news followed, from exposure of a less-than-comfortable relationship with Bank of America to incredibly uncomfortable personal emails from workers.

At the time, the smartest geeks on the planet were terrified over the news. These folks weren’t afraid of hackers hell-bent on stealing their intellectual property or their financial information. Most of them had fought off those attacks for decades. What they feared was chaos. The HB Gary hackers weren’t after money. They wanted revenge. And computer criminals who simply want to destroy things are the most frightening. Publishing entire email spools stolen from company servers gains hackers almost nothing. But it exposes everyone inside a company, and everyone who ever communicated with any of those workers, to tremendous embarrassment, or worse. It creates chaos.

It’s an unpopular thought, but it’s true: There is no absolute security. Spend money and time protecting this, and you will leave that vulnerable. That’s how it works at airports, and that’s how it works in networks. Folks who protect digital assets for a living are constantly making trade-offs. Email is often one of those trade-offs. Most energy is focused on protecting money. A lot of energy is focused on protecting intellectual property. Four years ago, Anonymous realized email servers are often neglected. And they realized just how much chaos they could cause by publishing, and indexing for easy discovery, HB Gary’s email.

Back then, every security professional I knew had two burning questions in mind. One: Was I in HB Gary’s email? And two: What about my email server? What would happen if someone published all my company’s email? How many ‘secret’ job searches, sexist or racist jokes, or illicit affairs might be exposed with an email dump?

There was a great chill in the entire profession. People imagined the worst.

Now, the worst has happened. Execs have been forced to apologize to President Obama for racist comments. Sony has lawyers running around threatening journalists not to publish bits and pieces of upcoming movie scripts. Journalists have been exposed for too-cozy chats with sources. Heck, Aaron Sorkin is actually attacking not the hackers, but those who even looked at what was hacked.

Revenge. Chaos. A crisis that seems without end. Mission Accomplished.

Perhaps, these hackers ultimately have money in mind. Perhaps they are state-sponsored. Perhaps the attack is purely politically motivated. We’ll probably never know, though most certainly, someone in the middle of this simply wants money.

But clearly, the criminals here were out to wreak havoc. Folks who just want to break things are pretty hard to stop. And now the playbook, first established four years ago, has been darn near perfected. Out folks’ private communications, let curious onlookers go to town, and you have a full-fledged techno-disaster on your hands. The point can’t be overstated: In both HB Gary and Sony, hackers exposed their target companies and potentially anyone who had ever emailed with their employees. Publish the email of a big enough company, and you might very well expose a majority of Americans in one hack.

Stealing secrets and dumping them online is the hateful practice of “doxxing” — exposing private parts of victims’ lives online, such as their home address, with the intent to invite harassment — writ large. It’s pretty hard to stop doxxing. You should all just hope no one ever finds a reason to do it to you. And it’s almost as hard to stop doxxing on a massive scale. Yes, shutting down a power plant, or a similar critical infrastructure hack, could be a horrible disaster. But I think this kind of chaos might ultimately be more damaging to the U.S. It’s certainly easier to fashion.

What’s the lesson here? I’ve said forever that any time you type anything into any kind of keyboard, you should be prepared for the world to see it one day, even if you think your communication is private. That’s good advice, but it has its limits. For starters, we all use chat tools, texts, and even email as casually as we talk now. It’s pretty hard to remember that you are always one co-worker’s stupid click away from your chatter being exposed to the world. A private note with one comment that could be described as racist, sexist, even elitist, said to one person, could seriously tarnish your career or legacy. In that world, being 99.9 percent careful just isn’t good enough.

But the problem is scarier than that. Standards change all the time, but servers are forever. Imagine if we could read long email chats between political or corporate figures from 25 or 50 years ago. They’d all sound awful. It’s really, really hard to predict what something you say today might sound like 10 or 20 years in the future. The old “out of context” explanation doesn’t work any more. This is why the world of pack-rat programming alarms me. Companies (in the U.S.) reflexively save every piece of data for as long as possible. It will be the radioactive fallout of our time. We haven’t even begun to digest the implications of that.

Sony is a pretty good hint, however. Be very, very careful what you type.

 

The seven reasons consumers still care about privacy

Larry Ponemon

“Consumers’ Perceptions about Privacy & Security: Do They Still Care?,” a study conducted by Ponemon Institute and sponsored by RSA, is intended to understand what consumers think about privacy and information security. Specifically, how have recent mega-breaches affected consumer behavior and attitudes about privacy? Moreover, is the constant sharing of personal information online and with mobile apps diminishing the importance consumers place on their privacy?

We surveyed 1,020 consumers in the United States between the ages of 18 and 65+. Forty-nine percent of respondents say they have been victims of at least one data breach. However, 45 percent are not confident that they know of all instances when their personal information was lost or stolen in a data breach.

Read the entire study (PDF)

Based on the findings, we conclude that consumers perceive a loss of control over their personal information because of data breaches, a lack of trust in the security of the mobile apps they continue to use, and increased government surveillance. However, they still believe the privacy and security of their personal information is important.

The following seven findings reveal why consumers still care about privacy:

Privacy rights are believed to be at risk. Seventy-five percent of respondents say they are very concerned that they will lose their privacy rights as the Internet progresses into the future.

Privacy and security expectations are high for financial transactions. No matter what their privacy profile is, respondents have high expectations for privacy and security when filing a tax return, making mobile payments or banking.

Privacy and security on the Internet and when using social media is important.  Respondents are spending an average of 56 hours per week on the Internet and 27 hours using social networks, social messaging and other social media tools. They rate the importance of the security and privacy of these activities as very high.

Prompt data breach notification is important. Seventy-seven percent of respondents say prompt notification about the loss or theft of their personal information is either very important (56 percent) or important (21 percent).

Respondents worry about the theft of certain information. Most respondents are concerned about the theft or misuse of their Social Security numbers, passwords or PIN and payment information such as credit card number.

Strong online authentication procedures are very important. Fifty-four percent strongly agree or agree that the websites they use have strong authentication procedures that can be trusted to safeguard their sensitive or confidential information. They also do not trust systems or websites that only rely on passwords to identify and authenticate users or consumers (62 percent). Similarly they do not trust systems or websites when identity and authentication procedures appear too easy (62 percent of respondents).

Biometric authentication methods are viewed favorably. Seventy-eight percent of respondents say they would prefer authentication procedures that verify their identity without requiring them to share personal information such as a name, address, email and so forth.

 

‘We’ve lost control,’ say 9 out of 10 Americans

You couldn’t get nine out of 10 Americans to agree that the sky is blue. So it’s remarkable that nine out of 10 say they have lost control over how their personal information is collected and used by corporations, a new survey released Wednesday by the Pew Research Center has found. Virtually the same number feel like it would be “very difficult” to remove inaccurate information about them online. And roughly two-thirds believe the government should do more to regulate advertisers and how they use personal information.

On the other hand, more than half said they were willing to share “some information” about themselves in order to use online services for free, and about one-third say that surveillance can be beneficial for society.

The results show Americans’ feelings about privacy are varied and subtle, said Lee Rainie, director of the Internet Project and a co-author of the study.

“Far from being apathetic about their privacy, most Americans say they want to do more to protect it,” Rainie said. “It’s also clear that different types of information elicit different levels of sensitivity among Americans.”

The slew of data breaches at major retailers over the past year have put privacy concerns front and center in Americans’ minds. Credit monitoring, transaction alerts and general vigilance about where you share your data and who you share it with are all part of keeping your data footprint limited. It won’t necessarily prevent identity theft or fraud (two consequences of sharing your personal information broadly), but it can make dealing with them easier. Any large, unexpected changes in your credit score could be signs of new-account fraud. (You can use free online tools – including those at Credit.com – to monitor your credit scores for any changes. You can also get free credit reports once a year at AnnualCreditReport.com.)

Other findings in the poll, which questioned a representative cross section of Americans:

  • Few feel anonymity is easy to achieve online. Just 24% of adults “agree” or “strongly agree” with the statement: “It is easy for me to be anonymous when I am online.”
  • 61% of adults “disagree” or “strongly disagree” with the statement: “I appreciate that online services are more efficient because of the increased access they have to my personal data.”
  • 80% of those who use social networking sites say they are concerned about third parties like advertisers or businesses accessing the data they share on these sites.
  • 70% of social networking site users say that they are at least somewhat concerned about the government accessing some of the information they share on social networking sites
  • Generally, people trust old technology more than new for privacy. People trust old-fashioned telephones more than social media or text messages, for example. They even trust landline phones more than cellphones.
  • 36% “agree” or “strongly agree” with the statement: “It is a good thing for society if people believe that someone is keeping an eye on the things that they do online.”

Privacy law expert Chris Hoofnagle, who teaches at Berkeley Law and reviewed the study, noted that attitudes about surveillance were linked to citizens’ education levels.

“A sizable minority agrees with the idea that surveillance is beneficial for society. This group was characterized as younger and less well educated, with each step in more education resulting in less agreement of its beneficence,” he said. “I think there are very interesting class dynamics in privacy, and the Digital Trust Foundation is going to start funding research around this question in 2015. A question to ask here is: why does this group find beneficence in surveillance? Could it be because they are heavily surveilled and simply do not have a choice over the matter?”

Here are a few more of Hoofnagle’s observations.

“Trust in communications channels is based both on the age of technology and legal protections. The oldest and most legally protected technology (ECPA warrant standard) is the landline phone, followed by wireless phones. Email and text go over the wire in plain text, making them technologically inferior, and they are less protected as they fall under the SCA. Chat is the strange one—it is a newer technology, and so perhaps less trusted for that reason. But some chat is very strongly protected (iMessage).  And of course, no one should feel secure on social media sites because Facebook is crawling with investigators and Facebook itself is a privacy threat.

“Finally, many others have found that Americans are skeptical of both private-sector and government collection of information. (These) results are consistent with surveys going back to the 1980s that find distrust of both government and commercial data practices.”

 

Mobile security: 30 percent of firms say they have none

Larry Ponemon

Organizations seem to be willing to sacrifice security to realize the benefits of a more efficient workforce that is “always connected.” A much better, but challenging, approach is to adopt a mobile strategy with technologies that enable employees to work efficiently without putting confidential information at risk. Strategies also need to include training and awareness programs to counter employees’ negligence and tendency to ignore security procedures. The research also reveals that the biggest barrier to achieving an effective mobile security strategy is employee resistance.

Ponemon Institute is pleased to present the findings of Security in the New Mobile Ecosystem, commissioned by Raytheon. The purpose of this research is to examine the impact of mobile devices, mobile apps and the mobile workforce (a.k.a. mobile ecosystem) on the overall security posture of organizations in the United States. In the context of this research, mobile devices are smartphones and tablets.

We surveyed 618 IT and IT security practitioners who are involved in their organizations’ mobile and enterprise security activities. Most of the respondents are engaged in implementing enterprise security (65 percent of respondents), managing mobile technologies and platforms (55 percent of respondents) and setting mobile strategy (47 percent of respondents).
Following are key takeaways from this research:

End-user productivity drives growth of mobile devices in the workplace. Sixty-one percent of respondents say mobile devices increase productivity, which is an incentive for employees to use them and organizations to encourage their use. According to the research, on average one-third of employees use mobile devices exclusively to do their work and this is expected to increase to an average of 47 percent of employees in the next 12 months.

More mobile devices must be managed but budgets fail to keep up with the growth. The typical organization represented in this study must manage an average of almost 20,000 mobile devices, and this is expected to increase to an average of 28,000 in the next 12 months.

Only 36 percent of respondents say they have a budget sufficient to deal with the explosive growth of mobile devices. The average budget that is considered adequate is approximately $5.5 million annually – or $278 per managed device.
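The per-device figure above is just the annual budget divided by the device count. As a minimal sketch (variable names are mine; the figures are the survey’s rounded averages, which is likely why round numbers land at $275 rather than the report’s $278, presumably computed from an unrounded device average):

```python
# Per-device mobile security budget, using the study's rounded averages.
annual_budget = 5_500_000      # "adequate" annual budget, USD
managed_devices = 20_000       # average devices managed today
projected_devices = 28_000     # average expected in 12 months

per_device_now = annual_budget / managed_devices
per_device_projected = annual_budget / projected_devices

print(f"Per device today:     ${per_device_now:,.0f}")        # ~$275
print(f"Per device next year: ${per_device_projected:,.0f}")  # ~$196
```

Note the squeeze the projection implies: if budgets stay flat while device counts grow to 28,000, per-device spending drops by nearly a third.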

Security is sacrificed for productivity. The majority of respondents (52 percent) say security practices on mobile devices have been sacrificed in order to improve employee productivity. Moreover, 60 percent believe employees have become less diligent in practicing good mobile security. The two biggest mobile security risks are malware infections and end-user negligence.

Security in the new mobile ecosystem is critical. Thirty percent of respondents say their organizations have no mobile security features in place. However, 75 percent say it is important to secure employees’ mobile devices. A virtualized solution is popular with 57 percent of respondents. The methods most often used to secure mobile devices are mobile device management and secure containers.

To receive the full report, click here.

Dancing in the Dark with your data

Larry Ponemon

Here’s a surprise: The uncertainty about the location of sensitive and confidential data is more of a worry than a hacker or malicious employee.

We surveyed 1,587 global IT and IT security practitioners in 16 countries (the research was sponsored by Informatica). A list of participating countries is presented in the appendix of this report. To ensure a knowledgeable and quality response, only IT practitioners whose job involves the protection of sensitive or confidential structured and unstructured data were allowed to participate.

For purposes of this research, data-centric security assigns a data security policy at creation and follows the data wherever it gets replicated, copied or integrated, independent of technology platform, geography or hosting platform. Data-centric security includes technologies such as data masking, encryption, tokenization and database activity monitoring. This research reveals that automated solutions would help improve an organization’s compliance and data protection posture.

Key findings of this research:

1. Data in the dark keeps IT practitioners up at night. Fifty-seven percent of respondents say not knowing where the organization’s sensitive or confidential data is located keeps them up at night. This is followed by 51 percent who say migration to new mobile platforms is a concern.
2. Sensitive or confidential data is often invisible to IT security. Only 16 percent of the respondents believe they know where all sensitive structured data is located, and a very small percentage (7 percent) know where unstructured data resides.
3. Organizations mainly rely upon the classification of sensitive data to safeguard data assets. The two most popular technologies for structured data are sensitive data classification and application-level access controls. Only 19 percent say their organizations use centralized access control management and entitlements, and 14 percent use file system and access audits.
4. Automated sensitive data-discovery solutions are believed to reduce the risk to data and increase security effectiveness. Despite the positive perception about automated solutions, 60 percent of respondents say they are not using automated solutions to discover where sensitive or confidential data is located. Of the 40 percent of respondents who say their organizations use automated solutions, 64 percent say they use them for discovering where sensitive or confidential data is located in databases and enterprise applications. Only 22 percent use them to discover data in files and emails.
5. Specific automated solutions would improve the organization’s compliance and data protection posture. The most popular capabilities are automated user access history with real-time monitoring, followed by policy workflow automation.

To read the rest of the report, click here.

What? *Another* replacement credit card? Why database hacks are becoming a real, and costly, hassle

Bob Sullivan

“You’re not liable for any fraudulent charges.” It’s a cheery phrase you’ve seen or heard dozens of times lately, usually said to help ease the blow of bad news: Your credit card has been hacked.  “But don’t worry!  A new card is on its way!  Everything is fine! Smiley face. =-)”

You recognize the language. It means you’ve been “Home Depot’d.”  Or “Target’d.” Or “Michael’d.”

And you know everything isn’t fine.

Consumers might be weary of news stories chronicling multi-million-account hackings at major retailers like Target or Home Depot, but they are even more tired of the fallout: two, three, even four cards replaced in recent months, each one bringing with it a separate set of hassles and payment mixups.

Let’s call it, “Card replacement fatigue.” Consumers are starting to get pretty restless about all the new plastic they are getting in the mail.

(I am carrying three versions of the same card in my wallet right now as I sort through which one is the right one to use.  Both replacements arrived while I was traveling, hence the confusion).

“My credit card has been replaced 3 times this summer – I’m over it,” complained Melanie Web-Stelter. “I’m considering going back to checks and cash.”

Murray Lahn has had it even worse.

“At one point about 2 years ago, I went through 5 Mastercards in 20 months, and my most recent one was replaced just weeks ago, before the Home Depot breach,” Lahn said. “I feel like I’m the king of card replacements.”

Most consumers are delighted to know their bank is looking out for them. In fact, customer satisfaction ratings are high for phone calls warning that a consumer’s card might have been used for fraud. Even new cards can provide some of that halo effect, partly offsetting the $5-to-$10-per-card price tag of a reissue.

But there’s a limit to the goodwill that can be earned with mass card cancellations, and it appears we are nearing that limit. There can be real costs associated with suffering a credit card hack: not from the bank, or the fraud, but the hassle.

Automated payments are the best way to make sure the bills are paid and there are no late fees. Consumer advocates (like me!) recommend using credit cards for lots of recurring bills — the electricity, the cell phone, the cable, and of course automated toll payments — as a way to simplify your financial life. It’s not so simple, however, when a bank gives you a new account number and you have to update all your automated payments. Sure, you can look at last month’s statement and pluck them out, but what if you miss one? Then the bank’s no-liability fraud policy won’t protect you from late fees.

And while many consumers say calling firms to update account information isn’t that much of a hassle, others report crazy situations.

“Time Warner Cable’s billing system … according to a customer rep has not been updated for decades,” said Dayle Henshel.  “Credit card changes, anything other than new expiration dates, are effectively hand-entered into their system and take 4-8 weeks to propagate into the system.”

Then, there’s EZ-Pass.

“Had to turn around on the Chesapeake Bay bridge/tunnel because EZ Pass triggered a reload on the old card number,” said Ron Urbanski. “After paying cash, we were able to update our account on the iPhone to allow us to pay the next tolls.”

In an informal poll, plenty of folks indicated their bank or credit union helped smooth the automated payment transition process, easing the pain considerably. Still, there is work involved — work consumers must do through no fault of their own.

“Got a letter from Chase identifying vendors that I interact with that I should contact, based on recurring charges to the account that may be auto-pay or subscriptions,” said Mark Ladisky. “Helpful, but I had to do the legwork.”

And there is one more hidden victim in the “victimless” crime of a massive credit card database hack: charities.

“I work with a little public radio station that’s pushing monthly ‘sustainer’ membership. More and more cards get declined due to replacements,” bemoaned Tom Lucci. “It’s a lot of extra time – that we don’t have – to track down new card info. Obviously we can’t charge a late fee or report to the credit bureaus. So if you do get breached, reach out to any nonprofits where you’re a sustaining contributor. Right thing to do, much appreciated.”

 

Do cloud breaches cost more?

Larry Ponemon

Can a data breach in the cloud result in a larger and more costly incident? In short, yes. The more places where data resides, the harder it is to control, and the more it costs to clean up a compromise. The cloud multiplier calculates the increase in the frequency and cost of a data breach based on the growth in the use of the cloud and the uncertainty as to how much sensitive data is in the cloud.

We surveyed 613 IT and IT security practitioners in the United States who are familiar with their company’s usage of cloud services. The majority of respondents (51 percent) say on-premise IT is equally or less secure than cloud-based services. However, 66 percent of respondents say their organization’s use of cloud resources diminishes its ability to protect confidential or sensitive information, and 64 percent believe it makes it difficult to secure business-critical applications.

As shown in more detail in this report, we consider two types of data breach incidents to determine the cloud multiplier effect. We found that if the data breach involves the loss or theft of 100,000 or more customer records, instead of an average cost of $2.37 million it could be as much as $5.32 million. Data breaches involving the theft of high value information could increase from $2.99 million to $4.16 million.
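To see where the “multiplier” language comes from, here is a quick sketch dividing each cloud-inflated cost by its baseline (the dollar figures are the report’s averages; the scenario labels and variable names are my own shorthand, not the report’s):

```python
# Implied "cloud multiplier" for the two breach scenarios in the report.
# Costs are average breach costs in millions of USD (baseline, with cloud effect).
scenarios = {
    "100,000+ customer records": (2.37, 5.32),
    "high-value information":    (2.99, 4.16),
}

for name, (baseline, with_cloud) in scenarios.items():
    multiplier = with_cloud / baseline
    print(f"{name}: ${baseline}M -> ${with_cloud}M (x{multiplier:.2f})")
```

In other words, the customer-record scenario roughly doubles in cost (about 2.2x), while the high-value-information scenario grows by roughly 40 percent.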

Faith in cloud providers is not what it should be.

A lack of knowledge about the number of computing devices connected to the network and enterprise systems, software applications in the cloud and business-critical applications used in the cloud workplace could be creating a cloud multiplier effect. Other uncertainties identified in this research include how much sensitive or confidential information is stored in the cloud.

For the first time, we attempt to quantify the potential scope of a data breach based on typical use of cloud services in the workplace, or what can be described as the cloud multiplier effect. The report describes nine scenarios involving the loss or theft of more than 100,000 customer records and a material breach involving the loss or theft of high-value IP or business confidential information.

When asked to rate their organizations’ effectiveness in securing data and applications used in the cloud, the majority (51 percent) of respondents say it is low. Only 26 percent rate the effectiveness as high. Based on their lack of confidence, 51 percent say the likelihood of a data breach increases due to the cloud.

Key takeaways from this research include the following:
* Cloud security is an oxymoron for many companies. Sixty-two percent of respondents do not agree or are unsure that cloud services are thoroughly vetted before deployment. Sixty-nine percent believe there is a failure to be proactive in assessing information that is too sensitive to be stored in the cloud.
* Certain activities increase the cost of a breach when customer data is lost or stolen. An increase in the backup and storage of sensitive and/or confidential customer information in the cloud can cause the most costly breaches. The second most costly occurs when one of the organization’s primary cloud services providers expands operations too quickly.

* Certain activities increase the cost of a breach when high-value IP and business confidential information is lost or stolen. Bring Your Own Cloud (BYOC) results in the most costly data breaches involving high-value IP. The second most costly is an increase in the backup and storage of sensitive or confidential information in the cloud. The least costly occurs when one of the organization’s primary cloud providers fails an audit concerning its inability to securely manage identity and authentication processes.

Why should Big Data have more right to privacy than people?

Bob Sullivan

WASHINGTON, D.C. — What if we treated data with the same scrutiny as people? When a consumer applies for a loan or a job, firms use massive databases and can consider thousands of data points when they assess the integrity of that person. But what if consumers could, in equal painstaking detail, interrogate the integrity of the data? What if every single piece of data about you had to declare where it came from, where it was bought and sold, what it had been used for, and so on?

That was the provocative suggestion made by Carnegie Mellon professor Alessandro Acquisti in Washington D.C. today at a conference devoted to Big Data and its ability to treat consumers fairly.

As you might imagine, no industry representative jumped at the opportunity. In fact, his suggestion was entirely ignored.

It shouldn’t be. The technology certainly exists that would give consumers this fair playing field when it comes to their data. After all, it is their data (despite what industry groups might argue: that they own the data they collect and the inferences they draw).

Acquisti was simply offering an idea that would bring about more transparency in a world that is dogged by murky, shady operators. Firms don’t just collect data about consumers as they browse, or walk around stores, or use their credit cards. They do it secretly. They hate answering questions about it. In fact, they think the mystery surrounding the data is actually the value of the data.

Monday’s conference, titled “Big Data: Tool for Inclusion or Exclusion,” included a lot of the usual meaningless privacy dialog around policy and disclosure and best practices. The discussions were lively, but this elephant in the room was rarely addressed. Credit scores work, when they work, because consumers don’t understand them. Once consumers understand them, they can game them, and banks move on to something more obscure. The data collection industry pays lip service to preventing consumer harm. But there is little reason to believe industry actors want anything more than to make as much money as they can by invading consumers’ privacy as much as they can get away with.

As Acquisti pointed out, the battle is asymmetric. Consumers can be interrogated with alarming tenacity, but they enjoy very little in the way of rights to face the digital 1s and 0s that constitute their accuser.

Not surprisingly, the idea of giving consumers more rights to control their information and its use was greeted with frosty newsspeak — “Consumers hate dealing with cookie warnings when they browse the web! They don’t want more rights!” was the basic, cynical response.

 

FTC Commissioner Julie Brill was among the speakers who alluded to the excellent report published earlier this year by the agency explaining the wide variety of invasive behavior committed by data broker companies you’ve likely never heard of — but these firms know you. They have probably decided you are an “urban scrambler” with a “diabetes interest.” Brill called for data brokers to fess up about what they do and who they do it for.

The discussion generally felt a bit fatalistic, however. Big data is here to stay, and in fact, it hurts both consumers whose privacy is completely violated by it and consumers who are invisible to it. The only thing worse than having a credit report is not having a credit report, which can prevent you from participating in the American economy at all.

Pam Dixon wrote a report earlier this year called “The Scoring of America,” which described the hundreds of 3-digit numbers that can control every aspect of your life — we’ve moved waaayyyy beyond credit scores. On a panel, she urged a new, broader view of data usage, one that draws on a long history of data collection stretching back to World War II and the Nuremberg Principles, which call for obtaining meaningful, informed consent from people who are the subjects of experiments.

That would be hard to do, certainly. But we should try.

When I write about scams, gotchas, and company misbehavior — and often, when I bicker with companies who give some version of an excuse that comes down to, “it’s the consumer’s fault” — I have a simple test I give:

“Are people surprised you took their money? If they are surprised, then you did the wrong thing.”

With data collection, surprise isn’t just an element of a “gotcha.”  Surprise is the product itself.  That’s wrong, and that needs to change.  Without real, informed consent from the public, Big Data collection is a runaway train that is going to do a lot more harm than good.


Light bulbs hacked; It’s funny, but it’s not

It’s a question I think about a lot: Are we moving toward a world that’s safer or more dangerous? More or less secure? This week, the “less secure” side scored another goal. Light bulbs can be hacked. Doing so seems like a rather silly science fair project until you think about what it really means.

London-based security firm Context has taken an interest in the fragility of the Internet of Things, as we all should. As a refresher, the Internet of Things simply means wireless chips will soon be placed in many items in your home, and these will all talk to the Internet and to each other. It’s not science fiction; it’s more like George Jetson. Whiz-bangy light bulbs sold by a firm named LIFX are among the first Internet of Things products. The bulbs talk to each other, and can be controlled with a smartphone. Neat, I guess, in a chia pet sort of way. (Click on! Click off!)

Context took the things apart and found that a hacker could trick the bulbs into surrendering control to a stranger. Essentially, bad guys can hop on the bulb users’ WiFi network and take control of the bulbs. If you look at the firm’s website, you’ll see how much trouble it went to in order to turn a victim’s lights on and off. Also neat, I guess. The hack comes with a strong mitigating factor: the hacker must be within 30 meters of the target to start the surprise disco effect. So state secrets are not at stake.

But here’s what you should think about.  LIFX seems like a responsible enough outfit. It isn’t Yo, that’s for sure.  The bulbs actually came loaded with AES (Advanced!) encryption. So the engineers actually thought about this problem. But the bulbs all shared the same underlying encryption key. Hack one, hack them all. That’s what Context did.
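The flaw is worth pausing on, because it’s a classic one. Here is a minimal sketch of why a single key baked into every device is fatal — the key name and messages are hypothetical, and a lightweight XOR stream cipher stands in for AES purely to keep the example dependency-free; this is not LIFX’s actual firmware:

```python
import hashlib

# Hypothetical firmware constant: the SAME key ships in every bulb.
SHARED_KEY = b"same-key-baked-into-every-bulb"

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Derive a pseudo-random keystream from key + nonce (stand-in for AES)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, nonce: bytes, plaintext: bytes) -> bytes:
    ks = keystream(key, nonce, len(plaintext))
    return bytes(p ^ k for p, k in zip(plaintext, ks))

decrypt = encrypt  # XOR stream cipher: the same operation both ways

# Bulb A broadcasts a command on the mesh network...
msg = encrypt(SHARED_KEY, b"nonceA", b"lights_on")

# ...and an attacker who extracted the key from ANY bulb reads (or forges) it.
print(decrypt(SHARED_KEY, b"nonceA", msg))  # b'lights_on'
```

The encryption itself can be perfectly sound; once one device coughs up the shared secret, every device on the network is an open book. Per-device keys would have confined the damage to the one bulb that was physically compromised.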

LIFX, by all accounts, reacted quickly to the hack and has issued a fix. Great, I guess. Happy ending?  Not by a long shot. I promise you, this pattern will repeat itself again, and again, and again.  There is no model currently that requires firms inventing cool stuff to make it safe. Features first, safety last. If ever.

Therefore, our world will soon be full of really creative devices full of fatal flaws.  It’s always been this way — features over safety — but when vulnerabilities were limited to personal computers, there were some real-world limits on how much trouble consumers could get into.  When the threats are in everything, as they will be with the Internet of Things, watch out.  Here’s a thought exercise.  What happens when it’s not the light bulbs, but rather the power outlets, that are “smart” and can be hacked?

This is why I made much ado about the nothing piece of software called Yo that had its 15 minutes of fame a few weeks ago. Quick refresher: Yo is Twitter in two characters; participants send single, two-character messages. It got a flurry of attention, allegedly a flurry of investment, and then hackers figured out they could download all the personal information anyone had given Yo. The firm that made Yo bragged that it was programmed in a day. The Internet of Things will be full of gadgets programmed in a day, full of basic, serious flaws, unless something changes.

Attacks on healthcare systems have risen 100 percent


Larry Ponemon


News that millions of patient Social Security numbers were stolen recently from Community Health Systems Inc. computers should come as no surprise. Earlier this year, we published results from our Fourth Annual Benchmark Study on Patient Privacy and Data Security, and the headline result was this: Criminal attacks on healthcare systems have risen a startling 100 percent since we first conducted this study four years ago, in 2010.

Many other findings were equally sobering. Healthcare employees are fueling breach risks through increased use of their personal, unsecured devices (smartphones, laptops and tablets). Business Associates — those that have access to protected health information (PHI) and work with healthcare organizations — are not yet in compliance with the HIPAA Final Rule.

Data breaches continue to cost some healthcare organizations millions of dollars every year.

While the cost can range from less than $10,000 to more than $1 million, we calculate that the average cost for the organizations represented in this year’s benchmark study is approximately $2 million over a two-year period. This is down from $2.4 million in last year’s report as well as from the $2.2 million reported in 2011 and $2.1 million in 2010. Based on the experience of the healthcare organizations in this benchmark study, we believe the potential cost to the healthcare industry could be as much as $5.6 billion annually.

The types of healthcare organizations participating in the study are hospitals or clinics that are part of a healthcare network (49 percent), integrated delivery systems (34 percent) and standalone hospitals or clinics (17 percent). This year 91 healthcare organizations participated in this benchmark research and 388 interviews were conducted. All organizations in this research are subject to HIPAA as covered entities. Most respondents interviewed work in compliance, IT, patient services and privacy.

Other key research findings:

The number of data breaches decreased slightly. Ninety percent of healthcare organizations in this study have had at least one data breach in the past two years. However, 38 percent report that they have had more than five incidents. This is a decline from last year’s report, when 45 percent of organizations had more than five. This, coupled with an increase in organizations’ level of confidence in data breach detection, suggests that modest improvements have been made in reducing threats to patient data.

Healthcare organizations improve ability to control data breach costs. The economic impact of one or more data breaches for healthcare organizations in this study ranges from less than $10,000 to more than $1 million over a two-year period. Based on the ranges reported by respondents, we calculated that the average economic impact of data breaches over the past two years for the healthcare organizations represented in this study is $2.0 million. This is a decrease of almost $400,000 or 17 percent since last year.

ACA increases risk to patient privacy and information security. Respondents in 69 percent of organizations represented believe the ACA significantly increases (36 percent) or increases (33 percent) risk to patient privacy and security. The primary concerns are insecure exchange of patient information between healthcare providers and government (75 percent of organizations), patient data on insecure databases (65 percent) and patient registration on insecure websites (63 percent of organizations).

ACO participation increases data breach risks. Fifty-one percent of organizations say they are part of an Accountable Care Organization (ACO) and 66 percent say the risks to patient privacy and security due to the exchange of patient health information among participants has increased. When asked if their organization experienced changes in the number of unauthorized disclosure of PHI, 41 percent say it is too early to tell. Twenty-three percent say they noticed an increase.

Confidence in the security of Health Information Exchanges (HIEs) remains low. An HIE is defined as the mobilization of healthcare information electronically across organizations within a region, community or hospital system. The percentage of organizations joining HIEs increased only slightly. This year, 32 percent say they are members, up slightly from 28 percent last year. One-third of organizations say they do not plan to become a member. The primary reason could be that 72 percent of respondents say they are only somewhat confident (32 percent) or not confident (40 percent) in the security and privacy of patient data shared on HIEs.

Criminal attacks on healthcare organizations increase 100 percent since 2010. Insider negligence continues to be at the root of most data breaches reported in this study but a major challenge for healthcare organizations is addressing the criminal threat. These types of attacks on sensitive data have increased 100 percent since the study was conducted in 2010 from 20 percent of organizations reporting criminal attacks to 40 percent of organizations in this year’s study.

Employee negligence is considered the biggest security risk. Seventy-five percent of organizations say employee negligence is their biggest worry followed by use of public cloud services (41 percent), mobile device insecurity (40 percent) and cyber attackers (39 percent).

BYOD usage continues to rise. Despite the concerns about employee negligence and the use of insecure mobile devices, 88 percent of organizations permit employees and medical staff to use their own mobile devices, such as smartphones or tablets, to connect to their organization’s networks or enterprise systems such as email. Similar to last year, more than half of organizations are not confident that these personally owned (BYOD) devices are secure.


To find out more, or download the entire report, click here.