Author Archives: Bob Sullivan

Worried about the wrong thing: Hospital hacks show privacy, HIPAA might be dangerous to our health

Bob Sullivan

A few years ago, my longtime, elderly neighbor, who lived alone, was taken away in an ambulance.  I wasn’t home and heard about it second-hand.  At first, I had no idea how serious it was or even where he had been taken, but I was really concerned. So I started calling local hospitals to ask if he’d been admitted.  You can probably guess how that worked out for me.

I was stonewalled at every turn. Even when I said I might be the only one who would call about him, that I was concerned he had no nearby next of kin, I got nowhere. I was fully HIPAA’d out.

Eventually, I talked to local police who tipped me off that he had been brought to a nearby hospital. I called them again.

“Not to be morbid, but can I even confirm that he’s still alive?” I pleaded.

“Due to patient privacy, we cannot divulge anything,” I was told.

Now you probably know I care about privacy as much as the next person, but if my friend and neighbor was dying in a hospital bed, I was hell-bent on making sure he didn’t die without knowing at least someone cared about him. And this seemed cruel to me.

I called a few more times.  I finally lucked out and got to someone who, from her voice, sounded quite a bit older. Maybe even a volunteer. She heard me out.

“You didn’t hear it from me,” I recall her saying. “But he’s recovering from brain surgery. He probably had a stroke.”

I’m happy to tell you that I went to see my neighbor a few times during the next several weeks, and after a long recovery, he’s actually doing really well.

I tell you all this because I am worried that situations like these are really helping hackers.

Perhaps you’ve heard about the rash of hospital and health care systems being attacked by ransomware.  In the Washington D.C. area, a chain named MedStar was reduced to performing nearly all tasks on paper by a virus that locked all its files and demanded payment to unlock them.  The problem is so serious that U.S. and Canadian authorities jointly issued a warning about ransomware on March 31, calling attention to attacks on hospitals.

What does this have to do with HIPAA, or my neighbor’s stroke?  It shows we are worrying about the wrong things.

All of us have been HIPAA’d at some point.  We’ve felt the wrath of the Health Insurance Portability and Accountability Act, enacted in 1996.  Want a yes or no answer to a simple question from your doctor?  You can’t get an email from her or him. You have to log in to a server that will probably reject the first five passwords you enter and then force you to a reset page, and half the time you’ll give up before you find out that, yes, you should take that pill with food.

There’s a saying in the geek world that “compliance is a bad word in security.”  Walk into any health care facility and you’ll immediately get the sense that everyone from doctors to nurses to cleaning staff is TERRIFIED to violate HIPAA.  On the other hand, I’ve been told by someone who worked on a recent hospital attack that health facilities routinely are five or even 10 years behind on installing security patches.

Geoff Gentry, a security analyst with Independent Security Evaluators, puts it this way:

“We are defending the wrong asset,” he told me. “We are defending patient records instead of patient health.”

If someone steals a patient record, sure, they can do damage. They can perhaps mess up a patient’s credit report. But if someone hacks and alters a patient record, the consequences can be much more dire.

“It could be life or death,” he said.

Gentry was part of a team from Independent Security Evaluators that reviewed hospital security at a set of facilities three months ago in the Baltimore/Washington area.  The timing couldn’t have been better.  The message couldn’t be more important.

“For almost two decades, HIPAA has been ineffective at protecting patient privacy, and instead has created a system of confusion, fear, and busy work that has cost the industry billions. Punitive measures for compliance failures should not disincentivize the security process, and healthcare organizations should be rewarded for proactive security work that protects patient health and privacy,” the report says. “(HIPAA has) not been successful in curtailing the rise of successful attacks aimed at compromising patient records, as can be seen in the year over year increase in successful attacks. This is no surprise however, since compliance rarely succeeds at addressing anything more than the lowest bar of adversary faced, and so long as more and better adversaries come on to the scene, these attempts will continue to fail.”

In the test, Independent Security Evaluators found issues that ran the gamut from unpatched systems to critical hospital computers left on, and logged in, when patients are left alone in examination rooms.  A typical problem: Aging computers designated for a single task that are left untouched for months or even years, missing critical security updates.

Larry Ponemon, who runs a privacy consulting firm, was an adviser on that project.  His assessment is equally blunt.

“Being HIPAA compliant has become almost like a religion,” he says. “The reality is that being compliant with HIPAA doesn’t get you really far.”

To be clear:  The report didn’t uncover lazy IT workers playing video games while IT infrastructure crumbles around them. Nor did it find uncaring doctors, nurses, or even administrators. To the contrary, it found haggard security professionals desperately trying to keep up with security issues, and generally falling hopelessly behind as their attention is constantly redirected to paranoia over compliance issues.

“A lot of companies have made poor investment decisions in security. They are doing things that are not diminishing their risk,” Ponemon, who runs The Ponemon Institute, said. (NOTE: Larry Ponemon and I have a joint project on privacy issues, a newsletter called The Ponemon Sullivan Privacy Report.)

Hackers are devoted copycats, so we know more attacks on hospitals are coming. At the moment, these attacks seem to have been limited to administrative systems, and the impacted health care facilities say patient care was unaffected. (I did interview a D.C.-area patient who said two doctors were unable to share his patient files, leading to unnecessary delay and expense).

It’s easy to imagine far worse outcomes, however.  Gentry speculated that hackers could attack a specific patient and extort him or her.  Ponemon talked about attacks on pacemakers or other digitally-connected devices that control patient health.

“These sound like they are science fiction, but hospitals are part of the Internet of Things,” he said.  “And there doesn’t seem to be a plan to manage the security risk.”

The plan, Gentry says, has to involve righting the regulatory ship and letting hospitals and health care facilities worry about the right things.

“We need to take a lot of this bandwidth we are appropriating to compliance and use that bandwidth on security and patient health,” he said.

And we’d better start soon. Because we’ve given the bad guys a pretty sizable head start while we were distracted by Herculean efforts to protect my neighbor from me.

Two-thirds of security pros waste a ‘significant’ amount of time chasing false positives

Larry Ponemon

We are pleased to present the findings of The State of Malware Detection & Prevention sponsored by Cyphort. The study reveals the difficulty of preventing and detecting malware and advanced threats. The IT function also seems to lack the information and intelligence necessary to update senior executives on cybersecurity risks.

We surveyed 597 IT and IT security practitioners in the United States who are responsible for directing cybersecurity activities and/or investments within their organizations. All respondents have a network-based malware detection tool or are familiar with this type of tool.

Getting malware attacks under control continues to be a challenge for companies. Some 68 percent of respondents say their security operations team spends a significant amount of time chasing false positives. However, only 32 percent of respondents say their teams spend a significant amount of time prioritizing alerts that need to be investigated.

Despite such catastrophic data breaches as the one at Target, cyber threats are not getting the attention they deserve from senior leadership. As shown in the findings of this research, respondents say they do not have the necessary intelligence to make a convincing case to the C-suite about the threats facing their company.

The following findings further reveal the problems IT security faces in safeguarding their companies’ high value and sensitive information.

Companies are ineffective in dealing with malware and advanced threats. Only 39 percent of respondents rate their ability to detect a cyber attack as highly effective, and similarly only 30 percent rate their ability to prevent cyber attacks as highly effective. Respondents also say their organizations are doing poorly in prioritizing alerts and minimizing false positives. As mentioned above, a significant amount of time is spent chasing false positives but not prioritizing alerts.

Most respondents say C-level executives aren’t concerned about cyber threats. Respondents admit they do not have the intelligence and necessary information to effectively update senior executives on cyber threats. If they do meet with senior executives, 70 percent of respondents say they report on these risks to C-level executives only on a need-to-know basis (36 percent of respondents) or never (34 percent of respondents).

Sixty-three percent of respondents say their companies had one or more advanced attacks during the past 12 months. On average, it took 170 days to detect an advanced attack, 39 days to contain it and 43 days to remediate it.

Few malware alerts are investigated, and many prove to be false positives. On average, 29 percent of all malware alerts received by security operations teams are investigated, and an average of 40 percent are considered to be false positives. Only 18 percent of respondents say their malware detection tool provides a level of risk for each incident.

Do organizations reimage endpoints based on malware detected in the network? More than half (51 percent) of respondents say their organization reimages endpoints based on malware detected in the network. An average of 33 percent of endpoint re-images or remediations are performed without knowing whether it was truly infected. The most effective solutions for the remediation of advanced attacks are network-based sandboxing and network behavior anomaly analysis.

Download and read the rest of the report.

Car hacking worries FBI, too; and reports of keyless entry hacking won’t go away

Bob Sullivan

We know that Americans are concerned about their cars being hacked.  We also know that some consumers believe criminals are “hacking” into their parked cars and committing “snatch and grab” crimes using devices that simulate newfangled keyless entry systems.

Now, we know the FBI is worried about car hacking, too. The agency, along with the National Highway Traffic Safety Administration, issued a bold warning to consumers and manufacturers last week.

“The FBI and NHTSA are warning the general public and manufacturers – of vehicles, vehicle components, and aftermarket devices – to maintain awareness of potential issues and cybersecurity threats related to connected vehicle technologies in modern vehicles,” the warning says. “While not all hacking incidents may result in a risk to safety – such as an attacker taking control of a vehicle – it is important that consumers take appropriate steps to minimize risk.”

The FBI warning didn’t raise any new concerns; it mainly cites revelations of car hacking from 2015 as impetus for the warning. Still, the notice clearly demonstrates there is a level of activity around car hacking that should have everyone concerned. Drive down the highway sometime (as a passenger) and use your smartphone to see all the cars broadcasting Bluetooth connections around you, and you’ll get an idea of how connected our vehicles have become.

Meanwhile, consumers continue to report mysterious car break-ins around the country with no signs of forced entry, in situations when they swear their car doors were locked.  In Baltimore, a string of crimes following this pattern frustrated local residents earlier this year.

“What was strange to me was that, while I could tell it was broken into because my jacket was taken and they tossed through the stuff in the car, there were no signs of a breaking. No broken windows or anything,” said one driver. “I called and reported it mostly because I wanted to know how anyone could have gotten in if it was locked and no windows were broken. The officer said people have these things that basically interfere with newer cars electronic/fob locking systems and disable the alarms.”

The reports follow a persistent set of national stories around keyfob break-ins that began with a CNN report two years ago, and was followed by a New York Times story last year that casually suggested drivers store their car fobs in their freezers to keep them safe from hackers. (Notably, the story appeared in the Times’ Style section. The science was a little shallow).

There have also been vague warnings issued by some agencies around the world, like this notice from London Police, or this notice from the National Insurance Crime Bureau:

“The key-less entry feature on newer cars is a popular advancement that lets drivers unlock their cars with the simple click of a button on a key fob using radio frequency transmission. The technology also helps prevent drivers from locking their keys in the vehicle,” it says.  “Not surprisingly, thieves have found a way to partially outwit the new technology using electronic ‘scanner boxes.’ These small, handheld devices can pop some factory-made electronic locks in seconds, allowing thieves to get into the vehicle and steal personal items left inside.”

The existence of such a scanner box is very much in question, as are assertions that such a universal master key can be purchased for as little as $17; so is any notion that the crime is widespread. If any law enforcement agency has seized such a device, we are all waiting for it to be put on display.

How would such a magic device work?  By tricking your car into thinking your key fob is nearby and opening the door in response to a handle jiggle; or perhaps by amplifying the signal it sends out, or by intercepting that signal and copying it somehow. Or, hackers could “guess” the code for opening a car, if the code were poorly constructed. Here’s a great explanation of how it might work, and why it’s a major challenge unlikely to be used by street thugs.
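To see why a poorly constructed code is the soft spot, consider a purely hypothetical fob that transmits a short, fixed unlock code. This sketch is illustrative only (all names are invented, and real fobs use rolling codes and longer keys); it shows how quickly an exhaustive search, the kind a "scanner box" might automate, defeats a 16-bit static code:

```python
# Hypothetical sketch: why a short, fixed keyfob code falls to brute force.
# Real systems use rolling codes; this only models the "poorly constructed
# code" case described above.

CODE_BITS = 16  # a 16-bit code has only 65,536 possibilities

def make_car(secret_code):
    """Return a function that 'unlocks' only when given the right code."""
    def try_unlock(code):
        return code == secret_code
    return try_unlock

def brute_force(try_unlock, bits=CODE_BITS):
    """Try every possible code, as an automated attack device would."""
    for guess in range(2 ** bits):
        if try_unlock(guess):
            return guess
    return None

car = make_car(secret_code=0xBEEF)  # the attacker does not know this value
found = brute_force(car)
print(f"Recovered code: {found:#06x} after at most {2**CODE_BITS:,} tries")
```

At radio transmission speeds even tens of thousands of tries can go by quickly, which is why longer keys and rolling (non-repeating) codes matter; the attacks Koopman describes below target weaknesses in exactly those schemes.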

*Could* such a hack exist? Well, of course, says embedded device security expert Philip Koopman, a professor at Carnegie Mellon. Koopman actually worked on earlier-generation designs for key fobs.

“I would not at all be surprised if the Bad Guys have figured out that some manufacturer has bad security and how to attack it,” he said. “There is nothing really new here, other than general lack of people to admit that if you cut corners on security you will get burned, and an insistence by manufacturers and suppliers that known bad practices are adequate.”

In a blog post six years ago, he warned about the cost sensitivity for auto manufacturers (“No way could we afford industrial strength crypto.”)

Back to today, he offered this speculation on keyless entry attacks.

“It is (possible) that the manufacturers used bad crypto that is easy to hack, possibly via just listening to transmissions and doing off-line analysis. And it is possible to attack by getting near someone when they aren’t near their car and extracting the secrets from their car keys when it is in their pocket, then using that info to build a fake key. The technology is very similar to the US Passport biometric chips, so all the attacks for those are plausible here as well.”

The FBI offers the following advice to consumers: Keep your car software up to date, as you do with your PC; don’t modify your car software; be careful when connecting your car to third parties; and “be aware of who has physical access to your vehicle.”

That last bit of advice might work for people with long driveways, but the rest of us can’t do much about who might be able to walk by our cars on streets and in parking lots.

“While these tips may seem innocuous, they do show the limitations that law enforcement and consumers have in combating the car hacking threat,” said Tyler Cohen Wood, Cyber Security Advisor of Inspired eLearning.  “With the ever-increasing implementation of Internet of Things devices, including devices installed in newer cars, it’s a real challenge for law enforcement to identify different threat vectors associated with vehicle hacking.  There is no real standard for Internet of Things devices from a vehicle standpoint—each automobile manufacturer offers different types of devices as options in vehicles, from entertainment and navigation systems to remote ignition starting devices.  There is no industry standard for operating systems or security protocols on these devices, so it’s difficult for law enforcement to identify the specific threats that the devices pose to the public.”

So what else should you do?  Putting your car “keys” in the freezer is probably a bad idea; it will likely create more problems than it solves.  You might damage the very expensive key, for example, to mitigate a threat that is still perceived as low. But it wouldn’t hurt to take great care with where you leave the key. If you park directly in front of your front door, perhaps you shouldn’t leave the key right there.  Otherwise, read the local police blotter and talk to neighbors about street crime.

Most of all, make sure you really do lock your car doors.

 

Flipping the economics of hacker attacks

Larry Ponemon

How much does it cost technically proficient adversaries to conduct successful attacks, and how much do they earn? In Flipping the Economics of Attacks, sponsored by Palo Alto Networks, we look at the relationships between the time spent and compensation of today’s adversaries and how organizations can thwart attacks. As revealed in this research, while some attackers may be motivated by non-pecuniary reasons, such as those that are geopolitical or reputational, an average of 69 percent of respondents say they are in it for the money.

In this study, we surveyed 304 threat experts in the United States, United Kingdom and Germany. We built this panel of experts based on their participation in Ponemon Institute activities and IT security conferences. They were assured their identity would remain anonymous. Twenty-one percent of respondents say they are very involved, and 79 percent of respondents are involved in the threat community. They are all familiar with present-day hacking methods.

Here are the key findings:

Attackers are opportunistic. Adversaries go after the easiest targets first. They won’t waste time on an attack that will not quickly result in a treasure trove of high-value information, according to 72 percent of respondents. Further, attackers will quit when the targeted company has a strong defense, according to 69 percent of respondents.

Cost and time to plan and execute attacks is decreasing. According to 53 percent of respondents, the total cost of a successful attack has decreased, driving even more attacks across the industry. Similarly, 53 percent of respondents say the time to plan and execute an attack has decreased. Of the 53 percent of respondents who say it takes less time, 67 percent agree the number of known exploits and vulnerabilities has increased, 52 percent agree attacker skills have improved and 46 percent agree hacking tools have improved.

Increased usage of low-cost and effective toolkits drives attacks. Technically proficient attackers are spending an average of $1,367 for specialized toolkits to execute attacks. In the past two years, 63 percent of respondents say their use of hacker tools has increased and 64 percent of respondents say these tools are highly effective.

Time to deter the majority of attacks is less than two days. The longer an organization can keep the attacker from executing a successful attack, the stronger its ability to safeguard its sensitive and confidential information. The inflection point for deterring the majority of attacks is less than two days (40 hours), resulting in more than 60 percent of all attackers moving on to another target.

Adversaries make less than IT security professionals. On average, attackers earn $28,744 per year in annual compensation, which is about one-quarter of a cybersecurity professional’s average yearly wage.

Organizations with strong defenses take adversaries more than double the time to plan and execute attacks. The average number of hours a technically proficient attacker takes to plan and execute an attack against an organization with a “typical” IT security infrastructure is less than three days (70 hours). However, when the company has an “excellent” IT infrastructure, the time more than doubles, to an average of slightly more than six days (147 hours).

Threat intelligence sharing is considered the most effective in preventing attacks. According to respondents, an average of 39 percent of all hacks can be thwarted because the targeted organization engaged in the sharing of threat intelligence with its peers.

Investments in security effectiveness can reduce successful attacks significantly. As an organization strengthens its security effectiveness, the ability to deter attacks increases, as shown in this report.

The following are recommendations to harden organizations against malicious actors:

  • Create a holistic approach to cyber security, which includes focusing on the three important components of a security program: people, process and technology.
  • Implement training and awareness programs that educate employees on how to identify and protect their organization from such attacks as phishing.
  • Build a strong security operations team with clear policies in place to respond effectively to security incidents.
  • Leverage shared threat intelligence in order to identify and prevent attacks seen by your peers.
  • Invest in next-generation technology, such as threat intelligence sharing, integrated security platforms that can prevent attacks, and other advanced security technologies.

Where the presidential candidates stand on Snowden, surveillance

Bob Sullivan

What do the presidential candidates think of domestic intelligence collection — or spying on Americans, depending on your point of view?  What do they think of Ed Snowden?

We haven’t heard a lot about the NSA or Snowden during the noisy campaigns so far, and that’s a shame. That’s because all the air is being sucked out of the conversation by more trivial concerns, such as Donald Trump’s debate schedule.  But all the candidates have spoken about domestic spying and about Snowden.

As we welcome election season proper, here’s a primer on the candidates’ views.

But first, a few notes: The most remarkable item of note is that Sen. Bernie Sanders voted against the original Patriot Act back in 2001 as a member of the House. He’s part of a very select group who did so.

Second, while some candidates have expressed a bit more sympathy for Snowden’s role as whistleblower, they’ve all called for him to face prosecution for treason. Even Sanders.

REPUBLICANS

Marco Rubio

On Snowden: He “sparked conspiracy theories”

From the Atlantic: “We must also distinguish these reasonable concerns from conspiracy theories sparked by Edward Snowden. This man is a traitor who has sought assistance and refuge from some of the world’s most notorious violators of liberty and human rights.”

On domestic surveillance: (The Washington Post) Those who voted for the Freedom Act, like Ted Cruz, put America at risk by making it harder to gather intelligence.

Ted Cruz

On Snowden: His opinion seems to have grown harsher over time

In 2013, he said (TheHill.com): “If it is the case that the federal government is seizing millions of personal records about law-abiding citizens, and if it is the case that there are minimal restrictions on accessing or reviewing those records, then I think Mr. Snowden has done a considerable public service by bringing it to light.”

More recently, he said: “Today, we know that Snowden violated federal law, that his actions materially aided terrorists and enemies of the United States, and that he subsequently fled to China and Russia,” he continued. “Under the Constitution, giving aid to our enemies is treason.”

On surveillance: (The Guardian) Cruz has defended his Senate vote for the USA Freedom Act, which curtailed the NSA’s metadata telephone records collection program.

Donald Trump

On Snowden: He’s hinted that he’d lead a charge to bring Snowden back and execute him.

“I think he’s a terrible traitor, and you know what we used to do in the good old days when we were a strong country? You know what we used to do to traitors, right?” Trump said on Fox. 

On surveillance: “I tend to err on the side of security, I must tell you,” he has said (TheHill.com).  “I assume when I pick up my telephone people are listening to my conversations anyway, if you want to know the truth… It’s a pretty sad commentary.”

He also said he would be “fine” with restoring provisions of the Patriot Act (TheHill.com) to allow for bulk data collection.

DEMOCRATS

Hillary Clinton

On Snowden: He should ‘face the music’

(The Atlantic) “He broke the laws of the United States… He could have been a whistleblower, he could have gotten all the protections of a whistleblower. He chose not to do that. He stole very important information that has fallen into the wrong hands so I think he should not be brought home without facing the music.”

On surveillance:

Clinton voted for both the 2001 Patriot Act and the 2008 FISA Amendments that extended NSA data collection capabilities.  More on her views at The Atlantic.

Bernie Sanders

On Snowden: “I think Snowden played a very important role in educating the American public … he did break the law, and I think there should be a penalty to that,” Sanders said (HuffingtonPost.com). He went on to say that the role Snowden played in educating the public about violations of their civil liberties should be considered before he is sentenced. On the other hand, a mildly Snowden-sympathetic story is posted on Sanders’ Senate webpage.

On surveillance:

Sanders voted against the Patriot Act in 2001 as a member of the House of Representatives.  Later, in the Senate, he voted against the 2008 FISA Amendments.

 

Exchanging Cyber Threat Intelligence: There Has to Be a Better Way

Larry Ponemon

Our second annual study on Exchanging Cyber Threat Intelligence: There Has to Be a Better Way reveals interesting trends in how organizations are participating in initiatives or programs for exchanging threat intelligence with peers, industry groups, IT vendors and government.

According to the 692 IT and IT security practitioners surveyed, there is more recognition that the exchange of threat intelligence can improve an organization’s security posture and situational awareness. However, concerns about trust in the sources of intelligence and timeliness of the information continue to be a deterrent to participation in such initiatives.

Forty-seven percent of respondents say their organization had a material security breach that involved an attack that compromised the networks or enterprise systems. This attack could have been external (i.e. hacker), internal (i.e. malicious insider) or both. Most respondents (65 percent) say threat intelligence could have prevented or minimized the consequences of the attack.

Following are key research takeaways:

Threat intelligence is essential for a strong security posture. Seventy-five percent of respondents who are familiar with and involved in their company’s cyber threat intelligence activities or process believe gathering and using threat intelligence is essential to a strong security posture.

Potential liability and lack of trust in sources of intelligence keep some organizations from participating. Organizations that only partially participate cite the potential liability of sharing (62 percent of respondents) and a lack of trust in the sources of intelligence (60 percent of respondents). However, more respondents believe there is a benefit to exchanging threat intelligence.

Organizations rely upon peers and security vendors for threat intelligence. Sixty-five percent of respondents say they engage in informal peer-to-peer exchange of information or through a vendor threat exchange service (45 percent of respondents). IT vendors and peers are also considered to provide the most actionable information. Law enforcement or government officials are not often used as a source for threat intelligence.

Threat intelligence needs to be timely and easy to prioritize. Sixty-six percent of respondents who are only somewhat or not satisfied with current approaches say it is because the information is not timely and 46 percent complain the information is not categorized according to threat type or attacker.

Organizations are moving to a centralized program controlled by a dedicated team.  A huge barrier to effective collaboration in the exchange of threat intelligence is the existence of silos. Centralizing control over the exchange of threat intelligence is becoming more prevalent and might address the silo problem.

I hope you will download the full report.

Verizon grounds JetBlue — another Plan B goes badly

Bob Sullivan

Verizon managed to ground an airline for several hours on Jan. 14. But it’s important to ask: Who’s really to blame?

Discount airline JetBlue appears to have cut some corners with its disaster recovery planning. The airline suffered nationwide delays on Thursday when many of its computer systems went down, preventing fliers from checking in. The problems lasted at least three hours, and probably longer, halting flights at many airports.

JetBlue blamed the outage on Verizon.

“We’re currently experiencing network issues due to a Verizon data center power outage. We’re working to resolve the issue as soon as possible,” JetBlue said on its blog. “The power was disrupted during a maintenance operation at the Verizon data center.  Verizon can provide more details into the cause.”

At 2:30 p.m. ET, JetBlue posted an update saying it was still experiencing system issues.

Verizon told me the problem began three hours earlier.

“On Thursday morning at 11:37 am ET, a Verizon data center experienced a power outage that impacted JetBlue’s operations,” the firm said in a statement. “JetBlue’s systems are now being restored.  Our engineering team has been working to restore service quickly, and power has been restored to the data center.”

The impact of the outage was dramatic: “Customer support systems, including jetblue.com, mobile apps, 1-800-JETBLUE, check-in and airport counter/gate systems, are impacted,” JetBlue said.

Consumers spent the early afternoon Tweeting their displeasure and the uncertainty the outage created.

“At least make some estimates on flight delays so people can make informed decisions,” said Jared Levy on Twitter.

It’s worth noting that JetBlue said on its blog at 1:50 p.m. that power had been restored to Verizon’s data center, “and we are working to fully restore our systems as soon as possible.”

That sure sounds like JetBlue is completely dependent on Verizon. Maybe the firm had some failover plan that it never implemented, and decided that switching over would take longer than waiting for Verizon to fix its electricity problem. Neither option sounds great. A misbehaving backhoe can take down a major airline’s operations? In the middle of the day?  And they stay down until Verizon can implement a power fix? Sounds like someone’s plan B wasn’t grade A.

That’s not uncommon, however. One of my favorite stories, now nearly five years old, was titled “Why plan B’s often work out badly.” Inspired by the Japanese nuclear power plant disaster, I examined why backup plans often fail when reality strikes.  The short answer: It’s very hard to create an entirely duplicate universe where you can test plan B.  And it’s even harder to keep testing it regularly and make sure it actually works. To wit: Your snow plow often doesn’t start after the first snow because it’s been sitting idle all summer.

Of course, big airlines should do better. But the reality is, they often don’t. Hopefully more details will emerge soon so we can all learn from this.

 

Anti-encryption opportunists seize on Paris attacks; don’t be fooled

Bob Sullivan

It’s natural to look for a scapegoat after something terrible happens, like this: If only we could read encrypted communications, perhaps the Paris terrorist attacks could have been stopped.  It’s natural, but it’s wrong.  Read every story you see about Paris carefully and look for evidence that encryption played a role.

There’s a reason The Patriot Act was passed only a few weeks after 9-11, and it wasn’t because Congress was finally able to act quickly and efficiently on something.  The speed came because many elements of the Patriot Act had already been written, and forces with an agenda were lying in wait for a disaster so they could push that agenda.  That is wrong.

So here we are now, once again faced with political opportunism after an unthinkable human tragedy, and we must remain strong in the face of it.  There is no simple answer to terrorism, and we should all know this by now.  And so there must be no simple discussion about the use of encryption in the Western world.  The debate requires a bit of thoughtful analysis, and we owe it to everyone who ever died for a free society to have this debate thoughtfully.

The basics are this: Only recently, computing power has become inexpensive enough that ordinary citizens can scramble messages so effectively that even governments with near-infinite resources cannot crack them. Such secret-keeping powers scare government officials, and for good reason.  They can, theoretically, allow criminals and terrorists to communicate with a cloak of invisibility.  Not surprisingly, several government officials have called for a method that would allow law enforcement to crack these codes.  There are many schemes for this, but they all boil down to something akin to creating a master key that would be generated by encryption-making firms and given to government officials, who would use the key only after a judge granted permission.  This is sometimes referred to as creating “backdoors” for law enforcement.
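The escrow scheme described above can be sketched in miniature. The Python below is my own toy illustration of the general idea, not any actual proposal’s design: a random XOR pad stands in for a real cipher, and all the names are invented. It shows why the “master key” is such a dangerous single point of failure.

```python
# Toy illustration (NOT real cryptography): a message encrypted with a
# one-time XOR key, plus an "escrowed" copy of that key encrypted under a
# master key -- the essence of the law-enforcement backdoor proposals.
import secrets

def xor_bytes(data: bytes, key: bytes) -> bytes:
    """XOR each byte of data with the corresponding key byte."""
    return bytes(d ^ k for d, k in zip(data, key))

message = b"meet at dawn"
session_key = secrets.token_bytes(len(message))  # random per-message key
master_key = secrets.token_bytes(len(message))   # held by the escrow agent

ciphertext = xor_bytes(message, session_key)       # what travels the wire
escrowed_key = xor_bytes(session_key, master_key)  # the attached "backdoor"

# The intended recipient decrypts with the session key...
assert xor_bytes(ciphertext, session_key) == message

# ...but anyone holding the master key can recover the session key too,
# and therefore every message -- which is why a leaked or stolen master
# key would break encryption for everyone at once.
recovered_key = xor_bytes(escrowed_key, master_key)
assert xor_bytes(ciphertext, recovered_key) == message
```

Note that the master key decrypts every escrowed session key ever created, which is exactly the property that makes it so valuable to attackers.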

Governments can already listen in on telephone conversations after obtaining the proper court order.  What’s the difference with a master encryption key?

Sadly, it’s not so simple.

For starters, U.S. firms that sell products using encryption would create backdoors, if forced by law.  But products created outside the U.S.?  They’d create backdoors only if their governments required it.  You see where I’m going. There will be no global master key law that all corporations adhere to.  By now I’m sure you’ve realized that such laws would only work to the extent that they are obeyed.  Plenty of companies would create rogue encryption products, since the market for them would explode.  And of course, terrorists are hard at work creating their own encryption schemes.

There’s also the problem of existing products, created before such a law. These have no backdoors and could still be used. You might think of this as the genie out of the bottle problem, which is real. It’s very, very hard to undo a technological advance.

Meanwhile, creation of backdoors would make us all less safe.  Would you trust governments to store and protect such a master key?  Managing defense of such a universal secret-killer is the stuff of movie plots.  No, the master key would most likely get out, or the backdoor would be hacked.  That would mean illegal actors would still have encryption that worked, but the rest of us would not. We would be fighting with one hand behind our backs.

In the end, it’s a familiar argument: disabling encryption would only stop people from using it legally. Criminals and terrorists would still use it illegally.

Is there some creative technological solution that might help law enforcement find terrorists without destroying the entire concept of encryption? Perhaps, and I’d be all ears. I haven’t heard it yet.

Only a few weeks after 9-11, a software engineer who told me he was working for the FBI contacted me and told me he was helping create a piece of software called Magic Lantern.  It was a type of computer virus, a Trojan horse keylogger, that could be remotely installed on a target’s computer and steal passphrases used to open up encrypted documents.  The programmer was uncomfortable with the work and wanted to expose it. I wrote the story for msnbc.com, and after denying the existence of Magic Lantern for a while, the FBI ultimately conceded using this strategy.  While we could debate the merits of Magic Lantern, at least it constituted a targeted investigation — something far, far removed from rendering all encryption ineffective.

For a far more detailed examination of these issues, you should read Kim Zetter at Wired, as I always do. Then make up your own mind.

Don’t let a politician or a law enforcement official with an agenda make it for you. Most of all, don’t allow someone who capitalizes on tragedy a mere hours after the first blood is spilled — an act so crass it disqualifies any argument such a person makes — to influence your thinking.

The fake account problem — why it’s everyone’s problem

Larry Ponemon

User growth has become a key indicator of a company’s financial growth and sustainability. Even a company’s revenues can take a back seat to its user base as a metric that predicts future success. While it may have taken the telephone 70 years to reach 50 million users, in today’s fast-paced world companies can reach that same number in a matter of months.

As the user base becomes a new form of currency, driving valuations of companies around the world higher and faster than ever before, it is becoming increasingly important to protect the integrity of these users. Information about who users are, what they do and how they do it is incredibly valuable. If not adequately protected, this information can be (and is being) exploited.

The purpose of this report is to understand the scope of registration fraud, and how this epidemic is impacting companies and their users. It offers a glimpse into how companies verify and protect their users, and the damage that can be done when fraudulent users and fake accounts are allowed to exist within a user base.

Thanks to a sponsorship from Telesign, we surveyed 584 U.S. and 414 UK individuals who are involved in the registration, use or management of user accounts and hold such positions as product manager, IT security practitioner and app developer. Eighty-nine percent of these respondents say their organization considers its user base a critical asset with an average value of $117 million.

However, account fraud is becoming more prevalent because most organizations have a difficult time ensuring bona fide users and not bad actors are authenticated during the registration process. Only 36 percent believe they are able to avoid fraudulent registrations. Moreover, once fake users are registered, they spam legitimate users and often create more fraudulent accounts. Fake users also steal confidential information as well as engage in phishing, social engineering and account takeover.

The findings reveal why companies are vulnerable to the threats of fake users:

  • The authentication process is difficult to manage, according to 69 percent of respondents, allowing fake users to infiltrate the user base.
  • Fifty-eight percent of respondents say user convenience is most important to their fraud prevention strategy and 42 percent of respondents say ease of use is critical. Only 21 percent say security is important.
  • The majority of respondents (54 percent) say a phone number is enough to stop fraudulent registrations and protect account access.
  • Companies seem to be unwilling to crack down on fraudulent registrations. Forty-three percent of respondents say their company tolerates the registration of fake accounts rather than add friction to the registration process. Most companies do not have a formal method for determining whether a potential user is real.
  • Only 39 percent of respondents say their company is vigilant in determining that each user account belongs to a real person.
  • Only 25 percent of respondents believe the traditional username and password is a reasonably secure authentication method for their users. However, 94 percent of respondents say they use passwords or PINs and 79 percent use email addresses to create accounts.

To read the rest of the report findings, please download the PDF from Telesign.com

Is your company ready for a big data breach? Only one-third say they are

Larry Ponemon

With data breaches continuing to increase in frequency and severity, it comes as no surprise that businesses are acknowledging this risk as a top concern and priority. Nearly half of organizations surveyed report having a data breach involving the loss or theft of more than 1,000 records containing sensitive or confidential information in the past two years. And the frequency of data breaches is increasing. Sixty-three percent of these respondents report their company had two or more breaches in the past two years.

However, the enclosed findings from our Third Annual Study: Is Your Company Ready for a Big Data Breach sponsored by Experian® Data Breach Resolution, illustrate that many companies still lack confidence in their ability to manage these issues and execute their data breach response plan. We surveyed 604 executives and staff employees who work primarily in privacy and compliance in the United States.

Since 2013, we have tracked changes in how confident companies are in responding to a data breach. This year, we took our analysis a step further by digging into what companies are specifically including in their data breach response plans to get to the root cause of why their confidence is lacking and the areas where they struggle to follow best practices.

As shown in Figure 1, of the 81 percent of respondents who say their company has a plan, only 34 percent say these plans are very effective or effective. This is a slight increase from 30 percent in 2014. Thus, major gaps remain in how companies are comprehensively preparing for a data breach.

Specifically, organizations aren’t taking into account the full breadth of procedures that need to be incorporated in the response plan and aren’t considering the wide variety of security incidents that can happen. The good news is some of the barriers to addressing those issues can be easily solved.

Some of the key findings we uncovered from this year’s survey include:

Data breaches are more concerning than product recalls and lawsuits. A majority of business leaders acknowledge the potential damage data breaches can cause to corporate reputation is significant. They ranked a data breach second only to poor customer service and ahead of product recalls, environmental incidents and publicized lawsuits. The combination of the higher likelihood and significant impact has caused data breaches to be a major issue across all sectors.

Data breach preparedness sees increased awareness from senior leadership. Boards of directors, chairmen and CEOs have become more involved and informed in the past 12 months about their companies’ plans to deal with a possible data breach. In 2014, only 29 percent of respondents said their senior leadership was involved in data breach preparedness. This year, perhaps due to recent mega breaches, 39 percent of respondents say their boards, chairmen and CEOs are involved at a high level. Most interesting, respondents reporting these leaders’ participation in a high-level review of the data breach response plan in place increased from 45 percent to 54 percent.

Significant increase in response plans over three years. As discussed above, this year more companies have a baseline data breach response plan in place. Since first conducting this study in 2013, the percentage of organizations that reported having a data breach response plan increased from 61 percent to 81 percent. However, it is surprising that still not all companies are taking the basic step of developing a data breach response plan.

Many are still struggling to feel confident in their ability to secure data and manage a breach. Figure 1 above shows only 34 percent of respondents say their organizations’ data breach response plan is very effective or effective. Despite increased security investments and incident response planning, when asked in detail about the preparedness of their organization, many senior executives are not confident in how they would handle a real-life incident.

Following are reasons these plans are rated as less effective than they should be:

  • Forty-one percent of respondents say their organization is not effective, or they are unsure about the effectiveness of their data breach response plan.
  • Only 28 percent of respondents rate their organization’s response plan as effective in reducing the likelihood of lawsuits, and only 32 percent rate their response plan as effective for protecting customers.
  • Executives are concerned about their ability to respond to a data breach involving confidential information and intellectual property. Only 39 percent report they are prepared to respond to this type of incident.
  • Only 32 percent of organizations report they understand what needs to be done following a material data breach to prevent negative public opinion.
  • Only 28 percent of organizations are confident in their ability to minimize the financial and reputational consequences of a material breach.