Verizon grounds JetBlue — another Plan B goes badly

Bob Sullivan

Verizon managed to ground an airline for several hours on Jan. 14. But it’s important to ask: Who’s really to blame?

Discount airline JetBlue appears to have cut some corners in its disaster recovery planning. The airline suffered nationwide delays on Thursday when many of its computer systems went down, preventing fliers from checking in. The problems lasted at least three hours, and probably longer, halting flights at many airports.

JetBlue blamed the outage on Verizon.

“We’re currently experiencing network issues due to a Verizon data center power outage. We’re working to resolve the issue as soon as possible,” JetBlue said on its blog. “The power was disrupted during a maintenance operation at the Verizon data center.  Verizon can provide more details into the cause.”

At 2:30 p.m. ET, JetBlue posted an update saying it was still experiencing system issues.

Verizon told me the problem began three hours earlier.

“On Thursday morning at 11:37 am ET, a Verizon data center experienced a power outage that impacted JetBlue’s operations,” the firm said in a statement. “JetBlue’s systems are now being restored.  Our engineering team has been working to restore service quickly, and power has been restored to the data center.”

The impact of the outage was dramatic: “Customer support systems, including jetblue.com, mobile apps, 1-800-JETBLUE, check-in and airport counter/gate systems, are impacted,” JetBlue said.

Consumers spent the early afternoon Tweeting their displeasure and the uncertainty the outage created.

“At least make some estimates on flight delays so people can make informed decisions,” said Jared Levy on Twitter.

It’s worth noting that JetBlue said on its blog at 1:50 p.m. that power had been restored to Verizon’s data center, “and we are working to fully restore our systems as soon as possible.”

That sure sounds like JetBlue is completely dependent on Verizon. Maybe the firm had a failover plan that it never invoked, having decided that switching over would take longer than waiting for Verizon to fix its electricity problem. Neither option sounds great. A misbehaving backhoe can take down a major airline’s operation? In the middle of the day? And it stays down until Verizon can implement a power fix? Sounds like someone’s plan B wasn’t grade A.

That’s not uncommon, however. One of my favorite stories, now nearly five years old, was titled “Why plan B’s often work out badly.” Inspired by the Japanese nuclear power plant disaster, I examined why backup plans often fail when reality strikes. The short answer: It’s very hard to create an entirely duplicate universe where you can test plan B. And it’s even harder to keep testing it regularly and make sure it actually works. To wit: Your snow plow often doesn’t start after the first snow because it’s been sitting idle all summer.
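The snow-plow point has a simple operational translation: a standby system that is only probed during a disaster will fail during the disaster. A minimal sketch of the idea, in Python, is a drill that health-checks the idle standby on the same schedule as the primary. The endpoint URLs here are hypothetical placeholders, not anything JetBlue or Verizon actually runs.

```python
import urllib.request

# Hypothetical endpoints -- illustrative only.
ENDPOINTS = {
    "primary": "https://primary.example.com/health",
    "standby": "https://standby.example.com/health",
}

def probe(url, timeout=5):
    """Return True if the endpoint answers with HTTP 200."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        return False

def drill(probe_fn=probe):
    """Check every endpoint, including the idle standby.

    The point of the drill is that the standby gets exercised on the
    same schedule as the primary, not only when disaster strikes.
    """
    return {name: probe_fn(url) for name, url in ENDPOINTS.items()}
```

Run on a timer (cron, a scheduler, whatever fits), an unhealthy standby becomes a routine alert instead of a mid-outage surprise.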

Of course, big airlines should do better. But the reality is, they often don’t. Hopefully more details will emerge soon so we can all learn from this.

 

Anti-encryption opportunists seize on Paris attacks; don't be fooled

Bob Sullivan

It’s natural to look for a scapegoat after something terrible happens, like this: If only we could read encrypted communications, perhaps the Paris terrorist attacks could have been stopped.  It’s natural, but it’s wrong.  Read every story you see about Paris carefully and look for evidence that encryption played a role.

There’s a reason The Patriot Act was passed only a few weeks after 9-11, and it wasn’t because Congress was finally able to act quickly and efficiently on something. The speed came because many elements of the Patriot Act had already been written, and forces with an agenda were lying in wait for a disaster so they could push that agenda. That is wrong.

So here we are now, once again faced with political opportunism after an unthinkable human tragedy, and we must remain strong in the face of it.  There is no simple answer to terrorism, and we should all know this by now.  And so there must be no simple discussion about the use of encryption in the Western world.  The debate requires a bit of thoughtful analysis, and we owe it to everyone who ever died for a free society to have this debate thoughtfully.

The basics are this: Only recently, computing power has become inexpensive enough that ordinary citizens can scramble messages so effectively that even governments with near-infinite resources cannot crack them. Such secret-keeping powers scare government officials, and for good reason.  They can, theoretically, allow criminals and terrorists to communicate with a cloak of invisibility.  Not surprisingly, several government officials have called for a method that would allow law enforcement to crack these codes.  There are many schemes for this, but they all boil down to something akin to creating a master key that would be generated by encryption-making firms and given to government officials, who would use the key only after a judge granted permission.  This is sometimes referred to as creating “backdoors” for law enforcement.
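The structural weakness of a master key is easy to demonstrate. In a toy sketch (this uses a throwaway XOR "cipher" for illustration only; it is not real cryptography, and ESCROW_KEY is a hypothetical stand-in for the government master key), every message gets its own fresh key, but a copy of that key is wrapped under the single escrow key. Whoever holds, or steals, the escrow key can unwrap every message key ever created.

```python
import secrets

def xor(data: bytes, key: bytes) -> bytes:
    """Toy XOR 'cipher' -- for illustration only, not real cryptography."""
    return bytes(b ^ k for b, k in zip(data, key))

# Hypothetical government master key under a backdoor scheme.
ESCROW_KEY = secrets.token_bytes(32)

def encrypt_with_escrow(message: bytes):
    """Encrypt under a fresh per-message key, but also wrap that key
    under the single escrow key, as backdoor proposals require."""
    msg_key = secrets.token_bytes(32)
    ciphertext = xor(message.ljust(32), msg_key)   # pad toy message to 32 bytes
    wrapped_key = xor(msg_key, ESCROW_KEY)          # escrow-key holder can unwrap this
    return ciphertext, wrapped_key

def escrow_decrypt(ciphertext, wrapped_key, escrow_key):
    """What ANY escrow-key holder -- lawful or not -- can do to ANY message."""
    msg_key = xor(wrapped_key, escrow_key)
    return xor(ciphertext, msg_key)
```

The point of the sketch: the security of every conversation collapses to the secrecy of one value, which is exactly the objection raised below.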

Governments can already listen in on telephone conversations after obtaining the proper court order.  What’s the difference with a master encryption key?

Sadly, it’s not so simple.

For starters, U.S. firms that sell products using encryption would create backdoors, if forced by law. But products created outside the U.S.? They’d create backdoors only if their governments required it. You see where I’m going. There will be no global master key law that all corporations adhere to. By now I’m sure you’ve realized that such laws would only work to the extent that they are obeyed. Plenty of companies would create rogue encryption products, since the market for them would explode. And of course, terrorists are hard at work creating their own encryption schemes.

There’s also the problem of existing products, created before such a law. These have no backdoors and could still be used. You might think of this as the genie-out-of-the-bottle problem, which is real. It’s very, very hard to undo a technological advance.

Meanwhile, creation of backdoors would make us all less safe. Would you trust governments to store and protect such a master key? Managing defense of such a universal secret-killer is the stuff of movie plots. No, the master key would most likely get out, or the backdoor would be hacked. That would mean illegal actors would still have encryption that worked, but the rest of us would not. We would be fighting with one hand behind our backs.

In the end, it’s a familiar argument: disabling encryption would only stop people from using it legally. Criminals and terrorists would still use it illegally.

Is there some creative technological solution that might help law enforcement find terrorists without destroying the entire concept of encryption? Perhaps, and I’d be all ears. I haven’t heard it yet.

Only a few weeks after 9-11, a software engineer who told me he was working for the FBI contacted me and told me he was helping create a piece of software called Magic Lantern.  It was a type of computer virus, a Trojan horse keylogger, that could be remotely installed on a target’s computer and steal passphrases used to open up encrypted documents.  The programmer was uncomfortable with the work and wanted to expose it. I wrote the story for msnbc.com, and after denying the existence of Magic Lantern for a while, the FBI ultimately conceded using this strategy.  While we could debate the merits of Magic Lantern, at least it constituted a targeted investigation — something far, far removed from rendering all encryption ineffective.

For a far more detailed examination of these issues, you should read Kim Zetter at Wired, as I always do. Then make up your own mind.

Don’t let a politician or a law enforcement official with an agenda make it for you. Most of all, don’t allow someone who capitalizes on tragedy a mere hours after the first blood is spilled — an act so crass it disqualifies any argument such a person makes — to influence your thinking.


The fake account problem — why it's everyone's problem

Larry Ponemon

User growth has become a key indicator of a company’s financial growth and sustainability. Even a company’s revenues can take a back seat to its user base as a metric that predicts future success. While it may have taken the telephone 70 years to reach 50 million users, in today’s fast-paced world companies can reach that same number in a matter of months.

As the user-base becomes a new form of currency, driving valuations of companies around the world higher and faster than ever before, it is becoming increasingly important to protect the integrity of these users. Information about who users are, what they do and how they do it is incredibly valuable. If not adequately protected this information can be (and is being) exploited.

The purpose of this report is to understand the scope of registration fraud, and how this epidemic is impacting companies and their users. It offers a glimpse into how companies verify and protect their users, and the damage that can be done when fraudulent users and fake accounts are allowed to exist within a user base.

Thanks to a sponsorship from Telesign, we surveyed 584 U.S. and 414 UK individuals who are involved in the registration, use or management of user accounts and hold such positions as product manager, IT security practitioner and app developer. Eighty-nine percent of these respondents say their organization considers its user base a critical asset with an average value of $117 million.

However, account fraud is becoming more prevalent because most organizations have a difficult time ensuring that bona fide users, not bad actors, are authenticated during the registration process. Only 36 percent believe they are able to avoid fraudulent registrations. Moreover, once fake users are registered, they spam legitimate users and often create more fraudulent accounts. Fake users also steal confidential information and engage in phishing, social engineering and account takeover.
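Screening registrations for fakes doesn't have to start complicated. A minimal sketch of the kind of check the report says most companies skip: flag throwaway email domains and signup velocity from a single IP. The blocklist and thresholds here are hypothetical; real services use large, frequently updated feeds and far richer signals.

```python
from collections import Counter

# Hypothetical blocklist -- real deployments use curated, updated feeds.
DISPOSABLE_DOMAINS = {"mailinator.com", "tempmail.example", "10minutemail.example"}

signups_per_ip = Counter()

def registration_risk(email: str, ip: str, max_per_ip: int = 3):
    """Return a list of risk flags for one signup attempt.

    A sketch only -- not a complete fraud-prevention system.
    """
    flags = []
    domain = email.rsplit("@", 1)[-1].lower()
    if domain in DISPOSABLE_DOMAINS:
        flags.append("disposable-email")
    signups_per_ip[ip] += 1
    if signups_per_ip[ip] > max_per_ip:
        flags.append("ip-velocity")
    return flags
```

Even a screen this crude forces a decision the report says most companies never make: what to do when a new "user" looks fake.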

The findings reveal why companies are vulnerable to the threats of fake users:

  • The authentication process is difficult to manage, according to 69 percent of respondents, allowing fake users to infiltrate the user base.
  • Fifty-eight percent of respondents say user convenience is most important to their fraud prevention strategy and 42 percent of respondents say ease of use is critical. Only 21 percent say security is important.
  • The majority of respondents (54 percent) say a phone number is enough to stop fraudulent registrations and protect account access.
  • Companies seem to be unwilling to crack down on fraudulent registrations. Forty-three percent of respondents say their company doesn’t worry about the registration of fake accounts to avoid friction in the registration process. Most companies do not have a formal method for determining whether a potential user is real.
  • Only 39 percent of respondents say their company is vigilant in determining that each user account belongs to a real person.
  • Only 25 percent of respondents believe the traditional username and password(s) is a reasonably secure authentication method for their users. However, 94 percent of respondents say they use passwords or PINs and 79 percent use email addresses to create an account(s).

To read the rest of the report findings, please download the PDF from Telesign.com


Is your company ready for a big data breach? Only one-third say they are

Larry Ponemon

With data breaches continuing to increase in frequency and severity, it comes as no surprise that businesses are acknowledging this risk as a top concern and priority. Nearly half of organizations surveyed report having a data breach involving the loss or theft of more than 1,000 records containing sensitive or confidential information in the past two years. And the frequency of data breaches is increasing. Sixty-three percent of these respondents report their company had two or more breaches in the past two years.

However, the enclosed findings from our Third Annual Study: Is Your Company Ready for a Big Data Breach sponsored by Experian® Data Breach Resolution, illustrate that many companies still lack confidence in their ability to manage these issues and execute their data breach response plan. We surveyed 604 executives and staff employees who work primarily in privacy and compliance in the United States.

Since 2013, we have tracked changes in how confident companies are in responding to a data breach. This year, we took our analysis a step further by digging into what companies are specifically including in their data breach response plans to get to the root cause of why their confidence is lacking and the areas where they struggle to follow best practices.

As shown in Figure 1, of the 81 percent of respondents who say their company has a plan, only 34 percent say these plans are very effective or effective. This is a slight increase from 30 percent in 2014. Thus, major gaps remain in how they are comprehensively preparing for a data breach.

Specifically, organizations aren’t taking into account the full breadth of procedures that need to be incorporated in the response plan and aren’t considering the wide variety of security incidents that can happen. The good news is some of the barriers to addressing those issues can be easily solved.

Some of the key findings we uncovered from this year’s survey include:

Data breaches are more concerning than product recalls and lawsuits. A majority of business leaders acknowledge the potential damage data breaches can cause to corporate reputation is significant. They ranked a data breach second only to poor customer service and ahead of product recalls, environmental incidents and publicized lawsuits. The combination of the higher likelihood and significant impact has caused data breaches to be a major issue across all sectors.

Data breach preparedness sees increased awareness from senior leadership. Boards of directors, chairmen and CEOs have become more involved and informed in the past 12 months about their companies’ plans to deal with a possible data breach. In 2014, only 29 percent of respondents said their senior leadership were involved in data breach preparedness. This year, perhaps due to recent mega breaches, 39 percent of respondents say their boards, chairmen and CEOs are involved at a high level. Most interesting, their participation in a high-level review of the data breach response plan increased from 45 percent to 54 percent of respondents.

Significant increase in response plans over three years. As discussed above, this year more companies have a baseline data breach response plan in place. Since first conducting this study in 2013, the percentage of organizations that reported having a data breach response plan increased from 61 percent to 81 percent. However, it is surprising that still not all companies are taking the basic step of developing a data breach response plan.

Many are still struggling to feel confident in their ability to secure data and manage a breach. Figure 1 above shows only 34 percent of respondents say their organizations’ data breach response plan is very effective or effective. Despite increased security investments and incident response planning, when asked in detail about the preparedness of their organization, many senior executives are not confident in how they would handle a real-life issue.

Following are reasons for rating these plans as not as effective as they should be:

  • Forty-one percent of respondents say their organization is not effective or unsure about the effectiveness of their data breach response plan.
  • Only 28 percent of respondents rate their organization’s response plan as effective in reducing the likelihood of lawsuits; and only 32 percent rate their response plan as effective for protecting customers.
  • Executives are concerned about their ability to respond to a data breach involving confidential information and intellectual property. Only 39 percent report they are prepared to respond to this type of incident.
  • Only 32 percent of organizations report they understand what needs to be done following a material data breach to prevent negative public opinion.
  • Only 28 percent of organizations are confident in their ability to minimize the financial and reputational consequences of a material breach.

Fine print alert: Hey kids! Your parents have read and agreed to this, right? (wink)

Snapchat.com

Hey parents! You won’t believe the contracts your kids have been roped into.

Like a fine print virus spreading quickly around the globe, underage teenagers are suddenly being shrink-wrapped into contracts of dubious enforceability all around the web. The situation highlights a conundrum for companies targeting the 13-17 crowd: how to set rules with minors who generally can’t actually consent to contract terms, and almost certainly don’t get their parents’ permission to do so.

Snapchat changed its terms of service recently, attracting a lot of attention. While most of it was focused on the company giving itself virtual ownership over content posted on the service, something else in the terms caught my eye.

“By using the Services, you state that: You can form a binding contract with Snapchat—meaning that if you’re between 13 and 17, your parent or legal guardian has reviewed and agreed to these Terms.”

Well, really it caught privacy lawyer Joel Winston’s eye. He called it to my attention.

Let me take a guess and estimate that of Snapchat’s roughly 100 million users, most of them minors, perhaps 43 or so have shown those terms to their agreeable parents.  In other words, if your kid uses Snapchat, he or she has almost certainly lied about you to the company, all in the name of forming a contract – of sorts.

Winston had a different problem with the language.

“A minor cannot declare herself competent to sign a binding contract that would otherwise require consent from an adult,” he said.  There are some exceptions to that, which we’ll get to.  But the headline point remains.  Generally speaking, contracts with minors aren’t really contracts.

So what’s this language doing in Snapchat’s terms of service? It’s not just Snapchat. That very language appears in lots of kid-focused services, like Skout (a flirting tool), THQ (a game site), and even PETAkids.com (an animal rights site). Similar terms appear across the Web.

Snapchat certainly is a leader in the 13-17 space, however.  I asked the firm to comment about its terms.  It declined.

When I ran Snapchat’s terms by Ira Rheingold, executive director of the National Association of Consumer Advocates, he was aghast.

“Why did they do this, to frighten people into not suing them?” he said, rhetorically. “I cannot imagine any court would find this binding. No lawyer worth his salt would think this is going to stick…a youngster cannot consent.”

Maybe…and maybe not. Last year, a California court actually did rule that, in some circumstances, terms of service are enforceable against minors. That case involved Facebook’s use of member photos in “Sponsored Stories.” Facebook’s terms at the time provided for what amounted to a publicity rights release, and the plaintiffs in the case argued that release was unenforceable. A judge sided with Facebook.

To put a fine point on it, minors can agree to certain kinds of contract terms (that allow them to work, for example), but such contracts have a unique status and can be voided at any time by the minor. Because the plaintiffs in the case continued to use Facebook, they had not voided their contract, and therefore Facebook was protected by the agreement.

“This is a big win for all online services, not only Facebook,” wrote Eric Goldman in a blog post about the case.

The situation highlights the unique problem of dealing with children over 13 but under 18, Goldman said to me.

“Snapchat may have legally enforceable contracts with minors. Contracts with minors are usually ‘voidable,’ meaning that the minor can tear up the contract whenever he/she wants. However, until the minor disaffirms, the contract is valid. And in the case of social networking services, the courts have indicated that minors can disaffirm the contract only by terminating their accounts, meaning that the contract remains legally binding for the entire period of time the minor has the account,” he said. “As a contracts scholar, I can understand the formalist logic behind this conclusion, but it conflicts with the conceptual principle that minors aren’t well-positioned to protect their own interests in contract negotiations.”

On the other hand, the solution might be worse than the problem itself.

“The counter-story is that most online services don’t have any reliable way to determine the age of their users, and an adhesion contract that works unpredictably on only some classes of users isn’t really useful. And I don’t think anyone would favor web-wide ‘age-gating’ as the solution to that problem,” he said.

Of course, the problem isn’t just the existence of a contract, but what the terms of that contract might be, and whether a minor is capable of understanding and consenting to its terms.  Winston is concerned with what comes after the “parental promise” section in Snapchat’s contract: a binding arbitration agreement and class action waiver. (That’s the kind of waiver the Consumer Financial Protection Bureau is about to ban.)

“All claims and disputes arising out of, relating to, or in connection with the Terms or the use the Services that cannot be resolved informally or in small claims court will be resolved by binding arbitration,” the terms say. “ALL CLAIMS AND DISPUTES WITHIN THE SCOPE OF THIS ARBITRATION AGREEMENT MUST BE ARBITRATED OR LITIGATED ON AN INDIVIDUAL BASIS AND NOT ON A CLASS BASIS.” (Snapchat’s CAPS, not mine)

As Winston sees it, not only is Snapchat requiring minors to agree to a contract, it’s requiring them to surrender their rights to have their day in court.

“I would certainly be very interested to read any legal ruling that enables a 13 year old to agree that she will ‘waive any constitutional and statutory rights to go to court and have a trial in front of a judge or jury,’ “ he said, echoing the terms.  “I am not currently aware of any case law that enforces a mandatory binding arbitration clause against an adult parent based on the purported ‘consent’ of her minor child.”

Were those terms to survive a court challenge, and if Snapchat tried something like Sponsored Stories, Snapchat’s minor users would have waived their rights to join a class action against the firm.

In the end, you might be wondering why parents – or kids – would want to argue with Snapchat anyway. Winston leaps at the chance to answer that.

“The Snapchat TOS contract is relevant because the company is actively collecting personal data from millions of children. That includes device phonebook, camera and photos, user location information (from) GPS, wireless networks, cell towers, Wi-Fi access points, and other sensors, such as gyroscopes, accelerometers, and compasses,” he said. “It’s also relevant because Snapchat is sharing user data from millions of children with third-parties and business partners for the purpose of advertising and monetization.”

I’m not one to give parents more homework, and I hesitate to advise you to read the terms of service agreement for every app on your child’s phone. But it might be a good learning moment to ask your kids what they’ve told tech companies about you, and find out what you’ve agreed to.

Why are the bad guys winning? They have a two-month head start, new report finds

Bob Sullivan

Bad guys are so much more nimble than good guys that they have a two-month head start in most hacking situations, a new report has found.  Meanwhile, software flaws that are even a decade old continue to be used to hack hundreds of thousands of computers, according to Kenna Security.

In the hacking world, a secret software flaw that can be exploited is known as a “zero-day” vulnerability.  Known only to a select few, zero-day exploits give hackers the ability to break into machines at will, and much has been made of this alarming problem.

But even known vulnerabilities might as well be “zero day” flaws, suggest findings in a report issued Tuesday by Kenna on what it calls the “Remediation Gap.” Kenna says it examined one billion breach events and came to this disturbing conclusion:

Most organizations require 100-120 days before fixing vulnerabilities; meanwhile, hackers exploit them within 40-60 days.  That’s two months of free shots.
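For readers who want to see the arithmetic, the “two months of free shots” falls out of the midpoints of those two ranges. A rough sketch (the day ranges are Kenna’s; the midpoint averaging is mine):

```python
# Midpoints of the ranges reported by Kenna (in days).
avg_fix_time = (100 + 120) / 2      # typical time for organizations to patch a flaw
avg_exploit_time = (40 + 60) / 2    # typical time for hackers to weaponize it

# Window during which a known flaw is being exploited but not yet patched.
exposure_window = avg_fix_time - avg_exploit_time
print(exposure_window)  # prints 60.0 -- roughly two months
```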

“The public has grown plenty familiar with hackers seeking out a specialized target, such as Ashley Madison. But automated, non-targeted attacks still remain the most significant threat to businesses of all sizes,” said Karim Toubba, CEO of Kenna. “Every company has data that hackers want to get their hands on, but security teams remain one step behind their adversaries. Security teams need to move quickly to remediate critical vulnerabilities, but they don’t have the tools needed to keep pace with hackers.”

The report suggests that too much attention has been placed recently on targeted attacks, while old-fashioned “spray and pray” attacks remain many firms’ greatest threat.

“Of the organizations that Kenna has evaluated, 100 percent are susceptible to vulnerabilities – which correlate to at least one stable publicly available exploit,” the report says.

Kenna said it pulled its sample from a database of 10 million successful attacks per week, collected through AlienVault’s Open Threat Exchange, along with threat intelligence data from various partners, including Dell SecureWorks, Verisign, SANS ISC and US-CERT.

“By executing this approach, we were able to estimate the probability that a vulnerability might be exploited, as well as the sheer volume of attacks, based on the volume of attacks displayed by the aggregated data,” the report says.

Security professionals do a poor job of prioritizing which threats they remediate, and often fail to patch old flaws that are known to be popular among hackers in favor of top-of-mind flaws that have been recently announced, the firm argues.

“One of the points we need to make is that the vulnerabilities in question are often very old, well-known weaknesses that simply haven’t been fixed yet. We’ve seen this over and over again as we evaluate the data,” the report says. “In many cases these vulnerabilities are not sexy, and they don’t hog the spotlight – but in many environments they actually represent major weaknesses.”

For example, Kenna spotted 156,000 exploitations of the Slammer worm executed during 2014. Slammer hit so many servers that it dramatically slowed down general Internet traffic – in 2003.

The report also finds that automated attacks are on the rise: Kenna says there have been over 1.2 billion successful exploits witnessed in 2015 to date, compared to 220 million successful exploits witnessed in 2013 and 2014 combined – an increase of 445 percent.
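That 445 percent figure is a straightforward percent-increase calculation. A quick back-of-the-envelope check, using the report’s own totals:

```python
exploits_2015 = 1_200_000_000     # successful exploits witnessed in 2015 to date
exploits_2013_14 = 220_000_000    # successful exploits in 2013 and 2014 combined

# Standard percent increase: (new - old) / old * 100
increase_pct = (exploits_2015 - exploits_2013_14) / exploits_2013_14 * 100
print(round(increase_pct))  # prints 445
```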

“Companies will continue to face the cold reality that throwing people at the problem is no longer sufficient for remediating vulnerabilities and combating the sheer volume of automated attacks,” Toubba said.

Cyber crime costs jump by 19 percent

Larry Ponemon

We are pleased to present the 2015 Cost of Cyber Crime Study: United States, the sixth annual study of US companies. Sponsored by Hewlett Packard Enterprise, this year’s study is based on a representative sample of 58 organizations in both the public and private sectors. While our research focused on organizations located in the United States, most are multinational corporations.

This is the fourth year Ponemon Institute has conducted cyber crime cost studies for companies in the United Kingdom, Germany, Australia and Japan, and the second year for the Russian Federation. This year we added Brazil. The findings from this research are presented in separate reports.

The number of cyber attacks against US companies continues to grow in frequency and severity. Recent cyber attack victims include Anthem Blue Cross and Blue Shield, United Airlines, Sabre Corp. and American Airlines. In the public sector, the Office of Personnel Management sustained an attack that resulted in the theft of information about more than 4.2 million current and former federal employees, and attacks against the Internal Revenue Service resulted in the theft of personal information about more than 100,000 taxpayers.

While the companies represented in this research did not have cyber attacks as devastating as these were, they did experience incidents that were expensive to resolve and disruptive to their operations. For purposes of this study, we refer to cyber attacks as criminal activity conducted via the Internet. These attacks include stealing an organization’s intellectual property, confiscating online bank accounts, creating and distributing viruses on other computers, posting confidential business information on the Internet and disrupting a country’s critical national infrastructure.

Our goal is to quantify the economic impact of cyber attacks and observe cost trends over time. We believe a better understanding of the cost of cyber crime will assist organizations in determining the appropriate amount of investment and resources needed to prevent or mitigate the consequences of an attack.

In our experience, a traditional survey approach does not capture the necessary details required to extrapolate cyber crime costs. Therefore, we conduct field-based research that involves interviewing senior-level personnel about their organizations’ actual cyber crime incidents.

Approximately 10 months of effort is required to recruit companies, build an activity-based cost model to analyze the data, collect source information and complete the analysis.

For consistency purposes, our benchmark sample consists of only larger-sized organizations (i.e., a minimum of approximately 1,000 enterprise seats). The study examines the total costs organizations incur when responding to cyber crime incidents. These include the costs to detect, recover, investigate and manage the incident response. Also covered are the costs of after-the-fact activities and efforts to contain additional costs from business disruption and the loss of customers. These costs do not include the plethora of expenditures and investments made to sustain an organization’s security posture or compliance with standards, policies and regulations.

[Figure 1: cost of cyber crime chart]

Figure 1 presents the estimated average cost of cyber crime for the seven countries represented in this research. These figures are converted into US dollars for comparative purposes. As shown, there is significant variation in total cyber crime costs among participating companies in the benchmark samples. The US sample reports the highest total average cost at $15 million and the Russian Federation sample reports the lowest total average cost at $2.4 million.

Key findings:

Cyber crimes continue to be very costly for organizations. We found that the mean annualized cost for 58 benchmarked organizations is $15 million per year, with a range from $1.9 million to $65 million each year per company. Last year’s mean cost per benchmarked organization was $12.7 million. Thus, we observe a $2.7 million (19 percent) increase in mean value. The net increase over six years in the cost of cyber crime is 82 percent.

Cyber crime cost varies by organizational size. Results reveal a positive relationship between organizational size (as measured by enterprise seats) and annualized cost. However, based on enterprise seats, we determined that small organizations incur a significantly higher per capita cost than larger organizations ($1,571 versus $667).

The cost of cyber crime increases for all industries. The average annualized cost of cyber crime appears to vary by industry segment, where organizations in financial services, energy & utilities and defense & aerospace experience a higher cost of cyber crime. Organizations in the consumer products and hospitality industries on average experience a much lower cost of cyber crime.

The most costly cyber crimes are those caused by denial of service, malicious insiders and malicious code. These account for more than 50 percent of all cyber crime costs per organization on an annual basis. Mitigation of such attacks requires enabling technologies such as SIEM, intrusion prevention systems, application security testing solutions and enterprise GRC solutions.

Cyber attacks can get costly if not resolved quickly. Results show a positive relationship between the time to contain an attack and organizational cost. Please note that resolution does not necessarily mean that the attack has been completely stopped. For example, some attacks remain dormant and undetected (i.e., modern-day attacks).

The average time to resolve a cyber attack was 46 days, with an average cost to participating organizations of $1,988,554 during this 46-day period. This represents a 22 percent increase from last year’s estimated average cost of $1,593,627, which was based upon a 45-day resolution period. Results show that malicious insider attacks can take an average of approximately 63 days to contain.

Information theft continues to represent the highest external cost, followed by the costs associated with business disruption. On an annualized basis, information theft accounts for 42 percent of total external costs. Costs associated with disruption to business or lost productivity account for 36 percent of external costs (up 4 percent from the six-year average).

Detection and recovery are the most costly internal activities. On an annualized basis, detection and recovery combined account for 55 percent of the total internal activity cost, with cash outlays and direct labor representing the majority of these costs. However, since 2013 this has declined from 40 percent to 36 percent in 2015. The application layer has increased in budget allocation from 15 percent in 2013 to 20 percent in 2015.

Deployment of security intelligence systems makes a difference. The cost of cyber crime is moderated by the use of security intelligence systems (including SIEM). Findings suggest companies using security intelligence technologies were more efficient in detecting and containing cyber attacks. As a result, these companies enjoyed an average cost savings of $3.7 million when compared to companies not deploying security intelligence technologies. Companies deploying security intelligence systems experienced a substantially higher ROI, at 32 percent, than all other technology categories presented. Also significant are the estimated ROI results for companies that extensively deploy encryption technologies (27 percent) and advanced perimeter controls such as UTM, NGFW and IPS with reputation feeds (15 percent).

Deployment of enterprise security governance practices moderates the cost of cyber crime. Findings show companies that invest in adequate resources, employ certified or expert staff and appoint a high-level security leader have cyber crime costs that are lower than companies that have not implemented these practices. Specifically, a sufficient budget can save an average of $2.8 million, employment of certified/expert security personnel can save $2.1 million and the appointment of a high-level security leader can reduce costs by $2 million.

Click here to read the rest of the report.

Volkswagen software tricked emissions tests, feds say; hacking of customers is the real problem

Bob Sullivan

A Volkswagen executive recently proclaimed that by 2020, all the automaker’s cars will be smartphones on wheels.

Turns out, Volkswagen cars were a little too smart for their own good. The Environmental Protection Agency on Friday accused the firm of using software to evade U.S. emissions testing. Computer code known as a “defeat device” recognized when the car was being tested and kicked on full emissions control systems. The rest of the time, the car chose … let’s say … “performance mode” over Earth-friendly mode.

The Obama administration has ordered the German automaker to recall half a million 4-cylinder Volkswagen and Audi cars from model years 2009-2015 and reprogram them. The firm could also face fines that could range into the billions. (At the moment, the firm hasn’t issued a statement.)

If accurate, such brazen use of software to evade federal law not only shocks the senses, it raises serious consumer protection issues. Many drivers are today rightly horrified that they were tricked into polluting the planet. They also were driving cars with performance that was artificially boosted — perhaps drivers would have chosen other cars if test drives of competitors’ models had been a fair fight.

In short, consumers have been hacked. Their cars’ software was doing things without their knowledge, just as if a virus writer had dropped a Trojan on their machines.

Recently, we talked about the very real fear drivers expressed to Kelley Blue Book — 4 out of 5 said car hacking will be a real problem in the next three years.

The survey referred to hacking by outside criminals, but there’s another kind of hacking going on here — when companies hack their own consumers. Products we buy are now full of mysterious software, often instructed to do things we never imagined. TVs listen to our conversations; dating sites trick us into flirting with bots; our social networks and grocery stores talk about us; our web software tattles on us to the highest bidder; and our cars trick emissions officials.

During an age when the very nature of advertising is constantly under siege, it makes sense that firms which already have a presence in our lives try to get a few more pieces of data out of us, and monetize that relationship just a little bit more. The temptation, if not desperation, is great.

But Friday’s Volkswagen story should be the beginning of some really serious soul searching, perhaps even a turning point for the Internet of Things.  It’s inevitable: our light bulbs, toasters, door bells, and our cars will all communicate some day soon.  We need a rock-solid ethic — not just laws, but a social morality — that machines should never do things unless people know all about them.  People should run the gadgets, not the other way around.

If we build a world of sneaky machines, we will deserve the consequences.