The day my bank, yet again, blocked me from my money for ‘security’ — and why two-factor tools aren’t ready for prime time

Bob Sullivan

How can a bank — or any organization — become less secure in its attempts to become more secure? Let me tell you how.

Security must do two things: Protect and enable.  If your security doesn’t enable people to do what they have to do, they will inevitably circumvent it, creating all sorts of exception conditions as they do. And that is the path to perdition (and hacking).

Security often fails because the people who design it are much better at throwing up roadblocks than they are at creating pathways. Both are equally important if a security scheme is to work.

This month brought yet another story chronicling theft of millions of passwords by hackers, once again highlighting the importance of implementing “not-just-passwords security” at places that really matter.

But I’m about to turn off two-factor authentication at my bank, right at the moment when everyone seems hell bent to turn it on. Why?  Because it doesn’t make me safer if it doesn’t work; it just prevents me from accessing my money.

I’ve run into classic 21st Century Red Tape headaches with my bank recently as I try very hard to use its two-factor authentication scheme.  I often don’t like single-anecdote stories, but occasionally they illuminate larger problems so perfectly they are worth telling. So here goes.

A quick review:  Two-factor authentication adds a strong layer of security to a service by requiring two tests be met by a person seeking access — a debit card and a PIN code, for example, representing something you have and something you know.  Online banks and websites are slowly but surely nudging everyone towards various forms of two-factor authentication, because it really does make life harder for hackers.

Most of these two-factor forms involve use of smartphones, as they have become nearly ubiquitous. Log on to a website at a PC, confirm a code sent to your phone.  Something you have (the phone) and something you know (the password). Simple, but elegant, and far harder for bad guys to crack.

And it’s great, when it works. But what about when it doesn’t work?

Here’s a simple problem. Consumers get new phones all the time. If the code is tied to the physical handset, the code doesn’t work any longer. What then?

Turns out this can be a really vexing problem. (Readers of this column know why I had to get a new smartphone recently.)

I’ve been a USAA banking customer for decades. The financial services firm has ranked atop customer satisfaction surveys seemingly forever, and for good reason:  It really does take good care of members.

At least it did, until it tried to implement two-factor security. I try not to be hypocritical, and follow my own advice, so I turned on USAA’s flavor of two-factor pretty early on. It’s a solid design: A Symantec app loaded onto your smartphone offers a temporary token — a 6-digit code — that changes every 30 seconds. The token is tied to the physical handset. Only a person who knows your PIN and can access the token on that handset can log into the website. You can see all the layers of protection that creates.
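Token generators like this typically implement the standard TOTP algorithm (RFC 6238): the handset and the bank share a secret seeded at enrollment, and the 6-digit code is an HMAC of the current 30-second time window. Here is a minimal sketch in Python — the seed value is the RFC's test secret, and whether Symantec's app follows the RFC exactly is an assumption on my part:

```python
import hashlib
import hmac
import struct
import time
from typing import Optional

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """One counter-based code (RFC 4226): HMAC-SHA1 of the counter, dynamically truncated."""
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # low nibble of the last byte picks the truncation offset
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret: bytes, now: Optional[float] = None, step: int = 30, digits: int = 6) -> str:
    """Time-based code (RFC 6238): the counter is just Unix time in 30-second steps."""
    t = int(time.time() if now is None else now)
    return hotp(secret, t // step, digits)

# RFC test secret; a real app stores a per-device secret provisioned at enrollment
seed = b"12345678901234567890"
current_code = totp(seed)  # a fresh 6-digit code; changes every 30 seconds
```

Because the secret lives only on the enrolled handset, a new phone has no way to produce valid codes until the bank re-seeds it — which is exactly the re-enrollment step that went wrong here.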

Sure, it’s a tiny hassle to pull out the phone every time you want to log on to the website — a larger hassle if your phone battery is dead. But that’s a fair price to pay for security.

However, the hassle becomes immense when it comes time to change handsets. So immense that as I type this, I cannot access my bank… and have no idea when I will be able to do so. (UPDATE: I was able to fix my login woes 24 hours later.) And that’s happened twice to me in the past year. Why? Chiefly because USAA is not set up to deal with the problem of new handsets.

To review: When I tried to access the website it demanded a token from my phone — a token that was no longer valid because I had a new phone.  When I tried to use the phone’s app to access my accounts, USAA asked for a password because it didn’t recognize the phone.  I didn’t have a password, I had a token — an invalid token.  You get the picture.

All that is a predictable technology hiccup that’s not the end of the world.  The real problem came next.

A call to customer service seemed to be my last available option, but that was dismal, too. At various times I wasn’t able to get through to customer service phone lines. What’s much worse, however, is what happened when I did get through.

People change phones roughly every two years, so this new-handset problem must come up often. Yet it’s obvious to me USAA operators are not ready to handle it when consumers call. Each time I have reached an operator, I had to spend a lot of time explaining the problem — and remember, I do this for a living. On the first successful call today, the operator merely changed my mobile application login settings after putting me on hold for several minutes. When I protested, she said she had to transfer me to a special department, and then the phone went dead.

After a second call and wait, the operator was sympathetic, but put me on hold quickly and wasted a lot of time trying to set me up with a new phone number.  It took a while before I could convince her that “new phone” meant “new handset” not “new number,” a mistake I will correct in future calls. We eventually agreed that all I needed was someone to turn off two-factor and issue me a temporary password so I could go in and re-establish the connection between my handset and my account.  But after another long hold, and transfers to two other operators, I was told that, sadly, they were having trouble issuing temporary passwords and asked if I could call back in an hour or so.

I’ve left out many steps in this saga. At each stage, of course, I was subject to strict authentication questions. That’s fine — I was asking for a new password, after all. But at the end of my fruitless journey through tech support, when I asked if I could somehow get express treatment when I called back just to find out if I could get a temporary password, I was told “no.” So I will have to, once again, convince a primary operator who I am, that I am having token problems, and that I need a temporary password. There is obviously no “token problem” script ready for my call.

My experience last time was similar, so I know I am not just the victim of bad luck.

The last time this happened, I was sure to give the operator who finally liberated my account some specific feedback: there needs to be a tidy process for dealing with people who get new handsets. Obviously, that hasn’t happened. And so, the first thing I will do when I can access my account is disable the token. (I’ll use another form of two-factor.) While I am afraid of hackers, I’m more afraid of not being able to access my money because my bank has poorly implemented a security solution.

When I called USAA as a reporter to discuss my experience, the firm owned up to the challenges of implementing two-factor security.

“You’ve encountered an experience we are aware of,” said Mike Slaugh, Executive Director, Financial Crimes Prevention, at USAA. “What we’re working on here is a way to make that experience better. … Multi-factor authentication for us at USAA and the industry in general, it’s important.  (Making this experience better) is top of mind for us as we work to help members protect  themselves.”

USAA is hardly the only firm having trouble dealing with two-factor issues.  Independent security analyst Harri Hursti told me about the foibles consumers face when dealing with two-factor authentication that relies on text messages.

“The moment you start traveling, all bets are off. Text messages over roaming are far from reliable – they either are never delivered, or they experience regular delivery delays over 10-15 minutes, which are the most typical time-out limits on the websites,” he said. Hursti, who was in Portugal when I interviewed him, said he was late paying an electricity bill this month because of two-factor pain points.  “Basically, in order to do banking when travelling internationally, you need to start that by turning all security off. And yet you are knowingly getting into increased security risk environment.”
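The time-out Hursti describes is easy to picture server-side: the site records when it sent the code and rejects anything submitted after a short window. A hypothetical sketch — the class name, the field names, and the 5-minute TTL are all illustrative, not any bank’s actual implementation:

```python
import hmac

OTP_TTL_SECONDS = 300  # a typical 5-minute website time-out (illustrative)

class SmsOtpSession:
    """Server-side record of a one-time code sent by SMS (hypothetical sketch)."""

    def __init__(self, code: str, sent_at: float):
        self.code = code
        self.sent_at = sent_at

    def verify(self, submitted: str, now: float) -> bool:
        # The clock starts when the text is *sent*, not when it is delivered:
        # a 10-15 minute roaming delay burns the window before the user sees it.
        if now - self.sent_at > OTP_TTL_SECONDS:
            return False
        # constant-time comparison to avoid leaking the code via timing
        return hmac.compare_digest(submitted, self.code)

session = SmsOtpSession("492816", sent_at=0.0)
session.verify("492816", now=120.0)   # delivered promptly: accepted
session.verify("492816", now=900.0)   # 15-minute roaming delay: rejected
```

Nothing the traveler does is wrong; the design simply assumes domestic delivery times.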

Gartner security analyst Avivah Litan says these kinds of implementation and customer service issues not only threaten adoption of two-factor security, they actually create more pathways for hackers.

“Two factor, in this case, actually weakens security – rather than strengthens it,” she said. “I always tell our clients that their security is only as strong as its weakest link and surely, when they disable two factor authentication on the account, they likely ask the account holder to verify their identity by answering those easily compromised challenge questions, which any criminal who can buy data on the dark web has access to. Therefore this is an easy way for criminals to get access to your account. So not only does two factor authentication without proper supporting processes serve to annoy and greatly inconvenience good legitimate customers, it also does little to keep the bad guys out for this and other reasons.”

As Litan is fond of saying, there’s a fallacy that “harder is better” in security.  It “doesn’t keep bad guys out, but it annoys good guys.”

Perhaps this problem isn’t *that* common yet, as uptake on two-factor is still relatively small (USAA acknowledged that, and it’s common across the industry). Don’t worry: With each password hack, more and more people will turn on two-factor.  If companies blow the implementation, consumers will just as quickly turn it off again.  And we might lose them for several years.

Protect and enable, or we’re all at greater risk.

Healthcare organizations are in the cross hairs of cyber attackers

Larry Ponemon

The State of Cybersecurity in Healthcare Organizations in 2016, sponsored by ESET, found that on average, healthcare organizations represented in this study have experienced almost one cyber attack per month over the past 12 months. Almost half (48 percent) of respondents say their organizations have experienced an incident involving the loss or exposure of patient information during this same period, but 26 percent of respondents are unsure.

The research reveals that healthcare organizations are struggling to deal with the same threats other industries face. According to 79 percent of respondents, system failures are the number one risk. The following threats are also considered serious: unsecure medical devices (77 percent of respondents), cyber attackers (77 percent of respondents), employee-owned mobile devices or BYOD (76 percent of respondents), identity thieves (73 percent of respondents) and mobile device insecurity (72 percent of respondents). Despite citing unsecure medical devices as a top security threat, only 27 percent of respondents say their organization has the security of medical devices as part of their cybersecurity strategy.

With cyber attacks against healthcare organizations growing increasingly frequent and complex, there is more pressure to refine cybersecurity strategies. Moreover, healthcare organizations have a special duty to secure data and systems against cyber hacks. The misuse of patient information and system downtime can not only put sensitive and confidential information at risk but also put the lives of patients at risk as well.

We surveyed 535 IT and IT security practitioners in a variety of healthcare organizations such as private and public healthcare providers and government agencies. Sixty-four percent of respondents are employed in covered entities and 36 percent in business associates. Eighty-eight percent of organizations represented in this study have a headcount of between 100 and 500.

[Figure 1: PS report chart, April 2016]

Figure 1 shows the variety of threats healthcare organizations are struggling to deal with.

The following are key findings from this research:

Healthcare organizations experience monthly cyber attacks. Healthcare organizations experience, on average, a cyber attack almost monthly (11.4 attacks on average per year), as well as the loss or exposure of sensitive and confidential patient information. However, 13 percent are unsure how many cyber attacks they have endured. Almost half of respondents (48 percent) say their organization experienced an incident involving the loss or exposure of patient information in the past 12 months. As a consequence, many patients are at risk for medical identity theft.

Exploits of existing software vulnerabilities and web-borne malware attacks are the most common security incidents. According to 78 percent of respondents, the most common security incident is the exploitation of existing software vulnerabilities more than three months old. A close second, according to 75 percent of respondents, are web-borne malware attacks. This is followed by exploits of existing software vulnerabilities less than three months old (70 percent of respondents), spear phishing (69 percent of respondents) and lost or stolen devices (61 percent of respondents).

How effective are measures to prevent attacks? Forty-nine percent of respondents say their organizations experienced situations when cyber attacks have evaded their intrusion prevention systems (IPS) but many respondents (27 percent) are unsure. Thirty-seven percent of respondents say their organizations have experienced cyber attacks that evaded their anti-virus (AV) solutions and/or traditional security controls but 25 percent of respondents are unsure. On average, organizations have an APT incident every three months. Only 26 percent of respondents say their organizations have systems and controls in place to detect and stop advanced persistent threats (APTs) and 21 percent are unsure.

On average, over a 12-month period, organizations represented in this research had an APT attack about every 3 months (3.46 APT-related incidents in one year). Sixty-three percent of respondents say the primary consequences of APTs and zero day attacks were IT downtime, followed by the inability to provide services (46 percent of respondents), which create serious risks in the treatment of patients. Forty-four percent of respondents say these incidents resulted in the theft of personal information.

DDoS attacks have cost organizations on average $1.32 million in the past 12 months. Thirty-seven percent of respondents say their organization experienced a DDoS attack that caused a disruption to operations and/or system downtime about every four months, at an average cost of $1.32 million. The largest cost component is lost productivity, followed by reputation loss and brand damage.

Respondents are pessimistic about their ability to mitigate risks, vulnerabilities and attacks across the enterprise. Only 33 percent of respondents rate their organizations’ cybersecurity posture as very effective. The primary challenges to becoming more effective are a lack of collaboration with other functions (76 percent of respondents), insufficient staffing (73 percent of respondents), not enough money and not considered a priority (both 65 percent of respondents).

Organizations are evenly divided in the deployment of an incident response plan. Fifty percent of respondents say their organization has an incident response plan in place. Information security and corporate counsel/compliance are the individuals most involved in the incident response process, according to 40 percent of respondents and 37 percent of respondents, respectively.

Technology poses a greater risk to patient information than employee negligence. The majority of respondents say legacy systems (52 percent of respondents) and new technologies and trends such as cloud, mobile, big data and the Internet of Things are increasing vulnerability and threats to patient information. Respondents are also concerned about the impact of employee negligence (46 percent of respondents) and the ineffectiveness of business associate agreements to ensure the security of patient information (45 percent of respondents).

System failures are the security threat healthcare organizations worry most about. Seventy-nine percent of respondents say this is one of the top three threats facing their organizations, followed by 77 percent of respondents who cite cyber attackers and unsecure medical devices. Employee-owned mobile devices in healthcare settings are also considered a significant threat by 76 percent of respondents. Once again, respondents are more concerned about technology risks than employee negligence or error.

Hackers are most interested in stealing patient information. The most lucrative information for hackers can be found in patients’ medical records, according to 81 percent of respondents. This is followed by patient billing information (64 percent of respondents) and clinical trial and other research information (50 percent of respondents).

Healthcare organizations need a healthy dose of investment in technologies. On average, healthcare organizations represented in this research are spending $23 million on IT, and an average of 12 percent is allocated to information security. Since an average of $1.3 million is spent annually just to deal with DDoS attacks, the business case can be made to increase technology investments to reduce the frequency of successful attacks.

Most organizations are measuring the effectiveness of technologies deployed. At this time, 51 percent of respondents say their organizations are measuring the effectiveness of investments in technology to ensure they achieve their security objectives. The technologies considered most effective are identity management and authentication (80 percent of respondents) and encryption for data at rest (77 percent of respondents).

There is much more to the report, which you can download for free here.

Worried about the wrong thing: Hospital hacks show privacy, HIPAA might be dangerous to our health

Bob Sullivan

A few years ago, my long-time, elderly, live-alone neighbor was taken away in an ambulance.  I wasn’t home and heard about it second-hand.  At first, I had no idea how serious it was or even where he was taken, but I was really concerned. So I started calling local hospitals to ask if he’d been admitted.  You can probably guess how that worked out for me.

I was stonewalled at every turn. Even when I said I might be the only one who would call about him, that I was concerned he had no nearby next of kin, I got nowhere. I was fully HIPAA’d out.

Eventually, I talked to local police who tipped me off that he had been brought to a nearby hospital. I called them again.

“Not to be morbid, but can I even confirm that he’s still alive?” I pleaded.

“Due to patient privacy, we cannot divulge anything,” I was told.

Now, you probably know I care about privacy as much as the next person, but if my friend and neighbor was dying in a hospital bed, I was hell-bent on making sure he didn’t die without knowing at least someone cared about him. And this seemed cruel to me.

I called a few more times.  I finally lucked out and got to someone who, from her voice, sounded quite a bit older. Maybe even a volunteer. She heard me out.

“You didn’t hear it from me,” I recall her saying. “But he’s recovering from brain surgery. He probably had a stroke.”

I’m happy to tell you that I went to see my neighbor a few times during the next several weeks, and after a long recovery, he’s actually doing really well.

I tell you all this because I am worried that situations like these are really helping hackers.

Perhaps you’ve heard about the rash of hospital and health care systems being attacked by ransomware.  In the Washington D.C. area, a chain named MedStar was reduced to performing nearly all tasks on paper by a virus that locked all its files and demanded payment to unlock them.  The problem is so serious that U.S. and Canadian authorities jointly issued a warning about ransomware on March 31, calling attention to attacks on hospitals.

What does this have to do with HIPAA, or my neighbor’s stroke?  It shows we are worrying about the wrong things.

All of us have been HIPAA’d at some point. We’ve felt the wrath of the Health Insurance Portability and Accountability Act, enacted in 1996. Want a yes-or-no answer to a simple question from your doctor? You can’t get an email from her or him. You have to log in to a server that will probably reject the first five passwords you enter and then force you to a reset page, and half the time you’ll give up before you find out that, yes, you should take that pill with food.

There’s a saying in the geek world that “compliance is a bad word in security.” Walk into any health care facility and you’ll immediately get the sense that everyone from doctors to nurses to cleaning staff is TERRIFIED to violate HIPAA. On the other hand, I’ve been told by someone who has worked on a recent hospital attack that health facilities routinely are five or even 10 years behind on installing security patches.

Geoff Gentry, a security analyst with Independent Security Evaluators, puts it this way:

“We are defending the wrong asset,” he told me. “We are defending patient records instead of patient health.”

If someone steals a patient record, sure, they can do damage. They can perhaps mess up a patient’s credit report. But if someone hacks and alters a patient record, the consequences can be much more dire.

“It could be life or death,” he said.

Gentry was part of a team from Independent Security Evaluators that reviewed hospital security at a set of facilities three months ago in the Baltimore/Washington area.  The timing couldn’t have been better.  The message couldn’t be more important.

“For almost two decades, HIPAA has been ineffective at protecting patient privacy, and instead has created a system of confusion, fear, and busy work that has cost the industry billions. Punitive measures for compliance failures should not disincentivize the security process, and healthcare organizations should be rewarded for proactive security work that protects patient health and privacy,” the report says. “(HIPAA has) not been successful in curtailing the rise of successful attacks aimed at compromising patient records, as can be seen in the year over year increase in successful attacks. This is no surprise however, since compliance rarely succeeds at addressing anything more than the lowest bar of adversary faced, and so long as more and better adversaries come on to the scene, these attempts will continue to fail.”

In the test, Independent Security Evaluators found issues that ran the gamut from unpatched systems to critical hospital computers left on, and logged in, when patients are left alone in examination rooms.  A typical problem: Aging computers designated for a single task that are left untouched for months or even years, missing critical security updates.

Larry Ponemon, who runs a privacy consulting firm, was an adviser on that project. His assessment is equally blunt.

“Being HIPAA compliant has become almost like a religion,” he says. “The reality is that being compliant with HIPAA doesn’t get you really far.”

To be clear: The report didn’t uncover lazy IT workers playing video games while IT infrastructure crumbles around them. Nor did it find uncaring doctors, nurses, or even administrators. To the contrary, it found haggard security professionals desperately trying to keep up with security issues, and generally falling hopelessly behind as their attention is constantly redirected to paranoia over compliance issues.

“A lot of companies have made poor investment decisions in security. They are doing things that are not diminishing their risk,” Ponemon, who runs The Ponemon Institute, said. (NOTE: Larry Ponemon and I have a joint project on privacy issues, a newsletter called The Ponemon Sullivan Privacy Report.)

Hackers are devoted copycats, so we know more attacks on hospitals are coming. At the moment, these attacks seem to have been limited to administrative systems, and the impacted health care facilities say patient care was unaffected. (I did interview a D.C.-area patient who said two doctors were unable to share his patient files, leading to unnecessary delay and expense).

It’s easy to imagine far worse outcomes, however.  Gentry speculated that hackers could attack a specific patient and extort him or her.  Ponemon talked about attacks on pacemakers or other digitally-connected devices that control patient health.

“These sound like they are science fiction, but hospitals are part of the Internet of Things,” he said.  “And there doesn’t seem to be a plan to manage the security risk.”

The plan, Gentry says, has to involve righting the regulatory ship and letting hospitals and health care facilities worry about the right things.

“We need to take a lot of this bandwidth we are appropriating to compliance and use that bandwidth on security and patient health,” he said.

And we’d better start soon. Because we’ve given the bad guys a pretty sizable head start while we were distracted by Herculean efforts to protect my neighbor from me.

Two-thirds of security pros waste a ‘significant’ amount of time chasing false positives

Larry Ponemon

We are pleased to present the findings of The State of Malware Detection & Prevention sponsored by Cyphort. The study reveals the difficulty of preventing and detecting malware and advanced threats. The IT function also seems to lack the information and intelligence necessary to update senior executives on cybersecurity risks.

We surveyed 597 IT and IT security practitioners in the United States who are responsible for directing cybersecurity activities and/or investments within their organizations. All respondents have a network-based malware detection tool or are familiar with this type of tool.

Getting malware attacks under control continues to be a challenge for companies. Some 68 percent of respondents say their security operations team spends a significant amount of time chasing false positives. However, only 32 percent of respondents say their teams spend a significant amount of time prioritizing alerts that need to be investigated.

Despite such catastrophic data breaches as Target, cyber threats are not getting the appropriate attention from senior leadership they deserve. As shown in the findings of this research, respondents say they do not have the necessary intelligence to make a convincing case to the C-suite about the threats facing their company.

The following findings further reveal the problems IT security faces in safeguarding their companies’ high value and sensitive information.

Companies are ineffective in dealing with malware and advanced threats. Only 39 percent of respondents rate their ability to detect a cyber attack as highly effective, and similarly only 30 percent rate their ability to prevent cyber attacks as highly effective. Respondents also say their organizations are doing poorly in prioritizing alerts and minimizing false positives. As mentioned above, a significant amount of time is spent chasing false positives but not prioritizing alerts.

Most respondents say C-level executives aren’t concerned about cyber threats. Respondents admit they do not have the intelligence and necessary information to effectively update senior executives on cyber threats. If they do meet with senior executives, 70 percent of respondents say they report on these risks to C-level executives only on a need-to-know basis (36 percent of respondents) or never (34 percent of respondents).

Sixty-three percent of respondents say their companies had one or more advanced attacks during the past 12 months. On average, it took 170 days to detect an advanced attack, 39 days to contain it and 43 days to remediate it.

Only a fraction of malware alerts are investigated, and many are false positives. On average, 29 percent of all malware alerts received by the security operations team are investigated, and an average of 40 percent are determined to be false positives. Only 18 percent of respondents say their malware detection tool provides a level of risk for each incident.

Do organizations reimage endpoints based on malware detected in the network? More than half (51 percent) of respondents say their organization reimages endpoints based on malware detected in the network. An average of 33 percent of endpoint re-images or remediations are performed without knowing whether it was truly infected. The most effective solutions for the remediation of advanced attacks are network-based sandboxing and network behavior anomaly analysis.

Download and read the rest of the report.

Two-thirds of security pros waste a 'significant' amount of time chasing false positives

Larry Ponemon

Larry Ponemon

We are pleased to present the findings of The State of Malware Detection & Prevention sponsored by Cyphort. The study reveals the difficulty of preventing and detecting malware and advanced threats. The IT function also seems to lack the information and intelligence necessary to update senior executives on cybersecurity risks.

We surveyed 597 IT and IT security practitioners in the United States who are responsible for directing cybersecurity activities and/or investments within their organizations. All respondents have a network-based malware detection tool or are familiar with this type of tool.

Getting malware attacks under control continues to be a challenge for companies. Some 68 percent of respondents say their security operations team spends a significant amount of time chasing false positives. However, only 32 percent of respondents say their teams spend a significant amount of time prioritizing alerts that need to be investigated.

Despite such catastrophic data breaches as Target, cyber threats are not getting the appropriate attention from senior leadership they deserve. As shown in the findings of this research, respondents say they do not have the necessary intelligence to make a convincing case to the C-suite about the threats facing their company.

The following findings further reveal the problems IT security faces in safeguarding their companies’ high value and sensitive information.

Companies are ineffective in dealing with malware and advanced threats. Only 39 percent of respondents rate their ability to detect a cyber attack as highly effective, and similarly only 30 percent rate their ability to prevent cyber attacks as highly effective. Respondents also say their organizations are doing poorly in prioritizing alerts and minimizing false positives. As mentioned above, a significant amount time is spent chasing false positives but not prioritizing alerts.

Most respondents say C-level executives aren’t concerned about cyber threats. Respondents admit they do not have the intelligence and necessary information to effectively update senior executives on cyber threats. If they do meet with senior executives, 70 percent of respondents say they report on these risks to C-level executives only on a need-to-know basis (36 percent of respondents) or never (34 percent of respondents).

Sixty-three percent of respondents say their companies had one or more advanced attacks during the past 12 months. On average, it took 170 days to detect an advanced attack, 39 days to contain it and 43 days to remediate it.

Many malware alerts go uninvestigated, and many that are investigated prove to be false positives. On average, only 29 percent of all malware alerts received by a security operations team are investigated, and an average of 40 percent of those investigated are determined to be false positives. Only 18 percent of respondents say their malware detection tool provides a level of risk for each incident.

Do organizations reimage endpoints based on malware detected in the network? More than half (51 percent) of respondents say their organization reimages endpoints based on malware detected in the network. An average of 33 percent of endpoint reimages or remediations are performed without knowing whether the endpoint was truly infected. The most effective solutions for the remediation of advanced attacks are network-based sandboxing and network behavior anomaly analysis.

Download and read the rest of the report.

Car hacking worries FBI, too; and reports of keyless entry hacking won’t go away

Bob Sullivan

We know that Americans are concerned about their cars being hacked.  We also know that some consumers believe criminals are “hacking” into their parked cars and committing “snatch and grab” crimes using devices that simulate newfangled keyless entry systems.

Now, we know the FBI is worried about car hacking, too. The agency, along with the National Highway Traffic Safety Administration, issued a bold warning to consumers and manufacturers last week.

“The FBI and NHTSA are warning the general public and manufacturers – of vehicles, vehicle components, and aftermarket devices – to maintain awareness of potential issues and cybersecurity threats related to connected vehicle technologies in modern vehicles,” the warning says. “While not all hacking incidents may result in a risk to safety – such as an attacker taking control of a vehicle – it is important that consumers take appropriate steps to minimize risk.”

The FBI warning didn’t raise any new concerns; it mainly cites revelations of car hacking from 2015 as impetus for the warning. Still, the notice clearly demonstrates there is a level of activity around car hacking that should have everyone concerned. Drive down the highway sometime (as a passenger) and use your smartphone to see all the cars sending out Bluetooth connections around you, and you’ll get an idea of how connected our vehicles have become.

Meanwhile, consumers continue to report mysterious car break-ins around the country with no signs of forced entry, in situations when they swear their car doors were locked.  In Baltimore, a string of crimes following this pattern frustrated local residents earlier this year.

“What was strange to me was that, while I could tell it was broken into because my jacket was taken and they tossed through the stuff in the car, there were no signs of a breaking. No broken windows or anything,” said one driver. “I called and reported it mostly because I wanted to know how anyone could have gotten in if it was locked and no windows were broken. The officer said people have these things that basically interfere with newer cars electronic/fob locking systems and disable the alarms.”

The reports follow a persistent set of national stories around keyfob break-ins that began with a CNN report two years ago, and was followed by a New York Times story last year that casually suggested drivers store their car fobs in their freezers to keep them safe from hackers. (Notably, the story appeared in the Times’ Style section. The science was a little shallow).

There have also been vague warnings issued by some agencies around the world, like this notice from London Police, or this notice from the National Insurance Crime Bureau:

“The key-less entry feature on newer cars is a popular advancement that lets drivers unlock their cars with the simple click of a button on a key fob using radio frequency transmission. The technology also helps prevent drivers from locking their keys in the vehicle,” it says.  “Not surprisingly, thieves have found a way to partially outwit the new technology using electronic ‘scanner boxes.’ These small, handheld devices can pop some factory-made electronic locks in seconds, allowing thieves to get into the vehicle and steal personal items left inside.”

The existence of such a scanner box is very much in question, as are assertions that such a universal master key can be purchased for as little as $17; so is any notion that the crime is widespread. If any law enforcement agency has seized such a device, we are all waiting for it to be put on display.

How would such a magic device work?  By tricking your car into thinking your key fob is nearby and opening the door in response to a handle jiggle; or perhaps by amplifying the signal it sends out, or by intercepting that signal and copying it somehow. Or, hackers could “guess” the code for opening a car, if the code were poorly constructed. Here’s a great explanation of how it might work, and why it’s a major challenge unlikely to be used by street thugs.
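To see just how weak a "poorly constructed" code can be, here's a toy sketch in Python. Everything in it is made up for illustration (the `CODE_BITS` size, the `unlocks` and `brute_force` functions) and no real fob works this way, but it shows why a short, fixed unlock code falls to exhaustive guessing almost instantly:

```python
# A toy illustration (not any real fob protocol): suppose a fob transmitted
# a short, fixed unlock code. The entire code space can be enumerated in a
# fraction of a second on any laptop.

CODE_BITS = 16  # hypothetical 16-bit fixed code: only 65,536 possibilities


def unlocks(car_code: int, guess: int) -> bool:
    """The 'car' accepts any transmission that matches its stored code."""
    return guess == car_code


def brute_force(car_code: int) -> tuple[int, int]:
    """Try every possible code in order; return (code found, attempts made)."""
    for attempts, guess in enumerate(range(2 ** CODE_BITS), start=1):
        if unlocks(car_code, guess):
            return guess, attempts
    raise RuntimeError("code space exhausted without a match")


found, tries = brute_force(car_code=0xBEEF)
print(f"recovered code 0x{found:04X} after {tries:,} guesses")
# → recovered code 0xBEEF after 48,880 guesses
```

This is why real fobs use rolling codes, which change with every button press, or cryptographic challenge-response, which never transmits a reusable secret. Those defeat simple enumeration and replay, though cut-rate crypto can reopen the door.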

*Could* such a hack exist? Well, of course, says embedded device security expert Philip Koopman, a professor at Carnegie Mellon. Koopman actually worked on earlier-generation designs for key fobs.

“I would not at all be surprised if the Bad Guys have figured out that some manufacturer has bad security and how to attack it,” he said. “There is nothing really new here, other than general lack of people to admit that if you cut corners on security you will get burned, and an insistence by manufacturers and suppliers that known bad practices are adequate.”

In a blog post six years ago, he warned about the cost sensitivity of auto manufacturers (“No way could we afford industrial strength crypto.”)

Back to today, he offered this speculation on keyless entry attacks.

“It is (possible) that the manufacturers used bad crypto that is easy to hack, possibly via just listening to transmissions and doing off-line analysis. And it is possible to attack by getting near someone when they aren’t near their car and extracting the secrets from their car keys when it is in their pocket, then using that info to build a fake key. The technology is very similar to the US Passport biometric chips, so all the attacks for those are plausible here as well.”

The FBI offers the following advice to consumers: Keep your car software up to date, as you do with your PC; don’t modify your car software; be careful when connecting your car to third parties; and “be aware of who has physical access to your vehicle.”

That last bit of advice might work for people with long driveways, but the rest of us can’t do much about who might be able to walk by our cars on streets and in parking lots.

“While these tips may seem innocuous, they do show the limitations that law enforcement and consumers have in combating the car hacking threat,” said Tyler Cohen Wood, Cyber Security Advisor of Inspired eLearning.  “With the ever-increasing implementation of Internet of Things devices, including devices installed in newer cars, it’s a real challenge for law enforcement to identify different threat vectors associated with vehicle hacking.  There is no real standard for Internet of Things devices from a vehicle standpoint—each automobile manufacturer offers different types of devices as options in vehicles, from entertainment and navigation systems to remote ignition starting devices.  There is no industry standard for operating systems or security protocols on these devices, so it’s difficult for law enforcement to identify the specific threats that the devices pose to the public.”

So what else should you do?  Putting your car “keys” in the freezer is probably a bad idea; it will likely create more problems than it solves.  You might damage the very expensive key, for example, to mitigate a threat that is still perceived as low. But it wouldn’t hurt to take great care with where you leave the key. If you park directly in front of your front door, perhaps you shouldn’t leave the key right there.  Otherwise, read the local police blotter and talk to neighbors about street crime.

Most of all, make sure you really do lock your car doors.

 


Flipping the economics of hacker attacks

Larry Ponemon

How much does it cost technically proficient adversaries to conduct successful attacks, and how much do they earn? In Flipping the Economics of Attacks, sponsored by Palo Alto Networks, we look at the relationships between the time spent and compensation of today’s adversaries and how organizations can thwart attacks. As revealed in this research, while some attackers may be motivated by non-pecuniary reasons, such as those that are geopolitical or reputational, an average of 69 percent of respondents say they are in it for the money.

In this study, we surveyed 304 threat experts in the United States, United Kingdom and Germany. We built this panel of experts based on their participation in Ponemon Institute activities and IT security conferences, and they were assured their identity would remain anonymous. Twenty-one percent of respondents say they are very involved in the threat community, and 79 percent say they are involved. They are all familiar with present-day hacking methods.

Here are the key findings:

Attackers are opportunistic. Adversaries go after the easiest targets first. They won’t waste time on an attack that will not quickly result in a treasure trove of high-value information, according to 72 percent of respondents. Further, attackers will quit when the targeted company has a strong defense, according to 69 percent of respondents.

Cost and time to plan and execute attacks is decreasing. According to 53 percent of respondents, the total cost of a successful attack has decreased, driving even more attacks across the industry. Similarly, 53 percent of respondents say the time to plan and execute an attack has decreased. Of these 53 percent of respondents who say it takes less time, 67 percent agree the number of known exploits and vulnerabilities has increased, 52 percent agree attacker skills have improved and 46 percent agree hacking tools have improved.

Increased usage of low-cost and effective toolkits drives attacks. Technically proficient attackers are spending an average of $1,367 for specialized toolkits to execute attacks. In the past two years, 63 percent of respondents say their use of hacker tools has increased and 64 percent of respondents say these tools are highly effective.

Time to deter the majority of attacks is less than two days. The longer an organization can keep the attacker from executing a successful attack, the stronger its ability to safeguard its sensitive and confidential information. The inflection point for deterring the majority of attacks is less than two days (40 hours), at which point more than 60 percent of all attackers move on to another target.

Adversaries make less than IT security professionals. On average, attackers earn $28,744 per year in annual compensation, which is about one-quarter of a cybersecurity professional’s average yearly wage.

Organizations with strong defenses take adversaries more than double the time to plan and execute attacks. The average number of hours a technically proficient attacker takes to plan and execute an attack against an organization with a “typical” IT security infrastructure is less than three days (70 hours). However, when the company has an “excellent” IT security infrastructure, the time more than doubles, to an average of slightly more than six days (147 hours).

Threat intelligence sharing is considered the most effective in preventing attacks. According to respondents, an average of 39 percent of all hacks can be thwarted because the targeted organization engaged in the sharing of threat intelligence with its peers.

Investments in security effectiveness can reduce successful attacks significantly. As an organization strengthens its security effectiveness, the ability to deter attacks increases, as shown in this report.

The following are recommendations to harden organizations against malicious actors:

  • Create a holistic approach to cyber security, which includes focusing on the three important components of a security program: people, process and technology.
  • Implement training and awareness programs that educate employees on how to identify and protect their organization from such attacks as phishing.
  • Build a strong security operations team with clear policies in place to respond effectively to security incidents.
  • Leverage shared threat intelligence in order to identify and prevent attacks seen by your peers.
  • Invest in next-generation technology that can prevent attacks, such as threat intelligence sharing, integrated security platforms and other advanced security technologies.

Where the presidential candidates stand on Snowden, surveillance

Bob Sullivan

What do the presidential candidates think of domestic intelligence collection — or spying on Americans, depending on your point of view?  What do they think of Ed Snowden?

We haven’t heard a lot about the NSA or Snowden during the noisy campaigns so far, and that’s a shame: all the air is being sucked out of the conversation by more trivial concerns, such as Donald Trump’s debate schedule. But all the candidates have spoken about domestic spying and about Snowden.

As we welcome election season proper, here’s a primer on the candidates’ views.

But first, a few notes. The most remarkable: Sen. Bernie Sanders voted against the original Patriot Act back in 2001 as a member of the House. He’s part of a very select group who did so.

Second, while some candidates have expressed a bit more sympathy for Snowden’s role as whistleblower, they’ve all called for him to face prosecution. Even Sanders.

REPUBLICANS

Marco Rubio

On Snowden: He “sparked conspiracy theories”

From the Atlantic: “We must also distinguish these reasonable concerns from conspiracy theories sparked by Edward Snowden. This man is a traitor who has sought assistance and refuge from some of the world’s most notorious violators of liberty and human rights.”

On domestic surveillance: (The Washington Post) Those who voted for the Freedom Act, like Ted Cruz, put America at risk by making it harder to gather intelligence.

Ted Cruz

On Snowden: His opinion seems to have grown harsher over time

In 2013, he said (TheHill.com): “If it is the case that the federal government is seizing millions of personal records about law-abiding citizens, and if it is the case that there are minimal restrictions on accessing or reviewing those records, then I think Mr. Snowden has done a considerable public service by bringing it to light.”

More recently, he said: “Today, we know that Snowden violated federal law, that his actions materially aided terrorists and enemies of the United States, and that he subsequently fled to China and Russia,” he continued. “Under the Constitution, giving aid to our enemies is treason.”

On surveillance: (The Guardian) Cruz has defended his Senate vote for the USA Freedom Act, which curtailed the NSA’s bulk telephone metadata collection program.

Donald Trump

On Snowden: He’s hinted that he’d lead a charge to return and execute Snowden.

“I think he’s a terrible traitor, and you know what we used to do in the good old days when we were a strong country? You know what we used to do to traitors, right?” Trump said on Fox. 

On surveillance: “I tend to err on the side of security, I must tell you,” he has said (TheHill.com). “I assume when I pick up my telephone people are listening to my conversations anyway, if you want to know the truth… It’s a pretty sad commentary.”

He also said he would be “fine” with restoring provisions of the Patriot Act (TheHill.com) to allow for bulk data collection.

DEMOCRATS

Hillary Clinton

On Snowden: He should ‘face the music’

(The Atlantic) “He broke the laws of the United States… He could have been a whistleblower, he could have gotten all the protections of a whistleblower. He chose not to do that. He stole very important information that has fallen into the wrong hands so I think he should not be brought home without facing the music.”

On surveillance:

Clinton voted for both the 2001 Patriot Act and the 2008 FISA Amendments that extended NSA data collection capabilities.  More on her views at The Atlantic.

Bernie Sanders

On Snowden: “I think Snowden played a very important role in educating the American public … he did break the law, and I think there should be a penalty to that,” Sanders said (HuffingtonPost.com). He went on to say that the role Snowden played in educating the public about violations of their civil liberties should be considered before he is sentenced. On the other hand, this mildly Snowden-sympathetic story is posted on Sanders’ Senate webpage.

On surveillance:

Sanders voted against the Patriot Act in 2001 as a member of the House of Representatives. Later, in the Senate, he voted against the 2008 FISA Amendments.

 

Exchanging Cyber Threat Intelligence: There Has to Be a Better Way

Larry Ponemon

Our second annual study on Exchanging Cyber Threat Intelligence: There Has to Be a Better Way reveals interesting trends in how organizations are participating in initiatives or programs for exchanging threat intelligence with peers, industry groups, IT vendors and government.

According to the 692 IT and IT security practitioners surveyed, there is more recognition that the exchange of threat intelligence can improve an organization’s security posture and situational awareness. However, concerns about trust in the sources of intelligence and timeliness of the information continue to be a deterrent to participation in such initiatives.

Forty-seven percent of respondents say their organization had a material security breach involving an attack that compromised their networks or enterprise systems. The attack could have been external (e.g. a hacker), internal (e.g. a malicious insider) or both. Most respondents (65 percent) say threat intelligence could have prevented or minimized the consequences of the attack.

Following are key research takeaways:

Threat intelligence is essential for a strong security posture. Seventy-five percent of respondents who are familiar with and involved in their company’s cyber threat intelligence activities or processes believe gathering and using threat intelligence is essential to a strong security posture.

Potential liability and lack of trust in sources of intelligence keep some organizations from participating. Organizations that only partially participate cite the potential liability of sharing (62 percent of respondents) and a lack of trust in the sources of intelligence (60 percent of respondents). However, more respondents believe there is a benefit to exchanging threat intelligence.

Organizations rely upon peers and security vendors for threat intelligence. Sixty-five percent of respondents say they engage in informal peer-to-peer exchange of information, and 45 percent use a vendor threat exchange service. IT vendors and peers are also considered to provide the most actionable information. Law enforcement and government officials are not often used as a source for threat intelligence.

Threat intelligence needs to be timely and easy to prioritize. Sixty-six percent of respondents who are only somewhat or not satisfied with current approaches say it is because the information is not timely and 46 percent complain the information is not categorized according to threat type or attacker.

Organizations are moving to a centralized program controlled by a dedicated team.  A huge barrier to effective collaboration in the exchange of threat intelligence is the existence of silos. Centralizing control over the exchange of threat intelligence is becoming more prevalent and might address the silo problem.

I hope you will download the full report.