
Cardinals’ ‘hacker’ gets nearly four years in jail (for ‘cheating’ in baseball?) — don’t you be next

Bob Sullivan


Baseball has long celebrated cheating, but electronic cheating just sent a former team front-office worker to prison for nearly four years.

Former St. Louis Cardinals scouting director Chris Correa, who earlier pled guilty to using old passwords to access a former team’s scouting database, was sentenced to 46 months in jail on Monday. Correa broke into the Houston Astros’ computer systems repeatedly, stealing data. He had previously worked for the Astros.

Correa has been dubbed a hacker by sports media, but he simply made educated guesses to break into his old team’s computer database, mainly to download scouting intelligence that might help the Cardinals gain insight into players the Astros wanted to draft or trade for.

The long sentence was tied to the economic loss “suffered” by the Astros…and here things get confusing. According to STLToday.com, federal prosecutors essentially calculated how much money the Astros spent developing the data in their player database.

Assistant U.S. Attorney Michael Chu, who handled the hearing, listed the formula used to arrive at $1.7 million.

“But since much of the data that we looked at focused on the 2013 draft, what we did was we took the number of players that he looked at by 200 and we divided that by the number of players that were eligible to be drafted that year, and we multiplied that times the scouting budget of the Astros that year. That comes to $1.7 million,” he said.

That kind of loss meant a sentence of 36-48 months, according to federal guidelines.

That kind of jail time sounds like a lot for what some might consider the equivalent of stealing a third-base coach’s signs…particularly when you hear about rapists getting 6-month sentences…but it is not out of line with many computer criminal punishments.

There has long been debate about fairness in hacker sentencing, a debate that reached fever pitch after Aaron Swartz was prosecuted for “hacking” research archives, faced the threat of decades in prison, and ultimately took his own life.

Again, Correa is no hacker.  When I talked to Morey Haber, vice president of technology at BeyondTrust, he sharply defended the sentence.

“Yes, there is a certain amount of cheating that goes on (in sports), but that’s during the game,” he said. “This is corporate espionage. It’s no different from hacking a bank…It’s no different than if you went from Lockheed Martin to Northrup Grumman (and hacked into your old employer)….It’s not acceptable and courts are sending a strong message.”

Whatever you feel about Correa’s sentence (and the hanging question of whether he could have been the only one who knew about all this), there are three really important lessons to learn from the Cardinals hack.

First, Correa actually told the judge during a hearing that he started breaking into Astros computers because he was afraid they were doing the same thing to him.  That may or may not be true. But “hacking back,” however tempting, is a crime. And it can steal several years from your life.

Second, using an old password to log into your old company — or slight variations of that — might seem like a fairly innocent thing to do. Maybe you forgot a contact phone number, or there’s a document (you wrote!) that you’d like to see one more time.  This kind of “hacking” can feel like no crime at all. It’s just a few keystrokes.

Doing that can also cost you years of your life.

Finally, a word to you Astros-like companies out there.  Passwords can be easily guessed.  And they can be really easily guessed by former employees who know the password habits of your current staff.  Look at this section of the court transcript that describes the ‘hack.’

“It was based on the name of a player who was scrawny and who would not have been thought of to succeed in the major leagues, but through effort and determination he succeeded anyway. So this user of the password just liked that name, so he just kept on using that name over the years. … Kind of like Magidson123… Or Magidson1/2,1/4,1/3.”

Have a smarter authentication system than that. At least change the indicator once in a while. (That’s a baseball joke.)
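The pattern in that transcript is exactly what attackers automate. As a hedged illustration (the function name and suffix list below are my own invention, not anything from the case), a few lines of Python can enumerate dozens of plausible variants of a known base word, which is why a colleague who remembers your old password style barely has to guess at all:

```python
def password_variants(base: str):
    """Enumerate the kinds of tweaks a former coworker might try first:
    re-capitalization plus commonly appended digits and punctuation."""
    suffixes = ["", "1", "12", "123", "!", "2013", "2014", "2015", "2016"]
    # dict.fromkeys deduplicates while preserving order
    for cased in dict.fromkeys([base, base.lower(), base.capitalize()]):
        for suffix in suffixes:
            yield cased + suffix

# "Magidson" is the stand-in name used in the court transcript above.
guesses = list(password_variants("Magidson"))
```

Eighteen guesses from one remembered word; a real cracking tool's rule list would produce thousands.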

Risky business: How company insiders put high value information at risk

Larry Ponemon


Ponemon Institute is pleased to present the results of Risky Business: How Company Insiders Put High Value Information at Risk, sponsored by Fasoo. The purpose of this study is to understand what activities put business-critical information at risk in the workplace.

Based on the findings of this research, employees and other insiders often lack the information, conscientiousness and guidance needed to make intelligent decisions about the information they have access to and share. In fact, companies are more confident they can stop external attackers from accessing confidential information than their own employees and contractors.

We surveyed 637 IT and IT security practitioners who are familiar with their organization’s approach to securing confidential information contained in documents and files. All organizations represented in this research use document and file-level security tools. In the context of this study, high value information could be trade secrets, new product designs, merger and acquisition activity, confidential business and financial information, and employee information. The loss or theft of such information would be catastrophic for a company and affect its sustainability.

Safeguarding high value information in organizations is a two-way street. Employees need to be responsible and follow data protection policies and safeguards in place. In turn, companies need to have the tools, expertise and governance practices to protect sensitive and confidential information.

According to the study, the majority of organizations represented in this research (56 percent of respondents) say they do not educate employees on the protection of documents and files containing confidential information. In addition, most companies do not conduct an audit to determine if the use and sharing of confidential documents and files are in compliance with regulations and policies. Those companies that did conduct an audit discovered deficiencies in their document or file security practices.

Very few organizations are prepared to stop the leakage of high value information. Only 27 percent of respondents say they have the ability to restrict the sharing of confidential documents and files among employees and only 36 percent believe they can restrict the sharing of files with third parties. Similarly, 28 percent of respondents say their organizations have the ability to manage and control employee access to confidential documents and files.

The following are key takeaways from this report, organized by topic:

  • High value information is at risk
  • The challenge of plugging the leaks of high value information
  • Reverse the insider risk

High value information is at risk

Company insiders cause data breaches. The primary cause of data breaches experienced by companies in this study was the careless employee (56 percent of respondents), followed by lost or stolen devices (37 percent of respondents) and system glitches (28 percent of respondents). In contrast, only 22 percent of respondents blame external attackers, and only 17 percent blame malicious or criminal insiders.

Companies lack the technologies to detect company insider risk. Sixty-eight percent of respondents say they do not know where their confidential information is located and 61 percent of respondents say their organizations do not have visibility into what confidential documents and files are used and/or shared among employees.

Technologies focus on the perimeter and not on preventing access to unencrypted files. The primary enabling security technologies used in the document and file collaboration environment are identity and access management tools or two-factor authentication. Far fewer organizations are using technologies to manage encryption keys so only the business can access unencrypted files. Enterprise file sharing solutions and technologies that enable organizations to obtain data location when using cloud services are not used frequently.

The sales department and human resources are most likely to put high value information at risk. The sales function and human resources pose the greatest risk to both structured and unstructured information assets. C-level executives also pose great risk to unstructured information assets. The research and development function is the most careful in protecting both structured and unstructured data.

Employees use document and file sharing applications vulnerable to data leakage. Fifty-eight percent of respondents say employees use free versions of consumer file sync and share applications. Only 36 percent of respondents say employees use enterprise-grade file sharing on private cloud.

Education and policies are not in place to provide guidance on appropriate access and sharing practices. Fifty-six percent of respondents say their organizations do not educate employees on the protection of documents and files containing confidential information and 50 percent of respondents say they do not have a policy for the acceptable use of document and cloud or Web-based file sharing applications by employees.

The sharing of files and documents is unsecured. Sixty-nine percent of respondents say files and documents are shared using unencrypted email and 58 percent of respondents say they share files using a cloud-based, commercial file-sharing tool. Only 30 percent of respondents say they use encrypted email and 31 percent of respondents use file transfer protocol (FTP).

Both company-assigned and employee-owned mobile devices are used to access and share confidential documents and files. Only 29 percent of respondents say their organization restricts the use of company-assigned mobile devices such as smartphones and tablets from accessing and sharing confidential documents and files with other employees and third parties. Fifty-four percent of respondents say their organization restricts the use of employee-owned (BYOD) mobile devices such as smartphones and tablets to access and/or share confidential documents and files with others.

Audits are rarely conducted, but they do reveal security deficiencies. Only 23 percent of respondents say their organizations conduct an audit to determine if the use and sharing of confidential documents and files are in compliance with regulations and policies. However, 69 percent of respondents say the audits reveal security issues that need to be addressed.

The challenge of plugging the leaks of high value information

Organizations get low scores for their ability to stop a potential data breach by employees and third parties. Only 41 percent of respondents say their organizations are highly effective in preventing the leakage of confidential documents and files by careless employees and 43 percent are highly effective in preventing the leakage of confidential documents and files by third parties such as vendors and business partners.

There is no clear responsibility for securing documents and files with confidential information. According to 37 percent of respondents, no one person in their organization has ultimate authority for ensuring the security of confidential information in documents and files. Chief information officers and end users are each named as responsible by 35 percent of respondents. Only 18 percent of respondents say the chief information security officer is responsible.

Organizations struggle to determine the appropriate level of confidentiality of documents and files. Only 17 percent of respondents rate their organizations as highly effective in determining the appropriate level of confidentiality of documents and files. Typically, organizations determine confidentiality by data type (71 percent of respondents), policies (65 percent of respondents) or data usage (59 percent of respondents). Only 13 percent of respondents say they determine confidentiality by who has access to the document, and only 16 percent by a content management system.

Stopping unauthorized access is a challenge for companies. Only 15 percent of respondents say their organizations are highly effective in setting employee/user permissions to access confidential documents and files and only 17 percent of respondents say they are successful in curtailing the use of unapproved/insecure document and file collaboration tools.

Reverse the insider risk

Company insiders frequently do stupid things with confidential information. According to 78 percent of respondents, employees frequently do not delete confidential documents or files that are no longer needed, and 51 percent of respondents say employees frequently share files and documents not intended for them. Forty-four percent say very often employees forward confidential files or documents to individuals not authorized to receive them.

Organizations are willing to allow workers to keep confidential information on their home computers and devices. Almost half of respondents (48 percent) say they believe there are situations when it is acceptable for employees to transfer or retain confidential documents or files on a home computer or personally owned tablet or smartphone. Surprisingly, respondents consider a lack of policy enforcement an acceptable reason to do so.

Who owns the company’s proprietary and high value information? If an employee, who is a software programmer, develops applications for a client company and then reuses the same source code in projects for other companies, does that employee have some level of ownership in the work and invention? Fifty percent of respondents say they do. However, if the employee does not receive advance permission from the client company to reuse the source code, 42 percent of respondents say this is a serious infraction and 19 percent of respondents say it is a minor infraction.

The unethical use of a competitor’s proprietary information occurs frequently. Forty-seven percent of respondents say they are aware of situations when recently hired employees bring confidential documents from former employers that are a competitor of their organization. Thirty-seven percent of respondents say they believe this happens very frequently (22 percent of respondents) or frequently (15 percent of respondents). However, 45 percent of respondents do not view the use of a competitor’s business confidential information as an infraction against the company.

To access the full report, click here:
http://en.fasoo.com/Ponemon-Risky-Business-How-Company-Insiders-Put-High-Value-Information-at-Risk

State official: Please stop falling for ransomware attacks — you're costing the taxpayers big bucks

Bob Sullivan


How bad has the ransomware problem become?  The state auditor of Ohio held a press conference yesterday because local government agencies keep falling for ransomware attacks. And a firm that tracks domain activity found a 3,500% increase in ransomware-related domain name registrations in the past quarter.  Hackers love to cut and paste, so imitation is the surest sign that something is working.

Recall the high-profile, alarming ransomware attacks earlier this year on hospitals.  These “your money or your data” crimes can do a lot of damage quickly, and confused organizations brought to their knees by missing mission-critical data often pay up.  Of course, smaller organizations with fewer IT resources are at greater risk.

Here’s what’s going on in Ohio.  Auditor of State Dave Yost issued a warning on Thursday to treasurers, fiscal officers and others responsible for spending public money that cybercrimes targeting government are “on the rise.” And he offered these examples.

  • An investigation continues in an eastern Ohio county after the county’s court data was attacked by ransomware on May 31. A virus had encrypted the court’s data and hackers demanded $2,500 for the key to unlock the information. Because a recent copy of the data wasn’t available, the county agreed to pay the $2,500. (Note: Because the transaction is ongoing, we are not identifying the county.)
  • A similar ransomware attempt was made April 5 in Vernon Township (Clinton County). That cyberattack did not result in the payment of any ransom because the township’s data was backed up.
  • In Peru Township (Morrow County), the township fiscal officer’s computer began screeching on March 9 before a notice appeared on the screen advising that a solution was available by calling an 800 number. The township paid $200 to stop the attack.

In separate, non-ransomware incidents,  an employee at Big Walnut Local School District in Delaware County was tricked into issuing a check for $38,520 to a hacker. The money was recovered before it was lost. The Madison County Agricultural Society wasn’t as lucky; it was scammed out of $60,491 through someone posing as the IRS, collecting back taxes.

“We’ve all seen and heard about the criminals who try to steal our personal funds. These scammers would like nothing more than to get their sticky fingers on our tax dollars, too,” Yost said. “We need to be vigilant because they are becoming increasingly sophisticated in how they attempt to steal money through the internet.”

Yost is right.  Network security firm Infoblox reported last week that hackers were falling over each other to set up websites related to ransomware scams.  The firm tracks domain registrations as a way of monitoring the Internet for threats, and it says it found a 35-fold increase in newly observed ransomware domains from the fourth quarter of 2015.

“There is an old adage that success begets success, and it seems to apply to malware as in any other corner of life. In the first quarter of 2016, there were numerous stories in the news about successful ransomware attacks on both companies and consumers,” the firm said.  “We believe the larger cybercriminal community has taken notice.”

According to the FBI, ransomware victims reported costs of $209 million in the first quarter, compared to $24 million for all of 2015.

“Unless and until companies figure out how to guard against ransomware – and certainly not reward the attack – we expect it to continue its successful run,” Infoblox said.

Yost said all the crimes began with some variation of phishing, and urged all government employees to be on alert.

“The internet is the tool of choice for criminals, and we need to make it as difficult as possible for thieves to access community treasure chests,” Yost said.

The best way to do that, as Vernon Township showed above, is to keep good backups.
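What does a “good backup” actually look like? At minimum, a copy that is recent, offline from the machine that might get infected, and verifiable before you trust a restore. As a minimal sketch (the paths and folder layout are hypothetical, and a real agency would want versioned, off-site backups), a snapshot with a checksum manifest covers the verification piece:

```python
import hashlib
import shutil
import time
from pathlib import Path

def backup(src: Path, dest_root: Path) -> Path:
    """Copy a directory tree to a timestamped snapshot folder and record
    SHA-256 checksums, so a restore can be verified before it is trusted."""
    snapshot = dest_root / time.strftime("%Y%m%d-%H%M%S")
    shutil.copytree(src, snapshot)  # destination must not already exist
    lines = []
    for f in sorted(snapshot.rglob("*")):
        if f.is_file():
            digest = hashlib.sha256(f.read_bytes()).hexdigest()
            lines.append(f"{digest}  {f.relative_to(snapshot)}")
    (snapshot / "MANIFEST.sha256").write_text("\n".join(lines) + "\n")
    return snapshot

# Hypothetical usage:
# snap = backup(Path("/var/court/records"), Path("/mnt/offline-backups"))
```

The manifest means that after an attack you can confirm the restored files match what was backed up, rather than discovering, mid-crisis, that the backup was silently incomplete.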


Third-party risks, and why tone at the top matters so much

Larry Ponemon


Tone at the Top and Third Party Risk was sponsored by Shared Assessments and conducted by Ponemon Institute to understand the relationship between tone at the top and the minimization of third party risks. We surveyed 617 individuals who have a role in the risk management process in their organizations and are familiar with the governance practices related to third party risks.

A key takeaway from the research is that accountability for managing third party risk is dispersed throughout the organization. Not having one person or function with ownership of the risk is a serious barrier to achieving an effective third party risk management program.

In the context of this study, tone at the top is a term used to describe an organization’s control environment, as established by its board of directors, audit committee and senior management. The tone at the top is set by all levels of management and has a trickle-down effect on all employees of the organization. If management is committed to a culture and environment that embraces honesty, integrity and ethics, employees are more likely to uphold those same values. As a result, such risks as insider negligence and third party risk are minimized.

Participants in this research agree with this assessment. We asked respondents to rate the importance of tone at the top based on a scale of 1 = not important to 10 = very important. The very important responses (7+) are shown in Figure 1. As shown, 83 percent of respondents believe a positive tone is very important to minimizing business risks within their organization and 78 percent of respondents say it is very important to reducing risks in third party (supply chain) relationships.

A positive tone at the top is thought to provide the following benefits, according to respondents:

  • Reduces the risks of working with third parties that are not trustworthy (71 percent of respondents);
  • Incorporates such values as integrity, ethics and trustworthiness in relationships with third parties (66 percent of respondents); and
  • Increases employee and third party awareness of the importance of security, data protection and business resiliency (43 percent of respondents).

The following are key takeaways from the research:


  • Third party risk is considered serious and is increasing. Seventy-five percent of respondents agree that third party risk is serious. Further, 70 percent of respondents say the third party risk in their organization is significantly increasing (21 percent of respondents), increasing (20 percent of respondents) or is staying the same (29 percent of respondents).
  • Third party risk is increasing because of a changing threat landscape. Disruptive technologies such as the Internet of Things (IoT) and migration to the Cloud are expected to increase third party risk. Sixty percent of respondents believe IoT increases third party risk significantly (35 percent + 25 percent), and 68 percent of respondents believe migration to the Cloud will increase risk (36 percent + 32 percent).
  • Cyber attacks and the IoT are expected to have the most significant impact on an organization’s third party risk profile. Seventy-eight percent of respondents say cyber attacks will have a significant impact on the risk profile and 76 percent of respondents say the IoT will have a significant impact. Cloud computing, mobility and mobile devices and big data analytics will have a significant impact, according to 71 percent, 67 percent and 51 percent of respondents, respectively.
  • Despite the seriousness of third party risk, it is not a primary risk management objective. The top two risk management objectives are to minimize downtime (56 percent of respondents) and minimize business disruptions (37 percent of respondents). As discussed above, cyber attacks are expected to have a significant impact on the risk of third party relationships. However, only 27 percent of respondents say a top objective is to prevent cyber attacks. Further, only 8 percent of respondents say improvement of their organization’s relationship with business partners is a top risk management objective for their organizations.
  • The consequences of not managing third party risk can be costly. In the past 12 months, organizations represented in this research spent an average of approximately $10 million to respond to a security incident as a result of negligent or malicious third parties.
  • Third party risk management programs are mostly informal and not effective. As discussed previously, reducing third party risk is considered serious but very few respondents say improvement in third party relationships is a top risk management objective. Thus, the incentive among the various business functions to create a comprehensive program for risk management is low. Only 29 percent of respondents say their organizations have a formal program.
  • The lack of formal programs affects the ability to mitigate third party risk. Respondents were asked to rate the effectiveness of their organizations in mitigating or curtailing third party risk from 1 = not effective to 10 = very effective. Only 21 percent of respondents say their organization’s effectiveness in mitigating or curtailing third party risk is considered highly effective (7+ on the scale of 1 to 10).
  • No one function owns the third party risk management program in organizations represented in this study. Accountability for the third party risk management program is dispersed throughout the organization. Twenty-three percent of respondents say the compliance department is most responsible for managing third party risk and 17 percent of respondents say it is the information security function. Only 9 percent of respondents say risk management has ownership of the risk.
  • Most C-level executives are not engaged in their organization’s third party risk management process. Only 37 percent of respondents agree that the C-level executives in their organization believe they are ultimately accountable for the effectiveness of third party risk management. As a possible consequence of this lack of engagement, 50 percent of respondents do not believe the risk management process is aligned with business goals, which are most likely determined by senior management.
  • Boards of directors are not actively engaged in risk management activities. Similar to the perceived lack of accountability on the part of C-suite executives, only 40 percent of respondents say their boards of directors are significantly involved (17 percent) or have at least some involvement in overseeing risk management activities (23 percent).
  • If boards of directors are engaged, it is mostly to conduct reviews. Fifty-two percent of respondents say the board mainly reviews management’s analysis of the effectiveness of a risk assessment and 42 percent of respondents say the board reviews and approves plans to address any risk management or control weakness. Only 25 percent of respondents say they are actively working with management to establish the vision, risk appetite and overall strategic direction for third party relationships.

To read the full research, visit SharedAssessments.org

The day my bank, yet again, blocked me from my money for 'security' — and why two-factor tools aren't ready for prime time

Bob Sullivan


How can a bank – or any organization — become less secure in its attempts to become more secure?  Let me tell you how.

Security must do two things: Protect and enable.  If your security doesn’t enable people to do what they have to do, they will inevitably circumvent it, creating all sorts of exception conditions as they do. And that is the path to perdition (and hacking).

Security often fails because the people who design it are much better at throwing up roadblocks than at creating pathways.  Both are equally important if a security scheme is to work.

This month brought yet another story chronicling theft of millions of passwords by hackers, once again highlighting the importance of implementing “not-just-passwords security” at places that really matter.

But I’m about to turn off two-factor authentication at my bank, right at the moment when everyone seems hell bent to turn it on. Why?  Because it doesn’t make me safer if it doesn’t work; it just prevents me from accessing my money.

I’ve run into classic 21st Century Red Tape headaches with my bank recently as I try very hard to use its two-factor authentication scheme.  I often don’t like single-anecdote stories, but occasionally they illuminate larger problems so perfectly they are worth telling. So here goes.

A quick review:  Two-factor authentication adds a strong layer of security to a service by requiring two tests be met by a person seeking access — a debit card and a PIN code, for example, representing something you have and something you know.  Online banks and websites are slowly but surely nudging everyone towards various forms of two-factor authentication, because it really does make life harder for hackers.

Most of these two-factor forms involve use of smartphones, as they have become nearly ubiquitous. Log on to a website at a PC, confirm a code sent to your phone.  Something you have (the phone) and something you know (the password). Simple, but elegant, and far harder for bad guys to crack.
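
A minimal sketch of that flow on the server side, issuing a short-lived one-time code and checking it as the second factor, might look like the following Python. The in-memory store, the expiry window, and the function names are illustrative assumptions, not any bank's actual API:

```python
import secrets
import time

CODE_TTL = 300  # seconds a code stays valid (assumed window)

def issue_code(store: dict, user: str) -> str:
    """Generate a random 6-digit code and remember it with an expiry.
    In a real system this code would be texted to the user's phone."""
    code = f"{secrets.randbelow(10**6):06d}"
    store[user] = (code, time.time() + CODE_TTL)
    return code

def verify_code(store: dict, user: str, attempt: str) -> bool:
    """The second factor passes only if the code matches and hasn't expired.
    A code is single-use, so it is removed after any attempt."""
    entry = store.pop(user, None)
    if entry is None:
        return False
    code, expires = entry
    return attempt == code and time.time() < expires
```

Note the single-use removal: replaying an intercepted code fails, which is part of what makes this harder for bad guys than a static password.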

And it’s great, when it works. But what about when it doesn’t work?

Here’s a simple problem. Consumers get new phones all the time. If the code is tied to the physical handset, the code doesn’t work any longer. What then?

Turns out this can be a really vexing problem. (Readers of this column know why I had to get a new smartphone recently.)

I’ve been a USAA banking customer for decades. The financial services firm has ranked atop customer satisfaction surveys seemingly forever, and for good reason:  It really does take good care of members.

At least it did, until it tried to implement two-factor security. I try not to be hypocritical, and follow my own advice, so I turned on USAA’s flavor of two-factor pretty early on. It’s a solid design: A Symantec app loaded onto your smartphone offers a temporary token — a 6-digit code — that changes every 30 seconds. The token is tied to the physical handset. Only a person who knows your PIN and can access the token on that handset can log into the website. You can see all the layers of protection that creates.
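
That rotating 6-digit token is the classic shape of a time-based one-time password (TOTP), standardized in RFC 6238. Whether Symantec's app follows the RFC exactly is an assumption on my part, but a minimal Python version of the standard algorithm looks like this:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, now=None, digits=6, step=30):
    """RFC 6238 time-based one-time password using HMAC-SHA1."""
    key = base64.b32decode(secret_b32, casefold=True)
    # The moving factor is the number of 30-second steps since the Unix epoch.
    counter = int((time.time() if now is None else now) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation (RFC 4226): take 4 bytes at an offset given by the
    # low nibble of the last byte, and mask off the sign bit.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

Because the code depends only on a shared secret and the clock, a new handset produces valid codes only after the secret is re-enrolled on it, which is exactly the re-provisioning step that breaks down in the story below.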

Sure, it’s a tiny hassle to pull out the phone every time you want to log on to the website — a larger hassle if your phone battery is dead. But that’s a fair price to pay for security.

However, the hassle becomes immense when it comes time to change handsets. So immense that as I type this, I cannot access my bank…and have no idea when I will be able to do so. (UPDATE: I was able to fix my login woes 24 hours later.) And that’s happened twice to me in the past year. Why? Chiefly because USAA is not set up to deal with the problem of new handsets.

To review: When I tried to access the website it demanded a token from my phone — a token that was no longer valid because I had a new phone.  When I tried to use the phone’s app to access my accounts, USAA asked for a password because it didn’t recognize the phone.  I didn’t have a password, I had a token — an invalid token.  You get the picture.

All that is a predictable technology hiccup that’s not the end of the world.  The real problem came next.

A call to customer service seemed to be my last available option, but that was dismal, too.  At various times I wasn’t able to get through to customer service phone lines at all. What’s much worse, however, is what happened when I did get through.

People change phones roughly every two years, so this new-handset problem must come up often.  Yet it’s obvious to me USAA operators are not ready to handle the problem when consumers call.  Each time I have reached an operator, I had to spend a lot of time explaining the problem — and remember, I do this for a living.  On the first successful call today, the operator merely changed my mobile application login settings after putting me on hold for several minutes.  When I protested, she said she had to transfer me to a special department, and then the phone went dead.

After a second call and wait, the operator was sympathetic, but put me on hold quickly and wasted a lot of time trying to set me up with a new phone number.  It took a while before I could convince her that “new phone” meant “new handset” not “new number,” a mistake I will correct in future calls. We eventually agreed that all I needed was someone to turn off two-factor and issue me a temporary password so I could go in and re-establish the connection between my handset and my account.  But after another long hold, and transfers to two other operators, I was told that, sadly, they were having trouble issuing temporary passwords and asked if I could call back in an hour or so.

I’ve left out many steps in this saga.  At each stage, of course, I was subject to strict authentication questions. That’s fine — I was asking for a new password, after all.  But at the end of my fruitless journey through tech support, when I asked if I could somehow get express treatment when I called back just to find out if I could get a temporary password, I was told, “no.”  So I will have to, once again, convince a primary operator who I am, that I am having token problems and that I need a temporary password.  There is obviously no “token problem” script ready for my problem.

My experience last time was similar, so I know I am not just the victim of bad luck.

The last time this happened, I was sure to give the operator who finally liberated my account some specific feedback: there needs to be a tidy process for dealing with people who get new handsets.  Obviously, that hasn’t occurred. And so, the first thing I will do when I can access my account is disable the token. (I’ll use another form of two-factor.) While I am afraid of hackers, I’m more afraid of not being able to access my money because my bank has poorly implemented a security solution.

When I called USAA as a reporter to discuss my experience, the firm owned up to the challenges of implementing two-factor security.

“You’ve encountered an experience we are aware of,” said Mike Slaugh, Executive Director, Financial Crimes Prevention, at USAA. “What we’re working on here is a way to make that experience better. … Multi-factor authentication for us at USAA and the industry in general, it’s important.  (Making this experience better) is top of mind for us as we work to help members protect  themselves.”

USAA is hardly the only firm having trouble dealing with two-factor issues.  Independent security analyst Harri Hursti told me about the foibles consumers face when dealing with two-factor authentication that relies on text messages.

“The moment you start traveling, all bets are off. Text messages over roaming are far from reliable – they either are never delivered, or they experience regular delivery delays over 10-15 minutes, which are the most typical time-out limits on the websites,” he said. Hursti, who was in Portugal when I interviewed him, said he was late paying an electricity bill this month because of two-factor pain points.  “Basically, in order to do banking when travelling internationally, you need to start that by turning all security off. And yet you are knowingly getting into increased security risk environment.”

Gartner security analyst Avivah Litan says these kinds of implementation and customer service issues not only threaten adoption of two-factor security, they actually create more pathways for hackers.

“Two factor, in this case, actually weakens security – rather than strengthens it,” she said. “I always tell our clients that their security is only as strong as its weakest link and surely, when they disable two factor authentication on the account, they likely ask the account holder to verify their identity by answering those easily compromised challenge questions, which any criminal who can buy data on the dark web has access to.  Therefore this is an easy way for criminals to get access to your account.  So not only does two factor authentication without proper supporting processes serve to annoy and greatly inconvenience good legitimate customers, it also does little to keep the bad guys out for this and other reasons.”

As Litan is fond of saying, there’s a fallacy that “harder is better” in security.  It “doesn’t keep bad guys out, but it annoys good guys.”

Perhaps this problem isn’t *that* common yet, as uptake on two-factor is still relatively small (USAA acknowledged that, and it’s common across the industry). Don’t worry: With each password hack, more and more people will turn on two-factor.  If companies blow the implementation, consumers will just as quickly turn it off again.  And we might lose them for several years.

Protect and enable, or we’re all at greater risk.


Healthcare organizations are in the cross hairs of cyber attackers

Larry Ponemon

The State of Cybersecurity in Healthcare Organizations in 2016, sponsored by ESET, found that on average, healthcare organizations represented in this study have experienced almost one cyber attack per month over the past 12 months. Almost half (48 percent) of respondents say their organizations have experienced an incident involving the loss or exposure of patient information during this same period, but 26 percent of respondents are unsure.

The research reveals that healthcare organizations are struggling to deal with the same threats other industries face. According to 79 percent of respondents, system failures are the number one risk. The following threats are also considered serious: unsecure medical devices (77 percent of respondents), cyber attackers (77 percent of respondents), employee-owned mobile devices or BYOD (76 percent of respondents), identity thieves (73 percent of respondents) and mobile device insecurity (72 percent of respondents). Despite citing unsecure medical devices as a top security threat, only 27 percent of respondents say their organization has the security of medical devices as part of their cybersecurity strategy.

With cyber attacks against healthcare organizations growing increasingly frequent and complex, there is more pressure to refine cybersecurity strategies. Moreover, healthcare organizations have a special duty to secure data and systems against cyber hacks. The misuse of patient information and system downtime can put not only sensitive and confidential information at risk, but the lives of patients as well.

We surveyed 535 IT and IT security practitioners in a variety of healthcare organizations such as private and public healthcare providers and government agencies. Sixty-four percent of respondents are employed in covered entities and 36 percent of respondents in business associates. Eighty-eight percent of organizations represented in this study have a headcount of between 100 and 500.

[Figure 1: Ponemon Sullivan report chart, April 2016]

As Figure 1 shows, system failures, unsecure medical devices and cyber attackers top the list of threats healthcare organizations are struggling to address.

The following are key findings from this research:

Healthcare organizations experience monthly cyber attacks. Healthcare organizations experience, on average, a cyber attack almost monthly (11.4 attacks on average per year) as well as the loss or exposure of sensitive and confidential patient information. However, 13 percent are unsure how many cyber attacks they have endured. Almost half of respondents (48 percent) say their organization experienced an incident involving the loss or exposure of patient information in the past 12 months. As a consequence, many patients are at risk for medical identity theft.

Exploits of existing software vulnerabilities and web-borne malware attacks are the most common security incidents. According to 78 percent of respondents, the most common security incident is the exploitation of existing software vulnerabilities more than three months old. A close second, according to 75 percent of respondents, is web-borne malware attacks. This is followed by exploits of existing software vulnerabilities less than three months old (70 percent of respondents), spear phishing (69 percent of respondents) and lost or stolen devices (61 percent of respondents).

How effective are measures to prevent attacks? Forty-nine percent of respondents say their organizations experienced situations when cyber attacks have evaded their intrusion prevention systems (IPS) but many respondents (27 percent) are unsure. Thirty-seven percent of respondents say their organizations have experienced cyber attacks that evaded their anti-virus (AV) solutions and/or traditional security controls but 25 percent of respondents are unsure. On average, organizations have an APT incident every three months. Only 26 percent of respondents say their organizations have systems and controls in place to detect and stop advanced persistent threats (APTs) and 21 percent are unsure.

On average, over a 12-month period, organizations represented in this research had an APT attack about every 3 months (3.46 APT-related incidents in one year). Sixty-three percent of respondents say the primary consequences of APTs and zero day attacks were IT downtime, followed by the inability to provide services (46 percent of respondents), which create serious risks in the treatment of patients. Forty-four percent of respondents say these incidents resulted in the theft of personal information.

DDoS attacks have cost organizations on average $1.32 million in the past 12 months. Thirty-seven percent of respondents say their organization experienced a DDoS attack that caused a disruption to operations and/or system downtime about every four months, at an average cost of $1.32 million. The largest cost component is lost productivity, followed by reputation loss and brand damage.

Respondents are pessimistic about their ability to mitigate risks, vulnerabilities and attacks across the enterprise. Only 33 percent of respondents rate their organizations’ cybersecurity posture as very effective. The primary challenges to becoming more effective are a lack of collaboration with other functions (76 percent of respondents), insufficient staffing (73 percent of respondents), and not enough money and not being considered a priority (both 65 percent of respondents).

Organizations are evenly divided in the deployment of an incident response plan. Fifty percent of respondents say their organization has an incident response plan in place. Information security and corporate counsel/compliance are the individuals most involved in the incident response process, according to 40 percent of respondents and 37 percent of respondents, respectively.

Technology poses a greater risk to patient information than employee negligence. The majority of respondents say legacy systems (52 percent of respondents) and new technologies and trends such as cloud, mobile, big data and the Internet of Things are increasing vulnerabilities and threats to patient information. Respondents are also concerned about the impact of employee negligence (46 percent of respondents) and the ineffectiveness of business associate agreements to ensure the security of patient information (45 percent of respondents).

System failures are the security threat healthcare organizations worry most about. Seventy-nine percent of respondents say this is one of the top three threats facing their organizations, followed by 77 percent of respondents who say it is cyber attackers and unsecure medical devices. Employee-owned mobile devices in healthcare settings are also considered a significant threat by 76 percent of respondents. Once again, respondents are more concerned about technology risks than employee negligence or error.

Hackers are most interested in stealing patient information. The most lucrative information for hackers can be found in patients’ medical records, according to 81 percent of respondents. This is followed by patient billing information (64 percent of respondents) and clinical trial and other research information (50 percent of respondents).

Healthcare organizations need a healthy dose of investment in technologies. On average, healthcare organizations represented in this research are spending $23 million on IT, and an average of 12 percent is allocated to information security. Since an average of $1.3 million is spent annually just to deal with DDoS attacks, the business case can be made to increase technology investments to reduce the frequency of successful attacks.

Most organizations are measuring the effectiveness of technologies deployed. At this time, 51 percent of respondents say their organizations are measuring the effectiveness of investments in technology to ensure they achieve their security objectives. The technologies considered most effective are identity management and authentication (80 percent of respondents) and encryption for data at rest (77 percent of respondents).

There is much more to the report, which you can download for free here.

 

Worried about the wrong thing: Hospital hacks show privacy, HIPAA might be dangerous to our health

Bob Sullivan

A few years ago, my long-time, elderly, live-alone neighbor was taken away in an ambulance.  I wasn’t home and heard about it second-hand.  At first, I had no idea how serious it was or even where he was taken, but I was really concerned. So I started calling local hospitals to ask if he’d been admitted.  You can probably guess how that worked out for me.

I was stonewalled at every turn. Even when I said I might be the only one who would call about him, and that I was concerned he had no nearby next of kin, I got nowhere. I was fully HIPAA’d out.

Eventually, I talked to local police who tipped me off that he had been brought to a nearby hospital. I called them again.

“Not to be morbid, but can I even confirm that he’s still alive?” I pleaded.

“Due to patient privacy, we cannot divulge anything,” I was told.

Now you probably know I care about privacy as much as the next person, but if my friend and neighbor was dying in a hospital bed, I was hell-bent to make sure he didn’t die without knowing at least someone cared about him. And this seemed cruel to me.

I called a few more times.  I finally lucked out and got to someone who, from her voice, sounded quite a bit older. Maybe even a volunteer. She heard me out.

“You didn’t hear it from me,” I recall her saying. “But he’s recovering from brain surgery. He probably had a stroke.”

I’m happy to tell you that I went to see my neighbor a few times during the next several weeks, and after a long recovery, he’s actually doing really well.

I tell you all this because I am worried that situations like these are really helping hackers.

Perhaps you’ve heard about the rash of hospital and health care systems being attacked by ransomware.  In the Washington D.C. area, a chain named MedStar was reduced to performing nearly all tasks on paper by a virus that locked all its files and demanded payment to unlock them.  The problem is so serious that U.S. and Canadian authorities jointly issued a warning about ransomware on March 31, calling attention to attacks on hospitals.

What does this have to do with HIPAA, or my neighbor’s stroke?  It shows we are worrying about the wrong things.

All of us have been HIPAA’d at some point.  We’ve felt the wrath of the Health Insurance Portability and Accountability Act, enacted in 1996.  Want a yes or no answer to a simple question from your doctor?  You can’t get an email from her or him. You have to login to a server that will probably reject the first five passwords you enter and then force you to a reset page, and half the time you’ll give up before you find out that, yes, you should take that pill with food.

There’s a saying in the geek world that “compliance is a bad word in security.”  Walk into any health care facility and you’ll immediately get the sense that everyone from doctors to nurses to cleaning staff is TERRIFIED to violate HIPAA.  On the other hand, I’ve been told by someone who has worked on a recent hospital attack that health facilities routinely are five or even 10 years behind on installing security patches.

Geoff Gentry, a security analyst with Independent Security Evaluators, puts it this way:

“We are defending the wrong asset,” he told me. “We are defending patient records instead of patient health.”

If someone steals a patient record, sure, they can do damage. They can perhaps mess up a patient’s credit report. But if someone hacks and alters a patient record, the consequences can be much more dire.

“It could be life or death,” he said.

Gentry was part of a team from Independent Security Evaluators that reviewed hospital security at a set of facilities three months ago in the Baltimore/Washington area.  The timing couldn’t have been better.  The message couldn’t be more important.

“For almost two decades, HIPAA has been ineffective at protecting patient privacy, and instead has created a system of confusion, fear, and busy work that has cost the industry billions. Punitive measures for compliance failures should not disincentivize the security process, and healthcare organizations should be rewarded for proactive security work that protects patient health and privacy,” the report says. “(HIPAA has) not been successful in curtailing the rise of successful attacks aimed at compromising patient records, as can be seen in the year over year increase in successful attacks. This is no surprise however, since compliance rarely succeeds at addressing anything more than the lowest bar of adversary faced, and so long as more and better adversaries come on to the scene, these attempts will continue to fail.”

In the test, Independent Security Evaluators found issues that ran the gamut from unpatched systems to critical hospital computers left on, and logged in, when patients are left alone in examination rooms.  A typical problem: Aging computers designated for a single task that are left untouched for months or even years, missing critical security updates.

Larry Ponemon, who runs a privacy consulting firm, was an adviser on that project.  His assessment is equally blunt.

“Being HIPAA compliant has become almost like a religion,” he says. “The reality is that being compliant with HIPAA doesn’t get you very far.”

To be clear: The report didn’t uncover lazy IT workers playing video games while IT infrastructure crumbles around them. Nor did it find uncaring doctors, nurses, or even administrators. To the contrary, it found haggard security professionals desperately trying to keep up with security issues, and generally falling hopelessly behind as their attention is constantly redirected to paranoia over compliance issues.

“A lot of companies have made poor investment decisions in security. They are doing things that are not diminishing their risk,” Ponemon, who runs The Ponemon Institute, said. (NOTE: Larry Ponemon and I have a joint project on privacy issues, a newsletter called The Ponemon Sullivan Privacy Report.)

Hackers are devoted copycats, so we know more attacks on hospitals are coming. At the moment, these attacks seem to have been limited to administrative systems, and the impacted health care facilities say patient care was unaffected. (I did interview a D.C.-area patient who said two doctors were unable to share his patient files, leading to unnecessary delay and expense).

It’s easy to imagine far worse outcomes, however.  Gentry speculated that hackers could attack a specific patient and extort him or her.  Ponemon talked about attacks on pacemakers or other digitally-connected devices that control patient health.

“These sound like they are science fiction, but hospitals are part of the Internet of Things,” he said.  “And there doesn’t seem to be a plan to manage the security risk.”

The plan, Gentry says, has to involve righting the regulatory ship and letting hospitals and health care facilities worry about the right things.

“We need to take a lot of this bandwidth we are appropriating to compliance and use that bandwidth on security and patient health,” he said.

And we’d better start soon. Because we’ve given the bad guys a pretty sizable head start while we were distracted by Herculean efforts to protect my neighbor from me.

Two-thirds of security pros waste a ‘significant’ amount of time chasing false positives

Larry Ponemon

We are pleased to present the findings of The State of Malware Detection & Prevention sponsored by Cyphort. The study reveals the difficulty of preventing and detecting malware and advanced threats. The IT function also seems to lack the information and intelligence necessary to update senior executives on cybersecurity risks.

We surveyed 597 IT and IT security practitioners in the United States who are responsible for directing cybersecurity activities and/or investments within their organizations. All respondents have a network-based malware detection tool or are familiar with this type of tool.

Getting malware attacks under control continues to be a challenge for companies. Some 68 percent of respondents say their security operations team spends a significant amount of time chasing false positives. However, only 32 percent of respondents say their teams spend a significant amount of time prioritizing alerts that need to be investigated.

Despite catastrophic data breaches such as the one at Target, cyber threats are not getting the attention from senior leadership that they deserve. As shown in the findings of this research, respondents say they do not have the necessary intelligence to make a convincing case to the C-suite about the threats facing their company.

The following findings further reveal the problems IT security faces in safeguarding their companies’ high value and sensitive information.

Companies are ineffective in dealing with malware and advanced threats. Only 39 percent of respondents rate their ability to detect a cyber attack as highly effective, and similarly only 30 percent rate their ability to prevent cyber attacks as highly effective. Respondents also say their organizations are doing poorly in prioritizing alerts and minimizing false positives. As mentioned above, a significant amount of time is spent chasing false positives but not prioritizing alerts.

Most respondents say C-level executives aren’t concerned about cyber threats. Respondents admit they do not have the intelligence and information necessary to effectively update senior executives on cyber threats. Seventy percent of respondents say they report on these risks to C-level executives only on a need-to-know basis (36 percent of respondents) or never (34 percent of respondents).

Sixty-three percent of respondents say their companies had one or more advanced attacks during the past 12 months. On average, it took 170 days to detect an advanced attack, 39 days to contain it and 43 days to remediate it.

Many malware alerts that are investigated turn out to be false positives. On average, 29 percent of all malware alerts received by security operations teams are investigated, and an average of 40 percent of those are determined to be false positives. Only 18 percent of respondents say their malware detection tool provides a level of risk for each incident.
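To see what those survey averages imply for a security operations team, here is a minimal back-of-the-envelope sketch. The 29 percent investigation rate and 40 percent false-positive rate come from the report; the weekly alert volume is a hypothetical number chosen only for illustration.

```python
# Illustrative triage math using the survey's averages.
# NOTE: alerts_received is a hypothetical volume, not a figure from the report.

alerts_received = 1000        # hypothetical alerts per week
investigated_rate = 0.29      # 29% of alerts get investigated (survey average)
false_positive_rate = 0.40    # 40% of investigated alerts are false positives

investigated = alerts_received * investigated_rate
false_positives = investigated * false_positive_rate
genuine = investigated - false_positives

print(f"Alerts investigated:        {investigated:.0f}")   # 290
print(f"False positives chased:     {false_positives:.0f}") # 116
print(f"Genuine incidents triaged:  {genuine:.0f}")         # 174
```

At these rates, roughly two of every five investigations are wasted effort, while 710 of the 1,000 alerts are never looked at; that is the combination of chasing false positives and failing to prioritize that the report describes.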

Do organizations reimage endpoints based on malware detected in the network? More than half (51 percent) of respondents say their organization reimages endpoints based on malware detected in the network. An average of 33 percent of endpoint re-images or remediations are performed without knowing whether the endpoint was truly infected. The most effective solutions for the remediation of advanced attacks are network-based sandboxing and network behavior anomaly analysis.

Download and read the rest of the report.