While negligence causes the most breaches, insiders do the most damage

Larry Ponemon

Ponemon Institute and ObserveIT have released The 2018 Cost of Insider Threats: Global Study, which examines what companies have spent to deal with a data breach caused by a careless or negligent employee or contractor, a criminal or malicious insider, or a credential thief. While the negligent insider is the root cause of most breaches, the bad actor who steals employees’ credentials is responsible for the most costly incidents.

The first study on the cost of insider threats was conducted in 2016 and focused exclusively on companies in the United States. In this year’s benchmark study, 717 IT and IT security practitioners in 159 organizations in North America (United States and Canada), Europe, Middle East and Africa, and Asia-Pacific were interviewed.

According to the research, if the incident involved a negligent employee or contractor, companies spent an average of $283,281. The average cost more than doubles if the incident involved an imposter or thief who steals credentials ($648,845). Hackers cost the organizations represented in this research an average of $607,745 per incident.

Here are the main findings of the research:

Imposter risk is the most costly.

The cost ranges significantly based on the type of incident. If it involves a negligent employee or contractor, each incident can average $283,281. The average cost more than doubles if the incident involves an imposter or thief who steals credentials ($648,845). Hackers cost the organizations represented in this research an average of $607,745 per incident. The activities that drive costs are: monitoring & surveillance, investigation, escalation, incident response, containment, ex-post analysis and remediation.

The negligent insider is the root cause of most incidents.

Most incidents in this research were caused by insider negligence. Specifically, the careless employee or contractor was the root cause of 2,081 of the 3,269 incidents reported. The most expensive incidents are due to imposters stealing credentials, and these were the least reported: there were a total of 440 incidents involving stolen credentials.

Organizational size and industry affect the cost per incident.

The cost of incidents varies according to organizational size. Large organizations with a headcount of more than 75,000 spent an average of $2.08 million over the past year to resolve insider-related incidents. To deal with the consequences of an insider incident, smaller-sized organizations with a headcount below 500 spent an average of $1.80 million. Companies in financial services, energy & utilities and industrial & manufacturing incurred average costs of $12.05 million, $10.23 million and $8.86 million, respectively.

All types of insider risk are increasing.

Since 2016 the average number of incidents involving employee or contractor negligence has increased from 10.5 to 13.4. The average number of credential theft incidents has tripled over the past two years, from 1.0 to 2.9.

Employee or contractor negligence costs companies the most.

In terms of total annual costs, it is clear that employee or contractor negligence represents the most expensive insider profile. While credential theft is the most expensive on a unit cost basis, it represents the least expensive profile on an annualized basis.
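These per-incident and annualized figures reconcile with simple arithmetic: annualized cost per profile ≈ average incidents per year × average cost per incident. Here is a back-of-the-envelope sketch using the numbers quoted in this study; treating that product as the annualized figure is my approximation, not the study’s exact costing methodology.

```python
# Rough check: annualized cost ~= incidents/year x cost/incident.
# All figures come from the study text quoted above.
unit_cost = {            # average cost per incident (USD)
    "negligence": 283_281,
    "credential_theft": 648_845,
}
incidents_per_year = {   # average incidents per organization per year (2018)
    "negligence": 13.4,
    "credential_theft": 2.9,
}

annualized = {
    profile: unit_cost[profile] * incidents_per_year[profile]
    for profile in unit_cost
}

for profile, cost in annualized.items():
    print(f"{profile}: ${cost:,.0f} per year")
```

Even though credential theft costs more than twice as much per incident, its much lower frequency leaves negligence as the larger annual burden, consistent with the finding above.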

It takes an average of more than two months to contain an insider incident.

It took an average of 73 days to contain the incident. Only 16 percent of incidents were contained in less than 30 days.

We conclude that companies need to intensify their efforts to minimize the insider risk because of rising costs and frequency of incidents. Since 2016 the average number of incidents involving employee or contractor negligence has increased from 10.5 to 13.4. The average number of credential theft incidents has tripled over the past two years, from 1.0 to 2.9. In addition, these incidents are not resolved quickly.

Click here to read the rest of this study.


Privacy problems? Think of them as side effects

Bob Sullivan

Not long ago, I was approached by someone to help write a book about the race to cure cancer. It was an intriguing idea, and it sent me down a rabbit hole of research so I’d be able to understand what I’d be getting into. What I found was one Greek myth-like tale after another, of a wonderful breakthrough followed by a tragic outcome.  An incredibly promising development followed by crushing consequence.  Of treatments that killed cancer but also killed patients. Of cures that are worse than the disease.

Sometimes, these are stories about egos blinded by a God complex, refusing to see they are hurting instead of helping. Usually, they are stories about people who spend decades in service to humanity and the slow, very unsteady, very unsure march of progress.

And these are stories about damned side effects.

I usually tell people that I’m a tech reporter, but that I focus on the unintended consequences of technology — tech’s dark side.  Privacy, hacking, viruses, manipulation of consumers via big data. These things are kind of like the nuclear waste of “progress.” But lately I’ve been thinking about changing that description.

Now, I think the problem is a lot more like the medical concept of side effects.

Companies like Facebook, Uber, and Google are full of brilliant engineers who spend all their time and energy trying to solve some of the world’s great problems, and they often do.  Uber and its imitators are wonderful at solving vexing transportation problems.  Facebook *has* connected billions of people, and let millions of families share baby photos easily.  These are good tools. Amazing tools.

But tech firms aren’t built to think about side effects.  Long before the Russian trolls in 2016, plenty of people warned Facebook about the crap its service was spewing, about how its tool had been hijacked and weaponized. But Facebook didn’t listen. The firm was too focused on the “cure” it was inventing — maybe too arrogant, maybe too naive — to see the damage it was doing.

There are similar tales all across tech-land.

Banking apps let us pay our friends instantly; they also let criminals steal from us instantly. Talk to banks about this, and you can almost hear the mad-scientist approach. (I hear, “Well, consumers really should protect themselves,” as “We can’t let a few victims get in the way of progress!”)

Cell phone companies have created amazing products. And now, we know, they also make it easy for law enforcement to track us.

There’s a cynical way to view this, of course. Facebook is only concerned with making money, Google doesn’t really care about making the world a better place, just making its balance sheet a better place. If you believe that, I’m not trying to talk you out of it.  Corporations are people after all, our Supreme Court says, and greedy people at that. It’s illegal for them to act otherwise; it would be negligent not to maximize shareholder value.

I’ve spent 20 years talking to people in the tech industry, however, and there are plenty of folks in it who don’t think that way.  I think most folks in tech who fail us are better described as naive Utopians rather than greedy bastards.

In the coming months, I’ll be working on a new set of initiatives around this notion.  The effort really started this year with the re-release of Gotcha Capitalism.  My podcast “Breach” is also part of this. So are some new audio projects I’m working on. I’m being vague because I have to, for now.  You might see a bit of a slowdown in posts as I ready these projects, but rest assured, I’m on the beat.

In the new introduction to the new Gotcha Capitalism, I sum up what I feel is the civil rights issue of our time: Big Data being used against consumers.  It fits the Failed Utopia model to a T.  Folks wanted to remove the human element — often susceptible to racial and other forms of bias — from important decisions in realms like credit and criminal punishment. So credit scores are now used to grant mortgages, and formulas are used in sentencing decisions.  Unfortunately, as my Dad taught me in the 1970s, “Garbage In, Garbage Out,” is still the primal rule in computing.  Algorithms can suffer from bias, too. What makes this scary, however, is many folks haven’t woken up to this fact yet.  Just as, once upon a time, people believed that photographs couldn’t lie, today, many blindly think that data can’t lie.

It can, and does. More important, in the wrong hands, data can be abused.  So now we have the even-worse story of a powerful tool built by a Utopian falling into the wrong hands and being abused by an evil genius.

This is the story of tech today.

I’m hardly the only one who recognizes this. Organizations like the Center for Humane Technology are springing up all over.  This is promising. But the forces aligned against such thoughtful use of tech are powerful, and billions of dollars are at stake.  Sometimes, it can feel like the onslaught of tech’s takeover is a force of nature, like gravity.  Just ask anyone who’s ever tried to convince a startup to think about security or privacy while it’s racing to release new features.

Not unlike someone racing to invent a cure, side effects be damned.

I hope you’ll join me in this effort. Little things mean a lot — such as this woman’s suggestions for getting people to put down their smartphones when she wants to talk.  Mere awareness of the issue helps a lot. Think about how much news you get from Facebook or Twitter today compared to five years ago. Would your high school civics teacher be proud?

When tech is released into the world, side effects like privacy and security issues shouldn’t be an afterthought. They should be considered and examined with all the rigor that the medical profession has long practiced. That’s how we’ll make sense out of our future.

‘Knowledge asset’ risk comes into focus; nation-states a bigger concern

Larry Ponemon

The Second Annual Study on the Cybersecurity Risk to Knowledge Assets, produced in collaboration between Kilpatrick Townsend and Ponemon Institute, examines whether, and in what ways, organizations are focusing on safeguarding confidential information critical to the development, performance and marketing of their core businesses during a period of targeted attacks on these assets.

Ponemon Institute surveyed 634 IT security practitioners who are familiar and involved with their organization’s approach to managing knowledge assets. All organizations represented in this study have a program or set of activities for managing knowledge assets. The first study, Cybersecurity Risk to Knowledge Assets, was released in July 2016.

Awareness of the risk to knowledge assets increases. More respondents acknowledge that their companies very likely failed to detect a breach involving knowledge assets (an increase from 74 percent of respondents in 2016 to 82 percent of respondents in this year’s research). Moreover, in this year’s research, 65 percent of respondents are aware that one or more pieces of the company’s knowledge assets are now in the hands of a competitor, an increase from 60 percent of respondents in the 2016 study.

The cost to recover from an attack against knowledge assets increases. The average total cost incurred by organizations represented in this research due to the loss, misuse or theft of knowledge assets over the past 12 months increased 26 percent from $5.4 million to $6.8 million.

Eighty-four percent of respondents state that the maximum loss their organizations could experience as a result of a material breach of knowledge assets is greater than $100 million as compared to 67 percent of respondents in 2016.

Actions taken that support the growing awareness of the risk to knowledge assets

Following are findings that illustrate how the growing awareness of the risk to knowledge assets is improving cybersecurity practices in many of the companies represented in this study.

  • Companies are making the protection of knowledge assets an integral part of their IT security strategy (68 percent of respondents vs. 62 percent of respondents in 2016).
  • Boards of directors are requiring assurances that knowledge assets are managed and safeguarded appropriately (58 percent of respondents vs. 50 percent of respondents in 2016).
  • Companies are addressing the risk of employee carelessness in the handling of knowledge assets. Specifically, training and awareness programs are focused on decreasing employee errors in the handling of sensitive and confidential information (73 percent of respondents) and confirming employees’ understanding and ability to apply what they learn to their work (68 percent of respondents).
  • Companies are adopting specific technologies designed to protect knowledge assets. The ones for which use is increasing most rapidly include big data analytics, identity management and authentication and SIEM.
  • There is a greater focus on assessing which knowledge assets are more difficult to secure and will require stricter safeguards for their protection. These are presentations, product/market information and private communications.
  • There is greater recognition that third party access to a company’s knowledge assets is a significant risk. As a result, more companies are requiring proof that the third party meets generally accepted security requirements (an increase from 31 percent of respondents in 2016 to 41 percent in this year’s study) and proof that the third party adheres to compliance mandates (an increase from 25 percent of respondents in 2016 to 34 percent in this year’s study).
  • Companies are aware that nation-state attackers are targeting their company’s knowledge assets (an increase from 50 percent to 61 percent in this year’s study) and 79 percent of respondents believe their companies’ trade secrets or knowledge assets are very valuable or valuable to a nation-state attacker.

To download the full study at Kilpatrick Townsend, click here 

Why my futile search for tuxedo pants shows the Russians are winning

Bob Sullivan

I’ve been raging about Facebook-style privacy invasions for a long time, so I’m glad that folks *seem* to be listening now — though the distance between noise and action is quite far.

I’m not a Luddite, however. My complaints are a lot more practical.  I’ll often make this point: On one side of the ledger, we are surrendering privacy at unprecedented levels, granting blank checks to future corporations and governments with consequences we can’t possibly imagine. And we’re getting very little for it.  Meanwhile, Russia, China, and other enemies now have an incredibly powerful weapon to use against us and our freedom. That’s a bad deal. Let me explain.

What are we supposed to be getting in exchange for all this tracking of our every move? Better ads! I will concede that better ads would certainly be lovely. But, as anyone who’s ever worked in advertising knows, there’s still an awful lot of snake oil being sold in the name of better ads.  In fact, today’s “targeted” ads continue to produce some of the worst ads imaginable.  Even when some of the biggest and most honorable names in retail and media are involved. Let me show you.

I have a black tie event to attend soon, which means dragging my, ahem, inexpensive tuxedo out of the back of my closet.  Not surprisingly, the pants no longer fit.  So I did what any sensible consumer who attends a black tie event every five years would do — I poked around Nordstrom Rack hoping to find a pair that could pass for a single evening.  I’ll be sitting at a table most of the night, so who’ll notice if they aren’t a perfect match? (Sorry, Kim Peterson. You tried your best.)

I gave up in about 3 minutes, when what small degree of fashion pride I have set in and I realized my plan wouldn’t work.  So I schlepped to a Nordstrom Rack store the next day and tried on a bunch of black pants to make sure I wouldn’t embarrass myself.  Let me note that I shop at the store often enough that I am a member, because hey, I like deep discounts.

These two great brands are getting hoodwinked.  The consequences are larger than you think.

Fast forward to this morning when I open my daily New York Times email, which came with an enticing headline about allergies.  And what do I see at the top of the email? An ad for tuxedo pants.  I’ve made this point before, and I’ll continue to make it, perhaps for decades.  Do you see what happened there?  Billions of dollars and huge media companies conspired to deliver me an ad that was not just bad, it was uniquely bad. It was catastrophically bad. It was targeted bad.  It was an ad for something that I had just purchased…in fact, something I had just purchased from the very store that paid for the ad. There could be no worse time to show me this ad. Any random ad would be better than an ad for the very thing I need to buy the least, right?  And again, delivering this uniquely terrible targeted ad required creation of a system that cost billions, robs millions of their privacy, and outfits America’s enemies with a devastating weapon.

But wait: There’s even more wrong with my tuxedo-ad experience. Being the game consumer that I am, I clicked on the ad to see what would happen. Maybe there’s a cheaper price for the pants I’d just purchased, and I could return them and save a few bucks. Alas, when I do, I see the curious chart above. While the price for the pants is indeed competitive, fully 16 of the 17 sizes shown are unavailable.  Only a single size — 42×32 — is actually for sale.  Meaning, in reality, I got an ad for something that wasn’t for sale.  And that flat-out irritated me. It wasted my time.

Here’s what I know: Someone is stealing Nordstrom’s advertising money.  (I don’t know why my newsletter doesn’t have a sponsor yet.  I could do better than this.)

I know I’m telling you something you know. We’ve all glanced at a product online, only to be stalked by that product for days, at every website we visit. I’m sure it works to some degree.  For every person shown an ad for a product they’ve purchased, there’s another who needs to see it 5 or 10 times before they pull the trigger. So sure, those ads might be better than random ads in some cases. The ad industry calls this re-targeting, and claims these ads have superior click-through rates.   Solid data from the ad industry is hard to come by, however.

And don’t forget, I’m a Nordstrom Rack member.  The firm knows my email address, and what I’ve purchased.  Now, I have clicked opt-out on enough data sharing arrangements that there might be some reason the datastream broke down and I got an ad for a product that I couldn’t buy, at the very moment when I least needed it, shortly after I had just purchased that item from the store which paid to get in front of me. More likely, however, this ad delivery system is just flawed.

So, to repeat my main point: All this technology works great if you want to attack a society with propaganda. It works terribly to help commerce and consumers.

This is my privacy problem. It’s just a bad deal.

Look, I’d love to have seen ads for tuxedo pants that actually fit me last week.  Instead, the only thing I can count on is I now will wonder how all these data points might be used by hackers against me, or by a nation-state to manipulate me and my friends, in the future.

This is not a story about tuxedo pants.  Or about annoying ads. This is a story about the false promise that is the utopia of targeted advertising, and the unexpected consequences that this foolish quest creates.  Years ago, when I first ranted against retargeting, I talked — as I always do — about future unintended consequences.  In my wildest dreams, I didn’t imagine that this kind of data hoarding could help a nation-state attack our democracy.  This is *exactly* the point of today’s story. Who knows how my search for pants today might be used against me tomorrow?  Will it signal to my health insurance company that my rates need to go up?  Will a potential future employer use that information to turn me down for a job?  Will a propaganda pusher in St. Petersburg put me in a “bucket” and prod me with cleverly-crafted political ads?

I don’t know. But I do know these ads didn’t help. And they might hurt. That’s a bad deal for everyone.

Security megatrends — more powerful attacks, more stressed infosec executives

Larry Ponemon

A major deterrent to achieving a strong security posture is the inability of IT professionals to anticipate the big changes, or megatrends, in security threats they need to be prepared for. Too many companies are too overwhelmed with daily attacks, coming fast and furious, to think long-term and understand what investments they should be making in people, process and technologies to prevent a catastrophic data breach or cyber attack.

The 2018 Study on Global Megatrends in Cybersecurity was conducted by Ponemon Institute and sponsored by Raytheon to help CISOs throughout the globe prepare for the future threat landscape, which will be characterized by an increase in cyber extortion or ransomware attacks and data breaches caused by unsecured IoT devices. The full report is available for download.

Here is a brief summary:

Around the world, cyberattacks on businesses are getting more powerful and harder to stop. Corporate boards aren’t being briefed on cybersecurity, and executives don’t see it as a strategic priority. Meanwhile, information security officers will become more important – and more stressed out.

Those are among the findings of the 2018 Study on Global Megatrends in Cybersecurity, a survey sponsored by Raytheon and conducted by the Ponemon Institute. The survey, conducted in late 2017, looks at commercial cybersecurity through the eyes of those who work on its front lines. More than 1,100 senior information technology practitioners from the United States, Europe, and the Middle East/North Africa region weighed in on the state of the industry today, and where it’s going over the next few years.

Among their insights:

The Internet of Things is an open door: 82% of respondents predict unsecured IoT devices will likely cause a data breach in their organization. 80% say such a breach could be catastrophic.

More ransomware on the way: 67% believe cyber extortion, such as ransomware, will increase in frequency and payout.

Cyber warfare growing likelier: 60% predicted attacks by nation-state actors against government and commercial companies will worsen and could lead to a cyber war. 51% of respondents say cyber warfare will be a high risk in the next three years, compared to 22% who feel that way today. Similarly, 71% say the risk of breaches involving high-value information will be very high, compared to 43% who believe that risk is high today.

Confidence is slipping: Less than half of IT security practitioners surveyed believe they can protect their organizations from cyber threats. That’s down from 59% three years ago.

For execs, cybersecurity is taking a back seat: Only 36% of respondents say their senior leadership sees cybersecurity as a strategic priority, meaning less investment in technology and personnel.

Corporate boards aren’t engaged: 68% of respondents say their boards of directors are not being briefed on what their organizations are doing to prevent or mitigate the consequences of a cyber attack.

IT professionals are feeling pessimistic about progress: 54% believe their organization’s cybersecurity posture will either stay the same or decline. 58% believe staffing problems will worsen, and 46% predict artificial intelligence will not reduce the need for experts in cybersecurity.

CISOs’ stress levels will rise: When asked to rate their level of stress today and three years from now on a scale from 1 = low stress to 10 = high stress, respondents’ stress rating is expected to rise to a new high of 8.08.

Direct effect on shareholder value: 66% believe data breaches or cybersecurity exploits will seriously diminish their organization’s shareholder value.

The true story behind history’s biggest hack: A podcast

Bob Sullivan

You probably had a Yahoo account. And you probably know that account was hacked. After all, the firm admitted about a year ago that 1 billion accounts had been compromised. Check that…it was actually 3 billion. Every Yahoo account created since, essentially, the dawn of the Internet.  What you probably don’t know is that’s the least bad thing that happened during the Yahoo “hack.”

You might not know that U.S. authorities are convinced that a group of Russian-backed hackers, including two FSB (KGB) agents, probably hacked Yahoo, and you. And I feel pretty certain you don’t know that this group of Russians did much more than the usual snatch-and-grab passwords thing.  They lurked inside Yahoo’s systems for more than two years. They had full access to Yahoo’s account management tool. Critically, they could read user emails. For years. Maybe they read yours.  At first, they targeted very specific individuals — Russian journalists, U.S. government officials; also employees at a French transportation company, a Swiss bitcoin wallet firm, a U.S. airline, and many more.

Then, they started scanning millions of user emails.  Last year, investigators revealed that the group was clever enough to “mint” cookies, giving them access to 32 million Yahoo email accounts, and users’ most intimate life details.

Yahoo was the biggest hack in history — both in depth, and in breadth.

Four months ago, I was contacted by the folks at Spoke Media, who came to me with an already-assembled team of brilliant producers and asked if I wanted to help them try to make sense of all this.  I jumped at the chance, and I’ve spent most of my time since learning everything I could about this hack. The result is a five-episode podcast which we just released.

It’s a very different kind of storytelling than I’m used to, and you’re used to. You get to come along for the ride. We admit what we don’t know.  We show our work — as journalists, I think this is critical in our time. Experts get to talk, not for moments, but minutes. Even longer.

As you and I try to make sense out of what’s going on at Facebook, in the election, in the era of fake news, I hope we are making a serious contribution to this discussion.

I’m very proud of the project, which has implications far beyond the seemingly innocuous hack of your 10-year-old, dormant Yahoo email account.

You can sample episode 4 by clicking play below, or visit the iTunes page and subscribe to the podcast.

The state of cybersecurity in healthcare organizations in 2018

Larry Ponemon

A strong cybersecurity posture in healthcare is critical to patient safety. Attacks on patient information, medical devices and a hospital’s systems and operations can have a variety of serious consequences. These can include disrupting the delivery of services, putting patients at risk for medical identity theft and possibly endangering the lives of individuals who have a medical device.

To determine the prognosis for healthcare organizations’ ability to reduce cyber attacks, Ponemon Institute conducted The State of Cybersecurity in Healthcare Organizations in 2018, sponsored by Merlin. We surveyed 627 IT and IT security practitioners in a variety of healthcare organizations that are subject to HIPAA. According to the research, spending on IT increased from an average of $23 million in 2016 to $30 million annually, and the average number of cyber attacks each year increased from 11 to 16. On average, organizations spend almost $4 million to remediate an attack.

Healthcare organizations are not immune to the same threats facing other industries. The threats that are the source of most concern are employee errors and cyber attacks. However, third-party misuse of patient data, process and system failures and insecure mobile apps also create significant risk.

The following factors are affecting healthcare organizations’ ability to secure sensitive data and systems:

  • The existence of legacy systems and disruptive technologies, such as cloud, mobile, big data and Internet of Things, put patient information at risk.
  • More attacks evade intrusion prevention systems (IPS) and anti-virus (AV) solutions.
  • Disruptions to operations and system downtime caused by distributed denial-of-service (DDoS) attacks are increasing.
  • Healthcare organizations are targeted because of the value of patient medical and billing records.
  • Not enough in-house expertise and security leadership makes it more difficult to reduce risks, vulnerabilities and attacks.

Best practices from high-performing healthcare organizations

As part of the research, we did a special analysis of those respondents (59 respondents out of the total sample of 627 respondents) who rated their organizations’ effectiveness in mitigating risks, vulnerabilities and attacks against their organizations as very high (9+ on a scale of 1 = low effectiveness to 10 = high effectiveness). These respondents are referred to as high performers, and the analysis is presented in this report.

According to the research, these high-performing organizations are able to significantly reduce cyber attacks. Following are characteristics of high-performing organizations:

  • More likely to have an incident response plan and a strategy for the security of medical devices.
  • Technologies and in-house expertise improve their ability to prevent the loss or exposure of patient data, DDoS attacks and other attacks that evade their IPS and AV solutions.
  • High-performing organizations are better at increasing employee awareness about cybersecurity risks.
  • High-performing organizations also are more positive about the ability to ensure third-party contracts safeguard the security of patient information.
  • High-performing organizations are more likely to have the necessary in-house expertise, including a CISO or equivalent.

Part 2. Key findings

In this section, we provide a deeper analysis of the research. When possible, we compare the findings in this year’s research to the 2016 study.

Trends in risks facing healthcare organizations: Why more cyber attacks are occurring

Patient information is under attack and at risk. Annually, on average healthcare organizations experience 16 cyber attacks, an increase from 11 attacks in the 2016 study. As shown in Figure 2, more than half (51 percent of respondents) say their organizations have experienced an incident involving the loss or exposure of patient information in the past 12 months, an increase from 48 percent in 2016.

Healthcare organizations are experiencing ransomware attacks. For the first time, ransomware attacks were included and 37 percent of respondents say their organizations experienced such an attack. While some security incidents decreased, healthcare organizations continue to be at great risk from a variety of threats.

More attacks evade intrusion prevention systems (IPS) and anti-virus (AV) solutions. Our survey shows 56 percent of respondents say their organizations have experienced situations where cyber attacks evaded their intrusion prevention, an increase from 49 percent of respondents in 2016. Forty-four percent of respondents say their organizations have experienced cyber attacks that evaded their anti-virus (AV) solutions and/or traditional security controls.

More organizations have systems and controls in place to detect and stop advanced persistent threats (APTs). Thirty-three percent of respondents say their organizations have systems and controls in place to detect and stop APTs, an increase from 26 percent of respondents in 2016.

Distributed denial-of-service (DDoS) attacks increase. Some 45 percent of respondents report their organization had a DDoS attack, an increase from 37 percent of respondents in the 2016 research. On average, organizations experienced 2.94 DDoS attacks in the past 12 months, an increase from 2.65 in 2016.

Hackers are most interested in stealing patient information. The most lucrative information for hackers can be found in patients’ medical records and billing information according to 77 percent and 56 percent of respondents, respectively.

[Figure: What types of information do you believe hackers are most interested in stealing?]

Read sections 2 and 3 of this report at Merlin’s website. 



Retirement account ID theft soars, report says

Click to read this story at KSL.com

Bob Sullivan

Criminals armed with a flood of data stolen in recent data breaches are newly targeting consumers where it might hurt most: their retirement accounts.   The lucrative crime of brokerage account takeovers isn’t new, but it appears identity thieves are having more luck recently raiding victims’ retirements, tricking brokers into emptying accounts and mailing checks that can exceed $100,000.

It’s critical for consumers to realize that retirement accounts have few of the protections afforded to credit and debit card holders; getting “refunds” after an incident like this involves much more than a few phone calls.

Andrea and Steve Voss of Georgia were lucky; they checked their account frequently and noticed something had gone wrong — their account balance was $0, and $42,000 was missing.  A criminal had ordered it liquidated, and a $42,000 check sent to their home — then redirected that check to a local UPS store, according to the Atlanta Journal Constitution.

The Vosses alerted authorities and police intercepted the delivery, nabbing two suspects.  They were arrested with an $85,000 check from another victim.

At about the same time, an anonymous writer at investment site Bogleheads.org complained that $52,000 had been taken from his elderly father’s IRA account.

These don’t appear to be isolated incidents.  Tucked in the annual Javelin Strategy & Research survey of ID theft crimes was this grim fact: criminals freshly armed with complete dossiers on potential victims are expanding their arsenal of fraud attacks far beyond traditional credit card account hijackings. So-called existing non-card fraud is up sharply, as criminals hijack everything from hotel reward point accounts to mobile phones to cryptocurrency wallets. But the crime that might be most devastating — where many victims probably keep their biggest pile of money — is brokerage account takeovers. Javelin says that in 2016, such crimes accounted for only 2% of existing non-card fraud.  In 2017, that swelled to 7% — more than tripling in one year.

Retirement account hijackers have a few things going for them.  Consumers might not check them as often, particularly when there’s bad news. And of course, their balances are usually larger than savings or checking accounts.

One might imagine moving money out of a retirement account would be challenging, but not always.  According to the Atlanta Journal Constitution, “surprisingly little” information was required — Voss’ name, address, date of birth and Social Security number.

(I’ve reached out to the firm involved, Prudential, to see if there’s any update to that process or if it has a comment. I will update this story if it responds.)

The Bogleheads victim offers a similar tale:

The custodian of my father’s IRA states that in early September they received a phone call from a man posing as my father, who passed all the security questions and requested a change in email address and that forms for withdrawal of funds be sent to that email. Around two weeks later the custodian received all the paperwork authorizing the withdrawal of funds from the account, and the electronic transfer of said funds into a bank account under my father’s name at a bank he had never heard of and certainly did not use for banking (Regions Bank). The custodian states that the paperwork had my father’s (alleged) signature notarized, and also included a copy of a check from the bank account into which the funds were to be deposited. At that point, the custodian effected the requested transfer of $52,000. 

That tale also has a happy ending, with the victim reporting the money was returned to the account — but only after about six weeks of back-and-forth discussion, the writer says. (I’ve reached out to the firm involved and will update if it responds.)

Retirement account hacking isn’t new.  Way back in 2007, I wrote about a consumer who lost $179,000 in such a scam.  What I learned then is what you need to learn now: The broker has no clear legal obligation to return the stolen funds. Recall that if your checking account is raided, banks have to restore the funds within days while they investigate. No such protection exists for brokerage accounts. Victims might be able to talk their way into a refund, or sue their way into one, but there’s no shortcut process for that.

That’s why it’s critical to know that ID thieves, armed with their massive databases of consumer data, are targeting these kinds of accounts.  Check them, often. Make it part of your normal routine, when you check all your other accounts for fraud. Otherwise, you might end up missing every dollar you’ve worked your whole life to set aside.

Third Annual Study on Exchanging Cyber Threat Intelligence: There Has to Be a Better Way

Larry Ponemon

In a world of increasingly stealthy and sophisticated cyber criminals, it is difficult, costly and ineffective for companies to defend themselves against these threats alone. As revealed in The Third Annual Study on Exchanging Cyber Threat Intelligence: There Has to Be a Better Way, more companies are reaching out to their peers and other sources for threat intelligence data. Sponsored by Infoblox, the study provides evidence that participating in initiatives or programs for exchanging threat intelligence with peers, industry groups, IT vendors and government results in a stronger security posture.

According to 1,200 IT and IT security practitioners surveyed in the United States and EMEA, the consumption and exchange of threat intelligence has increased significantly since 2015.

This increase can be attributed to the fact that 66 percent of respondents say they now realize that threat intelligence could have prevented or minimized the consequences of a cyber attack or data breach.

Despite the increase in the exchange and use of threat intelligence, most respondents are not satisfied with it. The most common complaints are that threat intelligence is not actionable, timely or accurate.

Following are 12 trends that describe the current state of threat intelligence sharing.

  1. Most companies engage in informal peer-to-peer exchange of threat intelligence (65 percent of respondents) instead of a more formal approach such as a threat intelligence exchange service or consortium (48 percent and 20 percent of respondents, respectively). Forty-six percent of respondents use manual processes for threat intelligence. This may contribute to the dissatisfaction with the quality of threat intelligence obtained.
  2. Organizations prefer sharing with neutral parties, such as an exchange service or trusted intermediary, rather than sharing directly with other organizations. This indicates a need for an exchange platform that enables such sharing because it is trusted and neutral.
  3. More respondents believe threat intelligence improves situational awareness, with an increase from 54 percent of respondents in 2014 to 61 percent of respondents in this year’s study.
  4. Sixty-seven percent of respondents say their organizations spend more than 50 hours per week on threat investigations. This is not an efficient use of costly security personnel, who should be conducting threat hunting and not just responding to alerts received.
  5. Forty percent of respondents say their organizations measure the quality of threat intelligence. The most often used measures are the ability to prioritize threat intelligence (61 percent of respondents) and the timely delivery of threat intelligence (53 percent of respondents).
  6. Respondents continue to be concerned about the accuracy, timeliness and actionability of the threat intelligence they receive. Specifically, more than 60 percent of respondents are only somewhat satisfied (32 percent) or not satisfied (28 percent) with the quality of threat intelligence obtained. However, this is a significant decrease from 70 percent in 2014, which indicates some improvement as the market matures. Concerns about how threat intelligence is obtained persist because information is not timely and is too complicated, according to 66 percent and 41 percent of respondents, respectively.
  7. Companies are paying for threat intelligence because it is considered better than free threat intelligence. Fifty-nine percent of respondents also believe it has proven effective in stopping security incidents.
  8. Seventy-three percent of respondents say they use threat indicators and that the most valuable types of information are indicators of malicious IP addresses and malicious URLs.
  9. The value of threat intelligence is considered to decline within minutes. However, only 24 percent of respondents say they receive threat intelligence in real time (9 percent) or hourly (15 percent).
  10. Forty-five percent of respondents say they use their threat intelligence program to define and rank levels of risk of not being able to prevent or mitigate threats. The primary indicators of risk are uncertainty about the accuracy of threat intelligence and an overall decline in the quality of the provider’s services (66 percent and 62 percent of respondents, respectively).
  11. Many respondents say their organizations are using threat intelligence in a non-security platform, such as DNS. The implication is that there is a blurring of lines in relation to what are considered pure networking tools and what are considered security tools. Security means defense-in-depth, plugging all gaps and covering all products.
  12. Seventy-two percent of respondents are using or plan to use multiple sources of threat intelligence. However, 59 percent of respondents have a lack of qualified staff and, therefore, consolidate threat intelligence manually.

Click here to read the rest of this report from Infoblox.

Consumers average 150 passwords; when your credit card expires, you need to remember ALL of them

Bob Sullivan

I recently had to undertake one of the most arduous, perilous tasks consumers face — updating all my credit card automatic payments. My card had expired of natural causes — rare in the age of account hacking —  so off I went, chasing after every card-paying account I have. These kinds of things make me skin-crawling, hair-raising, blood-pressure exploding, whiskey-shot needing anxious. And I’m sure I’m not alone.

I only had to update my expiration date, but as I’m sure all of you know, this process is fraught with disaster. I once failed to properly update an EZPass account, and faced a whopper of cascading penalty fees.  That’s the perilous part.

The arduous part is logging into every freaking account I had and….well, I mean trying to log into every account I have…and making the small change.  That means dealing with all those user names, all those passwords, and a different process every time.

Taking inventory of every auto-payment isn’t as easy as it sounds.  Some accounts are charged monthly. Some quarterly. Some just occasionally, if I use them rarely.  My bank (USAA) provides a helpful, but incomplete, list of possible automated payments. So I scan through about 6 months of bills, eyeballing potential accounts that USAA might have missed.   Some services have arrangements with banks to ease the expiration change, but you just can’t count on that.

Next, I go through the process of logging into (hacking into?) all these accounts. At some sites, it was enough to just change the expiration. Other places required removing the old card and adding it back in with the new expiration. And at still others (I’m looking at you, SlingTV), the web update simply didn’t work. Try as I might, the tool wouldn’t let me update my account. So I logged into an online chat, and after an authentication song and dance…well, they told me to call. About half an hour of my day, vaporized.

All this hassle is sort of my own fault, as all these firms are rightly paranoid about credit card security, thanks to journalists like me writing so many stories about credit card hacking.  So I’m glad it wasn’t easy.  But here’s the rub: A recent report claims that consumers now have an average of 150 passwords to remember.  ONE HUNDRED AND FIFTY!!

No wonder I need some whiskey.

More about passwords in a moment, but before I leave the topic of anxiety, let me say that these kinds of stories are precisely why The Red Tape Chronicles came to be.  My anxiety isn’t really about the passwords. I know one way or the other I’d be able to get into these services and update my card.  The stress comes from my assumption that behind every one of these accounts lies the potential for a massive GOTCHA.  If my card were declined, perhaps I’d face a late fee. Perhaps my account would be cut off at a critical time. Perhaps I’d be bumped off whatever discount plan I’d arranged, and end up paying a higher price.  These are not imagined fears. These are real booby traps that create real anxiety, born of experience, and maybe just a little PTSD from all those hacked credit card accounts I’ve had to update during the past few years.  If I could assume that these providers would handle the situation reasonably, then I’d be a lot less on edge.  But you know better than that. It only takes one mistake in the wrong transaction to cost you, bigtime.

So, I’m paranoid. And while I think I updated every account correctly, I don’t trust any of them. I’ll go through the same process in 30 days and make sure all those payments went through. Hey, it’s not paranoia if it’s real.

Now, as for passwords — IBM is out with a password report this week showing that consumers are willing to suffer a little inconvenience in exchange for security, and they are open to use of biometrics (enough with passwords already). Not surprisingly, people are most open to fingerprints, but fully 87% said they were open to other kinds of biometrics, like voiceprints. Companies should take this to heart. Every biometric has its special problem (like in the movies, when an iris scan is foiled by cutting out a victim’s eyeball. ew).  But while we keep arguing about imperfections, security still lags in the password/poorly-implemented-two-factor-authentication world.

Since we have to live in that world, here are IBM’s tips for now. Note the passphrase recommendation, which is probably the best you can do right now.

IBM’s consumer tips:
§ Use Multi-Step Authentication: Where possible, enable two-factor authentication (2FA) that confirms a login on multiple levels, such as password + a mobile alert or email confirmation. 
§ Opt for Passphrases vs. Passwords: Skip complex passwords and instead use longer “passphrases” – several unrelated words tied together, at least 20 characters. These are actually harder to crack and easier to remember. 
§ Choose a Password Manager: Rather than try to memorize multiple passwords or store them insecurely, use a password manager, which not only acts as a vault for existing passwords, but can also generate stronger passwords for you.
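To see why passphrases are both strong and memorable, here is a minimal diceware-style generator in Python. The word list below is a tiny illustrative stand-in of my own choosing (a real generator would draw from a vetted list, such as the EFF's long word list of 7,776 words), but the mechanics are the same: pick several unrelated words at random with a cryptographically secure generator and join them.

```python
# Sketch of a diceware-style passphrase generator.
# NOTE: WORDS is a small illustrative list, not a real diceware list;
# entropy here is only log2(12) ~ 3.6 bits per word. The EFF long list
# gives log2(7776) ~ 12.9 bits per word, or ~64 bits for five words.
import secrets

WORDS = ["correct", "horse", "battery", "staple", "orbit", "lantern",
         "pebble", "quartz", "velvet", "willow", "sable", "meadow"]

def make_passphrase(n_words=5, sep="-"):
    """Join n_words chosen uniformly at random via a CSPRNG (secrets)."""
    return sep.join(secrets.choice(WORDS) for _ in range(n_words))

phrase = make_passphrase()
print(phrase)  # e.g. "willow-orbit-correct-sable-lantern"
```

With this word list, five words plus separators always yields at least 29 characters, comfortably past IBM's 20-character guidance, while remaining far easier to recall than a random string of symbols.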